Results 1 - 20 of 24
1.
IEEE Trans Vis Comput Graph ; 30(8): 5875-5892, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38630564

ABSTRACT

This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, this paper describes a versatile approach (supporting both triangulated domains and regular grids) to topological analysis pipelines, i.e., sequences of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. Specifically, we describe an MPI extension of TTK's data structure for triangulation representation and traversal, a component that is central to the overall performance and generality of TTK's topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level and at the fine-grain algorithmic level. We provide a taxonomy of the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and give examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1,536 cores). Finally, we provide a roadmap for the completion of TTK's MPI extension, along with generic recommendations for each algorithm communication category.
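
For orientation only, the sketch below shows the general shape of a distributed TTK pipeline driven from ParaView's Python interface and launched under MPI (e.g., mpirun -n 16 pvbatch pipeline.py). The filter proxy names and the input file are assumptions that may differ across TTK and ParaView versions; this is not the paper's benchmark setup.

```python
# Assumed launch: mpirun -n 16 pvbatch pipeline.py
# Requires a ParaView build with MPI support and the TTK plugin loaded.
from paraview.simple import *

# Read a regular grid; ParaView partitions it across the MPI ranks, and TTK's
# MPI-specific preconditioning (ghost data, global identifiers) is handled internally.
data = XMLImageDataReader(FileName=["large_volume.vti"])   # hypothetical dataset

# Chain two TTK filters to form a small analysis pipeline; the same script
# runs unchanged on 1 or N processes.
smoothed = TTKScalarFieldSmoother(Input=data)              # proxy name: assumption
diagram = TTKPersistenceDiagram(Input=smoothed)            # proxy name: assumption

SaveData("diagram.pvtu", proxy=diagram)                    # partitioned output
```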

2.
IEEE Trans Vis Comput Graph ; 30(4): 1942-1955, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37030777

ABSTRACT

This article presents a well-scaling parallel algorithm for the computation of Morse-Smale (MS) segmentations, including the region separators and region boundaries. The segmentation of the domain into ascending and descending manifolds, defined solely on the vertices, is accelerated using path compression and fully segments the border region. Region boundaries and region separators are generated using a multi-label marching tetrahedra algorithm. This enables a fast and simple way to find optimal parameter settings in preliminary exploration steps by generating an MS complex preview, and it also provides a rapid option for generating a visual representation of the region geometries for immediate use. Two experiments demonstrate the performance of our approach, with speedups of over an order of magnitude in comparison to two publicly available implementations. The example section shows the similarity to the MS complex, the usability of the approach, and the benefits of this method with respect to the presented datasets. We provide our implementation with the paper.
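
The vertex-based labeling with path compression that underlies the segmentation can be illustrated on a small regular grid. The sketch below, a simplification under assumed 4-connectivity and with no separator or boundary extraction, labels every vertex with the minimum reached by steepest descent (i.e., its ascending manifold) using pointer jumping; it is not the article's parallel implementation.

```python
import numpy as np

def ascending_segmentation(field):
    """Label each vertex of a 2D grid by the local minimum reached by
    steepest descent, using pointer jumping (path compression).
    A minimal vertex-based sketch of the idea: 4-neighborhood only,
    no separators, no border handling."""
    h, w = field.shape
    idx = np.arange(h * w).reshape(h, w)
    # Gather the 4-neighborhood values, padding the border with +inf.
    pad = np.full((h + 2, w + 2), np.inf)
    pad[1:-1, 1:-1] = field
    neigh_vals = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                           pad[1:-1, :-2], pad[1:-1, 2:]])
    neigh_idx = np.stack([np.roll(idx, 1, 0), np.roll(idx, -1, 0),
                          np.roll(idx, 1, 1), np.roll(idx, -1, 1)])
    best = np.argmin(neigh_vals, axis=0)
    target = np.take_along_axis(neigh_idx, best[None], 0)[0]
    # A vertex with no lower neighbor is a local minimum: it points to itself.
    lowest = np.min(neigh_vals, axis=0)
    target = np.where(lowest < field, target, idx).ravel()
    # Pointer jumping: repeatedly replace each pointer by its pointer's pointer.
    while True:
        jumped = target[target]
        if np.array_equal(jumped, target):
            return jumped.reshape(h, w)   # label = flat index of the reached minimum
        target = jumped

labels = ascending_segmentation(np.random.default_rng(0).random((64, 64)))
print(np.unique(labels).size, "ascending manifolds")
```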

3.
IEEE Trans Vis Comput Graph ; 30(1): 1095-1105, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37878452

ABSTRACT

Comparative visualization of scalar fields is often facilitated by similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters, and path mappings, a branch-decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying the theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.
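
Once pairwise distances between ensemble members have been computed with such a measure, downstream tasks like ensemble clustering reduce to standard machinery. The sketch below shows this step only, using scipy's hierarchical clustering on a precomputed distance matrix; the file name and the choice of three clusters are placeholders, and the paper's own pipeline may differ.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# D: symmetric matrix of pairwise distances between ensemble members,
# e.g., produced by a merge-tree or persistence-diagram distance.
D = np.loadtxt("pairwise_distances.csv", delimiter=",")      # hypothetical file

Z = linkage(squareform(D, checks=False), method="average")   # agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")              # partition into 3 clusters
print(labels)
```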

4.
IEEE Trans Vis Comput Graph ; 30(2): 1638-1651, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37930922

ABSTRACT

This article presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters (Turner et al., 2014; Vidal et al., 2020) of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
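
The simplex layout used for dimensionality reduction can be pictured in a few lines: with three atoms, each diagram's barycentric weights place it inside a triangle drawn in the visual space. This is a generic sketch of that final embedding step only; the triangle coordinates are arbitrary, and the weight/atom optimization itself is the paper's contribution.

```python
import numpy as np

# Vertices of a 2D simplex (equilateral triangle), one per dictionary atom.
ATOMS_2D = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.5, np.sqrt(3) / 2]])

def embed(weights):
    """Map barycentric weights (w1, w2, w3), summing to 1, to a 2D point.
    Each diagram of the ensemble is drawn at the point given by its
    dictionary weights -- a generic sketch of the simplex layout idea."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # enforce the barycentric constraint
    return w @ ATOMS_2D

print(embed([0.2, 0.3, 0.5]))            # a point inside the triangle
```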

5.
Article in English | MEDLINE | ID: mdl-38015696

ABSTRACT

This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders, which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [72] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [72]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve in the latent space both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.
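
As a point of contrast with the metric-space formulation above, the snippet below is a minimal vectorized auto-encoder of the traditional kind mentioned in the abstract, written with PyTorch; the layer sizes and training data are placeholders, and this is not MT-WAE.

```python
import torch
from torch import nn

# A vanilla auto-encoder on vectorized inputs -- the classical baseline the
# paper contrasts with (MT-WAE instead operates directly on merge trees in
# their Wasserstein metric space). Layer sizes are arbitrary choices.
features, latent = 128, 2
model = nn.Sequential(
    nn.Linear(features, 32), nn.ReLU(),
    nn.Linear(32, latent),                 # latent space usable for 2D layouts
    nn.Linear(latent, 32), nn.ReLU(),
    nn.Linear(32, features),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(200, features)          # stand-in for vectorized ensemble members

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)   # reconstruction error
    loss.backward()
    opt.step()
print(float(loss))
```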

6.
Article in English | MEDLINE | ID: mdl-37021884

ABSTRACT

This paper introduces an efficient algorithm for persistence diagram computation, given an input piecewise linear scalar field f defined on a d-dimensional simplicial complex K, with d ≤ 3. Our work revisits the seminal algorithm "PairSimplices" [31], [103] with discrete Morse theory (DMT) [34], [80], which greatly reduces the number of input simplices to consider. Further, we also extend to DMT and accelerate the stratification strategy described in "PairSimplices" [31], [103] for the fast computation of the 0th and (d-1)th diagrams, denoted D0(f) and Dd-1(f). Minima-saddle persistence pairs (D0(f)) and saddle-maximum persistence pairs (Dd-1(f)) are efficiently computed by processing, with a Union-Find data structure, the unstable sets of 1-saddles and the stable sets of (d-1)-saddles. We provide a detailed description of the (optional) handling of the boundary component of K when processing (d-1)-saddles. This fast pre-computation for the dimensions 0 and (d-1) enables an aggressive specialization of [4] to the 3D case, which results in a drastic reduction of the number of input simplices for the computation of D1(f), the intermediate layer of the sandwich. Finally, we document several performance improvements via shared-memory parallelism. We provide an open-source implementation of our algorithm for reproducibility purposes. We also contribute a reproducible benchmark package, which exploits three-dimensional data from a public repository and compares our algorithm to a variety of publicly available implementations. Extensive experiments indicate that our algorithm improves by two orders of magnitude the time performance of the seminal "PairSimplices" algorithm it extends. Moreover, it also improves memory footprint and time performance over a selection of 14 competing approaches, with a substantial gain over the fastest available ones, while producing a strictly identical output. We illustrate the utility of our contributions with an application to the fast and robust extraction of persistent 1-dimensional generators on surfaces, volume data, and high-dimensional point clouds.
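
The Union-Find processing of minima-saddle pairs can be sketched compactly for a scalar field on an arbitrary graph: sweep the vertices in increasing order, merge components, and emit a pair whenever the younger of two merging minima dies. This is a plain illustration of that classical sweep, without the DMT, stratification, or boundary handling described above.

```python
import numpy as np

def zeroth_persistence(values, edges):
    """Minima-saddle (0-th) persistence pairs of a scalar field given on the
    vertices of a graph, via a Union-Find sweep. `values`: vertex scalars,
    `edges`: list of (u, v) vertex index pairs. The global minimum never
    dies and is therefore not paired."""
    order = np.argsort(values, kind="stable")
    rank_of = np.empty(len(order), dtype=int)
    rank_of[order] = np.arange(len(order))
    parent, birth = {}, {}                       # birth: root -> oldest vertex

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]        # path compression
            v = parent[v]
        return v

    adj = [[] for _ in range(len(values))]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    pairs = []
    for v in order:                              # sweep by increasing value
        parent[v], birth[v] = v, v
        for u in adj[v]:
            if u not in parent:                  # neighbor not swept yet
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            old, young = sorted((ru, rv), key=lambda r: rank_of[birth[r]])
            if birth[young] != v:                # skip the trivial pair of v itself
                pairs.append((float(values[birth[young]]), float(values[v])))
            parent[young] = old                  # the younger component dies
    return pairs

vals = np.array([0.0, 3.0, 1.0, 4.0, 0.5])
print(zeroth_persistence(vals, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # [(1.0, 3.0), (0.5, 4.0)]
```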

7.
Phys Chem Chem Phys ; 25(8): 5942-5947, 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36722896

ABSTRACT

A novel strategy for extracting axial (AV) and toroidal (TV) vortices in the magnetically-induced current density (MICD) of molecular systems is introduced, and its pilot application to the LiH molecule is demonstrated. It exploits differences in the topologies of AV and TV cores and involves two key steps: selecting a scalar function that can describe vortex cores in MICD, and its subsequent topological analysis. The scalar function of choice is ΩBα, based on the velocity-gradient Ω method known from research on classical flows. Topological Data Analysis (TDA) is then used to analyze the ΩBα scalar field. In particular, TDA robustly assigns distinct topological features of this field to the different vortex types in LiH: AV cores to saddle-maximum separatrices, which connect maxima to 2-saddles located on the domain's boundary, and TV cores to a 1-cycle of the super-level sets of the input data. Both are extracted as the most persistent features of the topologically simplified ΩBα scalar field.
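
A crude stand-in for part of this analysis can be written with off-the-shelf tools: extract a super-level set of a vortex-indicator scalar field on a regular grid and label its connected components. This only locates candidate vortex regions; distinguishing axial from toroidal cores requires the persistence-based topological analysis described above, and the toy field and threshold below are assumptions.

```python
import numpy as np
from scipy.ndimage import label

def superlevel_components(omega, threshold):
    """Connected components of a super-level set of a vortex-indicator scalar
    field (such as an Omega-based field) sampled on a regular grid. A crude
    stand-in for the persistence-based extraction performed with TDA."""
    labels, count = label(omega > threshold)
    return labels, count

# Toy field: a torus-shaped ridge, loosely mimicking a toroidal vortex core.
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij")
omega = np.exp(-10 * ((np.sqrt(x**2 + y**2) - 0.5) ** 2 + z**2))
labels, count = superlevel_components(omega, 0.5)
print(count, "super-level set component(s)")
```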

8.
IEEE Trans Vis Comput Graph ; 29(2): 1573-1589, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36251893

ABSTRACT

This article presents a computational framework for the Principal Geodesic Analysis of merge trees (MT-PGA), a novel adaptation of the celebrated Principal Component Analysis (PCA) framework (K. Pearson, 1901) to the Wasserstein metric space of merge trees (Pont et al., 2022). We formulate MT-PGA computation as a constrained optimization problem, aiming at adjusting a basis of orthogonal geodesic axes while minimizing a fitting energy. We introduce an efficient, iterative algorithm which exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient, to ensure fast iterations. Our approach also trivially extends to extremum persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with MT-PGA computations on the order of minutes for the largest examples. We show the utility of our contributions by extending two typical PCA applications to merge trees. First, we apply MT-PGA to data reduction and reliably compress merge trees by concisely representing them by their first coordinates in the MT-PGA basis. Second, we present a dimensionality reduction framework exploiting the first two directions of the MT-PGA basis to generate two-dimensional layouts of the ensemble. We augment these layouts with persistence correlation views, enabling global and local visual inspections of the feature variability in the ensemble. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
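
For reference, the classical Euclidean construction that MT-PGA transposes to the Wasserstein metric space of merge trees can be written in a few lines of numpy: fit an orthogonal basis to centered, vectorized ensemble members and use the first coordinates for compression or 2D layouts. This is plain PCA, not MT-PGA.

```python
import numpy as np

def pca(X, n_components=2):
    """Classical PCA (Pearson, 1901) on vectorized data: the Euclidean
    analogue of the MT-PGA basis fitting described above.
    X has one row per ensemble member."""
    Xc = X - X.mean(axis=0)                      # center the ensemble
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    axes = Vt[:n_components]                     # orthogonal basis ("axes")
    coords = Xc @ axes.T                         # coordinates for layouts / compression
    return axes, coords

rng = np.random.default_rng(0)
axes, coords = pca(rng.normal(size=(40, 100)))   # 40 members, 100 features each
print(coords.shape)                              # (40, 2): a two-dimensional layout
```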

9.
IEEE Trans Vis Comput Graph ; 28(1): 291-301, 2022 01.
Article in English | MEDLINE | ID: mdl-34596544

ABSTRACT

This paper presents a unified computational framework for the estimation of distances, geodesics, and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the L2-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter, or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach, with barycenter computations on the order of minutes for the largest examples, as well as its qualitative ability to generate representative barycenter merge trees, visually summarizing the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction, and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.
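
The underlying metric can be illustrated on persistence diagrams directly: the L2-Wasserstein distance is an optimal assignment between the points of two diagrams, where unmatched points may be sent to the diagonal. The sketch below solves that assignment exactly with scipy; it illustrates the metric the new distance is equivalent to, not the branch-decomposition-restricted algorithm of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(D1, D2):
    """L2-Wasserstein distance between two persistence diagrams (arrays of
    (birth, death) rows), allowing matches to the diagonal via the standard
    augmented assignment construction."""
    D1, D2 = np.atleast_2d(D1), np.atleast_2d(D2)
    n, m = len(D1), len(D2)
    d1 = ((D1[:, 1] - D1[:, 0]) ** 2) / 2.0      # squared distance to the diagonal
    d2 = ((D2[:, 1] - D2[:, 0]) ** 2) / 2.0
    C = np.zeros((n + m, n + m))
    C[:n, :m] = ((D1[:, None, :] - D2[None, :, :]) ** 2).sum(-1)
    C[:n, m:] = d1[:, None]                      # D1 point -> diagonal
    C[n:, :m] = d2[None, :]                      # diagonal -> D2 point
    rows, cols = linear_sum_assignment(C)        # optimal matching
    return np.sqrt(C[rows, cols].sum())

A = np.array([[0.0, 1.0], [0.2, 0.9]])
B = np.array([[0.0, 1.1]])
print(wasserstein2(A, B))
```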

10.
IEEE Trans Vis Comput Graph ; 27(6): 2833-2850, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33606631

ABSTRACT

This article introduces progressive algorithms for the topological analysis of scalar data. Our approach is based on a hierarchical representation of the input data and the fast identification of topologically invariant vertices, i.e., vertices that have no impact on the topological description of the data and for which we show that no computation is required as they are introduced in the hierarchy. This enables the definition of efficient coarse-to-fine topological algorithms, which leverage fast update mechanisms for ordinary vertices and avoid computation for the topologically invariant ones. We demonstrate our approach with two examples of topological algorithms (critical point extraction and persistence diagram computation), which generate interpretable outputs upon interruption requests and which progressively refine them otherwise. Experiments on real-life datasets illustrate that our progressive strategy, in addition to the continuous visual feedback it provides, even improves run-time performance with respect to non-progressive algorithms, and we describe further accelerations with shared-memory parallelism. We illustrate the utility of our approach in batch-mode and interactive setups, where it respectively enables the control of the execution time of complete topological pipelines and previews of the topological features found in a dataset, with progressive updates delivered within interactive times.
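
The interruptible, coarse-to-fine control flow can be summarized generically: refine level by level, expose the current result after each level, and stop when a time budget is exhausted. The driver below is only a sketch of that pattern, with a toy refinement step standing in for the actual progressive critical point or persistence diagram updates.

```python
import time

def progressive(compute_level, num_levels, time_budget_s):
    """Generic interruptible coarse-to-fine driver: refine level by level,
    yield the latest result (continuous feedback hook), and stop early when
    the time budget is exhausted. `compute_level(level, previous)` is a
    user-supplied refinement step."""
    start, result = time.perf_counter(), None
    for level in range(num_levels):              # coarsest (0) to finest
        result = compute_level(level, result)    # fast update from the previous level
        yield level, result
        if time.perf_counter() - start > time_budget_s:
            break                                # interruption request honored

# Toy usage: each level "refines" a counter; real steps would update
# critical points or a persistence diagram.
for level, res in progressive(lambda lvl, prev: (prev or 0) + 1, 8, 0.05):
    print(level, res)
```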

11.
IEEE Trans Vis Comput Graph ; 27(2): 572-582, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048688

ABSTRACT

This paper describes a localized algorithm for the topological simplification of scalar data, an essential pre-processing step of topological data analysis (TDA). Given a scalar field f and a selection of extrema to preserve, the proposed localized topological simplification (LTS) derives a function g that is close to f and only exhibits the selected set of extrema. Specifically, sub- and superlevel set components associated with undesired extrema are first locally flattened and then correctly embedded into the global scalar field, such that these regions are guaranteed, from a combinatorial perspective, to no longer contain any undesired extrema. In contrast to previous global approaches, LTS only and independently processes those regions of the domain that actually need to be simplified, which already results in a noticeable speedup. Moreover, due to the localized nature of the algorithm, LTS can utilize shared-memory parallelism to simplify regions simultaneously with high parallel efficiency (70%). Hence, LTS significantly improves interactivity for the exploration of simplification parameters and their effect on subsequent topological analysis. For such exploration tasks, LTS brings the overall execution time of a plethora of TDA pipelines from minutes down to seconds, with an average observed speedup over state-of-the-art techniques of up to 36×. Furthermore, in the special case where preserved extrema are selected based on topological persistence, an adapted version of LTS partially computes the persistence diagram and simultaneously simplifies features below a predefined persistence threshold. The effectiveness of LTS, its parallel efficiency, and its resulting benefits for TDA are demonstrated on several simulated and acquired datasets from different application domains, including physics, chemistry, and biomedical imaging.
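
The overall structure, independent regions flattened in parallel and then embedded back into the global field, can be sketched as follows. The regions and target levels are assumed to be given, whereas identifying them combinatorially is precisely what LTS contributes; the snippet is therefore only an illustration of the localized, parallel pattern, not the algorithm itself.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def flatten_region(field, indices, level):
    """Toy stand-in for one localized simplification: clamp an undesired
    peak region to the level at which it merges with its surroundings."""
    field[indices] = np.minimum(field[indices], level)

def simplify(field, regions):
    """Process disjoint regions independently in shared memory, mirroring the
    embarrassingly parallel structure described above (a sketch only)."""
    field = field.copy()
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(flatten_region, field, idx, lvl)
                   for idx, lvl in regions]
        for f in futures:
            f.result()                           # propagate any errors
    return field

f = np.array([0.0, 2.0, 0.5, 3.0, 0.2])
print(simplify(f, [(slice(0, 3), 0.5)]))         # flatten the first peak to 0.5
```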

12.
IEEE Trans Vis Comput Graph ; 27(2): 561-571, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048736

ABSTRACT

Multidimensional projection is a fundamental tool for high-dimensional data analytics and visualization. With very few exceptions, projection techniques are designed to map data from a high-dimensional space to a visual space so as to preserve some dissimilarity (similarity) measure, such as the Euclidean distance. In fact, although adopting distinct mathematical formulations designed to favor different aspects of the data, most multidimensional projection methods strive to preserve dissimilarity measures that encapsulate geometric properties such as distances or the proximity relation between data objects. However, geometric relations are not the only interesting property to be preserved in a projection. For instance, the analysis of particular structures such as clusters and outliers could be performed more reliably if the mapping process gave some guarantee as to topological invariants such as connected components and loops. This paper introduces TopoMap, a novel projection technique which provides topological guarantees during the mapping process. In particular, the proposed method performs the mapping from a high-dimensional space to a visual space while preserving the 0-dimensional persistence diagram of the Rips filtration of the high-dimensional data, ensuring that the filtrations generate the same connected components when applied to the original as well as the projected data. The presented case studies show that the topological guarantee provided by TopoMap not only brings confidence to the visual analytic process but can also be used to assist in the assessment of other projection methods.
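
The invariant that TopoMap preserves can be computed independently of any projection: the 0-dimensional persistence of the Rips filtration of a point cloud, whose death times coincide with the edge lengths of the Euclidean minimum spanning tree (all births are 0). The sketch below computes those death times with scipy; it is a way to check the guarantee, not TopoMap's mapping algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def rips_h0_deaths(points):
    """Death times of the 0-dimensional persistence pairs of the Rips
    filtration of a point cloud: the edge lengths of the Euclidean minimum
    spanning tree, one per merge of two connected components."""
    D = squareform(pdist(points))
    mst = minimum_spanning_tree(D)
    return np.sort(mst.data)

pts = np.random.default_rng(1).normal(size=(50, 8))   # a high-dimensional cloud
print(rips_h0_deaths(pts)[:5])
```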

13.
Article in English | MEDLINE | ID: mdl-31403427

ABSTRACT

This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set and which visually conveys the number, data ranges, and saliences of the main features of interest found in the set. Such representative diagrams are obtained by explicitly computing the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12,51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm, which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. This progressivity drastically improves convergence in practice and allows the design of an interruptible algorithm, capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagram. Extensive experiments on synthetic and real-life data sets report that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications, and quantitatively comparable to previous techniques, while offering an order of magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. We provide a lightweight C++ implementation of our approach that can be used to reproduce our results.
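
A heavily simplified version of the alternating scheme behind such barycenters can be written with scipy's assignment solver: repeatedly match each diagram to the current candidate, then move each candidate point to the mean of its matches. The sketch below assumes equally sized diagrams and ignores diagonal matches and progressivity, all of which the actual algorithm handles.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def barycenter(diagrams, iterations=20):
    """Simplified alternating scheme for a Wasserstein barycenter of
    persistence diagrams: match every diagram to the current candidate,
    then update each candidate point to the mean of its matched points.
    Assumes equally sized diagrams and no diagonal matches (a sketch only)."""
    diagrams = [np.asarray(d, float) for d in diagrams]
    bary = diagrams[0].copy()                          # initialize with one member
    for _ in range(iterations):
        matched = np.zeros_like(bary)
        for D in diagrams:
            cost = ((bary[:, None, :] - D[None, :, :]) ** 2).sum(-1)
            rows, cols = linear_sum_assignment(cost)   # optimal matching
            matched[rows] += D[cols]
        bary = matched / len(diagrams)                 # arithmetic mean update
    return bary

ens = [np.array([[0.0, 1.0], [0.3, 0.8]]),
       np.array([[0.1, 1.2], [0.2, 0.7]])]
print(barycenter(ens))
```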

14.
Article in English | MEDLINE | ID: mdl-30207954

ABSTRACT

This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called the Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents each dominant pattern in the geometrical domain in the form of a confidence map for the appearance of critical points. As a by-product, our method also provides 2-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of a Persistence Map, a measure of the geometrical density of critical points which leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to leverage spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical point can be leveraged to evaluate, for each cluster, confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized and we show how to efficiently implement them. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas on a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.
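
The persistence map ingredient can be approximated in a few lines: deposit each critical point on a grid with its persistence as weight and smooth the result, yielding a persistence-weighted density. The kernel bandwidth and grid resolution below are arbitrary choices, and the rest of the framework (spectral embedding, clustering, mandatory critical points) is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def persistence_map(positions, persistences, shape=(128, 128), sigma=3.0):
    """Persistence-weighted density of critical points on a 2D domain:
    deposit each critical point with its persistence as weight, then smooth
    with a Gaussian kernel (bandwidth sigma is an arbitrary choice here)."""
    grid = np.zeros(shape)
    pos = np.asarray(positions)                                   # in [0, 1]^2
    ij = np.clip((pos * (np.array(shape) - 1)).astype(int), 0, np.array(shape) - 1)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), persistences)           # weighted deposit
    return gaussian_filter(grid, sigma)

pts = np.random.default_rng(2).random((30, 2))                    # normalized positions
dens = persistence_map(pts, np.random.default_rng(3).random(30))  # random persistences
print(dens.shape, dens.max())
```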

15.
IEEE Trans Vis Comput Graph ; 24(1): 832-842, 2018 01.
Article in English | MEDLINE | ID: mdl-28866503

ABSTRACT

This system paper presents the Topology ToolKit (TTK), a software platform designed for the topological analysis of scalar data in scientific visualization. While topological data analysis has gained in popularity over the last two decades, it has not yet been widely adopted as a standard data analysis tool for end users or developers. TTK aims at addressing this problem by providing a unified, generic, efficient, and robust implementation of key algorithms for the topological analysis of scalar data, including: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces, and more. TTK is easily accessible to end users due to a tight integration with ParaView. It is also easily accessible to developers through a variety of bindings (Python, VTK/C++) for fast prototyping, or through direct, dependency-free C++, to ease integration into pre-existing complex systems. While developing TTK, we faced several algorithmic and software engineering challenges, which we document in this paper. In particular, we present an algorithm for the construction of a discrete gradient that complies with the critical points extracted in the piecewise-linear setting. This algorithm guarantees combinatorial consistency across the topological abstractions supported by TTK and, importantly, a unified implementation of topological data simplification for multi-scale exploration and analysis. We also present a cached triangulation data structure that supports time-efficient and generic traversals, self-adjusts its memory usage on demand for input simplicial meshes, and implicitly emulates a triangulation for regular grids with no memory overhead. Finally, we describe an original software architecture, which guarantees memory-efficient and direct access to TTK features, while still offering researchers powerful and easy bindings and extensions. TTK is open source (BSD license) and its code, online documentation, and video tutorials are available on TTK's website [108].
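
For developers, the VTK/Python binding route mentioned above typically looks like the sketch below: instantiate TTK's VTK filters from Python and connect them like any other VTK pipeline. The module name (topologytoolkit), filter class, input file, and scalar array name are assumptions that may differ across TTK versions and builds.

```python
# Assumes a TTK build with VTK/Python bindings installed.
from vtk import vtkXMLImageDataReader
from topologytoolkit import ttkPersistenceDiagram   # module/class names: assumption

reader = vtkXMLImageDataReader()
reader.SetFileName("data.vti")                      # hypothetical input file

diagram = ttkPersistenceDiagram()
diagram.SetInputConnection(reader.GetOutputPort())
# Select the point-data scalar array to process (the array name is a placeholder).
diagram.SetInputArrayToProcess(0, 0, 0, 0, "Scalars_")
diagram.Update()

print(diagram.GetOutput())                          # the computed diagram, as VTK data
```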

16.
IEEE Trans Vis Comput Graph ; 23(7): 1782-1795, 2017 07.
Article in English | MEDLINE | ID: mdl-28113433

ABSTRACT

Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate and limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases, and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data structures both in the geometrical domain and in range space, and we show how to generalize the interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purposes. A VTK-based reference implementation is provided as additional material to reproduce our results.
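
The range-space selection that a fiber surface delimits can be previewed cheaply: test which vertices have bivariate values (f, g) inside a polygon drawn in range space. The sketch below does only that flagging, with matplotlib's point-in-polygon test on synthetic data; the exact per-tetrahedron surface extraction is the contribution of the paper.

```python
import numpy as np
from matplotlib.path import Path

def range_polygon_mask(f, g, polygon):
    """Vertices whose bivariate values (f, g) fall inside a polygon drawn in
    range space: the pre-image selection that a fiber surface delimits.
    This only flags vertices; the exact surface lies inside the tetrahedra."""
    pts = np.column_stack([np.ravel(f), np.ravel(g)])
    return Path(polygon).contains_points(pts).reshape(np.shape(f))

f = np.random.default_rng(4).random((32, 32))
g = np.random.default_rng(5).random((32, 32))
mask = range_polygon_mask(f, g, [(0.2, 0.2), (0.8, 0.3), (0.5, 0.9)])  # a range triangle
print(mask.sum(), "vertices inside the range polygon")
```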

17.
IEEE Trans Vis Comput Graph ; 23(1): 960-969, 2017 01.
Article in English | MEDLINE | ID: mdl-27875209

ABSTRACT

This paper presents an efficient algorithm for the computation of the Reeb space of an input bivariate piecewise linear scalar function f defined on a tetrahedral mesh. By extending and generalizing algorithmic concepts from the univariate case to the bivariate one, we report the first practical, output-sensitive algorithm for the exact computation of such a Reeb space. The algorithm starts by identifying the Jacobi set of f, the bivariate analog of critical points in the univariate case. Next, the Reeb space is computed by segmenting the input mesh along the new notion of Jacobi Fiber Surfaces, the bivariate analog of critical contours in the univariate case. We additionally present a simplification heuristic that enables the progressive coarsening of the Reeb space. Our algorithm is simple to implement and most of its computations can be trivially parallelized. We report performance numbers demonstrating orders of magnitude speedups over previous approaches, enabling for the first time the tractable computation of bivariate Reeb spaces in practice. Moreover, unlike range-based quantization approaches (such as the Joint Contour Net), our algorithm is parameter-free. We demonstrate the utility of our approach by using the Reeb space as a semi-automatic segmentation tool for bivariate data. In particular, we introduce continuous scatterplot peeling, a technique which enables the reduction of clutter in the continuous scatterplot by interactively selecting the features of the Reeb space to project. We provide a VTK-based C++ implementation of our algorithm that can be used for reproduction purposes or for the development of new Reeb-space-based visualization techniques.
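
A numerical (non-combinatorial) picture of the Jacobi set is sometimes useful for intuition: it is the locus where the gradients of the two field components become linearly dependent. The grid-based threshold test below is only such an approximation, with an arbitrary tolerance and a toy field; the paper computes the Jacobi set exactly on tetrahedral meshes.

```python
import numpy as np

def jacobi_set_mask(f, g, tol=1e-3):
    """Numerical stand-in for the Jacobi set of a bivariate field (f, g) on a
    regular grid: vertices where the gradients of f and g are (nearly)
    linearly dependent, i.e., their cross product (almost) vanishes."""
    gf = np.stack(np.gradient(f.astype(float)), axis=-1)
    gg = np.stack(np.gradient(g.astype(float)), axis=-1)
    cross = np.cross(gf, gg)
    return np.linalg.norm(cross, axis=-1) < tol

# Toy field: f is a radial function, g is the height; the Jacobi set is the z-axis.
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 32)] * 3, indexing="ij")
mask = jacobi_set_mask(x**2 + y**2 + z**2, z)
print(mask.sum(), "grid vertices near the Jacobi set")
```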

18.
IEEE Trans Vis Comput Graph ; 21(3): 350-62, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26357067

ABSTRACT

Gigapixel panoramas are an increasingly popular digital image application. They are often created as a mosaic of many smaller images. The mosaic acquisition can take many hours, causing the individual images to differ in exposure and lighting conditions. A blending operation is then often necessary to give the appearance of a seamless image. The blending quality depends on the magnitude of the discontinuity along the image boundaries. Often, new boundaries, or seams, are first computed that minimize this transition. Current techniques based on multi-labeling Graph Cuts are too slow and memory intensive for gigapixel-sized panoramas. In this paper, we present a parallel, out-of-core seam computation technique that is fast, has a small memory footprint, and is capable of running efficiently on different types of parallel systems. Its maximum memory usage is configurable, in the form of a cache, which can improve performance by reducing redundant disk I/O and computations. It shows near-perfect scaling on symmetric multiprocessing systems and good scaling on clusters and distributed shared memory systems. Our technique reduces the time required to compute seams for gigapixel imagery from many hours (or even days) to just a few minutes, while still producing boundaries with energy that is on par with Graph Cuts.
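
At the scale of a single pair of overlapping tiles, seam optimization can be written as a small dynamic program over a per-pixel transition cost. The sketch below computes one minimal-cost vertical seam this way; the paper's contribution is performing the equivalent multi-label optimization out-of-core and in parallel at gigapixel scale, which this toy does not attempt.

```python
import numpy as np

def min_cost_seam(cost):
    """Minimal-cost vertical seam through a cost image (e.g., the squared
    difference between two overlapping tiles), via dynamic programming."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)  # best predecessor
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):                                 # backtrack upward
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

a = np.random.default_rng(6).random((64, 32))   # two overlapping tiles (toy data)
b = np.random.default_rng(7).random((64, 32))
print(min_cost_seam((a - b) ** 2)[:8])
```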

19.
IEEE Trans Vis Comput Graph ; 20(12): 2476-85, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356961

ABSTRACT

Interactions between atoms have a major influence on the chemical properties of molecular systems. While covalent interactions impose the structural integrity of molecules, noncovalent interactions govern more subtle phenomena such as protein folding, bonding, or self-assembly. The understanding of these types of interactions is necessary for the interpretation of many biological processes and chemical design tasks. While traditionally the electron density is analyzed to interpret the quantum chemistry of a molecular system, noncovalent interactions are characterized by low electron densities and only slight variations of them, which challenges their extraction and characterization. Recently, the signed electron density and the reduced gradient, two scalar fields derived from the electron density, have drawn much attention in quantum chemistry since they enable a qualitative visualization of these interactions even in complex molecular systems and experimental measurements. In this work, we present the first combinatorial algorithm for the automated extraction and characterization of covalent and noncovalent interactions in molecular systems. The proposed algorithm is based on a joint topological analysis of the signed electron density and the reduced gradient. Combining the connectivity information of the critical points of these two scalar fields makes it possible to visualize, enumerate, classify, and investigate molecular interactions in a robust manner. Experiments on a variety of molecular systems, from simple dimers to proteins or DNA, demonstrate the ability of our technique to robustly extract these interactions and to reveal their structural relations to the atoms and bonds forming the molecules. For simple systems, our analysis corroborates the observations made by chemists, while it provides new visual and quantitative insights on chemical interactions for larger molecular systems.


Subject(s)
Computer Graphics; DNA/ultrastructure; Models, Molecular; Molecular Conformation; Proteins/ultrastructure; Chemical Phenomena; Computational Biology; Protein Structure, Secondary
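
One of the two scalar fields named above, the reduced (density) gradient, has a closed form that is easy to evaluate on a density grid: s = |∇ρ| / (2 (3π²)^(1/3) ρ^(4/3)). The sketch below computes it with finite differences on a toy Gaussian density; a real electron-density grid would come from a quantum-chemistry code, and the combinatorial extraction itself is not shown.

```python
import numpy as np

def reduced_gradient(rho, spacing=1.0):
    """Reduced density gradient s = |grad(rho)| / (2 (3*pi^2)^(1/3) rho^(4/3)),
    evaluated with finite differences on a regular grid. Low-density, low-s
    regions are the classical indicator of noncovalent interactions."""
    grad = np.stack(np.gradient(rho, spacing), axis=-1)
    norm = np.linalg.norm(grad, axis=-1)
    return norm / (2.0 * (3.0 * np.pi ** 2) ** (1.0 / 3.0) * rho ** (4.0 / 3.0))

# Toy density: a single Gaussian blob standing in for an atomic density.
x, y, z = np.meshgrid(*[np.linspace(-3, 3, 48)] * 3, indexing="ij")
rho = np.exp(-(x**2 + y**2 + z**2))
print(reduced_gradient(rho, spacing=6 / 47).min())
```
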
20.
IEEE Trans Vis Comput Graph ; 20(12): 2595-603, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356973

ABSTRACT

Morse-Smale (MS) complexes have been gaining popularity as a tool for feature-driven data analysis and visualization. However, the quality of their geometric embedding and their sole dependence on the input scalar field data can limit their applicability when expressing application-dependent features. In this paper we introduce a new combinatorial technique to compute an MS complex that conforms to both an input scalar field and an additional, prior segmentation of the domain. The segmentation constrains the MS complex computation, guaranteeing that boundaries in the segmentation are captured as separatrices of the MS complex. We demonstrate the utility and versatility of our approach with two applications. First, we use streamline integration to determine numerically computed basins/mountains and use the resulting segmentation as an input to our algorithm. This strategy enables the incorporation of prior flow path knowledge, effectively resulting in an MS complex that is as geometrically accurate as the employed numerical integration. Our second use case is motivated by the observation that the data itself often does not explicitly contain features known by a domain expert to be present. We introduce edit operations for MS complexes so that a user can directly modify their features while maintaining all the advantages of a robust topology-based representation.
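
The constraint imposed by the prior segmentation can be made concrete by extracting where it has boundaries: vertices whose label differs from a neighbor's are exactly the locations the conforming MS complex must capture as separatrices. The 2D sketch below extracts that boundary mask; computing the constrained complex itself is the paper's combinatorial contribution.

```python
import numpy as np

def segmentation_boundaries(labels):
    """Boolean mask of vertices lying on boundaries of a prior segmentation
    of a 2D grid domain (the label changes across a 4-neighbor)."""
    b = np.zeros(labels.shape, dtype=bool)
    dv = labels[:-1, :] != labels[1:, :]      # vertical label changes
    dh = labels[:, :-1] != labels[:, 1:]      # horizontal label changes
    b[:-1, :] |= dv
    b[1:, :] |= dv
    b[:, :-1] |= dh
    b[:, 1:] |= dh
    return b

seg = np.zeros((6, 6), dtype=int)
seg[:, 3:] = 1                                # two regions split down the middle
print(segmentation_boundaries(seg).astype(int))
```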
