Results 1 - 20 of 48
1.
JCO Clin Cancer Inform ; 7: e2300057, 2023 07.
Article in English | MEDLINE | ID: mdl-37490642

ABSTRACT

PURPOSE: To determine prognostic and predictive clinical outcomes in metastatic hormone-sensitive prostate cancer (mHSPC) and metastatic castrate-resistant prostate cancer (mCRPC) on the basis of a combination of plasma-derived genomic alterations and lipid features in a longitudinal cohort of patients with advanced prostate cancer. METHODS: A multifeature classifier was constructed to predict clinical outcomes using plasma-based genomic alterations detected in 120 genes and 772 lipidomic species as informative features in a cohort of 71 patients with mHSPC and 144 patients with mCRPC. Outcomes of interest were collected over 11 years of follow-up. These included, in the mHSPC state, early failure of androgen-deprivation therapy (ADT) and exceptional response to ADT, and, in the mCRPC state, early death (poor prognosis) and long-term survival. The approach was to build binary classification models that identified discriminative candidates with optimal weights to predict outcomes. To achieve this, we built multi-omic feature-based classifiers using traditional machine learning (ML) methods, including logistic regression with sparse regularization, multi-kernel Gaussian process regression, and support vector machines. RESULTS: The levels of specific ceramides (d18:1/14:0 and d18:1/17:0) and the presence of CHEK2 mutations, AR amplification, and RB1 deletion were identified as the factors most strongly associated with clinical outcomes. With the ML models, the optimal multi-omic feature combination yielded AUC scores of 0.751 for predicting mHSPC survival and 0.638 for predicting ADT failure, and, in the mCRPC state, 0.687 for prognostication and 0.727 for exceptional survival. These models outperformed those built from a limited number of candidate features for developing multi-omic prognostic and predictive signatures. CONCLUSION: An ML approach that incorporates multiple omic features significantly improves prediction accuracy for metastatic prostate cancer outcomes. These models will need to be validated in independent data sets.
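As one concrete illustration of the kind of binary classifier described above, the sketch below fits an L1-regularized (sparse) logistic regression to a multi-omic feature matrix and scores it by cross-validated AUC. The data, feature counts, and hyperparameters are hypothetical placeholders, not the study's actual cohort or pipeline.

```python
# Minimal sketch: sparse logistic-regression outcome classifier scored by AUC.
# The feature matrix, labels, and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 215, 892          # e.g., 120 genomic + 772 lipidomic features
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)    # binary outcome, e.g., early ADT failure

# The L1 penalty drives most feature weights to zero, leaving a sparse signature.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))

clf.fit(X, y)
print("features retained by the sparse model:", np.count_nonzero(clf.coef_[0]))
```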


Subjects
Prostatic Neoplasms, Castration-Resistant , Male , Humans , Prostatic Neoplasms, Castration-Resistant/diagnosis , Prostatic Neoplasms, Castration-Resistant/genetics , Prostatic Neoplasms, Castration-Resistant/therapy , Androgen Antagonists/therapeutic use , Lipidomics , Multiomics , Retrospective Studies , Genomics
2.
IEEE Trans Vis Comput Graph ; 28(12): 4546-4557, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34191729

ABSTRACT

Robustly handling collisions between individual particles in a large particle-based simulation has been a challenging problem. We introduce particle merging-and-splitting, a simple scheme for robustly handling collisions between particles that prevents inter-penetrations of separate objects without introducing numerical instabilities. This scheme merges colliding particles at the beginning of the time-step and then splits them at the end of the time-step. Thus, collisions last for the duration of a time-step, allowing neighboring particles of the colliding particles to influence each other. We show that our merging-and-splitting method is effective in robustly handling collisions and avoiding penetrations in particle-based simulations. We also show how our merging-and-splitting approach can be used for coupling different simulation systems using different and otherwise incompatible integrators. We present simulation tests involving complex solid-fluid interactions, including solid fractures generated by fluid interactions.
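A minimal sketch of the merge step described above: two colliding particles are replaced by one particle that conserves their combined mass and momentum, advanced for one time step, and then split back apart. This is a schematic toy under simplifying assumptions (point particles, gravity only, a naive split), not the authors' scheme.

```python
# Toy illustration of merging-and-splitting for a single colliding particle pair.
import numpy as np

def merge(x1, v1, m1, x2, v2, m2):
    """Replace two particles with one that conserves mass and momentum."""
    m = m1 + m2
    x = (m1 * x1 + m2 * x2) / m          # center of mass
    v = (m1 * v1 + m2 * v2) / m          # momentum-conserving velocity
    return x, v, m

def split(x, v, m, m1, m2, offset):
    """Split the merged particle back into two, preserving the center of mass."""
    return (x - offset * m2 / m, v, m1), (x + offset * m1 / m, v, m2)

dt = 1e-3
g = np.array([0.0, -9.81])
x1, v1, m1 = np.array([0.0, 1.0]), np.array([ 0.5, 0.0]), 1.0
x2, v2, m2 = np.array([0.0, 1.0]), np.array([-0.5, 0.0]), 2.0

# Merge at the start of the step, integrate once, split at the end of the step.
x, v, m = merge(x1, v1, m1, x2, v2, m2)
v = v + dt * g
x = x + dt * v
p1, p2 = split(x, v, m, m1, m2, offset=np.array([1e-3, 0.0]))
print("post-step particles:", p1, p2)
```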

3.
IEEE Trans Vis Comput Graph ; 27(9): 3781-3793, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32248111

ABSTRACT

Extraction of multiscale features using scale-space is one of the fundamental approaches to analyzing scalar fields. However, similar techniques for vector fields are much less common, even though it is well known that, for example, turbulent flows contain cascades of nested vortices at different scales. The challenge is that the ideas behind scale-space are based on iteratively smoothing the data to extract features at progressively larger scales, making it difficult to extract overlapping features. Instead, we consider spatial regions of influence in vector fields as the notion of scale, and introduce a new approach for the multiscale analysis of vector fields. Rather than smoothing the flow, we use the natural Helmholtz-Hodge decomposition to split it into small-scale and large-scale components using progressively larger neighborhoods. Our approach creates a natural separation of features by extracting local flow behavior, for example, a small vortex, from large-scale effects, for example, a background flow. We demonstrate our technique on large-scale turbulent flows and show multiscale features that cannot be extracted using state-of-the-art techniques.
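For reference, the classical (global) Helmholtz decomposition that the paper builds on can be computed for a periodic 2D field by Fourier projection, as in the sketch below. This is only a stand-in for intuition; the paper's method instead performs localized splits over progressively larger neighborhoods.

```python
# Global Helmholtz decomposition of a periodic 2D velocity field via FFT projection.
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
# Test field: a divergence-free (rotational) part plus a gradient (potential) part.
u = -np.sin(Y) + np.cos(X)
v =  np.sin(X) + np.cos(Y)

kx = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                        # avoid dividing the mean mode by zero

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
dot = KX * uh + KY * vh                               # k . u_hat
ug = np.real(np.fft.ifft2(KX * dot / k2))             # curl-free (gradient) component
vg = np.real(np.fft.ifft2(KY * dot / k2))
ur, vr = u - ug, v - vg                               # divergence-free remainder
div = np.gradient(ur, x, axis=0) + np.gradient(vr, x, axis=1)
print("max divergence of rotational part:", np.abs(div).max())
```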

4.
IEEE Trans Vis Comput Graph ; 26(1): 162-172, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425105

ABSTRACT

High-order finite element methods (HO-FEM) are gaining popularity in the simulation community due to their success in solving complex flow dynamics. There is an increasing need to analyze the data produced as output by these simulations. Simultaneously, topological analysis tools are emerging as powerful methods for investigating simulation data. However, most current approaches to topological analysis have had limited application to HO-FEM simulation data for two reasons. First, current topological tools are designed for linear data (polynomial degree one), but the polynomial degree of the data output by these simulations is typically higher (routinely up to polynomial degree six). Second, the simulation data and quantities derived from it have discontinuities at element boundaries, and these discontinuities do not match the input requirements of the topological tools. One solution to both issues is to transform the high-order data into low-order, continuous inputs for topological analysis. Nevertheless, there has been little work evaluating the possible transformation choices and their downstream effect on the topological analysis. We perform an empirical study to evaluate two commonly used data transformation methodologies along with the recently introduced L-SIAC filter for processing high-order simulation data. Our results show that diverse behaviors are possible. We offer guidance on how best to assemble a pipeline for topological analysis of HO-FEM simulations with the currently available implementations of topological analysis.
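The simplest transformation of the kind discussed above is to evaluate each element's high-order polynomial on a refined set of points and treat the samples as a piecewise-linear field. The 1D sketch below illustrates only that idea; the element data and degree are hypothetical, and the paper studies more careful strategies, including the L-SIAC filter.

```python
# Toy 1D conversion of per-element high-order data to a refined, piecewise-linear field.
import numpy as np

degree, n_elems, n_sub = 6, 4, 8                 # polynomial degree, elements, subsamples/element
edges = np.linspace(0.0, 1.0, n_elems + 1)

def element_field(xi, e):
    """High-order data on element e, evaluated at reference coordinates xi in [0, 1]."""
    coeffs = np.cos(np.arange(degree + 1) + e)   # placeholder modal coefficients
    return sum(c * xi**k for k, c in enumerate(coeffs))

xs, vals = [], []
for e in range(n_elems):
    xi = np.linspace(0.0, 1.0, n_sub + 1)
    xs.append(edges[e] + xi * (edges[e + 1] - edges[e]))
    vals.append(element_field(xi, e))

# Refined nodes; element-interface points are duplicated, exposing any discontinuity.
x_lin, v_lin = np.concatenate(xs), np.concatenate(vals)
# x_lin, v_lin can now be handed to a linear-interpolation-based analysis pipeline.
print(x_lin.shape, v_lin.shape)
```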

5.
IEEE Trans Vis Comput Graph ; 24(12): 3268-3296, 2018 12.
Article in English | MEDLINE | ID: mdl-29990196

ABSTRACT

This article surveys the history and current state of the art of visualization in meteorology, focusing on visualization techniques and tools used for meteorological data analysis. We examine characteristics of meteorological data and analysis tasks, describe the development of computer graphics methods for visualization in meteorology from the 1960s to today, and visit the state of the art of visualization techniques and tools in operational weather forecasting and atmospheric research. We approach the topic from both the visualization and the meteorological side, showing visualization techniques commonly used in meteorological practice, and surveying recent studies in visualization research aimed at meteorological applications. Our overview covers visualization techniques from the fields of display design, 3D visualization, flow dynamics, feature-based visualization, comparative visualization and data fusion, uncertainty and ensemble visualization, interactive visual analysis, efficient rendering, and scalability and reproducibility. We discuss demands and challenges for visualization research targeting meteorological data analysis, highlighting aspects in demonstration of benefit, interactive visual analysis, seamless visualization, ensemble visualization, 3D visualization, and technical issues.

6.
IEEE Trans Vis Comput Graph ; 24(1): 903-912, 2018 01.
Article in English | MEDLINE | ID: mdl-28866517

ABSTRACT

As the finite element method (FEM) and the finite volume method (FVM), in both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated, visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only C0 continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner, with the added benefit of casting the data in a smooth context even if the representation is element-discontinuous.
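To convey the flavor of the filtering idea only: a SIAC-style post-processor convolves the element-discontinuous field with a smooth kernel built from B-splines. The sketch below convolves a sampled discontinuous field with a single normalized quadratic B-spline kernel; the true line-SIAC filter uses a specific accuracy-preserving linear combination of B-splines, which this toy does not reproduce.

```python
# Toy B-spline-kernel smoothing of an element-discontinuous 1D field (not the SIAC filter).
import numpy as np

h = 0.01
x = np.arange(0.0, 1.0, h)
field = np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(2 * np.pi * x) + 0.2)  # jump at x = 0.5

def bspline2(t):
    """Quadratic B-spline, supported on |t| < 1.5."""
    t = np.abs(t)
    return np.where(t < 0.5, 0.75 - t**2,
           np.where(t < 1.5, 0.5 * (1.5 - t)**2, 0.0))

H = 5 * h                                   # kernel width: a few cell widths
kernel = bspline2(np.arange(-7, 8) * h / H)
kernel /= kernel.sum()                      # normalize to unit mass
smoothed = np.convolve(field, kernel, mode="same")
print("jump across x=0.5 before/after filtering:",
      abs(field[51] - field[49]), abs(smoothed[51] - smoothed[49]))
```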

8.
IEEE Comput Graph Appl ; 36(3): 60-71, 2016.
Article in English | MEDLINE | ID: mdl-26186768

ABSTRACT

The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. This work extends the contour boxplot technique to 3D and evaluates it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders. The authors demonstrate the efficacy of using the 3D contour boxplot ensemble visualization technique to analyze shape alignment and variability in atlas construction and analysis as a real-world application.


Subjects
Imaging, Three-Dimensional , Imaging, Three-Dimensional/methods
9.
J Sci Comput ; 63(3): 745-768, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-25983388

ABSTRACT

In this paper, we present a method based on Radial Basis Function (RBF)-generated Finite Differences (FD) for numerically solving diffusion and reaction-diffusion equations (PDEs) on closed surfaces embedded in ℝ^d. Our method uses a method-of-lines formulation, in which surface derivatives that appear in the PDEs are approximated locally using RBF interpolation. The method requires only scattered nodes representing the surface and normal vectors at those scattered nodes. All computations use only extrinsic coordinates, thereby avoiding coordinate distortions and singularities. We also present an optimization procedure that allows for the stabilization of the discrete differential operators generated by our RBF-FD method by selecting shape parameters for each stencil that correspond to a global target condition number. We show the convergence of our method on two surfaces for different stencil sizes, and present applications to nonlinear PDEs simulated both on implicit/parametric surfaces and more general surfaces represented by point clouds.
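The core RBF-FD ingredient is a small linear solve per node that converts scattered-neighbor values into differentiation weights. The planar sketch below computes Gaussian-RBF weights for the 2D Laplacian at one node and tests them on f(x, y) = x² + y² (exact Laplacian 4); the node set and shape parameter are illustrative, whereas the paper selects shape parameters against a target condition number and works on surfaces.

```python
# Minimal RBF-FD sketch: Laplacian weights at a center node from scattered neighbors.
import numpy as np

rng = np.random.default_rng(1)
center = np.zeros(2)
nodes = np.vstack([center, 0.2 * rng.normal(size=(12, 2))])   # stencil: center + 12 neighbors
eps = 3.0                                                     # Gaussian shape parameter

def phi(r):                      # Gaussian RBF
    return np.exp(-(eps * r) ** 2)

def lap_phi(r):                  # Laplacian of the Gaussian RBF in 2D
    return (4 * eps**4 * r**2 - 4 * eps**2) * np.exp(-(eps * r) ** 2)

r_ij = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
A = phi(r_ij)                                    # RBF interpolation matrix
b = lap_phi(np.linalg.norm(nodes - center, axis=1))
w = np.linalg.solve(A, b)                        # Laplacian weights at the center node

f = np.sum(nodes**2, axis=1)                     # test function x^2 + y^2
print("RBF-FD Laplacian estimate:", w @ f, "(exact value: 4)")
```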

10.
J Comput Appl Math ; 257: 195-211, 2014 Feb 01.
Article in English | MEDLINE | ID: mdl-25202164

ABSTRACT

The finite element method (FEM) is a widely employed numerical technique for approximating the solution of partial differential equations (PDEs) in various science and engineering applications. Many of these applications benefit from fast execution of the FEM pipeline. One way to accelerate the FEM pipeline is by exploiting advances in modern computational hardware, such as many-core streaming processors like the graphics processing unit (GPU). In this paper, we present the algorithms and data structures necessary to move the entire FEM pipeline to the GPU. First, we propose an efficient GPU-based algorithm to generate local element information and to assemble the global linear system associated with the FEM discretization of an elliptic PDE. To solve the corresponding linear system efficiently on the GPU, we implement a conjugate gradient method preconditioned with a geometry-informed algebraic multigrid (AMG) method. We propose a new fine-grained parallelism strategy, a corresponding multigrid cycling stage, and an efficient data mapping to the many-core architecture of the GPU. Our on-GPU assembly achieves up to an 87× speedup over a traditional serial CPU implementation. Focusing on the linear system solver alone, we achieve a speedup of up to 51× over a comparable state-of-the-art serial CPU linear system solver. Furthermore, the method compares favorably with other GPU-based, sparse, linear solvers.
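For orientation, the sketch below assembles a sparse Poisson-type system and solves it with a preconditioned conjugate gradient method. It is a CPU stand-in using a simple Jacobi preconditioner; the paper's pipeline runs assembly and an AMG-preconditioned CG solve on the GPU.

```python
# CPU stand-in: sparse 2D Laplacian assembly + CG with a Jacobi preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                   # grid points per side of a 2D Laplacian
main = 4.0 * np.ones(n * n)
off = -1.0 * np.ones(n * n - 1)
off[np.arange(1, n * n) % n == 0] = 0.0   # no horizontal coupling across grid-row ends
offn = -1.0 * np.ones(n * n - n)
A = sp.diags([main, off, off, offn, offn], [0, -1, 1, -n, n], format="csr")
b = np.ones(n * n)

# Jacobi (diagonal) preconditioner, purely illustrative in place of AMG.
d_inv = 1.0 / A.diagonal()
M = spla.LinearOperator(A.shape, matvec=lambda v: d_inv * v)

x, info = spla.cg(A, b, M=M, maxiter=2000)
print("CG converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))
```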

11.
J Comput Phys ; 257(PA): 813-829, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24748685

ABSTRACT

We present a numerical discretisation of an embedded two-dimensional manifold using high-order continuous Galerkin spectral/hp elements, which provide exponential convergence of the solution with increasing polynomial order, while retaining geometric flexibility in the representation of the domain. Our work is motivated by applications in cardiac electrophysiology where sharp gradients in the solution benefit from the high-order discretisation, while the computational cost of anatomically-realistic models can be significantly reduced through the surface representation and use of high-order methods. We describe and validate our discretisation and provide a demonstration of its application to modelling electrochemical propagation across a human left atrium.
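The "exponential convergence with increasing polynomial order" claimed above can be illustrated generically: the sketch below measures the maximum error of Chebyshev polynomial approximations of a smooth 1D function as the degree grows. It is a 1D demonstration of spectral convergence, not the paper's spectral/hp surface discretisation.

```python
# Generic demonstration of spectral (exponential) convergence with polynomial order.
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(np.sin(3 * x))
x_fine = np.linspace(-1, 1, 2001)

for deg in (4, 8, 16, 32):
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # Chebyshev points
    coeffs = C.chebfit(nodes, f(nodes), deg)                        # interpolating fit
    err = np.max(np.abs(C.chebval(x_fine, coeffs) - f(x_fine)))
    print(f"degree {deg:2d}: max error {err:.2e}")
```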

12.
J Phys Chem B ; 118(28): 8190-202, 2014 Jul 17.
Article in English | MEDLINE | ID: mdl-24605768

ABSTRACT

Coarse-grained models are becoming increasingly popular due to their ability to access time and length scales that are prohibitively expensive with atomistic models. However, as a result of decreasing the degrees of freedom, coarse-grained models often have diminished accuracy, representability, and transferability compared with their finer-grained counterparts. Uncertainty quantification (UQ) can help alleviate this challenge by providing an efficient and accurate method to evaluate the effect of model parameters on the properties of the system. This method is useful for finding parameter sets that fit the model to several experimental properties simultaneously. In this work we use UQ as a tool for the evaluation and optimization of a coarse-grained model. We efficiently sample the five-dimensional parameter space of the coarse-grained monatomic water (mW) model to determine which parameter sets best reproduce experimental thermodynamic, structural, and dynamical properties of water. Generalized polynomial chaos (gPC) was used to reconstruct analytical surfaces of the density, enthalpy of vaporization, radial and angular distribution functions, and diffusivity of liquid water as functions of the input parameters. With these surfaces, we evaluated the sensitivity of these properties to perturbations of the model input parameters and the accuracy and representability of the coarse-grained models. In particular, we investigated the optimum length scale of the water-water interactions needed to reproduce the properties of liquid water with a monatomic model with two- and three-body interactions. We found an optimum cutoff length of 4.3 Å, barely longer than the size of the first neighbor shell in water. As the cutoff deviates from this optimum value, the ability of the model to simultaneously reproduce the structure and thermodynamics is severely diminished.
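A one-parameter caricature of the gPC surrogate idea: sample a model over a uniformly distributed input, fit a Legendre expansion by least squares (standing in for a Galerkin gPC projection), and use the surrogate for cheap evaluation and sensitivity estimates. The "model" below is a placeholder, not the mW potential, and the study works over a five-dimensional parameter space.

```python
# Sketch of a 1D generalized polynomial chaos surrogate with Legendre polynomials.
import numpy as np
from numpy.polynomial import legendre as L

def model(theta):
    """Placeholder for an expensive simulation returning a scalar property."""
    return np.sin(2.0 * theta) + 0.3 * theta**2

# Parameter mapped to the Legendre domain [-1, 1] (uniform input distribution).
samples = np.linspace(-1.0, 1.0, 41)
values = model(samples)

coeffs = L.legfit(samples, values, deg=8)              # gPC coefficients (least squares)
surrogate = lambda t: L.legval(t, coeffs)
sensitivity = lambda t: L.legval(t, L.legder(coeffs))  # d(property)/d(parameter)

t = 0.25
print("surrogate:", surrogate(t), "model:", model(t), "sensitivity:", sensitivity(t))
```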

13.
IEEE Trans Vis Comput Graph ; 20(1): 70-83, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24201327

ABSTRACT

This paper describes a new volume rendering system for spectral/hp finite-element methods whose goal is to be both accurate and interactive. Even though high-order finite element methods are commonly used by scientists and engineers, there are few visualization methods designed to display this data directly. Consequently, visualizations of high-order data are generally created by first sampling the high-order field onto a regular grid and then generating the visualization via traditional methods based on linear interpolation. This approach, however, introduces error into the visualization pipeline and requires the user to balance image quality, interactivity, and resource consumption. We first show that evaluation of the volume rendering integral, when applied to the composition of piecewise-smooth transfer functions with the high-order scalar field, typically exhibits second-order convergence for a wide range of high-order quadrature schemes, and has worst-case first-order convergence. This result provides bounds on the ability to achieve high-order convergence to the volume rendering integral. We then develop an algorithm for optimized evaluation of the volume rendering integral, based on the categorization of each ray according to the local behavior of the field and transfer function. We demonstrate the effectiveness of our system by running performance benchmarks on several high-order fluid-flow simulations.
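For readers unfamiliar with the quantity being analyzed, the sketch below evaluates the standard emission-absorption volume rendering integral along a single ray using Riemann-sum compositing. The scalar field and transfer functions are illustrative placeholders, not the paper's optimized per-ray evaluation.

```python
# Front-to-back compositing of the emission-absorption volume rendering integral.
import numpy as np

def scalar_field(t):                 # scalar value along the ray, parameterized by t
    return 0.5 + 0.5 * np.sin(4.0 * t)

def emission(s):                     # transfer function: emitted intensity per unit length
    return s**2

def extinction(s):                   # transfer function: absorption per unit length
    return 2.0 * s

def render_ray(t0=0.0, t1=5.0, n=1000):
    t, dt = np.linspace(t0, t1, n, retstep=True)
    s = scalar_field(t)
    alpha = 1.0 - np.exp(-extinction(s) * dt)            # per-sample opacity
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return np.sum(transmittance * alpha * emission(s))    # accumulated intensity

for n in (50, 200, 1000, 10000):
    print(f"{n:6d} samples -> intensity {render_ray(n=n):.6f}")
```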

14.
IEEE Trans Vis Comput Graph ; 20(1): 140-54, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24201332

ABSTRACT

We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
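The verification recipe described above, refine a discretization parameter, measure errors against a reference, and compare the observed convergence order with the expected one, is generic. The sketch below applies it to a simple 1D integral standing in for the volume rendering integral; the expected second-order behavior of the midpoint rule plays the role of the derived convergence curve.

```python
# Convergence-order verification by progressive refinement against a known reference.
import numpy as np

f = lambda t: np.exp(-t) * np.sin(5 * t)
exact = 5.0 / 26.0 - np.exp(-1.0) * (np.sin(5.0) + 5.0 * np.cos(5.0)) / 26.0  # integral on [0, 1]

def midpoint(n):                     # midpoint rule: expected second-order convergence
    t = (np.arange(n) + 0.5) / n
    return np.sum(f(t)) / n

ns = np.array([10, 20, 40, 80, 160, 320])
errs = np.array([abs(midpoint(n) - exact) for n in ns])
orders = np.log2(errs[:-1] / errs[1:])
print("observed orders:", np.round(orders, 2), "(expected about 2 for the midpoint rule)")
```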

15.
IEEE Trans Vis Comput Graph ; 20(12): 2654-63, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356979

ABSTRACT

In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
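A minimal sketch of the data-depth idea behind curve boxplots: the (J = 2) band depth of each ensemble member is the fraction of member pairs whose pointwise envelope contains it entirely, and the deepest curve plays the role of the median. The ensemble below is synthetic, and real curve boxplots add bands, outlier rules, and rendering on top of this ordering.

```python
# Band depth (J = 2) for an ensemble of 1D curves; the deepest curve is the "median".
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
curves = np.array([np.sin(2 * np.pi * t) + 0.3 * rng.normal() * t
                   + 0.05 * rng.normal(size=t.size) for _ in range(30)])

def band_depth(curves):
    n = curves.shape[0]
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for i in range(n):
        inside = 0
        for j, k in pairs:
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            if np.all((curves[i] >= lo) & (curves[i] <= hi)):
                inside += 1
        depth[i] = inside / len(pairs)
    return depth

d = band_depth(curves)
print("median curve index:", int(np.argmax(d)), "top depths:", np.round(np.sort(d)[-3:], 3))
```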


Subjects
Computer Graphics , Image Processing, Computer-Assisted/methods , Models, Statistical , Algorithms , Hydrodynamics , Meteorology , Neuroimaging
16.
IEEE Trans Vis Comput Graph ; 19(12): 2713-22, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24051838

ABSTRACT

Ensembles of numerical simulations are used in a variety of applications, such as meteorology or computational solid mechanics, in order to quantify the uncertainty or possible error in a model or simulation. Deriving robust statistics and visualizing the variability of an ensemble is a challenging task and is usually accomplished through direct visualization of ensemble members or by providing aggregate representations such as an average or pointwise probabilities. In many cases, the interesting quantities in a simulation are not dense fields, but are sets of features that are often represented as thresholds on physical or derived quantities. In this paper, we introduce a generalization of boxplots, called contour boxplots, for visualization and exploration of ensembles of contours or level sets of functions. Conventional boxplots have been widely used as an exploratory or communicative tool for data analysis, and they typically show the median, mean, confidence intervals, and outliers of a population. The proposed contour boxplots are a generalization of functional boxplots, which build on the notion of data depth. Data depth approximates the extent to which a particular sample is centrally located within its density function. This produces a center-outward ordering that gives rise to the statistical quantities that are essential to boxplots. Here we present a generalization of functional data depth to contours and demonstrate methods for displaying the resulting boxplots for two-dimensional simulation data in weather forecasting and computational fluid dynamics.
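To make the contour generalization concrete: each member is reduced to the binary region enclosed by its contour, and its depth is the fraction of member pairs whose intersection it contains and whose union contains it. The sketch below uses synthetic circular regions; it illustrates the set-containment test only, not the full contour boxplot construction.

```python
# Contour band depth on binary masks (regions enclosed by each ensemble member's contour).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, size = 20, 64
yy, xx = np.mgrid[0:size, 0:size]
masks = []
for _ in range(n):
    cx, cy, r = 32 + 3 * rng.normal(), 32 + 3 * rng.normal(), 14 + 2 * rng.normal()
    masks.append((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2)
masks = np.array(masks)

def contour_band_depth(masks):
    n = masks.shape[0]
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for i in range(n):
        count = 0
        for j, k in pairs:
            inter = masks[j] & masks[k]
            union = masks[j] | masks[k]
            # Member i is "inside" the band if intersection <= region_i <= union.
            if np.all(inter <= masks[i]) and np.all(masks[i] <= union):
                count += 1
        depth[i] = count / len(pairs)
    return depth

d = contour_band_depth(masks)
print("most central contour:", int(np.argmax(d)), "possible outlier:", int(np.argmin(d)))
```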


Subjects
Algorithms , Computer Graphics , Data Interpretation, Statistical , Models, Statistical , User-Computer Interface , Computer Simulation
17.
J Comput Phys ; 250: 403-424, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-23913980

ABSTRACT

With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem's specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization.
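A toy version of the L2-norm Tikhonov-regularized reconstruction mentioned above: recover a source distribution from noisy, ill-conditioned linear measurements by minimizing ||Ax - b||² + λ||x||². The forward matrix here is a generic smoothing operator, not the bidomain torso model, and the sketch omits the inequality constraints and interior-point solver used in the paper.

```python
# Tikhonov-regularized linear inverse problem via the normal equations.
import numpy as np

rng = np.random.default_rng(4)
n = 100
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)   # smoothing, ill-conditioned forward map
x_true = (np.abs(s - 0.5) < 0.1).astype(float)         # block of elevated "source" values
b = A @ x_true + 0.01 * rng.normal(size=n)             # noisy surface-like measurements

lam = 1e-2
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel_err)
```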

18.
Appl Numer Math ; 63: 58-77, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23585704

ABSTRACT

The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid-structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations - radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations - for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost and provide an engineering trade-off strategy for when and why one might select to employ these different representations.
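The flavor of the comparison above can be shown on a simple closed 2D curve: represent the same boundary with a few Lagrangian markers either by piecewise-linear interpolation or by a Fourier (trigonometric) interpolant, and compare positional error against the true shape. The shape below is a smooth ellipse-like curve, not a platelet model, and only position error is measured.

```python
# Piecewise-linear vs. Fourier (trigonometric) representation of a closed 2D boundary.
import numpy as np

def boundary(theta):                     # smooth closed reference shape
    r = 1.0 + 0.2 * np.cos(3 * theta)
    return r * np.cos(theta), r * np.sin(theta)

n = 16                                   # number of Lagrangian markers
theta = 2 * np.pi * np.arange(n) / n
x, y = boundary(theta)
theta_fine = np.linspace(0, 2 * np.pi, 400, endpoint=False)

def trig_eval(values, t):
    """Trigonometric interpolant through n equispaced samples, evaluated at t."""
    c = np.fft.fft(values) / n
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer wavenumbers
    return np.real(np.sum(c[:, None] * np.exp(1j * np.outer(k, t)), axis=0))

x_f, y_f = trig_eval(x, theta_fine), trig_eval(y, theta_fine)
x_l = np.interp(theta_fine, np.append(theta, 2 * np.pi), np.append(x, x[0]))
y_l = np.interp(theta_fine, np.append(theta, 2 * np.pi), np.append(y, y[0]))

xe, ye = boundary(theta_fine)
err_f = np.max(np.hypot(x_f - xe, y_f - ye))
err_l = np.max(np.hypot(x_l - xe, y_l - ye))
print(f"max position error  Fourier: {err_f:.2e}   piecewise linear: {err_l:.2e}")
```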

19.
Breast J ; 19(1): 49-55, 2013.
Article in English | MEDLINE | ID: mdl-23186153

ABSTRACT

Histologic confirmation of axillary nodal metastases preoperatively avoids a sentinel node biopsy and enables a one-step surgical procedure. The aim of this study was to establish the local positive predictive value of axillary ultrasound (AUS) and guided needle core biopsy (NCB) in axillary staging of breast cancer, and to identify factors influencing yield. A prospective audit was undertaken of 142 consecutive patients (screening and symptomatic) presenting from 1 December 2008 to 31 May 2009 with breast lesions categorized R4-R5 who underwent a preoperative AUS and proceeded to surgery. Ultrasound-guided NCB was performed on nodes radiologically classified R3-R5. Lymph node size, number, and morphological features were documented. Yield was correlated with tumor size, grade, and histologic type. AUS/NCB was correlated with postsurgical pathologic findings to determine the sensitivity, specificity, and positive and negative predictive values of AUS and NCB. A total of 142 patients underwent surgery, of whom 52 (37%) had lymph node metastases on histology. All had a preoperative AUS; 51 (36%) had abnormal ultrasound findings, and 46 (90%) of these underwent axillary node NCB, of which 24 (52%) were positive. The smallest tumor size associated with positive nodes at surgery was 11.5 mm. The sensitivity of AUS was 65%, specificity was 81%, the positive predictive value (PPV) was 67%, and the negative predictive value (NPV) was 80%. Sensitivity of ultrasound-guided NCB was 75%, with a specificity of 100%, a PPV of 100%, and an NPV of 64%. Sensitivity of AUS for lobular carcinoma was 36% versus 76% for all other histologies; sensitivity of NCB for lobular cancer was 33% versus 79% for all other histologies. The most significant factor producing discordance between preoperative AUS and definitive histologic evidence of lymph node metastasis was tumor type. Accurate preoperative lymph node staging was compromised by lobular histology (p < 0.0019).
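For readers unfamiliar with the reported metrics, the helper below shows how sensitivity, specificity, PPV, and NPV follow from a 2x2 table of test result versus final surgical histology. The counts used are reconstructed only to be roughly consistent with the percentages reported above; they are illustrative, not the audit's case-level data.

```python
# Diagnostic test metrics from a 2x2 table (counts are illustrative, not study data).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # detected node-positive / all node-positive
        "specificity": tn / (tn + fp),   # correctly negative / all node-negative
        "PPV": tp / (tp + fp),           # positive tests that are truly positive
        "NPV": tn / (tn + fn),           # negative tests that are truly negative
    }

print(diagnostic_metrics(tp=34, fp=17, fn=18, tn=73))
```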


Subjects
Breast Neoplasms/pathology , Carcinoma, Ductal, Breast/secondary , Carcinoma, Lobular/secondary , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Adult , Aged , Aged, 80 and over , Axilla , Biopsy, Needle , Breast Neoplasms/surgery , Carcinoma, Ductal, Breast/surgery , Carcinoma, Lobular/surgery , Female , Humans , Lymph Nodes/surgery , Lymphatic Metastasis , Medical Audit , Middle Aged , Neoplasm Micrometastasis/diagnostic imaging , Neoplasm Micrometastasis/pathology , Neoplasm Staging , Predictive Value of Tests , Preoperative Care , Prospective Studies , Ultrasonography
20.
SIAM J Sci Comput ; 35(5): c473-c494, 2013.
Article in English | MEDLINE | ID: mdl-25221418

ABSTRACT

Generating numerical solutions to the eikonal equation and its many variations has a broad range of applications in both the natural and computational sciences. Efficient solvers on cutting-edge, parallel architectures require new algorithms that may not be theoretically optimal, but that are designed to allow asynchronous solution updates and have limited memory access patterns. This paper presents a parallel algorithm for solving the eikonal equation on fully unstructured tetrahedral meshes. The method is appropriate for the type of fine-grained parallelism found on modern massively-SIMD architectures such as graphics processors and takes into account the particular constraints and capabilities of these computing platforms. This work builds on previous work for solving these equations on triangle meshes; in this paper we adapt and extend previous two-dimensional strategies to accommodate three-dimensional, unstructured, tetrahedralized domains. These new developments include a local update strategy with data compaction for tetrahedral meshes that provides solutions on both serial and parallel architectures, with a generalization to inhomogeneous, anisotropic speed functions. We also propose two new update schemes, specialized to mitigate the natural data increase observed when moving to three dimensions, and the data structures necessary for efficiently mapping data to parallel SIMD processors in a way that maintains computational density. Finally, we present descriptions of the implementations for a single CPU, as well as multicore CPUs with shared memory and SIMD architectures, with comparative results against state-of-the-art eikonal solvers.
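As a much-simplified stand-in for the solver above, the sketch below runs a fast-sweeping eikonal solver on a regular 2D grid with unit speed, using the standard upwind local update. The paper instead targets unstructured tetrahedral meshes, inhomogeneous anisotropic speeds, and fine-grained SIMD parallelism.

```python
# 2D fast-sweeping eikonal solver with unit speed: distance from a point seed.
import numpy as np

def eikonal_2d(n=101, h=1.0, seed=(50, 50), sweeps=2):
    u = np.full((n, n), np.inf)
    u[seed] = 0.0
    orders = [(range(n), range(n)), (range(n - 1, -1, -1), range(n)),
              (range(n), range(n - 1, -1, -1)), (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(sweeps):
        for rows, cols in orders:          # alternate sweep directions
            for i in rows:
                for j in cols:
                    a = min(u[i - 1, j] if i > 0 else np.inf,
                            u[i + 1, j] if i < n - 1 else np.inf)
                    b = min(u[i, j - 1] if j > 0 else np.inf,
                            u[i, j + 1] if j < n - 1 else np.inf)
                    if not (np.isfinite(a) or np.isfinite(b)):
                        continue
                    if abs(a - b) >= h:    # one-sided (upwind) update
                        cand = min(a, b) + h
                    else:                  # two-sided quadratic update
                        cand = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u

u = eikonal_2d()
print("corner value:", round(u[0, 0], 2), "(exact Euclidean distance is about 70.7)")
```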
