Results 1 - 7 of 7
1.
IEEE Trans Med Imaging ; 40(12): 3748-3761, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34264825

ABSTRACT

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening with low-dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with high sensitivity, the exam has low specificity, and it remains difficult to separate benign from malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on predicting lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training; the remainder were used for testing. Participants were evaluated on the area under the receiver operating characteristic curve (AUC) of the nodule-wise malignancy scores their algorithms generated on the test set. The challenge had 17 participants, with 11 teams submitting reports describing their methods, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume-change estimate (p = .05 with Bonferroni-Holm correction).
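
For readers reproducing the evaluation, the nodule-wise AUC metric can be computed as below. This is a minimal sketch, not the challenge's scoring code; the score and label arrays are illustrative placeholders, not challenge data.

```python
# Minimal sketch of the challenge's evaluation metric: nodule-wise ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical malignancy scores from an algorithm and ground-truth
# pathology labels (1 = malignant, 0 = benign).
labels = np.array([0, 1, 0, 1, 1, 0, 1, 0])
scores = np.array([0.12, 0.91, 0.35, 0.67, 0.88, 0.20, 0.74, 0.41])

print(f"Test AUC: {roc_auc_score(labels, scores):.3f}")
```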


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Humans; Lung; Lung Neoplasms/diagnostic imaging; ROC Curve; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
2.
Med Phys ; 47(5): 2150-2160, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32030769

ABSTRACT

PURPOSE: Multiview two-dimensional (2D) convolutional neural networks (CNNs) and three-dimensional (3D) CNNs have been used successfully to analyze volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data in a manner analogous to radiologists' interpretation, and apply the framework to reduce false positives generated by computer-aided detection (CADe) systems for pulmonary nodules in thoracic computed tomography (CT) scans. METHODS: In our approach, a deep network consisting of 2D CNNs first processes slices individually. The features extracted at this stage are then passed to a recurrent neural network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of individual slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the lung nodule analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans and obtained results competitive with 3D CNNs. Our results show that the proposed approach can encode the 3D information in volumetric data effectively, achieving a sensitivity >0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for false positive reduction in chest CT scans and show that state-of-the-art 2D architectures from the literature can be applied directly to volumetric medical data. Because newer and better 2D architectures are being developed at a much faster rate than 3D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2D architectures.
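
A minimal PyTorch sketch may help make the described pipeline concrete: a shared 2D CNN encodes each slice, a GRU models the slice sequence, and learned weights scale each slice's RNN output before the final fully connected layer. All layer sizes and names here are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: per-slice 2D CNN -> RNN over slices -> weighted pooling.
import torch
import torch.nn as nn

class SliceSequenceNet(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(                      # shared 2D slice encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.slice_weight = nn.Linear(hidden_dim, 1)   # learned per-slice weight
        self.fc = nn.Linear(hidden_dim, 1)             # nodule vs. false positive

    def forward(self, x):                              # x: (batch, slices, H, W)
        b, s, h, w = x.shape
        feats = self.cnn(x.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        out, _ = self.rnn(feats)                       # context across slices
        w_attn = torch.softmax(self.slice_weight(out), dim=1)
        pooled = (w_attn * out).sum(dim=1)             # weighted slice pooling
        return self.fc(pooled)

logits = SliceSequenceNet()(torch.randn(2, 9, 32, 32))  # 9 slices per VOI
```

Because the RNN consumes slices as a sequence, any off-the-shelf 2D backbone can replace the small encoder above without touching the rest of the network, which is the practical point of the conclusion.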


Subjects
Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Radiography, Thoracic; Tomography, X-Ray Computed; False Positive Reactions; Humans; Sensitivity and Specificity
3.
J Phys Condens Matter ; 31(44): 445901, 2019 Nov 06.
Article in English | MEDLINE | ID: mdl-31300625

ABSTRACT

We propose an efficient machine-learning-based approach to modeling the magnetism of diluted magnetic semiconductors (DMSs), leading to the prediction of new compounds with enhanced magnetic properties. The approach combines accurate ab initio methods with statistical tools to uncover the correlation between the magnetic features of DMSs and the electronic properties of the constituent atoms, determining the underlying factors responsible for DMS magnetism. Taking the electronic properties of different DMS systems as descriptors to train different regression models allows us to achieve a speed-up of several orders of magnitude in the search for an optimal combination of host semiconductor and dopants with enhanced magnetic properties. We demonstrate this by analyzing a large set of descriptors for a wide range of systems and show that only 30% of these features are likely to contribute to this property. We also show that training regression models with the reduced feature set to predict the total magnetic moment of new candidate DMSs reduces the mean squared error by about 20% compared to models trained on the whole feature set. Furthermore, our results indicate that the predictive power of our method can be improved further by extending our descriptor set.
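
The feature-screening step can be illustrated with a short, hedged sketch: rank descriptors by importance, keep roughly the top 30%, and compare regression error before and after. The synthetic data and random-forest ranker below are stand-ins for the paper's ab initio descriptors and regression models.

```python
# Illustrative sketch: screen descriptors, retrain, compare MSE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for (descriptors, total magnetic moment) pairs.
X, y = make_regression(n_samples=300, n_features=50, n_informative=15,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
mse_full = mean_squared_error(y_te, full.predict(X_te))

# Keep only the top 30% of descriptors by importance, then retrain.
top = np.argsort(full.feature_importances_)[-15:]
reduced = RandomForestRegressor(random_state=0).fit(X_tr[:, top], y_tr)
mse_reduced = mean_squared_error(y_te, reduced.predict(X_te[:, top]))

print(f"MSE, all features: {mse_full:.2f} | top 30%: {mse_reduced:.2f}")
```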

4.
Article in English | MEDLINE | ID: mdl-30887013

ABSTRACT

Feature selection in liquid chromatography-mass spectrometry (LC-MS)-based metabolomics data (biomarker discovery) has become an important topic for machine learning researchers. The high dimensionality and small sample size of LC-MS data make feature selection a challenging task. The goal of biomarker discovery is to select the few most discriminative features from among a large number of irrelevant ones. To improve the reliability of the discovered biomarkers, we use an ensemble-based approach. Ensemble learning can improve the accuracy of feature selection by combining multiple algorithms that carry complementary information. In this paper, we propose an ensemble approach that combines the results of filter-based feature selection methods. To evaluate the proposed approach, we compared it to two commonly used methods, the t-test and PLS-DA, on a real data set.
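
A hedged sketch of the general idea: run several filter methods, convert each method's scores to ranks, and average the ranks into a consensus ordering. The specific filters and the synthetic data below are illustrative assumptions; the paper's exact combination rule may differ.

```python
# Illustrative ensemble of filter-based feature selectors via rank averaging.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

# Small-n, high-p synthetic data, mimicking LC-MS dimensionality.
X, y = make_classification(n_samples=40, n_features=500, n_informative=10,
                           random_state=0)
X_pos = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative input

scores = [f_classif(X, y)[0],
          mutual_info_classif(X, y, random_state=0),
          chi2(X_pos, y)[0]]

# Higher score = more discriminative; average the per-method ranks.
mean_rank = np.mean([rankdata(-s) for s in scores], axis=0)
consensus_top10 = np.argsort(mean_rank)[:10]
print("Consensus top features:", consensus_top10)
```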

5.
IEEE Trans Med Imaging ; 36(11): 2239-2249, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28650806

ABSTRACT

SCoTS captures a sparse representation of shapes in an input image through a linear span of previously delineated shapes in a training repository. The model updates the shape prior over level set iterations and captures shape variability through a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the influence of the shape prior is adjusted based on shape reconstruction, with the assigned weight determined by the degree of sparsity of the representation. For the problem of lung nodule segmentation in X-ray CT, SCoTS offers a unified framework capable of segmenting nodules of all types. Experimental validation is demonstrated on 542 3-D lung nodule images from the LIDC-IDRI database. Despite its generality, SCoTS is competitive with domain-specific state-of-the-art methods for lung nodule segmentation.
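
The core sparse-representation step can be sketched as a Lasso problem: approximate a target shape (e.g., a flattened signed distance map) by a sparse linear combination of training shapes. The random matrices below are stand-ins for real delineated shapes, and the L1 formulation is one plausible reading of the model, not the paper's exact optimization.

```python
# Sketch: sparse combination of training shapes via L1-regularized regression.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = rng.normal(size=(1024, 60))      # 60 training shapes, 1024 voxels each
x = 0.7 * D[:, 5] + 0.3 * D[:, 12]   # target built from two training shapes

# The L1 penalty drives most coefficients to zero: a sparse combination.
model = Lasso(alpha=0.01, fit_intercept=False).fit(D, x)
active = np.flatnonzero(model.coef_)
print("Active training shapes:", active)
print("Coefficients:", np.round(model.coef_[active], 3))
```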


Subjects
Imaging, Three-Dimensional/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Databases, Factual; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 6449-6452, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269723

ABSTRACT

In this paper, a novel method of embedding shape information into level set image segmentation is proposed. Our method is based on inferring shape variations from a sparse linear combination of instances in a shape repository. Given a sufficient number of training shapes with variations, a new shape can be approximated by a linear span of the training shapes exhibiting those variations. At each step of curve evolution, the curve is moved both to minimize the Chan-Vese energy functional and toward the best approximation given by a linear combination of training samples. Although the method is general, in this paper it is applied to the segmentation of the corpus callosum from 2D sagittal MR images.
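
One curve-evolution step of this kind of scheme can be sketched as a simple explicit update combining the Chan-Vese data force with a pull toward the prior shape. This is a simplified illustration under assumed weights and discretization, not the paper's exact formulation.

```python
# Sketch of one level-set step: Chan-Vese data force + shape-prior pull.
import numpy as np

def evolve_step(phi, img, prior_phi, dt=0.5, lam=0.2):
    """phi: level set; img: image; prior_phi: sparse-combination shape."""
    inside, outside = phi > 0, phi <= 0
    c1, c2 = img[inside].mean(), img[outside].mean()  # region mean intensities
    data_force = (img - c2) ** 2 - (img - c1) ** 2    # Chan-Vese data term
    prior_force = prior_phi - phi                     # pull toward prior shape
    return phi + dt * (data_force + lam * prior_force)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                            # stand-in sagittal slice
phi = rng.normal(size=(64, 64))                       # initial level set
phi = evolve_step(phi, img, prior_phi=np.zeros((64, 64)))
```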


Subjects
Corpus Callosum/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Humans; Linear Models; Machine Learning
7.
Neural Netw ; 22(5-6): 642-50, 2009.
Article in English | MEDLINE | ID: mdl-19592217

ABSTRACT

Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of data while retaining most of its variation. Although PCA has been applied successfully in many areas, it is sensitive to noise and limited to linear principal components. The noise sensitivity stems from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by the eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle both problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has connections to robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in RKF-PCA satisfies Mercer's condition, which means that the derivation of kernel PCA remains valid for RKF-PCA. Formal analyses and experimental results suggest that RKF-PCA is an efficient non-linear dimension reduction method and is more robust to noise than the original kernel PCA.
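
The robust-fuzzy idea can be sketched in the linear (RF-PCA) setting: alternate between a membership-weighted PCA fit and fuzzy memberships that downweight points with large reconstruction error. The weighting rule below is an illustrative choice, not the paper's exact formulation.

```python
# Sketch of iteratively reweighted (fuzzy) PCA that suppresses outliers.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))   # rank-2 structure
X += 0.1 * rng.normal(size=(200, 5))                      # small noise
X[:5] += 25                                               # gross outliers

w = np.ones(len(X))                                       # fuzzy memberships
for _ in range(10):
    mu = np.average(X, axis=0, weights=w)                 # weighted mean
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc * np.sqrt(w)[:, None], full_matrices=False)
    P = Vt[:2].T                                          # top-2 directions
    err = ((Xc - Xc @ P @ P.T) ** 2).sum(axis=1)          # reconstruction error
    w = 1.0 / (1.0 + err / err.mean())                    # downweight outliers

print("Outlier weights:", np.round(w[:5], 3))
print("Mean inlier weight:", round(float(w[5:].mean()), 3))
```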


Subjects
Fuzzy Logic; Principal Component Analysis/methods; Algorithms; Confidence Intervals; Linear Models; Nonlinear Dynamics; Software