Results 1 - 6 of 6
1.
Med Phys ; 50(7): 4255-4268, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36630691

ABSTRACT

PURPOSE: Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly annotated, produced for use by humans rather than machines and lacking the information that machine learning depends upon, this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS: Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate, larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS: We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans during training and then testing the CADe system on an independent expert-annotated dataset. We demonstrate that when the availability of expert annotations is severely limited, the inclusion of weakly labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates. CONCLUSIONS: Our proposed approach can effectively merge a weakly annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
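As a rough illustration of the cross-checking idea in the second stage, the Python sketch below keeps only model-generated nodule candidates that are corroborated by a nearby weak annotation. The function name, the distance threshold, and the confidence cutoff are hypothetical choices for this example, not the paper's actual matching rule.

```python
# Hedged sketch (not the authors' code): cross-check model-generated candidate
# nodules against weak, report-derived locations and keep only corroborated ones.
import numpy as np

def cross_check(pred_centers, pred_scores, weak_centers, dist_mm=10.0, min_score=0.5):
    """Keep predicted candidates that lie near a weakly annotated finding.

    pred_centers : (N, 3) candidate centroids in mm
    pred_scores  : (N,)   network confidence for each candidate
    weak_centers : (M, 3) approximate locations from the weak annotations
    """
    kept = []
    for center, score in zip(pred_centers, pred_scores):
        if score < min_score:
            continue
        d = np.linalg.norm(weak_centers - center, axis=1)
        if d.size and d.min() <= dist_mm:     # corroborated by a weak finding
            kept.append(center)
    return np.asarray(kept)

# Toy usage with synthetic coordinates
preds = np.array([[10., 20., 30.], [100., 5., 7.]])
scores = np.array([0.9, 0.8])
weak = np.array([[12., 19., 29.]])
print(cross_check(preds, scores, weak))       # only the first candidate survives
```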


Subjects
Algorithms; Tomography, X-Ray Computed; Humans; Machine Learning; Supervised Machine Learning; Image Processing, Computer-Assisted/methods
2.
IEEE Trans Med Imaging ; 40(12): 3748-3761, 2021 12.
Article in English | MEDLINE | ID: mdl-34264825

ABSTRACT

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to distinguish benign from malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of the nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The teams' predictors did not differ significantly from each other or from a volume-change estimate (p = .05 with Bonferroni-Holm correction).
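The challenge metric is a standard nodule-wise AUC. The short Python sketch below shows how such a score could be computed with scikit-learn on synthetic labels and scores; the data and the simple volume-change baseline here are toy stand-ins, not the challenge's official evaluation code.

```python
# Illustrative sketch of the scoring metric (not the official evaluation code):
# nodule-wise AUC of malignancy scores on a held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=70)                 # pathology-confirmed labels (toy)
algo_scores = y_true * 0.6 + rng.random(70) * 0.4    # hypothetical algorithm output
volume_change = y_true * 0.4 + rng.random(70) * 0.6  # simple volume-change baseline

print("algorithm AUC:     %.3f" % roc_auc_score(y_true, algo_scores))
print("volume-change AUC: %.3f" % roc_auc_score(y_true, volume_change))
```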


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Humans; Lung; Lung Neoplasms/diagnostic imaging; ROC Curve; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
3.
Med Phys ; 48(7): 3741-3751, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33932241

ABSTRACT

PURPOSE: Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS: In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve performance. RESULTS: We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION: Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are in common use in medical imaging applications.
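A minimal PyTorch sketch of the core architectural idea follows: features are extracted from each 2D slice independently and then aggregated along the slice axis with dilated 1D convolutions. The layer sizes, pooling choices, and class name are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch, assuming per-slice 2D convolutions followed by dilated 1D
# convolutions across the slice axis of a volume of interest.
import torch
import torch.nn as nn

class Slice2DPlusDilated1D(nn.Module):
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.slice_cnn = nn.Sequential(          # applied to every slice independently
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # one feature vector per slice
        )
        self.across = nn.Sequential(             # aggregate along the slice axis
            nn.Conv1d(feat, feat, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(feat, 1, 1),
        )

    def forward(self, vol):                      # vol: (batch, channels, depth, H, W)
        b, c, d, h, w = vol.shape
        slices = vol.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        feats = self.slice_cnn(slices).view(b, d, -1).permute(0, 2, 1)  # (b, feat, d)
        return self.across(feats).mean(dim=-1)   # one score per VOI

print(Slice2DPlusDilated1D()(torch.randn(2, 1, 8, 32, 32)).shape)  # torch.Size([2, 1])
```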


Subjects
Lung Neoplasms; Computer Systems; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
4.
Med Phys ; 47(5): 2150-2160, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32030769

ABSTRACT

PURPOSE: Multiview two-dimensional (2D) convolutional neural networks (CNNs) and three-dimensional (3D) CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in computer-aided detection (CADe) systems for pulmonary nodules in thoracic computed tomography (CT) scans. METHODS: In our approach, a deep network consisting of 2D CNNs first processes slices individually. The features extracted in this stage are then passed to a recurrent neural network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighted before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework. RESULTS: We validated the proposed architecture on the false positive reduction track of the lung nodule analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3D CNNs. Our results show that the proposed approach can encode the 3D information in volumetric data effectively by achieving a sensitivity >0.8 with just 1/8 false positives per scan. CONCLUSIONS: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2D architectures are being developed at a much faster rate compared to 3D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2D architectures.
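The sketch below illustrates the described pipeline in PyTorch under simplifying assumptions: a small 2D CNN encodes each slice, a GRU models the slice sequence, and a learned softmax weighting over the RNN outputs precedes the final classifier. All module names and hyperparameters are hypothetical; this is not the paper's code.

```python
# Hedged sketch: 2D CNN per slice -> RNN over the slice sequence -> learned
# slice weighting -> fully connected classifier (nodule vs. false positive).
import torch
import torch.nn as nn

class SliceCNNPlusRNN(nn.Module):
    def __init__(self, feat=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores the importance of each slice
        self.fc = nn.Linear(hidden, 1)

    def forward(self, vol):                       # vol: (batch, depth, H, W)
        b, d, h, w = vol.shape
        feats = self.encoder(vol.reshape(b * d, 1, h, w)).view(b, d, -1)
        out, _ = self.rnn(feats)                  # (b, d, hidden)
        w_slices = torch.softmax(self.attn(out), dim=1)
        pooled = (w_slices * out).sum(dim=1)      # weighted combination of slices
        return torch.sigmoid(self.fc(pooled))

print(SliceCNNPlusRNN()(torch.randn(2, 8, 32, 32)).shape)  # torch.Size([2, 1])
```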


Subjects
Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Radiography, Thoracic; Tomography, X-Ray Computed; False Positive Reactions; Humans; Sensitivity and Specificity
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 3405-3408, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060628

ABSTRACT

The conventional graph cuts technique has been widely used for image segmentation due to its ability to find the global minimum and its ease of implementation. However, it is an intensity-based technique and as a result is limited to segmentation applications where there is significant contrast between the object and the background. We modified the conventional graph cuts method by adding shape prior and motion information. Active shape models (ASM) with signed distance functions were used to capture the shape prior information, preventing unwanted surrounding tissue from becoming part of the segmented object. The optical flow method was used to estimate the local motion and to extend 3D segmentation to 4D by warping a prior shape model through time. The method was applied to segmentation of the whole lung boundary and the whole liver boundary from respiratory-gated CT data. 4D lung boundary segmentation and 4D liver boundary segmentation were each performed in five patients, and in each case the results were compared with expert-delineated ground truth. 4D segmentation for five phases of CT data took approximately ten minutes on a PC workstation with an AMD Phenom II and 32 GB of memory. An important by-product is quantitative whole-organ volumes from respiratory-gated CT, from end-inspiration to end-expiration, which can be determined with high accuracy.
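One ingredient of the shape-prior term can be sketched with numpy/scipy: a prior shape mask is converted to a signed distance function, and voxels that disagree with the prior are penalized in the unary (t-link) costs that a graph cut would consume. The cost form and weighting below are illustrative assumptions, not the authors' exact energy.

```python
# Hedged sketch: signed distance of a prior shape folded into per-voxel unary
# costs, so voxels far outside the prior shape are penalized.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Negative inside the shape, positive outside (in voxel units)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def unary_costs(image, mu_obj, mu_bkg, prior_mask, shape_weight=0.1):
    sdf = signed_distance(prior_mask)
    cost_obj = (image - mu_obj) ** 2 + shape_weight * np.maximum(sdf, 0)
    cost_bkg = (image - mu_bkg) ** 2 + shape_weight * np.maximum(-sdf, 0)
    return cost_obj, cost_bkg          # would feed the t-links of a graph cut

# Toy usage on a synthetic 2D slice
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
prior = np.zeros((16, 16), dtype=bool); prior[5:11, 5:11] = True
c_obj, c_bkg = unary_costs(img, mu_obj=1.0, mu_bkg=0.0, prior_mask=prior)
print(c_obj.shape, c_bkg.shape)
```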


Subjects
Liver; Lung; Algorithms; Humans; Motion; Tomography, X-Ray Computed
6.
IEEE Trans Med Imaging ; 36(11): 2239-2249, 2017 11.
Article in English | MEDLINE | ID: mdl-28650806

ABSTRACT

SCoTS captures a sparse representation of shapes in an input image through a linear span of previously delineated shapes in a training repository. The model updates the shape prior over level set iterations and captures variability in shapes through a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the influence of the shape prior is adjusted based on the shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. For the problem of lung nodule segmentation in X-ray CT, SCoTS offers a unified framework capable of segmenting nodules of all types. Experimental validations are demonstrated on 542 3D lung nodule images from the LIDC-IDRI database. Despite its generality, SCoTS is competitive with domain-specific state-of-the-art methods for lung nodule segmentation.
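A hedged sketch of the sparse shape representation idea follows, assuming an L1-penalized least-squares fit in place of whatever sparse solver SCoTS actually uses: the current shape estimate is reconstructed from a sparse combination of flattened training shapes, and the number of active training shapes gives a handle on the degree of sparsity.

```python
# Hedged sketch (not SCoTS itself): reconstruct the current shape estimate as a
# sparse, non-negative combination of training shapes via an L1-penalized fit.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_shape_prior(train_shapes, current_shape, alpha=0.01):
    """train_shapes : (n_shapes, n_voxels) flattened training shape representations
       current_shape: (n_voxels,)          current shape estimate from the level set
    """
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    model.fit(train_shapes.T, current_shape)   # columns of X act as dictionary atoms
    weights = model.coef_                      # mostly zero: a sparse combination
    return train_shapes.T @ weights, weights

# Toy usage with random "shapes"
rng = np.random.default_rng(1)
D = rng.random((20, 400))              # 20 training shapes, 400 voxels each
x = 0.7 * D[3] + 0.3 * D[11]           # a shape close to two of the training shapes
recon, w = sparse_shape_prior(D, x)
print(int((np.abs(w) > 1e-6).sum()), "active training shapes")
```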


Subjects
Imaging, Three-Dimensional/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Databases, Factual; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging