Results 1 - 20 of 104

1.
BMC Med Imaging ; 22(1): 123, 2022 07 09.
Article in English | MEDLINE | ID: mdl-35810273

ABSTRACT

OBJECTIVES: Accurate contouring of the clinical target volume (CTV) is a key element of radiotherapy in cervical cancer. We validated a novel deep learning (DL)-based auto-segmentation algorithm for CTVs in cervical cancer, the three-channel adaptive auto-segmentation network (TCAS). METHODS: A total of 107 cases were collected and contoured by senior radiation oncologists (ROs). Each case consisted of: (1) a contrast-enhanced CT scan for positioning, (2) the related CTV, (3) multiple plain CT scans acquired during treatment, and (4) the related CTVs. After registration between (1) and (3) for the same patient, the aligned image and CTV were generated. In method 1 (rigid registration) and method 2 (deformable registration), the aligned CTV is taken directly as the result; in method 3 (rigid registration + TCAS) and method 4 (deformable registration + TCAS), the result is generated by the DL-based method. RESULTS: From the 107 cases, 15 pairs were selected as the test set. The dice similarity coefficient (DSC) of method 1 was 0.8155 ± 0.0368 and that of method 2 was 0.8277 ± 0.0315; the DSCs of methods 3 and 4 were 0.8914 ± 0.0294 and 0.8921 ± 0.0231, respectively. The mean surface distance and Hausdorff distance of methods 3 and 4 were markedly better than those of methods 1 and 2. CONCLUSIONS: The TCAS achieved accuracy comparable to manual delineation by senior ROs and was significantly better than direct registration.
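
The three reported metrics (DSC, mean surface distance, Hausdorff distance) compare a predicted contour against the reference contour. The sketch below shows one way to compute them from binary masks with NumPy/SciPy; the surface extraction via binary erosion and the toy spheres are illustrative assumptions, not the evaluation code used in the study.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def surface_points(mask, spacing):
    """Voxel coordinates (in mm) on the boundary of a binary mask."""
    eroded = ndimage.binary_erosion(mask)
    surface = mask & ~eroded
    return np.argwhere(surface) * np.asarray(spacing)

def dice(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def surface_distances(pred, ref, spacing=(1.0, 1.0, 1.0)):
    p = surface_points(pred, spacing)
    r = surface_points(ref, spacing)
    d = cdist(p, r)                      # pairwise distances between the two surfaces
    d_pr = d.min(axis=1)                 # pred surface -> ref surface
    d_rp = d.min(axis=0)                 # ref surface -> pred surface
    msd = (d_pr.mean() + d_rp.mean()) / 2.0
    hd = max(d_pr.max(), d_rp.max())     # symmetric Hausdorff distance
    return msd, hd

# Toy example with two overlapping spheres on a 3D grid.
z, y, x = np.ogrid[:40, :40, :40]
ref = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 10 ** 2
pred = (z - 21) ** 2 + (y - 20) ** 2 + (x - 19) ** 2 < 10 ** 2
print("DSC:", dice(pred, ref))
print("MSD/HD (mm):", surface_distances(pred, ref))
```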


Subjects
Deep Learning , Uterine Cervical Neoplasms , Algorithms , Female , Humans , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Reactive Oxygen Species , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
2.
J Appl Clin Med Phys ; 23(2): e13470, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34807501

ABSTRACT

OBJECTIVES: Because radiotherapy is indispensable for treating cervical cancer, it is critical to delineate the radiation targets accurately and efficiently. We evaluated a deep learning (DL)-based auto-segmentation algorithm for automatic contouring of clinical target volumes (CTVs) in cervical cancer. METHODS: Computed tomography (CT) datasets from 535 cervical cancer patients treated with definitive or postoperative radiotherapy were collected. A DL tool based on VB-Net was developed to delineate CTVs of the pelvic lymph drainage area (dCTV1) and parametrial area (dCTV2) in the definitive radiotherapy group (training/validation/test split: 157/20/23 cases). The CTV of the pelvic lymph drainage area (pCTV1) was delineated in the postoperative radiotherapy group (training/validation/test split: 272/30/33 cases). Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) were used to evaluate contouring accuracy, and contouring times were recorded for efficiency comparison. RESULTS: The mean DSC/MSD/HD values for the DL-based tool were 0.88/1.32 mm/21.60 mm for dCTV1, 0.70/2.42 mm/22.44 mm for dCTV2, and 0.86/1.15 mm/20.78 mm for pCTV1. Only minor modifications were needed for 63.5% of auto-segmentations to meet clinical requirements. The contouring accuracy of the DL-based tool was comparable to that of senior radiation oncologists and superior to that of junior/intermediate radiation oncologists. Additionally, DL assistance improved the performance of junior radiation oncologists for dCTV2 and pCTV1 contouring (mean DSC increases: 0.20 for dCTV2, 0.03 for pCTV1; mean contouring time decreases: 9.8 min for dCTV2, 28.9 min for pCTV1). CONCLUSIONS: DL-based auto-segmentation improves CTV contouring accuracy, reduces contouring time, and improves clinical efficiency for treating cervical cancer.


Subjects
Deep Learning , Uterine Cervical Neoplasms , Algorithms , Female , Humans , Organs at Risk , Radiotherapy Planning, Computer-Assisted , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
3.
Pattern Recognit ; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34565913

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices that requires only scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistency techniques. Specifically, to cope with the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, since the output of the mean teacher contains both correct and unreliable predictions, treating every prediction of the teacher model equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the reliable predictions of the teacher model. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to undergo the same transformation that is applied to the network's input image. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to fully supervised ones.
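
As a rough illustration of the uncertainty-aware mean-teacher idea described above (an EMA teacher whose confident predictions supervise the student through a consistency loss), here is a minimal PyTorch sketch. The toy network, noise-based perturbations, entropy threshold, and number of stochastic passes are assumptions for illustration, not the paper's architecture or hyperparameters.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def uncertainty_masked_consistency(student, teacher, image, n_mc=4, thresh=0.7):
    """Consistency loss applied only where the teacher is confident.

    Uncertainty is approximated by the predictive entropy over n_mc
    noisy forward passes of the teacher (an illustrative choice).
    """
    noisy = [image + 0.05 * torch.randn_like(image) for _ in range(n_mc)]
    with torch.no_grad():
        probs = torch.stack([F.softmax(teacher(x), dim=1) for x in noisy]).mean(0)
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1, keepdim=True)
        mask = (entropy < thresh).float()          # keep reliable teacher pixels only
    student_probs = F.softmax(student(image), dim=1)
    return ((student_probs - probs) ** 2 * mask).mean()

# Toy 2-class "segmentation network" used only to make the sketch runnable.
student = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)
teacher = copy.deepcopy(student)
image = torch.randn(2, 1, 64, 64)
loss = uncertainty_masked_consistency(student, teacher, image)
loss.backward()
ema_update(teacher, student)
```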

4.
BMC Med Imaging ; 21(1): 57, 2021 03 23.
Article in English | MEDLINE | ID: mdl-33757431

ABSTRACT

BACKGROUND: The spatial and temporal distributions of lung infection in coronavirus disease 2019 (COVID-19), and their changes, could reveal important patterns for better understanding the disease and its time course. This paper presents a pipeline to statistically analyze these patterns by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results are then warped onto a pre-defined template CT image using deformable registration based on the lung fields. The spatial distributions of infection regions, and their changes during the course of the disease, are then calculated at the voxel level, enabling visualization and quantitative comparison between groups. We compared the distribution maps between COVID-19 and community-acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: For infection segmentation, comparing the segmentation results with manually annotated ground truth, the average Dice was 91.6% ± 10.0%, close to the inter-rater difference between two radiologists (Dice 96.1% ± 3.5%). The distribution map of infection regions shows that high-probability regions lie in the peripheral subpleural area (up to 35.1% probability). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show lesion distributions similar to critical COVID-19 (intensive care unit patients) but with smaller areas of significant difference in the right lower lobe. Regarding the disease course, critical COVID-19 patients showed four subsequent patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with remarkable concurrent HU patterns for GGO and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and segmentation results onto a template, spatial distribution patterns of infections can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infection diseases and their changes during the disease course. Our results demonstrate different patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four subsequent disease course patterns in the severe COVID-19 patients studied, with remarkable concurrent HU patterns for GGO and consolidations.
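
Once all segmentations have been warped onto the common template, the voxel-wise distribution map reduces to averaging the aligned binary masks within a group. A minimal NumPy sketch, assuming the masks are already registered (the VB-Net segmentation and deformable registration steps are not shown):

```python
import numpy as np

def distribution_map(aligned_masks):
    """Voxel-wise probability of infection across a group of subjects.

    aligned_masks: iterable of binary arrays already warped to the
    same template space (registration itself is not shown here).
    """
    masks = np.stack([m.astype(np.float32) for m in aligned_masks])
    return masks.mean(axis=0)   # value at each voxel = fraction of subjects infected there

# Toy comparison between two groups on a small template grid.
rng = np.random.default_rng(0)
covid = [rng.random((32, 32, 32)) < 0.30 for _ in range(20)]
cap = [rng.random((32, 32, 32)) < 0.15 for _ in range(20)]
diff = distribution_map(covid) - distribution_map(cap)
print("peak probability difference:", float(diff.max()))
```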


Subjects
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Disease Progression , Humans , Pneumonia/diagnostic imaging , Tomography, X-Ray Computed/methods
5.
Hum Brain Mapp ; 38(6): 2865-2874, 2017 06.
Article in English | MEDLINE | ID: mdl-28295833

ABSTRACT

Understanding the early dynamic development of the human cerebral cortex remains a challenging problem. Cortical thickness, one of the most important morphological attributes of the cerebral cortex, is a sensitive indicator for both normal neurodevelopment and neuropsychiatric disorders, but its early postnatal development remains largely unexplored. In this study, we investigate a key question in neurodevelopmental science: can we predict the future dynamic development of the cortical thickness map in an individual infant based on the MRI data available at birth? If this is possible, we might be able to better model and understand early brain development and also detect abnormal brain development early during infancy. To this end, we develop a novel learning-based method, called Dynamically-Assembled Regression Forest (DARF), to predict the development of the cortical thickness map during the first postnatal year based on neonatal MRI features. We applied our method to 15 healthy infants and predicted their cortical thickness maps at 3, 6, 9, and 12 months of age, with mean absolute errors of 0.209 mm, 0.332 mm, 0.340 mm, and 0.321 mm, respectively. Moreover, we found that the prediction precision is region-specific, with high precision in the unimodal cortex and relatively low precision in the high-order association cortex, which may be associated with their differential developmental patterns. Additional experiments also suggest that using more early time points for prediction can further significantly improve the prediction accuracy. Hum Brain Mapp 38:2865-2874, 2017. © 2017 Wiley Periodicals, Inc.
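
The prediction task is per-vertex regression from neonatal features to cortical thickness at a later age, evaluated by mean absolute error. The sketch below uses a plain scikit-learn RandomForestRegressor on synthetic data as a stand-in for DARF; the features, sample sizes, and simple train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in data: per-vertex features at birth and thickness at 3 months.
rng = np.random.default_rng(0)
n_vertices, n_features = 5000, 12
X_birth = rng.normal(size=(n_vertices, n_features))          # e.g. local geometry, intensity
y_3mo = 1.5 + 0.3 * X_birth[:, 0] - 0.2 * X_birth[:, 1] + 0.05 * rng.normal(size=n_vertices)

train, test = np.arange(0, 4000), np.arange(4000, 5000)
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_birth[train], y_3mo[train])
pred = forest.predict(X_birth[test])
print("MAE (mm):", mean_absolute_error(y_3mo[test], pred))
```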


Subjects
Brain Mapping , Cerebral Cortex/anatomy & histology , Cerebral Cortex/growth & development , Nonlinear Dynamics , Age Factors , Cerebral Cortex/diagnostic imaging , Female , Gestational Age , Humans , Image Processing, Computer-Assisted , Infant , Infant, Newborn , Machine Learning , Magnetic Resonance Imaging , Male , Models, Neurological , Predictive Value of Tests
6.
Neurocomputing (Amst) ; 229: 3-12, 2017 Mar 15.
Article in English | MEDLINE | ID: mdl-28133417

ABSTRACT

Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus is still challenging, partly because of the ambiguous intensity boundary between the hippocampus and the surrounding anatomy. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult/infant brain MR images. The contribution of our work is two-fold. First, each forest classifier is trained to label just a specific sub-region of the hippocampus, thus enhancing the labeling accuracy. Second, a novel forest selection strategy is proposed, such that each voxel in the test image can automatically select a set of optimal forests and then dynamically fuse their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the auto-context strategy, so that the proposed learning framework can gradually refine the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method scales well, segmenting the hippocampus accurately and efficiently.

7.
Neuroimage ; 134: 223-235, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27046107

ABSTRACT

Quantitative study of perivascular spaces (PVSs) in brain magnetic resonance (MR) images is important for understanding the brain lymphatic system and its relationship with neurological diseases. One of the major challenges is the accurate extraction of PVSs, which are very thin tubular structures with various orientations in three-dimensional (3D) MR images. In this paper, we propose a learning-based PVS segmentation method to address this challenge. Specifically, we first determine a region of interest (ROI) using the anatomical brain structure and the vesselness information derived from eigenvalues of image derivatives. Then, within the ROI, we extract a number of randomized Haar features that are normalized with respect to the principal directions of the underlying image derivatives. The classifier is trained with the random forest model, which can effectively learn both discriminative features and classifier parameters to maximize the information gain. Finally, a sequential learning strategy is used to further enforce various contextual patterns around the thin tubular structures into the classifier. For evaluation, we apply the proposed method to 7T brain MR images scanned from 17 healthy subjects aged 25 to 37 years. Performance is measured by voxel-wise segmentation accuracy, cluster-wise classification accuracy, and the similarity of geometric properties, such as volume, length, and diameter distributions, between the predicted and true PVSs. Moreover, the accuracies are also evaluated on simulated images with motion artifacts and lacunes to demonstrate the potential of our method for segmenting PVSs in elderly and patient populations. The experimental results show that our proposed method outperforms all existing PVS segmentation methods.
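
The ROI step relies on a vesselness cue derived from eigenvalues of image derivatives. The NumPy sketch below computes per-voxel Hessian eigenvalues and a simple tubularity score; the specific score and the single-scale Hessian are simplifications chosen only for illustration, not the authors' feature set.

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the 3D Hessian, sorted by magnitude."""
    grads = np.gradient(volume)
    hessian = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            hessian[..., i, j] = gij
    eig = np.linalg.eigvalsh(hessian)                 # ascending eigenvalues
    order = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, order, axis=-1)    # |l1| <= |l2| <= |l3|

def tube_score(volume):
    """Bright tubular structures: |l1| small, l2 and l3 both strongly negative."""
    l1, l2, l3 = np.moveaxis(hessian_eigenvalues(volume), -1, 0)
    score = np.abs(l2 * l3) / (np.abs(l1) + 1e-6)
    return np.where((l2 < 0) & (l3 < 0), score, 0.0)

# Toy volume containing a bright tube running along the z-axis.
z, y, x = np.ogrid[:32, :32, :32]
vol = np.exp(-(((y - 16) ** 2 + (x - 16) ** 2) / 8.0)) * np.ones((32, 1, 1))
print("max tube score along the tube:", tube_score(vol)[:, 16, 16].max())
```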


Subjects
Algorithms , Cerebral Arteries/diagnostic imaging , Cerebral Veins/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Angiography/methods , Pattern Recognition, Automated/methods , Adult , Cerebral Angiography/methods , Cerebral Arteries/anatomy & histology , Cerebral Veins/anatomy & histology , Female , Humans , Image Enhancement/methods , Machine Learning , Male , Reproducibility of Results , Sensitivity and Specificity
8.
Hum Brain Mapp ; 37(11): 4129-4147, 2016 11.
Article in English | MEDLINE | ID: mdl-27380969

ABSTRACT

Longitudinal neuroimaging analysis of dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart brain developmental trajectories. In practice, however, a large portion of subjects in longitudinal studies have missing data at certain time points, due to various reasons such as missed scans or poor image quality. To make better use of these incomplete longitudinal data, in this paper we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps of vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing pairwise estimation followed by joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans to estimate the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each with up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with an average error of less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.


Subjects
Cerebral Cortex/diagnostic imaging , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging , Neuroimaging/methods , Cerebral Cortex/growth & development , Child, Preschool , Humans , Infant , Longitudinal Studies , Magnetic Resonance Imaging/methods , Organ Size , Regression Analysis
9.
Neurocomputing (Amst) ; 173(2): 317-331, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26752809

ABSTRACT

In recent years, there has been great interest in prostate segmentation, which is an important and challenging task for CT image-guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas-based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been extensively evaluated on a real prostate CT dataset including 24 patients with 330 CT images and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state-of-the-art methods in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
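
The two-step pipeline (sparse feature selection, then regression to a prostate-likelihood map) can be roughed out with scikit-learn. In the sketch below, ordinary Lasso and Ridge are stand-ins for the transductive tLasso and wLapRLS algorithms, and the voxel features and labels are synthetic; it illustrates the data flow only.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n_labeled, n_unlabeled, n_feat = 200, 2000, 50

# Voxel-wise features; labels 1 = prostate, 0 = non-prostate for the few
# voxels the physician marked, unknown for the rest.
X_labeled = rng.normal(size=(n_labeled, n_feat))
y_labeled = (X_labeled[:, 0] + 0.5 * X_labeled[:, 1] > 0).astype(float)
X_unlabeled = rng.normal(size=(n_unlabeled, n_feat))

# Step 1: sparse feature selection (stand-in for tLasso).
selector = Lasso(alpha=0.05, max_iter=10000).fit(X_labeled, y_labeled)
keep = np.flatnonzero(selector.coef_)

# Step 2: regression on the selected features (stand-in for wLapRLS),
# producing a prostate-likelihood value for every unlabeled voxel.
reg = Ridge(alpha=1.0).fit(X_labeled[:, keep], y_labeled)
likelihood_map = np.clip(reg.predict(X_unlabeled[:, keep]), 0.0, 1.0)
print("selected features:", keep.size, "mean likelihood:", likelihood_map.mean())
```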

10.
Neuroimage ; 108: 160-72, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25541188

ABSTRACT

Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effects, and the ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach to further improve segmentation accuracy.
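
The iterative part of this framework alternates between classifying voxels and feeding the estimated tissue probability maps back in as additional "source" features. A compact scikit-learn sketch of that loop on synthetic voxel features (the real method uses 3D context features from T1, T2 and FA images, which are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, n_feat, n_classes = 4000, 30, 3      # e.g. GM / WM / CSF

X = rng.normal(size=(n_voxels, n_feat))        # stand-in for T1/T2/FA patch features
y = rng.integers(0, n_classes, size=n_voxels)
y = np.where(X[:, 0] > 0.5, 0, y)              # inject some learnable structure

train = np.arange(0, 3000)
test = np.arange(3000, n_voxels)

features = X.copy()
for it in range(3):                            # iterative refinement with auto-context
    forest = RandomForestClassifier(n_estimators=50, random_state=it)
    forest.fit(features[train], y[train])
    prob_maps = forest.predict_proba(features)  # tissue probability maps for all voxels
    acc = forest.score(features[test], y[test])
    print(f"iteration {it}: test accuracy = {acc:.3f}")
    features = np.hstack([X, prob_maps])        # append maps as extra source features
```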


Subjects
Brain Mapping/methods , Brain/anatomy & histology , Magnetic Resonance Imaging/methods , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Infant , Infant, Newborn , Pattern Recognition, Automated/methods
11.
Neuroimage ; 84: 141-58, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-23968736

ABSTRACT

The segmentation of neonatal brain MR images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is challenging due to low spatial resolution, severe partial volume effects, high image noise, and the dynamic myelination and maturation processes. Atlas-based methods have been widely used for guiding neonatal brain segmentation. Existing brain atlases were generally constructed by equally averaging all the aligned template images from a population. However, such population-based atlases might not be representative of a test subject in regions with high inter-subject variability, and thus often provide limited guidance for segmentation in those regions. Recently, patch-based sparse representation techniques have been proposed to effectively select the most relevant elements from a large group of candidates, which can be used to generate a subject-specific representation with rich local anatomical details for guiding segmentation. Accordingly, in this paper we propose a novel patch-driven level set method for the segmentation of neonatal brain MR images that takes advantage of sparse representation techniques. Specifically, we first build a subject-specific atlas from a library of aligned, manually segmented images by using sparse representation in a patch-based fashion. Then, the spatial consistency of the probability maps from the subject-specific atlas is further enforced by considering the similarities of each patch with its neighboring patches. Finally, the probability maps are integrated into a coupled level set framework for more accurate segmentation. The proposed method has been extensively evaluated on 20 training subjects using leave-one-out cross validation, and also on 132 additional testing subjects. Our method achieved high accuracy, with Dice ratios of 0.919±0.008 for white matter and 0.901±0.005 for gray matter for the overlap between automated and manual segmentations in the cortical region.
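
The patch-based sparse representation step expresses a target patch as a sparse combination of aligned atlas patches and transfers the same weights to their label patches, yielding a subject-specific prior. A small sketch using scikit-learn's Lasso as the sparse coder, with a synthetic patch dictionary; the coupled level-set refinement is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
patch_dim, n_atlas_patches = 125, 60            # e.g. 5x5x5 patches from aligned atlases

# Dictionary of atlas intensity patches and their (binary) label patches.
D_intensity = rng.normal(size=(patch_dim, n_atlas_patches))
D_labels = (rng.random((patch_dim, n_atlas_patches)) > 0.5).astype(float)

# Target patch from the test subject: a noisy mixture of a few atlas patches.
true_w = np.zeros(n_atlas_patches)
true_w[[3, 17, 42]] = [0.5, 0.3, 0.2]
target = D_intensity @ true_w + 0.01 * rng.normal(size=patch_dim)

# Sparse coding of the target patch over the atlas patch dictionary.
coder = Lasso(alpha=0.01, positive=True, max_iter=5000)
coder.fit(D_intensity, target)
w = coder.coef_

# Transfer the sparse weights to the label patches -> subject-specific prior patch.
prior_patch = D_labels @ w / (w.sum() + 1e-8)
print("non-zero atlas patches:", np.flatnonzero(w).size)
print("prior range:", prior_patch.min(), prior_patch.max())
```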


Subjects
Algorithms , Brain/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Models, Anatomic , Models, Neurological , Pattern Recognition, Automated/methods , Computer Simulation , Female , Humans , Image Enhancement/methods , Infant, Newborn , Male , Reproducibility of Results , Sensitivity and Specificity
12.
Neuroimage ; 89: 152-64, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24291615

ABSTRACT

Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effects, and the ongoing maturation and myelination processes. During the first year of life, the brain image contrast between white and gray matter undergoes dramatic changes. In particular, the image contrast inverts around 6-8 months of age, when white and gray matter are isointense in T1- and T2-weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion on the multi-modality T1, T2 and FA images. The segmentation result is then iteratively refined by integrating the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age using leave-one-out cross-validation, as well as on 10 additional unseen testing subjects. Our method achieved high accuracy in terms of Dice ratios measuring the volume overlap between automated and manual segmentations: 0.889±0.008 for white matter and 0.870±0.006 for gray matter.


Subjects
Brain/anatomy & histology , Diffusion Tensor Imaging , Data Interpretation, Statistical , Female , Humans , Image Processing, Computer-Assisted , Infant , Male , Nerve Fibers, Myelinated
13.
IEEE Trans Med Imaging ; 42(2): 557-567, 2023 02.
Article in English | MEDLINE | ID: mdl-36459600

ABSTRACT

With the rapid worldwide spread of Coronavirus Disease 2019 (COVID-19), jointly identifying severe COVID-19 cases from mild ones and predicting the conversion time (from mild to severe) is essential to optimize the workflow and reduce clinicians' workload. In this study, we propose a novel framework for COVID-19 diagnosis, termed Structural Attention Graph Neural Network (SAGNN), which combines multi-source information, including features extracted from chest CT, the latent lung structural distribution, and non-imaging patient information, to diagnose COVID-19 severity and predict the conversion time from mild to severe. Specifically, we first construct a graph to incorporate structural information of the lung and adopt a graph attention network to iteratively update the representations of lung segments. To distinguish different infection degrees of the left and right lungs, we further introduce a structural attention mechanism. Finally, we introduce demographic information and develop a multi-task learning framework to jointly perform classification and regression. Experiments are conducted on a real dataset with 1687 chest CT scans, including 1328 mild cases and 359 severe cases. Experimental results show that our method achieves the best classification (e.g., 86.86% in terms of area under the curve) and regression (e.g., 0.58 in terms of correlation coefficient) performance compared with other methods.
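
A minimal single-head graph attention layer in plain PyTorch illustrates how representations of lung-segment nodes can be updated by attending over graph neighbors. The node features, adjacency, and dimensions below are toy values; this is not the SAGNN architecture, which additionally uses structural attention, demographic inputs, and multi-task heads.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    """One attention head aggregating neighbor features on a small graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                     # (n_nodes, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (n_nodes, n_nodes)
        scores = scores.masked_fill(adj == 0, float("-inf")) # attend to neighbors only
        alpha = torch.softmax(scores, dim=-1)
        return alpha @ h                                     # updated node representations

# Toy graph: 5 lung segments with 8 CT-derived features each, chain-connected.
x = torch.randn(5, 8)
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
layer = SimpleGraphAttention(8, 16)
print(layer(x, adj).shape)    # torch.Size([5, 16])
```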


Subjects
COVID-19 , Humans , COVID-19 Testing , Neural Networks, Computer , Lung/diagnostic imaging , Tomography, X-Ray Computed/methods
14.
Quant Imaging Med Surg ; 13(12): 8641-8656, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106268

ABSTRACT

Background: Accurate diagnosis of pneumonia is vital for effective disease management and mortality reduction, but pneumonia can easily be confused with other conditions on chest computed tomography (CT) because of overlapping imaging features. We aimed to develop and validate a deep learning (DL) model based on chest CT for accurate classification of viral pneumonia (VP), bacterial pneumonia (BP), fungal pneumonia (FP), pulmonary tuberculosis (PTB), and no pneumonia (NP). Methods: In total, 1,776 cases from five hospitals in different regions were retrospectively collected from September 2019 to June 2023. All cases were enrolled according to inclusion and exclusion criteria; ultimately 1,611 cases were used to develop the DL model with 5-fold cross-validation, and 165 cases were used as the external test set. Five radiologists blindly reviewed the images from the internal and external test sets, first without and then with DL model assistance. Precision, recall, F1-score, weighted F1-average, and area under the curve (AUC) were used to evaluate model performance. Results: The F1-scores of the DL model on the internal and external test sets were, respectively, 0.947 [95% confidence interval (CI): 0.936-0.958] and 0.933 (95% CI: 0.916-0.950) for VP, 0.511 (95% CI: 0.487-0.536) and 0.591 (95% CI: 0.557-0.624) for BP, 0.842 (95% CI: 0.824-0.860) and 0.848 (95% CI: 0.824-0.873) for FP, 0.843 (95% CI: 0.826-0.861) and 0.795 (95% CI: 0.767-0.822) for PTB, and 0.975 (95% CI: 0.968-0.983) and 0.976 (95% CI: 0.965-0.986) for NP, with weighted F1-averages of 0.883 (95% CI: 0.867-0.898) and 0.846 (95% CI: 0.822-0.871), respectively. The model performed comparably well on the internal and external test sets. The F1-score of the DL model was higher than that of the radiologists, and with DL model assistance, the radiologists achieved higher F1-scores. On the external test set, the F1-score of the DL model for FP (0.848; 95% CI: 0.824-0.873) was higher than that of the radiologists (0.541; 95% CI: 0.507-0.575), as was its precision for the other three pneumonia conditions (all P values <0.001). With DL model assistance, the radiologists' F1-score for FP (0.778; 95% CI: 0.750-0.807) was higher than that achieved without assistance (0.541; 95% CI: 0.507-0.575), as was their precision for the other three pneumonia conditions (all P values <0.001). Conclusions: The DL approach can effectively classify pneumonia and can help improve radiologists' performance, supporting the full integration of DL results into clinicians' routine workflow.

15.
Neuro Oncol ; 25(3): 544-556, 2023 03 14.
Article in English | MEDLINE | ID: mdl-35943350

ABSTRACT

BACKGROUND: Errors in computer-aided detection of brain metastases have seldom been evaluated. This study aimed to analyze the false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS: A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and three experienced radiologists read the MRI images without and then with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and the lesion features associated with FNs were analyzed for the BMDS and the readers using binary logistic regression. RESULTS: The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), but 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Lesions that were small, numerous, irregularly shaped, of lower signal intensity, or not located on the brain surface were associated with FNs for the readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. The FPs mainly resulted from small blood vessels, for both the BMDS and the readers. CONCLUSIONS: Despite the improvement in detection performance, attention should be paid to FPs and to small lesions with lower enhancement, especially for less-experienced radiologists.


Subjects
Brain Neoplasms , Humans , Prospective Studies , ROC Curve , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Computers , Sensitivity and Specificity
16.
Med Phys ; 39(10): 6372-87, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039673

ABSTRACT

PURPOSE: The segmentation of the prostate in CT images is essential to external beam radiotherapy, one of the major treatments for prostate cancer. During radiotherapy, the prostate is irradiated by high-energy X-rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in each new treatment image must be accurately localized; the effectiveness and efficiency of external beam radiotherapy therefore depend heavily on accurate localization of the prostate. However, due to the low contrast between the prostate and its surrounding tissues (e.g., bladder), unpredictable prostate motion, and large appearance variations across different treatment days, segmenting the prostate in CT images is challenging. In this paper, the authors present a novel classification-based segmentation method to address these problems. METHODS: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast in the prostate images. Then, based on the classification results, previously segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image, and a majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of traditional SRC in pixel-wise classification, especially for segmentation, the authors extend SRC in the following four aspects: (1) a discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class, so that the discriminant power of SRC is increased and SRC can be applied to large-scale pixel-wise classification; (2) the L1-regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result; (3) residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification; and (4) iterative SRC is proposed, using context information to iteratively refine the classification results. RESULTS: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with traditional SRC on the four proposed extensions. The experimental results show that the extended SRC obtains not only more accurate classification results but also a smoother and clearer prostate boundary than traditional SRC. In addition, comparison with five other state-of-the-art prostate segmentation methods indicates that our method achieves better performance than the methods under comparison. CONCLUSIONS: The authors have proposed a novel prostate segmentation method based on sparse representation based classification, which achieves highly accurate results for CT prostate segmentation.
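
The core SRC decision rule assigns a pixel to the class whose sub-dictionary reconstructs its feature vector with the smallest residual, here with an elastic-net coder in the spirit of extension (2). The sketch below uses scikit-learn's ElasticNet on synthetic dictionaries; the discriminant subdictionary learning, residue-based regression, and iterative refinement described in the paper are omitted.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
feat_dim, atoms_per_class = 40, 25

# Two class-specific sub-dictionaries (prostate vs background samples).
D = {c: rng.normal(size=(feat_dim, atoms_per_class)) for c in ("prostate", "background")}

def src_classify(x):
    """Classify a pixel feature vector by the smallest reconstruction residual."""
    residuals = {}
    for cls, D_cls in D.items():
        coder = ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=5000)
        coder.fit(D_cls, x)                       # elastic-net code over the class atoms
        residuals[cls] = np.linalg.norm(x - D_cls @ coder.coef_ - coder.intercept_)
    return min(residuals, key=residuals.get), residuals

# A test pixel synthesized from the prostate sub-dictionary plus noise.
w = rng.random(atoms_per_class)
x = D["prostate"] @ w + 0.05 * rng.normal(size=feat_dim)
label, res = src_classify(x)
print(label, {k: round(v, 2) for k, v in res.items()})
```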


Subjects
Image Processing, Computer-Assisted/methods , Prostate/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Artificial Intelligence , Discriminant Analysis , Humans , Linear Models , Male
17.
IEEE Trans Med Imaging ; 41(1): 88-102, 2022 01.
Article in English | MEDLINE | ID: mdl-34383647

ABSTRACT

Early and accurate severity assessment of Coronavirus Disease 2019 (COVID-19) from computed tomography (CT) images greatly helps in estimating intensive care unit events and in making clinical treatment-planning decisions. To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites. This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancies between sites, and the presence of heterogeneous features. In this paper, we propose a novel domain adaptation (DA) method with two components to address these problems. The first component is a stochastic class-balanced boosting sampling strategy that overcomes the imbalanced learning problem and improves classification performance on poorly predicted classes. The second component is a representation learning scheme that guarantees three properties: 1) domain transferability, via a prototype triplet loss; 2) discriminability, via a conditional maximum mean discrepancy loss; and 3) completeness, via a multi-view reconstruction loss. In particular, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) on a hypersphere manifold. Experiments on cross-site severity assessment of COVID-19 from CT images show that the proposed method effectively tackles the imbalanced learning problem and outperforms recent DA approaches.
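
One plausible reading of the class-balanced sampling component is simply drawing training batches with equal numbers of mild and severe cases regardless of their prevalence. A NumPy sketch under that assumption (the paper's boosting weights and the representation-learning losses are not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.array([0] * 900 + [1] * 100)          # mild vs severe: heavily imbalanced

def class_balanced_batch(labels, batch_size=32):
    """Sample a batch with (roughly) equal numbers of cases per class."""
    classes = np.unique(labels)
    per_class = batch_size // classes.size
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return idx

batch = class_balanced_batch(labels)
print("severe fraction in batch:", (labels[batch] == 1).mean())   # ~0.5 instead of 0.1
```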


Subjects
COVID-19 , Humans , SARS-CoV-2 , Tomography, X-Ray Computed
18.
Nat Commun ; 13(1): 6566, 2022 11 02.
Article in English | MEDLINE | ID: mdl-36323677

ABSTRACT

In radiotherapy for cancer patients, delineating organs-at-risk (OARs) and tumors is an indispensable process. It is, however, the most time-consuming step, as manual delineation by radiation oncologists is always required. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to enable automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks over a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; and 3) near real-time delineation in most tasks, in less than 2 s. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme and thus greatly shorten patient turnaround time.
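
A cascade coarse-to-fine segmentation can be pictured as: locate the organ with a cheap low-resolution pass, crop a margin around the detection, then segment only the crop at full resolution. The sketch below shows just the cropping logic, with a synthetic "coarse mask" standing in for the first-stage network output; it is not RTP-Net.

```python
import numpy as np

def bbox_from_coarse_mask(mask, margin=8):
    """Bounding box (with margin) around the foreground of a coarse mask."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# Stage 1 (coarse): pretend a low-resolution model found a rough organ mask.
volume = np.random.rand(128, 256, 256).astype(np.float32)
coarse_mask = np.zeros_like(volume, dtype=bool)
coarse_mask[50:70, 100:140, 120:160] = True

# Stage 2 (fine): crop the full-resolution volume around the coarse detection,
# so the fine segmentation model only runs on a much smaller sub-volume.
roi = bbox_from_coarse_mask(coarse_mask)
sub_volume = volume[roi]
print("full:", volume.shape, "-> cropped:", sub_volume.shape)
```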


Subjects
Deep Learning , Neoplasms , Humans , Tomography, X-Ray Computed , Organs at Risk , Neoplasms/radiotherapy , Image Processing, Computer-Assisted
19.
IEEE Trans Med Imaging ; 41(4): 771-781, 2022 04.
Article in English | MEDLINE | ID: mdl-34705640

ABSTRACT

Lung cancer is the leading cause of cancer deaths worldwide, and accurately diagnosing the malignancy of suspected lung nodules is of paramount clinical importance. However, to date, pathologically-proven lung nodule datasets remain limited and highly imbalanced between benign and malignant classes. In this study, we propose a Semi-supervised Deep Transfer Learning (SDTL) framework for benign-malignant pulmonary nodule diagnosis. First, we utilize a transfer learning strategy by adopting a pre-trained classification network that was built to differentiate pulmonary nodules from nodule-like tissues. Second, since the number of pathologically-proven samples is small, an iterated feature-matching-based semi-supervised method is proposed to take advantage of a large available dataset with no pathological results. Specifically, a similarity metric function is adopted in the network's semantic representation space to gradually include a small subset of samples with no pathological results and iteratively optimize the classification network. In this study, a total of 3,038 pulmonary nodules (from 2,853 subjects) with pathologically-proven benign or malignant labels and 14,735 unlabeled nodules (from 4,391 subjects) were retrospectively collected. Experimental results demonstrate that our proposed SDTL framework achieves superior diagnostic performance, with accuracy = 88.3% and AUC = 91.0% on the main dataset, and accuracy = 74.5% and AUC = 79.5% on the independent testing dataset. Furthermore, an ablation study shows that the use of transfer learning provides a 2% accuracy improvement, and the use of semi-supervised learning contributes a further 2.9% improvement. These results indicate that our proposed classification network could provide an effective diagnostic tool for suspected lung nodules and might have promising application in clinical practice.
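
The iterated feature-matching idea, admitting unlabeled nodules whose embeddings are most similar to already-labeled ones and pseudo-labeling them, can be sketched with cosine similarity in a feature space. The embeddings, the top-10% admission rule, and the nearest-neighbor labeling below are illustrative assumptions; the backbone network and its retraining step are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim = 64

# Stand-ins for network feature embeddings of labeled and unlabeled nodules.
X_lab = rng.normal(size=(300, feat_dim))
y_lab = (X_lab[:, 0] > 0).astype(int)                  # 1 = malignant, 0 = benign
X_unlab = rng.normal(size=(2000, feat_dim))

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

X_train, y_train = X_lab.copy(), y_lab.copy()
for it in range(3):                                    # iterations of feature matching
    sims = cosine_sim(X_unlab, X_train)                # (n_unlabeled, n_labeled)
    nearest = sims.argmax(axis=1)
    best = sims.max(axis=1)
    cutoff = np.quantile(best, 0.9)                    # admit only the closest 10%
    confident = best >= cutoff
    pseudo = y_train[nearest[confident]]               # copy label of the nearest match
    X_train = np.vstack([X_train, X_unlab[confident]])
    y_train = np.concatenate([y_train, pseudo])
    X_unlab = X_unlab[~confident]
    print(f"iteration {it}: added {confident.sum()} pseudo-labeled nodules")
```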


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Retrospective Studies , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Supervised Machine Learning , Tomography, X-Ray Computed/methods
20.
Front Oncol ; 12: 995870, 2022.
Article in English | MEDLINE | ID: mdl-36338695

ABSTRACT

Background: Different pathological subtypes of lung adenocarcinoma lead to different treatment decisions and prognoses, so it is clinically important to distinguish invasive lung adenocarcinoma from preinvasive adenocarcinoma (adenocarcinoma in situ and minimally invasive adenocarcinoma). This study investigates the performance of a deep learning approach based on high-resolution computed tomography (HRCT) images for classifying tumor invasiveness and compares it with currently available approaches. Methods: We used a deep learning approach based on 3D convolutional networks to automatically predict the invasiveness of pulmonary nodules. A total of 901 early-stage non-small cell lung cancer patients who underwent surgical treatment at Shanghai Chest Hospital between November 2015 and March 2017 were retrospectively included and randomly assigned to a training set (n=814) or testing set 1 (n=87). We subsequently included 116 patients who underwent surgical treatment and intraoperative frozen section between April 2019 and January 2020 to form testing set 2. We compared the performance of our deep learning approach in predicting tumor invasiveness with that of intraoperative frozen section analysis and human experts (radiologists and surgeons). Results: The deep learning approach yielded an area under the receiver operating characteristic curve (AUC) of 0.946 for distinguishing preinvasive from invasive lung adenocarcinoma in testing set 1, significantly higher than the AUCs of the human experts (P<0.05). In testing set 2, the deep learning approach distinguished invasive from preinvasive adenocarcinoma with an AUC of 0.862, higher than that of frozen section analysis (0.755, P=0.043), senior thoracic surgeons (0.720, P=0.006), radiologists (0.766, P>0.05), and junior thoracic surgeons (0.768, P>0.05). Conclusions: We developed a deep learning model that achieved performance comparable to intraoperative frozen section analysis in determining tumor invasiveness. The proposed method may contribute to clinical decisions about the extent of surgical resection.
