Results 1 - 20 of 104
1.
Quant Imaging Med Surg ; 13(12): 8641-8656, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106268

ABSTRACT

Background: Accurate diagnosis of pneumonia is vital for effective disease management and mortality reduction, but it can be easily confused with other conditions on chest computed tomography (CT) due to an overlap in imaging features. We aimed to develop and validate a deep learning (DL) model based on chest CT for accurate classification of viral pneumonia (VP), bacterial pneumonia (BP), fungal pneumonia (FP), pulmonary tuberculosis (PTB), and no pneumonia (NP) conditions. Methods: In total, 1,776 cases from five hospitals in different regions were retrospectively collected from September 2019 to June 2023. All cases were enrolled according to inclusion and exclusion criteria, and ultimately 1,611 cases were used to develop the DL model with 5-fold cross-validation, with 165 cases being used as the external test set. Five radiologists blindly reviewed the images from the internal and external test sets first without and then with DL model assistance. Precision, recall, F1-score, weighted F1-average, and area under the curve (AUC) were used to evaluate the model performance. Results: The F1-scores of the DL model on the internal and external test sets were, respectively, 0.947 [95% confidence interval (CI): 0.936-0.958] and 0.933 (95% CI: 0.916-0.950) for VP, 0.511 (95% CI: 0.487-0.536) and 0.591 (95% CI: 0.557-0.624) for BP, 0.842 (95% CI: 0.824-0.860) and 0.848 (95% CI: 0.824-0.873) for FP, 0.843 (95% CI: 0.826-0.861) and 0.795 (95% CI: 0.767-0.822) for PTB, and 0.975 (95% CI: 0.968-0.983) and 0.976 (95% CI: 0.965-0.986) for NP, with a weighted F1-average of 0.883 (95% CI: 0.867-0.898) and 0.846 (95% CI: 0.822-0.871), respectively. The model performed well and showed comparable performance in both the internal and external test sets. The F1-score of the DL model was higher than that of radiologists, and with DL model assistance, radiologists achieved a higher F1-score. 
On the external test set, the F1-score of the DL model for FP (0.848; 95% CI: 0.824-0.873) was higher than that of the radiologists (0.541; 95% CI: 0.507-0.575), as was its precision for the other three pneumonia conditions (all P values <0.001). With DL model assistance, the radiologists' F1-score for FP (0.778; 95% CI: 0.750-0.807) was higher than that achieved without assistance (0.541; 95% CI: 0.507-0.575), as was their precision for the other three pneumonia conditions (all P values <0.001). Conclusions: The DL approach can effectively classify pneumonia and can help improve radiologists' performance, supporting the full integration of DL results into clinicians' routine workflow.
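The per-class and weighted F1-scores reported above follow directly from one-vs-rest counts of true positives, false positives, and false negatives; a minimal sketch (the class labels mirror the study's five conditions but the data here are hypothetical):

```python
import numpy as np

def per_class_f1(y_true, y_pred, labels):
    """One-vs-rest F1 for each class label."""
    scores = {}
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

def weighted_f1(y_true, y_pred, labels):
    """F1 averaged with per-class support (case count) as weights."""
    f1 = per_class_f1(y_true, y_pred, labels)
    support = {c: np.sum(y_true == c) for c in labels}
    total = sum(support.values())
    return sum(f1[c] * support[c] for c in labels) / total
```

The same numbers are available from `sklearn.metrics.f1_score` with `average=None` or `average="weighted"`.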

2.
Neuro Oncol ; 25(3): 544-556, 2023 03 14.
Article in English | MEDLINE | ID: mdl-35943350

ABSTRACT

BACKGROUND: Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS: A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. Ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MR images without and with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS: The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001), 41% more FPs overall (0.17 vs 0.24, P < .001) but 125% more FPs for trainees (P < .001), and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and location away from the brain surface were associated with readers' FNs. Small, irregular, and necrotic lesions were more frequently found among the FNs of the BMDS. FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS: Despite the improvement in detection performance, radiologists, especially less-experienced ones, should pay attention to FPs and to small lesions with lower enhancement.
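The association analysis described above is a binary logistic regression of lesion features against miss status; a hedged sketch on synthetic data (the feature names, effect sizes, and simulated miss rates are illustrative, not taken from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
diameter = rng.uniform(1, 30, n)      # hypothetical lesion diameter, mm
intensity = rng.uniform(0, 1, n)      # hypothetical normalized signal intensity

# Simulate: small, low-intensity lesions are more likely to be missed (FN = 1)
logit = 2.5 - 0.3 * diameter - 1.5 * intensity
p_fn = 1 / (1 + np.exp(-logit))
missed = (rng.uniform(size=n) < p_fn).astype(int)

X = np.column_stack([diameter, intensity])
model = LogisticRegression().fit(X, missed)
odds_ratios = np.exp(model.coef_[0])  # OR < 1: larger values protect against a miss
```

An odds ratio below 1 for diameter reproduces the qualitative finding that small lesions drive FNs.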


Subjects
Brain Neoplasms, Humans, Prospective Studies, ROC Curve, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Computers, Sensitivity and Specificity
3.
IEEE Trans Med Imaging ; 42(2): 557-567, 2023 02.
Article in English | MEDLINE | ID: mdl-36459600

ABSTRACT

With rapid worldwide spread of Coronavirus Disease 2019 (COVID-19), jointly identifying severe COVID-19 cases from mild ones and predicting the conversion time (from mild to severe) is essential to optimize the workflow and reduce the clinician's workload. In this study, we propose a novel framework for COVID-19 diagnosis, termed as Structural Attention Graph Neural Network (SAGNN), which can combine the multi-source information including features extracted from chest CT, latent lung structural distribution, and non-imaging patient information to conduct diagnosis of COVID-19 severity and predict the conversion time from mild to severe. Specifically, we first construct a graph to incorporate structural information of the lung and adopt graph attention network to iteratively update representations of lung segments. To distinguish different infection degrees of left and right lungs, we further introduce a structural attention mechanism. Finally, we introduce demographic information and develop a multi-task learning framework to jointly perform both tasks of classification and regression. Experiments are conducted on a real dataset with 1687 chest CT scans, which includes 1328 mild cases and 359 severe cases. Experimental results show that our method achieves the best classification (e.g., 86.86% in terms of Area Under Curve) and regression (e.g., 0.58 in terms of Correlation Coefficient) performance, compared with other comparison methods.
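The core update the authors describe, refreshing each lung segment's representation from its neighbors via attention, can be sketched as a single generic GAT-style attention head in NumPy (this is a textbook graph-attention layer, not the authors' SAGNN implementation; the structural attention and multi-task heads are omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_attention_layer(H, adj, W, a):
    """One attention head: update each node from its neighbors.

    H   : (N, F) node features (e.g., per-lung-segment embeddings)
    adj : (N, N) binary adjacency (1 = edge)
    W   : (F, F') shared linear transform
    a   : (2*F',) attention vector scoring concatenated node pairs
    """
    Z = H @ W
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        # LeakyReLU attention logits over neighbors, as in standard GAT
        logits = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        logits = np.where(logits > 0, logits, 0.2 * logits)
        alpha = softmax(logits)            # attention weights over neighbors
        out[i] = alpha @ Z[nbrs]           # convex combination of neighbors
    return out
```

Stacking such layers and iterating is what "iteratively update representations of lung segments" amounts to.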


Subjects
COVID-19, Humans, COVID-19 Testing, Neural Networks, Computer, Lung/diagnostic imaging, Tomography, X-Ray Computed/methods
4.
Nat Commun ; 13(1): 6566, 2022 11 02.
Article in English | MEDLINE | ID: mdl-36323677

ABSTRACT

In radiotherapy for cancer patients, an indispensable process is to delineate organs-at-risk (OARs) and tumors. However, it is the most time-consuming step as manual delineation is always required from radiation oncologists. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to promote an automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascade coarse-to-fine segmentation, with adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) Extensively evaluates on 67 delineation tasks on a large-scale dataset of 28,581 cases; 2) Demonstrates comparable or superior accuracy with an average Dice of 0.95; 3) Achieves near real-time delineation in most tasks with <2 s. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.
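The average Dice of 0.95 quoted above is the standard overlap metric between a predicted and a reference binary mask; a minimal implementation:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks
    (1 = organ/tumor voxel); 1.0 for identical masks, 0.0 for disjoint."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```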


Subjects
Deep Learning, Neoplasms, Humans, Tomography, X-Ray Computed, Organs at Risk, Neoplasms/radiotherapy, Image Processing, Computer-Assisted
5.
Front Oncol ; 12: 995870, 2022.
Article in English | MEDLINE | ID: mdl-36338695

ABSTRACT

Background: Different pathological subtypes of lung adenocarcinoma lead to different treatment decisions and prognoses, and it is clinically important to distinguish invasive lung adenocarcinoma from preinvasive adenocarcinoma (adenocarcinoma in situ and minimally invasive adenocarcinoma). This study aims to investigate the performance of the deep learning approach based on high-resolution computed tomography (HRCT) images in the classification of tumor invasiveness and compare it with the performances of currently available approaches. Methods: In this study, we used a deep learning approach based on 3D convolutional networks to automatically predict the invasiveness of pulmonary nodules. A total of 901 early-stage non-small cell lung cancer patients who underwent surgical treatment at Shanghai Chest Hospital between November 2015 and March 2017 were retrospectively included and randomly assigned to a training set (n=814) or testing set 1 (n=87). We subsequently included 116 patients who underwent surgical treatment and intraoperative frozen section between April 2019 and January 2020 to form testing set 2. We compared the performance of our deep learning approach in predicting tumor invasiveness with that of intraoperative frozen section analysis and human experts (radiologists and surgeons). Results: The deep learning approach yielded an area under the receiver operating characteristic curve (AUC) of 0.946 for distinguishing preinvasive adenocarcinoma from invasive lung adenocarcinoma in testing set 1, which is significantly higher than the AUCs of human experts (P<0.05). In testing set 2, the deep learning approach distinguished invasive adenocarcinoma from preinvasive adenocarcinoma with an AUC of 0.862, which is higher than that of frozen section analysis (0.755, P=0.043), senior thoracic surgeons (0.720, P=0.006), radiologists (0.766, P>0.05) and junior thoracic surgeons (0.768, P>0.05).
Conclusions: We developed a deep learning model that achieved comparable performance to intraoperative frozen section analysis in determining tumor invasiveness. The proposed method may contribute to clinical decisions related to the extent of surgical resection.
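The AUC comparisons above rest on the rank interpretation of the ROC curve: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small dependency-free sketch of that Mann-Whitney formulation (not the study's statistical software, which would also supply the significance tests):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs in which the positive case receives the higher score."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```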

6.
BMC Med Imaging ; 22(1): 123, 2022 07 09.
Article in English | MEDLINE | ID: mdl-35810273

ABSTRACT

OBJECTIVES: Accurate contouring of the clinical target volume (CTV) is a key element of radiotherapy for cervical cancer. We validated a novel deep learning (DL)-based auto-segmentation algorithm for CTVs in cervical cancer, called the three-channel adaptive auto-segmentation network (TCAS). METHODS: A total of 107 cases were collected and contoured by senior radiation oncologists (ROs). Each case consisted of (1) a contrast-enhanced CT scan for positioning, (2) the related CTV, (3) multiple plain CT scans during treatment, and (4) the related CTV. After registration between (1) and (3) for the same patient, the aligned image and CTV were generated. In methods 1 and 2, rigid and deformable registration, respectively, were applied alone, and the aligned CTV was taken as the result. In methods 3 and 4, rigid and deformable registration, respectively, were followed by TCAS, and the result was generated by the DL-based method. RESULTS: From the 107 cases, 15 pairs were selected as the test set. The Dice similarity coefficient (DSC) of method 1 was 0.8155 ± 0.0368, and that of method 2 was 0.8277 ± 0.0315; the DSCs of methods 3 and 4 were 0.8914 ± 0.0294 and 0.8921 ± 0.0231, respectively. The mean surface distance and Hausdorff distance of methods 3 and 4 were markedly better than those of methods 1 and 2. CONCLUSIONS: The TCAS achieved accuracy comparable to manual delineation by senior ROs and was significantly better than direct registration.
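The mean surface distance and Hausdorff distance used alongside the DSC can be computed from two sets of contour points; a sketch in plain NumPy (a point-based approximation, whereas the study presumably extracted surfaces from voxel masks):

```python
import numpy as np

def surface_distances(pts_a, pts_b):
    """Mean surface distance (symmetric) and Hausdorff distance between two
    contours, each given as an (N, D) array of surface points."""
    # Pairwise Euclidean distances between all points of the two surfaces
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # nearest-b distance for each point of a
    b_to_a = d.min(axis=0)   # nearest-a distance for each point of b
    msd = (a_to_b.mean() + b_to_a.mean()) / 2.0
    hd = float(max(a_to_b.max(), b_to_a.max()))
    return msd, hd
```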


Subjects
Deep Learning, Uterine Cervical Neoplasms, Algorithms, Female, Humans, Image Processing, Computer-Assisted/methods, Radiotherapy Planning, Computer-Assisted/methods, Reactive Oxygen Species, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/radiotherapy
7.
Neuro Oncol ; 24(9): 1559-1570, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35100427

ABSTRACT

BACKGROUND: Accurate detection is essential for brain metastasis (BM) management, but manual identification is laborious. This study developed, validated, and evaluated a BM detection (BMD) system. METHODS: Five hundred seventy-three consecutive patients (10,448 lesions) with newly diagnosed BMs and 377 patients without BMs were retrospectively enrolled to develop a multi-scale cascaded convolutional network using 3D-enhanced T1-weighted MR images. BMD was validated using a prospective validation set comprising an internal set (46 patients with 349 lesions; 44 patients without BMs) and three external sets (102 patients with 717 lesions; 108 patients without BMs). The lesion-based detection sensitivity and the number of false positives (FPs) per patient were analyzed. The detection sensitivity and reading time of three trainees and three experienced radiologists from three hospitals were evaluated using the validation set. RESULTS: The detection sensitivity and FPs were 95.8% and 0.39 in the test set, 96.0% and 0.27 in the internal validation set, and ranged from 88.9% to 95.5% and 0.29 to 0.66 in the external sets. The BMD system achieved higher detection sensitivity (93.2% [95% CI, 91.6-94.7%]) than all radiologists without BMD (ranging from 68.5% [95% CI, 65.7-71.3%] to 80.4% [95% CI, 78.0-82.8%], all P < .001). Radiologist detection sensitivity improved with BMD, reaching 92.7% to 95.0%. The mean reading time was reduced by 47% for trainees and 32% for experienced radiologists assisted by BMD relative to that without BMD. CONCLUSIONS: BMD enables accurate BM detection. Reading with BMD improves radiologists' detection sensitivity and reduces their reading times.
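The two headline metrics here, lesion-based detection sensitivity and mean FPs per patient, reduce to simple counts once detections have been matched to ground-truth lesions; an illustrative helper (the per-patient tuple format is hypothetical):

```python
def detection_metrics(per_patient):
    """per_patient: list of (n_lesions, n_detected_true, n_false_alarms)
    tuples, one per patient, after matching detections to ground truth.
    Returns lesion-level sensitivity and mean false positives per patient."""
    lesions = sum(p[0] for p in per_patient)
    hits = sum(p[1] for p in per_patient)
    false_alarms = sum(p[2] for p in per_patient)
    sensitivity = hits / lesions if lesions else 0.0
    fp_per_patient = false_alarms / len(per_patient)
    return sensitivity, fp_per_patient
```

Patients without BMs contribute zero lesions but still count in the FP denominator, which is why the control groups matter.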


Subjects
Brain Neoplasms, Deep Learning, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/secondary, Humans, Magnetic Resonance Imaging/methods, Retrospective Studies
8.
IEEE Trans Med Imaging ; 41(1): 88-102, 2022 01.
Article in English | MEDLINE | ID: mdl-34383647

ABSTRACT

Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps in estimating intensive care unit events and making clinical treatment-planning decisions. To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites. This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features. In this paper, we propose a novel domain adaptation (DA) method with two components to address these problems. The first component is a stochastic class-balanced boosting sampling strategy that overcomes the imbalanced learning problem and improves the classification performance on poorly-predicted classes. The second component is a representation learning that guarantees three properties: 1) domain-transferability by prototype triplet loss, 2) discriminability by conditional maximum mean discrepancy loss, and 3) completeness by multi-view reconstruction loss. Particularly, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) in a hyper-sphere manifold. Experiments on cross-site severity assessment of COVID-19 from CT images show that the proposed method can effectively tackle the imbalanced learning problem and outperform recent DA approaches.
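The class-balanced sampling component can be sketched as drawing a class uniformly at random before drawing a sample from it, so that mild and severe cases appear equally often in a batch regardless of their prevalence (a generic sketch, not the paper's exact boosting sampler):

```python
import numpy as np

def class_balanced_batch(labels, batch_size, rng):
    """Draw a batch of indices in which each class is equally likely,
    oversampling the minority class relative to its data frequency."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    picked_classes = rng.choice(classes, size=batch_size)  # uniform over classes
    idx = [rng.choice(np.flatnonzero(labels == c)) for c in picked_classes]
    return np.array(idx)
```

With the 1,328 mild / 359 severe split described above, plain shuffling would give severe cases only ~21% of each batch; this sampler pushes that to ~50%.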


Subjects
COVID-19, Humans, SARS-CoV-2, Tomography, X-Ray Computed
9.
IEEE Trans Med Imaging ; 41(4): 771-781, 2022 04.
Article in English | MEDLINE | ID: mdl-34705640

ABSTRACT

Lung cancer is the leading cause of cancer deaths worldwide. Accurately diagnosing the malignancy of suspected lung nodules is of paramount clinical importance. However, to date, the pathologically-proven lung nodule dataset is largely limited and is highly imbalanced in benign and malignant distributions. In this study, we proposed a Semi-supervised Deep Transfer Learning (SDTL) framework for benign-malignant pulmonary nodule diagnosis. First, we utilize a transfer learning strategy by adopting a pre-trained classification network that is used to differentiate pulmonary nodules from nodule-like tissues. Second, since the number of samples with pathologically-proven labels is small, an iterated feature-matching-based semi-supervised method is proposed to take advantage of a large available dataset with no pathological results. Specifically, a similarity metric function is adopted in the network semantic representation space for gradually including a small subset of samples with no pathological results to iteratively optimize the classification network. In this study, a total of 3,038 pulmonary nodules (from 2,853 subjects) with pathologically-proven benign or malignant labels and 14,735 unlabeled nodules (from 4,391 subjects) were retrospectively collected. Experimental results demonstrate that our proposed SDTL framework achieves superior diagnosis performance, with accuracy = 88.3%, AUC = 91.0% in the main dataset, and accuracy = 74.5%, AUC = 79.5% in the independent testing dataset. Furthermore, an ablation study shows that the use of transfer learning provides a 2% accuracy improvement, and the use of semi-supervised learning further contributes a 2.9% accuracy improvement. These results indicate that our proposed classification network could provide an effective diagnostic tool for suspected lung nodules and might have a promising application in clinical practice.
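The iterated feature-matching step, assigning pseudo-labels to unlabeled nodules via similarity in the network's representation space and keeping only confident matches, might look like the following (the cosine metric and the threshold are assumptions; the paper's similarity function may differ):

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def select_confident_unlabeled(labeled_feats, labeled_y, unlabeled_feats, thresh):
    """Give each unlabeled sample the label of its most similar labeled
    sample (in feature space); keep only matches above the threshold.
    Returns a list of (unlabeled_index, pseudo_label) pairs."""
    picked = []
    for i, u in enumerate(unlabeled_feats):
        sims = [cosine_sim(u, f) for f in labeled_feats]
        j = int(np.argmax(sims))
        if sims[j] >= thresh:
            picked.append((i, labeled_y[j]))
    return picked
```

Iterating "select confident samples, retrain, re-extract features" is the loop the abstract describes.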


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Retrospective Studies, Solitary Pulmonary Nodule/diagnostic imaging, Solitary Pulmonary Nodule/pathology, Supervised Machine Learning, Tomography, X-Ray Computed/methods
10.
J Appl Clin Med Phys ; 23(2): e13470, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34807501

ABSTRACT

OBJECTIVES: Because radiotherapy is indispensable for treating cervical cancer, it is critical to accurately and efficiently delineate the radiation targets. We evaluated a deep learning (DL)-based auto-segmentation algorithm for automatic contouring of clinical target volumes (CTVs) in cervical cancer. METHODS: Computed tomography (CT) datasets from 535 patients with cervical cancer treated with definitive or postoperative radiotherapy were collected. A DL tool based on VB-Net was developed to delineate CTVs of the pelvic lymph drainage area (dCTV1) and parametrial area (dCTV2) in the definitive radiotherapy group; the training/validation/test numbers were 157/20/23. The CTV of the pelvic lymph drainage area (pCTV1) was delineated in the postoperative radiotherapy group; the training/validation/test numbers were 272/30/33. Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) were used to evaluate the contouring accuracy. Contouring times were recorded for efficiency comparison. RESULTS: The mean DSC, MSD, and HD values for our DL-based tool were 0.88/1.32 mm/21.60 mm for dCTV1, 0.70/2.42 mm/22.44 mm for dCTV2, and 0.86/1.15 mm/20.78 mm for pCTV1. Only minor modifications were needed for 63.5% of auto-segmentations to meet the clinical requirements. The contouring accuracy of the DL-based tool was comparable to that of senior radiation oncologists and was superior to that of junior/intermediate radiation oncologists. Additionally, DL assistance improved the performance of junior radiation oncologists for dCTV2 and pCTV1 contouring (mean DSC increases: 0.20 for dCTV2, 0.03 for pCTV1; mean contouring time decrease: 9.8 min for dCTV2, 28.9 min for pCTV1). CONCLUSIONS: DL-based auto-segmentation improves CTV contouring accuracy, reduces contouring time, and improves clinical efficiency for treating cervical cancer.


Subjects
Deep Learning, Uterine Cervical Neoplasms, Algorithms, Female, Humans, Organs at Risk, Radiotherapy Planning, Computer-Assisted, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/radiotherapy
11.
Pattern Recognit ; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34565913

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although the convolutional neural network has great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices that requires only scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistent techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, because the output of the mean teacher contains both correct and unreliable predictions, treating each prediction of the teacher model equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the teacher model's reliable predictions. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation when a transform is applied to an input image of the network. The proposed method was evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to fully supervised ones.
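Two ingredients of this recipe, the exponential-moving-average (EMA) teacher and the uncertainty-based filtering of its predictions, can be sketched as follows (the binary-entropy uncertainty over Monte-Carlo passes is an assumption; the paper's exact uncertainty measure and threshold may differ):

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student weights after each training step."""
    return decay * teacher_w + (1.0 - decay) * student_w

def reliable_mask(mc_probs, max_entropy=0.3):
    """Keep only pixels where the teacher's stochastic predictions agree.
    mc_probs: (T, H, W) foreground probabilities from T perturbed passes.
    Returns a boolean mask of low-uncertainty (reliable) pixels."""
    mean_p = mc_probs.mean(axis=0)
    eps = 1e-7
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps))
    return entropy < max_entropy
```

The student's consistency loss would then be applied only where `reliable_mask` is true.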

12.
Front Oncol ; 11: 700210, 2021.
Article in English | MEDLINE | ID: mdl-34604036

ABSTRACT

OBJECTIVE: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images. METHODS: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, a second dataset of 52 false-negative cases and 48 normal cases was collected. The average performance of three radiologists, unaided and then aided by the model, was compared. RESULTS: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, for the validation dataset. On the 52 missed esophageal cancer cases and 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent results of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the radiologists' results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. CONCLUSIONS: A deep learning-based model can effectively detect esophageal cancer in unenhanced chest CT scans and improve the incidental detection of esophageal cancer.

13.
Comput Med Imaging Graph ; 90: 101889, 2021 06.
Article in English | MEDLINE | ID: mdl-33848755

ABSTRACT

Screening of pulmonary nodules in computed tomography (CT) is crucial for early diagnosis and treatment of lung cancer. Although computer-aided diagnosis (CAD) systems have been designed to assist radiologists in detecting nodules, fully automated detection is still challenging due to variations in nodule size, shape, and density. In this paper, we first propose a fully automated nodule detection method using a cascade and heterogeneous neural network trained on chest CT images of 12,155 patients, then evaluate the performance using phantom (828 CT images) and clinical datasets (2,640 CT images) scanned with different imaging parameters. The nodule detection network employs two feature pyramid networks (FPNs) and a classification network (BasicNet). The first FPN is trained to achieve high sensitivity for nodule detection, and the second FPN refines the candidates for false positive reduction (FPR). A BasicNet is then combined with the second FPN to classify the candidates into nodules or non-nodules for the final refinement. This study investigates the detection performance for solid and ground-glass opacity (GGO) nodules in phantom and patient data scanned with different imaging parameters. The results show that the detection of solid nodules is robust to imaging parameters, while for GGO detection, the reconstruction methods "iDose4-YA" and "STD-YA" achieve better performance. For thin-slice images, higher performance is achieved across different nodule sizes with the reconstruction method "iDose4-STD". For 5 mm slice thickness, the best choice is the reconstruction method "iDose4-YA" for larger nodules (>5 mm). Overall, the reconstruction method "iDose4-YA" is suggested to achieve the best balanced results for both solid and GGO nodules.


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Diagnosis, Computer-Assisted, Humans, Lung Neoplasms/diagnostic imaging, Neural Networks, Computer, Phantoms, Imaging, Radiographic Image Interpretation, Computer-Assisted, Sensitivity and Specificity, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed
14.
BMC Med Imaging ; 21(1): 57, 2021 03 23.
Article in English | MEDLINE | ID: mdl-33757431

ABSTRACT

BACKGROUND: Spatial and temporal lung infection distributions of coronavirus disease 2019 (COVID-19) and their changes could reveal important patterns to better understand the disease and its time course. This paper presents a pipeline to analyze these patterns statistically by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results are then warped onto a pre-defined template CT image using deformable registration based on lung fields. The spatial distributions of infection regions, and their changes during the course of the disease, are then calculated at the voxel level. Visualization and quantitative comparison can be performed between different groups. We compared the distribution maps between COVID-19 and community-acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: Regarding infection segmentation performance, when the segmentation results are compared with the manually annotated ground truth, the average Dice is 91.6% ± 10.0%, which is close to the inter-rater difference between two radiologists (Dice 96.1% ± 3.5%). The distribution map of infection regions shows that high-probability regions lie in the peripheral subpleural area (up to 35.1% in probability). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show similar lesion distributions, but with smaller areas of significant difference in the right lower lobe, compared to critical COVID-19 (intensive care unit patients). Regarding the disease course, critical COVID-19 patients showed four subsequent patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with remarkable concurrent HU patterns for GGO and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and segmentation results onto a template, spatial distribution patterns of infections can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infection diseases and their changes during the disease course. Our results demonstrate different patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four subsequent disease course patterns of the severe COVID-19 patients studied, with remarkable concurrent HU patterns for GGO and consolidations.
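Once all segmentations are warped to the common template, the voxel-wise distribution map described above is just a per-voxel average across patients; a minimal sketch:

```python
import numpy as np

def infection_frequency_map(masks):
    """Voxel-wise infection probability across a cohort.
    masks: iterable of binary arrays already warped to a common template.
    Returns, per voxel, the fraction of patients infected there."""
    stacked = np.stack([np.asarray(m, dtype=float) for m in masks])
    return stacked.mean(axis=0)
```

Maps from two groups (e.g., COVID-19 vs CAP) can then be compared voxel by voxel, which is where the "up to 35.1% in probability" peripheral subpleural peak comes from.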


Subjects
COVID-19/diagnostic imaging, Community-Acquired Infections/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Algorithms, Disease Progression, Humans, Pneumonia/diagnostic imaging, Tomography, X-Ray Computed/methods
15.
Comput Med Imaging Graph ; 89: 101899, 2021 04.
Article in English | MEDLINE | ID: mdl-33761446

ABSTRACT

Computed tomography (CT) screening is essential for early lung cancer detection. With the development of artificial intelligence techniques, it is particularly desirable to explore the ability of current state-of-the-art methods and to analyze nodule features across a large population. In this paper, we present an artificial-intelligence lung image analysis system (ALIAS) for nodule detection and segmentation. After segmenting the nodules, their locations, sizes, and imaging features are computed at the population level to study the differences between benign and malignant nodules. The results provide a better understanding of the underlying imaging features and their value for early lung cancer diagnosis.


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Artificial Intelligence, Humans, Intelligence, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed
16.
Phys Med Biol ; 66(6): 065031, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33729998

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 CAP patients who underwent thin-section CT were enrolled. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4%, outperforming state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrate great generalizability. It is anticipated that our proposed framework could assist clinical decision making.
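The infection size-aware idea, routing each case to a classifier trained on cases with a similar infection size, can be sketched as follows (the size bins, features, and forest settings here are illustrative assumptions, not the paper's iSARF configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SizeAwareForest:
    """Hypothetical sketch of a size-aware ensemble: each case is routed,
    by its total infection size, to a random forest trained only on cases
    in the same size stratum."""

    def __init__(self, size_edges=(0.01, 0.1)):
        self.size_edges = size_edges   # bin boundaries on infection fraction
        self.forests = {}

    def _bin(self, size):
        return int(np.digitize(size, self.size_edges))

    def fit(self, X, sizes, y):
        bins = np.array([self._bin(s) for s in sizes])
        for b in np.unique(bins):
            m = bins == b
            self.forests[b] = RandomForestClassifier(
                n_estimators=50, random_state=0).fit(X[m], y[m])
        return self

    def predict(self, X, sizes):
        return np.array([
            self.forests[self._bin(s)].predict(x[None])[0]
            for x, s in zip(X, sizes)
        ])
```

Stratifying by size lets each forest specialize, since small and extensive infections present very different feature distributions.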


Subjects
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Pneumonia/diagnostic imaging , Tomography, X-Ray Computed , Adult , Aged , Diagnosis, Computer-Assisted , Diagnosis, Differential , Female , Humans , Image Processing, Computer-Assisted , Lung/diagnostic imaging , Lung/virology , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity
17.
Ann Transl Med ; 9(3): 216, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33708843

ABSTRACT

BACKGROUND: Assessing the severity of coronavirus disease 2019 (COVID-19) from clinical presentation alone has so far not met the urgent clinical need. We aimed to establish a deep learning (DL) model based on quantitative computed tomography (CT) and initial clinical features to predict the severity of COVID-19. METHODS: One hundred ninety-six hospitalized patients with confirmed COVID-19 were enrolled at our centre from January 20 to February 10, 2020, and were divided into severe and non-severe groups. The clinico-radiological data on admission were retrospectively collected and compared between the two groups. The optimal clinico-radiological features were determined by least absolute shrinkage and selection operator (LASSO) logistic regression analysis, and a predictive nomogram model was established with five-fold cross-validation. Receiver operating characteristic (ROC) analyses were conducted, and the areas under the curve (AUCs) of the nomogram model, the quantitative CT parameters that were significant in univariate analysis, and the pneumonia severity index (PSI) were compared. RESULTS: In comparison with the non-severe group (151 patients), the severe group (45 patients) had a higher PSI (P<0.001). DL-based quantitative CT indicated that the mass of infection (MOICT) and the percentage of infection (POICT) in the whole lung were higher in the severe group (both P<0.001). The nomogram model was based on MOICT and clinical features, including age, cluster of differentiation 4 (CD4)+ T cell count, serum lactate dehydrogenase (LDH), and C-reactive protein (CRP). The AUC values of the model, MOICT, POICT, and PSI scores were 0.900, 0.813, 0.805, and 0.751, respectively. The nomogram model performed significantly better than the other three parameters in predicting severity (P=0.003, P=0.001, and P<0.001, respectively).
CONCLUSIONS: Although quantitative CT parameters and the PSI predict the severity of COVID-19 well, the DL-based quantitative CT model is more efficient.
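The LASSO-based feature selection step in METHODS can be sketched with an L1-penalized logistic regression: features whose coefficients survive the penalty become candidates for the nomogram. This is a sketch on synthetic data; the regularization strength `C`, feature layout, and informative columns are assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 196                                   # cohort size from the abstract
X = rng.normal(size=(n, 8))               # stand-ins for age, CD4, LDH, CRP, MOICT, ...
logit = 1.5 * X[:, 0] - 1.0 * X[:, 4]     # only two truly informative columns
y = (logit + rng.normal(scale=0.5, size=n) > 0).astype(int)

# L1 penalty drives uninformative coefficients to exactly zero.
lasso = LogisticRegression(penalty="l1", C=0.3, solver="liblinear")
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])  # features surviving the penalty

auc = roc_auc_score(y, lasso.decision_function(X))
```

In practice the selected features would then be refit (often with cross-validation, as in the study) before being presented as a nomogram.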

18.
Med Image Anal ; 67: 101824, 2021 01.
Article in English | MEDLINE | ID: mdl-33091741

ABSTRACT

With the rapid worldwide spread of coronavirus disease (COVID-19), it is of great importance to diagnose COVID-19 early and to predict the time at which patients may convert to the severe stage, in order to design effective treatment plans and reduce clinicians' workloads. In this study, we propose a joint classification and regression method that determines whether a patient will develop severe symptoms later (formulated as a classification task) and, if so, predicts the conversion time (formulated as a regression task). To do this, the proposed method takes into account 1) a weight for each sample, to reduce the influence of outliers and address the class-imbalance problem, and 2) a weight for each feature via a sparsity regularization term, to remove redundant features of the high-dimensional data and learn the information shared across the two tasks, i.e., the classification and the regression. To our knowledge, this study is the first work to jointly predict the disease progression and the conversion time, which could help clinicians deal with potential severe cases in time or even save patients' lives. Experimental analysis was conducted on a real data set of 408 chest computed tomography (CT) scans from two hospitals. Results show that our method achieves the best classification (e.g., 85.91% accuracy) and regression (e.g., 0.462 correlation coefficient) performance, compared to all comparison methods. Moreover, our proposed method yields 76.97% accuracy for predicting the severe cases, a 0.524 correlation coefficient, and a 0.55-day error for the conversion time.
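The joint objective described above (per-sample weights plus a row-sparsity penalty shared across the classification and regression tasks) can be sketched as a generic multi-task least-squares formulation. This is an illustration under assumptions, not the paper's exact objective; all data, hyperparameters, and the optimizer are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 20
X = rng.normal(size=(n, d))
y_cls = (X[:, 0] > 0).astype(float)                     # severe / non-severe label
y_reg = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n)   # conversion time
Y = np.stack([y_cls, y_reg], axis=1)                    # two tasks, shared features
s = np.ones(n)                                          # per-sample weights (down-weight outliers)

def joint_loss(W, lam=0.1):
    """Weighted squared loss over both tasks + L2,1 row-sparsity penalty."""
    resid = X @ W - Y
    data_term = np.sum(s[:, None] * resid ** 2)
    l21 = np.sum(np.linalg.norm(W, axis=1))  # a row is zeroed for both tasks at once
    return data_term + lam * l21

# A few gradient steps on the smooth part, plus soft row shrinkage for the penalty.
W = np.zeros((d, 2))
lr = 1e-3
for _ in range(200):
    grad = 2 * X.T @ (s[:, None] * (X @ W - Y))
    W -= lr * grad
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W *= np.maximum(0, 1 - lr * 0.1 / np.maximum(norms, 1e-12))

final = joint_loss(W)
```

The L2,1 penalty is what couples the two tasks: it shrinks whole rows of `W`, so a feature is either kept or discarded for classification and regression jointly.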


Subjects
COVID-19/classification , COVID-19/diagnostic imaging , Pneumonia, Viral/classification , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/methods , Disease Progression , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Radiographic Image Interpretation, Computer-Assisted , Radiography, Thoracic , SARS-CoV-2 , Severity of Illness Index , Time Factors
19.
Med Phys ; 48(4): 1633-1645, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33225476

ABSTRACT

OBJECTIVE: Computed tomography (CT) provides rich diagnostic and severity information on COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions as well as the entire lungs from chest CT scans. METHODS: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system was trained on CT scans from 249 COVID-19 patients and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy was also adopted to help radiologists refine the automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics were calculated between automatic and manual segmentations on the validation set: the Dice similarity coefficient, volume difference, and percentage of infection (POI). Then, a clinical study on severity prediction is reported based on the quantitative infection assessment. RESULTS: The proposed DL-based segmentation system yielded a Dice similarity coefficient of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with fully manual delineation, which often takes hours per case, the proposed HIMI training strategy dramatically reduced the delineation time to 4 min after three iterations of model updating.
In addition, the best severity-prediction accuracy was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments was used as the feature set, indicating the potential clinical application of our quantification technique for severity prediction. CONCLUSIONS: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in automatic infection delineation and severity prediction.
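The two headline metrics above, the Dice similarity coefficient and the percentage of infection (POI), have simple definitions on binary masks. The sketch below computes both on toy 2D arrays; the function names and example masks are illustrative only.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def percentage_of_infection(infection_mask, lung_mask):
    """POI: infected voxels as a percentage of lung voxels."""
    return 100.0 * infection_mask.sum() / lung_mask.sum()

lung = np.ones((4, 4), dtype=bool)          # toy lung field: the whole grid
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True                           # manual segmentation: 4 voxels
pred = np.zeros((4, 4), dtype=bool)
pred[:2, :3] = True                         # automatic segmentation: 6 voxels, 4 overlap

d = dice(pred, gt)                          # 2*4 / (6+4) = 0.8
poi = percentage_of_infection(pred, lung)   # 100 * 6/16 = 37.5
```

On real data the same functions would be applied slice-stack-wise to 3D volumes; only the array shapes change.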


Subjects
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted , Lung/diagnostic imaging , Tomography, X-Ray Computed , Humans
20.
Med Image Anal ; 68: 101910, 2021 02.
Article in English | MEDLINE | ID: mdl-33285483

ABSTRACT

Coronavirus disease 2019 (COVID-19) has become the largest global public health crisis since its outbreak in early 2020. CT imaging has been used as a complementary tool to assist early screening, especially for the rapid identification of COVID-19 cases among community-acquired pneumonia (CAP) cases. The main challenge in early screening is how to model the confusing cases in the COVID-19 and CAP groups, which have very similar clinical manifestations and imaging features. To tackle this challenge, we propose an Uncertainty Vertex-weighted Hypergraph Learning (UVHL) method to identify COVID-19 from CAP using CT images. In particular, multiple types of features (including regional features and radiomics features) are first extracted from the CT image of each case. Then, the relationships among cases are formulated by a hypergraph structure, with each case represented as a vertex. The uncertainty of each vertex is further computed with an uncertainty score measurement and used as a weight in the hypergraph. Finally, a learning process on the vertex-weighted hypergraph is used to predict whether a new test case is COVID-19 or not. Experiments on a large multi-center pneumonia dataset, consisting of 2,148 COVID-19 cases and 1,182 CAP cases from five hospitals, were conducted to evaluate the prediction accuracy of the proposed method. Results demonstrate the effectiveness and robustness of our proposed method for the identification of COVID-19 in comparison to state-of-the-art methods.
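A very small sketch of vertex-weighted hypergraph label propagation in the spirit of UVHL: cases are vertices, hyperedges group similar cases, and a low vertex weight down-weights uncertain cases during propagation. The incidence matrix, weights, and propagation rule below are toy illustrations, not the paper's exact formulation.

```python
import numpy as np

# 5 cases, 2 hyperedges (incidence matrix H: vertex x edge).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
u = np.array([1.0, 1.0, 0.5, 0.3, 0.3])    # vertex weights (low = uncertain)
y = np.array([1.0, 1.0, 0.0, -1.0, -1.0])  # +1 COVID-19, -1 CAP, 0 unknown

U = np.diag(u)
De = np.diag(H.sum(axis=0))                # edge degrees

# A common hypergraph propagation operator: A = U H De^-1 H^T U,
# then row-normalized so each vertex averages over its weighted neighbors.
A = U @ H @ np.linalg.inv(De) @ H.T @ U
A = A / A.sum(axis=1, keepdims=True)

# A few steps of label spreading, anchored to the known labels.
f = y.copy()
for _ in range(20):
    f = 0.5 * A @ f + 0.5 * y

prediction = 1 if f[2] > 0 else -1         # label for the unknown vertex
```

Here the unknown vertex sits in both hyperedges, but its CAP-side neighbors carry low weights, so propagation pulls it toward the COVID-19 label; with confident neighbors on both sides the outcome would be ambiguous.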


Subjects
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Machine Learning , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed , China , Community-Acquired Infections/virology , Datasets as Topic , Diagnosis, Differential , Humans , Pneumonia, Viral/virology , SARS-CoV-2