Results 1 - 20 of 105
1.
Neuro Oncol ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38991556

ABSTRACT

BACKGROUND: Artificial intelligence has been proposed for brain metastasis (BM) segmentation but it has not been fully clinically validated. The aim of this study was to develop and evaluate a system for BM segmentation. METHODS: A deep-learning-based BM segmentation system (BMSS) was developed using contrast-enhanced MR images from 488 patients with 10,338 brain metastases. A randomized crossover, multi-reader study was then conducted to evaluate the performance of the BMSS for BM segmentation using data prospectively collected from 50 patients with 203 metastases at five centers. Five radiology residents and five attending radiologists were randomly assigned to contour the same prospective set in assisted and unassisted modes. Aided and unaided Dice similarity coefficients (DSCs) and contouring times per lesion were compared. RESULTS: The BMSS alone yielded a median DSC of 0.91 (95% confidence interval, 0.90-0.92) in the multi-center set and showed comparable performance between the internal and external sets (p = 0.67). With BMSS assistance, the readers increased the median DSC from 0.87 (0.87-0.88) to 0.92 (0.92-0.92) (p < 0.001) with a median time saving of 42% (40-45%) per lesion. Resident readers showed a greater improvement than attending readers in contouring accuracy (improved median DSC, 0.05 [0.05-0.05] vs. 0.03 [0.03-0.03]; p < 0.001), but a similar time reduction (reduced median time, 44% [40-47%] vs. 40% [37-44%]; p = 0.92) with BMSS assistance. CONCLUSIONS: The BMSS can be optimally applied to improve the efficiency of brain metastasis delineation in clinical practice.
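The median DSC figures above come from the standard Dice overlap between predicted and reference masks; a minimal sketch (the toy masks are illustrative, not the BMSS pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2D example: two overlapping square "lesions"
a = np.zeros((10, 10)); a[2:6, 2:6] = 1   # 16 voxels
b = np.zeros((10, 10)); b[3:7, 3:7] = 1   # 16 voxels, 9 overlapping
print(round(dice_coefficient(a, b), 4))   # 2*9/32 = 0.5625
```

In practice the per-lesion DSCs would be computed on cropped lesion volumes and the median taken across lesions.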

2.
Quant Imaging Med Surg ; 13(12): 8641-8656, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106268

ABSTRACT

Background: Accurate diagnosis of pneumonia is vital for effective disease management and mortality reduction, but it can be easily confused with other conditions on chest computed tomography (CT) due to an overlap in imaging features. We aimed to develop and validate a deep learning (DL) model based on chest CT for accurate classification of viral pneumonia (VP), bacterial pneumonia (BP), fungal pneumonia (FP), pulmonary tuberculosis (PTB), and no pneumonia (NP) conditions. Methods: In total, 1,776 cases from five hospitals in different regions were retrospectively collected from September 2019 to June 2023. All cases were enrolled according to inclusion and exclusion criteria, and ultimately 1,611 cases were used to develop the DL model with 5-fold cross-validation, with 165 cases being used as the external test set. Five radiologists blindly reviewed the images from the internal and external test sets first without and then with DL model assistance. Precision, recall, F1-score, weighted F1-average, and area under the curve (AUC) were used to evaluate the model performance. Results: The F1-scores of the DL model on the internal and external test sets were, respectively, 0.947 [95% confidence interval (CI): 0.936-0.958] and 0.933 (95% CI: 0.916-0.950) for VP, 0.511 (95% CI: 0.487-0.536) and 0.591 (95% CI: 0.557-0.624) for BP, 0.842 (95% CI: 0.824-0.860) and 0.848 (95% CI: 0.824-0.873) for FP, 0.843 (95% CI: 0.826-0.861) and 0.795 (95% CI: 0.767-0.822) for PTB, and 0.975 (95% CI: 0.968-0.983) and 0.976 (95% CI: 0.965-0.986) for NP, with a weighted F1-average of 0.883 (95% CI: 0.867-0.898) and 0.846 (95% CI: 0.822-0.871), respectively. The model performed well and showed comparable performance in both the internal and external test sets. The F1-score of the DL model was higher than that of radiologists, and with DL model assistance, radiologists achieved a higher F1-score. 
On the external test set, the F1-score of the DL model for FP (0.848; 95% CI: 0.824-0.873) was higher than that of the radiologists (0.541; 95% CI: 0.507-0.575), as was its precision for the other three pneumonia conditions (all P values <0.001). With DL model assistance, the radiologists' F1-score for FP improved from 0.541 (95% CI: 0.507-0.575) to 0.778 (95% CI: 0.750-0.807), as did their precision for the other three pneumonia conditions (all P values <0.001). Conclusions: The DL approach can effectively classify pneumonia and can help improve radiologists' performance, supporting the full integration of DL results into the routine workflow of clinicians.
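The weighted F1-average reported above follows the usual support-weighted definition; a minimal sketch (VP/BP/FP/PTB/NP are the paper's class names, the toy predictions are invented):

```python
from collections import Counter

def per_class_f1(y_true, y_pred, label):
    """F1 for one class from its one-vs-rest confusion counts."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_f1(y_true, y_pred):
    """Average of per-class F1 weighted by class support."""
    support = Counter(y_true)
    n = len(y_true)
    return sum(per_class_f1(y_true, y_pred, c) * cnt / n for c, cnt in support.items())

y_true = ["VP", "VP", "BP", "FP", "PTB", "NP", "NP", "BP"]
y_pred = ["VP", "BP", "BP", "FP", "PTB", "NP", "NP", "VP"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.75
```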

3.
IEEE Trans Med Imaging ; 42(2): 557-567, 2023 02.
Article in English | MEDLINE | ID: mdl-36459600

ABSTRACT

With the rapid worldwide spread of Coronavirus Disease 2019 (COVID-19), jointly distinguishing severe COVID-19 cases from mild ones and predicting the conversion time (from mild to severe) is essential to optimize the workflow and reduce clinicians' workload. In this study, we propose a novel framework for COVID-19 diagnosis, termed Structural Attention Graph Neural Network (SAGNN), which combines multi-source information, including features extracted from chest CT, latent lung structural distribution, and non-imaging patient information, to diagnose COVID-19 severity and predict the conversion time from mild to severe. Specifically, we first construct a graph to incorporate structural information of the lung and adopt a graph attention network to iteratively update representations of lung segments. To distinguish different infection degrees of the left and right lungs, we further introduce a structural attention mechanism. Finally, we introduce demographic information and develop a multi-task learning framework to jointly perform both tasks of classification and regression. Experiments are conducted on a real dataset of 1,687 chest CT scans, including 1,328 mild cases and 359 severe cases. Experimental results show that our method achieves the best classification (e.g., 86.86% in terms of Area Under Curve) and regression (e.g., 0.58 in terms of Correlation Coefficient) performance compared with competing methods.
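The graph-attention update at the core of such a model can be sketched as a single attention layer over lung-segment nodes; everything below (the ring adjacency, dimensions, and LeakyReLU slope of 0.2) is an illustrative assumption, not the SAGNN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_layer(h, adj, W, a):
    """One graph-attention update: pairwise scores from concatenated
    projected node features, masked softmax over neighbors, then
    attention-weighted aggregation."""
    z = h @ W                                   # (N, d') projected features
    n = z.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])  # a^T [z_i || z_j]
            e[i, j] = s if s > 0 else 0.2 * s     # LeakyReLU(0.2)
    e = np.where(adj > 0, e, -1e9)              # mask non-neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # row-stochastic attention
    return alpha @ z                            # updated node representations

# Toy "lung-segment" graph: 5 nodes in a ring, with self-loops
N, d, d_out = 5, 4, 3
adj = np.eye(N) + np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
h = rng.normal(size=(N, d))
out = gat_layer(h, adj, rng.normal(size=(d, d_out)), rng.normal(size=2 * d_out))
print(out.shape)  # (5, 3)
```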


Subject(s)
COVID-19 ; Humans ; COVID-19 Testing ; Neural Networks, Computer ; Lung/diagnostic imaging ; Tomography, X-Ray Computed/methods
4.
Neuro Oncol ; 25(3): 544-556, 2023 03 14.
Article in English | MEDLINE | ID: mdl-35943350

ABSTRACT

BACKGROUND: Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS: A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and three experienced radiologists read the MRI images without and then with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS: The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001), rising to 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Small size, greater lesion number, irregular shape, lower signal intensity, and nonbrain-surface location were associated with readers' FNs. Small, irregular, and necrotic lesions were more frequently found among the BMDS's FNs. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS: Despite the improvement in detection performance, radiologists, especially less-experienced ones, should remain alert to FPs and to small lesions with lower enhancement.


Subject(s)
Brain Neoplasms ; Humans ; Prospective Studies ; ROC Curve ; Brain Neoplasms/diagnostic imaging ; Magnetic Resonance Imaging/methods ; Computers ; Sensitivity and Specificity
5.
Nat Commun ; 13(1): 6566, 2022 11 02.
Article in English | MEDLINE | ID: mdl-36323677

ABSTRACT

In radiotherapy for cancer patients, an indispensable process is to delineate organs-at-risk (OARs) and tumors. However, it is the most time-consuming step, as manual delineation is always required from radiation oncologists. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to enable automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks in a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme and thus greatly shorten the turnaround time for patients.
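The cascade coarse-to-fine strategy can be sketched as: locate the organ with a cheap coarse pass, crop a margin around it, and re-segment the cropped ROI at full resolution. Simple intensity thresholds stand in for the two networks here; none of this is the RTP-Net implementation:

```python
import numpy as np

def coarse_to_fine(volume, coarse_seg_fn, fine_seg_fn, margin=2):
    """Cascade segmentation: a coarse pass localizes the target, a fine
    pass re-segments only the cropped ROI."""
    coarse = coarse_seg_fn(volume)                      # cheap coarse mask
    if not coarse.any():
        return np.zeros_like(volume, dtype=bool)
    idx = np.argwhere(coarse)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    full = np.zeros_like(volume, dtype=bool)
    full[sl] = fine_seg_fn(volume[sl])                  # refine inside ROI only
    return full

# Toy volume with a bright 4x4x4 "organ"; thresholds play both networks
vol = np.zeros((16, 16, 16)); vol[5:9, 5:9, 5:9] = 100.0
mask = coarse_to_fine(vol, lambda v: v > 50, lambda v: v > 50)
print(int(mask.sum()))  # 64
```

The efficiency gain comes from running the expensive fine model on a small crop instead of the whole body.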


Subject(s)
Deep Learning ; Neoplasms ; Humans ; Tomography, X-Ray Computed ; Organs at Risk ; Neoplasms/radiotherapy ; Image Processing, Computer-Assisted
6.
Front Oncol ; 12: 995870, 2022.
Article in English | MEDLINE | ID: mdl-36338695

ABSTRACT

Background: Different pathological subtypes of lung adenocarcinoma lead to different treatment decisions and prognoses, and it is clinically important to distinguish invasive lung adenocarcinoma from preinvasive adenocarcinoma (adenocarcinoma in situ and minimally invasive adenocarcinoma). This study aims to investigate the performance of a deep learning approach based on high-resolution computed tomography (HRCT) images in the classification of tumor invasiveness and compare it with the performance of currently available approaches. Methods: In this study, we used a deep learning approach based on 3D convolutional networks to automatically predict the invasiveness of pulmonary nodules. A total of 901 early-stage non-small cell lung cancer patients who underwent surgical treatment at Shanghai Chest Hospital between November 2015 and March 2017 were retrospectively included and randomly assigned to a training set (n=814) or testing set 1 (n=87). We subsequently included 116 patients who underwent surgical treatment and intraoperative frozen section between April 2019 and January 2020 to form testing set 2. We compared the performance of our deep learning approach in predicting tumor invasiveness with that of intraoperative frozen section analysis and human experts (radiologists and surgeons). Results: The deep learning approach yielded an area under the receiver operating characteristic curve (AUC) of 0.946 for distinguishing preinvasive adenocarcinoma from invasive lung adenocarcinoma in testing set 1, significantly higher than the AUCs of the human experts (P<0.05). In testing set 2, the deep learning approach distinguished invasive adenocarcinoma from preinvasive adenocarcinoma with an AUC of 0.862, higher than that of frozen section analysis (0.755, P=0.043), senior thoracic surgeons (0.720, P=0.006), radiologists (0.766, P>0.05), and junior thoracic surgeons (0.768, P>0.05).
Conclusions: We developed a deep learning model that achieved comparable performance to intraoperative frozen section analysis in determining tumor invasiveness. The proposed method may contribute to clinical decisions related to the extent of surgical resection.
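The AUC comparisons above reduce to a rank statistic: the probability that a randomly chosen invasive case is scored above a randomly chosen preinvasive one. A minimal sketch with invented scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

invasive = [0.9, 0.8, 0.7, 0.6]    # toy model scores, invasive cases
preinvasive = [0.5, 0.4, 0.65]     # toy model scores, preinvasive cases
print(round(roc_auc(invasive, preinvasive), 3))  # 11/12 pairs correct -> 0.917
```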

7.
BMC Med Imaging ; 22(1): 123, 2022 07 09.
Article in English | MEDLINE | ID: mdl-35810273

ABSTRACT

OBJECTIVES: Accurate contouring of the clinical target volume (CTV) is a key element of radiotherapy in cervical cancer. We validated a novel deep learning (DL)-based auto-segmentation algorithm for CTVs in cervical cancer, called the three-channel adaptive auto-segmentation network (TCAS). METHODS: A total of 107 cases were collected and contoured by senior radiation oncologists (ROs). Each case consisted of the following: (1) a contrast-enhanced CT scan for positioning, (2) the related CTV, (3) multiple plain CT scans during treatment, and (4) the related CTV. After registration between (1) and (3) for the same patient, the aligned image and CTV were generated. Method 1 used rigid registration and method 2 used deformable registration, with the aligned CTV taken directly as the result. Method 3 combined rigid registration with the TCAS and method 4 combined deformable registration with the TCAS, with the result generated by the DL-based network. RESULTS: From the 107 cases, 15 pairs were selected as the test set. The Dice similarity coefficient (DSC) of method 1 was 0.8155 ± 0.0368, and that of method 2 was 0.8277 ± 0.0315; the DSCs of methods 3 and 4 were 0.8914 ± 0.0294 and 0.8921 ± 0.0231, respectively. The mean surface distance and Hausdorff distance of methods 3 and 4 were markedly better than those of methods 1 and 2. CONCLUSIONS: The TCAS achieved accuracy comparable to manual delineation by senior ROs and was significantly better than direct registration.
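The mean surface distance and Hausdorff distance used alongside the DSC can be sketched for contours represented as point sets; the shifted-line example below is illustrative only:

```python
import numpy as np

def surface_distances(a_pts, b_pts):
    """Mean surface distance (symmetric average of nearest-neighbor
    distances) and Hausdorff distance (worst-case nearest-neighbor
    distance) between two contours given as (N, 2) and (M, 2) points."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)          # nearest-B distance for each A point
    b_to_a = d.min(axis=0)          # nearest-A distance for each B point
    msd = (a_to_b.mean() + b_to_a.mean()) / 2.0
    hd = max(a_to_b.max(), b_to_a.max())
    return msd, hd

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])        # same contour shifted by 1 mm
msd, hd = surface_distances(a, b)
print(msd, hd)  # 1.0 1.0
```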


Subject(s)
Deep Learning ; Uterine Cervical Neoplasms ; Algorithms ; Female ; Humans ; Image Processing, Computer-Assisted/methods ; Radiotherapy Planning, Computer-Assisted/methods ; Reactive Oxygen Species ; Uterine Cervical Neoplasms/diagnostic imaging ; Uterine Cervical Neoplasms/radiotherapy
8.
Neuro Oncol ; 24(9): 1559-1570, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35100427

ABSTRACT

BACKGROUND: Accurate detection is essential for brain metastasis (BM) management, but manual identification is laborious. This study developed, validated, and evaluated a BM detection (BMD) system. METHODS: Five hundred seventy-three consecutive patients (10 448 lesions) with newly diagnosed BMs and 377 patients without BMs were retrospectively enrolled to develop a multi-scale cascaded convolutional network using 3D-enhanced T1-weighted MR images. BMD was validated using a prospective validation set comprising an internal set (46 patients with 349 lesions; 44 patients without BMs) and three external sets (102 patients with 717 lesions; 108 patients without BMs). The lesion-based detection sensitivity and the number of false positives (FPs) per patient were analyzed. The detection sensitivity and reading time of three trainees and three experienced radiologists from three hospitals were evaluated using the validation set. RESULTS: The detection sensitivity and FPs were 95.8% and 0.39 in the test set, 96.0% and 0.27 in the internal validation set, and ranged from 88.9% to 95.5% and 0.29 to 0.66 in the external sets. The BMD system achieved higher detection sensitivity (93.2% [95% CI, 91.6-94.7%]) than all radiologists without BMD (ranging from 68.5% [95% CI, 65.7-71.3%] to 80.4% [95% CI, 78.0-82.8%], all P < .001). Radiologist detection sensitivity improved with BMD, reaching 92.7% to 95.0%. The mean reading time was reduced by 47% for trainees and 32% for experienced radiologists assisted by BMD relative to that without BMD. CONCLUSIONS: BMD enables accurate BM detection. Reading with BMD improves radiologists' detection sensitivity and reduces their reading times.
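The lesion-based detection sensitivity and FPs per patient above require matching predicted lesions to ground truth; a minimal greedy centroid-matching sketch (the 5 mm tolerance is an assumption, not the study's matching criterion):

```python
import math

def match_lesions(gt_centers, pred_centers, tol=5.0):
    """Greedy centroid matching: each ground-truth lesion is counted as
    detected if an unused prediction lies within `tol` mm; unmatched
    predictions are false positives."""
    used = set()
    tp = 0
    for g in gt_centers:
        best, best_d = None, tol
        for i, p in enumerate(pred_centers):
            if i in used:
                continue
            d = math.dist(g, p)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            tp += 1
    fp = len(pred_centers) - len(used)
    return tp, fp

gt = [(10, 10, 10), (40, 40, 40)]           # two true lesions (mm coords)
pred = [(11, 10, 10), (80, 80, 80)]         # one hit, one false alarm
tp, fp = match_lesions(gt, pred)
print(tp, fp)  # 1 1
```

Sensitivity is then `tp / len(gt)` per patient, and `fp` is the per-patient false-positive count.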


Subject(s)
Brain Neoplasms ; Deep Learning ; Brain Neoplasms/diagnostic imaging ; Brain Neoplasms/secondary ; Humans ; Magnetic Resonance Imaging/methods ; Retrospective Studies
9.
IEEE Trans Med Imaging ; 41(4): 771-781, 2022 04.
Article in English | MEDLINE | ID: mdl-34705640

ABSTRACT

Lung cancer is the leading cause of cancer deaths worldwide. Accurately diagnosing the malignancy of suspected lung nodules is of paramount clinical importance. However, to date, pathologically proven lung nodule datasets remain limited and are highly imbalanced between benign and malignant cases. In this study, we propose a Semi-supervised Deep Transfer Learning (SDTL) framework for benign-malignant pulmonary nodule diagnosis. First, we utilize a transfer learning strategy by adopting a pre-trained classification network that was used to differentiate pulmonary nodules from nodule-like tissues. Second, since the number of pathologically proven samples is small, an iterated feature-matching-based semi-supervised method is proposed to take advantage of a large available dataset with no pathological results. Specifically, a similarity metric function is adopted in the network's semantic representation space to gradually include a small subset of samples with no pathological results and iteratively optimize the classification network. In this study, a total of 3,038 pulmonary nodules (from 2,853 subjects) with pathologically proven benign or malignant labels and 14,735 unlabeled nodules (from 4,391 subjects) were retrospectively collected. Experimental results demonstrate that our proposed SDTL framework achieves superior diagnosis performance, with accuracy = 88.3% and AUC = 91.0% in the main dataset, and accuracy = 74.5% and AUC = 79.5% in the independent testing dataset. Furthermore, an ablation study shows that the use of transfer learning provides a 2% accuracy improvement, and the use of semi-supervised learning contributes a further 2.9% accuracy improvement. These results suggest that our proposed classification network could provide an effective diagnostic tool for suspected lung nodules and might have a promising application in clinical practice.
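One iteration of the feature-matching semi-supervised step can be sketched as similarity-based pseudo-labeling in the representation space; the 2-D features, toy clusters, and 0.95 cosine threshold below are illustrative assumptions, not the SDTL specifics:

```python
import numpy as np

def expand_training_set(feat_lab, labels, feat_unlab, sim_thresh=0.95):
    """One feature-matching iteration: unlabeled nodules whose feature
    vector is cosine-similar enough to a class centroid receive that
    class as a pseudo-label; the rest are deferred."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    centroids = np.stack([feat_lab[labels == c].mean(axis=0) for c in (0, 1)])
    sims = unit(feat_unlab) @ unit(centroids).T       # (U, 2) cosine sims
    best = sims.argmax(axis=1)
    keep = sims.max(axis=1) >= sim_thresh             # confident samples only
    return feat_unlab[keep], best[keep]

rng = np.random.default_rng(1)
benign = rng.normal([1, 0], 0.05, size=(20, 2))       # labeled class 0
malig = rng.normal([0, 1], 0.05, size=(20, 2))        # labeled class 1
feat_lab = np.vstack([benign, malig])
labels = np.array([0] * 20 + [1] * 20)
# 5 unlabeled nodules near class 0, 5 far from both classes
unlab = np.vstack([rng.normal([1, 0], 0.05, (5, 2)),
                   rng.normal([5, 5], 0.05, (5, 2))])
x_new, y_new = expand_training_set(feat_lab, labels, unlab)
print(len(x_new), sorted(set(y_new.tolist())))
```

Only the 5 near-cluster samples pass the threshold; the ambiguous ones are left for later iterations, mirroring the "gradually include a small subset" strategy.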


Subject(s)
Lung Neoplasms ; Solitary Pulmonary Nodule ; Humans ; Lung Neoplasms/diagnostic imaging ; Lung Neoplasms/pathology ; Retrospective Studies ; Solitary Pulmonary Nodule/diagnostic imaging ; Solitary Pulmonary Nodule/pathology ; Supervised Machine Learning ; Tomography, X-Ray Computed/methods
10.
J Appl Clin Med Phys ; 23(2): e13470, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34807501

ABSTRACT

OBJECTIVES: Because radiotherapy is indispensable for treating cervical cancer, it is critical to delineate the radiation targets accurately and efficiently. We evaluated a deep learning (DL)-based auto-segmentation algorithm for automatic contouring of clinical target volumes (CTVs) in cervical cancer. METHODS: Computed tomography (CT) datasets from 535 cervical cancers treated with definitive or postoperative radiotherapy were collected. A DL tool based on VB-Net was developed to delineate CTVs of the pelvic lymph drainage area (dCTV1) and parametrial area (dCTV2) in the definitive radiotherapy group (training/validation/test: 157/20/23). The CTV of the pelvic lymph drainage area (pCTV1) was delineated in the postoperative radiotherapy group (training/validation/test: 272/30/33). Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) were used to evaluate contouring accuracy. Contouring times were recorded for efficiency comparison. RESULTS: The mean DSC/MSD/HD values for our DL-based tool were 0.88/1.32 mm/21.60 mm for dCTV1, 0.70/2.42 mm/22.44 mm for dCTV2, and 0.86/1.15 mm/20.78 mm for pCTV1. Only minor modifications were needed for 63.5% of auto-segmentations to meet clinical requirements. The contouring accuracy of the DL-based tool was comparable to that of senior radiation oncologists and superior to that of junior/intermediate radiation oncologists. Additionally, DL assistance improved the performance of junior radiation oncologists for dCTV2 and pCTV1 contouring (mean DSC increases: 0.20 for dCTV2, 0.03 for pCTV1; mean contouring time decreases: 9.8 min for dCTV2, 28.9 min for pCTV1). CONCLUSIONS: DL-based auto-segmentation improves CTV contouring accuracy, reduces contouring time, and improves clinical efficiency for treating cervical cancer.


Subject(s)
Deep Learning ; Uterine Cervical Neoplasms ; Algorithms ; Female ; Humans ; Organs at Risk ; Radiotherapy Planning, Computer-Assisted ; Uterine Cervical Neoplasms/diagnostic imaging ; Uterine Cervical Neoplasms/radiotherapy
11.
IEEE Trans Med Imaging ; 41(1): 88-102, 2022 01.
Article in English | MEDLINE | ID: mdl-34383647

ABSTRACT

Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly aids the estimation of intensive care unit events and clinical treatment-planning decisions. To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites. This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and the presence of heterogeneous features. In this paper, we propose a novel domain adaptation (DA) method with two components to address these problems. The first component is a stochastic class-balanced boosting sampling strategy that overcomes the imbalanced learning problem and improves classification performance on poorly predicted classes. The second component is a representation learning scheme that guarantees three properties: 1) domain transferability, via a prototype triplet loss; 2) discriminability, via a conditional maximum mean discrepancy loss; and 3) completeness, via a multi-view reconstruction loss. In particular, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) on a hyper-sphere manifold. Experiments on cross-site severity assessment of COVID-19 from CT images show that the proposed method can effectively tackle the imbalanced learning problem and outperforms recent DA approaches.
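The class-balanced sampling component can be sketched in its simplest form: draw equal numbers per class for each batch, resampling the minority class as needed. The boosting weights of the paper's strategy are omitted; this is only the balancing idea:

```python
import random
from collections import Counter

def class_balanced_batch(samples, labels, batch_size, rng=random):
    """Draw a batch with equal per-class counts, sampling with
    replacement so the minority class is never exhausted."""
    classes = sorted(set(labels))
    per_class = batch_size // len(classes)
    pools = {c: [s for s, l in zip(samples, labels) if l == c] for c in classes}
    batch = []
    for c in classes:
        batch += [(rng.choice(pools[c]), c) for _ in range(per_class)]
    rng.shuffle(batch)
    return batch

# Imbalanced toy cohort: 90 mild (0) vs 10 severe (1) cases
samples = list(range(100))
labels = [0] * 90 + [1] * 10
batch = class_balanced_batch(samples, labels, batch_size=16)
print(Counter(c for _, c in batch))  # 8 of each class
```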


Subject(s)
COVID-19 ; Humans ; SARS-CoV-2 ; Tomography, X-Ray Computed
12.
Pattern Recognit ; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34565913

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices that requires only scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistency techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using information in images without requiring manual annotations. On the other hand, since the output of the mean teacher contains both correct and unreliable predictions, treating each prediction equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the teacher's reliable predictions. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation when a transform is applied to an input image of the network. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to that of fully supervised ones.
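The two ingredients above, an EMA "mean teacher" and uncertainty gating of its predictions, can be sketched as follows (the EMA decay of 0.99 and entropy threshold of 0.5 are illustrative choices, not the paper's settings):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher update: teacher weights are an exponential moving
    average of the student weights."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def reliable_mask(prob_maps, entropy_thresh=0.5):
    """Keep only pixels where the teacher's prediction, averaged over
    perturbed forward passes, has low binary predictive entropy."""
    p = np.clip(prob_maps.mean(axis=0), 1e-6, 1 - 1e-6)  # mean over passes
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return entropy < entropy_thresh

teacher = ema_update(np.array([1.0, 2.0]), np.array([3.0, 4.0]))
print(teacher)  # [1.02 2.02]

# Two stochastic passes: the first pixel is confident, the second is not
passes = np.array([[0.95, 0.45],
                   [0.97, 0.60]])
print(reliable_mask(passes))  # [ True False]
```

The student's consistency loss would then be applied only where `reliable_mask` is `True`.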

13.
Front Oncol ; 11: 700210, 2021.
Article in English | MEDLINE | ID: mdl-34604036

ABSTRACT

OBJECTIVE: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images. METHODS: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, 52 false-negative and 48 normal cases were further collected as a second dataset. The average performance of three radiologists, with and without model assistance, was compared. RESULTS: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and 48 normal cases, the sensitivity, specificity, and accuracy of the model were 69%, 61%, and 65%, respectively. Reading independently, the three radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, their results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. CONCLUSIONS: The deep learning-based model can effectively detect esophageal cancer on unenhanced chest CT scans and improve the incidental detection of esophageal cancer.

14.
Comput Med Imaging Graph ; 90: 101889, 2021 06.
Article in English | MEDLINE | ID: mdl-33848755

ABSTRACT

Screening of pulmonary nodules in computed tomography (CT) is crucial for early diagnosis and treatment of lung cancer. Although computer-aided diagnosis (CAD) systems have been designed to assist radiologists in detecting nodules, fully automated detection is still challenging due to variations in nodule size, shape, and density. In this paper, we first propose a fully automated nodule detection method using a cascade of heterogeneous neural networks trained on chest CT images of 12,155 patients, and then evaluate its performance on phantom (828 CT images) and clinical (2,640 CT images) datasets scanned with different imaging parameters. The nodule detection network employs two feature pyramid networks (FPNs) and a classification network (BasicNet). The first FPN is trained for high nodule detection sensitivity, and the second FPN refines the candidates for false positive reduction (FPR). A BasicNet is then combined with the second FPN to classify the candidates as nodules or non-nodules for the final refinement. This study investigates the detection performance for solid and ground-glass opacity (GGO) nodules in phantom and patient data scanned with different imaging parameters. The results show that detection of solid nodules is robust to imaging parameters, while for GGO detection the reconstruction methods "iDose4-YA" and "STD-YA" achieve better performance. For thin-slice images, higher performance is achieved across different nodule sizes with the reconstruction method "iDose4-STD". For 5 mm slice thickness, the best choice for larger nodules (>5 mm) is "iDose4-YA". Overall, the reconstruction method "iDose4-YA" is suggested to achieve the best balanced results for both solid and GGO nodules.


Subject(s)
Lung Neoplasms ; Solitary Pulmonary Nodule ; Diagnosis, Computer-Assisted ; Humans ; Lung Neoplasms/diagnostic imaging ; Neural Networks, Computer ; Phantoms, Imaging ; Radiographic Image Interpretation, Computer-Assisted ; Sensitivity and Specificity ; Solitary Pulmonary Nodule/diagnostic imaging ; Tomography, X-Ray Computed
15.
Ann Transl Med ; 9(3): 216, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33708843

ABSTRACT

BACKGROUND: The assessment of the severity of coronavirus disease 2019 (COVID-19) by clinical presentation has not met the urgent clinical need so far. We aimed to establish a deep learning (DL) model based on quantitative computed tomography (CT) and initial clinical features to predict the severity of COVID-19. METHODS: One hundred ninety-six hospitalized patients with confirmed COVID-19 were enrolled from January 20 to February 10, 2020 in our centre, and were divided into severe and non-severe groups. The clinico-radiological data on admission were retrospectively collected and compared between the two groups. The optimal clinico-radiological features were determined based on least absolute shrinkage and selection operator (LASSO) logistic regression analysis, and a predictive nomogram model was established by five-fold cross-validation. Receiver operating characteristic (ROC) analyses were conducted, and the areas under the receiver operating characteristic curve (AUCs) of the nomogram model, quantitative CT parameters that were significant in univariate analysis, and pneumonia severity index (PSI) were compared. RESULTS: In comparison with the non-severe group (151 patients), the severe group (45 patients) had a higher PSI (P<0.001). DL-based quantitative CT indicated that the mass of infection (MOICT) and the percentage of infection (POICT) in the whole lung were higher in the severe group (both P<0.001). The nomogram model was based on MOICT and clinical features, including age, cluster of differentiation 4 (CD4)+ T cell count, serum lactate dehydrogenase (LDH), and C-reactive protein (CRP). The AUC values of the model, MOICT, POICT, and PSI scores were 0.900, 0.813, 0.805, and 0.751, respectively. The nomogram model performed significantly better than the other three parameters in predicting severity (P=0.003, P=0.001, and P<0.001, respectively). 
CONCLUSIONS: Although quantitative CT parameters and the PSI can well predict the severity of COVID-19, the DL-based quantitative CT model is more efficient.

16.
BMC Med Imaging ; 21(1): 57, 2021 03 23.
Article in English | MEDLINE | ID: mdl-33757431

ABSTRACT

BACKGROUND: Spatial and temporal lung infection distributions of coronavirus disease 2019 (COVID-19) and their changes could reveal important patterns to better understand the disease and its time course. This paper presents a pipeline to analyze statistically these patterns by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results are then warped onto a pre-defined template CT image using deformable registration based on lung fields. Then, the spatial distributions of infection regions and those during the course of the disease are calculated at the voxel level. Visualization and quantitative comparison can be performed between different groups. We compared the distribution maps between COVID-19 and community acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: For the performance of infection segmentation, comparing the segmentation results with manually annotated ground-truth, the average Dice is 91.6% ± 10.0%, which is close to the inter-rater difference between two radiologists (the Dice is 96.1% ± 3.5%). The distribution map of infection regions shows that high probability regions are in the peripheral subpleural (up to 35.1% in probability). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show similar lesion distributions but with smaller areas of significant difference in the right lower lobe compared to critical COVID-19 (intensive care unit patients). 
Regarding the disease course, critical COVID-19 patients showed four successive patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with notable concurrent HU patterns for GGOs and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and their segmentation results onto a template, spatial distribution patterns of infection can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infection and their changes during the disease course. Our results demonstrate distinct patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four successive disease-course patterns in the critical COVID-19 patients studied, with notable concurrent HU patterns for GGOs and consolidations.
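The two quantitative steps described above, mask overlap (Dice) against a reference and voxel-wise distribution mapping over template-registered masks, can be sketched as follows. This is a minimal illustration on synthetic arrays, not the authors' VB-Net pipeline:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sorensen-Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def probability_map(masks) -> np.ndarray:
    """Voxel-wise infection frequency across template-registered masks."""
    return np.stack([m.astype(float) for m in masks]).mean(axis=0)

# Toy 3x3x3 volumes standing in for template-space segmentations.
m1 = np.zeros((3, 3, 3), dtype=bool); m1[1, 1, :] = True   # "ground truth"
m2 = np.zeros((3, 3, 3), dtype=bool); m2[1, 1, :2] = True  # "prediction"
print(round(dice(m1, m2), 3))        # 0.8
prob = probability_map([m1, m2])
print(prob[1, 1, 0], prob[1, 1, 2])  # 1.0 0.5
```

Averaging such a probability map over a patient group gives exactly the kind of voxel-level distribution map compared between COVID-19 and CAP above.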


Subject(s)
COVID-19/diagnostic imaging, Community-Acquired Infections/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Algorithms, Disease Progression, Humans, Pneumonia/diagnostic imaging, Tomography, X-Ray Computed/methods
17.
Comput Med Imaging Graph ; 89: 101899, 2021 04.
Article in English | MEDLINE | ID: mdl-33761446

ABSTRACT

Computed tomography (CT) screening is essential for early lung cancer detection. With the development of artificial intelligence techniques, it is particularly desirable to explore the ability of current state-of-the-art methods and to analyze nodule features across a large population. In this paper, we present an artificial-intelligence lung image analysis system (ALIAS) for nodule detection and segmentation. After segmenting the nodules, their locations, sizes, and imaging features are computed at the population level to study the differences between benign and malignant nodules. The results provide a better understanding of the underlying imaging features and their utility for early lung cancer diagnosis.


Subject(s)
Lung Neoplasms, Solitary Pulmonary Nodule, Artificial Intelligence, Humans, Intelligence, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted, Solitary Pulmonary Nodule/diagnostic imaging, Tomography, X-Ray Computed
18.
Phys Med Biol ; 66(6): 065031, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33729998

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health, making it important to rapidly and accurately distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, 1,658 patients with COVID-19 and 1,027 patients with CAP underwent thin-section CT and were enrolled. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted, location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection-size-aware random forest method (iSARF) was proposed to discriminate COVID-19 from CAP. Experimental results show that the proposed method performed best when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4%, outperforming state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrated good generalizability. We anticipate that the proposed framework could assist clinical decision making.
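The abstract does not specify the internals of iSARF; as a loose, hypothetical sketch of the general idea of an infection-size-aware classifier, one can route subjects into size strata and train a separate random forest per stratum. Everything here (features, labels, bin edges) is synthetic and illustrative, using scikit-learn:

```python
# Hypothetical sketch: bin subjects by infection size and train one random
# forest per bin; the published iSARF may differ in detail.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300
sizes = rng.uniform(0, 100, n)                   # infection volume per subject (synthetic)
X = rng.normal(size=(n, 8))                      # location-specific features (synthetic)
y = (X[:, 0] + 0.01 * sizes > 0.5).astype(int)   # 1 = COVID-19, 0 = CAP (toy rule)

bins = [0.0, 10.0, 40.0, np.inf]                 # illustrative size strata
models = {}
for lo, hi in zip(bins[:-1], bins[1:]):
    idx = (sizes >= lo) & (sizes < hi)
    if idx.sum() >= 10:                          # skip under-populated strata
        models[(lo, hi)] = RandomForestClassifier(
            n_estimators=100, random_state=0).fit(X[idx], y[idx])

def predict(x, size):
    """Route a subject to the forest matching its infection size."""
    for (lo, hi), m in models.items():
        if lo <= size < hi:
            return int(m.predict(x.reshape(1, -1))[0])
    raise ValueError("no model covers this size stratum")
```

The design rationale is that small and large infections present very different feature distributions, so per-stratum models can fit each regime better than a single pooled classifier.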


Subject(s)
COVID-19/diagnostic imaging, Community-Acquired Infections/diagnostic imaging, Pneumonia/diagnostic imaging, Tomography, X-Ray Computed, Adult, Aged, Diagnosis, Computer-Assisted, Diagnosis, Differential, Female, Humans, Image Processing, Computer-Assisted, Lung/diagnostic imaging, Lung/virology, Male, Middle Aged, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity
19.
Med Image Anal ; 67: 101821, 2021 01.
Article in English | MEDLINE | ID: mdl-33049579

ABSTRACT

There is a large body of literature linking anatomic and geometric characteristics of kidney tumors to perioperative and oncologic outcomes. Semantic segmentation of these tumors and their host kidneys is a promising tool for quantitatively characterizing these lesions, but its adoption is limited by the manual effort required to produce high-quality 3D segmentations of these structures. Recently, methods based on deep learning have shown excellent results in automatic 3D segmentation, but they require large datasets for training, and there remains little consensus on which methods perform best. The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19) was a competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) that sought to address these issues and stimulate progress on this automatic segmentation problem. A training set of 210 cross-sectional CT images with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used these data to develop automated systems to predict the true segmentation masks on a test set of 90 CT images for which the corresponding ground-truth segmentations were kept private. These predictions were scored and ranked according to their average Sørensen-Dice coefficient between the kidney and tumor across all 90 cases. The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching the inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). This challenge has now entered an "open leaderboard" phase where it serves as a challenging benchmark in 3D semantic segmentation.
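The ranking metric described above, the average of kidney and tumor Sørensen-Dice across cases, can be sketched as follows. This is a minimal illustration on toy label arrays, not the official KiTS19 evaluation code:

```python
import numpy as np

def dice_for_label(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Sorensen-Dice for one semantic label in a multi-class mask."""
    p, g = pred == label, gt == label
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

def kits_score(preds, gts) -> float:
    """Average of kidney (label 1) and tumor (label 2) Dice over all cases."""
    per_case = [(dice_for_label(p, g, 1) + dice_for_label(p, g, 2)) / 2
                for p, g in zip(preds, gts)]
    return float(np.mean(per_case))

gt = np.array([0, 1, 1, 2, 2, 0])    # toy flattened volume: 0 bg, 1 kidney, 2 tumor
pred = np.array([0, 1, 1, 2, 0, 0])  # one tumor voxel missed
print(round(kits_score([pred], [gt]), 3))  # 0.833
```

Averaging kidney and tumor Dice equally explains why the winning entry's weaker tumor score (0.851 vs. an inter-annotator 0.923) dominated the remaining gap to human performance.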


Subject(s)
Kidney Neoplasms, Tomography, X-Ray Computed, Cross-Sectional Studies, Humans, Image Processing, Computer-Assisted, Kidney/diagnostic imaging, Kidney Neoplasms/diagnostic imaging
20.
Med Image Anal ; 67: 101824, 2021 01.
Article in English | MEDLINE | ID: mdl-33091741

ABSTRACT

With the rapid worldwide spread of coronavirus disease (COVID-19), it is of great importance to diagnose COVID-19 early and to predict when patients are likely to convert to the severe stage, both for designing effective treatment plans and for reducing clinicians' workloads. In this study, we propose a joint classification and regression method that determines whether a patient will develop severe symptoms later on (formulated as a classification task) and, if so, predicts the conversion time (formulated as a regression task). To do this, the proposed method takes into account 1) a weight for each sample, to reduce the influence of outliers and address class imbalance, and 2) a weight for each feature, via a sparsity regularization term, to remove redundant features of the high-dimensional data and learn the information shared across the two tasks, i.e., classification and regression. To our knowledge, this is the first work to jointly predict disease progression and conversion time, which could help clinicians attend to potentially severe cases in time or even save patients' lives. Experimental analysis was conducted on a real dataset of 408 chest computed tomography (CT) scans from two hospitals. Results show that our method achieves the best classification (e.g., 85.91% accuracy) and regression (e.g., correlation coefficient of 0.462) performance compared with all comparison methods. Moreover, for the severe cases, our method yields 76.97% accuracy, a correlation coefficient of 0.524, and a conversion-time error of 0.55 days.
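The weighted multi-task objective described above can be sketched, loosely, as gradient descent on two linear tasks with per-sample weights and a row-wise L2,1 sparsity term on a shared weight matrix. This is an illustrative toy under assumed forms, not the paper's exact formulation:

```python
# Illustrative toy of a weighted joint classification + regression objective
# with a row-wise L2,1 penalty on a shared weight matrix W (one column per
# task). Data, weights, and hyperparameters are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, d = 120, 20
X = rng.normal(size=(n, d))
y_cls = (X[:, 0] > 0).astype(float)                    # severe vs non-severe (toy)
y_reg = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=n)  # conversion time (toy)
s = np.ones(n)         # per-sample weights; outliers would receive s < 1
lam, lr = 0.1, 0.05    # sparsity strength and step size (illustrative)
W = np.zeros((d, 2))   # column 0: classification, column 1: regression

for _ in range(1000):
    r_cls = X @ W[:, 0] - y_cls
    r_reg = X @ W[:, 1] - y_reg
    grad = np.stack([X.T @ (s * r_cls), X.T @ (s * r_reg)], axis=1) / n
    # (Sub)gradient of the L2,1 penalty: rows shrink jointly across tasks,
    # so a feature is kept or discarded for both tasks at once.
    row_norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
    W -= lr * (grad + lam * W / row_norms)

# Feature 0 drives both tasks, so row 0 of W should carry most of the weight.
```

The row-wise coupling is what lets the two tasks share information: a feature deemed redundant is zeroed for classification and regression simultaneously.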


Subject(s)
COVID-19/classification, COVID-19/diagnostic imaging, Pneumonia, Viral/classification, Pneumonia, Viral/diagnostic imaging, Tomography, X-Ray Computed/methods, Disease Progression, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Radiographic Image Interpretation, Computer-Assisted, Radiography, Thoracic, SARS-CoV-2, Severity of Illness Index, Time Factors