Results 1 - 16 of 16
1.
IEEE Trans Med Imaging ; 43(2): 723-733, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37756173

ABSTRACT

Coronary artery segmentation is critical for coronary artery disease diagnosis but challenging due to the vessel's tortuous course, numerous small branches, and inter-subject variations. Most existing studies ignore important anatomical information and vascular topologies, leading to segmentation performance that usually cannot satisfy clinical demands. To address these challenges, in this paper we propose an anatomy- and topology-preserving two-stage framework for coronary artery segmentation. The proposed framework consists of an anatomical dependency encoding (ADE) module and a hierarchical topology learning (HTL) module for coarse and fine segmentation, respectively. Specifically, the ADE module segments the four heart chambers and the aorta, yielding five distance field maps that encode the distance between each chamber surface and the coarsely segmented coronary artery. Meanwhile, ADE also performs coronary artery detection to crop the region of interest and mitigate foreground-background imbalance. The follow-up HTL module performs fine segmentation by exploiting three hierarchical vascular topologies, i.e., key points, centerlines, and neighbor connectivity, using a multi-task learning scheme. In addition, we adopt a bottom-up attention interaction (BAI) module to integrate the feature representations extracted across hierarchical topologies. Extensive experiments on public and in-house datasets show that the proposed framework achieves state-of-the-art performance for coronary artery segmentation.
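The distance field maps described above can be sketched with a Euclidean distance transform over the binary mask of each segmented structure. This is a minimal illustrative version, not the paper's implementation; the function name is ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field(mask):
    """Euclidean distance from every voxel to the nearest voxel of `mask`.

    `mask` is a binary array marking one cardiac structure (a chamber or
    the aorta); voxels inside the structure get distance 0.
    """
    # distance_transform_edt measures distance to the nearest zero element,
    # so invert the mask: background voxels get the distance to the structure.
    return distance_transform_edt(~mask.astype(bool))

# Toy 2D example: one "chamber" occupying the left column.
chamber = np.zeros((3, 4), dtype=bool)
chamber[:, 0] = True
dfm = distance_field(chamber)
# Each column is one voxel farther from the structure: row 0 is [0, 1, 2, 3].
```

Stacking one such map per structure gives the five-channel distance encoding that conditions the fine segmentation stage.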


Subjects
Coronary Artery Disease; Deep Learning; Humans; Heart/diagnostic imaging; Aorta; Image Processing, Computer-Assisted
2.
Radiol Med ; 128(3): 307-315, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36800112

ABSTRACT

BACKGROUND: Post-processing and interpretation of coronary CT angiography (CCTA) images are time-consuming and depend on the reader's experience. An automated deep learning (DL)-based image reconstruction and diagnosis system was developed to improve diagnostic accuracy and efficiency. METHODS: Our study, including 374 cases from five sites and 12 invited radiologists, assessed the DL-based system in diagnosing obstructive coronary disease with regard to diagnostic performance and the image post-processing and reporting time of radiologists, with invasive coronary angiography as the reference standard. The diagnostic performance of the DL system and of DL-assisted human readers was compared with the traditional approach of human readers without the DL system. RESULTS: Comparing human readers without versus with the DL system, the AUC improved from 0.81 to 0.82 (p < 0.05) at the patient level and from 0.79 to 0.81 (p < 0.05) at the vessel level. An increase in AUC was observed for inexperienced radiologists (p < 0.05) but was absent for experienced radiologists. Regarding diagnostic efficiency, comparing the DL system versus human readers, the average post-processing and reporting time decreased from 798.60 s to 189.12 s (p < 0.05). The sensitivity and specificity of the DL system alone were 93.55% and 59.57% at the patient level and 83.23% and 79.97% at the vessel level, respectively. CONCLUSIONS: With the DL system serving as a concurrent reader, the overall post-processing and reading time was substantially reduced. The diagnostic accuracy of human readers, especially inexperienced readers, was improved. DL-assisted human reading has the potential to become the reading mode of choice in clinical routine.


Subjects
Coronary Artery Disease; Coronary Stenosis; Deep Learning; Humans; Computed Tomography Angiography/methods; Constriction, Pathologic; Coronary Stenosis/diagnostic imaging; Coronary Angiography/methods
3.
Radiology ; 306(3): e221393, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36283114

ABSTRACT

Background CT imaging of chronic total occlusion (CTO) is useful in guiding revascularization, but manual reconstruction and quantification are time consuming. Purpose To develop and validate a deep learning (DL) model for automated CTO reconstruction. Materials and Methods In this retrospective study, a DL model for automated CTO segmentation and reconstruction was developed using coronary CT angiography images from a training set of 6066 patients (582 with CTO, 5484 without CTO) and a validation set of 1962 patients (208 with CTO, 1754 without CTO). The algorithm was validated using an external test set of 211 patients with CTO. The consistency and measurement agreement of CTO quantification were compared between the DL model and the conventional manual protocol using the intraclass correlation coefficient, Cohen κ coefficient, and Bland-Altman plot. The predictive values of CT-derived Multicenter CTO Registry of Japan (J-CTO) score for revascularization success were evaluated. Results In the external test set, 211 patients (mean age, 66 years ± 11 [SD]; 164 men) with 240 CTO lesions were evaluated. Automated segmentation and reconstruction of CTOs by DL was successful in 95% of lesions (228 of 240) without manual editing and in 48% of lesions (116 of 240) with the conventional manual protocol (P < .001). The total postprocessing and measurement time was shorter for DL than for manual reconstruction (mean, 121 seconds ± 20 vs 456 seconds ± 68; P < .001). The quantitative and qualitative CTO parameters evaluated with the two methods showed excellent correlation (all correlation coefficients > 0.85, all P < .001) and minimal measurement difference. The predictive values of J-CTO score derived from DL and conventional manual quantification for procedure success showed no difference (area under the receiver operating characteristic curve, 0.76 [95% CI: 0.69, 0.82] and 0.76 [95% CI: 0.69, 0.82], respectively; P = .55). 
Conclusion When compared with manual reconstruction, the deep learning model considerably reduced postprocessing time for chronic total occlusion quantification and showed excellent correlation and agreement in the anatomic assessment of occlusion features. © RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Loewe in this issue.
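The measurement agreement reported above comes down to a bias and 95% limits of agreement over paired measurements, the quantities drawn on a Bland-Altman plot. A minimal sketch with hypothetical paired lengths (not data from the study):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, as summarized by a Bland-Altman plot."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # ~95% limits at 1.96 SD
    return bias, bias - half_width, bias + half_width

# Hypothetical automated vs. manual occlusion-length measurements (mm).
bias, lo, hi = bland_altman([31.0, 24.5, 18.0, 40.0],
                            [30.0, 25.5, 19.0, 39.0])
```

Narrow limits around a near-zero bias correspond to the "minimal measurement difference" the study describes.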


Subjects
Coronary Occlusion; Deep Learning; Percutaneous Coronary Intervention; Male; Humans; Aged; Treatment Outcome; Coronary Occlusion/diagnostic imaging; Coronary Occlusion/surgery; Retrospective Studies; Percutaneous Coronary Intervention/methods; Coronary Angiography/methods; Tomography, X-Ray Computed; Chronic Disease; Risk Factors
4.
Eur Radiol ; 33(3): 1824-1834, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36214848

ABSTRACT

OBJECTIVES: To evaluate deep neural networks for automatic rib fracture detection on thoracic CT scans and to compare their performance with that of attending-level radiologists using large datasets from multiple medical institutions. METHODS: In this retrospective study, an internal dataset of 12,208 emergency room (ER) trauma patients and an external dataset of 1613 ER trauma patients with chest CT scans were included. Two cascaded deep neural networks based on an extended U-Net architecture were developed to segment ribs and detect rib fractures, respectively. Model performance was evaluated with 95% confidence intervals (CIs) on both the internal and external datasets and compared with attending-level radiologist readings using the t test. RESULTS: On the internal dataset, the AUC of the model for detecting fractures at the per-rib level was 0.970 (95% CI: 0.968, 0.972), with a sensitivity of 93.3% (95% CI: 92.0%, 94.4%) at a specificity of 98.4% (95% CI: 98.3%, 98.5%). On the external dataset, the model obtained an AUC of 0.943 (95% CI: 0.941, 0.945), with a sensitivity of 86.2% (95% CI: 85.0%, 87.3%) at a specificity of 98.8% (95% CI: 98.7%, 98.9%), compared to a sensitivity of 70.5% (95% CI: 69.3%, 71.8%) (p < .0001) and specificity of 98.8% (95% CI: 98.7%, 98.9%) (p = 0.175) for attending radiologists. CONCLUSIONS: The proposed DL model is a feasible approach to identifying rib fractures on chest CT scans, reaching a level at least on par with attending-level radiologists. KEY POINTS: • Deep learning-based algorithms automatically detected rib fractures with high sensitivity and reasonable specificity on chest CT scans. • The performance of deep learning-based algorithms reached diagnostic measures comparable to attending-level radiologists for rib fracture detection on chest CT scans. • The deep learning models, similar to human readers, were susceptible to the inconspicuity and ambiguity of target lesions; more training data is required for subtle lesions to achieve comparable detection performance.


Subjects
Deep Learning; Rib Fractures; Humans; Rib Fractures/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed; Algorithms
5.
Comput Med Imaging Graph ; 102: 102126, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36242993

ABSTRACT

Intracranial aneurysms are commonly found in human brains, especially in the elderly, and their rupture accounts for a high rate of subarachnoid hemorrhages. However, pinpointing small aneurysms in computed tomography angiography (CTA) images is time-consuming and requires special expertise. Deep learning-based detection has greatly improved efficiency, but false positives remain difficult to rule out. To study the feasibility of deep learning algorithms for aneurysm analysis in clinical applications, this paper proposes a pipeline for aneurysm detection, segmentation, and rupture classification and validates its performance using CTA images of 1508 subjects. A cascade aneurysm detection model is employed, first using a fine-tuned feature pyramid network (FPN) for candidate detection and then applying a dual-channel ResNet aneurysm classifier to further reduce false positives. Detected aneurysms are then segmented by applying a traditional 3D V-Net to their image patches. Radiomics features of aneurysms are extracted after detection and segmentation. Machine-learning-based and deep learning-based rupture classification can then be used to distinguish ruptured from unruptured aneurysms. Experimental results show that the dual-channel ResNet aneurysm classifier, utilizing image and vesselness information, boosts detection sensitivity compared to a single image-channel input. Overall, the proposed pipeline achieves a sensitivity of 90% at 1 false positive per image and 95% at 2 false positives per image. For rupture classification, an area under the curve (AUC) of 0.906 is achieved on the testing dataset. The results suggest the pipeline is feasible for potential clinical use in assisting radiologists with aneurysm detection and with classification of ruptured and unruptured aneurysms.
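"Sensitivity at N false positives per image" is an operating point on the FROC curve. The sketch below shows one way to read it off ranked candidate detections; the data is a toy example and, for brevity, multiple hits on the same aneurysm are not deduplicated.

```python
def sensitivity_at_fp(candidates, n_images, n_lesions, fp_per_image):
    """Fraction of lesions found before the false-positive budget is spent.

    `candidates` holds (score, is_true_positive) pairs for every candidate
    the detector produced across all images, as in FROC analysis: lower the
    score threshold until the allowed number of false positives is reached.
    """
    budget = fp_per_image * n_images
    hits = false_pos = 0
    for score, is_tp in sorted(candidates, reverse=True):
        if is_tp:
            hits += 1
        else:
            false_pos += 1
            if false_pos > budget:
                break  # budget exhausted; stop lowering the threshold
    return hits / n_lesions

# Toy ranked candidates: 3 true aneurysms among 5 detections on 2 images.
cands = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]
```

A stricter false-positive budget lowers the achievable sensitivity, which is exactly the trade-off the 90%/95% figures above describe.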


Subjects
Aneurysm, Ruptured; Intracranial Aneurysm; Humans; Aged; Intracranial Aneurysm/diagnostic imaging; Cerebral Angiography/methods; Angiography, Digital Subtraction/methods; Sensitivity and Specificity; Aneurysm, Ruptured/diagnostic imaging
6.
Med Sci Monit ; 28: e936733, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35698440

ABSTRACT

BACKGROUND We aimed to develop and evaluate a deep learning-based method for fully automatic segmentation of knee joint MR imaging and quantitative computation of knee osteoarthritis (OA)-related imaging biomarkers. MATERIAL AND METHODS This retrospective study included 843 volumes of proton density-weighted fat suppression MR imaging. A convolutional neural network segmentation method with multiclass gradient harmonized Dice loss was trained and evaluated on 500 and 137 volumes, respectively. To assess potential morphologic biomarkers for OA, the volumes and thickness of cartilage and meniscus, and minimal joint space width (mJSW) were automatically computed and compared between 128 OA and 162 control data. RESULTS The CNN segmentation model produced reasonably high Dice coefficients, ranging from 0.948 to 0.974 for knee bone compartments, 0.717 to 0.809 for cartilage, and 0.846 for both lateral and medial menisci. The OA-related biomarkers computed from automatic knee segmentation achieved strong correlation with those from manual segmentation: average intraclass correlations of 0.916, 0.899, and 0.876 for volume and thickness of cartilage, meniscus, and mJSW, respectively. Volume and thickness measurements of cartilage and mJSW were strongly correlated with knee OA progression. CONCLUSIONS We present a fully automatic CNN-based knee segmentation system for fast and accurate evaluation of knee joint images, and OA-related biomarkers such as cartilage thickness and mJSW were reliably computed and visualized in 3D. The results show that the CNN model can serve as an assistant tool for radiologists and orthopedic surgeons in clinical practice and basic research.
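The Dice coefficients quoted above measure voxel overlap per tissue class; a minimal evaluation-side sketch is below. The gradient-harmonized Dice loss used for training is a differentiable relative of this metric, not shown here.

```python
import numpy as np

def dice_per_class(pred, target, n_classes):
    """Per-class Dice overlap between two integer label maps."""
    scores = []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        # Convention: empty-vs-empty counts as perfect agreement.
        scores.append(2.0 * inter / denom if denom else 1.0)
    return scores

# Toy 4-voxel example with 3 classes (background, cartilage, meniscus).
pred = np.array([0, 1, 1, 2])
target = np.array([0, 1, 2, 2])
d = dice_per_class(pred, target, n_classes=3)
```

Averaging such per-class scores over a test set gives the kind of per-compartment figures (e.g., 0.948-0.974 for bone) reported above.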


Subjects
Cartilage, Articular; Deep Learning; Osteoarthritis, Knee; Cartilage, Articular/diagnostic imaging; Cartilage, Articular/pathology; Humans; Knee Joint/diagnostic imaging; Knee Joint/pathology; Magnetic Resonance Imaging/methods; Magnetic Resonance Spectroscopy; Osteoarthritis, Knee/diagnostic imaging; Osteoarthritis, Knee/pathology; Reproducibility of Results; Retrospective Studies
7.
Acta Radiol ; 63(11): 1535-1545, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34617809

ABSTRACT

BACKGROUND: The detection of rib fractures (RFs) on computed tomography (CT) images is time-consuming and susceptible to missed diagnosis. An automated artificial intelligence (AI) detection system may help improve the diagnostic efficiency of junior radiologists. PURPOSE: To compare the diagnostic performance of junior radiologists with and without AI software for RF detection on chest CT images. MATERIALS AND METHODS: Six junior radiologists from three institutions interpreted 393 CT images of patients with acute chest trauma, with and without AI software. The CT images were randomly split into two sets at each institution, with each set assigned to a different radiologist. First, the detection of all fractures (AFs), including displaced fractures (DFs), non-displaced fractures, and buckle fractures, was analyzed. Next, the DFs were selected for analysis. The sensitivity and specificity of the radiologist-only and radiologist-AI groups at the patient level were set as primary endpoints, with secondary endpoints at the rib and lesion levels. RESULTS: Regarding AFs, the sensitivity difference between the radiologist-AI group and the radiologist-only group was significant at all levels (patient level: 26.20%; rib level: 22.18%; lesion level: 23.74%; P < 0.001). Regarding DFs, the sensitivity difference was 16.67%, 14.19%, and 16.16% at the patient, rib, and lesion levels, respectively (P < 0.001). No significant difference was found in specificity between the two groups for AFs and DFs at the patient and rib levels (P > 0.05). CONCLUSION: AI software improved the sensitivity of RF detection on CT images for junior radiologists and reduced reading time by approximately 1 min per patient without decreasing specificity.


Subjects
Rib Fractures; Artificial Intelligence; Humans; Radiologists; Retrospective Studies; Rib Fractures/diagnostic imaging; Sensitivity and Specificity; Software; Tomography, X-Ray Computed/methods
8.
J Hepatocell Carcinoma ; 8: 671-683, 2021.
Article in English | MEDLINE | ID: mdl-34235105

ABSTRACT

PURPOSE: Liver Imaging Reporting and Data System (LI-RADS) classification, especially the identification of LR-3 to LR-5 lesions with a probability of hepatocellular carcinoma (HCC), is of great significance for treatment strategy determination. We aimed to develop a semi-automatic LI-RADS grading system on multiphase gadoxetic acid-enhanced MRI using deep convolutional neural networks (CNNs). PATIENTS AND METHODS: An internal dataset of 439 patients and an external dataset of 71 patients with suspected HCC underwent gadoxetic acid-enhanced MRI. The expert-guided LI-RADS grading system consisted of four deep 3D CNN models: a tumor segmentation model for automatic diameter estimation and three classification models for the LI-RADS major features, namely arterial phase hyperenhancement (APHE), washout, and enhancing capsule. An end-to-end learning system comprising a single deep CNN model that directly classified the LI-RADS grade was developed for comparison. RESULTS: On the internal testing set, the segmentation model reached a mean Dice of 0.84, with an accuracy of 82.7% (95% CI: 74.4%, 91.7%) for mapped diameter intervals. The areas under the curve (AUCs) were 0.941 (95% CI: 0.914, 0.961), 0.859 (95% CI: 0.823, 0.890), and 0.712 (95% CI: 0.668, 0.754) for APHE, washout, and capsule, respectively. The expert-guided system significantly outperformed the end-to-end system, with a LI-RADS grading accuracy of 68.3% (95% CI: 60.8%, 76.5%) vs 55.6% (95% CI: 48.8%, 63.0%) (P<0.0001). On the external testing set, the accuracy for mapped diameter intervals was 91.5% (95% CI: 81.9%, 100.0%). The AUCs were 0.792 (95% CI: 0.745, 0.833), 0.654 (95% CI: 0.602, 0.703), and 0.658 (95% CI: 0.606, 0.707) for APHE, washout, and capsule, respectively. The expert-guided system achieved an overall grading accuracy of 66.2% (95% CI: 58.0%, 75.2%), significantly higher than the end-to-end system at 50.1% (95% CI: 43.1%, 58.1%) (P<0.0001).
CONCLUSION: We developed a semi-automatic step-by-step expert-guided LI-RADS grading system (LR-3 to 5), superior to the conventional end-to-end learning system. This deep learning-based system may improve workflow efficiency for HCC diagnosis in clinical practice.
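Mapping the automatically estimated diameter to a size interval is the step that feeds lesion size into the grading logic. The bins below follow the general LI-RADS size cut-offs (<10 mm, 10-19 mm, ≥20 mm); whether the paper used exactly these bins is an assumption of this sketch.

```python
def diameter_interval(diameter_mm):
    """Map an estimated tumor diameter (mm) to a LI-RADS size interval.

    Cut-offs follow the standard LI-RADS size criteria; the exact binning
    used by the paper is assumed, not confirmed.
    """
    if diameter_mm < 10:
        return "<10 mm"
    if diameter_mm < 20:
        return "10-19 mm"
    return ">=20 mm"
```

The "accuracy of mapped diameter intervals" above is then simply the fraction of lesions whose predicted interval matches the interval from manual measurement.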

9.
Phys Med Biol ; 66(6): 065031, 2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33729998

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 patients with CAP who underwent thin-section CT were enrolled. All images were preprocessed to obtain segmentations of infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4%, outperforming state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrated good generalizability. It is anticipated that the proposed framework could assist clinical decision making.
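The "infection size-aware" part of iSARF can be pictured as routing each subject to a size group before classification, with a separate random forest trained per group so each model only sees subjects with comparable lesion burden. The bin edges below are illustrative, not the paper's values.

```python
import numpy as np

def size_group(infection_volume_ml, bins=(10.0, 50.0, 150.0)):
    """Index of the size group a subject's total infection volume falls into.

    A classifier (a random forest in iSARF) is then trained per group.
    Volumes exactly on a bin edge go to the upper group; the edges here
    are illustrative placeholders.
    """
    return int(np.searchsorted(bins, infection_volume_ml, side="right"))
```

At inference, the same routing selects which per-group model scores the subject, so small and large infections never compete within one model.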


Subjects
COVID-19/diagnostic imaging; Community-Acquired Infections/diagnostic imaging; Pneumonia/diagnostic imaging; Tomography, X-Ray Computed; Adult; Aged; Diagnosis, Computer-Assisted; Diagnosis, Differential; Female; Humans; Image Processing, Computer-Assisted; Lung/diagnostic imaging; Lung/virology; Male; Middle Aged; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity
10.
Eur Radiol ; 31(7): 4824-4838, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33447861

ABSTRACT

OBJECTIVES: To develop radiomics-based nomograms for preoperative prediction of microvascular invasion (MVI) and recurrence-free survival (RFS) in patients with solitary hepatocellular carcinoma (HCC) ≤ 5 cm. METHODS: Between March 2012 and September 2019, 356 patients with pathologically confirmed solitary HCC ≤ 5 cm who underwent preoperative gadoxetate disodium-enhanced MRI were retrospectively enrolled. MVI was graded as M0, M1, or M2 according to the number and distribution of invaded vessels. Radiomics features were extracted from DWI, arterial, portal venous, and hepatobiliary phase images in regions of the entire tumor, the peritumoral area ≤ 10 mm, and randomly selected liver tissue. Multivariate analysis identified the independent predictors of MVI and RFS, with nomograms visualizing the final predictive models. RESULTS: Elevated alpha-fetoprotein, total bilirubin and radiomics values, peritumoral enhancement, and incomplete or absent capsule enhancement were independent risk factors for MVI. The AUCs of the MVI nomogram reached 0.920 (95% CI: 0.861-0.979) using random forest and 0.879 (95% CI: 0.820-0.938) using logistic regression analysis in the validation cohort (n = 106). With a 5-year RFS rate of 68.4%, the median RFS of MVI-positive (M2 and M1) and MVI-negative (M0) patients was 30.5 (11.9 and 40.9) and > 96.9 months (p < 0.001), respectively. Age, histologic MVI, alkaline phosphatase, and alanine aminotransferase independently predicted recurrence, yielding an AUC of 0.654 (95% CI: 0.538-0.769, n = 99) in the RFS validation cohort. In place of histologic MVI, the MVI preoperatively predicted by the MVI nomogram using random forest achieved comparable accuracy in MVI stratification and RFS prediction. CONCLUSIONS: A preoperative radiomics-based nomogram using random forest is a potential biomarker for MVI and RFS prediction in solitary HCC ≤ 5 cm.
KEY POINTS: • The radiomics score was the predominant independent predictor of MVI which was the primary independent risk factor for postoperative recurrence. • The radiomics-based nomogram using either random forest or logistic regression analysis has obtained the best preoperative prediction of MVI in HCC patients so far. • As an excellent substitute for the invasive histologic MVI, the preoperatively predicted MVI by MVI nomogram using random forest (MVI-RF) achieved comparable accuracy in MVI stratification and outcome, reinforcing the radiologic understanding of HCC angioinvasion and progression.


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Carcinoma, Hepatocellular/diagnostic imaging; Gadolinium DTPA; Humans; Liver Neoplasms/diagnostic imaging; Magnetic Resonance Imaging; Neoplasm Invasiveness; Neoplasm Recurrence, Local/diagnostic imaging; Retrospective Studies
11.
IEEE J Biomed Health Inform ; 24(10): 2798-2805, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32845849

ABSTRACT

Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease 2019 (COVID-19). Given the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, to capture the high-level representation of these features with relatively small-scale data, we leverage a deep forest model. Moreover, we propose a feature selection method based on the trained deep forest model to reduce the redundancy of features, where the feature selection can be adaptively incorporated into the COVID-19 classification model. We evaluated the proposed AFS-DF on a COVID-19 dataset with 1495 patients with COVID-19 and 1027 patients with community-acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision, and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10%, and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification compared with four widely used machine learning methods.
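The feature-selection step amounts to ranking features by importance and keeping only the top fraction before retraining. In AFS-DF the importances come from the trained deep forest itself and selection is folded into training; in this sketch they are simply given, and `keep_ratio` is an illustrative choice.

```python
import numpy as np

def select_features(X, importances, keep_ratio=0.5):
    """Keep the most informative feature columns of X, dropping the rest."""
    k = max(1, int(len(importances) * keep_ratio))
    # Top-k indices by importance, restored to original column order.
    keep = np.sort(np.argsort(importances)[::-1][:k])
    return X[:, keep], keep

X = np.arange(12).reshape(3, 4)            # 3 samples, 4 features
X_sel, kept = select_features(X, importances=[0.10, 0.40, 0.05, 0.45])
```

Iterating this prune-and-retrain loop until performance stops improving gives the "adaptive" selection the abstract describes.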


Subjects
Betacoronavirus; Clinical Laboratory Techniques/statistics & numerical data; Coronavirus Infections/diagnostic imaging; Coronavirus Infections/diagnosis; Pneumonia, Viral/diagnostic imaging; Pneumonia, Viral/diagnosis; Tomography, X-Ray Computed/statistics & numerical data; COVID-19; COVID-19 Testing; Computational Biology; Coronavirus Infections/classification; Databases, Factual/statistics & numerical data; Deep Learning; Humans; Neural Networks, Computer; Pandemics/classification; Pneumonia, Viral/classification; Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data; Radiography, Thoracic/statistics & numerical data; SARS-CoV-2
12.
IEEE Trans Med Imaging ; 39(8): 2595-2605, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32730212

ABSTRACT

The coronavirus disease (COVID-19) is rapidly spreading all over the world, and had infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper care to patients and to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically distinguish COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions. Note that there exists an imbalanced distribution of infection region sizes between COVID-19 and CAP, partially due to the fast progression of COVID-19 after symptom onset. We therefore develop a dual-sampling strategy to mitigate the imbalanced learning. Our method is evaluated on what is, to the best of our knowledge, the largest multi-center CT dataset for COVID-19, from 8 hospitals. In the training-validation stage, we collected 2186 CT scans from 1588 patients for 5-fold cross-validation. In the testing stage, we employed another independent large-scale testing dataset including 2796 CT scans from 2057 patients. Results show that our algorithm can identify COVID-19 images with an area under the receiver operating characteristic curve (AUC) of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With this performance, the proposed algorithm could potentially aid radiologists in distinguishing COVID-19 from CAP, especially in the early stage of the COVID-19 outbreak.
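One half of a dual-sampling scheme draws training cases uniformly while the other over-samples rare lesion-size groups. The inverse-frequency weights driving the second sampler can be sketched as follows; the binning of sizes into groups and the normalization are illustrative, not the paper's exact recipe.

```python
import numpy as np

def inverse_frequency_weights(size_bins):
    """Per-sample draw probabilities inversely proportional to how common
    each lesion-size bin is, so rare sizes are sampled more often."""
    bins, counts = np.unique(size_bins, return_counts=True)
    count_of = dict(zip(bins.tolist(), counts.tolist()))
    w = np.array([1.0 / count_of[b] for b in size_bins])
    return w / w.sum()   # normalize to a probability distribution

w = inverse_frequency_weights([0, 0, 0, 1])   # bin 1 is three times rarer
```

Training one network branch with the uniform sampler and one with these weights, then ensembling, is the essence of the dual-sampling strategy.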


Subjects
Coronavirus Infections/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted/methods; Pneumonia, Viral/diagnostic imaging; Algorithms; Betacoronavirus; COVID-19; Community-Acquired Infections/diagnostic imaging; Humans; Pandemics; ROC Curve; Radiography, Thoracic; SARS-CoV-2; Tomography, X-Ray Computed
13.
IEEE Trans Med Imaging ; 39(8): 2606-2614, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32386147

ABSTRACT

Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world. Given the large number of infected patients and the heavy workload on doctors, computer-aided diagnosis with machine learning algorithms is urgently needed and could largely reduce the efforts of clinicians and accelerate the diagnostic process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosis of the disease. In this study, we propose to conduct the diagnosis of COVID-19 with a series of features extracted from CT images. To fully explore multiple features describing CT images from different views, a unified latent representation is learned that completely encodes information from different aspects of the features and is endowed with a promising class structure for separability. Specifically, completeness is guaranteed by a group of backward neural networks (one for each type of feature), while using class labels the representation is enforced to be compact within COVID-19 and community-acquired pneumonia (CAP), with a large margin guaranteed between the two types of pneumonia. In this way, our model can well avoid overfitting compared with directly projecting high-dimensional features onto classes. Extensive experimental results show that the proposed method outperforms all comparison methods, and performance remains stable when the amount of training data is varied.


Subjects
Coronavirus Infections/diagnostic imaging; Machine Learning; Pneumonia, Viral/diagnostic imaging; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Betacoronavirus; COVID-19; Child; Female; Humans; Male; Middle Aged; Pandemics; Radiography, Thoracic; SARS-CoV-2; Young Adult
14.
IEEE Trans Med Imaging ; 34(8): 1694-704, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26241768

ABSTRACT

Accurate segmentation of the spinal canals in computed tomography (CT) images is an important task in many related studies. In this paper, we propose an automatic segmentation method and apply it to our highly challenging image cohort that is acquired from multiple clinical sites and from the CT channel of the PET-CT scans. To this end, we adapt the interactive random-walk solvers to be a fully automatic cascaded pipeline. The automatic segmentation pipeline is initialized with robust voxelwise classification using Haar-like features and probabilistic boosting tree. Then, the topology of the spinal canal is extracted from the tentative segmentation and further refined for the subsequent random-walk solver. In particular, the refined topology leads to improved seeding voxels or boundary conditions, which allow the subsequent random-walk solver to improve the segmentation result. Therefore, by iteratively refining the spinal canal topology and cascading the random-walk solvers, satisfactory segmentation results can be acquired within only a few iterations, even for cases with scoliosis, bone fractures and lesions. Our experiments validate the capability of the proposed method with promising segmentation performance, even though the resolution and the contrast of our dataset with 110 patient cases (90 for testing and 20 for training) are low and various bone pathologies occur frequently.
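At the core of each random-walk solver above is a linear system: the probability that a walker starting at each voxel reaches a foreground seed before a background seed is the harmonic function with Dirichlet conditions at the seeds. The toy below solves that system on a 1-D chain with uniform edge weights; real solvers do the same on the 3-D voxel lattice with intensity-weighted edges.

```python
import numpy as np

def random_walk_1d(n, seed_fg, seed_bg):
    """Probability of reaching the foreground seed before the background
    seed, for a random walker started at each node of a 1-D chain graph."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i == seed_fg:
            A[i, i], b[i] = 1.0, 1.0      # clamped to 1 at the fg seed
        elif i == seed_bg:
            A[i, i], b[i] = 1.0, 0.0      # clamped to 0 at the bg seed
        else:
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            A[i, i] = len(nbrs)
            for j in nbrs:
                A[i, j] = -1.0            # discrete Laplacian row
    return np.linalg.solve(A, b)

p = random_walk_1d(5, seed_fg=0, seed_bg=4)
# Probabilities decay linearly between the seeds on a uniform chain.
```

Thresholding `p` at 0.5 yields the segmentation; the cascade in the paper iterates this with refined seeds derived from the extracted topology.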


Subjects
Image Processing, Computer-Assisted/methods; Spinal Canal/diagnostic imaging; Tomography, X-Ray Computed/methods; Adult; Aged; Databases, Factual; Humans; Middle Aged; Spinal Fractures/diagnostic imaging; Spinal Neoplasms/diagnostic imaging; Surface Properties; Young Adult
15.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 372-80, 2014.
Article in English | MEDLINE | ID: mdl-25333140

ABSTRACT

Patient-specific orthopedic knee surgery planning requires precise segmentation of multiple knee bones, namely the femur, tibia, fibula, and patella, around knee joints with severe pathologies from 3D CT images. In this work, we propose a fully automated, highly precise, and computationally efficient segmentation approach for multiple bones. First, each bone is initially segmented using a model-based marginal space learning framework for pose estimation, followed by non-rigid boundary deformation. To recover shape details, we then refine the bone segmentation using graph cuts that incorporate shape priors derived from the initial segmentation. Finally, we remove overlap between neighboring bones using multi-layer graph partition. In experiments, we achieve simultaneous segmentation of the femur, tibia, patella, and fibula with an overall accuracy of less than 1 mm surface-to-surface error, in less than 90 s, on hundreds of 3D CT scans with pathological knee joints.


Subjects
Bone and Bones/diagnostic imaging; Information Storage and Retrieval/methods; Knee Joint/diagnostic imaging; Pattern Recognition, Automated/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Artificial Intelligence; Bone and Bones/surgery; Humans; Imaging, Three-Dimensional/methods; Knee Joint/surgery; Preoperative Care/methods; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
16.
Med Image Comput Comput Assist Interv ; 14(Pt 3): 166-74, 2011.
Article in English | MEDLINE | ID: mdl-22003696

ABSTRACT

We propose an automatic algorithm for phase labeling that relies on the intensity changes in anatomical regions due to the contrast agent propagation. The regions (specified by aorta, vena cava, liver, and kidneys) are first detected by a robust learning-based discriminative algorithm. The intensities inside each region are then used in multi-class LogitBoost classifiers to independently estimate the contrast phase. Each classifier forms a node in a decision tree which is used to obtain the final phase label. Combining independent classification from multiple regions in a tree has the advantage when one of the region detectors fail or when the phase training example database is imbalanced. We show on a dataset of 1016 volumes that the system correctly classifies native phase in 96.2% of the cases, hepatic dominant phase (92.2%), hepatic venous phase (96.7%), and equilibrium phase (86.4%) in 7 seconds on average.
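Combining per-region phase predictions so that one failed region detector cannot decide the label can be sketched as below. The paper chains multi-class LogitBoost classifiers in a decision tree; the plain majority vote here is a deliberate simplification of that combination step, and the region names mirror those in the abstract.

```python
def label_phase(region_predictions):
    """Combine per-region contrast-phase predictions into one label.

    `region_predictions` maps each anatomical region (aorta, vena cava,
    liver, kidneys) to its classifier's phase prediction, with `None`
    marking a region whose detector failed.
    """
    votes = [p for p in region_predictions.values() if p is not None]
    if not votes:
        return "unknown"        # every region detector failed
    return max(set(votes), key=votes.count)   # majority vote
```

Because each region contributes an independent opinion, an imbalanced training database or a single missed organ degrades the result gracefully rather than catastrophically, which is the advantage the abstract highlights.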


Subjects
Cone-Beam Computed Tomography/methods; Contrast Media/pharmacology; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Aorta/pathology; Automation; Decision Trees; Humans; Kidney/pathology; Liver/pathology; Models, Statistical; Myocardium/pathology; Pattern Recognition, Automated; Reproducibility of Results