Results 1 - 20 of 30
1.
Radiology; 311(1): e232057, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38591974

ABSTRACT

Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), integrating two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance including accuracy, specificity, and sensitivity was assessed. The χ2 test was used to compare model performance in different subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal (n = 966) test sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). For classifying minimally invasive adenocarcinoma, the accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Sohn and Fields in this issue.
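The abstract describes the adjudication strategy only at a high level. The sketch below is a hypothetical illustration of how two binary classifiers could sequentially override a ternary model's discordant prediction; the threshold, override order, and function name are assumptions, not the paper's algorithm.

```python
import numpy as np

# Classes: 0 = preinvasive, 1 = minimally invasive, 2 = invasive adenocarcinoma.
def adjudicate(p_ternary, p_preinvasive, p_invasive, thr=0.5):
    """Resolve discordant predictions sequentially (illustrative rule only).

    p_ternary     -- softmax output of the ternary model, shape (3,)
    p_preinvasive -- binary model's probability that the nodule is preinvasive
    p_invasive    -- binary model's probability that the nodule is invasive
    """
    vote = int(np.argmax(p_ternary))
    # Step 1: a confident "invasive" binary model overrides a non-invasive vote.
    if p_invasive >= thr and vote != 2 and p_invasive > p_ternary[vote]:
        return 2
    # Step 2: a confident "preinvasive" binary model overrides next.
    if p_preinvasive >= thr and vote != 0 and p_preinvasive > p_ternary[vote]:
        return 0
    # Step 3: otherwise keep the ternary model's classification.
    return vote

print(adjudicate(np.array([0.2, 0.5, 0.3]), p_preinvasive=0.1, p_invasive=0.8))  # -> 2
```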


Subjects
Adenocarcinoma of Lung; Adenocarcinoma; Deep Learning; Lung Neoplasms; Humans; Female; Middle Aged; Retrospective Studies; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma/diagnostic imaging; Tomography, X-Ray Computed; Lung Neoplasms/diagnostic imaging
2.
Nat Commun; 15(1): 1131, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38326351

ABSTRACT

Early and accurate diagnosis of focal liver lesions is crucial for effective treatment and prognosis. We developed and validated, both retrospectively and prospectively, a fully automated diagnostic system named the Liver Artificial Intelligence Diagnosis System (LiAIDS) based on a diverse sample of 12,610 patients from 18 hospitals. In this study, LiAIDS achieved an F1-score of 0.940 for benign and 0.692 for malignant lesions, outperforming junior radiologists (benign: 0.830-0.890, malignant: 0.230-0.360) and performing on par with senior radiologists (benign: 0.920-0.950, malignant: 0.550-0.650). Furthermore, with the assistance of LiAIDS, the diagnostic accuracy of all radiologists improved. For benign and malignant lesions, junior radiologists' F1-scores improved to 0.936-0.946 and 0.667-0.680, respectively, while those of senior radiologists improved to 0.950-0.961 and 0.679-0.753. Additionally, in a triage study of 13,192 consecutive patients, LiAIDS automatically classified 76.46% of patients as low risk with a high negative predictive value (NPV) of 99.0%. The evidence suggests that LiAIDS can serve as a routine diagnostic tool and enhance the diagnostic capabilities of radiologists for liver lesions.


Subjects
Artificial Intelligence; Liver Neoplasms; Humans; Retrospective Studies; Radiologists; Liver Neoplasms/diagnostic imaging
3.
J Xray Sci Technol; 32(3): 583-596, 2024.
Article in English | MEDLINE | ID: mdl-38306089

ABSTRACT

PURPOSE: To explore the added value of peri-calcification regions on contrast-enhanced mammography (CEM) in the differential diagnosis of breast lesions presenting as calcification only on routine mammography. METHODS: Patients who underwent CEM because of suspicious calcification-only lesions were included. The test set included patients imaged between March 2017 and March 2019, while the validation set was collected between April 2019 and October 2019. The calcifications were automatically detected and grouped by a machine learning-based computer-aided system. In addition to extracting radiomic features from the calcification areas on both low-energy (LE) and recombined (RC) images, peri-calcification regions, generated by extending the annotation margin radially in 1-mm increments from 1 mm to 9 mm, were also evaluated. Machine learning (ML) models were built to classify calcifications into malignant and benign groups. The diagnostic metrics were also evaluated by combining the ML models with subjective reading. RESULTS: Models for LE (significant features: wavelet-LLL_glcm_Imc2_MLO; wavelet-HLL_firstorder_Entropy_MLO; wavelet-LHH_glcm_DifferenceVariance_CC; wavelet-HLL_glcm_SumEntropy_MLO; wavelet-HLH_glrlm_ShortRunLowGrayLevelEmphasis_MLO; original_firstorder_Entropy_MLO; original_shape_Elongation_MLO) and RC (significant features: wavelet-HLH_glszm_GrayLevelNonUniformityNormalized_MLO; wavelet-LLH_firstorder_10Percentile_CC; original_firstorder_Maximum_MLO; wavelet-HHH_glcm_Autocorrelation_MLO; original_shape_Elongation_MLO; wavelet-LHL_glszm_GrayLevelNonUniformityNormalized_MLO; wavelet-LLH_firstorder_RootMeanSquared_MLO) images were each built with 7 features. Areas under the curve (AUCs) of the RC models were significantly better than those of the LE models for both the compact and the expanded boundary (RC vs. LE, compact: 0.81 vs. 0.73, p < 0.05; expanded: 0.89 vs. 0.81, p < 0.05), and the RC model with a 3-mm boundary extension yielded the best performance among all extension sizes (AUC = 0.89). Combined with the radiologists' reading, the 3-mm-boundary RC model achieved a sensitivity of 0.871 and a negative predictive value of 0.937, with a similar accuracy of 0.843 in predicting malignancy. CONCLUSIONS: The machine learning model integrating intra- and peri-calcification regions on CEM has the potential to aid radiologists' performance in predicting the malignancy of suspicious breast calcifications.
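As a minimal sketch of how a peri-calcification region at a given margin might be generated before radiomic feature extraction (assuming a binary calcification mask and a known pixel spacing in millimetres; the function name and margin handling are illustrative, not the paper's implementation):

```python
# Illustrative sketch: build a peri-calcification ring mask by dilating the
# annotated calcification mask radially by a margin given in millimetres.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def peri_calcification_ring(mask: np.ndarray,
                            pixel_spacing_mm: float,
                            margin_mm: float = 3.0) -> np.ndarray:
    """Return the ring between the original mask and its dilated boundary."""
    iterations = max(1, int(round(margin_mm / pixel_spacing_mm)))
    structure = generate_binary_structure(mask.ndim, 1)  # 4-connectivity in 2D
    dilated = binary_dilation(mask, structure=structure, iterations=iterations)
    return dilated & ~mask.astype(bool)

# Example: a 3 mm ring around a toy 2D mask with 0.1 mm pixels.
toy_mask = np.zeros((200, 200), dtype=bool)
toy_mask[95:105, 95:105] = True
ring = peri_calcification_ring(toy_mask, pixel_spacing_mm=0.1, margin_mm=3.0)
```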


Subjects
Breast Neoplasms; Breast; Calcinosis; Contrast Media; Machine Learning; Mammography; Humans; Mammography/methods; Female; Calcinosis/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Middle Aged; Diagnosis, Differential; Breast/diagnostic imaging; Adult; Aged; Radiographic Image Interpretation, Computer-Assisted/methods
4.
J Thorac Dis; 15(10): 5475-5484, 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37969262

ABSTRACT

Background: This study assessed the diagnostic performance of a deep learning (DL)-based model for differentiating malignant subcentimeter (≤10 mm) solid pulmonary nodules (SSPNs) from benign ones in computed tomography (CT) images compared against radiologists with 10 and 15 years of experience in thoracic imaging (medium-senior seniority). Methods: Overall, 200 SSPNs (100 benign and 100 malignant) were retrospectively collected. Malignancy was confirmed by pathology, and benignity was confirmed by follow-up or pathology. CT images were fed into the DL model to obtain the probability of malignancy (range, 0-100%) for each nodule. According to the diagnostic results, enrolled nodules were classified into benign, malignant, or indeterminate. The accuracy and diagnostic composition of the model were compared with those of the radiologists using the McNemar-Bowker test. Enrolled nodules were divided into 3-6-, 6-8-, and 8-10-mm subgroups. For each subgroup, the diagnostic results of the model were compared with those of the radiologists. Results: The accuracy of the DL model, in differentiating malignant and benign SSPNs, was significantly higher than that of the radiologists (71.5% vs. 38.5%, P<0.001). The DL model reported more benign or malignant deterministic results and fewer indeterminate results. In subgroup analysis of nodule size, the DL model also yielded higher performance in comparison with that of the radiologists, providing fewer indeterminate results. The accuracy of the two methods in the 3-6-, 6-8-, and 8-10-mm subgroups was 75.5% vs. 28.3% (P<0.001), 62.0% vs. 28.2% (P<0.001), and 77.6% vs. 55.3% (P=0.001), respectively, and the indeterminate results were 3.8% vs. 66.0%, 8.5% vs. 66.2%, and 2.6% vs. 35.5% (all P<0.001), respectively. Conclusions: The DL-based method yielded higher performance in comparison with that of the radiologists in differentiating malignant and benign SSPNs. This DL model may reduce uncertainty in diagnosis and improve diagnostic accuracy, especially for SSPNs smaller than 8 mm.
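The abstract states that each nodule's malignancy probability was mapped to a benign, malignant, or indeterminate result, but does not give the cutoffs. A hypothetical sketch of such a mapping (the threshold values below are placeholders, not the study's):

```python
# Hypothetical post-processing of the model's malignancy probability into the
# three reported categories; the cutoff values are illustrative only.
def categorize(prob_malignant: float,
               low_cut: float = 0.25,
               high_cut: float = 0.75) -> str:
    if prob_malignant < low_cut:
        return "benign"
    if prob_malignant > high_cut:
        return "malignant"
    return "indeterminate"

print(categorize(0.90))  # -> "malignant"
print(categorize(0.50))  # -> "indeterminate"
```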

5.
Quant Imaging Med Surg; 13(10): 6424-6433, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37869340

ABSTRACT

Background: Extremity fractures are a leading cause of death and disability, especially in the elderly. Avulsion fractures are also among the most commonly missed diagnoses, and delayed diagnosis leads to higher litigation rates. Therefore, this study evaluated the diagnostic efficiency of an artificial intelligence (AI) model before and after optimization based on computed tomography (CT) images and then compared it with that of radiologists, especially for avulsion fractures. Methods: Digital radiography (DR) and CT images of adult limb trauma in our hospital from 2017 to 2020 were retrospectively collected, with or without one or more fractures of the shoulder, elbow, wrist, hand, hip, knee, ankle, and foot. Labeling of each fracture was based on its visualization on the corresponding CT images. After training the pre-optimized AI model, the diagnostic performance of the pre-optimized AI model, the optimized AI model, and the initial radiological reports was evaluated. At the lesion level, the detection rates of avulsion and non-avulsion fractures were analyzed, whereas at the case level, the accuracy, sensitivity, and specificity were compared among them. Results: The total dataset (1,035 cases) was divided into a training set (n=675), a validation set (n=169), and a test set (n=191) in a balanced joint distribution. At the lesion level, the detection rates of avulsion fractures (57.89% vs. 35.09%, P=0.004) and non-avulsion fractures (85.64% vs. 71.29%, P<0.001) by the optimized AI model were significantly higher than those by the pre-optimized AI model. The average precision (AP) of the optimized AI model for all lesions was higher than that of the pre-optimized AI model (0.582 vs. 0.425). The detection rate of avulsion fractures by the optimized AI model was significantly higher than that by radiologists (57.89% vs. 29.82%, P=0.002). For non-avulsion fractures, there was no significant difference in detection rate between the optimized AI model and radiologists (P=0.853). At the case level, the accuracy (86.40% vs. 71.93%, P<0.001) and sensitivity (87.29% vs. 73.48%, P<0.001) of the optimized AI model were significantly higher than those of the pre-optimized AI model. There was no statistical difference in accuracy, sensitivity, or specificity between the optimized AI model and the radiologists (P>0.05). Conclusions: The optimized AI model improves diagnostic efficacy in detecting extremity fractures on radiographs and is significantly better than radiologists at detecting avulsion fractures, which may be helpful in the clinical practice of orthopedic emergency care.

6.
Front Med (Lausanne); 10: 1154314, 2023.
Article in English | MEDLINE | ID: mdl-37448800

ABSTRACT

Objective: Post-hepatectomy liver failure (PHLF) remains a clinical challenge after major hepatectomy. The aim of this study was to establish and validate a deep learning model to predict PHLF after hemihepatectomy using preoperative contrast-enhanced computed tomography with three phases (non-contrast, arterial, and venous). Methods: 265 patients undergoing hemihepatectomy in Sir Run Run Shaw Hospital were enrolled in this study. The primary endpoint was PHLF, according to the International Study Group of Liver Surgery's definition. To evaluate the proposed method, a 5-fold cross-validation technique was used: the dataset was split into 5 folds of equal size, and each fold was used as a test set once, while the remaining folds were combined to form a training set. Performance metrics on each test set were calculated and stored. At the end of the 5-fold cross-validation run, the accuracy, precision, sensitivity, and specificity for predicting PHLF with the deep learning model, as well as the area under the receiver operating characteristic curve (AUC), were calculated. Results: Of the 265 patients, 170 underwent left liver resection and 95 underwent right liver resection. The diagnoses comprised 6 types: hepatocellular carcinoma, intrahepatic cholangiocarcinoma, liver metastases, benign tumor, hepatolithiasis, and other liver diseases. Laparoscopic liver resection was performed in 187 patients. The accuracy of prediction was 84.15% and the AUC was 0.7927. In the 170 left hemihepatectomy cases, the accuracy was 89.41% (152/170) and the AUC was 82.72%. The accuracy was 77.47% (141/182) in patients with a liver mass, 78.33% (47/60) in patients with liver cirrhosis, and 80.46% (70/87) in patients with viral hepatitis. Conclusion: The deep learning model showed excellent performance in predicting PHLF and could be useful for identifying high-risk patients and modifying treatment planning.
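As a minimal sketch of the 5-fold cross-validation protocol described above (with a placeholder classifier and random toy data standing in for the CT-based deep learning model and the actual cohort):

```python
# Minimal sketch of 5-fold cross-validation with per-fold metrics, then
# averaging across folds; features, labels, and classifier are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

X = np.random.rand(265, 32)          # stand-in features (e.g., CT-derived)
y = np.random.randint(0, 2, 265)     # 1 = PHLF, 0 = no PHLF (toy labels)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
metrics = []
for train_idx, test_idx in skf.split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn = np.sum((pred == 0) & (y[test_idx] == 0))
    fp = np.sum((pred == 1) & (y[test_idx] == 0))
    metrics.append({
        "accuracy": accuracy_score(y[test_idx], pred),
        "precision": precision_score(y[test_idx], pred, zero_division=0),
        "sensitivity": recall_score(y[test_idx], pred, zero_division=0),
        "specificity": tn / max(tn + fp, 1),
        "auc": roc_auc_score(y[test_idx], prob),
    })
mean_metrics = {k: float(np.mean([m[k] for m in metrics])) for k in metrics[0]}
print(mean_metrics)
```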

7.
Int J Comput Assist Radiol Surg; 18(12): 2213-2221, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37145252

ABSTRACT

PURPOSE: Preprocedural planning is a key step in radiofrequency ablation (RFA) treatment for liver tumors. It is a complex task with multiple constraints that relies heavily on the personal experience of interventional radiologists, and existing optimization-based automatic RFA planning methods are very time-consuming. In this paper, we aim to develop a heuristic RFA planning method that rapidly and automatically produces a clinically acceptable RFA plan. METHODS: First, the insertion direction is heuristically initialized based on the tumor long axis. Then, the 3D RFA planning problem is divided into insertion path planning and ablation position planning, which are further simplified into 2D by projections along two orthogonal directions. A heuristic algorithm based on regular arrangement and step-wise adjustment is proposed to carry out the 2D planning tasks. Experiments are conducted on multicenter data from patients with liver tumors of different sizes and shapes to evaluate the proposed method. RESULTS: The proposed method automatically generated clinically acceptable RFA plans within 3 min for all cases in the test set and the clinical validation set. All RFA plans produced by our method achieve 100% treatment-zone coverage without damaging vital organs. Compared with the optimization-based method, the proposed method reduces the planning time by dozens of times while generating RFA plans with similar ablation efficiency. CONCLUSION: The proposed method demonstrates a new way to rapidly and automatically generate clinically acceptable RFA plans under multiple clinical constraints. The plans produced by our method are consistent with the actual clinical plans in almost all cases, which demonstrates the effectiveness of the proposed method and can help reduce the burden on clinicians.
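A minimal sketch of the long-axis initialization step, assuming the tumor is available as a binary 3D mask with known voxel spacing: the first principal component of the tumor's voxel coordinates serves as an initial insertion direction. The function name and the toy mask are illustrative, not the paper's code.

```python
# Illustrative sketch: take the tumor's long axis (first principal component
# of its voxel coordinates in physical units) as the initial needle direction.
import numpy as np

def tumor_long_axis(tumor_mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> np.ndarray:
    coords = np.argwhere(tumor_mask) * np.asarray(spacing_mm)  # voxel -> mm
    centered = coords - coords.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]   # eigenvector of largest eigenvalue
    return direction / np.linalg.norm(direction)

toy = np.zeros((40, 40, 40), dtype=bool)
toy[10:30, 18:22, 18:22] = True          # elongated along the first axis
print(tumor_long_axis(toy))              # approximately (+/-1, 0, 0)
```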


Subjects
Catheter Ablation; Liver Neoplasms; Radiofrequency Ablation; Humans; Heuristics; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/surgery; Liver Neoplasms/pathology; Radiofrequency Ablation/methods; Algorithms; Tomography, X-Ray Computed; Catheter Ablation/methods
8.
IEEE Trans Pattern Anal Mach Intell; 45(7): 8020-8035, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018263

ABSTRACT

Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative: their goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not carry enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration, which explicitly encodes more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful aid to image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations. Code and models are available at https://github.com/RL4M/PCRLv2.
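A conceptual sketch of the combined objective at a single pyramid level, under the assumption that the restoration term is a mean-squared error and the comparison term is a negative cosine similarity between siamese embeddings; this is not the authors' implementation, only an illustration of the multi-task idea.

```python
# Conceptual sketch: combine a pixel-restoration loss with a siamese
# feature-comparison loss at one pyramid level.
import torch
import torch.nn.functional as F

def ssl_loss(restored, target_image, feat_view1, feat_view2, alpha=1.0):
    """restored/target_image: (B, C, D, H, W); feat_view*: (B, K) embeddings."""
    restoration = F.mse_loss(restored, target_image)           # pixel-level term
    z1 = F.normalize(feat_view1, dim=1)
    z2 = F.normalize(feat_view2, dim=1)
    comparison = -(z1 * z2).sum(dim=1).mean()                  # negative cosine similarity
    return restoration + alpha * comparison

# Toy usage with random tensors standing in for network outputs.
B = 2
loss = ssl_loss(torch.rand(B, 1, 8, 32, 32), torch.rand(B, 1, 8, 32, 32),
                torch.rand(B, 128), torch.rand(B, 128))
print(loss.item())
```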


Subjects
Algorithms; Brain Neoplasms; Humans; Imaging, Three-Dimensional; Semantics; Image Processing, Computer-Assisted
9.
Neuroimage; 271: 120041, 2023 May 01.
Article in English | MEDLINE | ID: mdl-36933626

ABSTRACT

Brain lesion segmentation provides a valuable tool for clinical diagnosis and research, and convolutional neural networks (CNNs) have achieved unprecedented success in the segmentation task. Data augmentation is a widely used strategy to improve the training of CNNs; in particular, approaches that mix pairs of annotated training images have been developed. These methods are easy to implement and have achieved promising results in various image processing tasks. However, existing data augmentation approaches based on image mixing are not designed for brain lesions and may not perform well for brain lesion segmentation. Thus, the design of this type of simple data augmentation method for brain lesion segmentation remains an open problem. In this work, we propose a simple yet effective data augmentation approach, dubbed CarveMix, for CNN-based brain lesion segmentation. Like other mixing-based methods, CarveMix stochastically combines two existing annotated images (annotated for brain lesions only) to obtain new labeled samples. To make the method more suitable for brain lesion segmentation, CarveMix is lesion-aware: the image combination is performed with a focus on the lesions and preserves the lesion information. Specifically, from one annotated image we carve a region of interest (ROI) according to the lesion location and geometry, with a variable ROI size. The carved ROI then replaces the corresponding voxels in a second annotated image to synthesize new labeled images for network training, and additional harmonization steps are applied for heterogeneous data in which the two annotated images can originate from different sources. In addition, we propose to model the mass effect that is unique to whole brain tumor segmentation during image mixing. To evaluate the proposed method, experiments were performed on multiple publicly available and private datasets, and the results show that our method improves the accuracy of brain lesion segmentation. The code of the proposed method is available at https://github.com/ZhangxinruBIT/CarveMix.git.
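A simplified sketch of the lesion-aware mixing idea (the variable ROI size, harmonization, and mass-effect modeling are omitted; the function and margin are illustrative, not the released CarveMix code):

```python
# Simplified sketch: carve the lesion region from one annotated scan and paste
# it into a second scan to create a new labeled training sample.
import numpy as np

def carvemix_like(img_a, lbl_a, img_b, lbl_b, margin=3):
    """img_*/lbl_*: 3D arrays of identical shape; lbl_* is a binary lesion mask."""
    coords = np.argwhere(lbl_a > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, np.array(img_a.shape))
    roi = tuple(slice(l, h) for l, h in zip(lo, hi))  # box around the lesion

    new_img, new_lbl = img_b.copy(), lbl_b.copy()
    new_img[roi] = img_a[roi]                         # replace voxels with the carved ROI
    new_lbl[roi] = lbl_a[roi]
    return new_img, new_lbl
```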


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Brain
10.
Dis Model Mech; 16(1), 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36695500

ABSTRACT

Parkinson's disease (PD), an age-dependent neurodegenerative disease, is characterised by the selective loss of dopaminergic neurons in the substantia nigra (SN). Mitochondrial dysfunction is a hallmark of PD, and mutations in PINK1, a gene necessary for mitochondrial fitness, cause PD. Drosophila melanogaster flies with pink1 mutations exhibit mitochondrial defects and dopaminergic cell loss and are used as a PD model. To gain an integrated view of the cellular changes caused by defects in the PINK1 pathway of mitochondrial quality control, we combined metabolomics and transcriptomics analysis in pink1-mutant flies with human induced pluripotent stem cell (iPSC)-derived neural precursor cells (NPCs) with a PINK1 mutation. We observed alterations in cysteine metabolism in both the fly and human PD models. Mitochondrial dysfunction in the NPCs resulted in changes in several metabolites that are linked to cysteine synthesis and increased glutathione levels. We conclude that alterations in cysteine metabolism may compensate for increased oxidative stress in PD, revealing a unifying mechanism of early-stage PD pathology that may be targeted for drug development. This article has an associated First Person interview with the first author of the paper.


Subjects
Drosophila Proteins; Induced Pluripotent Stem Cells; Neural Stem Cells; Neurodegenerative Diseases; Parkinson Disease; Animals; Humans; Drosophila melanogaster/metabolism; Cysteine; Parkinson Disease/metabolism; Neural Stem Cells/metabolism; Induced Pluripotent Stem Cells/metabolism; Protein Kinases/metabolism; Drosophila Proteins/metabolism; Protein Serine-Threonine Kinases/genetics
11.
Eur Radiol; 33(6): 3918-3930, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36515714

ABSTRACT

OBJECTIVES: To develop a pre-treatment CT-based predictive model to anticipate progression-free survival (PFS) under immunotherapy in patients with inoperable lung cancer. METHODS: This single-center retrospective study developed and cross-validated a radiomic model in 185 patients and tested it in 48 patients. The binary endpoint was durable clinical benefit (DCB, PFS ≥ 6 months) versus non-durable clinical benefit (NDCB, PFS < 6 months). Radiomic features were extracted from multiple intrapulmonary lesions and weighted by an attention-based multiple-instance learning model. Aggregated features were then selected through L2-regularized ridge regression. Five machine-learning classifiers were used to build predictive models from radiomic and clinical features alone and then in combination. Lastly, the predictive value of the best-performing model was validated by Kaplan-Meier survival analysis. RESULTS: The predictive models based on the weighted radiomic approach showed superior performance across all classifiers (AUCs: 0.75-0.82) compared with the largest-lesion approach (AUCs: 0.70-0.78) and the average-sum approach (AUCs: 0.64-0.80). Among them, the logistic regression model yielded the most balanced performance (AUC = 0.87 [95% CI 0.84-0.89], 0.75 [0.68-0.82], and 0.80 [0.68-0.92] in the training, validation, and test cohorts, respectively). The addition of five clinical characteristics significantly enhanced the performance of the radiomic-only model (training: AUC 0.91 [0.89-0.93], p = .042; validation: AUC 0.86 [0.80-0.91], p = .011; test: AUC 0.86 [0.76-0.96], p = .026). Kaplan-Meier analysis of the radiomic-based predictive models showed a clear stratification between classifier-predicted DCB and NDCB for PFS (HR = 2.40-2.95, p < 0.05). CONCLUSIONS: The adoption of weighted radiomic features from multiple intrapulmonary lesions has the potential to predict long-term PFS benefit in patients who are candidates for PD-1/PD-L1 immunotherapy. KEY POINTS: • A weighted radiomic model derived from multiple intrapulmonary lesions on pre-treatment CT images has the potential to predict durable clinical benefit of immunotherapy in lung cancer. • Early-line immunotherapy is associated with longer progression-free survival in advanced lung cancer.
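A minimal sketch of the attention-weighted pooling of lesion-level radiomic features into one patient-level vector, followed by an L2-regularized classifier. In the study the attention parameters are learned end to end; here they are random placeholders, and the feature dimensions and data are toy values used only to illustrate the pooling step.

```python
# Attention-based pooling of per-lesion radiomic features, then a ridge
# (L2-penalized) logistic regression on the aggregated patient-level vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def attention_pool(lesion_feats: np.ndarray, V: np.ndarray, w: np.ndarray) -> np.ndarray:
    """lesion_feats: (n_lesions, n_features) for one patient."""
    scores = np.tanh(lesion_feats @ V) @ w             # one score per lesion
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()                   # softmax attention weights
    return weights @ lesion_feats                       # weighted patient-level vector

n_feat, hidden = 20, 8
V, w = rng.normal(size=(n_feat, hidden)), rng.normal(size=hidden)
patients = [rng.normal(size=(rng.integers(1, 5), n_feat)) for _ in range(50)]
X = np.stack([attention_pool(p, V, w) for p in patients])
y = rng.integers(0, 2, size=50)                          # toy DCB vs. NDCB labels

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
```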


Subjects
Lung Neoplasms; Humans; Retrospective Studies; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/therapy; Kaplan-Meier Estimate; Tomography, X-Ray Computed/methods; Immunotherapy/methods
12.
IEEE J Biomed Health Inform; 27(1): 386-396, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36350857

ABSTRACT

Automatic and accurate differentiation of liver lesions from multi-phase computed tomography is critical for the early detection of liver cancer. Multi-phase data can provide more diagnostic information than single-phase data, and effective use of multi-phase data can significantly improve diagnostic accuracy. Current fusion methods usually fuse multi-phase information at the image level or the feature level and ignore the specificity of each modality; therefore, their information integration capacity is limited. In this paper, we propose a knowledge-guided framework, named MCCNet, which adaptively integrates multi-phase liver lesion information at three different stages to fully utilize and fuse multi-phase liver information. Specifically, 1) a multi-phase self-attention module is designed to adaptively combine and integrate complementary information from the three phases using multi-level phase features; 2) a cross-feature interaction module is proposed to further integrate multi-phase fine-grained features from a global perspective; and 3) a cross-lesion correlation module is proposed, for the first time, to imitate the clinical diagnosis process by exploiting inter-lesion correlations within the same patient. By integrating the above three modules into a 3D backbone, we construct a lesion classification network. The proposed network was validated on an in-house dataset containing 3,683 lesions from 2,333 patients in 9 hospitals. Extensive experimental results and evaluations in real-world clinical applications demonstrate the effectiveness of the proposed modules in exploiting and fusing multi-phase information.
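A conceptual sketch of phase-level self-attention fusion, assuming each CT phase has already been reduced to a feature vector: self-attention lets each phase token attend to the others before pooling into one lesion descriptor. The module below is illustrative and is not MCCNet's architecture.

```python
# Fuse per-phase feature vectors (non-contrast, arterial, venous) with
# self-attention so each phase can borrow complementary information.
import torch
import torch.nn as nn

class PhaseSelfAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, phase_feats: torch.Tensor) -> torch.Tensor:
        # phase_feats: (batch, 3, dim) -- one token per CT phase.
        fused, _ = self.attn(phase_feats, phase_feats, phase_feats)
        fused = self.norm(phase_feats + fused)          # residual connection
        return fused.mean(dim=1)                        # single lesion descriptor

feats = torch.rand(2, 3, 256)
lesion_descriptor = PhaseSelfAttentionFusion()(feats)   # shape (2, 256)
```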


Subjects
Hospitals; Liver Neoplasms; Humans; Knowledge; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
13.
Cancers (Basel); 14(19), 2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36230746

ABSTRACT

PURPOSE: Personalized treatments such as targeted therapy and immunotherapy have revolutionized the treatment paradigm for non-small cell lung cancer (NSCLC). However, these treatment decisions require the determination of targetable genomic and molecular alterations through invasive genetic or immunohistochemistry (IHC) tests. Numerous previous studies have demonstrated that artificial intelligence can accurately predict the single-gene status of tumors based on radiologic imaging, but few studies have achieved the simultaneous evaluation of multiple genes to reflect more realistic clinical scenarios. METHODS: We propose a multi-label multi-task deep learning (MMDL) system for non-invasively predicting actionable NSCLC mutations and PD-L1 expression from routinely acquired computed tomography (CT) images. This radiogenomic system integrated transformer-based deep learning features and radiomic features of CT volumes from 1096 NSCLC patients with next-generation sequencing (NGS) and IHC test results. RESULTS: For each task cohort, we randomly split the corresponding dataset into training (80%), validation (10%), and testing (10%) subsets. The areas under the receiver operating characteristic curve (AUCs) of the MMDL system were 0.862 (95% confidence interval (CI), 0.758-0.969) for discrimination of a panel of 8 mutated genes (EGFR, ALK, ERBB2, BRAF, MET, ROS1, RET, and KRAS), 0.856 (95% CI, 0.663-0.948) for identification of a 10-molecular-status panel (the previous 8 genes plus TP53 and PD-L1), and 0.868 (95% CI, 0.641-0.972) for classifying EGFR/PD-L1 subtypes, respectively. CONCLUSIONS: To the best of our knowledge, this is the first deep learning system to simultaneously analyze 10 molecular expressions, and it might be utilized as an assistive tool in conjunction with, or in lieu of, ancillary testing to support precision treatment options.
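A minimal sketch of the multi-label idea: a shared image-feature vector feeds a head with one sigmoid output per molecular marker, trained with binary cross-entropy. The feature extractor, dimensions, and marker order are placeholders, not the MMDL system.

```python
# Multi-label head over shared image features: one logit per molecular marker.
import torch
import torch.nn as nn

MARKERS = ["EGFR", "ALK", "ERBB2", "BRAF", "MET", "ROS1", "RET", "KRAS", "TP53", "PD-L1"]

class MultiLabelHead(nn.Module):
    def __init__(self, in_dim: int = 512, n_labels: int = len(MARKERS)):
        super().__init__()
        self.fc = nn.Linear(in_dim, n_labels)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.fc(features)                 # raw logits, one per marker

head = MultiLabelHead()
features = torch.rand(4, 512)                    # toy stand-in for shared CT features
targets = torch.randint(0, 2, (4, len(MARKERS))).float()
loss = nn.BCEWithLogitsLoss()(head(features), targets)
probs = torch.sigmoid(head(features))            # per-marker probabilities
```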

14.
Front Immunol; 13: 828560, 2022.
Article in English | MEDLINE | ID: mdl-35464416

ABSTRACT

Background: Programmed death-ligand 1 (PD-L1) assessment of lung cancer by immunohistochemical assay is the only approved diagnostic biomarker for immunotherapy. However, determining the PD-L1 tumor proportion score (TPS) is challenging owing to invasive sampling and intertumoral heterogeneity. There is therefore a strong demand for an artificial intelligence (AI) system that can measure the PD-L1 expression signature (ES) non-invasively. Methods: We developed an AI system using deep learning (DL), radiomics, and combination models based on computed tomography (CT) images of 1,135 non-small cell lung cancer (NSCLC) patients with known PD-L1 status. The deep learning features were obtained through a 3D ResNet feature-map extractor, and a specialized classifier was constructed for the prediction and evaluation tasks. A Cox proportional-hazards model combining clinical factors and the PD-L1 ES was then used to evaluate prognosis in the survival cohort. Results: The combination model achieved robust, high performance, with areas under the receiver operating characteristic curve (AUCs) of 0.950 (95% CI, 0.938-0.960), 0.934 (95% CI, 0.906-0.964), and 0.946 (95% CI, 0.933-0.958) for predicting PD-L1 ES <1%, 1-49%, and ≥50% in the validation cohort, respectively. Additionally, when the combination model was trained on multi-source features, its overall survival evaluation (C-index: 0.89) was superior to that of the clinical model alone (C-index: 0.86). Conclusion: A non-invasive deep learning measurement was proposed to assess PD-L1 expression and survival outcomes in NSCLC. This study also indicated that a deep learning model combined with clinical characteristics improved prediction capabilities, which would assist physicians in making rapid decisions on clinical treatment options.
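A minimal sketch of the survival step: fitting a Cox proportional-hazards model over clinical covariates plus an image-derived PD-L1 expression signature. The column names, distributions, and data below are toy placeholders, not the study's cohort or code.

```python
# Cox proportional-hazards model over clinical factors plus the AI-derived
# PD-L1 expression signature (ES), using the lifelines library.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "age": rng.normal(62, 8, n),
    "stage": rng.integers(1, 5, n),
    "pdl1_es": rng.uniform(0, 1, n),             # image-derived expression signature
    "os_months": rng.exponential(24, n),         # observed survival time
    "event": rng.integers(0, 2, n),              # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
print(cph.concordance_index_)                     # analogous to the reported C-index
```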


Subjects
Carcinoma, Non-Small-Cell Lung; Deep Learning; Lung Neoplasms; Algorithms; Artificial Intelligence; B7-H1 Antigen/metabolism; Carcinoma, Non-Small-Cell Lung/pathology; Humans; Lung Neoplasms/pathology
15.
Front Immunol; 13: 813072, 2022.
Article in English | MEDLINE | ID: mdl-35250988

ABSTRACT

BACKGROUND: Epidermal growth factor receptor (EGFR) genotype and programmed death ligand-1 (PD-L1) expression are of paramount importance for treatment guidelines, such as the use of tyrosine kinase inhibitors (TKIs) and immune checkpoint inhibitors (ICIs), in lung cancer. Conventional identification of EGFR or PD-L1 status requires surgical or biopsied tumor specimens, which are obtained through invasive procedures associated with a risk of morbidity and may be unavailable when tissue samples cannot be accessed. Here, we developed an artificial intelligence (AI) system that can predict EGFR and PD-L1 status using non-invasive computed tomography (CT) images. METHODS: A multitask AI system, including a deep learning (DL) module, a radiomics (RA) module, and a joint (JO) module combining the DL, RA, and clinical features, was developed, trained, and optimized with CT images to predict EGFR and PD-L1 status. We used feature selectors and feature fusion methods to find the best model among combinations of module types. The models were evaluated using the areas under the receiver operating characteristic curve (AUCs). RESULTS: Our multitask AI system yielded promising performance for gene expression status, subtype classification, and joint prediction. The AUCs of the DL module reached 0.842 (95% CI, 0.825-0.855) for EGFR mutation status and 0.805 (95% CI, 0.779-0.829) for discriminating mutated-EGFR subtypes (19Del, L858R, other mutations). The DL module also achieved AUCs of 0.799 (95% CI, 0.762-0.854) for PD-L1 expression status and 0.837 (95% CI, 0.775-0.911) for positive-PD-L1 subtypes (PD-L1 tumor proportion score, 1%-49% and ≥50%). Furthermore, the JO module of our AI system performed well in the EGFR and PD-L1 joint cohort, with an AUC of 0.928 (95% CI, 0.909-0.946) for distinguishing EGFR mutation status and 0.905 (95% CI, 0.886-0.930) for discriminating PD-L1 expression status. CONCLUSION: Our AI system demonstrated encouraging results for identifying gene status and further assessing genotypes. Both clinical indicators and radiomics features played complementary roles in prediction and provided accurate estimates of EGFR and PD-L1 status. Furthermore, this non-invasive, high-throughput, and interpretable AI system can be used as an assistive tool in conjunction with, or in lieu of, ancillary tests and extensive diagnostic workups to facilitate early intervention.


Subjects
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Artificial Intelligence; B7-H1 Antigen/metabolism; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/drug therapy; Carcinoma, Non-Small-Cell Lung/genetics; ErbB Receptors/metabolism; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/drug therapy; Lung Neoplasms/genetics; Tomography, X-Ray Computed
16.
Abdom Radiol (NY); 47(6): 2135-2147, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35344077

ABSTRACT

PURPOSE: To develop a deep learning model (DLM) to improve readers' interpretation and speed in differentiating pancreatic cystic lesions (PCLs) on dual-phase enhanced CT, and to validate the model with an external testing set acquired with a low contrast media dose. MATERIALS AND METHODS: Dual-phase enhanced CT images of 363 patients with 368 PCLs obtained from two centers were retrospectively assessed. Based on the examination date, a training and validation set of 266 PCLs and an internal testing set of 52 PCLs were designated from center 1. An external testing set included 50 PCLs from center 2. Clinical and radiological characteristics were compared. The DLM was developed using 3D, specially designed, densely connected convolutional networks for PCL differentiation. Radiomic features were extracted to build a traditional radiomics model (RM). The performance of the DLM, the traditional RM, and three readers was compared. RESULTS: The accuracy for differential diagnosis was 0.904 with the DLM, the highest in the internal testing set. Accuracy differences between the DLM and the senior radiologist were not significant in either the internal or the external testing set (both p > 0.05). With the help of the DLM, the accuracy and specificity of the junior radiologist were significantly improved (all p < 0.05), and all readers' diagnostic time was shortened (all p < 0.05). CONCLUSION: The DLM achieved senior-radiologist-level performance in differentiating benign and malignant PCLs and could improve the junior radiologist's interpretation and speed of PCL assessment on CT.


Subjects
Deep Learning; Pancreatic Cyst; Algorithms; Humans; Pancreatic Cyst/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed/methods
17.
IEEE Trans Pattern Anal Mach Intell; 44(10): 5947-5961, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34061740

ABSTRACT

Mammogram mass detection is crucial for diagnosing and preventing breast cancer in clinical practice. The complementary effect of multi-view mammogram images provides valuable information about the breast's anatomical prior structure and is of great significance in digital mammography interpretation. However, unlike radiologists, who can use natural reasoning ability to identify masses based on multiple mammographic views, how to endow existing object detection models with multi-view reasoning capability is vital for decision-making in clinical diagnosis but remains largely unexplored. In this paper, we propose an anatomy-aware graph convolutional network (AGN), which is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability. The proposed AGN consists of three steps. First, we introduce a bipartite graph convolutional network (BGN) to model the intrinsic geometric and semantic relations of ipsilateral views. Second, considering that the visual asymmetry of bilateral views is widely used in clinical practice to assist the diagnosis of breast lesions, we propose an inception graph convolutional network (IGN) to model the structural similarities of bilateral views. Finally, based on the constructed graphs, multi-view information is propagated through the nodes methodically, which equips the features learned from the examined view with multi-view reasoning ability. Experiments on two standard benchmarks show that AGN significantly exceeds state-of-the-art performance. Visualization results show that AGN provides interpretable visual cues for clinical diagnosis.


Subjects
Breast Neoplasms; Neural Networks, Computer; Algorithms; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Humans; Mammography/methods; Radiologists
18.
IEEE Trans Pattern Anal Mach Intell; 44(11): 7400-7416, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34822325

ABSTRACT

During clinical practice, radiologists often use attributes, e.g., morphological and appearance characteristics of a lesion, to aid disease diagnosis. Effectively modeling attributes as well as all relationships involving attributes could boost the generalization ability and verifiability of medical image diagnosis algorithms. In this paper, we introduce a hybrid neuro-probabilistic reasoning algorithm for verifiable attribute-based medical image diagnosis. There are two parallel branches in our hybrid algorithm, a Bayesian network branch performing probabilistic causal relationship reasoning and a graph convolutional network branch performing more generic relational modeling and reasoning using a feature representation. Tight coupling between these two branches is achieved via a cross-network attention mechanism and the fusion of their classification results. We have successfully applied our hybrid reasoning algorithm to two challenging medical image diagnosis tasks. On the LIDC-IDRI benchmark dataset for benign-malignant classification of pulmonary nodules in CT images, our method achieves a new state-of-the-art accuracy of 95.36% and an AUC of 96.54%. Our method also achieves a 3.24% accuracy improvement on an in-house chest X-ray image dataset for tuberculosis diagnosis. Our ablation study indicates that our hybrid algorithm achieves a much better generalization performance than a pure neural network architecture under very limited training data.


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Bayes Theorem; Humans; Lung Neoplasms/pathology; Radiologists; Solitary Pulmonary Nodule/pathology; Tomography, X-Ray Computed/methods
19.
Comput Methods Programs Biomed; 214: 106576, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34915425

ABSTRACT

BACKGROUND AND OBJECTIVE: Currently, the best performing methods in colonoscopy polyp detection are primarily based on deep neural networks (DNNs), which are usually trained on large amounts of labeled data. However, different hospitals use different endoscope models and set different imaging parameters, which causes the collected endoscopic images and videos to vary greatly in style. There may be variations in the color space, brightness, contrast, and resolution, and there are also differences between white light endoscopy (WLE) and narrow band image endoscopy (NBIE). We call these variations the domain shift. The DNN performance may decrease when the training data and the testing data come from different hospitals or different endoscope models. Additionally, it is quite difficult to collect enough new labeled data and retrain a new DNN model before deploying that DNN to a new hospital or endoscope model. METHODS: To solve this problem, we propose a domain adaptation model called Deep Reconstruction-Recoding Network (DRRN), which jointly learns a shared encoding representation for two tasks: i) a supervised object detection network for labeled source data, and ii) an unsupervised reconstruction-recoding network for unlabeled target data. Through the DRRN, the object detection network's encoder not only learns the features from the labeled source domain, but also encodes useful information from the unlabeled target domain. Therefore, the distribution difference of the two domains' feature spaces can be reduced. RESULTS: We evaluate the performance of the DRRN on a series of cross-domain datasets. Compared with training the polyp detection network using only source data, the performance of the DRRN on the target domain is improved. Through feature statistics and visualization, it is demonstrated that the DRRN can learn the common distribution and feature invariance of the two domains. The distribution difference between the feature spaces of the two domains can be reduced. CONCLUSION: The DRRN can improve cross-domain polyp detection. With the DRRN, the generalization performance of the DNN-based polyp detection model can be improved without additional labeled data. This improvement allows the polyp detection model to be easily transferred to datasets from different hospitals or different endoscope models.
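A conceptual sketch of the joint objective: one shared encoder trained with a supervised loss on labeled source images and an unsupervised reconstruction loss on unlabeled target images. For brevity, detection is reduced to classification and the architecture is a toy two-layer network; this is an illustration of the shared-encoder idea, not the DRRN itself.

```python
# Shared encoder with a supervised head (labeled source domain) and a
# reconstruction branch (unlabeled target domain).
import torch
import torch.nn as nn

class SharedEncoderModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
        self.decoder = nn.Conv2d(16, 3, 3, padding=1)   # reconstruction branch

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), self.decoder(z)

model = SharedEncoderModel()
src_img, src_lbl = torch.rand(4, 3, 64, 64), torch.randint(0, 2, (4,))
tgt_img = torch.rand(4, 3, 64, 64)                      # unlabeled target domain

src_logits, _ = model(src_img)
_, tgt_recon = model(tgt_img)
loss = nn.CrossEntropyLoss()(src_logits, src_lbl) + 0.5 * nn.MSELoss()(tgt_recon, tgt_img)
loss.backward()
```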


Subjects
Neural Networks, Computer; Polyps; Colonoscopy; Humans
20.
IEEE J Biomed Health Inform; 26(3): 1251-1262, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34613925

ABSTRACT

Segmentation of hepatic vessels from 3D CT images is necessary for accurate diagnosis and preoperative planning for liver cancer. However, due to the low contrast and high noises of CT images, automatic hepatic vessel segmentation is a challenging task. Hepatic vessels are connected branches containing thick and thin blood vessels, showing an important structural characteristic or a prior: the connectivity of blood vessels. However, this is rarely applied in existing methods. In this paper, we segment hepatic vessels from 3D CT images by utilizing the connectivity prior. To this end, a graph neural network (GNN) used to describe the connectivity prior of hepatic vessels is integrated into a general convolutional neural network (CNN). Specifically, a graph attention network (GAT) is first used to model the graphical connectivity information of hepatic vessels, which can be trained with the vascular connectivity graph constructed directly from the ground truths. Second, the GAT is integrated with a lightweight 3D U-Net by an efficient mechanism called the plug-in mode, in which the GAT is incorporated into the U-Net as a multi-task branch and is only used to supervise the training procedure of the U-Net with the connectivity prior. The GAT will not be used in the inference stage, and thus will not increase the hardware and time costs of the inference stage compared with the U-Net. Therefore, hepatic vessel segmentation can be well improved in an efficient mode. Extensive experiments on two public datasets show that the proposed method is superior to related works in accuracy and connectivity of hepatic vessel segmentation.
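A conceptual sketch of the "plug-in" training scheme: an auxiliary branch adds a connectivity-supervision loss during training but is discarded at inference, so only the segmentation network runs at test time. Both modules below are toy stand-ins, not the paper's GAT or lightweight 3D U-Net, and the connectivity target is a placeholder.

```python
# Auxiliary training-only supervision: total loss = segmentation loss +
# lambda * connectivity loss; inference uses only the segmentation network.
import torch
import torch.nn as nn

seg_net = nn.Conv3d(1, 2, kernel_size=3, padding=1)          # stand-in for a 3D U-Net
aux_branch = nn.Conv3d(2, 1, kernel_size=1)                   # stand-in for the GAT branch

volume = torch.rand(1, 1, 16, 64, 64)
vessel_gt = torch.randint(0, 2, (1, 16, 64, 64))
connectivity_gt = torch.rand(1, 1, 16, 64, 64)                # toy connectivity target

logits = seg_net(volume)
seg_loss = nn.CrossEntropyLoss()(logits, vessel_gt)
aux_loss = nn.MSELoss()(aux_branch(logits), connectivity_gt)  # training-only supervision
(seg_loss + 0.1 * aux_loss).backward()

# Inference: the auxiliary branch is dropped, so there is no extra cost.
with torch.no_grad():
    prediction = seg_net(volume).argmax(dim=1)
```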


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional