Results 1 - 20 of 58
1.
ArXiv ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38855539

ABSTRACT

Knowledge distillation (KD) has demonstrated remarkable success across various domains, but its application to medical imaging tasks, such as kidney and liver tumor segmentation, has encountered challenges. Many existing KD methods are not specifically tailored for these tasks. Moreover, prevalent KD methods often lack a careful consideration of 'what' and 'from where' to distill knowledge from the teacher to the student. This oversight may lead to issues like the accumulation of training bias within shallower student layers, potentially compromising the effectiveness of KD. To address these challenges, we propose Hierarchical Layer-selective Feedback Distillation (HLFD). HLFD strategically distills knowledge from a combination of middle layers to earlier layers and transfers final layer knowledge to intermediate layers at both the feature and pixel levels. This design allows the model to learn higher-quality representations from earlier layers, resulting in a robust and compact student model. Extensive quantitative evaluations reveal that HLFD outperforms existing methods by a significant margin. For example, in the kidney segmentation task, HLFD surpasses the student model (without KD) by over 10%, significantly improving its focus on tumor-specific features. From a qualitative standpoint, the student model trained using HLFD excels at suppressing irrelevant information and can focus sharply on tumor-specific details, which opens a new pathway for more efficient and accurate diagnostic tools. Code is available here.
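To make the feature-level distillation described above concrete, the sketch below aligns an early student feature map with a deeper teacher feature map via a 1x1 projection and an MSE loss. This is a minimal illustration under assumed channel sizes and loss choice, not the authors' HLFD implementation.

```python
# Hypothetical sketch of layer-selective feature distillation (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """Aligns a shallow student feature map with a deeper teacher feature map."""
    def __init__(self, student_ch: int, teacher_ch: int):
        super().__init__()
        # 1x1 conv projects student features into the teacher's channel space
        self.proj = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, f_student, f_teacher):
        f_s = self.proj(f_student)
        # Match spatial resolution before comparing
        f_s = F.interpolate(f_s, size=f_teacher.shape[-2:], mode="bilinear",
                            align_corners=False)
        return F.mse_loss(f_s, f_teacher.detach())

# Usage: distill teacher middle-layer knowledge into an earlier student layer
loss_fn = FeatureDistillLoss(student_ch=64, teacher_ch=256)
f_student = torch.randn(2, 64, 56, 56)   # early student feature map
f_teacher = torch.randn(2, 256, 28, 28)  # middle teacher feature map
kd_loss = loss_fn(f_student, f_teacher)
```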

2.
Acad Radiol ; 31(6): 2424-2433, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving a 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false positive of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.
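For reference, the segmentation and classification metrics reported above can be computed as follows; this is a generic sketch and does not reproduce the study's lesion-matching logic.

```python
# Illustrative metric helpers for the evaluation described above (not the study's code).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Positive and negative predictive values from a 2x2 confusion matrix."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv
```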


Subjects
Bone Neoplasms; Deep Learning; Neoplasm Staging; Prostatic Neoplasms; Tomography, X-Ray Computed; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Bone Neoplasms/diagnostic imaging; Bone Neoplasms/secondary; Retrospective Studies; Tomography, X-Ray Computed/methods; Aged; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods
3.
Article in English | MEDLINE | ID: mdl-38082949

ABSTRACT

Accurate segmentation of organs-at-risk (OARs) is a precursor for optimizing radiation therapy planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation. The key to their success is aggregating global context and maintaining high-resolution representations. However, when translated to 3D segmentation problems, existing multi-scale fusion architectures may underperform due to their heavy computational overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, called OARFocalFuseNet, which fuses multi-scale features and employs focal modulation for capturing global-local context across multiple scales. Each resolution stream is enriched with features from different resolution scales, and multi-scale information is aggregated to model diverse contextual ranges, further boosting the feature representations. Comprehensive comparisons in our experimental setup on OAR segmentation as well as multi-organ segmentation show that the proposed OARFocalFuseNet outperforms recent state-of-the-art methods on the publicly available OpenKBP dataset and the Synapse multi-organ segmentation dataset. Both of the proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance in terms of standard evaluation metrics. Our best-performing method (OARFocalFuseNet) obtained a Dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a Dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset. Our code is available at https://github.com/NoviceMAn-prog/OARFocalFuse.
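A minimal sketch of the multi-scale fusion idea described above, in which each 3D resolution stream is enriched with resampled features from the others; the channel counts and 1x1x1-convolution mixing are illustrative assumptions, not the OARFocalFuseNet code (which additionally applies focal modulation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFuse(nn.Module):
    """Enrich each 3D resolution stream with features resampled from the others."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        total = sum(channels)
        # One 1x1x1 conv per stream mixes the concatenated multi-scale features
        self.mix = nn.ModuleList(nn.Conv3d(total, c, kernel_size=1) for c in channels)

    def forward(self, feats):
        fused = []
        for i, f in enumerate(feats):
            size = f.shape[-3:]
            resampled = [
                g if g.shape[-3:] == size else
                F.interpolate(g, size=size, mode="trilinear", align_corners=False)
                for g in feats
            ]
            fused.append(self.mix[i](torch.cat(resampled, dim=1)))
        return fused

# Toy usage: three streams at decreasing resolution
streams = [torch.randn(1, c, s, s, s) for c, s in zip((32, 64, 128), (32, 16, 8))]
outputs = MultiScaleFuse()(streams)  # same shapes as the inputs
```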


Subjects
Organs at Risk; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods
4.
Article in English | MEDLINE | ID: mdl-38083589

ABSTRACT

Colorectal cancer (CRC) is one of the most common causes of cancer and cancer-related mortality worldwide. Performing colon cancer screening in a timely fashion is the key to early detection. Colonoscopy is the primary modality used to diagnose colon cancer. However, the miss rate of polyps, adenomas and advanced adenomas remains significantly high. Early detection of polyps at the precancerous stage can help reduce the mortality rate and the economic burden associated with colorectal cancer. Deep learning-based computer-aided diagnosis (CADx) systems may help gastroenterologists identify polyps that might otherwise be missed, thereby improving the polyp detection rate. Additionally, CADx systems could prove to be cost-effective for long-term colorectal cancer prevention. In this study, we propose a deep learning-based architecture for automatic polyp segmentation called Transformer ResU-Net (TransResU-Net). The proposed architecture is built upon residual blocks with ResNet-50 as the backbone and takes advantage of the transformer self-attention mechanism as well as dilated convolutions. Our experimental results on two publicly available polyp segmentation benchmark datasets show that TransResU-Net obtains highly promising Dice scores at real-time speed. Given its strong performance, we conclude that TransResU-Net could be a strong benchmark for building real-time polyp detection systems for the early diagnosis, treatment, and prevention of colorectal cancer. The source code of the proposed TransResU-Net is publicly available at https://github.com/nikhilroxtomar/TransResUNet.
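As an illustration of the building blocks named above (residual structure plus dilated convolutions), here is a generic sketch; it is not the released TransResU-Net code, and the dilation rate is an assumption.

```python
# Sketch of a residual block with dilated convolutions (hypothetical configuration).
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        pad = dilation  # keeps spatial size for 3x3 kernels
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=pad, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual connection

y = DilatedResidualBlock(64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```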


Subjects
Adenoma; Colonic Neoplasms; Colonic Polyps; Colorectal Neoplasms; Humans; Colorectal Neoplasms/diagnosis; Early Detection of Cancer; Colonic Polyps/diagnostic imaging; Colonic Neoplasms/diagnostic imaging; Adenoma/diagnostic imaging
5.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. Advancements in clinical decision support in pediatric neuro-oncology utilizing the wealth of radiology imaging data collected through standard care, however, have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, as well as tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, we aim to accelerate discovery and translational AI models with real-world data, and ultimately to empower precision medicine for children.

6.
Curr Opin Gastroenterol ; 39(5): 436-447, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37523001

ABSTRACT

PURPOSE OF REVIEW: Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI). RECENT FINDINGS: This review highlights the recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, as well as incorporating auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings. SUMMARY: Deep learning and radiomics with medical imaging have demonstrated strong potential to improve diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required that focus on refining these methods, address significant limitations, and develop integrative approaches to data analysis in order to further advance the field of pancreatic cancer diagnosis.
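The radiomics workflow surveyed above (handcrafted feature extraction followed by a classical machine learning classifier) can be summarized in a few lines; the features and labels below are synthetic placeholders for illustration only.

```python
# Conceptual radiomics-style pipeline: handcrafted features + a classical classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 90))    # e.g., 90 radiomic features per lesion (synthetic)
y = rng.integers(0, 2, size=200)  # benign (0) vs malignant (1), synthetic labels

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=300, random_state=0))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```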


Subjects
Deep Learning; Pancreatic Neoplasms; Humans; Artificial Intelligence; Pancreas; Pancreatic Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
7.
Mach Learn Med Imaging ; 14349: 134-143, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38274402

ABSTRACT

Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreas lesions that can progress into pancreatic cancer. Therefore, detecting and stratifying their risk level is of ultimate importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of the IPMN cysts as well as the pancreas. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. Our proposed analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme with a radiomics-based predictive approach. In a series of rigorous experiments on multi-center datasets (246 multi-contrast MRI scans from five centers), our decision-fusion model obtained performance superior to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both the radiomics and deep learning modules for achieving the new SOTA performance compared to international guidelines and published studies (81.9% vs. 61.3% accuracy). Our findings have important implications for clinical decision-making. The code is available upon publication.
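A toy sketch of the decision-fusion idea mentioned above: late fusion of class probabilities from a deep learning model and a radiomics classifier. The fusion weight and three-class risk setup are assumptions for illustration, not the paper's exact scheme.

```python
# Hypothetical late fusion of two class-probability vectors (illustrative only).
import numpy as np

def fuse_probabilities(p_deep: np.ndarray, p_radiomics: np.ndarray,
                       w: float = 0.5) -> np.ndarray:
    """Weighted average of deep-model and radiomics-model probabilities."""
    return w * p_deep + (1.0 - w) * p_radiomics

p_deep = np.array([0.2, 0.5, 0.3])        # low/medium/high risk from the CNN
p_rad = np.array([0.1, 0.3, 0.6])         # same classes from the radiomics model
print(fuse_probabilities(p_deep, p_rad))  # fused risk distribution
```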

8.
Med Image Comput Comput Assist Interv ; 14222: 736-746, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38299070

ABSTRACT

Vision Transformer (ViT) models have demonstrated breakthroughs in a wide range of computer vision tasks. However, compared to Convolutional Neural Network (CNN) models, ViT models have been observed to struggle to capture the high-frequency components of images, which can limit their ability to detect local textures and edge information. As abnormalities in human tissue, such as tumors and lesions, vary greatly in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation. To address this limitation of ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our method utilizes a dual attention mechanism combining efficient attention and frequency attention: the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks, with +1.87% and +0.76% Dice score improvements over SOTA approaches, respectively. Our implementation is publicly available at GitHub.
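For intuition, the sketch below builds the Laplacian pyramid that the method re-calibrates: per-level high-frequency residuals (edges, texture) plus a low-frequency base. The attention re-weighting itself is omitted; this is only the frequency decomposition.

```python
# Minimal Laplacian pyramid decomposition (illustrative; not the paper's module).
import torch
import torch.nn.functional as F

def laplacian_pyramid(x: torch.Tensor, levels: int = 3):
    """Return high-frequency residuals per level plus the low-frequency base."""
    pyramid = []
    current = x
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        pyramid.append(current - up)  # high-frequency band (edges, texture)
        current = down
    pyramid.append(current)           # low-frequency base
    return pyramid

bands = laplacian_pyramid(torch.randn(1, 1, 64, 64))  # 3 residuals + 1 base
```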

9.
Front Neurosci ; 16: 911065, 2022.
Article in English | MEDLINE | ID: mdl-35873825

ABSTRACT

Radiomics-guided prediction of overall survival (OS) in brain gliomas is seen as a significant problem in neuro-oncology. The ultimate goal is to develop a robust MRI-based approach (i.e., a radiomics model) that can accurately classify a novel subject as a short-term, medium-term, or long-term survivor. The BraTS 2020 challenge provides radiological imaging and clinical data (178 subjects) to develop and validate radiomics-based methods for OS classification in brain gliomas. In this study, we empirically evaluated the efficacy of four multiregional radiomic models for OS classification and quantified the robustness of predictions to variations in automatic segmentation of brain tumor volume. More specifically, we evaluated four radiomic models, namely, the Whole Tumor (WT) radiomics model, the 3-subregions radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model. The 3-subregions radiomics model is based on a physiological segmentation of the whole tumor volume (WT) into three non-overlapping subregions. The 6-subregions and 21-subregions radiomic models are based on an anatomical segmentation of the brain tumor into 6 and 21 anatomical regions, respectively. Moreover, we employed six segmentation schemes - five CNNs and one STAPLE-fusion method - to quantify the robustness of the radiomic models. Our experiments revealed that the 3-subregions radiomics model had the best predictive performance (mean AUC = 0.73) but poor robustness (RSD = 1.99), whereas the 6-subregions and 21-subregions radiomics models were more robust (RSD ≤ 1.39) with lower predictive performance (mean AUC ≤ 0.71). The poor robustness of the 3-subregions radiomics model was associated with highly variable and inferior segmentation of the tumor core and active tumor subregions, as quantified by the Hausdorff distance metric (4.4-6.5 mm) across the six segmentation schemes. Failure analysis revealed that the WT, 6-subregions, and 21-subregions radiomics models failed for the same subjects, which is attributed to their common requirement of accurate segmentation of the WT volume. Moreover, short-term survivors were largely misclassified by the radiomic models and had large segmentation errors (average Hausdorff distance of 7.09 mm). Lastly, we concluded that while STAPLE-fusion can reduce segmentation errors, it is not a solution to learning accurate and robust radiomic models.
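Assuming RSD denotes the relative standard deviation of performance across the six segmentation schemes (our reading of the abstract), it can be computed as below; the AUC values are synthetic stand-ins.

```python
# Illustrative robustness measure: relative standard deviation (RSD) of AUC
# across segmentation schemes (synthetic numbers, assumed definition).
import numpy as np

def relative_std(values: np.ndarray) -> float:
    """RSD (%) = 100 * std / mean; lower means more robust predictions."""
    return 100.0 * values.std(ddof=1) / values.mean()

aucs_per_scheme = np.array([0.71, 0.73, 0.75, 0.72, 0.70, 0.74])  # six schemes
print(f"mean AUC = {aucs_per_scheme.mean():.2f}, "
      f"RSD = {relative_std(aucs_per_scheme):.2f}%")
```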

10.
Proc Int Conf Image Anal Process ; 13374: 340-347, 2022 May.
Article in English | MEDLINE | ID: mdl-36745150

ABSTRACT

Automated liver segmentation from radiology scans (CT, MRI) can improve surgery and therapy planning and follow-up assessment, in addition to its conventional use for diagnosis and prognosis. Although convolutional neural networks (CNNs) have become the standard for image segmentation tasks, this has recently started to shift toward Transformer-based architectures, which capture long-range dependencies in signals through the so-called attention mechanism. In this study, we propose a new segmentation approach that combines Transformers with a Generative Adversarial Network (GAN). The premise behind this choice is that the self-attention mechanism of Transformers allows the network to aggregate high-dimensional features and provides global information modeling, yielding better segmentation performance than traditional methods. Furthermore, we embed this generator into a GAN-based architecture so that the discriminator network can assess the credibility of the generated segmentation masks against real masks derived from human (expert) annotations. This allows us to extract high-dimensional topological information from the mask for biomedical image segmentation and provide more reliable segmentation results. Our model achieved a Dice coefficient of 0.9433, recall of 0.9515, and precision of 0.9376, outperforming other Transformer-based approaches. The implementation details of the proposed architecture can be found at https://github.com/UgurDemir/tranformer_liver_segmentation.
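A hedged sketch of the adversarial component described above: a discriminator judges (image, mask) pairs and is trained with a binary cross-entropy objective. The generator (the Transformer-based segmenter) is omitted, and the layer configuration is an assumption.

```python
# Hypothetical mask discriminator for adversarial segmentation training.
import torch
import torch.nn as nn

class MaskDiscriminator(nn.Module):
    """Classifies whether a segmentation mask paired with its image is real."""
    def __init__(self, in_ch: int = 2):  # 1 image channel + 1 mask channel
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

disc = MaskDiscriminator()
image = torch.randn(2, 1, 128, 128)
real_mask = torch.rand(2, 1, 128, 128)
logits = disc(image, real_mask)  # train with BCEWithLogitsLoss: 1 = real pair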

11.
Article in English | MEDLINE | ID: mdl-36777398

ABSTRACT

The detection and removal of precancerous polyps through colonoscopy is the primary technique for the prevention of colorectal cancer worldwide. However, the miss rate of colorectal polyps varies significantly among endoscopists. It is well known that a computer-aided diagnosis (CADx) system can assist endoscopists in detecting colon polyps and minimize the variation among endoscopists. In this study, we introduce a novel deep learning architecture, named MKDCNet, for automatic polyp segmentation that is robust to significant changes in polyp data distribution. MKDCNet is an encoder-decoder neural network that uses a pre-trained ResNet50 as the encoder and a novel multiple kernel dilated convolution (MKDC) block that expands the field of view to learn more robust and heterogeneous representations. Extensive experiments on four publicly available polyp datasets and a cell nuclei dataset show that the proposed MKDCNet outperforms state-of-the-art methods when trained and tested on the same dataset, as well as when tested on unseen polyp datasets from different distributions. These results demonstrate the robustness of the proposed architecture. From an efficiency perspective, our algorithm processes approximately 45 frames per second on an RTX 3090 GPU. MKDCNet can be a strong benchmark for building real-time systems for clinical colonoscopies. The code of the proposed MKDCNet is available at https://github.com/nikhilroxtomar/MKDCNet.
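A minimal sketch in the spirit of the multiple kernel dilated convolution block named above: parallel dilation rates widen the field of view before a 1x1 fusion. The specific rates and channel handling here are assumptions, not the released MKDCNet block.

```python
# Hypothetical multi-rate dilated block (illustrative assumptions throughout).
import torch
import torch.nn as nn

class MultiKernelDilated(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 3, 6, 9)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=rate preserves spatial size
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = MultiKernelDilated(64, 64)(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```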

12.
Article in English | MEDLINE | ID: mdl-36777397

ABSTRACT

Video capsule endoscopy (VCE) is an active topic in computer vision and medicine. Deep learning can have a positive impact on the future of video capsule endoscopy technology: it can improve the anomaly detection rate, reduce physicians' screening time, and aid in real-world clinical analysis. Computer-aided diagnosis (CADx) classification systems for video capsule endoscopy have shown great promise for further improvement. For example, detection of cancerous polyps and bleeding can lead to swift medical response and improve patient survival rates. To this end, an automated CADx system must have high throughput and decent accuracy. In this study, we propose FocalConvNet, a focal modulation network integrated with lightweight convolutional layers for the classification of small bowel anatomical landmarks and luminal findings. FocalConvNet leverages focal modulation to attain global context and allows global-local spatial interactions throughout the forward pass. Moreover, the convolutional block, with its intrinsic inductive bias and capacity to extract hierarchical features, allows FocalConvNet to achieve favourable results with high throughput. We compare FocalConvNet with other state-of-the-art (SOTA) methods on Kvasir-Capsule, a large-scale VCE dataset with 44,228 frames across 13 classes of different anomalies. We achieved a weighted F1-score, recall and Matthews correlation coefficient (MCC) of 0.6734, 0.6373 and 0.2974, respectively, outperforming SOTA methodologies. Further, we obtained the highest throughput of 148.02 images/second, establishing the potential of FocalConvNet in a real-time clinical environment. The code of the proposed FocalConvNet is available at https://github.com/NoviceMAn-prog/FocalConvNet.
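The reported metrics (weighted F1-score, recall, MCC) can be reproduced with scikit-learn as sketched below, on synthetic stand-in predictions.

```python
# Metric computation matching those reported above (synthetic predictions).
import numpy as np
from sklearn.metrics import f1_score, recall_score, matthews_corrcoef

rng = np.random.default_rng(2)
y_true = rng.integers(0, 13, size=1000)  # 13 anomaly classes, as in Kvasir-Capsule
y_pred = rng.integers(0, 13, size=1000)

print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("weighted recall:", recall_score(y_true, y_pred, average="weighted"))
print("MCC:", matthews_corrcoef(y_true, y_pred))
```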

13.
Med Image Comput Comput Assist Interv ; 13433: 151-160, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36780239

ABSTRACT

Colonoscopy is a gold-standard procedure but is highly operator-dependent. Automated segmentation of polyps, which are precancerous precursors, can minimize miss rates and enable timely treatment of colon cancer at an early stage. Even though deep learning methods have been developed for this task, variability in polyp size can impact model training, biasing the model toward the size attribute of the majority of samples in the training dataset and producing sub-optimal results for differently sized polyps. In this work, we exploit size-related and polyp-number-related features in the form of text attention during training. We introduce an auxiliary classification task to weight the text-based embedding, which allows the network to learn additional feature representations that can distinctly adapt to differently sized polyps and to cases with multiple polyps. Our experimental results demonstrate that these added text embeddings improve the overall performance of the model compared to state-of-the-art segmentation methods. We explore four different datasets and provide insights into size-specific improvements. Our proposed text-guided attention network (TGANet) generalizes well to variable-sized polyps in different datasets. Codes are available at https://github.com/nikhilroxtomar/TGANet.
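A conceptual sketch of the text-attention idea above: an auxiliary classifier over pooled features produces soft attribute weights (e.g., polyp size categories) that select a learned text embedding, which then modulates the visual features. All names, dimensions, and the modulation scheme are illustrative assumptions, not the TGANet code.

```python
# Hypothetical text-attribute conditioning module (assumed design, for intuition).
import torch
import torch.nn as nn

class TextAttention(nn.Module):
    def __init__(self, feat_ch: int = 256, n_attrs: int = 3):
        super().__init__()
        # Embedding dim equals feat_ch so the text vector can modulate channels
        self.embed = nn.Embedding(n_attrs, feat_ch)   # e.g. small/medium/large
        self.aux_head = nn.Linear(feat_ch, n_attrs)   # auxiliary classification

    def forward(self, feats):                         # feats: (B, C, H, W)
        pooled = feats.mean(dim=(2, 3))               # global average pool
        attr_logits = self.aux_head(pooled)           # supervise with size labels
        weights = attr_logits.softmax(dim=-1)         # soft attribute weights
        text = weights @ self.embed.weight            # (B, feat_ch)
        # Modulate features channel-wise with the weighted text embedding
        return feats * text.unsqueeze(-1).unsqueeze(-1), attr_logits

feats = torch.randn(2, 256, 32, 32)
modulated, attr_logits = TextAttention()(feats)  # attr_logits feeds the auxiliary loss
```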

14.
IEEE Access ; 9: 87531-87542, 2021.
Article in English | MEDLINE | ID: mdl-34733603

ABSTRACT

In this study, we formulated an efficient deep learning-based classification strategy for characterizing metastatic bone lesions using computed tomography (CT) scans of prostate cancer patients. For this purpose, 2,880 annotated bone lesions from CT scans of 114 patients diagnosed with prostate cancer were used for training, validation, and final evaluation. These annotations were in the form of full lesion segmentations, lesion types, and labels of either benign or malignant. In this work, we present our approach to developing a state-of-the-art model to classify bone lesions as benign or malignant, where (1) we introduce a valuable dataset to address a clinically important problem, (2) we increase the reliability of our model by patient-level stratification of our dataset following a lesion-aware distribution at each of the training, validation, and test splits, (3) we explore the impact of lesion texture, morphology, size, location, and volumetric information on classification performance, and (4) we investigate lesion classification using different algorithms, including lesion-based average 2D ResNet-50, lesion-based average 2D ResNeXt-50, 3D ResNet-18, and 3D ResNet-50, as well as an ensemble of the 2D ResNet-50 and 3D ResNet-18. For this purpose, we employed a train/validation/test split of 75%/12%/13%, with several data augmentation methods applied to the training dataset to avoid overfitting and increase reliability. We achieved an accuracy of 92.2% for correct classification of benign vs. malignant bone lesions in the test set using an ensemble of lesion-based average 2D ResNet-50 and 3D ResNet-18, with texture, volumetric information, and morphology having the greatest discriminative power, respectively. To the best of our knowledge, this is the highest lesion-level accuracy achieved to date on such a comprehensive dataset for this clinically important problem. This level of classification performance in the early stages of metastasis development bodes well for clinical translation of this strategy.
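The 2D/3D ensembling named above can be sketched as a late average of class probabilities; the models below are torchvision stand-ins and the lesion volume is random, so this only illustrates the mechanics.

```python
# Illustrative late ensemble of a slice-based 2D model and a 3D model.
import torch
from torchvision.models import resnet50
from torchvision.models.video import r3d_18

model2d = resnet50(num_classes=2).eval()   # stand-in for lesion-based 2D ResNet-50
model3d = r3d_18(num_classes=2).eval()     # stand-in for 3D ResNet-18

volume = torch.randn(1, 3, 16, 224, 224)   # (B, C, D, H, W) lesion volume
slices = volume.permute(0, 2, 1, 3, 4)[0]  # (D, C, H, W): depth treated as batch

with torch.no_grad():
    p2d = model2d(slices).softmax(-1).mean(0)  # lesion-based average over slices
    p3d = model3d(volume).softmax(-1)[0]
    p_ens = (p2d + p3d) / 2                    # ensemble (benign, malignant) probs
```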

15.
Radiol Artif Intell ; 3(1): e200047, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33842890

ABSTRACT

PURPOSE: To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL). MATERIALS AND METHODS: In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. These lesion locations were used to train a keypoint detection convolutional neural network to find new lesions, and the trained network was evaluated on an independent test set of 85 images. The statistical measure used to evaluate our method was percent accuracy. RESULTS: Eye tracking with speech recognition was 92% accurate in labeling lesion locations from the training dataset, demonstrating that fully simulated interpretation can yield reliable tumor location labels; these labels were then used to train the DL network. The detection network trained on these labels predicted lesion locations in the separate test set with 85% accuracy. CONCLUSION: The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation. © RSNA, 2020.
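A toy sketch of the automatic labeling idea: take the gaze point recorded at the moment a trigger word is dictated. The timestamps, word list, and nearest-sample lookup are invented for illustration.

```python
# Toy alignment of gaze samples with dictated speech (illustrative assumptions).
from bisect import bisect_left

def gaze_at_keyword(gaze, speech, keyword="lesion"):
    """gaze: list of (t, x, y); speech: list of (t, word). Returns (x, y) or None."""
    times = [t for t, _, _ in gaze]
    for t_word, word in speech:
        if word.lower() == keyword:
            i = min(bisect_left(times, t_word), len(gaze) - 1)
            _, x, y = gaze[i]  # gaze sample nearest (at or after) the spoken word
            return (x, y)
    return None

gaze = [(0.0, 100, 120), (0.5, 140, 150), (1.0, 145, 152)]
speech = [(0.4, "left"), (0.9, "lesion")]
print(gaze_at_keyword(gaze, speech))  # -> (145, 152)
```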

16.
Transl Lung Cancer Res ; 10(2): 955-964, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33718035

ABSTRACT

BACKGROUND: Micropapillary/solid (MP/S) growth patterns of lung adenocarcinoma are vital for making clinical decisions regarding surgical intervention. This study aimed to predict the presence of a MP/S component in lung adenocarcinoma using radiomics analysis. METHODS: Between January 2011 and December 2013, patients undergoing curative invasive lung adenocarcinoma resection were included. Using the "PyRadiomics" package, we extracted 90 radiomics features from the preoperative computed tomography (CT) images. Subsequently, four prediction models were built by utilizing conventional machine learning approaches fitting into radiomics analysis: a generalized linear model (GLM), Naïve Bayes, support vector machine (SVM), and random forest classifiers. The models' accuracy was assessed using receiver operating characteristic (ROC) curve analysis, and the models' stability was validated both internally and externally. RESULTS: A total of 268 patients were included as a primary cohort, and 36.6% (98/268) of them had lung adenocarcinoma with an MP/S component. Patients with an MP/S component had a higher rate of lymph node metastasis (18.4% versus 5.3%) and worse recurrence-free and overall survival. Five radiomics features were selected for model building, and in the internal validation, the four models achieved comparable performance of MP/S prediction in terms of area under the curve (AUC): GLM, 0.74 [95% confidence interval (CI): 0.65-0.83]; Naïve Bayes, 0.75 (95% CI: 0.65-0.85); SVM, 0.73 (95% CI: 0.61-0.83); and random forest, 0.72 (95% CI: 0.63-0.81). External validation was performed using a test cohort with 193 patients, and the AUC values were 0.70, 0.72, 0.73, and 0.69 for Naïve Bayes, SVM, random forest, and GLM, respectively. CONCLUSIONS: A radiomics-based machine learning approach is a strong tool for preoperatively predicting the presence of MP/S growth patterns in lung adenocarcinoma, and can help customize treatment and surveillance strategies.
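Feature extraction with the "PyRadiomics" package mentioned above follows the pattern below; the file paths are placeholders, and the enabled feature classes are an example rather than the study's exact configuration.

```python
# Sketch of radiomics feature extraction with PyRadiomics (pip install pyradiomics).
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # texture features

# Placeholder paths: a CT volume and its tumor segmentation mask
features = extractor.execute("ct_image.nii.gz", "tumor_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics_"):  # skip metadata entries
        print(name, value)
```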

17.
J Med Imaging (Bellingham) ; 8(1): 010901, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33426151

ABSTRACT

Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different computer vision task. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we focus on the preprocessing steps that should be applied to medical images before they are fed to deep neural networks. Approach: To be able to employ publicly available algorithms for clinical purposes, we must derive a meaningful pixel/voxel representation from medical images that facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the preprocessing steps that can ideally improve its performance. The preprocessing steps required for computed tomography (CT) and magnetic resonance (MR) images, in their correct order, are discussed in detail. We further support our discussion with relevant experiments investigating the efficiency of the listed preprocessing steps. Results: Our experiments confirmed that applying the appropriate image preprocessing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate preprocessing steps for CT and MR images of prostate cancer patients, supported by several experiments that can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning).
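Two representative steps of the kind discussed: CT intensity windowing in Hounsfield units and per-volume z-score normalization for MR. The window center/width below are typical soft-tissue values, not necessarily the paper's exact settings.

```python
# Common CT/MR preprocessing steps (typical values, assumed for illustration).
import numpy as np

def window_ct(volume_hu: np.ndarray, center: float = 40.0,
              width: float = 400.0) -> np.ndarray:
    """Clip to a soft-tissue window and rescale intensities to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(volume_hu, lo, hi) - lo) / (hi - lo)

def zscore_mr(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-volume z-score normalization, a common MR preprocessing step."""
    return (volume - volume.mean()) / (volume.std() + eps)
```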

18.
Nat Commun ; 11(1): 4080, 2020 08 14.
Article in English | MEDLINE | ID: mdl-32796848

ABSTRACT

Chest CT is emerging as a valuable diagnostic tool for the clinical management of COVID-19-associated lung disease. Artificial intelligence (AI) has the potential to aid in the rapid evaluation of CT scans for differentiation of COVID-19 findings from other clinical entities. Here we show that a series of deep learning algorithms, trained in a diverse multinational cohort of 1280 patients to localize parietal pleura/lung parenchyma followed by classification of COVID-19 pneumonia, can achieve up to 90.8% accuracy, with 84% sensitivity and 93% specificity, as evaluated in an independent test set (not included in training and validation) of 1337 patients. Normal controls included chest CTs from oncology, emergency, and pneumonia-related indications. The false positive rate in 140 patients with laboratory-confirmed other (non-COVID-19) pneumonias was 10%. AI-based algorithms can readily identify CT scans with COVID-19-associated pneumonia, as well as distinguish non-COVID-related pneumonias with high specificity in diverse patient populations.


Subjects
Artificial Intelligence; Clinical Laboratory Techniques/methods; Coronavirus Infections/diagnostic imaging; Pneumonia, Viral/diagnostic imaging; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Betacoronavirus/isolation & purification; COVID-19; COVID-19 Testing; Child; Child, Preschool; Coronavirus Infections/diagnosis; Coronavirus Infections/virology; Deep Learning; Female; Humans; Imaging, Three-Dimensional/methods; Lung/diagnostic imaging; Male; Middle Aged; Pandemics; Pneumonia, Viral/virology; Radiographic Image Interpretation, Computer-Assisted/methods; SARS-CoV-2; Young Adult
19.
Front Neurosci ; 14: 409, 2020.
Article in English | MEDLINE | ID: mdl-32435182

ABSTRACT

The success of surgical resection in epilepsy patients depends on preserving functionally critical brain regions while removing pathological tissue. As the gold standard, electro-cortical stimulation mapping (ESM) helps surgeons localize the function of eloquent cortex through electrical stimulation of electrodes placed directly on the cortical brain surface. Due to the potential hazards of ESM, including an increased risk of provoked seizures, electrocorticography-based functional mapping (ECoG-FM) was introduced as a safer alternative approach. However, ECoG-FM has a low success rate compared to ESM. In this study, we address this critical limitation by developing a new deep learning-based algorithm for ECoG-FM, thereby achieving an accuracy comparable to ESM in identifying eloquent language cortex. In our experiments with 11 epilepsy patients who underwent presurgical evaluation (through deep learning-based signal analysis on 637 electrodes), our proposed algorithm obtained an accuracy of 83.05% in identifying language regions, an exceptional 23% improvement over conventional ECoG-FM analysis (∼60%). Our findings demonstrate, for the first time, that deep learning-powered ECoG-FM can serve as a stand-alone modality and avoid the likely hazards of ESM in epilepsy surgery, thereby reducing the potential for post-surgical morbidity in language function.
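A hypothetical 1D-CNN sketch in the spirit of deep learning-based ECoG-FM signal classification; the architecture, signal length, and binary labels are assumptions for illustration, not the paper's model.

```python
# Hypothetical per-electrode signal classifier (assumed architecture).
import torch
import torch.nn as nn

class ECoGClassifier(nn.Module):
    """Binary classifier: is this electrode over eloquent language cortex?"""
    def __init__(self, in_ch: int = 1, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.features(x))

model = ECoGClassifier()
signal = torch.randn(4, 1, 2048)  # four electrode recordings
logits = model(signal)            # (4, 2): language vs non-language
```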

20.
Gastrointest Endosc ; 92(4): 938-945.e1, 2020 10.
Article in English | MEDLINE | ID: mdl-32343978

ABSTRACT

BACKGROUND AND AIMS: Artificial intelligence (AI), specifically deep learning, offers the potential to enhance the field of GI endoscopy in areas ranging from lesion detection and classification to quality metrics and documentation. Progress in this field will be measured by whether AI implementation can lead to improved patient outcomes and more efficient clinical workflow for GI endoscopists. The aims of this article are to report the findings of a multidisciplinary group of experts focusing on issues in AI research and applications related to gastroenterology and endoscopy, to review the current status of the field, and to produce recommendations for investigators developing and studying new AI technologies for gastroenterology. METHODS: A multidisciplinary meeting was held on September 28, 2019, bringing together academic, industry, and regulatory experts in diverse fields including gastroenterology, computer and imaging sciences, machine learning, and computer vision, along with representatives of the U.S. Food and Drug Administration and the National Institutes of Health. Recent and ongoing studies in gastroenterology and current AI technology were presented and discussed, key gaps in knowledge were identified, and recommendations were made for research that would have the highest impact in making advances and enabling implementation of AI in gastroenterology. RESULTS: There was a consensus that AI will transform the field of gastroenterology, particularly endoscopy and image interpretation. Powered by advanced machine learning algorithms, the use of computer vision in endoscopy has the potential to result in better prediction and treatment outcomes for patients with gastroenterological disorders and cancer. Large libraries of endoscopic images, "EndoNet," will be important to facilitate development and application of AI systems. The regulatory environment for implementation of AI systems is evolving, but common outcomes such as colon polyp detection have been highlighted as potential clinical trial endpoints. Other threshold outcomes will be important, as well as clarity on iterative improvement of clinical systems. CONCLUSIONS: Gastroenterology is a prime candidate for early adoption of AI. AI is rapidly moving from an experimental phase to a clinical implementation phase in gastroenterology. It is anticipated that the implementation of AI in gastroenterology over the next decade will have a significant and positive impact on patient care and clinical workflows. Ongoing collaboration among gastroenterologists, industry experts, and regulatory agencies will be important to ensure that progress is rapid and clinically meaningful. However, several constraints and areas will benefit from further exploration, including potential clinical applications, implementation, structure and governance, the role of gastroenterologists, and the potential impact of AI in gastroenterology.


Subjects
Artificial Intelligence; Gastroenterology; Diagnostic Imaging; Endoscopy; Humans; Machine Learning