Results 1 - 20 of 36
1.
Bioengineering (Basel); 10(9), 2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37760142

ABSTRACT

Transplant pathology plays a critical role in ensuring that transplanted organs function properly and the immune systems of the recipients do not reject them. To improve outcomes for transplant recipients, accurate diagnosis and timely treatment are essential. Recent advances in artificial intelligence (AI)-empowered digital pathology could help monitor allograft rejection and weaning of immunosuppressive drugs. To explore the role of AI in transplant pathology, we conducted a systematic search of electronic databases from January 2010 to April 2023. The PRISMA checklist was used as a guide for screening article titles, abstracts, and full texts, and we selected articles that met our inclusion criteria. Through this search, we identified 68 articles from multiple databases. After careful screening, only 14 articles were included based on title and abstract. Our review focuses on the AI approaches applied to four transplant organs: heart, lungs, liver, and kidneys. Specifically, we found that several deep learning-based AI models have been developed to analyze digital pathology slides of biopsy specimens from transplant organs. The use of AI models could improve clinicians' decision-making capabilities and reduce diagnostic variability. In conclusion, our review highlights the advancements and limitations of AI in transplant pathology. We believe that these AI technologies have the potential to significantly improve transplant outcomes and pave the way for future advancements in this field.

2.
J Pathol Inform; 14: 100314, 2023.
Article in English | MEDLINE | ID: mdl-37179570

ABSTRACT

Microscopic image examination is fundamental to clinical microbiology and is often the first step in diagnosing fungal infections. In this study, we present classification of pathogenic fungi from microscopic images using deep convolutional neural networks (CNN). We trained well-known CNN architectures such as DenseNet, Inception ResNet, InceptionV3, Xception, ResNet50, VGG16, and VGG19 to identify fungal species and compared their performance. We collected 1079 images of 89 fungal genera and split our data into training, validation, and test datasets in a 7:1:2 ratio. The DenseNet model provided the best performance among the CNN architectures tested, with an overall accuracy of 65.35% for top-1 prediction and 75.19% for top-3 prediction across the 89 genera. Performance improved further (>80%) after excluding rare genera with low sample occurrence and applying data augmentation techniques. For some fungal genera, we obtained 100% prediction accuracy. In summary, we present a deep learning approach that shows promising results for identifying filamentous fungi from culture, which could be used to enhance diagnostic accuracy and decrease turnaround time to identification.
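The workflow described above (fine-tuning ImageNet-pretrained CNN backbones on a small, many-class image set and reporting top-1/top-3 accuracy) can be sketched as follows. This is not the authors' code: the dataset layout, image size, and hyperparameters are illustrative assumptions.

```python
# Sketch: fine-tune an ImageNet-pretrained DenseNet-121 for 89-class
# fungal genus classification and compute top-1 / top-3 accuracy.
# Dataset path, image size, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms  # torchvision >= 0.13

NUM_CLASSES = 89  # genera reported in the abstract

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("fungi/train", transform=train_tf)  # hypothetical folder layout
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:          # one pass shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

def topk_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = logits.topk(k, dim=1).indices
    return (topk == labels.unsqueeze(1)).any(dim=1).float().mean().item()
```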

3.
Ultrasound Med Biol; 48(11): 2237-2248, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35961866

ABSTRACT

Median nerve swelling is one of the features of carpal tunnel syndrome (CTS), and ultrasound measurement of maximum median nerve cross-sectional area is commonly used to diagnose CTS. We hypothesized that volume might be a more sensitive measure than cross-sectional area for CTS diagnosis. We therefore assessed the accuracy and reliability of 3-D volume measurements of the median nerve in human cadavers, comparing direct measurements with ultrasound images interpreted using deep learning algorithms. Ultrasound images of a 10-cm segment of the median nerve were used to train the U-Net model, which achieved an average volume similarity of 0.89 and area under the curve of 0.90 from the threefold cross-validation. Correlation coefficients were calculated using the areas measured by each method. The intraclass correlation coefficient was 0.86. Pearson's correlation coefficient R between the estimated volume from the manually measured cross-sectional area and the estimated volume of deep learning was 0.85. In this study using deep learning to segment the median nerve longitudinally, estimated volume had high reliability. We plan to assess its clinical usefulness in future clinical studies. The volume of the median nerve may provide useful additional information on disease severity, beyond maximum cross-sectional area.


Subjects
Carpal Tunnel Syndrome; Deep Learning; Cadaver; Humans; Median Nerve/diagnostic imaging; Reproducibility of Results; Ultrasonography/methods
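The volume-similarity figure reported in this abstract is commonly defined from two binary masks as 1 - |Vp - Vr| / (Vp + Vr); the exact formula used in the paper is not given in the abstract, so the sketch below is an assumption based on that standard definition.

```python
import numpy as np

def volume_similarity(pred, ref, voxel_volume_mm3=1.0):
    """Volume similarity of two binary masks: 1 - |Vp - Vr| / (Vp + Vr).
    Equals 1.0 when the volumes match exactly, regardless of spatial overlap."""
    vp = pred.astype(bool).sum() * voxel_volume_mm3
    vr = ref.astype(bool).sum() * voxel_volume_mm3
    if vp + vr == 0:
        return 1.0
    return 1.0 - abs(vp - vr) / (vp + vr)

# Example with two random 3-D masks standing in for U-Net output and manual tracing
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 100)) > 0.5
ref = rng.random((64, 64, 100)) > 0.5
print(volume_similarity(pred, ref))
```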
4.
Abdom Radiol (NY); 47(7): 2408-2419, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35476147

ABSTRACT

PURPOSE: Total kidney volume (TKV) is the most important imaging biomarker for quantifying the severity of autosomal-dominant polycystic kidney disease (ADPKD). 3D ultrasound (US) can measure kidney volume more accurately than 2D US; however, manual segmentation is tedious and requires expert annotators. We investigated a deep learning-based approach for automated segmentation of TKV from 3D US in ADPKD patients. METHOD: We used axially acquired 3D US kidney images from 22 ADPKD patients; each patient and each kidney were scanned three times, resulting in 132 scans that were manually segmented. We trained a convolutional neural network to segment the whole kidney and measure TKV. All patients were subsequently imaged with MRI for measurement comparison. RESULTS: Our method automatically segmented polycystic kidneys in 3D US images, obtaining an average Dice coefficient of 0.80 on the test dataset. The linear regression coefficient and bias of the kidney volume measurements were R2 = 0.81 and -4.42% relative to human tracing, and R2 = 0.93 and -4.12% between the AI and the reference standard. MRI- and US-measured kidney volumes had R2 = 0.84 and a bias of 7.47%. CONCLUSION: This is the first study applying deep learning to 3D US in ADPKD. Our method shows promising performance for automatic segmentation of kidneys on 3D US for TKV measurement, close to human tracing and MRI measurement. This imaging and analysis method may be useful in a number of settings, including pediatric imaging, clinical studies, and longitudinal tracking of disease progression.


Subjects
Polycystic Kidney Diseases; Polycystic Kidney, Autosomal Dominant; Child; Humans; Imaging, Three-Dimensional; Kidney/diagnostic imaging; Magnetic Resonance Imaging/methods; Polycystic Kidney, Autosomal Dominant/diagnostic imaging
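A minimal sketch of the evaluation quantities mentioned in this abstract: the Dice coefficient between an automated and a manual 3D mask, total kidney volume from a mask and its voxel spacing, and signed percent bias. Function names and the spacing convention are illustrative, not taken from the paper.

```python
import numpy as np

def dice(pred, ref):
    """Dice overlap between two binary 3-D masks (1.0 = perfect agreement)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

def total_kidney_volume_ml(mask, spacing_mm):
    """Kidney volume from a binary mask and (dz, dy, dx) voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

def percent_bias(measured, reference):
    """Signed percent difference of a measured volume relative to a reference volume."""
    return 100.0 * (measured - reference) / reference
```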
5.
Asian Pac J Cancer Prev; 22(8): 2597-2602, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34452575

ABSTRACT

INTRODUCTION: The management of follicular neoplasms (FN) and Hurthle cell neoplasms (HCN) is often difficult because of the uncertainty of malignancy risk. We aimed to assess characteristics of benign and malignant follicular and Hurthle cell neoplasms based on their shape and size. MATERIALS AND METHODS: Patients with follicular adenoma (FA) or carcinoma (FC) and Hurthle cell adenoma (HCA) or carcinoma (HCC) who had preoperative ultrasonography were included. Demographic data were retrieved, and the size and shape of the nodules were measured. Logistic regression analyses were performed and odds ratios calculated. RESULTS: A total of 115 nodules, 57 carcinomas and 58 adenomas, were included. Logistic regression analysis showed that nodule height and patient age are predictors of malignancy (p = 0.001 and 0.042). A cutoff of nodule height ≥ 4 cm yields an odds ratio of 4.5 (p = 0.006), and age ≥ 55 years an odds ratio of 2.4-3.6 (p = 0.03). A taller-than-wide shape was not statistically significant (p = 0.613). CONCLUSION: FC and HCC are larger than FA and HCA, with a size cutoff at 4 cm. Increasing age increases the odds of malignancy, with a cutoff at 55 years. Taller-than-wide shape is not a predictor of malignancy.


Subjects
Adenocarcinoma, Follicular/diagnosis; Adenoma, Oxyphilic/diagnosis; Adenoma/diagnosis; Thyroid Neoplasms/diagnosis; Thyroid Nodule/pathology; Ultrasonography/methods; Adenocarcinoma, Follicular/diagnostic imaging; Adenocarcinoma, Follicular/surgery; Adenoma/diagnostic imaging; Adenoma/surgery; Adenoma, Oxyphilic/diagnostic imaging; Adenoma, Oxyphilic/surgery; Case-Control Studies; Female; Follow-Up Studies; Humans; Male; Middle Aged; Prognosis; Retrospective Studies; Thyroid Neoplasms/diagnostic imaging; Thyroid Neoplasms/surgery; Thyroid Nodule/diagnostic imaging; Thyroidectomy
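The statistical analysis described above (logistic regression on dichotomized predictors, with odds ratios for the reported cutoffs) can be sketched as follows with statsmodels; the data generated here are synthetic placeholders, not the study cohort.

```python
# Sketch: logistic regression with dichotomized predictors (nodule height >= 4 cm,
# age >= 55 y) and odds ratios from the fitted coefficients. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 115
df = pd.DataFrame({
    "height_ge_4cm": rng.integers(0, 2, n),
    "age_ge_55":     rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors, for illustration only
logit = -0.8 + 1.5 * df["height_ge_4cm"] + 0.9 * df["age_ge_55"]
df["malignant"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["height_ge_4cm", "age_ge_55"]])
res = sm.Logit(df["malignant"], X).fit(disp=0)

odds_ratios = np.exp(res.params)      # odds ratio per predictor
ci = np.exp(res.conf_int())           # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
print(res.pvalues)
```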
6.
J Clin Med; 10(11), 2021 May 24.
Article in English | MEDLINE | ID: mdl-34073699

ABSTRACT

The accurate diagnosis of chronic myelomonocytic leukemia (CMML) and acute myeloid leukemia (AML) subtypes with monocytic differentiation relies on the proper identification and quantitation of blast cells and blast-equivalent cells, including promonocytes. This distinction can be quite challenging given the cytomorphologic and immunophenotypic similarities among the monocytic cell precursors. The aim of this study was to assess the performance of convolutional neural networks (CNN) in separating monocytes from their precursors (i.e., promonocytes and monoblasts). We collected digital images of 935 monocytic cells that were blindly reviewed by five experienced morphologists and assigned to one of three subtypes: monocyte, promonocyte, or blast. The consensus between reviewers was used as the ground-truth reference label for each cell. To assess the performance of the CNN models, we divided our data into training (70%), validation (10%), and test (20%) datasets and applied fivefold cross-validation. The CNN models did not perform well in predicting the three monocytic subtypes, but their performance improved significantly with two subtypes (monocyte vs. promonocytes + blasts). Our findings (1) support the concept that morphologic distinction between monocytic cells at various levels of differentiation is difficult; (2) suggest that combining blasts and promonocytes into a single category is desirable for improved accuracy; and (3) show that CNN models can reach accuracy comparable to human reviewers (0.78 ± 0.10 vs. 0.86 ± 0.05). To our knowledge, this is the first study to separate monocytes from their precursors using CNN.
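The label-merging and cross-validation strategy described above can be sketched as follows; a simple scikit-learn classifier on random features stands in for the CNN, and all shapes and labels are illustrative assumptions.

```python
# Sketch: merge promonocytes and blasts into one class and estimate accuracy
# with stratified five-fold cross-validation. Synthetic features/labels only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((935, 64))            # n_cells x n_features (stand-in for image features)
y3 = rng.integers(0, 3, 935)         # 0 = monocyte, 1 = promonocyte, 2 = blast

y2 = np.where(y3 == 0, 0, 1)         # monocyte vs. promonocyte + blast

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y2, cv=cv)
print(scores.mean(), scores.std())
```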

7.
J Clin Med; 10(7), 2021 Mar 30.
Article in English | MEDLINE | ID: mdl-33808513

ABSTRACT

Echocardiography (Echo), a widely available, noninvasive, and portable bedside imaging tool, is the most frequently used imaging modality for assessing cardiac anatomy and function in clinical practice. On the other hand, its operator dependence introduces variability in image acquisition, measurement, and interpretation. To reduce these variabilities, there is an increasing demand for an operator- and interpreter-independent Echo system empowered with artificial intelligence (AI), which has been incorporated into diverse areas of clinical medicine. Recent advances in AI applications in computer vision have enabled us to identify conceptual and complex imaging features with the self-learning ability of AI models and efficient parallel computing power. This has resulted in vast opportunities, such as AI models that are robust to variation and generalizable for instantaneous image quality control, assistance in acquiring optimal images and diagnosing complex diseases, and improvements to the clinical workflow of cardiac ultrasound. In this review, we provide a state-of-the-art overview of AI-empowered Echo applications in cardiology and future trends for AI-powered Echo technology that standardizes measurements, aids physicians in diagnosing cardiac diseases, optimizes Echo workflow in clinics, and, ultimately, reduces healthcare costs.

8.
Pathology; 53(3): 400-407, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33642096

ABSTRACT

Advances in digital pathology have created a number of opportunities, such as decision support using artificial intelligence (AI). The application of AI to digital pathology data shows promise as an aid for pathologists in the diagnosis of haematological disorders. AI-based applications have addressed benign haematology, the diagnosis of leukaemia and lymphoma, and ancillary testing modalities including flow cytometry. In this review, we highlight the progress made to date in machine learning applications in haematopathology, summarise important studies in this field, and highlight key limitations. We further present our outlook on the future direction and trends for AI in supporting diagnostic decisions in haematopathology.


Subjects
Hematology; Leukemia/diagnosis; Lymphoma/diagnosis; Machine Learning; Artificial Intelligence; Flow Cytometry; Humans; Leukemia/pathology; Lymphoma/pathology
9.
Sensors (Basel); 20(15), 2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32727146

ABSTRACT

Ultrasound measurements of detrusor muscle thickness have been proposed as a diagnostic biomarker in patients with bladder overactivity and voiding dysfunction. In this study, we present an approach based on deep learning (DL) and dynamic programming (DP) to segment the bladder sac and measure the detrusor muscle thickness from transabdominal 2D B-mode ultrasound images. To assess the performance of our method, we compared the results of the automated methods to manually obtained reference bladder segmentations and wall thickness measurements on 80 images from 11 volunteers. The DL method takes less than a second to segment the bladder from a 2D B-mode image. The average Dice index for bladder segmentation is 0.93 ± 0.04, and the average root-mean-square error for wall thickness measurement is 0.7 ± 0.2 mm, comparable to the manual ground truth. The proposed fully automated and fast method could be a useful tool for segmentation and wall thickness measurement of the bladder from transabdominal B-mode images. The computation speed and accuracy of the proposed method will enable adaptive adjustment of the ultrasound focal point and continuous assessment of the bladder wall during bladder filling and voiding.


Subjects
Specimen Handling; Urinary Bladder; Automation; Humans; Ultrasonography; Urinary Bladder/diagnostic imaging
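The abstract combines a deep-learning segmentation with dynamic programming for the wall-thickness measurement but does not spell out the DP formulation, so the following is a generic sketch of DP boundary tracking: find, column by column, the connected minimum-cost path through a 2-D cost map (for example, an inverted wall-probability map), moving at most one row per column.

```python
import numpy as np

def min_cost_boundary(cost):
    """Dynamic programming: left-to-right path of minimal cumulative cost through
    a 2-D cost map, moving at most +/-1 row between adjacent columns.
    Returns the row index of the path in every column."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# Example: a synthetic cost map in which low values would mark a wall interface
cost = np.random.default_rng(0).random((60, 120))
boundary_rows = min_cost_boundary(cost)
```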
10.
J Electrocardiol; 59: 151-157, 2020.
Article in English | MEDLINE | ID: mdl-32146201

ABSTRACT

BACKGROUND: Screening and early diagnosis of mitral regurgitation (MR) are crucial for preventing irreversible progression of MR. In this study, we developed and validated an artificial intelligence (AI) algorithm for detecting MR using electrocardiography (ECG). METHODS: This retrospective cohort study included data from two hospitals. An AI algorithm was trained using 56,670 ECGs from 24,202 patients. Internal validation was performed with 3174 ECGs of 3174 patients from one hospital, and external validation with 10,865 ECGs of 10,865 patients from the other hospital. The endpoint was a diagnosis of significant (moderate to severe) MR confirmed by echocardiography. We used 500 Hz raw ECG data as the predictive variables. Additionally, we used a sensitivity map to show the regions of the ECG that had the greatest impact on the decision-making of the AI algorithm. RESULTS: During internal and external validation, the area under the receiver operating characteristic curve of the AI algorithm for detecting MR was 0.816 and 0.877, respectively, using a 12-lead ECG, and 0.758 and 0.850, respectively, using a single-lead ECG. Among the 3157 individuals without MR, those the AI classified as high risk had a significantly higher chance of developing MR than the low-risk group during follow-up (13.9% vs. 2.6%, p < 0.001). The sensitivity map showed that the AI algorithm focused on the P wave and T wave for MR patients and on the QRS complex for non-MR patients. CONCLUSIONS: The proposed AI algorithm demonstrated promising results for detecting MR using 12-lead and single-lead ECGs.


Subjects
Deep Learning; Mitral Valve Insufficiency; Artificial Intelligence; Electrocardiography; Humans; Mitral Valve Insufficiency/diagnosis; Retrospective Studies
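The "sensitivity map" mentioned in this abstract highlights which portions of the ECG drive the model's output; one common way to produce such a map is the gradient of the predicted probability with respect to the input samples. The sketch below uses a placeholder 1-D CNN, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Placeholder 1-D CNN over a 12-lead, 10 s, 500 Hz ECG (12 x 5000 samples).
model = nn.Sequential(
    nn.Conv1d(12, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 1),
)

ecg = torch.randn(1, 12, 5000, requires_grad=True)    # synthetic input signal
prob = torch.sigmoid(model(ecg)).squeeze()            # predicted probability of MR
prob.backward()

# Sensitivity map: absolute input gradient, summed over leads -> one value per sample
sensitivity = ecg.grad.abs().sum(dim=1).squeeze()     # shape: (5000,)
```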
11.
Radiol Artif Intell; 2(5): e190183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33937839

ABSTRACT

PURPOSE: To develop a deep learning model that segments intracranial structures on head CT scans. MATERIALS AND METHODS: In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. RESULTS: Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. CONCLUSION: Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
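The class weighting credited above with helping underrepresented structures is often implemented as inverse-frequency weights in the cross-entropy loss; a minimal sketch (voxel counts and class count are placeholders, not values from the study) is:

```python
import torch
import torch.nn as nn

# Voxel counts per class from the training labels (placeholder values);
# inverse-frequency weights up-weight small structures such as the internal capsule.
voxel_counts = torch.tensor([5e7, 1e6, 4e5, 2e5, 1e5, 8e4, 6e4, 5e4, 4e4, 3e4, 2e4, 1e4])
weights = voxel_counts.sum() / (len(voxel_counts) * voxel_counts)

loss_fn = nn.CrossEntropyLoss(weight=weights.float())

logits = torch.randn(2, 12, 32, 64, 64)          # (batch, classes, D, H, W)
target = torch.randint(0, 12, (2, 32, 64, 64))   # per-voxel class labels
loss = loss_fn(logits, target)
```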

12.
J Am Coll Radiol; 16(9 Pt B): 1318-1328, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31492410

ABSTRACT

Ultrasound is the most commonly used imaging modality in clinical practice because it is a nonionizing, low-cost, and portable point-of-care imaging tool that provides real-time images. Artificial intelligence (AI)-powered ultrasound is becoming more mature and getting closer to routine clinical applications in recent times because of an increased need for efficient and objective acquisition and evaluation of ultrasound images. Because ultrasound images involve operator-, patient-, and scanner-dependent variations, the adaptation of classical machine learning methods to clinical applications becomes challenging. With their self-learning ability, deep-learning (DL) methods are able to harness exponentially growing graphics processing unit computing power to identify abstract and complex imaging features. This has given rise to tremendous opportunities such as providing robust and generalizable AI models for improving image acquisition, real-time assessment of image quality, objective diagnosis and detection of diseases, and optimizing ultrasound clinical workflow. In this report, the authors review current DL approaches and research directions in rapidly advancing ultrasound technology and present their outlook on future directions and trends for DL techniques to further improve diagnosis, reduce health care cost, and optimize ultrasound clinical workflow.


Subjects
Deep Learning/trends; Quality Improvement; Ultrasonography, Doppler, Color/methods; Workflow; Algorithms; Artificial Intelligence; Breast Neoplasms/diagnostic imaging; Female; Forecasting; Humans; Liver Neoplasms/diagnostic imaging; Male; Surveys and Questionnaires; Thyroid Neoplasms/diagnostic imaging; United States
13.
J Digit Imaging; 32(4): 571-581, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment that enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods for annotating medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists use these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.


Subjects
Datasets as Topic; Deep Learning; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Radiology Information Systems; Humans
14.
AJR Am J Roentgenol; 211(6): 1184-1193, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30403527

ABSTRACT

OBJECTIVE: Deep learning has shown great promise for improving medical image classification tasks. However, knowing what aspects of an image the deep learning system uses or, in a manner of speaking, sees to make its prediction is difficult. MATERIALS AND METHODS: Within a radiologic imaging context, we investigated the utility of methods designed to identify features within images on which deep learning activates. In this study, we developed a classifier to identify contrast enhancement phase from whole-slice CT data. We then used this classifier as an easily interpretable system to explore the utility of class activation maps (CAMs), gradient-weighted class activation maps (Grad-CAMs), saliency maps, guided backpropagation maps, and the saliency activation map (SAM), a novel map reported here, to identify the image features the model used when performing prediction. RESULTS: All techniques identified voxels within the imaging that the classifier used. SAMs had greater specificity than guided backpropagation maps, CAMs, and Grad-CAMs at identifying voxels within the imaging that the model used to perform prediction. At shallow network layers, SAMs had greater specificity than Grad-CAMs at identifying the input voxels that the layers within the model used to perform prediction. CONCLUSION: As a whole, voxel-level visualizations and visualizations of the imaging features that activate shallow network layers are powerful techniques for identifying features that deep learning models use when performing prediction.


Subjects
Deep Learning; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Humans; Sensitivity and Specificity
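For readers unfamiliar with the visualization methods compared in this study, a minimal Grad-CAM sketch following the standard formulation (gradients of the target score, global-average-pooled, weight the feature maps of a chosen convolutional layer) is shown below; the backbone and target layer are assumptions, not the classifier from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models  # torchvision >= 0.13

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4          # last convolutional block (illustrative choice)

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)      # stand-in for a CT slice rendered as 3-channel input
logits = model(x)
logits[0, logits.argmax()].backward()

w = gradients["g"].mean(dim=(2, 3), keepdim=True)          # global-average-pooled gradients
cam = F.relu((w * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1]
```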
15.
J Am Coll Radiol; 15(3 Pt B): 521-526, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29396120

ABSTRACT

Deep learning (DL) is a popular method used to perform many important tasks in radiology and medical imaging. Some forms of DL are able to accurately segment organs (essentially, trace their boundaries, enabling volume measurements or calculation of other properties). Other DL networks are able to predict important properties from regions of an image: for instance, whether something is malignant, molecular markers for the tissue in a region, or even prognostic markers. DL is easier to train than traditional machine learning methods but requires more data and much more care in analyzing results. It will automatically find the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems, some of the traps that exist in building them, and how to identify those traps.


Subjects
Deep Learning; Radiology/methods; Diagnosis, Computer-Assisted; Humans; Machine Learning
16.
J Digit Imaging; 31(2): 252-261, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28924878

ABSTRACT

Schizophrenia has been proposed to result from impairment of functional connectivity. We aimed to use machine learning to distinguish schizophrenic subjects from normal controls using a publicly available functional MRI (fMRI) data set. Global and local parameters of functional connectivity were extracted for classification. We found decreased global and local network connectivity in subjects with schizophrenia compared with healthy subjects, particularly in the anterior right cingulate cortex, the superior right temporal region, and the inferior left parietal region. Using a support vector machine and 10-fold cross-validation, a set of nine features reached 92.1% prediction accuracy. Our results suggest that there are significant differences between control and schizophrenic subjects based on regional brain activity detected with fMRI.


Subjects
Brain Mapping/methods; Brain/physiopathology; Image Interpretation, Computer-Assisted/methods; Machine Learning; Magnetic Resonance Imaging/methods; Schizophrenia/physiopathology; Adult; Brain/diagnostic imaging; Female; Humans; Male; Young Adult
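The classification pipeline described above (a support vector machine on a handful of connectivity features, evaluated with 10-fold cross-validation) can be sketched as follows with scikit-learn; features and labels are synthetic stand-ins, not the study data.

```python
# Sketch: linear SVM on connectivity-derived features with 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((70, 9))       # subjects x connectivity features (synthetic)
y = rng.integers(0, 2, 70)    # 0 = control, 1 = schizophrenia (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```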
17.
J Digit Imaging; 30(4): 449-459, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28577131

ABSTRACT

Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As deep learning architectures become more mature, they are gradually outperforming previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First, we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.


Subjects
Algorithms; Brain/diagnostic imaging; Machine Learning; Magnetic Resonance Imaging/methods; Brain/anatomy & histology; Forecasting; Humans; Machine Learning/trends
18.
J Digit Imaging; 30(4): 469-476, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28600641

ABSTRACT

Several studies have linked codeletion of chromosome arms 1p/19q in low-grade gliomas (LGG) with positive response to treatment and longer progression-free survival. Hence, predicting 1p/19q status is crucial for effective treatment planning of LGG. In this study, we predict 1p/19q status from MR images using convolutional neural networks (CNN), which could be a non-invasive alternative to surgical biopsy and histopathological analysis. Our method consists of three main steps: image registration, tumor segmentation, and classification of 1p/19q status using CNN. We included a total of 159 LGG patients (three image slices each) with biopsy-proven 1p/19q status (57 non-deleted and 102 codeleted) and preoperative postcontrast T1 (T1C) and T2 images. We divided our data into training, validation, and test sets. The training data were balanced for equal class probability and then augmented with iterations of random translational shift, rotation, and horizontal and vertical flips to increase the size of the training set. We shuffled and augmented the training data in each epoch to counter overfitting. Finally, we evaluated several configurations of a multi-scale CNN architecture until training and validation accuracies became consistent. The results of the best-performing configuration on the unseen test set were 93.3% sensitivity, 82.22% specificity, and 87.7% accuracy. Multi-scale CNNs, with their self-learning capability, provide promising results for predicting 1p/19q status non-invasively based on T1C and T2 images. Predicting 1p/19q status non-invasively from MR images would allow selection of effective treatment strategies for LGG patients without the need for surgical biopsy.


Subjects
Artificial Intelligence; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Chromosome Deletion; Chromosomes, Human, Pair 19; Chromosomes, Human, Pair 1; Glioma/diagnostic imaging; Glioma/genetics; Humans; Machine Learning
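The augmentation recipe described above (random translational shifts, rotations, and horizontal/vertical flips applied to the training set each epoch) can be sketched with torchvision transforms; the specific ranges below are illustrative assumptions, not the paper's settings.

```python
from torchvision import transforms

# On-the-fly augmentation pipeline applied to each training image (PIL input).
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),  # rotation + translational shift
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])
# Because the transform is re-sampled every epoch, the network rarely sees the exact
# same slice twice; the validation and test sets are left untouched.
```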
19.
J Digit Imaging; 30(4): 400-405, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28315069

ABSTRACT

Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.


Subjects
Diagnostic Imaging; Machine Learning; Neural Networks, Computer; Algorithms; Documentation; Humans; Software
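To make the CNN concept concrete, the sketch below defines a minimal PyTorch image classifier of the kind these libraries help build; layer sizes and input dimensions are arbitrary, not drawn from the article.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal convolutional classifier: two conv/pool blocks, then a linear head."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

logits = SmallCNN()(torch.randn(4, 1, 64, 64))  # -> tensor of shape (4, 2)
```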
20.
Radiographics; 37(2): 505-515, 2017.
Article in English | MEDLINE | ID: mdl-28212054

ABSTRACT

Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017.


Subjects
Diagnostic Imaging; Machine Learning; Algorithms; Humans; Image Interpretation, Computer-Assisted