Results 1-20 of 37

1.
J Electrocardiol ; 59: 151-157, 2020.
Article in English | MEDLINE | ID: mdl-32146201

ABSTRACT

BACKGROUND: Screening and early diagnosis of mitral regurgitation (MR) are crucial for preventing irreversible progression of MR. In this study, we developed and validated an artificial intelligence (AI) algorithm for detecting MR using electrocardiography (ECG). METHODS: This retrospective cohort study included data from two hospitals. An AI algorithm was trained using 56,670 ECGs from 24,202 patients. Internal validation of the algorithm was performed with 3174 ECGs of 3174 patients from one hospital, while external validation was performed with 10,865 ECGs of 10,865 patients from another hospital. The endpoint was the diagnosis of significant MR, moderate to severe, confirmed by echocardiography. We used 500 Hz ECG raw data as predictive variables. Additionally, we used a sensitivity map to show the regions of the ECG that had the greatest impact on the decision-making of the AI algorithm. RESULTS: During internal and external validation, the area under the receiver operating characteristic curve of the AI algorithm using a 12-lead ECG for detecting MR was 0.816 and 0.877, respectively, while that using a single-lead ECG was 0.758 and 0.850, respectively. Among the 3157 non-MR individuals, those whom the AI classified as high risk had a significantly higher chance of developing MR than the low-risk group (13.9% vs. 2.6%, p < 0.001) during the follow-up period. The sensitivity map showed that the AI algorithm focused on the P-wave and T-wave for MR patients and on the QRS complex for non-MR patients. CONCLUSIONS: The proposed AI algorithm demonstrated promising results for detecting MR using 12-lead and single-lead ECGs.


Subject(s)
Deep Learning , Mitral Valve Insufficiency , Artificial Intelligence , Electrocardiography , Humans , Mitral Valve Insufficiency/diagnosis , Retrospective Studies
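The abstract above reports performance as the area under the receiver operating characteristic curve (AUROC). As an illustrative aside (not code from the study), AUROC can be computed directly from labels and scores via the Mann-Whitney formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. The data below are made up:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs where the positive scores higher; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```

In the study's setting, the same computation would be applied to the model's per-ECG probabilities against the echocardiography-confirmed labels.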
2.
Sensors (Basel) ; 20(15)2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32727146

ABSTRACT

Ultrasound measurements of detrusor muscle thickness have been proposed as a diagnostic biomarker in patients with bladder overactivity and voiding dysfunction. In this study, we present an approach based on deep learning (DL) and dynamic programming (DP) to segment the bladder sac and measure the detrusor muscle thickness from transabdominal 2D B-mode ultrasound images. To assess the performance of our method, we compared the results of the automated methods to manually obtained reference bladder segmentations and wall thickness measurements of 80 images from 11 volunteers. The DL method takes less than a second to segment the bladder from a 2D B-mode image. The average Dice index for bladder segmentation is 0.93 ± 0.04, and the average root-mean-square error and standard deviation for wall thickness measurement are 0.7 ± 0.2 mm, comparable to the manual ground truth. The proposed fully automated and fast method could be a useful tool for segmentation and wall thickness measurement of the bladder from transabdominal B-mode images. The computation speed and accuracy of the proposed method will enable adaptive adjustment of the ultrasound focus point and continuous assessment of the bladder wall during the filling and voiding process of the bladder.


Subject(s)
Specimen Handling , Urinary Bladder , Automation , Humans , Ultrasonography , Urinary Bladder/diagnostic imaging
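The Dice index reported above is a unitless overlap score between two segmentations. A minimal sketch with toy masks (not the study's data or code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient for two flat binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

In practice the arguments would be the flattened DL segmentation and the manual reference mask for each B-mode image.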
3.
J Digit Imaging ; 32(4): 571-581, 2019 08.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep-learning. A major goal driving the development of the software was to create an environment which enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods for annotating medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.


Subject(s)
Datasets as Topic , Deep Learning , Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Radiology Information Systems , Humans
4.
AJR Am J Roentgenol ; 211(6): 1184-1193, 2018 12.
Article in English | MEDLINE | ID: mdl-30403527

ABSTRACT

OBJECTIVE: Deep learning has shown great promise for improving medical image classification tasks. However, knowing what aspects of an image the deep learning system uses or, in a manner of speaking, sees to make its prediction is difficult. MATERIALS AND METHODS: Within a radiologic imaging context, we investigated the utility of methods designed to identify features within images on which deep learning activates. In this study, we developed a classifier to identify contrast enhancement phase from whole-slice CT data. We then used this classifier as an easily interpretable system to explore the utility of class activation maps (CAMs), gradient-weighted class activation maps (Grad-CAMs), saliency maps, guided backpropagation maps, and saliency activation maps (SAMs), a novel map reported here, to identify image features the model used when performing prediction. RESULTS: All techniques identified voxels within imaging that the classifier used. SAMs had greater specificity than guided backpropagation maps, CAMs, and Grad-CAMs at identifying voxels within imaging that the model used to perform prediction. At shallow network layers, SAMs had greater specificity than Grad-CAMs at identifying input voxels that the layers within the model used to perform prediction. CONCLUSION: As a whole, voxel-level visualizations and visualizations of the imaging features that activate shallow network layers are powerful techniques for identifying features that deep learning models use when performing prediction.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Humans , Sensitivity and Specificity
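The maps compared above all attribute a model's prediction to input voxels. Grad-CAM and guided backpropagation need framework gradients, but the underlying idea can be illustrated with a simpler occlusion-sensitivity map: perturb each input location and record how much the model's score drops. This is a stand-in for the paper's gradient-based maps, using a toy scoring function in place of a trained classifier; nothing here is from the study:

```python
def occlusion_map(image, score_fn, baseline=0.0):
    """For each pixel, occlude it with `baseline` and record the drop in the
    model's score. Large drops mark pixels the model relies on. `score_fn`
    is any callable mapping an image (list of rows) to a scalar."""
    base = score_fn(image)
    heat = []
    for i, row in enumerate(image):
        heat_row = []
        for j, _ in enumerate(row):
            occluded = [r[:] for r in image]  # deep copy of the rows
            occluded[i][j] = baseline
            heat_row.append(base - score_fn(occluded))
        heat.append(heat_row)
    return heat

# Toy "model": scores the mean of the top-left 2x2 patch
score = lambda img: (img[0][0] + img[0][1] + img[1][0] + img[1][1]) / 4.0
img = [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
heat = occlusion_map(img, score)
```

Locations with large drops are the ones the toy model depends on — here, the top-left patch.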
5.
J Digit Imaging ; 31(2): 252-261, 2018 04.
Article in English | MEDLINE | ID: mdl-28924878

ABSTRACT

Schizophrenia has been proposed to result from impairment of functional connectivity. We aimed to use machine learning to distinguish schizophrenic subjects from normal controls using a publicly available functional MRI (fMRI) data set. Global and local parameters of functional connectivity were extracted for classification. We found decreased global and local network connectivity in subjects with schizophrenia, particularly in the anterior right cingulate cortex, the superior right temporal region, and the inferior left parietal region, as compared to healthy subjects. Using a support vector machine and 10-fold cross-validation, a model built on nine features reached 92.1% prediction accuracy. Our results suggest that there are significant differences between control and schizophrenic subjects based on regional brain activity detected with fMRI.


Subject(s)
Brain Mapping/methods , Brain/physiopathology , Image Interpretation, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods , Schizophrenia/physiopathology , Adult , Brain/diagnostic imaging , Female , Humans , Male , Young Adult
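The study above uses 10-fold cross-validation. For illustration only (not the study's code), a minimal sketch of splitting n samples into k validation folds so each sample is validated exactly once:

```python
def k_fold_indices(n_samples, k=10):
    """Split sample indices 0..n-1 into k contiguous, near-equal folds.
    The first n_samples % k folds get one extra sample."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(25, 10)
```

Each fold in turn serves as the validation set while the remaining nine train the classifier; in practice the indices would also be shuffled first.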
6.
Radiographics ; 37(2): 505-515, 2017.
Article in English | MEDLINE | ID: mdl-28212054

ABSTRACT

Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017.


Subject(s)
Diagnostic Imaging , Machine Learning , Algorithms , Humans , Image Interpretation, Computer-Assisted
7.
J Digit Imaging ; 30(4): 400-405, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28315069

ABSTRACT

Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.


Subject(s)
Diagnostic Imaging , Machine Learning , Neural Networks, Computer , Algorithms , Documentation , Humans , Software
8.
J Digit Imaging ; 30(4): 449-459, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28577131

ABSTRACT

Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.


Subject(s)
Algorithms , Brain/diagnostic imaging , Machine Learning , Magnetic Resonance Imaging/methods , Brain/anatomy & histology , Forecasting , Humans , Machine Learning/trends
9.
J Digit Imaging ; 30(4): 469-476, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28600641

ABSTRACT

Several studies have linked codeletion of chromosome arms 1p/19q in low-grade gliomas (LGG) with positive response to treatment and longer progression-free survival. Hence, predicting 1p/19q status is crucial for effective treatment planning of LGG. In this study, we predict the 1p/19q status from MR images using convolutional neural networks (CNN), which could be a non-invasive alternative to surgical biopsy and histopathological analysis. Our method consists of three main steps: image registration, tumor segmentation, and classification of 1p/19q status using CNN. We included a total of 159 patients with LGG (three image slices each) who had biopsy-proven 1p/19q status (57 non-deleted and 102 codeleted) and preoperative postcontrast-T1 (T1C) and T2 images. We divided our data into training, validation, and test sets. The training data was balanced for equal class probability and then augmented with iterations of random translational shift, rotation, and horizontal and vertical flips to increase the size of the training set. We shuffled and augmented the training data to counter overfitting in each epoch. Finally, we evaluated several configurations of a multi-scale CNN architecture until training and validation accuracies became consistent. The results of the best performing configuration on the unseen test set were 93.3% (sensitivity), 82.22% (specificity), and 87.7% (accuracy). Multi-scale CNNs, with their self-learning capability, provide promising results for predicting 1p/19q status non-invasively based on T1C and T2 images. Predicting 1p/19q status non-invasively from MR images would allow selection of effective treatment strategies for LGG patients without the need for surgical biopsy.


Subject(s)
Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Chromosome Deletion , Chromosomes, Human, Pair 19 , Chromosomes, Human, Pair 1 , Glioma/diagnostic imaging , Glioma/genetics , Humans , Machine Learning
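The augmentation step described above (random flips, shifts, and rotations) can be sketched for the flip operations on a toy 2D image stored as a list of rows; this is illustrative only, not the paper's implementation:

```python
def flip_horizontal(img):
    """Mirror each row (left-right flip)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Reverse the row order (up-down flip)."""
    return img[::-1]

def augment(img):
    """Four orientation variants of one image, as used to enlarge a training set."""
    return [img, flip_horizontal(img), flip_vertical(img),
            flip_horizontal(flip_vertical(img))]
```

Each training image yields several label-preserving variants, which is what lets a small dataset of 159 cases stretch further during training.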
10.
AJR Am J Roentgenol ; 207(3): 605-13, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27341140

ABSTRACT

OBJECTIVE: The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. MATERIALS AND METHODS: We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which results in human interaction during measurement of total kidney volume on MR images being reduced to a few minutes. This software tool automatically steps through slices and requires rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. RESULTS: The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). CONCLUSION: The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Polycystic Kidney, Autosomal Dominant/diagnostic imaging , Adult , Automation , Female , Humans , Male , Middle Aged , Organ Size , Reproducibility of Results , Software
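The bias and variability figure above (mean ± 2 SD of the percent error) is a Bland-Altman-style summary. A small sketch with made-up volumes, not the study's data:

```python
def bias_and_limits(reference, measured):
    """Percent error per case versus a reference, the mean error (bias),
    and the mean ± 2 SD limits of agreement."""
    errors = [100.0 * (m - r) / r for r, m in zip(reference, measured)]
    n = len(errors)
    mean = sum(errors) / n
    sd = (sum((e - mean) ** 2 for e in errors) / (n - 1)) ** 0.5  # sample SD
    return mean, mean - 2 * sd, mean + 2 * sd

bias_and_limits([100.0, 200.0], [101.0, 202.0])  # → (1.0, 1.0, 1.0)
```

A bias near zero with narrow limits, as reported for MIROS, indicates the semiautomated volumes track the ground-truth tracing closely.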
11.
Bioengineering (Basel) ; 11(7)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39061730

ABSTRACT

Thyroid ultrasound (US) is the primary method to evaluate thyroid nodules, and deep learning (DL) has been playing a significant role in evaluating thyroid cancer. We propose a DL-based pipeline to detect and classify thyroid nodules into benign or malignant groups relying on two views of US imaging. Transverse and longitudinal US images of thyroid nodules from 983 patients were collected retrospectively. Eighty-one cases were held out as a testing set, and the rest of the data were used in five-fold cross-validation (CV). Two You Only Look Once (YOLO) v5 models were trained to detect nodules and classify them. For each view, five models were developed during the CV and ensembled using non-maximum suppression (NMS) to boost their collective generalizability. An extreme gradient boosting (XGBoost) model was trained on the outputs of the ensembled models for both views to yield a final prediction of malignancy for each nodule. The test set was evaluated by an expert radiologist using the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS). The ensemble models for each view achieved a mAP0.5 of 0.797 (transverse) and 0.716 (longitudinal). The whole pipeline reached an AUROC of 0.84 (95% CI: 0.75-0.91) with sensitivity and specificity of 84% and 63%, respectively, while the ACR-TIRADS evaluation of the same set had a sensitivity of 76% and specificity of 34% (p-value = 0.003). Our proposed work demonstrates the potential of a deep learning pipeline to achieve strong diagnostic performance for thyroid nodule evaluation.
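Non-maximum suppression, used above to ensemble the five detectors per view, keeps the highest-scoring box and discards boxes that overlap it beyond an IoU threshold. A minimal sketch with toy boxes in (x1, y1, x2, y2) form — not the study's code:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

When the five CV models propose near-duplicate detections of the same nodule, NMS collapses them into the single most confident box.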

12.
J Pathol Inform ; 14: 100314, 2023.
Article in English | MEDLINE | ID: mdl-37179570

ABSTRACT

Microscopic image examination is fundamental to clinical microbiology and is often used as the first step to diagnose fungal infections. In this study, we present classification of pathogenic fungi from microscopic images using deep convolutional neural networks (CNN). We trained well-known CNN architectures such as DenseNet, Inception ResNet, InceptionV3, Xception, ResNet50, VGG16, and VGG19 to identify fungal species, and compared their performances. We collected 1079 images of 89 fungal genera and split our data into training, validation, and test datasets by a 7:1:2 ratio. The DenseNet model provided the best performance among the CNN architectures tested, with an overall accuracy of 65.35% for top-1 prediction and 75.19% for top-3 predictions in classification of the 89 genera. Performance improved further (>80%) after excluding rare genera with low sample occurrence and applying data augmentation techniques. For some fungal genera, we obtained 100% prediction accuracy. In summary, we present a deep learning approach that shows promising results for identifying filamentous fungi from culture, which could be used to enhance diagnostic accuracy and decrease turnaround time to identification.
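Top-1 and top-3 accuracy, reported above, count a prediction as correct when the true genus is the single best guess or among the three best guesses, respectively. An illustrative computation with toy class probabilities (not the study's outputs):

```python
def top_k_accuracy(probabilities, true_labels, k=3):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for probs, label in zip(probabilities, true_labels):
        ranked = sorted(range(len(probs)), key=lambda c: probs[c], reverse=True)
        hits += label in ranked[:k]
    return hits / len(true_labels)

probs = [[0.5, 0.3, 0.1, 0.1], [0.1, 0.2, 0.3, 0.4]]
top_k_accuracy(probs, [2, 0], k=3)  # → 0.5
```

Top-3 accuracy is always at least as high as top-1, which is why 75.19% exceeds 65.35% in the results above.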

13.
Bioengineering (Basel) ; 10(9)2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37760142

ABSTRACT

Transplant pathology plays a critical role in ensuring that transplanted organs function properly and the immune systems of the recipients do not reject them. To improve outcomes for transplant recipients, accurate diagnosis and timely treatment are essential. Recent advances in artificial intelligence (AI)-empowered digital pathology could help monitor allograft rejection and weaning of immunosuppressive drugs. To explore the role of AI in transplant pathology, we conducted a systematic search of electronic databases from January 2010 to April 2023. The PRISMA checklist was used as a guide for screening article titles, abstracts, and full texts, and we selected articles that met our inclusion criteria. Through this search, we identified 68 articles from multiple databases. After careful screening, only 14 articles were included based on title and abstract. Our review focuses on the AI approaches applied to four transplant organs: heart, lungs, liver, and kidneys. Specifically, we found that several deep learning-based AI models have been developed to analyze digital pathology slides of biopsy specimens from transplant organs. The use of AI models could improve clinicians' decision-making capabilities and reduce diagnostic variability. In conclusion, our review highlights the advancements and limitations of AI in transplant pathology. We believe that these AI technologies have the potential to significantly improve transplant outcomes and pave the way for future advancements in this field.

14.
Ultrasound Med Biol ; 48(11): 2237-2248, 2022 11.
Article in English | MEDLINE | ID: mdl-35961866

ABSTRACT

Median nerve swelling is one of the features of carpal tunnel syndrome (CTS), and ultrasound measurement of maximum median nerve cross-sectional area is commonly used to diagnose CTS. We hypothesized that volume might be a more sensitive measure than cross-sectional area for CTS diagnosis. We therefore assessed the accuracy and reliability of 3-D volume measurements of the median nerve in human cadavers, comparing direct measurements with ultrasound images interpreted using deep learning algorithms. Ultrasound images of a 10-cm segment of the median nerve were used to train the U-Net model, which achieved an average volume similarity of 0.89 and area under the curve of 0.90 from the threefold cross-validation. Correlation coefficients were calculated using the areas measured by each method. The intraclass correlation coefficient was 0.86. Pearson's correlation coefficient R between the estimated volume from the manually measured cross-sectional area and the estimated volume of deep learning was 0.85. In this study using deep learning to segment the median nerve longitudinally, estimated volume had high reliability. We plan to assess its clinical usefulness in future clinical studies. The volume of the median nerve may provide useful additional information on disease severity, beyond maximum cross-sectional area.


Subject(s)
Carpal Tunnel Syndrome , Deep Learning , Cadaver , Humans , Median Nerve/diagnostic imaging , Reproducibility of Results , Ultrasonography/methods
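Pearson's R, used above to compare the manually derived and deep-learning volume estimates, can be computed directly from paired measurements. A quick sketch with made-up numbers:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

An R of 0.85, as reported, indicates the two volume estimates rise and fall together strongly but not perfectly.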
15.
Abdom Radiol (NY) ; 47(7): 2408-2419, 2022 07.
Article in English | MEDLINE | ID: mdl-35476147

ABSTRACT

PURPOSE: Total kidney volume (TKV) is the most important imaging biomarker for quantifying the severity of autosomal-dominant polycystic kidney disease (ADPKD). 3D ultrasound (US) can measure kidney volume more accurately than 2D US; however, manual segmentation is tedious and requires expert annotators. We investigated a deep learning-based approach for automated segmentation of TKV from 3D US in ADPKD patients. METHOD: We used axially acquired 3D US-kidney images in 22 ADPKD patients, where each patient and each kidney were scanned three times, resulting in 132 scans that were manually segmented. We trained a convolutional neural network to segment the whole kidney and measure TKV. All patients were subsequently imaged with MRI for measurement comparison. RESULTS: Our method automatically segmented polycystic kidneys in 3D US images, obtaining an average Dice coefficient of 0.80 on the test dataset. Compared with human tracing, the kidney volume measurements had a linear regression coefficient of R2 = 0.81 and a bias of -4.42%; between the AI and the reference standard, R2 = 0.93 and the bias was -4.12%. MRI- and US-measured kidney volumes had R2 = 0.84 and a bias of 7.47%. CONCLUSION: This is the first study applying deep learning to 3D US in ADPKD. Our method shows promising performance for auto-segmentation of kidneys using 3D US to measure TKV, close to human tracing and MRI measurement. This imaging and analysis method may be useful in a number of settings, including pediatric imaging, clinical studies, and longitudinal tracking of patient disease progression.


Subject(s)
Polycystic Kidney Diseases , Polycystic Kidney, Autosomal Dominant , Child , Humans , Imaging, Three-Dimensional , Kidney/diagnostic imaging , Magnetic Resonance Imaging/methods , Polycystic Kidney, Autosomal Dominant/diagnostic imaging
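The R2 values above come from regressing one set of volume measurements against another. A minimal ordinary-least-squares sketch with toy numbers (not the study's measurements):

```python
def r_squared(x, y):
    """Coefficient of determination for an ordinary least-squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

An R2 of 0.93 between the AI and the reference standard means the fitted line explains 93% of the variance in the reference volumes.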
16.
Pathology ; 53(3): 400-407, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33642096

ABSTRACT

Advances in digital pathology have allowed a number of opportunities such as decision support using artificial intelligence (AI). The application of AI to digital pathology data shows promise as an aid for pathologists in the diagnosis of haematological disorders. AI-based applications have embraced benign haematology, diagnosing leukaemia and lymphoma, as well as ancillary testing modalities including flow cytometry. In this review, we highlight the progress made to date in machine learning applications in haematopathology, summarise important studies in this field, and highlight key limitations. We further present our outlook on the future direction and trends for AI to support diagnostic decisions in haematopathology.


Subject(s)
Hematology , Leukemia/diagnosis , Lymphoma/diagnosis , Machine Learning , Artificial Intelligence , Flow Cytometry , Humans , Leukemia/pathology , Lymphoma/pathology
17.
Asian Pac J Cancer Prev ; 22(8): 2597-2602, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34452575

ABSTRACT

INTRODUCTION: The management of follicular (FN) and Hurthle cell neoplasms (HCN) is often difficult because of the uncertainty of malignancy risk. We aimed to assess characteristics of benign and malignant follicular and Hurthle cell neoplasms based on their shape and size. MATERIALS AND METHODS: Patients with follicular adenoma (FA) or carcinoma (FC) and Hurthle cell adenoma (HCA) or carcinoma (HCC) who had preoperative ultrasonography were included. Demographic data were retrieved. The size and shape of the nodules were measured. Logistic regression analyses were performed and odds ratios computed. RESULTS: A total of 115 nodules, 57 carcinomas and 58 adenomas, were included. Logistic regression analysis shows that nodule height and patient age are predictors of malignancy (p-values = 0.001 and 0.042). A cutoff of nodule height ≥ 4 cm produces an odds ratio of 4.5 (p-value = 0.006). Age ≥ 55 years demonstrates an odds ratio of 2.4-3.6 (p-value = 0.03). Taller-than-wide shape was not statistically significant (p-value = 0.613). CONCLUSION: FC and HCC are larger than FA and HCA, with a size cutoff at 4 cm. Increasing age increases the odds of malignancy, with a cutoff at 55 years. Taller-than-wide shape is not a predictor of malignancy.


Subject(s)
Adenocarcinoma, Follicular/diagnosis , Adenoma, Oxyphilic/diagnosis , Adenoma/diagnosis , Thyroid Neoplasms/diagnosis , Thyroid Nodule/pathology , Ultrasonography/methods , Adenocarcinoma, Follicular/diagnostic imaging , Adenocarcinoma, Follicular/surgery , Adenoma/diagnostic imaging , Adenoma/surgery , Adenoma, Oxyphilic/diagnostic imaging , Adenoma, Oxyphilic/surgery , Case-Control Studies , Female , Follow-Up Studies , Humans , Male , Middle Aged , Prognosis , Retrospective Studies , Thyroid Neoplasms/diagnostic imaging , Thyroid Neoplasms/surgery , Thyroid Nodule/diagnostic imaging , Thyroidectomy
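The odds ratios above are derived from 2x2 contingency tables (for example, nodule height ≥ 4 cm versus malignancy). An illustrative computation with made-up counts, not the study's data:

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a*d) / (b*c), where the 'exposure'
    might be a nodule height >= 4 cm and 'case' means malignant."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

odds_ratio(20, 10, 10, 20)  # → 4.0
```

An odds ratio of 4.5, as reported for the 4 cm cutoff, means the odds of malignancy are roughly 4.5 times higher above the cutoff than below it.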
18.
J Clin Med ; 10(11)2021 May 24.
Article in English | MEDLINE | ID: mdl-34073699

ABSTRACT

The accurate diagnosis of chronic myelomonocytic leukemia (CMML) and acute myeloid leukemia (AML) subtypes with monocytic differentiation relies on the proper identification and quantitation of blast cells and blast-equivalent cells, including promonocytes. This distinction can be quite challenging given the cytomorphologic and immunophenotypic similarities among the monocytic cell precursors. The aim of this study was to assess the performance of convolutional neural networks (CNN) in separating monocytes from their precursors (i.e., promonocytes and monoblasts). We collected digital images of 935 monocytic cells that were blindly reviewed by five experienced morphologists and assigned to three subtypes: monocyte, promonocyte, and blast. The consensus between reviewers was used as the ground-truth reference label for each cell. To assess the performance of the CNN models, we divided our data into training (70%), validation (10%), and test (20%) datasets and applied fivefold cross-validation. The CNN models did not perform well at predicting the three monocytic subtypes, but their performance improved significantly for two subtypes (monocyte vs. promonocytes + blasts). Our findings (1) support the concept that morphologic distinction between monocytic cells of various differentiation levels is difficult; (2) suggest that combining blasts and promonocytes into a single category is desirable for improved accuracy; and (3) show that CNN models can reach accuracy comparable to human reviewers (0.78 ± 0.10 vs. 0.86 ± 0.05). As far as we know, this is the first study to separate monocytes from their precursors using CNN.
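The ground-truth label above is the consensus among five reviewers. A small majority-vote sketch — the votes are hypothetical, and the study's exact consensus rule may differ:

```python
from collections import Counter

def consensus_label(votes):
    """Majority label among reviewer votes; a tie for first place yields None
    (no consensus)."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]
```

With an odd number of reviewers and three candidate labels, ties are still possible, so a tie-handling rule (here, returning None) is needed.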

19.
J Clin Med ; 10(7)2021 Mar 30.
Article in English | MEDLINE | ID: mdl-33808513

ABSTRACT

Echocardiography (Echo), a widely available, noninvasive, and portable bedside imaging tool, is the most frequently used imaging modality in assessing cardiac anatomy and function in clinical practice. On the other hand, its operator dependence introduces variability in image acquisition, measurements, and interpretation. To reduce these variabilities, there is an increasing demand for an operator- and interpreter-independent Echo system empowered with artificial intelligence (AI), which has been incorporated into diverse areas of clinical medicine. Recent advances in AI applications in computer vision have enabled us to identify conceptual and complex imaging features with the self-learning ability of AI models and efficient parallel computing power. This has resulted in vast opportunities such as providing AI models that are robust to variations with generalizability for instantaneous image quality control, aiding in the acquisition of optimal images and diagnosis of complex diseases, and improving the clinical workflow of cardiac ultrasound. In this review, we provide a state-of-the-art overview of AI-empowered Echo applications in cardiology and future trends for AI-powered Echo technology that standardize measurements, aid physicians in diagnosing cardiac diseases, optimize Echo workflow in clinics, and, ultimately, reduce healthcare costs.

20.
Radiol Artif Intell ; 2(5): e190183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33937839

ABSTRACT

PURPOSE: To develop a deep learning model that segments intracranial structures on head CT scans. MATERIALS AND METHODS: In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. RESULTS: Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. CONCLUSION: Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
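The class weighting mentioned above counteracts the voxel imbalance between large structures (cerebrum) and small ones (internal capsule). One common scheme — an assumption here, since the paper's exact weighting is not given — is inverse-frequency weighting of the loss:

```python
def inverse_frequency_weights(voxel_counts):
    """Per-class loss weights proportional to inverse class frequency
    (an assumed scheme, not necessarily the paper's), normalized so the
    weights sum to the number of classes. Classes with fewer voxels get
    larger weights."""
    inv = [1.0 / c for c in voxel_counts]
    scale = len(voxel_counts) / sum(inv)
    return [w * scale for w in inv]
```

With counts [100, 300] this yields weights of roughly [1.5, 0.5], so errors on the smaller structure contribute three times as much to the loss.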
