Results 1 - 20 of 55
1.
Invest Ophthalmol Vis Sci ; 65(5): 6, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38696188

ABSTRACT

Purpose: Thyroid eye disease (TED) is characterized by proliferation of orbital tissues and complicated by compressive optic neuropathy (CON). This study aims to use a deep-learning (DL)-based automated segmentation model to segment orbital muscle and fat volumes on computed tomography (CT) images, providing quantitative volumetric data, and a machine learning (ML)-based classifier to distinguish between TED and TED with CON. Methods: Subjects with TED who underwent clinical evaluation and orbital CT imaging were included. Patients with clinical features of CON were classified as having severe TED, and those without were classified as having mild TED. Normal subjects served as controls. A U-Net DL model was used for automatic segmentation of orbital muscle and fat volumes from orbital CTs, and an ensemble of Random Forest classifiers was used for volumetric analysis of muscle and fat. Results: Two hundred eighty-one subjects were included in this study. Automatic segmentation of orbital tissues was performed. Dice coefficients were 0.902 and 0.921 for muscle and fat volumes, respectively. Muscle volumes among normal, mild, and severe TED were statistically different. A classification model using volume data and limited patient data had an accuracy of 0.838 and an area under the curve (AUC) of 0.929 in predicting normal, mild TED, and severe TED. Conclusions: DL-based automated segmentation of orbital images for patients with TED was accurate and efficient. An ML-based classification model using volumetrics and metadata achieved high diagnostic accuracy in distinguishing TED from TED with CON. By enabling rapid and precise volumetric assessment, this may be a useful tool in future clinical studies.
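The Dice coefficients reported above (0.902 for muscle, 0.921 for fat) measure overlap between predicted and reference segmentations. A minimal sketch on binary masks — the function name and the flat 0/1-list representation are illustrative, not from the paper:

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: two 4-pixel masks overlapping on 2 labeled pixels
print(dice_coefficient([1, 1, 0, 0], [1, 1, 1, 0]))  # 2*2/(2+3) = 0.8
```

In practice the masks would be voxel arrays from the U-Net output rather than toy lists, but the score is computed the same way.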


Subject(s)
Adipose Tissue , Deep Learning , Graves Ophthalmopathy , Oculomotor Muscles , Tomography, X-Ray Computed , Humans , Graves Ophthalmopathy/diagnostic imaging , Graves Ophthalmopathy/diagnosis , Male , Female , Middle Aged , Adipose Tissue/diagnostic imaging , Tomography, X-Ray Computed/methods , Oculomotor Muscles/diagnostic imaging , Adult , Orbit/diagnostic imaging , Aged , Retrospective Studies , ROC Curve , Organ Size
4.
OTA Int ; 6(5 Suppl): e283, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38152438

ABSTRACT

Objectives: With more than 300,000 patients per year in the United States alone, hip fractures are among the most common injuries in the elderly. The incidence is predicted to rise to 6 million cases per annum worldwide by 2050. Many fracture registries have been established, serving as tools for quality surveillance and for evaluating patient outcomes. Most registries are based on billing and procedural codes, which are prone to under-reporting of cases. Deep learning (DL) can interpret radiographic images and assist in fracture detection; we propose a DL-based approach to automatically create a fracture registry, specifically for the hip fracture population. Methods: Conventional radiographs (n = 18,834) from 2919 patients at Massachusetts General Brigham hospitals were extracted (images designated as hip radiographs within the medical record). We designed a cascade model consisting of 3 submodules for image view classification (MI), postoperative implant detection (MII), and proximal femoral fracture detection (MIII), including data augmentation and scaling, and used convolutional neural networks for model development. An ensemble of 10 models (based on ResNet, VGG, DenseNet, and EfficientNet architectures) was created to detect the presence of a fracture. Results: The accuracy of the developed submodules reached 92%-100%; visual explanations of model predictions were generated through gradient-based methods. Time for automated model-based fracture labeling was 0.03 seconds/image, compared with an average of 12 seconds/image for human annotation as measured in our preprocessing stages. Conclusion: This semisupervised DL approach labeled hip fractures with high accuracy, mitigating the burden of annotating a large dataset, which is time-consuming and prone to under-reporting. The DL approach may prove beneficial for future efforts to automatically construct registries that outperform current diagnosis and procedural codes. Clinicians and researchers can use the developed DL approach for quality improvement, diagnostic and prognostic research, and building clinical decision support tools.
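The final fracture decision above comes from an ensemble of 10 CNNs. The ensemble step alone can be sketched as simple probability averaging — the stubbed per-model probabilities and the 0.5 threshold below are illustrative, not the study's:

```python
def ensemble_predict(probabilities, threshold=0.5):
    """Average per-model fracture probabilities and apply a decision threshold."""
    mean_prob = sum(probabilities) / len(probabilities)
    return mean_prob, mean_prob >= threshold

# Stubbed outputs from 10 models for one radiograph
probs = [0.91, 0.88, 0.95, 0.79, 0.85, 0.90, 0.93, 0.87, 0.82, 0.90]
mean_prob, has_fracture = ensemble_predict(probs)
print(round(mean_prob, 2), has_fracture)  # 0.88 True
```

Averaging across heterogeneous architectures (ResNet, VGG, DenseNet, EfficientNet) tends to reduce variance compared with any single model.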

7.
PLoS One ; 18(3): e0281900, 2023.
Article in English | MEDLINE | ID: mdl-36913348

ABSTRACT

Machine learning (ML) algorithms to detect critical findings on head CTs may expedite patient management. Most ML algorithms for diagnostic imaging analysis use dichotomous classifications to determine whether a specific abnormality is present. However, imaging findings may be indeterminate, and algorithmic inferences may carry substantial uncertainty. We incorporated awareness of uncertainty into an ML algorithm that detects intracranial hemorrhage or other urgent intracranial abnormalities and evaluated 1000 prospectively identified, consecutive noncontrast head CTs assigned to Emergency Department Neuroradiology for interpretation. The algorithm classified the scans into high (IC+) and low (IC-) probabilities of intracranial hemorrhage or other urgent abnormalities; all other cases were designated No Prediction (NP) by the algorithm. The positive predictive value for IC+ cases (N = 103) was 0.91 (CI: 0.84-0.96), and the negative predictive value for IC- cases (N = 729) was 0.94 (0.91-0.96). Admission, neurosurgical intervention, and 30-day mortality rates for IC+ were 75% (63-84), 35% (24-47), and 10% (4-20), compared with 43% (40-47), 4% (3-6), and 3% (2-5) for IC-. There were 168 NP cases, of which 32% had intracranial hemorrhage or other urgent abnormalities, 31% had artifacts and postoperative changes, and 29% had no abnormalities. An ML algorithm incorporating uncertainty classified most head CTs into clinically relevant groups with high predictive values and may help accelerate the management of patients with intracranial hemorrhage or other urgent intracranial abnormalities.
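The three-way triage described above can be sketched as probability banding: scans whose predicted probability clears neither confidence band get No Prediction. The 0.9/0.1 thresholds here are illustrative placeholders, not the study's calibrated cutoffs:

```python
def triage(prob, high=0.9, low=0.1):
    """Map a model's probability of an urgent abnormality to IC+, IC-, or NP."""
    if prob >= high:
        return "IC+"   # high probability: flag for expedited review
    if prob <= low:
        return "IC-"   # low probability
    return "NP"        # indeterminate: make no prediction, defer to radiologist

for p in (0.97, 0.03, 0.55):
    print(p, triage(p))
```

The design choice is to trade coverage for reliability: by abstaining on the middle band, the model keeps its IC+ and IC- predictive values high.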


Subject(s)
Deep Learning , Humans , Uncertainty , Tomography, X-Ray Computed/methods , Intracranial Hemorrhages/diagnostic imaging , Algorithms , Retrospective Studies
8.
Nat Biomed Eng ; 7(6): 711-718, 2023 06.
Article in English | MEDLINE | ID: mdl-36581695

ABSTRACT

Predictive machine-learning systems often do not convey the degree of confidence in the correctness of their outputs. To prevent unsafe prediction failures from machine-learning models, users of these systems should be aware of the general accuracy of the model and understand the degree of confidence in each individual prediction. In this Perspective, we convey the need for prediction-uncertainty metrics in healthcare applications, with a focus on radiology. We outline the sources of prediction uncertainty, discuss how to implement prediction-uncertainty metrics in applications that require zero tolerance of errors and in applications that are error-tolerant, and provide a concise framework for understanding prediction uncertainty in healthcare contexts. For machine-learning-enabled automation to substantially impact healthcare, machine-learning models with zero tolerance for false-positive or false-negative errors must be developed intentionally.


Subject(s)
Machine Learning , Uncertainty
9.
Sci Rep ; 12(1): 21164, 2022 12 07.
Article in English | MEDLINE | ID: mdl-36476724

ABSTRACT

Risk prediction requires comprehensive integration of clinical information and concurrent radiological findings. We present an upgraded chest radiograph (CXR) explainable artificial intelligence (xAI) model, which was trained on 241,723 well-annotated CXRs obtained prior to the onset of the COVID-19 pandemic. Mean area under the receiver operating characteristic curve (AUROC) for detection of 20 radiographic features was 0.955 (95% CI 0.938-0.955) on PA view and 0.909 (95% CI 0.890-0.925) on AP view. Coexistent and correlated radiographic findings are displayed in an interpretation table, and calibrated classifier confidence is displayed on an AI scoreboard. Retrieval of similar feature patches and comparable CXRs from a Model-Derived Atlas provides justification for model predictions. To demonstrate the feasibility of a fine-tuning approach for efficient and scalable development of xAI risk prediction models, we applied our CXR xAI model, in combination with clinical information, to predict oxygen requirement in COVID-19 patients. Prediction accuracy for high flow oxygen (HFO) and mechanical ventilation (MV) was 0.953 and 0.934 at 24 h and 0.932 and 0.836 at 72 h from the time of emergency department (ED) admission, respectively. Our CXR xAI model is auditable and captures key pathophysiological manifestations of cardiorespiratory diseases and cardiothoracic comorbidities. This model can be efficiently and broadly applied via a fine-tuning approach to provide fully automated risk and outcome predictions in various clinical scenarios in real-world practice.


Subject(s)
COVID-19 , Oxygen , Humans , COVID-19/diagnostic imaging , Artificial Intelligence , Pandemics , Patients
10.
Nat Commun ; 13(1): 1867, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35388010

ABSTRACT

The inability to accurately and efficiently label large, open-access medical imaging datasets limits the widespread implementation of artificial intelligence models in healthcare. There have been few attempts to automate the annotation of such public databases; prior efforts have instead relied on labor-intensive, manual labeling of subsets of these datasets to train new models. In this study, we describe a method for standardized, automated labeling based on similarity to a previously validated, explainable AI (xAI) model-derived atlas, for which the user can specify a quantitative threshold for a desired level of accuracy (the probability-of-similarity, or pSim, metric). We show that our xAI model, by calculating pSim values for each clinical output label against its training-set-derived reference atlas, can automatically label external datasets to a user-selected, high level of accuracy, equaling or exceeding that of human experts. We additionally show that, by fine-tuning the original model on the automatically labeled exams, performance can be preserved or improved, resulting in a highly accurate, more generalizable model.
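The pSim metric itself is specific to the paper's model-derived atlas, but the underlying idea — assign a label only when similarity to a validated reference exceeds a user-chosen threshold — can be sketched with plain cosine similarity. All names, feature vectors, and the 0.8 cutoff below are hypothetical stand-ins:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def auto_label(features, atlas, threshold=0.8):
    """Assign every atlas label whose reference vector is similar enough."""
    return [label for label, ref in atlas.items()
            if cosine_similarity(features, ref) >= threshold]

# Hypothetical reference vectors for two findings
atlas = {"cardiomegaly": [1.0, 0.0, 0.2], "effusion": [0.0, 1.0, 0.1]}
print(auto_label([0.9, 0.1, 0.2], atlas))  # ['cardiomegaly']
```

Raising the threshold trades labeled volume for accuracy, which mirrors the user-selectable accuracy level the paper describes.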


Subject(s)
Artificial Intelligence , Thorax , Delivery of Health Care , Humans , Radiography , X-Rays
11.
Comput Biol Med ; 144: 105332, 2022 05.
Article in English | MEDLINE | ID: mdl-35240378

ABSTRACT

BACKGROUND: Although copy number variations (CNVs) are infrequent, each anomaly is unique, and multiple CNVs can appear simultaneously. Growing evidence suggests that CNVs contribute to a wide range of diseases. When CNVs are detected, assessment of their clinical significance requires a thorough literature review. This process can be extremely time-consuming and may delay disease diagnosis. Therefore, we have developed CNV Extraction, Transformation, and Loading Artificial Intelligence (CNV-ETLAI), an innovative tool that allows experts to classify and interpret CNVs accurately and efficiently. METHODS: We combined text, table, and image processing algorithms to develop an artificial intelligence platform that automatically extracts, transforms, and organizes CNV information into a database. To validate CNV-ETLAI, we compared its performance to ground truth datasets labeled by a human expert. In addition, we analyzed CNV data collected with CNV-ETLAI via a crowdsourcing approach. RESULTS: Compared with a human expert, CNV-ETLAI improved CNV detection accuracy by 4% and performed the analysis 60 times faster. This performance can improve further as the CNV-ETLAI database scales with increasing usage. In total, 5,800 CNVs were collected from 2,313 journal articles. Total CNV frequency for the whole chromosome was highest for chromosome X, whereas CNV frequency per 1 Mb of genomic length was highest for chromosome 22. CONCLUSIONS: We have developed, tested, and shared CNV-ETLAI for research and clinical purposes (https://lmic.mgh.harvard.edu/CNV-ETLAI). Use of CNV-ETLAI is expected to ease and accelerate the diagnostic classification and interpretation of CNVs.


Subject(s)
Artificial Intelligence , DNA Copy Number Variations , Algorithms , DNA Copy Number Variations/genetics , Databases, Factual , Genomics , Humans
12.
Article in English | MEDLINE | ID: mdl-36777485

ABSTRACT

Current research on medical image processing relies heavily on the amount and quality of input data. Specifically, supervised machine learning methods require well-annotated datasets. A lack of annotation tools limits the potential to achieve high-volume processing and scaled systems with a proper reward mechanism. We developed MarkIt, a web-based tool for collaborative annotation of medical imaging data with artificial intelligence and blockchain technologies. Our platform handles both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images and allows users to annotate them efficiently for classification and object detection tasks. MarkIt can accelerate the annotation process and keeps track of user activities to calculate a fair reward. A proof-of-concept experiment was conducted with three fellowship-trained radiologists, each of whom annotated 1,000 chest X-ray studies for multi-label classification. We calculated the inter-rater agreement and estimated the value of the dataset to distribute the reward to annotators using a cryptocurrency. We hypothesize that MarkIt makes the typically arduous annotation task more efficient. In addition, MarkIt can serve as a platform to evaluate the value of data and to trade annotation results in a more scalable manner in the future. The platform is publicly available for testing at https://markit.mgh.harvard.edu.

13.
Korean J Radiol ; 21(1): 33-41, 2020 01.
Article in English | MEDLINE | ID: mdl-31920027

ABSTRACT

Artificial intelligence has been applied to many industries, including medicine. Among the various techniques in artificial intelligence, deep learning has attained the highest popularity in medical imaging in recent years. Many articles on deep learning have been published in radiologic journals. However, radiologists may have difficulty in understanding and interpreting these studies because the study methods of deep learning differ from those of traditional radiology. This review article aims to explain the concepts and terms that are frequently used in deep learning radiology articles, facilitating general radiologists' understanding.


Subject(s)
Deep Learning , Humans , Neural Networks, Computer , Publications , Radiology
14.
Radiology ; 294(1): 199-209, 2020 01.
Article in English | MEDLINE | ID: mdl-31714194

ABSTRACT

Background Multicenter studies are required to validate the added benefit of using deep convolutional neural network (DCNN) software for detecting malignant pulmonary nodules on chest radiographs. Purpose To compare the performance of radiologists in detecting malignant pulmonary nodules on chest radiographs when assisted by deep learning-based DCNN software with that of radiologists or DCNN software alone in a multicenter setting. Materials and Methods Investigators at four medical centers retrospectively identified 600 lung cancer-containing chest radiographs and 200 normal chest radiographs. Each radiograph with a lung cancer had at least one malignant nodule confirmed by CT and pathologic examination. Twelve radiologists from the four centers independently analyzed the chest radiographs and marked regions of interest. Commercially available deep learning-based computer-aided detection software separately trained, tested, and validated with 19 330 radiographs was used to find suspicious nodules. The radiologists then reviewed the images with the assistance of DCNN software. The sensitivity and number of false-positive findings per image of DCNN software, radiologists alone, and radiologists with the use of DCNN software were analyzed by using logistic regression and Poisson regression. Results The average sensitivity of radiologists improved (from 65.1% [1375 of 2112; 95% confidence interval {CI}: 62.0%, 68.1%] to 70.3% [1484 of 2112; 95% CI: 67.2%, 73.1%], P < .001) and the number of false-positive findings per radiograph declined (from 0.2 [488 of 2400; 95% CI: 0.18, 0.22] to 0.18 [422 of 2400; 95% CI: 0.16, 0.2], P < .001) when the radiologists re-reviewed radiographs with the DCNN software. For the 12 radiologists in this study, 104 of 2400 radiographs were positively changed (from false-negative to true-positive or from false-positive to true-negative) using the DCNN, while 56 of 2400 radiographs were changed negatively. 
Conclusion Radiologists had better performance with deep convolutional network software for the detection of malignant pulmonary nodules on chest radiographs than without. © RSNA, 2019 Online supplemental material is available for this article. See also the editorial by Jacobson in this issue.
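The reader-performance metrics above reduce to simple ratios of counts; a sketch using the figures quoted in the abstract (1484 of 2112 nodules detected and 422 false-positive marks across 2400 radiographs with DCNN assistance):

```python
def sensitivity(true_positives, total_lesions):
    """Fraction of reference-standard lesions the readers detected."""
    return true_positives / total_lesions

def false_positives_per_image(false_positives, n_images):
    """Average number of false-positive marks per radiograph."""
    return false_positives / n_images

print(round(sensitivity(1484, 2112), 3))               # ~0.703
print(round(false_positives_per_image(422, 2400), 2))  # ~0.18
```

The study's confidence intervals and significance tests (logistic and Poisson regression) are not reproduced here; this only shows how the point estimates arise from the counts.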


Subject(s)
Lung Neoplasms/diagnostic imaging , Multiple Pulmonary Nodules/diagnostic imaging , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Solitary Pulmonary Nodule/diagnostic imaging , Adult , Aged , Female , Humans , Lung/diagnostic imaging , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Young Adult
15.
Sci Rep ; 9(1): 15540, 2019 10 29.
Article in English | MEDLINE | ID: mdl-31664075

ABSTRACT

Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process only uses a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram-space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for their feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared to conventional reconstructed image-space-based systems for both tasks, regardless of scanning geometries in terms of projections or detectors. Further, SinoNet performed significantly better when using sparsely sampled sinograms than conventional networks operating in image-space. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where low radiation dose is desired. These findings also demonstrate another strength of deep learning where it can analyze and interpret sinograms that are virtually impossible for human experts.

16.
Nat Biomed Eng ; 3(3): 173-182, 2019 03.
Article in English | MEDLINE | ID: mdl-30948806

ABSTRACT

Owing to improvements in image recognition via deep learning, machine-learning algorithms could eventually be applied to automated medical diagnoses that can guide clinical decision-making. However, these algorithms remain a 'black box' in terms of how they generate the predictions from the input data. Also, high-performance deep learning requires large, high-quality training datasets. Here, we report the development of an understandable deep-learning system that detects acute intracranial haemorrhage (ICH) and classifies five ICH subtypes from unenhanced head computed-tomography scans. By using a dataset of only 904 cases for algorithm training, the system achieved a performance similar to that of expert radiologists in two independent test datasets containing 200 cases (sensitivity of 98% and specificity of 95%) and 196 cases (sensitivity of 92% and specificity of 95%). The system includes an attention map and a prediction basis retrieved from training data to enhance explainability, and an iterative process that mimics the workflow of radiologists. Our approach to algorithm development can facilitate the development of deep-learning systems for a variety of clinical applications and accelerate their adoption into clinical practice.


Subject(s)
Algorithms , Databases as Topic , Deep Learning , Intracranial Hemorrhages/diagnosis , Acute Disease , Intracranial Hemorrhages/diagnostic imaging
17.
Radiol Artif Intell ; 1(4): e180066, 2019 Jul.
Article in English | MEDLINE | ID: mdl-33937795

ABSTRACT

PURPOSE: To investigate the diagnostic accuracy of a cascading convolutional neural network (CNN) for urinary stone detection on unenhanced CT images and to evaluate the performance of pretrained models enriched with labeled CT images across different scanners. MATERIALS AND METHODS: This HIPAA-compliant, institutional review board-approved, retrospective clinical study used unenhanced abdominopelvic CT scans from 535 adults suspected of having urolithiasis. The scans were obtained on two scanners (scanner 1 [hereafter S1] and scanner 2 [hereafter S2]). A radiologist reviewed clinical reports and labeled cases to determine the reference standard. Stones were present on 279 (S1, 131; S2, 148) and absent on 256 (S1, 158; S2, 98) scans. One hundred scans (50 from each scanner) were randomly reserved as the test dataset, and the rest were used to develop a cascade of two CNNs: the first CNN identified the extent of the urinary tract, and the second CNN detected the presence of stone. Nine model variations were developed by combining different training data sources (S1, S2, or both [hereafter SB]) with (ImageNet, GrayNet) and without (Random) pretrained CNNs. First, models were compared for generalizability at the section level. Second, models were assessed by using area under the receiver operating characteristic curve (AUC) and accuracy at the patient level with the test dataset from both scanners (n = 100). RESULTS: The GrayNet-pretrained model showed higher classification accuracy than ImageNet-pretrained or Random-initialized models when tested with data from the same or a different scanner at the section level. At the patient level, the AUC for stone detection was 0.92-0.95, depending on the model. Accuracy of GrayNet-SB (95%) was higher than that of ImageNet-SB (91%) and Random-SB (88%). For stones larger than 4 mm, all models showed similar performance (false-negative results: two of 34). For stones smaller than 4 mm, the numbers of false-negative results for GrayNet-SB, ImageNet-SB, and Random-SB were one of 16, three of 16, and five of 16, respectively. GrayNet-SB identified stones in all 22 test cases that had obstructive uropathy. CONCLUSION: A cascading model of CNNs can detect urinary tract stones on unenhanced CT scans with high accuracy (AUC, 0.954). Performance and generalization of CNNs across scanners can be enhanced by using transfer learning with datasets enriched with labeled medical images. © RSNA, 2019. Supplemental material is available for this article. Note: An earlier incorrect version appeared online; this article was corrected on August 6, 2019.

18.
J Digit Imaging ; 32(4): 665-671, 2019 08.
Article in English | MEDLINE | ID: mdl-30478479

ABSTRACT

Despite the well-established impact of sex and sex hormones on bone structure and density, sexual dimorphism in the hand and wrist has received limited description in the literature. We developed a deep convolutional neural network (CNN) model to predict sex from hand radiographs of children and adults aged between 5 and 70 years. Of the 1531 radiographs tested, the algorithm predicted sex correctly in 95.9% (κ = 0.92) of the cases. Two human radiologists achieved 58% (κ = 0.15) and 46% (κ = -0.07) accuracy. The class activation maps (CAM) showed that the model mostly focused on the 2nd and 3rd metacarpal base or thumb sesamoid in women, and on the distal radioulnar joint, distal radial physis and epiphysis, or 3rd metacarpophalangeal joint in men. The radiologists reviewed 70 cases (35 females and 35 males) labeled with sex along with heat maps generated by CAM, but they could not find any patterns distinguishing the two sexes. A small sample of patients (n = 44) with sexual developmental disorders or transgender identity was selected for a preliminary exploration of the model's application. The model prediction agreed with phenotypic sex in only 77.8% (κ = 0.54) of these cases. To the best of our knowledge, this is the first study to demonstrate a machine learning model performing a task that human experts could not.
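The κ values quoted above are Cohen's kappa, agreement corrected for chance. A minimal sketch for binary labels (the toy label sequences are hypothetical, not the study's data):

```python
def cohens_kappa(pred, truth):
    """Cohen's kappa for two binary (0/1) label sequences of equal length."""
    n = len(pred)
    observed = sum(p == t for p, t in zip(pred, truth)) / n
    # Expected chance agreement from the marginal label frequencies
    p1 = sum(pred) / n
    t1 = sum(truth) / n
    expected = p1 * t1 + (1 - p1) * (1 - t1)
    return (observed - expected) / (1 - expected)

print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0: perfect agreement
print(cohens_kappa([1, 0, 1, 0], [1, 1, 0, 0]))  # 0.0: chance-level agreement
```

This is why a radiologist scoring 46% raw accuracy gets a slightly negative κ: the raw score is below what chance alone would produce.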


Subject(s)
Deep Learning , Hand/anatomy & histology , Image Processing, Computer-Assisted/methods , Radiography/methods , Sex Characteristics , Wrist/anatomy & histology , Adolescent , Adult , Aged , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Young Adult
19.
Skeletal Radiol ; 48(2): 275-283, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30069585

ABSTRACT

OBJECTIVE: Radiographic bone age assessment (BAA) is used in the evaluation of pediatric endocrine and metabolic disorders. We previously developed an automated artificial intelligence (AI) deep learning algorithm to perform BAA using convolutional neural networks. We compared the BAA performance of a cohort of pediatric radiologists with and without AI assistance. MATERIALS AND METHODS: Six board-certified, subspecialty-trained pediatric radiologists interpreted 280 age- and gender-matched bone age radiographs ranging from 5 to 18 years. Three of those radiologists then performed BAA with AI assistance. Bone age accuracy and root mean squared error (RMSE) were used as measures of accuracy. The intraclass correlation coefficient (ICC) evaluated inter-rater variation. RESULTS: AI BAA accuracy was 68.2% overall and 98.6% within 1 year; mean six-reader cohort accuracy was 63.6% overall and 97.4% within 1 year. AI RMSE was 0.601 years, while mean single-reader RMSE was 0.661 years. Pooled RMSE decreased from 0.661 to 0.508 years with AI assistance, decreasing for every individual reader. ICC without AI was 0.9914 and with AI was 0.9951. CONCLUSIONS: AI improves radiologists' bone age assessment by increasing accuracy and decreasing variability and RMSE. Radiologists using AI outperform AI alone, a radiologist alone, or a pooled cohort of experts. This suggests that AI may optimally be used as an adjunct to radiologist interpretation of imaging studies to improve performance.
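RMSE, the error measure used above, can be sketched directly; the bone ages below are hypothetical examples, not study data:

```python
import math

def rmse(predicted, reference):
    """Root mean squared error between predicted and reference bone ages (years)."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# Three hypothetical reads: off by 0.5, 0.5, and 1.0 years
print(rmse([10.0, 12.5, 8.0], [10.5, 12.0, 9.0]))
```

Because errors are squared before averaging, RMSE penalizes the occasional large miss more heavily than mean absolute error would, which suits a clinical task where a one-year bone age error matters more than several small ones.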


Subject(s)
Age Determination by Skeleton/methods , Artificial Intelligence , Bone Diseases, Metabolic/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Adolescent , Algorithms , Child , Child, Preschool , Deep Learning , Female , Humans , Male , Retrospective Studies
20.
Radiology ; 288(2): 318-328, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29944078

ABSTRACT

Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.


Subject(s)
Machine Learning , Radiology Information Systems , Radiology/methods , Radiology/trends , Humans