Results 1 - 4 of 4
1.
Eur Radiol; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913244

ABSTRACT

OBJECTIVES: To train machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iGuide categorisation, and to determine whether prediction models can generalise across multiple clinical sites and outperform human experts.

METHODS: Adult brain computed tomography (CT) referrals from scans performed in three CT centres in Ireland in 2020 and 2021 were retrospectively collected. Two radiographers analysed the justification of 3000 randomly selected referrals using iGuide, with two consultant radiologists analysing the referrals with disagreement. Insufficient or duplicate referrals were discarded. The inter-rater agreement among radiographers and consultants was computed. A random split (4:1) was performed to apply machine learning (ML) and deep learning (DL) techniques to the unstructured clinical indications to automate retrospective justification auditing with multi-class classification. The accuracy and macro-averaged F1 score of the best-performing classifier of each type on the training set were computed on the test set.

RESULTS: 42 referrals were discarded. Of the remainder, 1909 (64.5%) were justified, 811 (27.4%) were potentially justified, and 238 (8.1%) were unjustified. The agreement between radiographers (κ = 0.268) was lower than that between radiologists (κ = 0.460). The best-performing ML model was the bag-of-words-based gradient-boosting classifier, achieving 94.4% accuracy and a macro F1 of 0.94. DL models were inferior, with bi-directional long short-term memory achieving 92.3% accuracy and a macro F1 of 0.92, outperforming multilayer perceptrons.

CONCLUSION: Interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ML and DL can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iGuide interpreter when retrospectively vetting radiology referrals.

CLINICAL RELEVANCE STATEMENT: Healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. This would enable better implementation of imaging referral guidelines in clinical practice and reduce population dose burden, CT waiting lists, and wasteful use of resources.

KEY POINTS: Significant variation exists among human experts in interpreting unstructured clinical indications/patient presentations. Machine and deep learning can automate the justification analysis of radiology referrals according to iGuide categorisation. Machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.
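A minimal sketch of how a bag-of-words + gradient-boosting classifier of the kind described above could be assembled, assuming scikit-learn; the referral texts, labels, and settings below are invented for illustration and do not represent the authors' data or tuned pipeline.

```python
# Hypothetical sketch: bag-of-words features feeding a multi-class
# gradient-boosting classifier, evaluated with accuracy and macro F1.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Invented free-text clinical indications with iGuide-style labels.
texts = [
    "sudden onset worst-ever headache, query subarachnoid haemorrhage",
    "head injury on warfarin, GCS 14",
    "new focal weakness and confusion",
    "chronic dizziness, no red flags",
    "longstanding tension-type headache, normal examination",
    "patient requests scan, no new symptoms",
]
labels = [
    "justified", "justified", "justified",
    "potentially justified", "potentially justified", "unjustified",
]

# The paper reports a 4:1 random split of referrals.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42
)

model = Pipeline([
    ("bow", CountVectorizer()),             # bag-of-words representation
    ("gbc", GradientBoostingClassifier()),  # multi-class gradient boosting
])
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```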

2.
J Med Imaging Radiat Sci; 54(2): 376-385, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37062603

ABSTRACT

BACKGROUND AND PURPOSE: Artificial intelligence (AI) is present in many areas of our lives. Much of the digital data generated in healthcare can be used to build automated systems that improve existing workflows and create a more personalised healthcare experience for patients. This review outlines selected current and potential AI applications in medical imaging practice and provides a view of how diagnostic imaging suites will operate in the future. Challenges associated with potential applications are discussed, and the considerations healthcare staff need in order to benefit from AI-enabled solutions are outlined.

METHODS: Several electronic databases, including PubMed, ScienceDirect, Google Scholar, and the University College Dublin Library Database, were used to identify relevant articles with a Boolean search strategy. Textbooks, government sources, and vendor websites were also considered.

RESULTS AND DISCUSSION: Many AI-enabled solutions in radiographic practice are already available, with more automation on the horizon. Traditional workflows will become faster, more effective, and more user friendly. AI can handle both administrative and technical work, making it applicable across all aspects of medical imaging practice.

CONCLUSION: AI offers significant potential to automate many manual tasks, ensure service consistency, and improve patient care. Radiographers, radiation therapists, and clinicians should ensure they have an adequate understanding of the technology to enable ethical oversight of its implementation.


Subjects
Artificial Intelligence; Delivery of Health Care; Humans; Radiography
3.
Insights Imaging; 13(1): 127, 2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35925429

ABSTRACT

BACKGROUND: With the substantial increase in utilisation of computed tomography (CT), inappropriate imaging is a significant concern. Manual justification audits of radiology referrals are time-consuming and require financial resources. We aimed to retrospectively audit the justification of brain CT referrals by applying natural language processing and traditional machine learning (ML) techniques to predict their justification based on the audit outcomes.

METHODS: Two human experts retrospectively analysed the justification of 375 adult brain CT referrals performed in a tertiary referral hospital during the 2019 calendar year, using a cloud-based platform for structured referring. Cohen's kappa was computed to measure inter-rater reliability. Referrals were represented as bag-of-words (BOW) and term frequency-inverse document frequency models. Text preprocessing techniques, including custom stop words (CSW) and spell correction (SC), were applied to the referral text. Logistic regression, random forest, and support vector machines (SVM) were used to predict the justification of referrals. The data were split into training and test sets (300/75), and the test set was used to compute weighted accuracy, sensitivity, specificity, and the area under the curve (AUC).

RESULTS: In total, 253 (67.5%) examinations were deemed justified, 75 (20.0%) unjustified, and 47 (12.5%) maybe justified. The agreement between the annotators was strong (κ = 0.835). The BOW + CSW + SC + SVM model outperformed the other binary models with a weighted accuracy of 92%, a sensitivity of 91%, a specificity of 93%, and an AUC of 0.948.

CONCLUSIONS: Traditional ML models can accurately predict the justification of unstructured brain CT referrals. This offers potential for automated justification analysis of CT referrals in clinical departments.
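A rough sketch of the kind of pipeline described above (bag-of-words with custom stop words feeding a linear SVM, with Cohen's kappa for inter-rater agreement), assuming scikit-learn; the stop words, annotator labels, and referral texts are invented, and TF-IDF weighting and spell correction are omitted for brevity.

```python
# Hypothetical sketch of the binary justification classifier and agreement metric.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Inter-rater agreement between two hypothetical annotators.
rater_a = ["justified", "unjustified", "justified", "maybe", "justified"]
rater_b = ["justified", "unjustified", "maybe", "maybe", "justified"]
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))

custom_stop_words = ["please", "thanks", "scan"]  # assumed domain stop words

clf = Pipeline([
    ("bow", CountVectorizer(stop_words=custom_stop_words)),  # BOW + CSW
    ("svm", SVC(kernel="linear")),                           # linear SVM
])

# Invented binary training data: 1 = justified, 0 = unjustified.
texts = [
    "acute confusion after a fall while on anticoagulants",
    "first seizure with new focal weakness",
    "thunderclap headache, query bleed",
    "chronic tension headache, normal neurological exam",
    "patient requests imaging, no clinical indication",
    "mild dizziness, no red flags",
]
y = [1, 1, 1, 0, 0, 0]

clf.fit(texts, y)
# Decision-function scores can be used for ROC/AUC on a held-out test set;
# here they are computed on the training data purely to show the API.
print("AUC (training data):", roc_auc_score(y, clf.decision_function(texts)))
```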

4.
AJR Am J Roentgenol; 194(2): 469-74, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20093611

ABSTRACT

OBJECTIVE: Orthopedic injury and intracranial hemorrhage are commonly encountered in emergency radiology, and accurate and timely diagnosis is important. The purpose of this study was to determine whether the diagnostic accuracy of handheld computing devices is comparable to that of monitors that might be used in emergency teleconsultation.

SUBJECTS AND METHODS: Two handheld devices, a Dell Axim personal digital assistant (PDA) and an Apple iPod Touch, were studied. The diagnostic efficacy of each device was tested against that of secondary-class monitors (the primary class being clinical workstation displays) for each of two image types, posteroanterior wrist radiographs and brain CT slices, yielding four separate observer performance studies. Participants read a bank of 30 wrist or brain images, searching for a specific abnormality (distal radial fracture or fresh intracranial bleed), and rated their confidence in their decisions. A total of 168 readings by examining radiologists of the American Board of Radiology were gathered, and the results were subjected to receiver operating characteristic (ROC) analysis.

RESULTS: In the PDA brain CT study, the scores of PDA readings were significantly higher than those of monitor readings for all observers (p ≤ 0.01) and for radiologists who were not neuroradiology specialists (p ≤ 0.05). No statistically significant differences between handheld device and monitor findings were found for the PDA wrist images or in the iPod Touch studies, although some comparisons approached significance.

CONCLUSION: Handheld devices show promise in the field of emergency teleconsultation for detection of basic orthopedic injuries and intracranial hemorrhage. Further investigation is warranted.
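A minimal sketch of a reader-confidence ROC analysis of the kind used in such observer performance studies, assuming scikit-learn; the ground truth and ratings below are invented, not study data.

```python
# Hypothetical ROC analysis of one reader's confidence ratings per display type.
from sklearn.metrics import roc_auc_score, roc_curve

# Ground truth per case: 1 = abnormality present (e.g. fresh bleed), 0 = absent.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
# Confidence ratings (1-5 scale) for the same cases on a handheld device
# and on a secondary-class monitor.
device_ratings = [5, 4, 3, 4, 2, 1, 3, 2]
monitor_ratings = [5, 5, 4, 3, 2, 2, 2, 1]

for name, ratings in [("handheld", device_ratings), ("monitor", monitor_ratings)]:
    fpr, tpr, _ = roc_curve(truth, ratings)   # ROC operating points
    print(name, "AUC:", roc_auc_score(truth, ratings))
# A formal comparison of the two AUCs (e.g. DeLong's test) would then assess
# whether device and monitor readings differ significantly.
```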


Subjects
Brain Injuries/diagnostic imaging; Computers, Handheld; Data Display; Emergencies; Radiology/instrumentation; User-Computer Interface; Wrist Injuries/diagnostic imaging; Humans; ROC Curve; Software; Tomography, X-Ray Computed