Machine learning and deep learning for classifying the justification of brain CT referrals.
Potocnik, Jaka; Thomas, Edel; Lawlor, Aonghus; Kearney, Dearbhla; Heffernan, Eric J; Killeen, Ronan P; Foley, Shane J.
Affiliation
  • Potocnik J; University College Dublin School of Medicine, Dublin, Ireland. jaka.potocnik@ucd.ie.
  • Thomas E; University College Dublin School of Medicine, Dublin, Ireland. edel.thomas@ucd.ie.
  • Lawlor A; University College Dublin School of Computer Science, Dublin, Ireland. aonghus.lawlor@ucd.ie.
  • Kearney D; The Insight Centre for Data Analytics, Dublin, Ireland. dearbhlakearney92@gmail.com.
  • Heffernan EJ; Mater Misericordiae University Hospital, Dublin, Ireland. ericjheffernan@gmail.com.
  • Killeen RP; St. Vincent's University Hospital, Dublin, Ireland. ronanpkilleen@gmail.com.
  • Foley SJ; University College Dublin School of Medicine, Dublin, Ireland.
Eur Radiol; 2024 Jun 24.
Article in En | MEDLINE | ID: mdl-38913244
ABSTRACT

OBJECTIVES:

To train machine and deep learning models to automate the justification analysis of radiology referrals in accordance with iGuide categorisation, and to determine whether the prediction models can generalise across multiple clinical sites and outperform human experts.

METHODS:

Adult brain computed tomography (CT) referrals from scans performed in three CT centres in Ireland in 2020 and 2021 were retrospectively collected. Two radiographers analysed the justification of 3000 randomly selected referrals using iGuide, with two consultant radiologists analysing the referrals on which the radiographers disagreed. Insufficient or duplicate referrals were discarded. The inter-rater agreement between the radiographers and between the consultants was computed. A random 4:1 train-test split was performed, and machine learning (ML) and deep learning (DL) techniques were applied to the unstructured clinical indications to automate retrospective justification auditing as a multi-class classification task. For the best-performing classifier of each type on the training set, accuracy and the macro-averaged F1 score were computed on the test set.
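
The components named above and in the results (a random 4:1 split, bag-of-words features with a gradient-boosting classifier, and test-set accuracy with macro-averaged F1) can be illustrated with a minimal scikit-learn sketch. This is not the authors' code: load_referrals() is a hypothetical loader, and the label encoding and hyperparameters are assumptions. The reported inter-rater agreement (κ) presumably refers to Cohen's kappa, available as sklearn.metrics.cohen_kappa_score.

```python
# Minimal sketch of the described workflow, assuming scikit-learn.
# load_referrals() is a hypothetical loader; the label encoding is assumed
# (0 = justified, 1 = potentially justified, 2 = unjustified).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

indications, labels = load_referrals()  # free-text indications + iGuide labels

# Random 4:1 train-test split.
X_train, X_test, y_train, y_test = train_test_split(
    indications, labels, test_size=0.2, random_state=42, stratify=labels)

# Bag-of-words features feeding a gradient-boosting classifier
# for three-class justification classification.
model = make_pipeline(CountVectorizer(), GradientBoostingClassifier())
model.fit(X_train, y_train)

# Test-set accuracy and macro-averaged F1 across the three classes.
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("macro F1:", f1_score(y_test, y_pred, average="macro"))
```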

RESULTS:

Of the 3000 referrals, 42 were discarded as insufficient or duplicates. Of the remaining 2958, 1909 (64.5%) were justified, 811 (27.4%) potentially justified, and 238 (8.1%) unjustified. Agreement between the radiographers (κ = 0.268) was lower than that between the radiologists (κ = 0.460). The best-performing ML model was a bag-of-words-based gradient-boosting classifier, achieving 94.4% accuracy and a macro F1 of 0.94. DL models performed worse: the best, a bi-directional long short-term memory network, achieved 92.3% accuracy and a macro F1 of 0.92, outperforming the multilayer perceptrons.
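
The DL architectures are only named in the abstract; as an illustration, a bi-directional LSTM text classifier over the three iGuide categories might be set up as in the Keras sketch below. Vocabulary size, sequence length, layer widths, and training settings are all assumptions rather than the paper's configuration. Test-set accuracy and macro F1 can then be computed exactly as in the ML sketch above.

```python
# Illustrative bi-directional LSTM classifier, assuming TensorFlow/Keras.
# All sizes and hyperparameters are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10_000   # assumed vocabulary size
MAX_LEN = 64          # assumed maximum indication length in tokens
NUM_CLASSES = 3       # justified / potentially justified / unjustified

# Map free-text clinical indications to integer token sequences.
vectoriser = layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=MAX_LEN)
# vectoriser.adapt(train_texts)  # fit the vocabulary on training indications only

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),  # token embeddings
    layers.Bidirectional(layers.LSTM(64)),              # bi-directional LSTM encoder
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),    # iGuide category probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(vectoriser(train_texts), train_labels, epochs=10, validation_split=0.1)
```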

CONCLUSION:

Interpreting unstructured clinical indications is challenging, necessitating clinical decision support. ML and DL can generalise across multiple clinical sites, outperform human experts, and be used as an artificial intelligence-based iGuide interpreter when retrospectively vetting radiology referrals.

CLINICAL RELEVANCE STATEMENT

Healthcare vendors and clinical sites should consider developing and utilising artificial intelligence-enabled systems for justifying medical exposures. This would enable better implementation of imaging referral guidelines in clinical practice and reduce the population dose burden, CT waiting lists, and wasteful use of resources.

KEY POINTS

  • Significant variation exists among human experts in interpreting unstructured clinical indications/patient presentations.
  • Machine and deep learning can automate the justification analysis of radiology referrals according to iGuide categorisation.
  • Machine and deep learning can improve retrospective and prospective justification auditing for better implementation of imaging referral guidelines.

Full text: 1 | Database: MEDLINE | Language: En | Year of publication: 2024 | Document type: Article | Country of affiliation: Ireland
