Results 1 - 20 of 79
1.
Neuroradiology ; 66(9): 1513-1526, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38963424

ABSTRACT

BACKGROUND AND PURPOSE: Traumatic brain injury (TBI) is a major source of health loss and disability worldwide. Accurate and timely diagnosis of TBI is critical for appropriate treatment and management of the condition. Neuroimaging plays a crucial role in the diagnosis and characterization of TBI. Computed tomography (CT) is the first-line diagnostic imaging modality typically utilized in patients with suspected acute mild, moderate and severe TBI. Radiology reports play a crucial role in the diagnostic process, providing critical information about the location and extent of brain injury, as well as factors that could prevent secondary injury. However, the complexity and variability of radiology reports can make it challenging for healthcare providers to extract the necessary information for diagnosis and treatment planning. METHODS/RESULTS/CONCLUSION: In this article, we report the efforts of an international group of TBI imaging experts to develop a clinical radiology report template for CT scans obtained in patients with suspected TBI, consisting of fourteen subdivisions (CT technique, mechanism of injury or clinical history, presence of scalp injuries, fractures, potential vascular injuries, potential injuries involving the extra-axial spaces, brain parenchymal injuries, potential injuries involving the cerebrospinal fluid spaces and the ventricular system, mass effect, secondary injuries, and prior or coexisting pathology).


Subject(s)
Brain Injuries, Traumatic , Tomography, X-Ray Computed , Brain Injuries, Traumatic/diagnostic imaging , Humans , Tomography, X-Ray Computed/methods
2.
J Clin Densitom ; 27(1): 101437, 2024.
Article in English | MEDLINE | ID: mdl-38011777

ABSTRACT

INTRODUCTION: Professional guidance and standards assist radiologic interpreters in generating high-quality reports. DXA reporting Official Positions were first provided by the ISCD in 2003; however, as the field has progressed, some of the current recommendations require revision and updating. This manuscript details the research approach and provides updated DXA reporting guidance. METHODS: Key Questions were proposed following established ISCD protocols and approved by the Position Development Conference Steering Committee. Literature related to each question was accumulated by searching PubMed, and existing guidelines from other organizations were extracted from websites. Modifications and additions to the ISCD Official Positions were determined by an expert panel after reviewing the Task Force proposals and position papers. RESULTS: Since most DXA is now performed in radiology departments, an approach was endorsed that better aligns with standard radiologic reports. To achieve this, reporting elements were divided into required (minimum) and optional. Collectively, the required components comprise a standard diagnostic report and are considered the minimum necessary to generate an acceptable report. Additional elements were retained and categorized as optional. These optional components were considered relevant but tailored to a consultative, clinically oriented report. Although this information is beneficial, not all interpreters have access to sufficient clinical information, or may not have the clinical expertise to expand beyond a diagnostic report. Consequently, these are not required for an acceptable report. CONCLUSION: These updated ISCD positions conform with the DXA field's evolution over the past 20 years. Specifically, a basic diagnostic report better aligns with radiology standards, and additional elements (which are valued by treating clinicians) remain acceptable but are optional and not required. Additionally, reporting guidance for newer elements, such as fracture risk assessment, is incorporated. It is our expectation that these updated Official Positions will improve compliance with required standards and generate high-quality DXA reports that are valuable to the recipient clinician and contribute to best patient care.


Subject(s)
Bone Density , Radiology , Humans , Absorptiometry, Photon , Societies, Medical
3.
Knee Surg Sports Traumatol Arthrosc ; 32(5): 1077-1086, 2024 May.
Article in English | MEDLINE | ID: mdl-38488217

ABSTRACT

PURPOSE: The purpose of this study was to evaluate the effectiveness of an Artificial Intelligence-Large Language Model (AI-LLM) at improving the readability of knee radiology reports. METHODS: Reports of 100 knee X-rays, 100 knee computed tomography (CT) scans and 100 knee magnetic resonance imaging (MRI) scans were retrieved. The following prompt command was inserted into the AI-LLM: 'Explain this radiology report to a patient in layman's terms in the second person: [Report Text]'. The Flesch-Kincaid reading level (FKRL) score, Flesch reading ease (FRE) score and report length were calculated for the original radiology report and the AI-LLM-generated report. Any 'hallucination' or inaccurate text produced in the AI-LLM-generated report was documented. RESULTS: Statistically significant improvements in mean FKRL scores were observed in the AI-LLM-generated X-ray report (12.7 ± 1.0 to 7.2 ± 0.6), CT report (13.4 ± 1.0 to 7.5 ± 0.5) and MRI report (13.5 ± 0.9 to 7.5 ± 0.6). Statistically significant improvements in mean FRE scores were observed in the AI-LLM-generated X-ray report (39.5 ± 7.5 to 76.8 ± 5.1), CT report (27.3 ± 5.9 to 73.1 ± 5.6) and MRI report (26.8 ± 6.4 to 73.4 ± 5.0). Superior FKRL and FRE scores were observed in the AI-LLM-generated X-ray report compared to the AI-LLM-generated CT report and MRI report, p < 0.001. The hallucination rates in the AI-LLM-generated X-ray report, CT report and MRI report were 2%, 5% and 5%, respectively. CONCLUSIONS: This study highlights the promising use of AI-LLMs as an innovative, patient-centred strategy to improve the readability of knee radiology reports. The clinical relevance of this study is that an AI-LLM-generated knee radiology report may enhance patients' understanding of their imaging reports, potentially reducing the responder burden placed on the ordering physicians. However, due to the 'hallucinations' produced in the AI-LLM-generated report, the ordering physician must always engage in a collaborative discussion with the patient regarding both reports and the corresponding images. LEVEL OF EVIDENCE: Level IV.
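Both readability studies in this list rely on the same two Flesch formulas, which are fixed linear combinations of words per sentence and syllables per word. A minimal sketch in Python, using a naive vowel-group syllable counter (real readability tools use dictionary-based syllable counts, so scores will differ slightly):

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels (incl. y)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Reading Level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkrl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkrl
```

On a plain-language sentence the FRE score rises and the FKRL grade falls, which is exactly the before/after pattern both studies report.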


Subject(s)
Artificial Intelligence , Comprehension , Magnetic Resonance Imaging , Tomography, X-Ray Computed , Humans , Knee Joint/diagnostic imaging
4.
Foot Ankle Surg ; 30(4): 331-337, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38336501

ABSTRACT

BACKGROUND: The purpose of this study was to evaluate the efficacy of an Artificial Intelligence Large Language Model (AI-LLM) at improving the readability of foot and ankle orthopedic radiology reports. METHODS: The radiology reports from 100 foot or ankle X-rays, 100 computed tomography (CT) scans and 100 magnetic resonance imaging (MRI) scans were randomly sampled from the institution's database. The following prompt command was inserted into the AI-LLM: "Explain this radiology report to a patient in layman's terms in the second person: [Report Text]". The mean report length, Flesch reading ease score (FRES) and Flesch-Kincaid reading level (FKRL) were evaluated for both the original radiology report and the AI-LLM-generated report. The accuracy of the information contained within the AI-LLM report was assessed via a 5-point Likert scale. Additionally, any "hallucinations" generated by the AI-LLM report were recorded. RESULTS: There was a statistically significant improvement in mean FRES scores in the AI-LLM-generated X-ray report (33.8 ± 6.8 to 72.7 ± 5.4), CT report (27.8 ± 4.6 to 67.5 ± 4.9) and MRI report (20.3 ± 7.2 to 66.9 ± 3.9), all p < 0.001. There was also a statistically significant improvement in mean FKRL scores in the AI-LLM-generated X-ray report (12.2 ± 1.1 to 8.5 ± 0.4), CT report (15.4 ± 2.0 to 8.4 ± 0.6) and MRI report (14.1 ± 1.6 to 8.5 ± 0.5), all p < 0.001. Superior FRES scores were observed in the AI-LLM-generated X-ray report compared to the AI-LLM-generated CT report and MRI report, p < 0.001. The mean Likert score for the AI-LLM-generated X-ray report, CT report and MRI report was 4.0 ± 0.3, 3.9 ± 0.4, and 3.9 ± 0.4, respectively. The rate of hallucinations in the AI-LLM-generated X-ray report, CT report and MRI report was 4%, 7% and 6%, respectively. CONCLUSION: The AI-LLM was an efficacious tool for improving the readability of foot and ankle radiology reports across multiple imaging modalities. Superior FRES scores, together with superior Likert scores, were observed in the X-ray AI-LLM reports compared to the CT and MRI AI-LLM reports. This study demonstrates the potential use of AI-LLMs as a new patient-centric approach for enhancing patient understanding of their foot and ankle radiology reports. LEVEL OF EVIDENCE: Level IV.


Subject(s)
Artificial Intelligence , Comprehension , Humans , Magnetic Resonance Imaging , Tomography, X-Ray Computed , Foot/diagnostic imaging , Ankle/diagnostic imaging , Language
5.
AJR Am J Roentgenol ; 221(3): 373-376, 2023 09.
Article in English | MEDLINE | ID: mdl-37095665

ABSTRACT

Large language models (LLMs) such as ChatGPT are advanced artificial intelligence models that are designed to process and understand human language. LLMs have the potential to improve radiology reporting and patient engagement by automating generation of the clinical history and impression of a radiology report, creating layperson reports, and providing patients with pertinent questions and answers about findings in radiology reports. However, LLMs are error prone, and human oversight is needed to reduce the risk of patient harm.


Subject(s)
Artificial Intelligence , Radiology , Humans , Patient Participation
6.
J Med Internet Res ; 25: e43765, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37856174

ABSTRACT

BACKGROUND: A frequently used feature of electronic patient portals is the viewing of test results. Research on patient portals is abundant and offers evidence to help portal implementers make policy and practice decisions. In contrast, no comparable comprehensive summary of research addresses the direct release of and patient access to test results. OBJECTIVE: This scoping review aims to analyze and synthesize published research focused on patient and health care provider perspectives on the direct release of laboratory, imaging, and radiology results to patients via web portals. METHODS: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed. Searches were conducted in CINAHL, MEDLINE, and other databases. Citations were screened in Covidence using the inclusion and exclusion criteria. Primary studies that focused on patient and health care provider perspectives on patient access to laboratory and imaging results via web portals were included. An updated search was conducted up to August 2023. Our review included 27 articles: 20 examining patient views, 3 examining provider views, and 4 examining both patient and provider views. Data extraction and inductive data analysis were informed by sensitizing concepts from sociomaterial perspectives, and 15 themes were generated. RESULTS: Patient perspectives (24 papers) were synthesized into nine themes: (1) patterns of use and patient characteristics; (2) emotional response when viewing the results and uncertainty about their implications; (3) understanding test results; (4) preferences for mode and timing of result release; (5) information seeking and patients' actions motivated by viewing results via a portal; (6) contemplating changes in behavior and managing one's own health; (7) benefits of accessing test results via a portal; (8) limitations of accessing test results via a portal; and (9) suggestions for portal improvement. Health care provider perspectives (7 papers) were synthesized into six themes: (1) providers' view of the benefits of patient access to results via the portal; (2) effects on health care provider workload; (3) concerns about patient anxiety; (4) timing of result release into the patient portal; (5) the method of result release into the patient portal: manual versus automatic release; and (6) the effects of the hospital health information technology system on patient quality outcomes. CONCLUSIONS: The timing of the release of test results emerged as a particularly important topic. In some countries, the policy context may motivate immediate release of most tests directly into patient portals. However, our findings aim to make policy makers, health administrators, and other stakeholders aware of factors to consider when making decisions about the timing of result release. This review is sensitive to the characteristics of patient populations and portal technology and can inform result release framework policies. The findings are timely, as patient portals have become more common internationally.


Subject(s)
Electronic Health Records , Patient Portals , Humans , Health Personnel , Attitude of Health Personnel , Patients
7.
J Digit Imaging ; 36(3): 812-826, 2023 06.
Article in English | MEDLINE | ID: mdl-36788196

ABSTRACT

The rising incidence and mortality of cancer have led to an increasing amount of research in the field. To learn from preexisting data, it has become important to capture maximum information related to disease type, stage, treatment, and outcomes. Medical imaging reports are rich in this kind of information, but it is present only as free text. The extraction of information from such unstructured text reports is labor-intensive. The use of Natural Language Processing (NLP) tools to extract information from radiology reports can make it less time-consuming as well as more effective. In this study, we developed and compared different models for the classification of lung carcinoma reports using clinical concepts. This study was approved by the institutional ethics committee as a retrospective study with a waiver of informed consent. A clinical concept-based classification pipeline for lung carcinoma radiology reports was developed using rule-based as well as machine learning models, and the models were compared. The machine learning models used were XGBoost and two deep learning architectures with bidirectional long short-term memory (Bi-LSTM) neural networks. A corpus of 1,700 radiology reports, including computed tomography (CT) and positron emission tomography/computed tomography (PET/CT) reports, was used for development and testing. Five hundred and one radiology reports from the MIMIC-III Clinical Database version 1.4 were used for external validation. The pipeline achieved an overall F1 score of 0.94 on the internal set and 0.74 on external validation, with the rule-based algorithm using expert input giving the best performance. Among the machine learning models, the Bi-LSTM_dropout model performed better than the XGBoost model and the Bi-LSTM_simple model on the internal set, whereas on external validation, the Bi-LSTM_simple model performed relatively better than the other two. This pipeline can be used for clinical concept-based classification of radiology reports related to lung carcinoma from a large corpus and also for automated annotation of these reports.
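The rule-based arm of a pipeline like this can be sketched as regex concept matchers with crude sentence-level negation handling. The concept names, patterns, and output labels below are hypothetical illustrations, not the study's expert-curated rules:

```python
import re

# Hypothetical concept lexicon; the study's expert-curated rules are not public here.
CONCEPTS = {
    "primary_lesion": re.compile(r"\b(mass|nodule|lesion)\b", re.I),
    "nodal_disease":  re.compile(r"\b(lymphadenopathy|nodal)\b", re.I),
    "metastasis":     re.compile(r"\bmetastas\w+\b", re.I),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.I)

def extract_concepts(report: str) -> set[str]:
    """Return concepts asserted in the report, skipping negated sentences."""
    found = set()
    for sentence in re.split(r"(?<=[.])\s+", report):
        if NEGATION.search(sentence):
            continue  # crude negation handling: drop the whole sentence
        for name, pattern in CONCEPTS.items():
            if pattern.search(sentence):
                found.add(name)
    return found

def classify(report: str) -> str:
    """Map extracted concepts to a report-level label, rule-based style."""
    concepts = extract_concepts(report)
    if "metastasis" in concepts:
        return "metastatic disease"
    if "primary_lesion" in concepts:
        return "primary lesion present"
    return "no evidence of disease"
```

Real pipelines need far richer negation and uncertainty handling (hedges like "cannot exclude"), which is where the expert input cited above earns its keep.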


Subject(s)
Carcinoma , Radiology , Humans , Retrospective Studies , Positron Emission Tomography Computed Tomography , Natural Language Processing , Lung
8.
Arch Orthop Trauma Surg ; 143(7): 3753-3758, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35997839

ABSTRACT

BACKGROUND: Written communication can convey one's emotions, personality, and sentiments. Radiology reports employ medical jargon and serve to document a patient's condition. Patients might misinterpret this medical jargon in a way that increases their anxiety and makes them feel unwell. We were interested in whether linguistic tones in MRI reports vary between radiologists and correlate with the severity of pathology. QUESTIONS/PURPOSES: (1) Is there variation in linguistic tones among different radiologists reporting MRI results for rotator cuff tendinopathy? (2) Is the retraction of the supraspinatus tendon in millimeters associated with linguistic tones? METHODS: Two hundred twenty consecutive MRI reports of patients with full-thickness rotator cuff defects were collected. Supraspinatus retraction was measured on the MRI using viewer tools. Using Kruskal-Wallis H tests, we measured variation between 11 radiologists for the following tones: positive emotion, negative emotion, analytical thinking, cause, insight, tentativeness, certainty, and informal speech. We also measured the correlation of tones and the degree of tendon retraction. Multilevel mixed-effects linear regression models were constructed, seeking factors associated with tone, accounting for retraction, the presence of prior imaging, and the effects of each radiologist (nesting). RESULTS: There were statistically significant differences for all of the tones by radiologist. In bivariate analysis, greater retraction of the supraspinatus muscle in millimeters was associated with more negative emotion and certainty, and with less tentativeness. In multilevel mixed-effects linear regression, more negative tones were associated with greater retraction and absence of prior imaging. Greater tentativeness was associated with the absence of prior imaging, but not with retraction. CONCLUSIONS: Radiology reports have emotional content that is relatively negative, varies by radiologist, and is affected by pathology. Strategies for more hopeful, positive, optimistic descriptions of pathology have the potential to help patients feel better without introducing inaccuracies. LEVEL OF EVIDENCE: Level III, Diagnostic.
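Linguistic tone variables of this kind are, at their core, normalized counts of words drawn from category lexicons. A toy sketch, with made-up mini-lexicons standing in for the licensed dictionaries such tools actually use:

```python
import re

# Toy lexicons standing in for tone-category dictionaries (illustrative only).
TONES = {
    "negative":  {"tear", "severe", "loss", "degeneration", "atrophy"},
    "tentative": {"may", "possibly", "appears", "suggest", "probable"},
    "certainty": {"definite", "clearly", "complete", "always"},
}

def tone_scores(report: str) -> dict[str, float]:
    """Percentage of words falling in each tone category."""
    words = re.findall(r"[a-z']+", report.lower())
    return {tone: 100 * sum(w in lexicon for w in words) / len(words)
            for tone, lexicon in TONES.items()}
```

Per-report percentages like these are what feed the Kruskal-Wallis and mixed-effects analyses above.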


Subject(s)
Rotator Cuff Injuries , Tendinopathy , Humans , Rotator Cuff/pathology , Rotator Cuff Injuries/pathology , Magnetic Resonance Imaging/methods , Tendinopathy/pathology , Linguistics
9.
Acta Radiol ; 63(12): 1643-1653, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34846198

ABSTRACT

BACKGROUND: Orthopedists prefer imaging studies for the diagnosis, treatment, and follow-up of patients. PURPOSE: To determine the effect of orthopedists' characteristics, including subspecialty, age, education, and professional experience, on collaboration with radiologists and the usefulness of radiology reports for orthopedists in diagnosis and patient management. MATERIAL AND METHODS: Questionnaires, consisting of 21 questions investigating the orthopedists' characteristics, their behavior with radiology reports, and their thoughts on communication and collaboration with radiologists, were distributed to 205 orthopedists. Descriptive analysis was performed, and the effects of orthopedist characteristics on the outcomes were evaluated. RESULTS: In total, 161 of the 205 enrolled participants were included in the analysis. A total of 156 (96.9%) participants stated that they reviewed at least one official radiology report, with MRI reports reviewed at the highest rate (92.4%). The main reason provided for not reviewing the radiology reports, and requests regarding changes to radiology report formats, seemed to be mostly related to time pressure. Although a significant portion of the participants stated that clinical and surgical findings were inconsistent with radiology reports, less than half were inclined to contact the radiologist most of the time or always. Increasing age (P = 0.005), experience (P = 0.016), and university hospital specialization (P = 0.007) increased the tendency to hold multidisciplinary team meetings. Communication with radiologists increased with age (P < 0.001), while more experience reduced the impact of radiology reports on decision-making (P = 0.035). CONCLUSION: Increasing cooperation between orthopedists and radiologists will make a significant contribution to decision-making and treatment processes. Orthopedists' characteristics are influential factors in establishing this communication.


Subject(s)
Orthopedic Surgeons , Radiology , Humans , Radiography , Radiologists , Diagnostic Imaging
10.
BMC Med Inform Decis Mak ; 22(1): 272, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36258218

ABSTRACT

BACKGROUND: Cardiac magnetic resonance (CMR) imaging is important for diagnosis and risk stratification of hypertrophic cardiomyopathy (HCM) patients. However, collection of information from large numbers of CMR reports by manual review is time-consuming, error-prone and costly. Natural language processing (NLP) is an artificial intelligence method for automated extraction of information from narrative text, including text in CMR reports in electronic health records (EHR). Our objective was to assess whether NLP can accurately extract the diagnosis of HCM from CMR reports. METHODS: An NLP system with two tiers was developed for information extraction from narrative text in CMR reports; the first tier extracted information regarding HCM diagnosis, while the second extracted categorical and numeric concepts for HCM classification. We randomly allocated 200 HCM patients with CMR reports from 2004 to 2018 into training (100 patients with 185 CMR reports) and testing sets (100 patients with 206 reports). RESULTS: The NLP algorithms demonstrated very high performance compared to manual annotation. The algorithm to extract HCM diagnosis had an accuracy of 0.99. Accuracies for categorical concepts were: HCM morphologic subtype, 0.99; systolic anterior motion of the mitral valve, 0.96; mitral regurgitation, 0.93; left ventricular (LV) obstruction, 0.94; location of obstruction, 0.92; apical pouch, 0.98; LV delayed enhancement, 0.93; left atrial enlargement, 0.99; and right atrial enlargement, 0.98. Accuracies for numeric concepts were: maximal LV wall thickness, 0.96; LV mass, 0.99; LV mass index, 0.98; LV ejection fraction, 0.98; and right ventricular ejection fraction, 0.99. CONCLUSIONS: NLP identified and classified HCM from CMR narrative text reports with very high performance.
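The second tier's numeric-concept extraction is typically done with patterns over the narrative text. A hedged sketch, assuming simplified phrasings that real CMR reports only sometimes follow:

```python
import re

# Hypothetical phrasings; real CMR reports vary far more than these patterns cover.
PATTERNS = {
    "lv_ejection_fraction": re.compile(
        r"(?:LV|left ventricular) ejection fraction[^0-9]*(\d+(?:\.\d+)?)\s*%", re.I),
    "max_wall_thickness_mm": re.compile(
        r"maximal? (?:LV )?wall thickness[^0-9]*(\d+(?:\.\d+)?)\s*mm", re.I),
}

def extract_numeric_concepts(report: str) -> dict[str, float]:
    """Pull numeric concepts out of CMR narrative text, second-tier style."""
    out = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(report)
        if match:
            out[name] = float(match.group(1))
    return out
```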


Subject(s)
Cardiomyopathy, Hypertrophic , Natural Language Processing , Humans , Stroke Volume , Artificial Intelligence , Ventricular Function, Right , Cardiomyopathy, Hypertrophic/diagnostic imaging , Cardiomyopathy, Hypertrophic/pathology , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy
11.
Emerg Radiol ; 29(5): 855-862, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35701617

ABSTRACT

PURPOSE: Interactions between radiologists and emergency physicians are often diminished as imaging volume increases and more radiologists read off site. We explore how several commonly used phrasings are perceived by radiologists and emergency physicians, with the aim of decreasing ambiguity in reporting. METHODS: An anonymous survey was distributed to attendings and residents at seven academic radiology and emergency departments across the USA via a digital platform, as well as to an email group consisting of radiologists across the country with an interest in quality assurance. Physicians were asked to assign a percent score to probabilistic phrases such as "suspicious of" or "concerned for". Additional questions included how often the report findings are reviewed, what makes a good radiology report, and when it is useful to use the phrase "clinical correlation is recommended". Median scores and confidence intervals were compared using an independent Student's t-test. RESULTS: Generally, there was agreement between radiologists and emergency room physicians in how they interpret probabilistic phrases, except for the phrases "compatible with" and "subcentimeter liver lesions too small to characterize". Radiologists consider a useful report to answer the clinical question, be concise, and be well organized. Emergency physicians consider a useful report to be concise, be definitive or include a differential diagnosis, answer the clinical question, and recommend a next step. Radiologists and emergency physicians did not agree on the usefulness of the phrase "clinical correlation recommended", with radiologists finding the phrase more helpful under particular circumstances. CONCLUSION: The survey demonstrated a wide range of answers for probabilistic phrases for both radiologists and emergency physicians. While the medians and means of the two groups were often different by statistical significance, the actual percent difference was minor. This wide range of answers suggests that the use of probabilistic phrases may sometimes lead to misinterpretation between radiologist and emergency room physician and should be avoided or defined where possible.


Subject(s)
Radiologists , Radiology , Humans , Radiography , Surveys and Questionnaires
12.
J Med Syst ; 46(8): 55, 2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35788428

ABSTRACT

We describe the implementation of a standardized code system for notification of relevant expected or incidental findings in imaging exams, together with an automated textual mining tool for radiological report narratives, created to facilitate directing patients to specific lines of care, reduce the waiting time for interventions and consultations, and minimize delays to treatment. We report our initial 12-month experience with the process. A standardized code was attached to every radiology report in which a relevant finding was observed. On a daily basis, the notifications were sent to a dedicated medical team to review the notified abnormality and decide on a proper action. Between October 1, 2020, and September 30, 2021, 40,296 cross-sectional examinations (CT and MR scans) were evaluated in 35,944 patients. The main findings reported were calcified plaques in the trunk of the left coronary artery or trunk-equivalent vessels, pulmonary nodule/mass, and suspected liver disease. Follow-up data were available for 10,019 patients. Their ages ranged from 24 to 101 years (mean of 71.3 years), and 6,626 were female (66.1%). In 2,548 patients a complementary study or procedure was indicated, and 3,300 patients were referred to a specialist. Customized database searches looking for critical or relevant findings may facilitate patient referral to specific care lines, reduce the waiting time for interventions or consultations, and minimize delays to treatment.
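A finding-to-code notification table of this kind can be sketched as a simple keyword scan; the codes and care lines below are invented for illustration, as the institution's actual code set is not described here:

```python
# Hypothetical finding-to-code table; the institution's actual code set is not public.
NOTIFICATION_CODES = {
    "coronary calcification": ("CV01", "cardiology"),
    "pulmonary nodule":       ("PU02", "pulmonology"),
    "liver lesion":           ("GI03", "hepatology"),
}

def tag_report(report_text: str) -> list[tuple[str, str]]:
    """Attach standardized codes so a review team can route the patient."""
    text = report_text.lower()
    return [(code, care_line)
            for finding, (code, care_line) in NOTIFICATION_CODES.items()
            if finding in text]
```

In practice the codes are attached at dictation time and the text search acts as a safety net over the report database.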


Subject(s)
Diagnostic Imaging , Incidental Findings , Adult , Aged , Aged, 80 and over , Delivery of Health Care , Female , Humans , Male , Middle Aged , Young Adult
13.
BMC Med Imaging ; 21(1): 142, 2021 10 02.
Article in English | MEDLINE | ID: mdl-34600486

ABSTRACT

BACKGROUND: Automated language analysis of radiology reports using natural language processing (NLP) can provide valuable information on patients' health and disease. With its rapid development, NLP studies should have transparent methodology to allow comparison of approaches and reproducibility. This systematic review aims to summarise the characteristics and reporting quality of studies applying NLP to radiology reports. METHODS: We searched Google Scholar for studies published in English that applied NLP to radiology reports of any imaging modality between January 2015 and October 2019. At least two reviewers independently performed screening and completed data extraction. We specified 15 criteria relating to data source, datasets, ground truth, outcomes, and reproducibility for quality assessment. The primary NLP performance measures were precision, recall and F1 score. RESULTS: Of the 4,836 records retrieved, we included 164 studies that used NLP on radiology reports. The commonest clinical applications of NLP were disease information or classification (28%) and diagnostic surveillance (27.4%). Most studies used English radiology reports (86%). Reports from mixed imaging modalities were used in 28% of the studies. Oncology (24%) was the most frequent disease area. Most studies had a dataset size > 200 (85.4%), but the proportions of studies that described their annotated, training, validation, and test sets were 67.1%, 63.4%, 45.7%, and 67.7%, respectively. About half of the studies reported precision (48.8%) and recall (53.7%). Few studies reported external validation (10.8%), data availability (8.5%) and code availability (9.1%). There was no pattern of performance associated with the overall reporting quality. CONCLUSIONS: There is a range of potential clinical applications for NLP of radiology reports in health services and research. However, we found suboptimal reporting quality that precludes comparison, reproducibility, and replication. Our results support the need for reporting standards specific to clinical NLP studies.
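The three performance measures tracked in this review are simple functions of true positives, false positives, and false negatives:

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from a confusion-matrix slice."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Reporting all three matters because each can be gamed alone: a system that flags everything maximizes recall, one that flags almost nothing maximizes precision, and F1 balances the two.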


Subject(s)
Natural Language Processing , Radiography , Radiology/standards , Datasets as Topic , Humans , Reproducibility of Results , Research Report/standards
14.
J Med Internet Res ; 23(1): e19689, 2021 01 12.
Article in English | MEDLINE | ID: mdl-33433395

ABSTRACT

BACKGROUND: Liver cancer is a substantial disease burden in China. As one of the primary diagnostic tools for detecting liver cancer, dynamic contrast-enhanced computed tomography provides detailed evidence for diagnosis that is recorded in free-text radiology reports. OBJECTIVE: The aim of our study was to apply a deep learning model and a rule-based natural language processing (NLP) method to identify evidence for liver cancer diagnosis automatically. METHODS: We proposed a pretrained, fine-tuned BERT (Bidirectional Encoder Representations from Transformers)-based BiLSTM-CRF (Bidirectional Long Short-Term Memory-Conditional Random Field) model to recognize the phrases of APHE (hyperintense enhancement in the arterial phase) and PDPH (hypointense in the portal and delayed phases). To identify more essential diagnostic evidence, we used traditional rule-based NLP methods for the extraction of radiological features. APHE, PDPH, and the other extracted radiological features were used to design a computer-aided liver cancer diagnosis framework based on a random forest. RESULTS: The BERT-BiLSTM-CRF model predicted the phrases of APHE and PDPH with F1 scores of 98.40% and 90.67%, respectively. The prediction model using combined features had higher performance (F1 score, 88.55%) than those using APHE and PDPH (84.88%) or the other extracted radiological features (83.52%) alone. APHE and PDPH were the top two essential features for liver cancer diagnosis. CONCLUSIONS: This was a comprehensive NLP study in which we identified evidence for the diagnosis of liver cancer from Chinese radiology reports, considering both clinical knowledge and radiology findings. The BERT-based deep learning method for the extraction of diagnostic evidence achieved state-of-the-art performance. The high performance demonstrates the feasibility of the BERT-BiLSTM-CRF model in information extraction from Chinese radiology reports. The findings of our study suggest that the deep learning-based method for automatically identifying evidence for diagnosis can be extended to other types of Chinese clinical texts.
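A BiLSTM-CRF phrase recognizer is trained on token-level BIO tags. A small helper showing how a known phrase occurrence is converted into that target format (the APHE/PDPH label names follow the study; the whitespace tokenization is a simplification):

```python
def bio_tags(tokens: list[str], phrase: list[str], label: str) -> list[str]:
    """BIO-encode the first occurrence of `phrase` in `tokens` -- the
    per-token target format a CRF output layer is trained to predict."""
    tags = ["O"] * len(tokens)
    n = len(phrase)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == phrase:
            tags[i] = f"B-{label}"                 # Begin the span
            for j in range(i + 1, i + n):
                tags[j] = f"I-{label}"             # Inside the span
            break
    return tags
```

The CRF layer's job at inference time is the reverse mapping: predicting a valid B/I/O sequence so that multi-token phrases like APHE descriptions come out as contiguous spans.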


Subject(s)
Deep Learning/standards , Diagnosis, Computer-Assisted/methods , Information Storage and Retrieval/methods , Liver Neoplasms/diagnosis , Natural Language Processing , Radiology/methods , China , Humans , Liver Neoplasms/radiotherapy
15.
BMC Med Inform Decis Mak ; 21(Suppl 9): 247, 2021 11 16.
Article in English | MEDLINE | ID: mdl-34789213

ABSTRACT

BACKGROUND: Standardized coding of plays an important role in radiology reports' secondary use such as data analytics, data-driven decision support, and personalized medicine. RadLex, a standard radiological lexicon, can reduce subjective variability and improve clarity in radiology reports. RadLex coding of radiology reports is widely used in many countries, but translation and localization of RadLex in China are far from being established. Although automatic RadLex coding is a common way for non-standard radiology reports, the high-accuracy cross-language RadLex coding is hardly achieved due to the limitation of up-to-date auto-translation and text similarity algorithms and still requires further research. METHODS: We present an effective approach that combines a hybrid translation and a Multilayer Perceptron weighting text similarity ensemble algorithm for automatic RadLex coding of Chinese structured radiology reports. Firstly, a hybrid way to integrate Google neural machine translation and dictionary translation helps to optimize the translation of Chinese radiology phrases to English. The dictionary is made up of 21,863 Chinese-English radiological term pairs extracted from several free medical dictionaries. Secondly, four typical text similarity algorithms are introduced, which are Levenshtein distance, Jaccard similarity coefficient, Word2vec Continuous bag-of-words model, and WordNet Wup similarity algorithms. Lastly, the Multilayer Perceptron model has been used to synthesize the contextual, lexical, character and syntactical information of four text similarity algorithms to promote precision, in which four similarity scores of two terms are taken as input and the output presents whether the two terms are synonyms. RESULTS: The results show the effectiveness of the approach with an F1-score of 90.15%, a precision of 91.78% and a recall of 88.59%. 
The hybrid translation algorithm has no negative effect on the final coding; the F1-score increased by 21.44% and 8.12% compared with the GNMT algorithm and dictionary translation, respectively. Compared with any single similarity measure, the MLP-weighted similarity algorithm performs well, with a 4.48% increase over the best single similarity algorithm, WordNet Wup. CONCLUSIONS: This paper proposes an innovative automatic cross-language RadLex coding approach to standardize Chinese structured radiology reports, which can serve as a reference for automatic cross-language coding.


Subject(s)
Radiology Information Systems, Radiology, Algorithms, China, Humans, Language, Natural Language Processing
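As a rough illustration of the ensemble idea described in this abstract, the sketch below computes two of the four similarity measures (Levenshtein and Jaccard) for a hypothetical translated term pair and combines them with fixed, invented weights; the Word2vec and WordNet components and the trained MLP are omitted, so this is a simplified stand-in, not the paper's implementation.

```python
# Illustrative sketch: score a translated term pair with two string-similarity
# measures, then combine them. The fixed weights stand in for the trained MLP.

def levenshtein_similarity(a: str, b: str) -> float:
    """Edit-distance similarity normalized to [0, 1]."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return 1 - prev[-1] / max(len(a), len(b), 1)

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def ensemble_score(a: str, b: str, weights=(0.5, 0.5)) -> float:
    """Weighted combination of the similarity scores (hypothetical weights)."""
    scores = (levenshtein_similarity(a, b), jaccard_similarity(a, b))
    return sum(w * s for w, s in zip(weights, scores))

print(round(ensemble_score("pleural effusion", "pleural effusions"), 3))
```

In the paper's full pipeline, the four similarity scores would instead be fed to an MLP trained to classify the pair as synonym or non-synonym.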
16.
BMC Med Inform Decis Mak ; 21(1): 262, 2021 09 11.
Article in English | MEDLINE | ID: mdl-34511100

ABSTRACT

BACKGROUND: It is essential for radiologists to communicate actionable findings to referring clinicians reliably. Natural language processing (NLP) has been shown to help identify free-text radiology reports containing actionable findings. However, the application of recent deep learning techniques to radiology reports, which could improve detection performance, has not been thoroughly examined. Moreover, the free text that clinicians enter in the ordering form (order information) has seldom been used to identify actionable reports. This study aims to evaluate the benefits of two new approaches: (1) bidirectional encoder representations from transformers (BERT), a recent deep learning architecture in NLP, and (2) using order information in addition to radiology reports. METHODS: We performed a binary classification to distinguish actionable reports (i.e., radiology reports tagged as actionable in actual radiological practice) from non-actionable ones (those without an actionable tag). A total of 90,923 Japanese radiology reports from our hospital were used, of which 788 (0.87%) were actionable. We evaluated four methods: statistical machine learning with logistic regression (LR) and with a gradient boosting decision tree (GBDT), and deep learning with a bidirectional long short-term memory (LSTM) model and a publicly available Japanese BERT model. Each method was used with two different inputs: radiology reports alone, and pairs of order information and radiology reports. Thus, eight experiments were conducted to examine the performance. RESULTS: Without order information, BERT achieved the highest area under the precision-recall curve (AUPRC) of 0.5138, a statistically significant improvement over LR, GBDT, and LSTM, and the highest area under the receiver operating characteristic curve (AUROC) of 0.9516. Simply coupling the order information with the radiology reports slightly increased the AUPRC of BERT but did not lead to a statistically significant improvement.
This may be due to the complexity of the clinical decisions made by radiologists. CONCLUSIONS: BERT appears useful for detecting actionable reports. More sophisticated methods are required to use order information effectively.


Subject(s)
Natural Language Processing, Radiology, Humans, Logistic Models, Machine Learning, Radiography
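The AUPRC metric reported in the abstract above can be computed as average precision, its step-wise form. A minimal pure-Python sketch follows; the scores and labels are made-up toy data, not values from the study.

```python
# Average precision (step-wise AUPRC): rank examples by classifier score
# and average the precision at each rank where a true positive occurs.

def auprc(labels, scores):
    """Average precision over (label, score) pairs; labels are 0/1."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    n_pos = sum(labels)
    tp, area = 0, 0.0
    for i, (_, y) in enumerate(ranked, 1):
        if y == 1:
            tp += 1
            area += tp / i  # precision at this recall step
    return area / n_pos

y_true = [1, 0, 1, 0, 0, 1, 0, 0]      # toy ground truth
y_score = [0.9, 0.8, 0.7, 0.4, 0.35, 0.3, 0.2, 0.1]  # toy classifier scores
print(round(auprc(y_true, y_score), 3))
```

A metric like this suits the study's setting, where only 0.87% of reports are positive and AUROC alone can look deceptively high.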
17.
J Digit Imaging ; 34(2): 374-384, 2021 04.
Article in English | MEDLINE | ID: mdl-33569716

ABSTRACT

Recommendations are a key component of radiology reports. Automatic extraction of recommendations would facilitate tasks such as recommendation tracking, quality improvement, and large-scale descriptive studies. Existing report-parsing systems are frequently limited to recommendations for follow-up imaging studies, operate at the sentence or document level rather than the individual recommendation level, and do not extract important contextualizing information. We present a neural network architecture capable of extracting fully contextualized recommendations from any type of radiology report. We identified six major "questions" necessary to capture the majority of context associated with a recommendation: recommendation, time period, reason, conditionality, strength, and negation. We developed a unified task representation by allowing questions to refer to answers to other questions. Our representation allows for a single system to perform named entity recognition (NER) and classification tasks. We annotated 2272 radiology reports from all specialties, imaging modalities, and multiple hospitals across our institution. We evaluated the performance of a long short-term memory (LSTM) architecture on the six-question task. The single-task LSTM model achieves a token-level performance of 89.2% at recommendation extraction, and token-level performances between 85 and 95% F1 on extracting modifying features. Our model extracts all types of recommendations, including follow-up imaging, tissue biopsies, and clinical correlation, and can operate in real time. It is feasible to extract complete contextualized recommendations of all types from arbitrary radiology reports. The approach is likely generalizable to other clinical entities referenced in radiology reports, such as radiologic findings or diagnoses.


Subject(s)
Radiology Information Systems, Radiology, Humans, Language, Natural Language Processing, Neural Networks, Computer, Research Report
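The token-level F1 scores reported in the abstract above can be computed by comparing the sets of tokens labeled as part of a recommendation. A small sketch, with invented token indices standing in for one gold and one predicted span:

```python
# Token-level F1 for span extraction: precision and recall over the sets of
# token indices the model tags versus the gold annotation.

def token_f1(gold: set, pred: set) -> float:
    """F1 over sets of token indices labeled as part of a recommendation."""
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {4, 5, 6, 7}   # hypothetical gold-standard recommendation tokens
pred = {5, 6, 7, 8}   # hypothetical model-predicted tokens
print(round(token_f1(gold, pred), 3))
```

The same measure applies to each of the six "questions" (time period, reason, conditionality, strength, negation) by comparing the token sets tagged for that question.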
18.
J Digit Imaging ; 33(2): 334-340, 2020 04.
Article in English | MEDLINE | ID: mdl-31515753

ABSTRACT

The purpose of this study was to assess whether clinical indications, patient location, and imaging site predict the viewing pattern of referring physicians for CT and MR of the head, chest, and abdomen. Our study included 166,953 CT/MR examinations of the head/chest/abdomen in 2016-2017 in the outpatient (OP, n = 83,981), inpatient (IP, n = 51,052), and emergency (ED, n = 31,920) settings. There were 125,329 CT/MR examinations performed in the hospital setting and 41,624 in one of nine off-campus locations. We extracted information regarding body region (head/chest/abdomen), patient location, and imaging site from the electronic medical records (EPIC). We recorded clinical indications and the number of times referring physicians viewed each CT/MR (defined as the number of separate views of the imaging in EPIC). Data were analyzed with Microsoft SQL and SPSS statistical software. About 33% of IP CT and MR studies were viewed > 6 times, compared with 7% of OP and 19% of ED studies (p < 0.001). Conversely, most OP studies (55%) were viewed only 1-2 times, compared with 21% of IP and 38% of ED studies (p < 0.001). In-hospital exams were viewed more frequently (≥ 6 views; 39% of studies) than off-campus imaging (≥ 6 views; 17% of studies) (p < 0.001). For head CT/MR, certain clinical indications (e.g., stroke) had higher viewing rates than others such as malignancy, headache, and dizziness. Conversely, for chest CT, dyspnea-hypoxia had much higher viewing rates (> 6 times) in the IP (55%) and ED (46%) settings than in the OP setting (22%). Patient location and imaging site, regardless of clinical indications, have a profound effect on the viewing patterns of referring physicians. Understanding these viewing patterns can help guide interpretation priorities and findings communication for imaging exams based on patient location, imaging site, and clinical indications. This information can aid the efficient delivery of patient care.


Subject(s)
Physicians, Tomography, X-Ray Computed, Abdomen, Communication, Electronic Health Records, Humans
19.
J Digit Imaging ; 33(4): 988-995, 2020 08.
Article in English | MEDLINE | ID: mdl-32472318

ABSTRACT

Critical results reporting guidelines demand that certain critical findings be communicated to the responsible provider within a specific period of time. In this paper, we discuss a generic report-processing pipeline that extracts critical findings from dictated reports to allow automated quality and compliance oversight, using a production dataset containing 1,210,858 radiology exams. Algorithm accuracy on an annotated dataset of 327 sentences was 91.4% (95% CI 87.6-94.2%). Our results show that most critical findings are diagnosed on CT and MR exams and that intracranial hemorrhage and fluid collection are the most prevalent at our institution. Of the exams, 1.6% were found to have at least one of the ten critical findings we focused on. This methodology can enable detailed analysis of critical results reporting for research, workflow management, compliance, and quality assurance.


Subject(s)
Radiology Information Systems, Radiology, Algorithms, Automation, Humans, Research Report
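One stage of a pipeline like the one described above can be sketched as keyword matching of report sentences against a critical-findings list. The findings list and the sample report below are illustrative only, and the paper's actual algorithm is necessarily more involved (for example, it must handle negation, which this sketch deliberately ignores, as the output shows).

```python
# Toy critical-findings flagger: match each report sentence against a
# keyword list. Note it wrongly flags the negated finding ("No intracranial
# hemorrhage"), illustrating why real pipelines need negation handling.
import re

CRITICAL_FINDINGS = ["intracranial hemorrhage", "pulmonary embolism",
                     "pneumothorax", "fluid collection"]

def flag_critical(sentence: str):
    """Return the critical findings mentioned in a sentence, if any."""
    text = sentence.lower()
    return [f for f in CRITICAL_FINDINGS
            if re.search(r"\b" + re.escape(f) + r"\b", text)]

report = "Small right pneumothorax. No intracranial hemorrhage."
for sentence in report.split(". "):
    print(sentence, "->", flag_critical(sentence))
```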
20.
AJR Am J Roentgenol ; 212(3): 602-606, 2019 03.
Article in English | MEDLINE | ID: mdl-30620671

ABSTRACT

OBJECTIVE: Radiology reports have traditionally been written for referring clinical providers. However, as patients increasingly access their radiology reports through online medical records, concerns have been raised about their ability to comprehend these complex documents. The purpose of this study was to assess the readability of lumbar spine MRI reports. MATERIALS AND METHODS: We reviewed 110 lumbar spine MRI reports dictated by 11 fellowship-trained radiologists (eight musculoskeletal radiologists and three neuroradiologists) at a single academic medical center. We evaluated each report for readability using five quantitative readability tests: the Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index, Coleman-Liau Index, and the Simple Measure of Gobbledygook. The number of reports with readability at or below the eighth-grade level (the average reading ability of U.S. adults) and at or below the sixth-grade level (the level recommended by the National Institutes of Health and the American Medical Association for patient education materials) was determined. RESULTS: The mean readability grade level of the lumbar spine MRI reports was greater than the 12th-grade reading level on all readability scales. Only one report was written at or below the eighth-grade level; no reports were written at or below the sixth-grade level. CONCLUSION: Lumbar spine MRI reports are written at a level too high for the average patient to comprehend. As patients increasingly read their radiology reports through online portals, consideration should be given to patients' ability to read and comprehend these complex medical documents.


Subject(s)
Comprehension, Health Literacy, Lumbar Vertebrae/diagnostic imaging, Magnetic Resonance Imaging, Spinal Diseases/diagnostic imaging, Adult, Humans
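One of the five measures used in the study above, the Flesch-Kincaid Grade Level, is 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. The sketch below applies it to an invented report-style sentence; the syllable counter is a crude vowel-group heuristic, so the score is only approximate.

```python
# Approximate Flesch-Kincaid Grade Level. A higher score means the text
# requires more years of schooling to read comfortably.
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: groups of consecutive vowels (incl. y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = ("Degenerative disc desiccation with posterior annular fissure. "
          "No significant spinal canal stenosis.")
print(round(fk_grade(sample), 1))
```

Even this short invented example of radiology phrasing scores well above the eighth-grade threshold discussed in the study, mainly because of its long, polysyllabic terms.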