Results 1 - 4 of 4
1.
J Am Med Inform Assoc; 29(5): 831-840, 2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35146510

ABSTRACT

OBJECTIVES: Scanned documents (SDs), while common in electronic health records and potentially rich in clinically relevant information, rarely fit well with clinician workflow. Here, we identify scanned imaging reports requiring follow-up with high recall and practically useful precision. MATERIALS AND METHODS: We focused on identifying imaging findings for 3 common causes of malpractice claims: (1) potentially malignant breast lesions (mammography), (2) potentially malignant lung lesions (chest computed tomography [CT]), and (3) long-bone fractures (X-ray). We trained our ClinicalBERT-based pipeline on existing typed/dictated reports classified manually or using ICD-10 codes, evaluated it on a test set of manually classified SDs, and compared it against string matching (baseline approach). RESULTS: A total of 393 mammograms, 305 chest CT, and 683 bone X-ray reports were manually reviewed. The string-matching approach had an F1 of 0.667. For mammograms, chest CTs, and bone X-rays, respectively: models trained on manually classified training data and optimized for F1 reached an F1 of 0.900, 0.905, and 0.817, while separate models optimized for recall achieved a recall of 1.000 with precisions of 0.727, 0.518, and 0.275. Models trained on ICD-10-labelled data and optimized for F1 achieved F1 scores of 0.647, 0.830, and 0.643, while those optimized for recall achieved a recall of 1.0 with precisions of 0.407, 0.683, and 0.358. DISCUSSION: Our pipeline can identify abnormal reports with potentially useful performance and so decrease the manual effort required to screen for abnormal findings that require follow-up. CONCLUSION: It is possible to automatically identify clinically significant abnormalities in SDs with high recall and practically useful precision in a generalizable and minimally laborious way.


Subjects
Electronic Health Records; Tomography, X-Ray Computed; Natural Language Processing; Research Report
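The recall-optimized operating points above (recall of 1.000 at precisions of 0.727, 0.518, and 0.275) come from choosing a decision threshold on the classifier's output scores. A minimal sketch of that mechanism, assuming per-report probability scores and binary follow-up labels (the function names are illustrative, not from the paper):

```python
def recall_precision(scores, labels, threshold):
    """Recall and precision of 'flag for follow-up if score >= threshold'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

def max_threshold_with_full_recall(scores, labels):
    # The highest threshold that still flags every true positive is simply
    # the minimum score the model assigned to any positive example.
    return min(s for s, y in zip(scores, labels) if y == 1)
```

Lowering the threshold to the minimum positive score guarantees recall 1.0 on the evaluation set; the precision that results is the price paid for missing nothing.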
2.
Med Educ; 56(6): 634-640, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34983083

ABSTRACT

INTRODUCTION: In the context of competency-based medical education, poor student performance must be accurately documented to allow learners to improve and to protect the public. However, faculty may be reluctant to provide evaluations that could be perceived as negative, and clerkship directors report that some students pass who should have failed. Student perception of faculty may be considered in faculty promotion, teaching awards, and leadership positions. Faculty of lower academic rank may therefore perceive themselves to be more vulnerable and be less likely to document poor student performance. This study investigated faculty characteristics associated with low performance evaluations (LPEs). METHOD: The authors analysed individual faculty evaluations of medical students who completed the third-year clerkships over 15 years, using a generalised mixed regression model to assess the association of evaluator academic rank with likelihood of an LPE. Other available factors related to experience or academic vulnerability were incorporated, including faculty age, race, ethnicity, and gender. RESULTS: The authors identified 50 120 evaluations by 585 faculty on 3447 students between January 2007 and April 2021. Faculty were more likely to give LPEs at the midpoint evaluation (4.9%) than at the final evaluation (1.6%) (odds ratio [OR] = 4.004, 95% confidence interval [CI] [3.59, 4.53]; p < 0.001). The likelihood of an LPE decreased significantly during the 15-year study period (OR = 0.94 [0.90, 0.97]; p < 0.01). Full professors were significantly more likely to give an LPE than assistant professors (OR = 1.62 [1.08, 2.43]; p = 0.02). Women were more likely to give LPEs than men (OR = 1.88 [1.37, 2.58]; p < 0.01). Other faculty characteristics, including race and experience, were not associated with LPEs.
CONCLUSIONS: The number of LPEs decreased over time, and senior faculty were more likely to document poor medical student performance compared with assistant professors.


Subjects
Clinical Clerkship; Students, Medical; Faculty; Faculty, Medical; Female; Humans; Leadership; Male
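The odds ratios and confidence intervals reported above are standard transformations of logistic-regression coefficients. A minimal sketch of that back-transformation, assuming a fitted coefficient and its standard error (this is illustrative, not the authors' model code):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a
    logistic-regression coefficient and its standard error."""
    return (math.exp(beta),           # point estimate
            math.exp(beta - z * se),  # lower bound
            math.exp(beta + z * se))  # upper bound
```

For example, a hypothetical coefficient of 0.48 with standard error 0.21 yields an OR near 1.62 with a CI of roughly [1.07, 2.44], on the order of the full-professor effect reported above (these input values are back-calculated illustrations, not taken from the paper).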
3.
Clin Transl Sci; 15(2): 309-321, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34706145

ABSTRACT

Artificial intelligence (AI) is transforming many domains, including finance, agriculture, defense, and biomedicine. In this paper, we focus on the role of AI in clinical and translational research (CTR), including preclinical research (T1), clinical research (T2), clinical implementation (T3), and public (or population) health (T4). Given the rapid evolution of AI in CTR, we present three complementary perspectives: (1) scoping literature review, (2) survey, and (3) analysis of federally funded projects. For each CTR phase, we addressed challenges, successes, failures, and opportunities for AI. We surveyed Clinical and Translational Science Award (CTSA) hubs regarding AI projects at their institutions. Nineteen of 63 CTSA hubs (30%) responded to the survey. The most common funding source (48.5%) was the federal government. The most common translational phase was T2 (clinical research, 40.2%). Clinicians were the intended users in 44.6% of projects and researchers in 32.3% of projects. The most common computational approaches were supervised machine learning (38.6%) and deep learning (34.2%). The number of projects steadily increased from 2012 to 2020. Finally, we analyzed 2604 AI projects at CTSA hubs using the National Institutes of Health Research Portfolio Online Reporting Tools (RePORTER) database for 2011-2019. We mapped available abstracts to Medical Subject Headings and found that nervous system (16.3%) and mental disorders (16.2%) were the most common topics addressed. From a computational perspective, big data (32.3%) and deep learning (30.0%) were most common. This work represents a snapshot in time of the role of AI in the CTSA program.


Subjects
Artificial Intelligence; Translational Science, Biomedical; Humans; Translational Research, Biomedical; United States
4.
Int J Med Inform; 144: 104302, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33091829

ABSTRACT

OBJECTIVES: Electronic Health Records (EHRs) contain scanned documents from a variety of sources such as identification cards, radiology reports, clinical correspondence, and many other document types. We describe the distribution of scanned documents at one health institution and the design and evaluation of a system that categorizes documents into clinically relevant and non-clinically relevant categories as well as further sub-classifications. Our objective is to demonstrate that text classification systems can accurately classify scanned documents. METHODS: We extracted text using Optical Character Recognition (OCR). We then created and evaluated multiple text classification machine learning models, including both "bag of words" and deep learning approaches. We evaluated the system at three different levels of classification, using both the entire document and its individual pages as input. Finally, we compared the effects of different text processing methods. RESULTS: A deep learning model using ClinicalBERT performed best. This model distinguished clinically relevant from non-clinically relevant documents with an accuracy of 0.973; intermediate sub-classifications with an accuracy of 0.949; and individual classes with an accuracy of 0.913. DISCUSSION: Within the EHR, some document categories such as "external medical records" may contain hundreds of scanned pages without clear document boundaries. Without further sub-classification, clinicians must view every page or risk missing clinically relevant information. Machine learning can automatically classify these scanned documents to reduce clinician burden. CONCLUSION: Machine learning applied to OCR-extracted text has the potential to accurately identify clinically relevant scanned content within EHRs.


Subjects
Electronic Health Records; Machine Learning; Humans; Natural Language Processing
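As a rough illustration of the "bag of words" baseline described above, a toy overlap-based classifier over OCR-extracted text might look like the following (a sketch only; the study's best model was ClinicalBERT, and the prototypes and names here are hypothetical):

```python
from collections import Counter

def bag_of_words(text):
    # Lowercased whitespace tokenization; a real pipeline would also strip
    # punctuation and OCR noise before counting.
    return Counter(text.lower().split())

def overlap(doc_bow, class_bow):
    # Count word occurrences shared between document and class prototype.
    return sum(min(count, class_bow[word]) for word, count in doc_bow.items())

def classify(text, prototypes):
    # Assign the document to the class whose prototype it overlaps most.
    doc = bag_of_words(text)
    return max(prototypes, key=lambda label: overlap(doc, prototypes[label]))
```

In practice the "bag of words" models in such studies feed counts like these into a trained classifier rather than a fixed prototype, but the representation step is the same.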