Results 1 - 20 of 123
1.
JMIR Med Inform ; 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38850555

ABSTRACT

BACKGROUND: Increasing and substantial reliance on electronic health records (EHRs) and their data types (i.e., diagnosis (Dx), medication (Rx), and laboratory (Lx) codes) demands assessment of their data quality (DQ), especially given the need to identify appropriate denominator populations with chronic conditions, such as type 2 diabetes (T2D), using commonly available computable phenotype definitions (phenotypes). OBJECTIVE: To bridge this gap, our study aims to assess how issues of EHR DQ, together with variation and robustness (or lack thereof) in phenotypes, may affect identification of the denominator population. METHODS: Approximately 208k patients with T2D were included in our study using retrospective EHR data from the Johns Hopkins Medical Institution (JHMI) during 2017-2019. Our assessment included 4 published phenotypes and 1 definition from a panel of experts at Hopkins. We conducted descriptive analyses of demographics (i.e., age, sex, race, ethnicity), healthcare utilization (inpatient and emergency room visits), and average Charlson Comorbidity score for each phenotype. We then used different methods to induce or simulate DQ issues of completeness, accuracy, and timeliness separately for each phenotype. For induced data incompleteness, our model randomly dropped Dx, Rx, and Lx codes independently at increments of 10%; for induced data inaccuracy, our model randomly replaced a Dx or Rx code with another code of the same data type and induced 2% incremental changes from -100% to +10% in Lx result values; and for timeliness, record dates were shifted in 30-day increments for up to a year. RESULTS: Less than a quarter (23%) of the population overlapped across all phenotypes. The population identified by each phenotype varied across all combinations of data types. Induced incompleteness identified fewer patients with each increment; for example, at 100% diagnostic incompleteness, the Chronic Conditions Data Warehouse (CCW) phenotype identified zero patients because its phenotypic criteria included only Dx codes. Induced inaccuracy and timeliness issues similarly varied the performance of each phenotype, resulting in fewer patients being identified with each incremental change. CONCLUSIONS: We used EHR data with Dx, Rx, and Lx data types from a large tertiary hospital system to understand T2D phenotypic differences and performance. We learned how DQ issues, simulated with induced-DQ methods, may affect identification of the denominator populations upon which clinical (e.g., clinical research and trials, population health evaluations) and financial/operational decisions are made. The results may help shape a common T2D computable phenotype definition applicable to clinical informatics, chronic condition management, and other healthcare industry-wide efforts.
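The induced-incompleteness and date-shifting procedures described in this abstract can be sketched roughly as follows; the table layout, column names, and the toy one-Dx-code eligibility rule are illustrative assumptions, not the authors' code.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(42)

# Toy EHR extract: one row per coded event (Dx, Rx, or Lx) per patient.
ehr = pd.DataFrame({
    "patient_id": rng.integers(0, 1000, size=10_000),
    "data_type":  rng.choice(["Dx", "Rx", "Lx"], size=10_000),
    "event_date": pd.to_datetime("2018-01-01")
                  + pd.to_timedelta(rng.integers(0, 730, size=10_000), unit="D"),
})

def induce_incompleteness(df, data_type, drop_fraction):
    """Randomly drop a fraction of rows of one data type (0.1, 0.2, ..., 1.0)."""
    mask = df["data_type"] == data_type
    drop_idx = df[mask].sample(frac=drop_fraction, random_state=0).index
    return df.drop(index=drop_idx)

def induce_timeliness_shift(df, shift_days):
    """Shift record dates forward in 30-day increments, up to a year."""
    out = df.copy()
    out["event_date"] = out["event_date"] + pd.Timedelta(days=shift_days)
    return out

# Example: drop 10%..100% of Dx codes and count patients still eligible under a
# toy "at least one Dx code" rule (a stand-in for a real phenotype definition).
for frac in np.arange(0.1, 1.01, 0.1):
    degraded = induce_incompleteness(ehr, "Dx", frac)
    n = degraded.loc[degraded["data_type"] == "Dx", "patient_id"].nunique()
    print(f"Dx incompleteness {frac:.0%}: {n} patients retain a Dx code")
```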

2.
J Med Internet Res ; 26: e54265, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38916936

ABSTRACT

BACKGROUND: Evidence-based medicine (EBM) has the potential to improve health outcomes, but EBM has not been widely integrated into the systems used for research or clinical decision-making. There has not been a scalable and reusable computer-readable standard for distributing research results and synthesized evidence among creators, implementers, and the ultimate users of that evidence. Evidence that is more rapidly updated, synthesized, disseminated, and implemented would improve both the delivery of EBM and evidence-based health care policy. OBJECTIVE: This study aimed to introduce the EBM on Fast Healthcare Interoperability Resources (FHIR) project (EBMonFHIR), which is extending the methods and infrastructure of Health Level Seven (HL7) FHIR to provide an interoperability standard for the electronic exchange of health-related scientific knowledge. METHODS: As an ongoing process, the project creates and refines FHIR resources to represent evidence from clinical studies and syntheses of those studies and develops tools to assist with the creation and visualization of FHIR resources. RESULTS: The EBMonFHIR project created FHIR resources (ie, ArtifactAssessment, Citation, Evidence, EvidenceReport, and EvidenceVariable) for representing evidence. The COVID-19 Knowledge Accelerator (COKA) project, now Health Evidence Knowledge Accelerator (HEvKA), took this work further and created FHIR resources that express EvidenceReport, Citation, and ArtifactAssessment concepts. The group is (1) continually refining FHIR resources to support the representation of EBM; (2) developing controlled terminology related to EBM (ie, study design, statistic type, statistical model, and risk of bias); and (3) developing tools to facilitate the visualization and data entry of EBM information into FHIR resources, including human-readable interfaces and JSON viewers. CONCLUSIONS: EBMonFHIR resources in conjunction with other FHIR resources can support relaying EBM components in a manner that is interoperable and consumable by downstream tools and health information technology systems to support the users of evidence.
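A minimal sketch of how an EBMonFHIR-style resource might be assembled and submitted over FHIR's standard RESTful interface; the element names loosely follow the FHIR R5 Citation resource named in the abstract, but the exact required elements should be verified against the published StructureDefinition, and the endpoint URL is hypothetical.

```python
import json
import requests  # assumes a reachable FHIR server; the base URL below is hypothetical

# Minimal Citation-style payload (element names as best recalled from FHIR R5;
# verify against the specification before real use).
citation = {
    "resourceType": "Citation",
    "status": "active",
    "citedArtifact": {
        "title": [{"text": "Example randomized trial of intervention X"}],
        "abstract": [{"text": "Structured abstract text would go here."}],
    },
}

fhir_base = "https://example.org/fhir"  # hypothetical endpoint
response = requests.post(
    f"{fhir_base}/Citation",
    data=json.dumps(citation),
    headers={"Content-Type": "application/fhir+json"},
    timeout=30,
)
print(response.status_code, response.headers.get("Location"))
```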


Subjects
Evidence-Based Medicine , Health Information Interoperability , Evidence-Based Medicine/standards , Humans , Health Information Interoperability/standards , COVID-19 , Health Level Seven
3.
J Biomed Inform ; 156: 104683, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38925281

ABSTRACT

OBJECTIVE: Despite the increased availability of methodologies to identify algorithmic bias, the operationalization of bias evaluation for healthcare predictive models is still limited. This study therefore proposes a process for bias evaluation through an empirical assessment of common hospital readmission models. The process includes selecting bias measures, interpreting them, determining disparity impact, and identifying potential mitigations. METHODS: This retrospective analysis evaluated the racial bias of four common models predicting 30-day unplanned readmission (i.e., the LACE Index, the HOSPITAL Score, and the CMS readmission measure applied as is and retrained). The models were assessed using 2.4 million adult inpatient discharges in Maryland from 2016 to 2019. Fairness metrics that are model-agnostic, easy to compute, and interpretable were implemented and appraised to select the most appropriate bias measures. The impact of changing the models' risk thresholds on these measures was further assessed to guide the selection of optimal thresholds to control and mitigate bias. RESULTS: Four bias measures were selected for the predictive task: zero-one-loss difference, false negative rate (FNR) parity, false positive rate (FPR) parity, and the generalized entropy index. Based on these measures, the HOSPITAL Score and the retrained CMS measure demonstrated the lowest racial bias. White patients showed a higher FNR, while Black patients showed a higher FPR and zero-one loss. As the models' risk thresholds changed, trade-offs between fairness and overall performance were observed, and the assessment showed that all models' default thresholds were reasonable for balancing accuracy and bias. CONCLUSIONS: This study proposes an Applied Framework to Assess Fairness of Predictive Models (AFAFPM) and demonstrates the process using 30-day hospital readmission models as the example. It suggests the feasibility of applying algorithmic bias assessment to determine optimized risk thresholds so that predictive models can be used more equitably and accurately. A combination of qualitative and quantitative methods and a multidisciplinary team are necessary to identify, understand, and respond to algorithmic bias in real-world healthcare settings. Users should also apply multiple bias measures to ensure a more comprehensive, tailored, and balanced view. The results of bias measures, however, must be interpreted with caution and in the larger operational, clinical, and policy context.
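The group-fairness measures named above (FNR/FPR parity, zero-one-loss difference, generalized entropy index) can be computed from labels and predictions roughly as follows; the toy data and variable names are illustrative, and the generalized entropy index uses the benefit formulation b_i = y_pred_i - y_true_i + 1 common in the fairness literature.

```python
import numpy as np

def rates(y_true, y_pred):
    """Return false negative rate, false positive rate, and zero-one loss."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fnr = fn / max(np.sum(y_true == 1), 1)
    fpr = fp / max(np.sum(y_true == 0), 1)
    zero_one = np.mean(y_true != y_pred)
    return fnr, fpr, zero_one

def generalized_entropy_index(y_true, y_pred, alpha=2):
    """GE(alpha) over benefits b_i = y_pred_i - y_true_i + 1."""
    b = np.asarray(y_pred) - np.asarray(y_true) + 1
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1) / (alpha * (alpha - 1))

# Toy example: parity is the difference in error rates between groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in ["A", "B"]:
    fnr, fpr, loss = rates(y_true[group == g], y_pred[group == g])
    print(f"group {g}: FNR={fnr:.2f} FPR={fpr:.2f} zero-one loss={loss:.2f}")
print("overall GE(2) =", round(generalized_entropy_index(y_true, y_pred), 3))
```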


Subjects
Patient Readmission , Racism , Humans , Patient Readmission/statistics & numerical data , Retrospective Studies , Male , Female , Middle Aged , Adult , Aged , Maryland , Algorithms , Healthcare Disparities
4.
Otol Neurotol Open ; 4(2): e051, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38919767

ABSTRACT

Objective: Determine the incidence of vestibular disorders in patients with SARS-CoV-2 compared to a control population. Study Design: Retrospective. Setting: Clinical data in the National COVID Cohort Collaborative (N3C) database. Methods: Deidentified patient data from the N3C database were queried based on variant peak prevalence (untyped, alpha, delta, omicron 21K, and omicron 23A) from covariants.org to retrospectively analyze the incidence of vestibular disorders in patients with SARS-CoV-2 compared to a control population consisting of patients without documented evidence of COVID infection during the same period. Results: Patients testing positive for COVID-19 were significantly more likely to have a vestibular disorder than the control population. Compared to control patients, the odds of vestibular disorders were significantly elevated in patients with the untyped (odds ratio [OR], 2.39; confidence interval [CI], 2.29-2.50; P < 0.001), alpha (OR, 3.63; CI, 3.48-3.78; P < 0.001), delta (OR, 3.03; CI, 2.94-3.12; P < 0.001), omicron 21K (OR, 2.97; CI, 2.90-3.04; P < 0.001), and omicron 23A (OR, 8.80; CI, 8.35-9.27; P < 0.001) variants. Conclusions: The incidence of vestibular disorders differed between COVID-19 variants and was significantly elevated in COVID-19-positive patients compared to the control population. These findings have implications for patient counseling, and further research is needed to discern the long-term effects of these findings.
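Odds ratios with Wald 95% confidence intervals of the kind reported above can be reproduced from a 2x2 table as in the sketch below; the counts are made up for illustration.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases (Wald interval)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: vestibular disorder (yes/no) by COVID-19 status (yes/no).
or_, lo, hi = odds_ratio_ci(a=480, b=19_520, c=200, d=19_800)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```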

5.
PLOS Digit Health ; 3(6): e0000527, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38935590

ABSTRACT

Study-specific data quality testing is an essential part of minimizing analytic errors, particularly for studies making secondary use of clinical data. We applied a systematic and reproducible approach for study-specific data quality testing to the analysis plan for PRESERVE, a 15-site, EHR-based observational study of chronic kidney disease in children. This approach integrated widely adopted data quality concepts with healthcare-specific evaluation methods. We implemented two rounds of data quality assessment. The first produced high-level evaluation using aggregate results from a distributed query, focused on cohort identification and main analytic requirements. The second focused on extended testing of row-level data centralized for analysis. We systematized reporting and cataloguing of data quality issues, providing institutional teams with prioritized issues for resolution. We tracked improvements and documented anomalous data for consideration during analyses. The checks we developed identified 115 and 157 data quality issues in the two rounds, involving completeness, data model conformance, cross-variable concordance, consistency, and plausibility, extending traditional data quality approaches to address more complex stratification and temporal patterns. Resolution efforts focused on higher priority issues, given finite study resources. In many cases, institutional teams were able to correct data extraction errors or obtain additional data, avoiding exclusion of 2 institutions entirely and resolving 123 other gaps. Other results identified complexities in measures of kidney function, bearing on the study's outcome definition. Where limitations such as these are intrinsic to clinical data, the study team must account for them in conducting analyses. This study rigorously evaluated fitness of data for intended use. The framework is reusable and built on a strong theoretical underpinning. Significant data quality issues that would have otherwise delayed analyses or made data unusable were addressed. This study highlights the need for teams combining subject-matter and informatics expertise to address data quality when working with real world data.
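A skeleton of the kind of study-specific, row-level check described above, covering completeness, plausibility, and temporal consistency, might look like the sketch below; the variable names, thresholds, and issue-catalogue fields are assumptions, not the PRESERVE codebase.

```python
import pandas as pd

def run_dq_checks(visits: pd.DataFrame) -> pd.DataFrame:
    """Return a catalogue of data quality issues, one row per finding."""
    issues = []

    # Completeness: key analytic variables should rarely be missing.
    for col in ["serum_creatinine", "height_cm", "visit_date"]:
        frac_missing = visits[col].isna().mean()
        if frac_missing > 0.05:
            issues.append({"check": "completeness", "field": col,
                           "detail": f"{frac_missing:.1%} missing", "priority": "high"})

    # Plausibility: values should fall in clinically credible ranges.
    implausible = visits.query("serum_creatinine < 0.1 or serum_creatinine > 20")
    if len(implausible):
        issues.append({"check": "plausibility", "field": "serum_creatinine",
                       "detail": f"{len(implausible)} implausible values", "priority": "medium"})

    # Temporal consistency: visits should not predate enrollment.
    bad_dates = visits.query("visit_date < enrollment_date")
    if len(bad_dates):
        issues.append({"check": "consistency", "field": "visit_date",
                       "detail": f"{len(bad_dates)} visits before enrollment", "priority": "high"})

    return pd.DataFrame(issues)
```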

6.
Diagnosis (Berl) ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38696319

ABSTRACT

OBJECTIVES: Diagnostic errors are the leading cause of preventable harm in clinical practice. Implementable tools to quantify and target this problem are needed. To address this gap, we aimed to generalize the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework by developing its computable phenotype and then demonstrating how that schema could be applied in multiple clinical contexts. METHODS: We created an information model for the SPADE processes, then mapped data fields from electronic health records (EHR) and claims data in use to that model to create the SPADE information model (intension) and the SPADE computable phenotype (extension). We then validated the computable phenotype and tested it in four case studies at three different health systems to demonstrate its utility. RESULTS: We mapped and tested the SPADE computable phenotype at three different sites using four case studies. We showed that the data fields needed to compute a SPADE base measure are fully available in the EHR data warehouse for extraction, can operationalize the SPADE framework from a provider and/or insurer perspective, and could be implemented across numerous health systems in future work on monitoring misdiagnosis-related harms. CONCLUSIONS: Data for the SPADE base measure are readily available in EHR and administrative claims. The method of data extraction is potentially universally applicable, and the extracted data are conveniently available within a network system. Further study is needed to validate the computable phenotype across different settings with different data infrastructures.
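A rough sketch of a SPADE-style look-back base measure (among index visits for a target disease, how many were preceded by a treat-and-release symptom visit within a fixed window); the table layout, window length, and code lists are illustrative assumptions rather than the validated computable phenotype.

```python
import pandas as pd

def spade_lookback_rate(visits: pd.DataFrame, symptom_codes, disease_codes,
                        window_days: int = 30) -> float:
    """Fraction of disease index visits preceded by a symptom visit
    within `window_days` (a SPADE-style look-back base measure)."""
    symptom = visits[visits["dx_code"].isin(symptom_codes)]
    disease = visits[visits["dx_code"].isin(disease_codes)]

    flagged = 0
    for _, row in disease.iterrows():
        prior = symptom[
            (symptom["patient_id"] == row["patient_id"])
            & (symptom["visit_date"] < row["visit_date"])
            & (symptom["visit_date"] >= row["visit_date"] - pd.Timedelta(days=window_days))
        ]
        if not prior.empty:
            flagged += 1
    return flagged / max(len(disease), 1)

# Example usage (codes are placeholders): dizziness visits preceding stroke admissions.
# rate = spade_lookback_rate(visits, symptom_codes={"R42"}, disease_codes={"I63.9"})
```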

7.
Res Sq ; 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38352357

ABSTRACT

Background: This research delves into the confluence of racial disparities and health inequities among individuals with disabilities, with a focus on those contending with both diabetes and visual impairment. Methods: Utilizing data from the TriNetX Research Network, which includes electronic medical records of roughly 115 million patients from 83 anonymized healthcare organizations, this study employs a directed acyclic graph (DAG) to pinpoint confounders and aid interpretation. We identified patients with visual impairments using ICD-10 codes, deliberately excluding diabetes-related ophthalmic complications. Our approach involved multiple race-stratified analyses, comparing comorbidities such as chronic pulmonary disease in visually impaired patients against their counterparts. We assessed healthcare access disparities by examining the frequency of annual visits, instances of two or more A1c measurements, and glomerular filtration rate (GFR) measurements. Additionally, we evaluated diabetes outcomes by comparing the risk ratio of uncontrolled diabetes (A1c > 9.0) and chronic kidney disease in patients with and without visual impairments. Results: The incidence of diabetes was substantially higher (nearly double) in individuals with visual impairments across the White, Asian, and African American populations. Higher rates of chronic kidney disease were observed in visually impaired individuals, with a risk ratio of 1.79 for the African American group, 2.27 for the White group, and a nonsignificant ratio for the Asian group. A statistically significant difference in the risk ratio for uncontrolled diabetes was found only in the White cohort (0.843). White individuals without visual impairments were more likely to receive two A1c tests, a trend not significant in other racial groups. African Americans with visual impairments had a higher rate of GFR testing, whereas White individuals with visual impairments were less likely to undergo GFR testing, indicating a disparity in kidney health monitoring. This pattern of disparity was not observed in the Asian cohort. Conclusions: This study uncovers pronounced disparities in diabetes incidence and management among individuals with visual impairments, particularly in the White, Asian, and African American groups. Our DAG analysis illuminates the intricate interplay between social determinants of health (SDoH), healthcare access, and the frequency of crucial diabetes monitoring practices, highlighting visual impairment as both a medical and a social issue.
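A risk ratio with a Wald 95% CI, as reported for chronic kidney disease above, can be computed per racial stratum from a 2x2 table; the counts below are invented for illustration.

```python
import math

def risk_ratio_ci(a, b, c, d, z=1.96):
    """a/b = cases/non-cases among exposed (visually impaired),
    c/d = cases/non-cases among unexposed."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical stratum: chronic kidney disease by visual impairment status.
rr, lo, hi = risk_ratio_ci(a=300, b=1_700, c=900, d=9_100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```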

8.
Nat Commun ; 15(1): 421, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38212308

ABSTRACT

Diabetic retinopathy can be prevented with screening and early detection. We hypothesized that autonomous artificial intelligence (AI) diabetic eye exams at the point-of-care would increase diabetic eye exam completion rates in a racially and ethnically diverse youth population. AI for Children's diabetiC Eye ExamS (NCT05131451) is a parallel randomized controlled trial that randomized youth (ages 8-21 years) with type 1 and type 2 diabetes to intervention (autonomous artificial intelligence diabetic eye exam at the point of care), or control (scripted eye care provider referral and education) in an academic pediatric diabetes center. The primary outcome was diabetic eye exam completion rate within 6 months. The secondary outcome was the proportion of participants who completed follow-through with an eye care provider if deemed appropriate. Diabetic eye exam completion rate was significantly higher (100%, 95% CI: 95.5%, 100%) in the intervention group (n = 81) than the control group (n = 83) (22%, 95% CI: 14.2%, 32.4%) (p < 0.001). In the intervention arm, 25/81 participants had an abnormal result, of whom 64% (16/25) completed follow-through with an eye care provider, compared to 22% in the control arm (p < 0.001). Autonomous AI increases diabetic eye exam completion rates in youth with diabetes.
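Completion-rate confidence intervals of the kind quoted above can be computed with an exact binomial (Clopper-Pearson) interval; the counts below are reconstructed from the reported percentages for illustration, and the authors' exact method may differ.

```python
from statsmodels.stats.proportion import proportion_confint

# Intervention arm: 81 of 81 completed; control arm: 18 of 83 (~22%).
for label, completed, n in [("intervention", 81, 81), ("control", 18, 83)]:
    lo, hi = proportion_confint(completed, n, alpha=0.05, method="beta")  # Clopper-Pearson
    print(f"{label}: {completed / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```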


Subjects
Diabetes Mellitus, Type 2 , Diabetic Retinopathy , Child , Humans , Adolescent , Diabetic Retinopathy/diagnosis , Follow-Up Studies , Artificial Intelligence , Referral and Consultation
10.
Ophthalmol Sci ; 3(4): 100391, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38025162

ABSTRACT

Purpose: Evaluate the degree of concept coverage of the general eye examination in one widely used electronic health record (EHR) system using the Observational Health Data Sciences and Informatics Observational Medical Outcomes Partnership (OMOP) common data model (CDM). Design: Study of data elements. Participants: Not applicable. Methods: Data elements (field names and predefined entry values) from the general eye examination in the Epic foundation system were mapped to OMOP concepts and analyzed. Each mapping was given a Health Level 7 equivalence designation: equal when the OMOP concept had the same meaning as the source EHR concept, wider when it was missing information, narrower when it was overly specific, and unmatched when there was no match. Initial mappings were reviewed by 2 graders. Intergrader agreement on the equivalence designation was calculated using Cohen's kappa. Agreement on the mapped OMOP concept was calculated as a percentage of total mappable concepts. Discrepancies were discussed and a final consensus created. Quantitative analysis was performed on wider and unmatched concepts. Main Outcome Measures: Gaps in OMOP concept coverage of EHR elements and intergrader agreement on mapped OMOP concepts. Results: A total of 698 data elements (210 fields, 488 values) from the EHR were analyzed. The intergrader kappa on the equivalence designation was 0.88 (standard error 0.03, P < 0.001). There was 96% agreement on the mapped OMOP concept. In the final consensus mapping, 25% (1% of fields, 31% of values) of the EHR-to-OMOP concept mappings were considered equal, 50% (27% of fields, 60% of values) wider, 4% (8% of fields, 2% of values) narrower, and 21% (52% of fields, 8% of values) unmatched. Of the wider-mapped elements, 46% were missing the laterality specification, 24% had other missing attributes, and 30% had both issues. Wider and unmatched EHR elements were found in all areas of the general eye examination. Conclusions: Most data elements in the general eye examination could not be represented precisely using the OMOP CDM. Our work suggests multiple ways to improve the incorporation of important ophthalmology concepts in OMOP, including adding laterality to existing concepts. There is a strong need to improve the coverage of ophthalmic concepts in source vocabularies so that the OMOP CDM can better accommodate vision research. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
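The intergrader agreement statistics reported above can be computed from the two graders' equivalence designations; the labels below are made up to show the mechanics.

```python
from sklearn.metrics import cohen_kappa_score

grader_1 = ["equal", "wider", "wider", "unmatched", "narrower", "wider", "equal"]
grader_2 = ["equal", "wider", "equal", "unmatched", "narrower", "wider", "equal"]

kappa = cohen_kappa_score(grader_1, grader_2)
percent_agreement = sum(a == b for a, b in zip(grader_1, grader_2)) / len(grader_1)
print(f"Cohen's kappa = {kappa:.2f}, raw agreement = {percent_agreement:.0%}")
```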

11.
Hepatol Commun ; 7(10)2023 10 01.
Article in English | MEDLINE | ID: mdl-37695082

ABSTRACT

BACKGROUND: The use of large-scale data and artificial intelligence (AI) to support complex transplantation decisions is in its infancy. Transplant candidate decision-making, which relies heavily on subjective assessment (ie, high variability), provides a ripe opportunity for AI-based clinical decision support (CDS). However, AI-CDS for transplant applications must consider important concerns regarding fairness (ie, health equity). The objective of this study was to use human-centered design methods to elicit providers' perceptions of AI-CDS for liver transplant listing decisions. METHODS: In this multicenter qualitative study conducted from December 2020 to July 2021, we performed semistructured interviews with 53 multidisciplinary liver transplant providers from 2 transplant centers. We used inductive coding and constant comparison analysis of interview data. RESULTS: Analysis yielded 6 themes important for the design of fair AI-CDS for liver transplant listing decisions: (1) transparency in the creators behind the AI-CDS and their motivations; (2) understanding how the AI-CDS uses data to support recommendations (ie, interpretability); (3) acknowledgment that AI-CDS could mitigate emotions and biases; (4) AI-CDS as a member of the transplant team, not a replacement; (5) identifying patient resource needs; and (6) including the patient's role in the AI-CDS. CONCLUSIONS: Overall, providers interviewed were cautiously optimistic about the potential for AI-CDS to improve clinical and equitable outcomes for patients. These findings can guide multidisciplinary developers in the design and implementation of AI-CDS that deliberately considers health equity.


Subjects
Decision Support Systems, Clinical , Liver Transplantation , Humans , Artificial Intelligence , Qualitative Research
12.
Sleep Health ; 9(5): 767-773, 2023 10.
Article in English | MEDLINE | ID: mdl-37268482

ABSTRACT

OBJECTIVES: To examine cross-sectional and longitudinal associations of individual sleep domains and multidimensional sleep health with current overweight or obesity and 5-year weight change in adults. METHODS: We estimated sleep regularity, quality, timing, onset latency, interruptions, duration, and napping using validated questionnaires. We calculated multidimensional sleep health using a composite score (the total number of "good" sleep health indicators) and sleep phenotypes derived from latent class analysis. Logistic regression was used to examine associations between sleep and overweight or obesity. Multinomial regression was used to examine associations between sleep and weight change (gain, loss, or maintenance) over a median of 1.66 years. RESULTS: The sample included 1016 participants with a median age of 52 years (IQR = 37-65), who primarily identified as female (78%), White (79%), and college-educated (74%). We identified 3 phenotypes: good, moderate, and poor sleep. Greater sleep regularity, better sleep quality, and shorter sleep onset latency were associated with 37%, 38%, and 45% lower odds of overweight or obesity, respectively. Each additional good sleep health dimension was associated with 16% lower adjusted odds of overweight or obesity. The adjusted odds of overweight or obesity were similar between sleep phenotypes. Neither individual sleep domains nor multidimensional sleep health was associated with weight change. CONCLUSIONS: Multidimensional sleep health showed cross-sectional, but not longitudinal, associations with overweight or obesity. Future research should advance our understanding of how to assess multidimensional sleep health in order to understand the relationship between all aspects of sleep health and weight over time.
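The composite-score-and-logistic-regression approach described above can be sketched as follows; the indicator columns, BMI cutoff, and covariates are placeholders, not the study's scoring rules.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_sleep_obesity_model(df: pd.DataFrame):
    """df: one row per participant with binary 'good sleep' indicators,
    BMI, and covariates; returns a fitted logistic regression."""
    good_dims = ["regular", "good_quality", "good_timing", "short_latency",
                 "few_interruptions", "adequate_duration", "no_napping"]
    df = df.copy()
    df["sleep_score"] = df[good_dims].sum(axis=1)          # composite 0-7 score
    df["overweight_or_obese"] = (df["bmi"] >= 25).astype(int)
    # Each additional good dimension enters as a 1-unit change in sleep_score.
    model = smf.logit("overweight_or_obese ~ sleep_score + age + C(sex)", data=df)
    return model.fit()

# Example usage:
# result = fit_sleep_obesity_model(participants)
# print(result.summary())  # exp(coef) on sleep_score ~ odds ratio per dimension
```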


Subjects
Obesity , Overweight , Adult , Humans , Female , Overweight/epidemiology , Cohort Studies , Cross-Sectional Studies , Obesity/epidemiology , Sleep , Surveys and Questionnaires
13.
Article in English | MEDLINE | ID: mdl-37146228

ABSTRACT

OBJECTIVE: The annual American College of Medical Informatics (ACMI) Symposium focused discussion on the national public health information systems (PHIS) infrastructure needed to support public health goals. The objective of this article is to present the strengths, weaknesses, opportunities, and threats (SWOT) identified by the public health and informatics leaders in attendance. MATERIALS AND METHODS: The Symposium provided a venue for experts in biomedical informatics and public health to brainstorm, identify, and discuss the top PHIS challenges. Two conceptual frameworks, SWOT and the Informatics Stack, guided the discussion and were used to organize the factors and themes identified through a qualitative approach. RESULTS: A total of 57 unique factors related to the current PHIS were identified, including 9 strengths, 22 weaknesses, 14 opportunities, and 14 threats, which were consolidated into 22 themes according to the Stack. Most themes (68%) clustered at the top of the Stack. Three overarching opportunities were especially prominent: (1) addressing the need for sustainable funding; (2) leveraging existing infrastructure and processes for information exchange and system development that meet public health goals; and (3) preparing the public health workforce to benefit from available resources. DISCUSSION: The PHIS is unarguably overdue for a strategically designed, technology-enabled information infrastructure for delivering day-to-day essential public health services and for responding effectively to public health emergencies. CONCLUSION: Most of the themes identified concerned context, people, and processes rather than technical elements. We recommend that public health leadership consider the possible actions and leverage informatics expertise as we collectively prepare for the future.

14.
J Am Med Inform Assoc ; 30(5): 1000-1005, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36917089

ABSTRACT

The COVID-19 pandemic exposed multiple weaknesses in the nation's public health system. Therefore, the American College of Medical Informatics selected "Rebuilding the Nation's Public Health Informatics Infrastructure" as the theme for its annual symposium. Experts in biomedical informatics and public health discussed strategies to strengthen the US public health information infrastructure through policy, education, research, and development. This article summarizes policy recommendations for the biomedical informatics community postpandemic. First, the nation must perceive the health data infrastructure to be a matter of national security. The nation must further invest significantly more in its health data infrastructure. Investments should include the education and training of the public health workforce as informaticians in this domain are currently limited. Finally, investments should strengthen and expand health data utilities that increasingly play a critical role in exchanging information across public health and healthcare organizations.


Subjects
COVID-19 , Medical Informatics , United States , Humans , Public Health , Pandemics
15.
J Biomed Inform ; 140: 104335, 2023 04.
Article in English | MEDLINE | ID: mdl-36933631

ABSTRACT

Identifying patient cohorts that meet the criteria of specific phenotypes is essential in biomedicine and particularly timely in precision medicine. Many research groups deliver pipelines that automatically retrieve and analyze data elements from one or more sources to automate this task and deliver high-performing computable phenotypes. We applied a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to conduct a thorough scoping review of computable clinical phenotyping. Five databases were searched using a query that combined the concepts of automation, clinical context, and phenotyping. Four reviewers then screened 7960 records (after removing over 4000 duplicates) and selected 139 that satisfied the inclusion criteria. This dataset was analyzed to extract information on target use cases, data-related topics, phenotyping methodologies, evaluation strategies, and the portability of the developed solutions. Most studies supported patient cohort selection without discussing application to specific use cases, such as precision medicine. Electronic health records were the primary source in 87.1% (N = 121) of all studies, and International Classification of Diseases codes were heavily used in 55.4% (N = 77); however, only 25.9% (N = 36) of the records described compliance with a common data model. In terms of methods, traditional machine learning (ML) was dominant, often combined with natural language processing and other approaches, while external validation and portability of computable phenotypes were pursued in many cases. These findings reveal that defining target use cases precisely, moving away from ML-only strategies, and evaluating proposed solutions in real settings are essential opportunities for future work. There is also momentum and an emerging need for computable phenotyping to support clinical and epidemiological research and precision medicine.


Subjects
Algorithms , Electronic Health Records , Machine Learning , Natural Language Processing , Phenotype
16.
NPJ Digit Med ; 6(1): 53, 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-36973403

ABSTRACT

The effectiveness of using artificial intelligence (AI) systems to perform diabetic retinal exams ('screening') in preventing vision loss is not known. We designed the Care Process for Preventing Vision Loss from Diabetes (CAREVL) as a Markov model to compare the effectiveness of point-of-care autonomous AI-based screening with in-office clinical exam by an eye care provider (ECP) in preventing vision loss among patients with diabetes. The estimated incidence of vision loss at 5 years was 1535 per 100,000 in the AI-screened group compared with 1625 per 100,000 in the ECP group, a modelled risk difference of 90 per 100,000. The base-case CAREVL model estimated that an autonomous AI-based screening strategy would result in 27,000 fewer Americans with vision loss at 5 years compared with ECP screening. Vision loss at 5 years remained lower in the AI-screened group than in the ECP group across a wide range of parameters, including optimistic estimates biased toward ECP. Real-world modifiable factors associated with processes of care could further increase the strategy's effectiveness; of these factors, increased adherence to treatment was estimated to have the greatest impact.
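A toy two-strategy Markov cohort model in the spirit of CAREVL is sketched below; the transition probabilities are invented placeholders, not the published CAREVL parameters, and serve only to show how a 5-year risk difference per 100,000 can be derived.

```python
def vision_loss_at_5_years(p_screened_per_year, p_progress_if_missed, p_progress_if_treated):
    """Simple annual-cycle cohort model: untreated (missed) disease progresses to
    vision loss at one rate, detected-and-treated disease at a lower rate."""
    cohort = 100_000.0
    no_loss, vision_loss = cohort, 0.0
    for _ in range(5):  # five one-year cycles
        p_progress = (p_screened_per_year * p_progress_if_treated
                      + (1 - p_screened_per_year) * p_progress_if_missed)
        new_loss = no_loss * p_progress
        no_loss -= new_loss
        vision_loss += new_loss
    return vision_loss

# Placeholder inputs: AI screening reaches more patients each year than referral
# to an eye care provider (ECP); progression rates are purely illustrative.
ai  = vision_loss_at_5_years(0.85, p_progress_if_missed=0.004, p_progress_if_treated=0.002)
ecp = vision_loss_at_5_years(0.45, p_progress_if_missed=0.004, p_progress_if_treated=0.002)
print(f"risk difference: {ecp - ai:.0f} per 100,000 over 5 years")
```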

17.
J Am Med Inform Assoc ; 30(5): 971-977, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36752649

ABSTRACT

OBJECTIVES: Collider bias is a common threat to internal validity in clinical research but is rarely mentioned in informatics education or literature. Conditioning on a collider, which is a variable that is the shared causal descendant of an exposure and outcome, may result in spurious associations between the exposure and outcome. Our objective is to introduce readers to collider bias and its corollaries in the retrospective analysis of electronic health record (EHR) data. TARGET AUDIENCE: Collider bias is likely to arise in the reuse of EHR data, due to data-generating mechanisms and the nature of healthcare access and utilization in the United States. Therefore, this tutorial is aimed at informaticians and other EHR data consumers without a background in epidemiological methods or causal inference. SCOPE: We focus specifically on problems that may arise from conditioning on forms of healthcare utilization, a common collider that is an implicit selection criterion when one reuses EHR data. Directed acyclic graphs (DAGs) are introduced as a tool for identifying potential sources of bias during study design and planning. References for additional resources on causal inference and DAG construction are provided.
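A small simulation makes the collider mechanism described above concrete: an exposure and outcome generated independently become spuriously associated once the sample is conditioned on a shared causal descendant such as healthcare utilization; all quantities below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

exposure = rng.normal(size=n)        # e.g., a risk factor
outcome = rng.normal(size=n)         # independent of exposure by construction
# Collider: utilization is driven by BOTH exposure and outcome.
utilization = exposure + outcome + rng.normal(size=n)

corr_full = np.corrcoef(exposure, outcome)[0, 1]

# Conditioning on the collider: keep only high utilizers (implicit EHR selection).
selected = utilization > 1.0
corr_selected = np.corrcoef(exposure[selected], outcome[selected])[0, 1]

print(f"correlation in full population:   {corr_full:+.3f}")      # ~0
print(f"correlation among high utilizers: {corr_selected:+.3f}")  # clearly negative
```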


Subjects
Patient Acceptance of Health Care , Retrospective Studies , Confounding Factors, Epidemiologic , Bias , Epidemiologic Methods
18.
Appl Clin Inform ; 14(2): 345-353, 2023 03.
Article in English | MEDLINE | ID: mdl-36809791

ABSTRACT

BACKGROUND: Inflammatory bowel disease (IBD) commonly leads to iron deficiency anemia (IDA). Rates of screening and treatment of IDA are often low. A clinical decision support system (CDSS) embedded in an electronic health record could improve adherence to evidence-based care. Rates of CDSS adoption are often low due to poor usability and poor fit with work processes. One solution is to use human-centered design (HCD), which designs CDSSs based on identified user needs and context of use and evaluates prototypes for usefulness and usability. OBJECTIVES: This study aimed to use HCD to design a CDSS tool called the IBD Anemia Diagnosis Tool (IADx). METHODS: Interviews with IBD practitioners informed the creation of a process map of anemia care, which an interdisciplinary team then used with HCD principles to create a prototype CDSS. The prototype was iteratively tested with "Think Aloud" usability evaluation with clinicians as well as semi-structured interviews, a survey, and observations. Feedback was coded and informed redesign. RESULTS: Process mapping showed that IADx should function at in-person encounters and during asynchronous laboratory review. Clinicians desired full automation of clinical information acquisition, such as laboratory trends, and analysis, such as calculation of iron deficit; less automation of clinical decision selection, such as laboratory ordering; and no automation of action implementation, such as signing medication orders. Providers preferred an interruptive alert over a noninterruptive reminder. CONCLUSION: Providers preferred an interruptive alert, perhaps due to the low likelihood of noticing a noninterruptive advisory. High levels of desire for automation of information acquisition and analysis, with less automation of decision selection and action, may generalize to other CDSSs designed for chronic disease management. This underlines the ways in which CDSSs can augment rather than replace provider cognitive work.
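One of the calculations clinicians wanted automated is the total iron deficit; a common approach is the Ganzoni formula, sketched below as an assumption about what such an analysis step could look like rather than a description of IADx's actual logic.

```python
def ganzoni_iron_deficit(weight_kg: float, current_hb_g_dl: float,
                         target_hb_g_dl: float = 15.0, iron_stores_mg: float = 500.0) -> float:
    """Total iron deficit (mg) = weight x (target Hb - current Hb) x 2.4 + iron stores."""
    return weight_kg * (target_hb_g_dl - current_hb_g_dl) * 2.4 + iron_stores_mg

# Example: a 70 kg patient with hemoglobin 9.5 g/dL.
print(f"estimated iron deficit: {ganzoni_iron_deficit(70, 9.5):.0f} mg")
```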


Subjects
Anemia , Decision Support Systems, Clinical , Inflammatory Bowel Diseases , Mass Screening , Child , Humans , Chronic Disease , Electronic Health Records , Mass Screening/methods , Anemia/diagnosis , Inflammatory Bowel Diseases/complications
19.
JMIR Hum Factors ; 10: e25361, 2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36729578

ABSTRACT

BACKGROUND: Many low- and middle-income countries have adopted telemedicine programs that connect frontline health workers (FHWs), such as nurses, midwives, or community health workers in rural and remote areas, with physicians in urban areas to deliver care to patients. By leveraging technology to reduce temporal, financial, and geographical barriers, these health worker-to-physician telemedicine programs have the potential to increase health care quality, expand the specialties available to patients, and reduce the time and cost required to deliver care. OBJECTIVE: We aimed to identify, validate, and prioritize unmet needs in health worker-to-physician telemedicine programs and to develop and refine a solution that addresses those needs. METHODS: We collected information regarding user needs through ethnographic research, direct observation, and semistructured interviews with 37 stakeholders (n=5, 14% physicians; n=1, 3% public health program manager; n=12, 32% community health workers; and n=19, 51% patients) at 2 telemedicine clinics in rural West Bengal, India. We used the Spiral-Iterative Innovation Model to design and develop a prototype solution to meet these needs. RESULTS: We identified 74 unmet needs through our immersion in health worker-to-physician telemedicine programs. A critical unmet need was that achieving optimal teleconsultations in low- and middle-income countries often requires shifting tasks such as history taking and physical examination from highly skilled remote physicians to FHWs. To meet this need, we developed a prototype digital assistant that allows FHWs to assume some of the tasks carried out by remote clinicians. The user needs of multiple stakeholder groups (patients, FHWs, physicians, and health organizations) were incorporated into the design and features of the task-shifting tool. The final prototype was shared with the health workers, physicians, and public health program managers, who expressed that the tool would be useful and valuable. CONCLUSIONS: The final prototype was released as an open-source digital public good and may improve the quality and efficiency of care delivery in health worker-to-physician telemedicine programs.

20.
J Am Heart Assoc ; 12(3): e026484, 2023 02 07.
Article in English | MEDLINE | ID: mdl-36651320

ABSTRACT

Background: We aimed to evaluate the association between meal intervals and weight trajectory among adults from a clinical cohort. Methods and Results: This is a multisite prospective cohort study of adults recruited from 3 health systems. Over the 6-month study period, 547 participants downloaded and used a mobile application to record the timing of meals and sleep for at least 1 day. We obtained information on weight and comorbidities at each outpatient visit from electronic health records for up to 10 years before until 10 months after baseline. We used mixed linear regression to model weight trajectories. Mean age was 51.1 (SD 15.0) years, and mean body mass index was 30.8 (SD 7.8) kg/m2; 77.9% were women, and 77.5% reported White race. The mean interval from first to last meal was 11.5 (SD 2.3) hours and was not associated with weight change. The number of meals per day was positively associated with weight change; the average difference in annual weight change (95% CI) associated with 1 additional daily meal was 0.28 kg (0.02-0.53). Conclusions: The number of daily meals was positively associated with weight change over 6 years. Our findings do not support the use of time-restricted eating as a strategy for long-term weight loss in a general medical population.
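The weight-trajectory analysis described above rests on a linear mixed model with patient-level random effects; a minimal statsmodels sketch is below, with the data layout and variable names assumed for illustration.

```python
import statsmodels.formula.api as smf

def fit_weight_trajectory(df):
    """df: one row per weight measurement, with years_from_baseline,
    meals_per_day, and patient_id. Random intercept and slope per patient."""
    model = smf.mixedlm(
        "weight_kg ~ years_from_baseline * meals_per_day",
        data=df,
        groups=df["patient_id"],
        re_formula="~years_from_baseline",
    )
    return model.fit()

# Example usage:
# result = fit_weight_trajectory(measurements)
# The interaction coefficient estimates how annual weight change differs per
# additional daily meal (reported above as 0.28 kg/year, 95% CI 0.02-0.53).
```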


Subjects
Diet , Feeding Behavior , Adult , Humans , Female , Middle Aged , Male , Prospective Studies , Meals , Sleep , Body Mass Index