1 - 20 of 72
2.
Lancet Digit Health ; 6(2): e126-e130, 2024 Feb.
Article En | MEDLINE | ID: mdl-38278614

Advances in machine learning for health care have raised concerns in the research community about bias, specifically the introduction, perpetuation, or exacerbation of care disparities. Reinforcing these concerns is the finding that medical images often reveal signals about sensitive attributes in ways that are hard to pinpoint for both algorithms and people. This finding raises the question of how best to design general purpose pretrained embeddings (GPPEs, defined as embeddings meant to support a broad array of use cases) for building downstream models that are free from particular types of bias. The downstream model should be carefully evaluated for bias, and audited and improved as appropriate. However, in our view, well-intentioned attempts to prevent the upstream components (GPPEs) from learning sensitive attributes can have unintended consequences for the downstream models. Despite producing a veneer of technical neutrality, the resultant end-to-end system might still be biased or poorly performing. Building on previously published data, we argue that GPPEs should ideally contain as much information as the original data contain, and we highlight the perils of trying to remove sensitive attributes from a GPPE. We also emphasise that downstream prediction models trained for specific tasks and settings, whether developed using GPPEs or not, should be carefully designed and evaluated to avoid bias that makes models vulnerable to issues such as distributional shift. These evaluations should be done by a diverse team, including social scientists, on a diverse cohort representing the full breadth of the patient population for which the final model is intended.


Delivery of Health Care , Machine Learning , Humans , Bias , Algorithms
3.
Int J Med Inform ; 182: 105303, 2024 Feb.
Article En | MEDLINE | ID: mdl-38088002

BACKGROUND: Studies of racial disparities in healthcare are increasing in number; however, they are subject to vast differences in the definition, classification, and use of race/ethnicity data. Improved standardization of this information can strengthen the conclusions drawn from studies using such data. The objective of this study is to examine how race/ethnicity data are recorded in research by reviewing articles on race/ethnicity health disparities, and to identify problems and solutions in data reporting that may affect overall data quality. METHODS: In this systematic review, Business Source Complete, Embase.com, IEEE Xplore, PubMed, Scopus and Web of Science Core Collection were searched for relevant articles published from 2000 to 2020. Search terms related to the concepts of electronic medical records, race/ethnicity, and data entry related to race/ethnicity were used. Exclusion criteria included articles not in the English language and those describing pediatric populations. Data were extracted from the published articles. This review was organized and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement for systematic reviews. FINDINGS: In this systematic review, 109 full-text articles were reviewed. Weaknesses and possible solutions have been discussed in the current literature, with the predominant problem and solution as follows: the electronic medical record (EMR) is vulnerable to inaccuracies and incompleteness in the methods by which research staff collect these data; however, improved standardization of the collection and use of race data in patient care may help alleviate these inaccuracies. INTERPRETATION: Conclusions drawn from large datasets concerning people of particular racial/ethnic groups should be made cautiously, and a careful review of the methodology of each publication should be conducted before implementation in patient care.


Electronic Health Records , Research Design , Child , Humans , Ethnicity , Data Accuracy , Healthcare Disparities
4.
Eur J Cancer ; 198: 113504, 2024 Feb.
Article En | MEDLINE | ID: mdl-38141549

Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.


Artificial Intelligence , Neoplasms , Humans , Quality of Life , Workflow , Neoplasms/therapy , Patient Care
6.
PLOS Digit Health ; 2(10): e0000279, 2023 Oct.
Article En | MEDLINE | ID: mdl-37824584

INTRODUCTION: Harnessing new digital technologies can improve access to health care but can also widen the health divide for those with poor digital literacy. This scoping review aims to assess the current state of low digital health literacy in terms of its definition, reach, impact on health, and interventions for its mitigation. METHODS: A comprehensive literature search strategy was composed by a qualified medical librarian. Literature databases [Medline (Ovid), Embase (Ovid), Scopus, and Google Scholar] were queried using appropriate natural language and controlled vocabulary terms, along with hand-searching and citation chaining. We focused on recent and highly cited references published in English. Reviews were excluded. This scoping review was conducted following the methodological framework of Arksey and O'Malley. RESULTS: A total of 268 articles were identified (263 from the initial search and 5 added from the references of the original papers), 53 of which were selected for full-text analysis. Digital health literacy is the descriptor most frequently used to refer to the ability to find and use health information with the goal of addressing or solving a health problem using technology. The most widely used tool to assess digital health literacy is the eHealth Literacy Scale (eHEALS), a self-reported measurement tool that evaluates six core dimensions and is available in various languages. Individuals with higher digital health literacy scores show better self-management, greater participation in their own medical decisions, and better mental and psychological state and quality of life. Effective interventions addressing poor digital health literacy included education/training and social support. CONCLUSIONS: Although there is interest in the study and impact of poor digital health literacy, there is still a long way to go to improve measurement tools and to find effective interventions that reduce the digital health divide.

7.
Int J Med Inform ; 178: 105211, 2023 Oct.
Article En | MEDLINE | ID: mdl-37690225

PURPOSE: Chronic obstructive pulmonary disease (COPD) is one of the most common chronic illnesses in the world. Unfortunately, COPD is often difficult to diagnose early, when interventions can alter the disease course; it is underdiagnosed or diagnosed too late for effective treatment. Currently, spirometry is the gold standard for diagnosing COPD, but it can be challenging to obtain, especially in resource-poor countries. Chest X-rays (CXRs), however, are readily available and may have potential as a screening tool to identify patients with COPD who should undergo further testing or intervention. In this study, we used three CXR datasets alongside their respective electronic health records (EHR) to develop and externally validate our models. METHOD: To leverage the performance of convolutional neural network models, we proposed two fusion schemes: (1) model-level fusion, using bootstrap aggregating to combine predictions from two models, and (2) data-level fusion, using CXR image data from different institutions, or multimodal data (CXR images and EHR data), for model training. Fairness analysis was then performed to evaluate the models across different demographic groups. RESULTS: Our results demonstrate that deep learning (DL) models can detect COPD from CXRs with an area under the curve of over 0.75, which could facilitate patient screening for COPD, especially in low-resource regions where CXRs are more accessible than spirometry. CONCLUSIONS: By using a ubiquitous test, future research could build on this work to detect COPD early in patients who would not otherwise have been diagnosed or treated, altering the course of this highly morbid disease.
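At inference time, the model-level fusion scheme described in this abstract reduces to combining the two models' predicted probabilities. A minimal sketch, assuming a simple unweighted average (the abstract does not specify the aggregation weights, and the function name is ours):

```python
def model_level_fusion(probs_a, probs_b):
    """Bagging-style model-level fusion: average the COPD probabilities
    predicted by two independently trained models, element-wise."""
    if len(probs_a) != len(probs_b):
        raise ValueError("prediction lists must have the same length")
    return [(a + b) / 2.0 for a, b in zip(probs_a, probs_b)]

# Example: two models scoring the same three chest X-rays.
fused = model_level_fusion([0.60, 0.20, 0.90], [0.80, 0.40, 0.70])
```

Data-level fusion, by contrast, happens before training: images from different institutions, or image plus EHR features, are pooled into a single training set.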

8.
J Am Heart Assoc ; 12(13): e029232, 2023 07 04.
Article En | MEDLINE | ID: mdl-37345819

BACKGROUND: Mortality prediction in critically ill patients with cardiogenic shock can guide triage and the selection of potentially high-risk treatment options. METHODS AND RESULTS: We developed and externally validated a checklist risk score to predict in-hospital mortality among adults admitted to the cardiac intensive care unit with Society for Cardiovascular Angiography & Interventions Shock Stage C or greater cardiogenic shock, using 2 real-world data sets and Risk-Calibrated Super-sparse Linear Integer Modeling (RiskSLIM). We compared this model with models developed using conventional penalized logistic regression and with published cardiogenic shock and intensive care unit mortality prediction models. There were 8815 patients in our training cohort (in-hospital mortality 13.4%) and 2237 patients in our validation cohort (in-hospital mortality 22.8%), and there were 39 candidate predictor variables. The final risk score (termed BOS,MA2) included maximum blood urea nitrogen ≥25 mg/dL, minimum oxygen saturation <88%, minimum systolic blood pressure <80 mm Hg, use of mechanical ventilation, age ≥60 years, and maximum anion gap ≥14 mmol/L, based on values recorded during the first 24 hours of intensive care unit stay. Predicted in-hospital mortality ranged from 0.5% for a score of 0 to 70.2% for a score of 6. The area under the receiver operating characteristic curve was 0.83 (0.82-0.84) in training and 0.76 (0.73-0.78) in validation, and the expected calibration error was 0.9% in training and 2.6% in validation. CONCLUSIONS: Developed using a novel machine learning method and the largest cardiogenic shock cohorts among published models, BOS,MA2 is a simple, clinically interpretable risk score with improved performance compared with existing cardiogenic shock risk scores and better calibration than general intensive care unit risk scores.
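The six checklist items translate directly into a point count, one point per criterion met, using values from the first 24 hours of intensive care unit stay. A minimal sketch; the function and argument names are ours, and because the abstract reports predicted mortality only at the extremes (0.5% at a score of 0, 70.2% at a score of 6), the sketch returns the raw score rather than a probability:

```python
def bosma2_score(max_bun_mg_dl, min_spo2_pct, min_sbp_mmhg,
                 on_mechanical_ventilation, age_years, max_anion_gap_mmol_l):
    """BOS,MA2 checklist score (0-6): one point per criterion met,
    based on values from the first 24 h of ICU stay."""
    criteria = [
        max_bun_mg_dl >= 25,         # maximum blood urea nitrogen >=25 mg/dL
        min_spo2_pct < 88,           # minimum oxygen saturation <88%
        min_sbp_mmhg < 80,           # minimum systolic blood pressure <80 mm Hg
        on_mechanical_ventilation,   # use of mechanical ventilation
        age_years >= 60,             # age >=60 years
        max_anion_gap_mmol_l >= 14,  # maximum anion gap >=14 mmol/L
    ]
    return sum(criteria)
```

The checklist form is the point of RiskSLIM: each predictor is a thresholded yes/no item, so the score can be tallied at the bedside without a calculator.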


Intensive Care Units , Shock, Cardiogenic , Adult , Humans , Middle Aged , Shock, Cardiogenic/diagnosis , Shock, Cardiogenic/therapy , Retrospective Studies , Risk Factors , Hospital Mortality
9.
J Crit Care ; 77: 154325, 2023 10.
Article En | MEDLINE | ID: mdl-37187000

PURPOSE: Limited evidence exists regarding outcomes associated with different correction rates of severe hyponatremia. MATERIALS AND METHODS: This retrospective cohort analysis employed a multi-center ICU database to identify patients with sodium ≤120 mEq/L during ICU admission. We determined correction rates over the first 24 h and categorized them as rapid (>8 mEq/L/day) or slow (≤8 mEq/L/day). The primary outcome was in-hospital mortality. Secondary outcomes included hospital-free days, ICU-free days, and neurological complications. We used inverse probability weighting for confounder adjustment. RESULTS: Our cohort included 1024 patients: 451 rapid and 573 slow correctors. Rapid correction was associated with lower in-hospital mortality (absolute difference: -4.37%; 95% CI, -8.47 to -0.26%), more hospital-free days (1.80 days; 95% CI, 0.82 to 2.79 days), and more ICU-free days (1.16 days; 95% CI, 0.15 to 2.17 days). There was no significant difference in neurological complications (2.31%; 95% CI, -0.77 to 5.40%). CONCLUSION: Rapid correction (>8 mEq/L/day) of severe hyponatremia within the first 24 h was associated with lower in-hospital mortality and more ICU-free and hospital-free days, without an increase in neurological complications. Despite major limitations, including the inability to identify the chronicity of hyponatremia, the results have important implications and warrant prospective studies.
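The exposure in this study is a simple threshold on the sodium change over the first 24 hours; a minimal sketch (function and variable names are ours):

```python
def classify_correction(sodium_admission_meq_l, sodium_24h_meq_l):
    """Categorise the first-24-h correction rate of severe hyponatremia
    into the study's two exposure groups."""
    # Change over the first 24 h, so the delta is already in mEq/L/day.
    rate = sodium_24h_meq_l - sodium_admission_meq_l
    return "rapid" if rate > 8 else "slow"
```

For example, a patient corrected from 115 to 125 mEq/L in the first day (10 mEq/L/day) falls in the rapid group, while 118 to 126 (exactly 8 mEq/L/day) falls in the slow group.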


Hyponatremia , Humans , Hyponatremia/etiology , Retrospective Studies , Prospective Studies , Sodium , Intensive Care Units
10.
BMJ Glob Health ; 8(5)2023 05.
Article En | MEDLINE | ID: mdl-37257937

BACKGROUND: The COVID-19 pandemic required science to provide answers rapidly to combat the outbreak. Hence, the reproducibility and quality of research conduct may have been threatened, particularly regarding privacy and data protection, in varying ways around the globe. The objective was to investigate aspects of the reporting of informed consent and data handling as proxies for the quality of study conduct. METHODS: A systematic scoping review was performed by searching PubMed and Embase. The search was performed on November 8th, 2020. Studies of hospitalised patients over 18 years of age diagnosed with COVID-19 were eligible for inclusion. With a focus on informed consent, data were extracted on the study design, prestudy protocol registration, ethical approval, data anonymisation, data sharing and data transfer as proxies for study quality. For comparison, data on country income level, study location and journal impact factor were also collected. RESULTS: 972 studies were included. 21.3% of studies reported informed consent, 42.6% reported waivers of consent, 31.4% did not report consent information and 4.7% mentioned other types of consent. Informed consent reporting was highest in clinical trials (94.6%) and lowest in retrospective cohort studies (15.0%). The reporting of consent versus no consent did not differ significantly by journal impact factor (p=0.159). 16.8% of studies reported a prestudy protocol registration or design. Ethical approval was described in 90.9% of studies. Information on anonymisation was provided in 17.0% of studies. Of 257 multicentre studies, 1.2% reported data sharing agreements, and none reported on the Findable, Accessible, Interoperable and Reusable data principles. 1.2% reported on open data. Consent was most often reported in the Middle East (42.4%) and least often in North America (4.7%). Only one report originated from a low-income country.
DISCUSSION: Informed consent and aspects of data handling and sharing were under-reported in publications concerning COVID-19 and differed between countries, which strains the quality of study conduct at a time when answers are direly needed.


COVID-19 , Pandemics , Humans , Adolescent , Retrospective Studies , Reproducibility of Results , Informed Consent
12.
Semin Diagn Pathol ; 40(2): 100-108, 2023 Mar.
Article En | MEDLINE | ID: mdl-36882343

The field of medicine is undergoing rapid digital transformation. Pathologists are now striving to digitize their data, workflows, and interpretations, assisted by the enabling development of whole-slide imaging. Going digital means that the analog process of human diagnosis can be augmented or even replaced by rapidly evolving AI approaches, which are just now entering clinical practice. But with such progress come challenges that reflect a variety of stressors, including the impact of unrepresentative training data with its accompanying implicit bias, data privacy concerns, and the fragility of algorithm performance. Beyond such core digital aspects, considerations arise from the difficulties presented by changing disease presentations, diagnostic approaches, and therapeutic options. While some tools, such as data federation, can help broaden data diversity while preserving expertise and local control, they may not be the full answer to some of these issues. The impact of AI in pathology on the field's human practitioners is still very much unknown: the instillation of unconscious bias and deference to AI guidance need to be understood and addressed. If AI is widely adopted, it may remove many inefficiencies in daily practice and compensate for staff shortages. It may also cause practitioner deskilling, dethrilling, and burnout. We discuss the technological, clinical, legal, and sociological factors that will influence the adoption of AI in pathology, and its eventual impact for good or ill.


Algorithms , Pathologists , Humans , Artificial Intelligence
13.
Surv Ophthalmol ; 68(4): 669-677, 2023.
Article En | MEDLINE | ID: mdl-36878360

Uveitis is a disease complex characterized by intraocular inflammation of the uvea and is an important cause of blindness and social morbidity. With the dawn of artificial intelligence (AI) and machine learning integration in health care, their application to uveitis creates an avenue to improve screening and diagnosis. Our review identified studies using artificial intelligence in uveitis and classified them as diagnosis support, finding detection, screening, and standardization of uveitis nomenclature. The overall performance of the models is poor, with limited datasets and a lack of validation studies and publicly available data and code. We conclude that AI holds great promise for assisting with the diagnosis and detection of ocular findings of uveitis, but further studies and large representative datasets are needed to guarantee generalizability and fairness.


Artificial Intelligence , Uveitis , Humans , Machine Learning , Uveitis/diagnosis , Delivery of Health Care , Uvea
15.
Sci Data ; 10(1): 1, 2023 01 03.
Article En | MEDLINE | ID: mdl-36596836

Digital data collection during routine clinical practice is now ubiquitous within hospitals. These data contain valuable information on the care of patients and their response to treatments, offering exciting opportunities for research. Typically, data are stored within archival systems that are not intended to support research. These systems are often inaccessible to researchers and structured for optimal storage rather than for interpretability and analysis. Here we present MIMIC-IV, a publicly available database sourced from the electronic health record of the Beth Israel Deaconess Medical Center. Available information includes patient measurements, orders, diagnoses, procedures, treatments, and deidentified free-text clinical notes. MIMIC-IV is intended to support a wide array of research studies and educational material, helping to reduce barriers to conducting clinical research.


Electronic Health Records , Humans , Databases, Factual , Hospitals
16.
Cancer Treat Rev ; 112: 102498, 2023 Jan.
Article En | MEDLINE | ID: mdl-36527795

Artificial intelligence (AI) has experienced explosive growth in oncology and related specialties in recent years. Improved expertise in data capture, increased capacity for data aggregation and analytic power, and the decreasing costs of genome sequencing and related biologic "omics" set the foundation, and the need, for novel tools that can meaningfully process these data from multiple sources and of varying types. These advances provide value across biomedical discovery, diagnosis, prognosis, treatment, and prevention, in a multimodal fashion. However, while big data and AI tools have already revolutionized many fields, medicine has partially lagged because of its complexity and multi-dimensionality, leading to technical challenges in developing and validating solutions that generalize to diverse populations. Indeed, inherent biases and the miseducation of algorithms are increasingly relevant concerns in view of their implementation in daily clinical practice; critically, AI can mirror the unconscious biases of the humans who generated these algorithms. Therefore, to avoid worsening existing health disparities, it is critical to employ a thoughtful, transparent, and inclusive approach that addresses bias in algorithm design and implementation along the cancer care continuum. In this review, a broad landscape of the major applications of AI in cancer care is provided, with a focus on cancer research and precision medicine. Major challenges posed by the implementation of AI in the clinical setting are discussed, and potentially feasible solutions for mitigating bias are offered in the light of promoting cancer health equity.


Artificial Intelligence , Neoplasms , Humans , Precision Medicine , Algorithms , Prognosis , Neoplasms/genetics , Neoplasms/therapy , Neoplasms/diagnosis
17.
Adv Chronic Kidney Dis ; 29(5): 431-438, 2022 09.
Article En | MEDLINE | ID: mdl-36253026

Machine learning is the field of artificial intelligence in which computers are trained to make predictions or to identify patterns in data through complex mathematical algorithms. It has great potential in critical care to predict outcomes, such as acute kidney injury, and can be used for prognosis and to suggest management strategies. Machine learning can also be used as a research tool to advance our clinical and biochemical understanding of acute kidney injury. In this review, we introduce basic concepts in machine learning and review recent research in each of these domains.


Acute Kidney Injury , Artificial Intelligence , Acute Kidney Injury/diagnosis , Acute Kidney Injury/therapy , Critical Care , Humans , Intensive Care Units , Machine Learning
19.
Lancet Digit Health ; 4(12): e893-e898, 2022 12.
Article En | MEDLINE | ID: mdl-36154811

Analysis of electronic health records (EHRs) is an increasingly common approach for studying real-world patient data. Use of routinely collected data offers several advantages compared with other study designs, including reduced administrative costs, the ability to update analyses as practice patterns evolve, and larger sample sizes. Methodologically, EHR analysis is subject to distinct challenges because the data are not collected for research purposes. In this Viewpoint, we elaborate on the importance of in-depth knowledge of clinical workflows and describe six potential pitfalls to be avoided when working with EHR data, drawing on examples from the literature and our own experience. We propose solutions for the prevention or mitigation of factors associated with each of these six pitfalls: sample selection bias, imprecise variable definitions, limitations to deployment, variable measurement frequency, subjective treatment allocation, and model overfitting. Ultimately, we hope that this Viewpoint will guide researchers to further improve the methodological robustness of EHR analysis.


Data Science , Electronic Health Records , Humans , Data Collection , Research Design , Routinely Collected Health Data
...