Results 1 - 12 of 12
1.
J Med Case Rep ; 18(1): 360, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39095817

ABSTRACT

BACKGROUND: Our case report provides the first clinical evaluation of autopsy practices for a patient death that occurred on the cloud. We question how autopsy practices may require adaptation for a death that presents via the 'Internet of Things', examining how existing guidelines capture data related to death that is no longer confined to the patient's body. CASE PRESENTATION: The patient was a British man in his 50s who came to the attention of the medical team via an alert on the cloud-based platform that monitored his implantable cardioverter defibrillator (ICD). The patient had a background of congenital heart disease, with a previous ventricular fibrillation cardiac arrest, for which the ICD had been implanted two years earlier. Retrospective analysis of the cloud data demonstrated a gradually decreasing nocturnal heart rate over the previous three months, falling to a final transmission of 24 beats per minute (bpm). At the patient's post-mortem examination the ICD was treated as medical waste, structural tissue changes precluded effective evaluation of the device hardware, potential issues related to the device software were not investigated, and the cause of death was assigned to underlying heart failure. The documentation from the attending law enforcement officials did not consider possible digital causes of harm, and relevant technology was not collected from the scene of death. CONCLUSION: Through this patient case we explore novel challenges associated with digital deaths, including: (1) device hardware issues (difficult extraction processes, impact of pathological tissue changes); (2) software and data limitations (impact of negative body temperatures and mortuary radio-imaging on devices, lack of retrospective cloud data analysis); (3) guideline limitations (missing digital components in autopsy instruction and death certification); and (4) changes to clinical management (the emotional impact of communicating deaths that occur over the internet to family members). We consider the implications of our findings for public health services, the security and intelligence community, and patients and their families. In sharing this report we seek to raise awareness of digital medical cases, to draw attention to how the nature of dying is changing through technology, and to motivate the development of digitally appropriate clinical practice.


Subjects
Autopsy; Defibrillators, Implantable; Humans; Male; Middle Aged; Cloud Computing
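
To illustrate the kind of cloud telemetry signal this case turned on, here is a minimal sketch, assuming nightly minimum heart rates arrive as a chronological series; the function name, thresholds and example data are hypothetical, not taken from the report or any vendor platform.

    # Hypothetical illustration: flag a gradually declining nocturnal heart
    # rate in cloud-transmitted ICD telemetry. Thresholds are invented.
    import numpy as np
    from scipy.stats import linregress

    def review_nocturnal_hr(nightly_min_bpm, bradycardia_bpm=40, decline_per_night=-0.1):
        """nightly_min_bpm: chronological nightly minimum heart rates."""
        fit = linregress(np.arange(len(nightly_min_bpm)), nightly_min_bpm)
        alerts = []
        if fit.slope <= decline_per_night and fit.pvalue < 0.05:
            alerts.append(f"sustained decline: {fit.slope:.2f} bpm/night")
        if nightly_min_bpm[-1] <= bradycardia_bpm:
            alerts.append(f"bradycardia: last transmission {nightly_min_bpm[-1]:.0f} bpm")
        return alerts

    # Example mirroring the case: ~3 months of readings drifting down to 24 bpm.
    print(review_nocturnal_hr(np.linspace(55, 24, 90)))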
2.
J Med Internet Res ; 26: e46936, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39186324

ABSTRACT

BACKGROUND: The presence of bias in artificial intelligence has garnered increased attention, with inequities in algorithmic performance being exposed across the fields of criminal justice, education, and welfare services. In health care, the inequitable performance of algorithms across demographic groups may widen health inequalities. OBJECTIVE: Here, we identify and characterize bias in cardiology algorithms, looking specifically at algorithms used in the management of heart failure. METHODS: Stage 1 involved a literature search of PubMed and Web of Science for key terms relating to cardiac machine learning (ML) algorithms. Papers that built ML models to predict cardiac disease were evaluated for their focus on demographic bias in model performance, and open-source data sets were retained for our investigation. Two open-source data sets were identified: (1) the University of California Irvine Heart Failure data set and (2) the University of California Irvine Coronary Artery Disease data set. We reproduced existing algorithms that have been reported for these data sets, tested them for sex biases in algorithm performance, and assessed a range of remediation techniques for their efficacy in reducing inequities. Particular attention was paid to the false negative rate (FNR), due to the clinical significance of underdiagnosis and missed opportunities for treatment. RESULTS: In stage 1, our literature search returned 127 papers, with 60 meeting the criteria for a full review and only 3 papers highlighting sex differences in algorithm performance. In the papers that reported sex, there was a consistent underrepresentation of female patients in the data sets. No papers investigated racial or ethnic differences. In stage 2, we reproduced algorithms reported in the literature, achieving mean accuracies of 84.24% (SD 3.51%) for data set 1 and 85.72% (SD 1.75%) for data set 2 (random forest models). For data set 1, the FNR was significantly higher for female patients in 13 out of 16 experiments, meeting the threshold of statistical significance (-17.81% to -3.37%; P<.05). A smaller disparity in the false positive rate was significant for male patients in 13 out of 16 experiments (-0.48% to +9.77%; P<.05). We observed an overprediction of disease for male patients (higher false positive rate) and an underprediction of disease for female patients (higher FNR). Sex differences in feature importance suggest that feature selection needs to be demographically tailored. CONCLUSIONS: Our research exposes a significant gap in cardiac ML research, highlighting that the underperformance of algorithms for female patients has been overlooked in the published literature. Our study quantifies sex disparities in algorithmic performance and explores several sources of bias. We found an underrepresentation of female patients in the data sets used to train algorithms, identified sex biases in model error rates, and demonstrated that a series of remediation techniques were unable to address the inequities present.


Subjects
Algorithms; Machine Learning; Humans; Female; Male; Heart Diseases; Sex Factors
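
A minimal sketch of the sex-stratified error analysis this abstract describes, assuming a tabular dataset with a binary outcome and a sex column; the file name, column names and model settings are assumptions for illustration, not the authors' pipeline.

    # Sketch: train a classifier, then compare false negative and false
    # positive rates between sexes on the held-out set. Names are assumed.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def error_rates_by_group(y_true, y_pred, group):
        rates = {}
        for g in np.unique(group):
            m = group == g
            tp = np.sum((y_pred == 1) & (y_true == 1) & m)
            fn = np.sum((y_pred == 0) & (y_true == 1) & m)
            fp = np.sum((y_pred == 1) & (y_true == 0) & m)
            tn = np.sum((y_pred == 0) & (y_true == 0) & m)
            rates[g] = {"FNR": fn / (fn + tp), "FPR": fp / (fp + tn)}
        return rates

    df = pd.read_csv("heart.csv")                          # hypothetical file
    X, y, sex = df.drop(columns="target"), df["target"], df["sex"]
    X_tr, X_te, y_tr, y_te, _, sex_te = train_test_split(X, y, sex, stratify=y, random_state=0)
    pred = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr).predict(X_te)
    print(error_rates_by_group(y_te.to_numpy(), pred, sex_te.to_numpy()))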
3.
J Med Internet Res ; 26: e50505, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38990611

ABSTRACT

BACKGROUND: Health care professionals receive little training on the digital technologies that their patients rely on. Consequently, practitioners may face significant barriers when providing care to patients experiencing digitally mediated harms (eg, medical device failures and cybersecurity exploits). Here, we explore the impact of technological failures in clinical terms. OBJECTIVE: Our study explored the key challenges faced by frontline health care workers during digital events, identified gaps in clinical training and guidance, and proposed a set of recommendations for improving digital clinical practice. METHODS: We conducted a qualitative study centered on a 1-day, internationally attended, multistakeholder workshop with 52 participants. Participants engaged in table-top exercises and group discussions focused on medical scenarios complicated by technology (eg, malfunctioning ventilators and malicious hacks on health care apps). Extensive notes from 5 scribes were retrospectively analyzed, and a thematic analysis was performed to extract and synthesize data. RESULTS: Clinicians reported novel forms of harm related to technology (eg, geofencing in domestic violence and errors related to interconnected fetal monitoring systems) and barriers impeding adverse event reporting (eg, time constraints and postmortem device disposal). Challenges to providing effective patient care included a lack of clinical suspicion of device failures, unfamiliarity with equipment, and an absence of digitally tailored clinical protocols. Participants agreed that cyberattacks should be classified as major incidents, with the repurposing of existing crisis resources. Treatment of patients was determined by the role technology played in clinical management, such that those reliant on potentially compromised laboratory or radiological facilities were prioritized. CONCLUSIONS: Here, we have framed digital events through a clinical lens, describing them in terms of their end-point impact on the patient. In doing so, we have developed a series of recommendations for ensuring that responses to digital events are tailored to clinical needs and center patient care.


Subjects
Computer Security; Humans; Health Personnel; Biomedical Technology; Qualitative Research; Female
4.
Digit Health ; 10: 20552076241247939, 2024.
Article in English | MEDLINE | ID: mdl-38766368

ABSTRACT

Background: The advance of digital health technologies has created new forms of potential pathology which are not captured in current clinical guidelines. Through simulation-based research, we have identified the challenges to clinical care that emerge when patients suffer from illnesses stemming from failures in digital health technologies. Methods: Clinical simulation sessions were designed based on patient case reports relating to (a) medical device hardware errors, (b) medical device software errors, (c) complications of consumer technology and (d) technology-facilitated abuse. Clinicians were recruited to participate in simulations at three UK hospitals; audiovisual suites were used to facilitate group observation of the simulation experience and focused debrief discussions. Invigilators scored clinicians on performance, clinicians provided individual qualitative and quantitative feedback, and extensive notes were taken throughout. Findings: Paired t-tests of pre- and post-simulation feedback demonstrated significant improvements in clinicians' diagnostic awareness, technical knowledge and confidence in clinical management following simulation exposure (p < 0.01). Barriers to care included: (a) low suspicion of digital agents, (b) attribution to psychopathology, (c) lack of education in technical mechanisms and (d) the limited utility of available tests. Suggested interventions for improving future practice included: (a) education initiatives, (b) technical support platforms, (c) digitally oriented assessments in hospital workflows, (d) cross-disciplinary staff and (e) protocols for digital cases. Conclusion: We provide an effective framework for simulation training focused on digital health pathologies and uncover barriers that impede effective care for patients dependent on technology. Our recommendations are relevant to educators, practising clinicians and professionals working in regulation, policy and industry.
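
The pre/post comparison reported above reduces to a paired t-test on per-clinician scores; a small sketch with invented numbers:

    # Paired t-test on pre- vs post-simulation confidence, one pair per
    # clinician. Scores here are invented placeholders, not study data.
    from scipy.stats import ttest_rel

    pre  = [3, 4, 5, 2, 4, 3, 5, 4]    # self-rated confidence before (0-10)
    post = [6, 7, 7, 5, 8, 6, 7, 6]    # self-rated confidence after
    result = ttest_rel(post, pre)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")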

5.
Lancet ; 402 Suppl 1: S88, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37997134

ABSTRACT

BACKGROUND: Biotechnological syndromes refer to illnesses that arise at the intersection of human physiology and digital technology. Implanted technologies can malfunction (eg, runaway pacemakers, hacked insulin pumps), and consumer technologies can be exploited to impose adverse health effects (eg, technology-facilitated abuse, hacks on epilepsy websites inducing seizures). Through a series of clinical simulation events, our study aimed to (1) evaluate the ability of physicians to respond to biotechnological syndromes, (2) explore gaps in training that impede effective patient care in digital cases, and (3) identify clinical cases due to digital technology arising in the population. METHODS: This was a multisite clinical simulation study. Between Jan 1 and July 1, 2023, four half-day clinical simulation events focused on digital pathologies were delivered across three NHS sites in London and the East Midlands. Participants (n=14) ranged in seniority from clinical medical students through to hospital consultants. Ethics approval was obtained from University College London. Participant performance was scored by one researcher, using mark schemes built on the Objective Structured Clinical Examination (OSCE) format of UK medical schools. Qualitative and quantitative feedback was collected from participants after each of the four scenarios. Participants were asked to identify the clinical challenges present in each simulation, discuss cases from their own practice, and evaluate the usefulness of the educational material. FINDINGS: Participants reported a wide range of examples from their own practice (eg, insulin pumps malfunctioning due to Apple Watches, cardiac arrests due to faults in ventilators). Participants described barriers to treatment in the simulations, including low diagnostic suspicion of technological failures, little education on biotechnological mechanisms, a lack of available expertise, and uncertainty regarding effective therapeutics. In the subjective feedback, participants reported the lowest levels of confidence when managing cases relating to software issues in medical devices, both in terms of confidence in their own ability to deliver care (mean scores: 3·6/10 junior staff, 5·8/10 senior staff) and in their teams (3·8/10 juniors, 6·8/10 seniors). INTERPRETATION: In our digital society, clinical cases related to technology are likely to increase in the population. At present, gaps in clinical awareness, education, training material, and appropriate guidelines are among the barriers that health-care professionals face when treating these patients. FUNDING: None.


Subjects
Insulins; Physicians; Humans; Health Personnel/education; London
6.
J Fam Violence ; : 1-20, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-37358974

ABSTRACT

Purpose: Computational text mining methods are proposed as a useful methodological innovation in Intimate Partner Violence (IPV) research. Text mining can offer researchers access to existing or new datasets, sourced from social media or from IPV-related organisations, that would be too large to analyse manually. This article aims to give an overview of current work applying text mining methodologies to the study of IPV, as a starting point for researchers wanting to use such methods in their own work. Methods: This article reports the results of a systematic review of academic research using computational text mining to study IPV. A review protocol was developed according to PRISMA guidelines, and a literature search of 8 databases was conducted, identifying 22 unique studies for inclusion in the review. Results: The included studies cover a wide range of methodologies and outcomes. Supervised and unsupervised approaches are represented, including rule-based classification (n = 3), traditional machine learning (n = 8), deep learning (n = 6) and topic modelling (n = 4) methods. Datasets are mostly sourced from social media (n = 15), with other data sourced from police forces (n = 3), health or social care providers (n = 3), or litigation texts (n = 1). Evaluation mostly used a held-out, labelled test set or k-fold cross-validation, with accuracy and F1 metrics reported. Only a few studies commented on the ethics of computational IPV research. Conclusions: Text mining methodologies offer promising data collection and analysis techniques for IPV research. Future work in this space must consider the ethical implications of computational approaches.
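
As a concrete instance of the evaluation pattern most included studies share (a supervised classifier scored with k-fold cross-validation and F1), here is a minimal sketch with placeholder texts; nothing below reproduces any reviewed study's data or model.

    # Toy supervised text-mining pipeline: TF-IDF features, logistic
    # regression, k-fold cross-validation scored with F1. Data is fake.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    texts  = ["example post one", "example post two",
              "example post three", "example post four"]
    labels = [1, 0, 1, 0]            # e.g. 1 = IPV-related, 0 = unrelated

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    print(cross_val_score(clf, texts, labels, cv=2, scoring="f1").mean())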

7.
PLOS Digit Health ; 2(1): e0000089, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36812593

ABSTRACT

Safeguarding vulnerable patients is a key responsibility of healthcare professionals. Yet existing clinical and patient management protocols are outdated, as they do not address the emerging threat of technology-facilitated abuse: the misuse of digital systems such as smartphones or other Internet-connected devices to monitor, control and intimidate individuals. The lack of attention given to how technology-facilitated abuse may affect patients in their lives can result in clinicians failing to protect vulnerable patients, and may compromise their care in several unexpected ways. We attempt to address this gap by evaluating the literature available to healthcare practitioners working with patients impacted by digitally enabled forms of harm. A literature search was carried out between September 2021 and January 2022, in which three academic databases were probed using strings of relevant search terms, returning a total of 59 articles for full-text review. The articles were appraised against three criteria: (a) a focus on technology-facilitated abuse; (b) relevance to clinical settings; and (c) the role of healthcare practitioners in safeguarding. Of the 59 articles, 17 met at least one criterion and only one met all three. We drew additional information from the grey literature to identify areas for improvement in medical settings and at-risk patient groups. Technology-facilitated abuse concerns healthcare professionals from the point of consultation to the point of discharge; as a result, clinicians need to be equipped with the tools to identify and address these harms at any stage of the patient's journey. In this article, we offer recommendations for further research within different medical subspecialities and highlight areas requiring policy development in clinical environments.

8.
BMJ Case Rep ; 15(12), 2022 Dec 26.
Article in English | MEDLINE | ID: mdl-36572446

ABSTRACT

A man in his 50s attended the emergency department with an acute deterioration in his Parkinson's symptoms, presenting with limb rigidity, widespread tremor, choreiform dyskinesia, dysarthria, intense sadness and a severe occipital headache. After common differentials for sudden-onset parkinsonism (eg, infection, medication change) had been excluded, an error on the patient's deep brain stimulator was noted. The patient's symptoms resolved only once he had been transferred to the specialist centre, where the programmer could reset the device settings. Due to COVID-19-related bed pressures on the ward, there was a delay in the patient receiving specialist attention, highlighting the need for non-specialist training in the emergency management of device errors.


Subjects
COVID-19; Deep Brain Stimulation; Parkinson Disease; Male; Humans; Parkinson Disease/complications; Parkinson Disease/therapy; Parkinson Disease/diagnosis; COVID-19/therapy; Brain; Tremor/etiology; Tremor/therapy; Deep Brain Stimulation/adverse effects; Emergency Service, Hospital
9.
NPJ Digit Med ; 5(1): 170, 2022 Nov 04.
Article in English | MEDLINE | ID: mdl-36333390

ABSTRACT

Equity is widely held to be fundamental to the ethics of healthcare. In the context of clinical decision-making, it rests on the comparative fidelity of the intelligence, whether evidence-based or intuitive, guiding the management of each individual patient. Though brought to recent attention by the individuating power of contemporary machine learning, such epistemic equity arises in the context of any decision guidance, whether traditional or innovative. Yet no general framework for its quantification, let alone its assurance, currently exists. Here we formulate epistemic equity in terms of model fidelity evaluated over learnt multidimensional representations of identity, crafted to maximise the captured diversity of the population, and introduce a comprehensive framework for Representational Ethical Model Calibration. We demonstrate the use of the framework on large-scale multimodal data from UK Biobank to derive diverse representations of the population, quantify model performance, and institute responsive remediation. We offer our approach as a principled solution to quantifying and assuring epistemic equity in healthcare, with applications across the research, clinical, and regulatory domains.
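
One schematic reading of the framework, under heavy simplification: learn a low-dimensional representation of the population, discover subgroups within it, and audit model error subgroup by subgroup. PCA and k-means below stand in for the richer learnt representations the paper uses; all names and thresholds are illustrative.

    # Simplified audit in the spirit of Representational Ethical Model
    # Calibration: flag subgroups (found in a learnt representation) for
    # which the model underperforms the population. Inputs are numpy arrays.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def audit_by_representation(X, y_true, y_pred, n_groups=8, tolerance=0.05):
        z = PCA(n_components=2).fit_transform(X)       # stand-in representation
        groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(z)
        overall = np.mean(y_pred != y_true)
        flagged = []
        for g in range(n_groups):
            err = np.mean(y_pred[groups == g] != y_true[groups == g])
            if err > overall + tolerance:              # subgroup underserved
                flagged.append((g, float(err)))
        return float(overall), flagged                 # flagged -> remediation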

10.
BMJ Health Care Inform ; 29(1), 2022 Apr.
Article in English | MEDLINE | ID: mdl-35470133

ABSTRACT

OBJECTIVES: The Indian Liver Patient Dataset (ILPD) is used extensively to create algorithms that predict liver disease. Given the existing research describing demographic inequities in liver disease diagnosis and management, these algorithms require scrutiny for potential biases. We address this overlooked issue by investigating ILPD models for sex bias. METHODS: Following our literature review of ILPD papers, the models reported in existing studies are recreated and then interrogated for bias. We define four experiments, training on sex-unbalanced/balanced data, with and without feature selection. We build random forest (RF), support vector machine (SVM), Gaussian Naïve Bayes and logistic regression (LR) classifiers, running each experiment 100 times and reporting average results with SD. RESULTS: We reproduce published models achieving accuracies of >70% (LR 71.31% (SD 2.37%) to SVM 79.40% (SD 2.50%)) and demonstrate a previously unobserved performance disparity. Across all classifiers, females suffer from a higher false negative rate (FNR). RF and LR classifiers are presently reported as the most effective models, yet in our experiments they demonstrate the greatest FNR disparity (RF: -21.02%; LR: -24.07%). DISCUSSION: We demonstrate a sex disparity that exists in published ILPD classifiers. In practice, the higher FNR for females would manifest as increased rates of missed diagnosis for female patients and a consequent lack of appropriate care. Our study demonstrates that evaluating biases in the initial stages of machine learning can provide insights into inequalities in current clinical practice, reveal pathophysiological differences between males and females, and help prevent the digitisation of inequalities into algorithmic systems. CONCLUSION: Our findings are important to medical data scientists, clinicians and policy-makers involved in the implementation of medical artificial intelligence systems. An awareness of the potential biases of these systems is essential in preventing the digital exacerbation of healthcare inequalities.


Subjects
Artificial Intelligence; Liver Diseases; Algorithms; Bayes Theorem; Bias; Delivery of Health Care; Female; Humans; Male; Supervised Machine Learning
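
A hedged reconstruction of the experimental loop this abstract reports: repeat a stratified split 100 times, fit a classifier, and track both overall accuracy and the female-minus-male FNR gap. The column names ('Gender', 'Selector') follow the common ILPD release, but the file path and label coding are assumptions; this is a sketch, not the authors' code.

    # Repeated-runs loop: mean accuracy with SD, plus the sex gap in false
    # negative rate. ILPD column names and label coding are assumed.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def fnr(y_true, y_pred):
        pos = y_true == 1
        return np.mean(y_pred[pos] == 0)

    df = pd.read_csv("ilpd.csv").dropna()           # hypothetical local copy
    df["Female"] = (df["Gender"] == "Female").astype(int)
    X = df.drop(columns=["Selector", "Gender"])
    y = (df["Selector"] == 1).astype(int)           # assumed: 1 = liver disease
    accs, gaps = [], []
    for seed in range(100):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=seed)
        pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
        yte, fem = y_te.to_numpy(), (X_te["Female"] == 1).to_numpy()
        accs.append(np.mean(pred == yte))
        gaps.append(fnr(yte[fem], pred[fem]) - fnr(yte[~fem], pred[~fem]))
    print(f"accuracy {np.mean(accs):.2%} (SD {np.std(accs):.2%}); "
          f"mean female-male FNR gap {np.mean(gaps):+.2%}")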
11.
PLoS One ; 15(12): e0240376, 2020.
Article in English | MEDLINE | ID: mdl-33332380

ABSTRACT

BACKGROUND: The rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure that historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from clinical, data science and linguistic perspectives. DESIGN/METHODS: A literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases with defined MeSH terms and keywords. Our primary analysis evaluated biases within 'GloVe' and 'Word2Vec' word embeddings. Euclidean distances were measured to assess relationships between psychiatric terms and demographic labels, and vector similarity functions were used to solve analogy questions relating to mental health. RESULTS: Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age. Our literature review returned 52 papers, none of which addressed all the areas of possible bias that we identify in model development. In addition, only one article appeared in more than one research database, demonstrating the isolation of research within disciplinary silos and the resulting barriers to cross-disciplinary collaboration and communication. CONCLUSION: Our findings are relevant to professionals who wish to minimize the health inequalities that may arise as a result of AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.


Subjects
Data Science/methods; Health Status Disparities; Mental Health/statistics & numerical data; Natural Language Processing; Psychiatry/methods; Bias; Data Science/statistics & numerical data; Humans; Intersectoral Collaboration; Linguistics; Psychiatry/statistics & numerical data
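
The embedding probes described in the methods can be reproduced in a few lines with a public model; the sketch below uses a GloVe release shipped through gensim's downloader, and the word choices are illustrative rather than the study's exact term lists.

    # Probe a pretrained embedding for demographic associations: Euclidean
    # distances between psychiatric terms and demographic words, plus an
    # analogy solved by vector similarity. Downloads the model on first use.
    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-100")

    def euclidean(a, b):
        return float(np.linalg.norm(vectors[a] - vectors[b]))

    for group in ["man", "woman"]:
        print(group, {term: round(euclidean(term, group), 3)
                      for term in ["depression", "anxiety", "schizophrenia"]})

    # Analogy via vector similarity: man is to doctor as woman is to ...?
    print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))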
12.
Artif Intell Med ; 110: 101965, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33250145

ABSTRACT

Medicine is at a disciplinary crossroads. With the rapid integration of Artificial Intelligence (AI) into the healthcare field, the future care of our patients will depend on the decisions we make now. Demographic healthcare inequalities continue to persist worldwide, and the impact of medical biases on different patient groups is still being uncovered by the research community. At a time when clinical AI systems are being scaled up in response to the COVID-19 pandemic, the role of AI in exacerbating health disparities must be critically reviewed. For AI to account for the past and build a better future, we must first unpack the present and create a new baseline on which to develop these tools. The means by which we move forwards will determine whether we project existing inequity into the future, or whether we reflect on what we hold to be true and challenge ourselves to be better. AI is an opportunity and a mirror for all disciplines to improve their impact on society, and for medicine the stakes could not be higher.


Subjects
Artificial Intelligence; COVID-19; Automation; Bias; Humans; SARS-CoV-2