Results 1 - 2 of 2
1.
BMJ Open; 13(4): e069255, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37185650

ABSTRACT

INTRODUCTION: Managing violence or aggression is an ongoing challenge in emergency psychiatry. Many patients identified as being at risk do not go on to become violent or aggressive. Efforts to automate the assessment of risk involve training machine learning (ML) models on data from electronic health records (EHRs) to predict these behaviours. However, no studies to date have examined which patient groups may be over-represented in false positive predictions, despite evidence of social and clinical biases that may lead to higher perceptions of risk in patients defined by intersecting features (eg, race, gender). Because risk assessment can impact psychiatric care (eg, via coercive measures, such as restraints), it is unclear which patients might be underserved or harmed by the application of ML.

METHODS AND ANALYSIS: We pilot a computational ethnography to study how the integration of ML into risk assessment might impact acute psychiatric care, with a focus on how EHR data are compiled and used to predict a risk of violence or aggression. Our objectives are: (1) to evaluate an ML model trained on psychiatric EHRs to predict violent or aggressive incidents for intersectional bias; and (2) to complete participant observation and qualitative interviews in an emergency psychiatric setting to explore how social, clinical and structural biases are encoded in the training data. Our overall aim is to study the impact of ML applications in acute psychiatry on marginalised and underserved patient groups.

ETHICS AND DISSEMINATION: The project was approved by the research ethics board at The Centre for Addiction and Mental Health (053/2021). Study findings will be presented in peer-reviewed journals and at conferences, and shared with service users and providers.
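The first objective, auditing a risk model for intersectional bias in its false positives, can be illustrated with a minimal sketch. The group labels, outcome labels and predictions below are hypothetical, not from the study's data: for each group, the false positive rate is the share of patients who did not become violent or aggressive but were nonetheless flagged as at risk.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per-group FPR from (group, y_true, y_pred) records.

    FPR = FP / (FP + TN): among true negatives (y_true == 0),
    the fraction the model flagged as positive (y_pred == 1).
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # true-negative cases per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Hypothetical records: (intersectional group, true outcome, model prediction)
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
rates = false_positive_rate_by_group(records)
print(rates)  # group B is flagged in error twice as often as group A
```

A gap between groups' false positive rates, as in this toy data, is the kind of disparity the protocol proposes to examine.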


Subjects
Inpatients, Psychiatry, Humans, Inpatients/psychology, Violence/prevention & control, Violence/psychology, Aggression/psychology, Anthropology, Cultural
2.
BMJ Health Care Inform; 29(1), 2022 Jan.
Article in English | MEDLINE | ID: mdl-35012941

ABSTRACT

OBJECTIVES: Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in artificial intelligence (eg, machine learning, ML). Yet claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic: studies either measure (mathematically) competing definitions of fairness and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature.

METHODS: We conducted an environmental scan of English-language literature on fairness from 1960 to 31 July 2021. The electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo).

RESULTS: Our synthesis identified 'Three Pillars for Fairness': transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare.

DISCUSSION: We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients.

CONCLUSION: We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
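The "competing definitions of fairness" the abstract alludes to can be made concrete with a small sketch. Two common criteria are demographic parity (groups are flagged positive at equal rates) and equal opportunity (groups' true positive rates are equal); the toy records below are hypothetical and chosen to show that a model can satisfy one criterion while violating the other, which is why such claims can be contradictory.

```python
def selection_rate(records, group):
    """Share of a group's members the model flags positive."""
    preds = [p for g, y, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(records, group):
    """Share of a group's actual positives the model catches."""
    pos = [p for g, y, p in records if g == group and y == 1]
    return sum(pos) / len(pos)

# Hypothetical records: (group, true outcome, model prediction)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

# Demographic parity compares selection rates across groups.
dp_gap = abs(selection_rate(records, "A") - selection_rate(records, "B"))
# Equal opportunity compares true positive rates across groups.
eo_gap = abs(true_positive_rate(records, "A") - true_positive_rate(records, "B"))
print(dp_gap, eo_gap)  # parity holds (gap 0.0) while opportunity does not (gap 0.5)
```

Here both groups are flagged at the same rate, yet group B's actual positives are caught half as often, illustrating why measuring one definition of fairness does not settle the question.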


Subjects
Health Equity, Artificial Intelligence, Delivery of Health Care, Humans, Machine Learning, Risk Assessment