1.
Camb Q Healthc Ethics; 1-14, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602092

ABSTRACT

The ongoing debate within neuroethics concerning the degree to which neuromodulation such as deep brain stimulation (DBS) changes the personality, identity, and agency (PIA) of patients has paid relatively little attention to the perspectives of prospective patients, and even less to pediatric populations. To understand patients' views about identity changes due to DBS for obsessive-compulsive disorder (OCD), the authors conducted and analyzed semistructured interviews with adolescent patients with OCD and their parents/caregivers. Patients were asked about the projected impacts of DBS on PIA generally. All patient respondents and half of the caregivers reported that DBS would impact patient self-identity in significant ways. For example, many patients expressed how DBS could positively impact identity by allowing them to explore their identities free from OCD. Others voiced concerns that DBS-related resolution of OCD might negatively impact patient agency and authenticity. Half of the patients expressed that DBS may facilitate social access by relieving symptoms, while half indicated that it could increase social stigma. These views give insights into how to approach decision-making and informed consent if DBS for OCD becomes available for adolescents. They also offer insights into adolescent experiences of disability identity and "normalcy" in the context of OCD.

2.
J Med Ethics; 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37979976

ABSTRACT

Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can lead to over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real-world settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily epistemic in nature, focused on the accuracy and validity of AI/ML estimates. Trust evaluations considered the nature, integrity and relevance of training data rather than the computational nature of algorithms themselves, suggesting a need to distinguish 'source' from 'functional' explainability. To a lesser extent, trust criteria were also relational (endorsement from others) and sometimes based on personal beliefs and experience. We discuss implications for promoting appropriate and responsible trust calibration for clinical decision-making using AI/ML.

5.
Front Hum Neurosci; 18: 1332451, 2024.
Article in English | MEDLINE | ID: mdl-38435745

ABSTRACT

Background: Artificial intelligence (AI)-based computer perception technologies (e.g., digital phenotyping and affective computing) promise to transform clinical approaches to personalized care in psychiatry and beyond by offering more objective measures of emotional states and behavior, enabling precision treatment, diagnosis, and symptom monitoring. At the same time, the passive and continuous manner in which they often collect data from patients in non-clinical settings raises ethical issues related to privacy and self-determination. Little is known about how such concerns may be exacerbated by the integration of neural data, as parallel advances in computer perception, AI, and neurotechnology enable new insights into subjective states. Here, we present findings from a multi-site NCATS-funded study of ethical considerations for translating computer perception into clinical care and contextualize them within the neuroethics and neurorights literatures.

Methods: We conducted qualitative interviews with patients (n = 20), caregivers (n = 20), clinicians (n = 12), developers (n = 12), and clinician-developers (n = 2) regarding their perspectives on using computer perception in clinical care. Transcripts were analyzed in MAXQDA using thematic content analysis.

Results: Stakeholder groups voiced concerns related to (1) the perceived invasiveness of passive and continuous data collection in private settings; (2) data protection and security, and the potential for unintended disclosure to have negative downstream or future impacts on patients; and (3) ethical issues related to patients' limited versus hyper-awareness of passive and continuous data collection and monitoring. Clinicians and developers highlighted that these concerns may be exacerbated by the integration of neural data with other computer perception data.

Discussion: Our findings suggest that the integration of neurotechnologies with existing computer perception technologies raises novel concerns around dignity-related and other harms (e.g., stigma, discrimination) that stem from data security threats and the growing potential for reidentification of sensitive data. Further, our findings suggest that patients' awareness of and preoccupation with feeling monitored via computer sensors range from hypo- to hyper-awareness, with either extreme accompanied by ethical concerns (consent vs. anxiety and preoccupation). These results highlight the need for systematic research into how best to integrate these technologies into clinical care in ways that reduce disruption, maximize patient benefits, and mitigate the long-term risks associated with the passive collection of sensitive emotional, behavioral, and neural data.
