Results 1 - 5 of 5
1.
Patient Educ Couns; 122: 108157, 2024 May.
Article in English | MEDLINE | ID: mdl-38290171

ABSTRACT

BACKGROUND: Personalized risk (PR) estimates may enhance clinical decision making and risk communication by providing individualized estimates of patient outcomes. We explored stakeholder attitudes toward the utility, acceptability, usefulness, and best practices for integrating PR estimates into patient education and decision making about a Left Ventricular Assist Device (LVAD).

METHODS AND RESULTS: As part of a 5-year multi-institutional AHRQ project, we conducted 40 interviews with stakeholders (physicians, nurse coordinators, patients, and caregivers), analyzed using Thematic Content Analysis. All stakeholder groups voiced positive views towards integrating PR into decision making. Patients, caregivers, and coordinators emphasized that PR can help patients better understand their condition and risks, prepare mentally and logistically for likely outcomes, and engage meaningfully in decision making. Physicians felt it can improve their decision making by enhancing insight into outcomes, supporting tailored pre-emptive care, increasing confidence in decisions, and reducing bias and subjectivity. All stakeholder groups also raised concerns about the accuracy, representativeness, and relevance of the algorithms; predictive uncertainty; utility relative to physicians' expertise; potential negative reactions among patients; and overreliance.

CONCLUSION: Stakeholders are optimistic about integrating PR into clinical decision making, but acceptability depends on prospectively demonstrating accuracy and relevance, and on evidence that the benefits of PR outweigh its potential negative impacts on decision-making quality.


Subjects
Heart-Assist Devices, Physicians, Humans, Decision Making, Patient Education as Topic, Attitude
2.
Stereotact Funct Neurosurg; 101(5): 301-313, 2023.
Article in English | MEDLINE | ID: mdl-37844562

ABSTRACT

INTRODUCTION: Pediatric deep brain stimulation (pDBS) is commonly used to manage treatment-resistant primary dystonias with favorable results, and is increasingly used for secondary dystonia to improve quality of life. There has been little systematic empirical neuroethics research to identify ethical challenges and potential solutions that would ensure responsible use of DBS in pediatric populations.

METHODS: Clinicians (n = 29) who care for minors with treatment-resistant dystonia were interviewed for their perspectives on the most pressing ethical issues in pDBS.

RESULTS: Using thematic content analysis to explore salient themes, clinicians identified four pressing concerns: (1) uncertainty about the risks and benefits of pDBS (22/29; 72%), which poses a challenge to informed decision making; (2) ethically navigating decision-making roles (15/29; 52%), including how best to integrate perspectives from diverse stakeholders (patient, caregiver, clinician) and how to manage surrogate decisions on behalf of pediatric patients with limited capacity to make autonomous decisions; (3) the effects of information scarcity on informed consent and decision quality (15/29; 52%), in the context of patients' and caregivers' expectations for treatment; and (4) narrow regulatory status and access (7/29; 24%), such as the lack of FDA-approved indications, which contributes to decision-making uncertainty and liability and may limit access to DBS among patients who could benefit from it.

CONCLUSION: These results suggest that clinicians are primarily concerned about the ethical limitations of making difficult decisions in the absence of informational, regulatory, and financial supports. We discuss two solutions already underway: supported decision-making to address uncertainty, and further data sharing to enhance clinical knowledge and discovery.


Subjects
Deep Brain Stimulation, Dystonia, Dystonic Disorders, Humans, Child, Quality of Life, Dystonic Disorders/therapy, Informed Consent
3.
NPJ Digit Med; 5(1): 197, 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36577851

ABSTRACT

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has been paid to potential bias among AI/ML's human users, or to the factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases in the use of AI/ML tools, and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, that nudge users towards more critical and reflective decision making with AI/ML.

4.
Perspect Biol Med; 65(4): 672-679, 2022.
Article in English | MEDLINE | ID: mdl-36468396

ABSTRACT

Bioethicists today are taking a greater role in the design and implementation of emerging technologies by "embedding" within development teams and providing direct guidance and recommendations. Ideally, these collaborations allow ethical considerations to be addressed in an active, iterative, and ongoing process through regular exchanges between ethicists and members of the technological development team. This article discusses a challenge to this embedded ethics approach: namely, that bioethical guidance, even if embraced by the development team in theory, is not easily actionable in situ. Many of the ethical problems at issue in emerging technologies are associated with preexisting structural, socioeconomic, and political factors, making compliance with ethical recommendations sometimes less a matter of choice and more a matter of feasibility. Moreover, the incentive structures within these systemic factors sustain them against reform efforts. The authors recommend that embedded bioethicists use principles from behavioral science (such as behavioral economics) to better understand and account for these incentive structures, so as to encourage the ethically responsible uptake of technological innovations.


Subjects
Behavioral Sciences, Bioethics, Humans, Ethicists, Morals
5.
J Law Med Ethics; 50(1): 92-100, 2022.
Article in English | MEDLINE | ID: mdl-35243993

ABSTRACT

When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns, as algorithms trained on data from majority populations can generate less accurate or less reliable results for minorities and other disadvantaged groups.


Subjects
Artificial Intelligence, Racism, Humans, Machine Learning
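
The subgroup-performance concern summarized in this last entry can be made concrete with a small, purely illustrative sketch (the data, group labels, and coefficients below are invented for illustration, not drawn from the cited article): a single model trained on a cohort dominated by one group can score well overall while performing much worse for an under-represented group, which is why per-group evaluation is a basic audit step.

```python
# Illustrative subgroup-accuracy audit with simulated data.
# All group labels, sample sizes, and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated cohort: 90% "majority", 10% "minority". The outcome depends on the
# features differently in each group, so a model fit mostly on majority data
# transfers poorly to the minority group.
n = 5000
group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 3))
coefs = np.where(group[:, None] == "majority", [1.0, -1.0, 0.5], [-0.5, 1.0, -1.0])
y = ((X * coefs).sum(axis=1) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print(f"overall accuracy = {accuracy_score(y, pred):.2f}")
for g in ["majority", "minority"]:
    mask = group == g
    print(f"{g}: accuracy = {accuracy_score(y[mask], pred[mask]):.2f}")
```

Running the sketch shows a high overall accuracy driven by the majority group alongside a much lower accuracy for the minority group, which is the kind of disparity the abstract warns about.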