Results 1 - 20 of 305
8.
BMC Med Inform Decis Mak ; 24(1): 247, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232725

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives when using AI in cardiovascular care, with the aim of informing practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine. METHODS: In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search was completed on May 24, 2022, and was not limited by date or study design. RESULTS: After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%).
Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence on the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversight. CONCLUSION: This review revealed key ethical concerns as well as barriers and facilitators of trust in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates mitigation strategies that focus on enhancing regulatory oversight of the use of patient data and on promoting transparency around the use of AI in patient care.


Subjects
Artificial Intelligence, Cardiovascular Diseases, Trust, Humans, Artificial Intelligence/ethics, Cardiovascular Diseases/therapy
10.
JMIR Ment Health ; 11: e58493, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298759

ABSTRACT

This article contends that the responsible artificial intelligence (AI) approach, the dominant ethics approach underlying most regulatory and ethical guidance, falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of accountability and responsibility among companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI's impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article examines the emergence of a new "therapeutic" area facilitated by AI-based bots, which operate without a therapist. It highlights the difficulties involved, chiefly the absence of a defined duty of care toward users, and shows how implementing the ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations, grounded in the ethics of care, for the developmental process of AI-powered therapeutic tools.


Subjects
Artificial Intelligence, Artificial Intelligence/ethics, Humans, Mental Health Services/ethics, Mental Health Services/legislation & jurisprudence, Mental Health/ethics
11.
Inquiry ; 61: 469580241266364, 2024.
Article in English | MEDLINE | ID: mdl-39290068

ABSTRACT

The increasing integration of artificial intelligence (AI) in the medical domain signifies a transformative era in healthcare, with promises of improved diagnostics, treatment, and patient outcomes. However, this rapid technological progress brings a concomitant surge in ethical challenges permeating medical education. This paper explores the crucial role of medical educators in adapting to these changes, ensuring that ethical education remains a central and adaptable component of medical curricula. Medical educators must evolve alongside AI's advancements, becoming stewards of ethical consciousness in an era where algorithms and data-driven decision-making play pivotal roles in patient care. The traditional paradigm of medical education, rooted in foundational ethical principles, must adapt to incorporate the complex ethical considerations introduced by AI. An adaptive pedagogical approach fosters dynamic engagement, cultivating a profound ethical awareness among students. It empowers them to critically assess the ethical implications of AI applications in healthcare, including issues related to data privacy, informed consent, algorithmic biases, and technology-mediated patient care. Moreover, the interdisciplinary nature of AI's ethical challenges necessitates collaboration with fields such as computer science, data ethics, law, and the social sciences to provide a holistic understanding of the ethical landscape.


Subjects
Artificial Intelligence, Education, Medical, Informed Consent, Personal Autonomy, Artificial Intelligence/ethics, Humans, Informed Consent/ethics, Curriculum, Decision Making/ethics
14.
Sci Eng Ethics ; 30(5): 43, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39259362

ABSTRACT

Machine unlearning (MU) is often analyzed in terms of how it can facilitate the "right to be forgotten." In this commentary, we show that MU can support the OECD's five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.


Subjects
Artificial Intelligence, Trust, Humans, Artificial Intelligence/ethics, Machine Learning/ethics, Learning
15.
JAMA Netw Open ; 7(9): e2432482, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39240560

ABSTRACT

Importance: Safe integration of artificial intelligence (AI) into clinical settings often requires randomized clinical trials (RCTs) to compare AI efficacy with conventional care. Diabetic retinopathy (DR) screening is at the forefront of clinical AI applications, marked by the first US Food and Drug Administration (FDA) De Novo authorization for an autonomous AI for such use. Objective: To determine the generalizability of the 7 ethical research principles for clinical trials endorsed by the National Institutes of Health (NIH) and to identify ethical concerns unique to clinical trials of AI. Design, Setting, and Participants: This qualitative study included semistructured interviews conducted with 11 investigators engaged in the design and implementation of clinical trials of AI for DR screening from November 11, 2022, to February 20, 2023. The study was a collaboration with the ACCESS (AI for Children's Diabetic Eye Exams) trial, the first clinical trial of autonomous AI in pediatrics. Participant recruitment initially used purposeful sampling and later expanded with snowball sampling. The analysis combined a deductive approach to explore investigators' perspectives on the 7 ethical principles for clinical research endorsed by the NIH and an inductive approach to uncover the broader ethical considerations of implementing clinical trials of AI within care delivery. Results: A total of 11 participants (mean [SD] age, 47.5 [12.0] years; 7 male [64%], 4 female [36%]; 3 Asian [27%], 8 White [73%]) were included, with diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, and AI development. Key themes revealed several ethical challenges unique to clinical trials of AI.
These themes included difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across various patient subgroups, and addressing the complexities inherent in the data use terms of informed consent. Conclusions and Relevance: This qualitative study identified practical ethical challenges that investigators need to consider and negotiate when conducting AI clinical trials, exemplified by the DR screening use case. These considerations call for further guidance on where to focus empirical and normative ethical efforts to best support the conduct of clinical trials of AI and minimize unintended harm to trial participants.


Subjects
Artificial Intelligence, Clinical Trials as Topic, Diabetic Retinopathy, Humans, Artificial Intelligence/ethics, Diabetic Retinopathy/diagnosis, Clinical Trials as Topic/ethics, Female, Qualitative Research, Research Design, Male, United States
16.
Nature ; 633(8028): 147-154, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39198640

ABSTRACT

Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements about groups such as African Americans biased in problematic ways [4-7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models' overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.


Subjects
Artificial Intelligence, Black or African American, Decision Making, Language, Natural Language Processing, Racism, Stereotyping, Artificial Intelligence/ethics, Black or African American/ethnology, Decision Making/ethics, Racism/ethnology, Racism/prevention & control
17.
Ugeskr Laeger ; 186(28)2024 Jul 08.
Article in Danish | MEDLINE | ID: mdl-39115229

ABSTRACT

Artificial intelligence (AI) holds promise for improving diagnostics and treatment. Likewise, AI is anticipated to mitigate the impact of staff shortages in the healthcare sector. However, realising the expectations placed on AI requires a substantial effort involving patients and clinical domain experts. Against this backdrop, this review examines ethical challenges related to the development and implementation of AI in healthcare. Furthermore, we introduce and discuss various approaches, guidelines, and standards that proactively aim to address these ethical challenges.


Subjects
Artificial Intelligence, Delivery of Health Care, Artificial Intelligence/ethics, Humans, Delivery of Health Care/ethics
20.
BMC Neurosci ; 25(1): 41, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210267

ABSTRACT

The scientific relationship between neuroscience and artificial intelligence is generally acknowledged, and the role that their long history of collaboration has played in advancing both fields is often emphasized. Beyond the important scientific insights provided by their collaborative development, both neuroscience and AI raise a number of ethical issues that are generally explored by neuroethics and AI ethics. Neuroethics and AI ethics have been gaining prominence in the last few decades, and they are typically pursued by different research communities. However, considering the evolving landscape of AI-assisted neurotechnologies and the various conceptual and practical intersections between AI and neuroscience (such as the increasing application of AI in neuroscientific research, healthcare for neurological and mental disorders, and the use of neuroscientific knowledge as inspiration for AI), some scholars are now calling for a collaborative relationship between these two domains. This article explores how a collaborative relationship between neuroethics and AI ethics could stimulate theoretical and, ideally, governance efforts. First, we offer some reasons for calling for collaboration between the ethical reflection on neuroscientific innovations and that on AI. Next, we explore some dimensions that we think could be enhanced by cross-fertilization between these two subfields of ethics. We believe that, given the pace and increasing fusion of neuroscience and AI in the development of innovations, broad and underspecified calls for responsibility that do not consider insights from different ethics subfields will only be partially successful in promoting meaningful changes in both research and applications.


Subjects
Artificial Intelligence, Neurosciences, Artificial Intelligence/ethics, Neurosciences/ethics, Humans, Cooperative Behavior