Results 1 - 20 of 94
1.
Asian Bioeth Rev ; 16(3): 513-526, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39022373

ABSTRACT

Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. AI can be both a blessing and a curse, and potentially a double-edged sword if not carefully wielded. While it holds massive potential benefits for humans, particularly in healthcare, where it can assist in the treatment of diseases, in surgery, and in record keeping, and can ease the lives of both patients and doctors, its misuse carries the potential for harm through bias, unemployment, breaches of privacy, and lack of accountability, to mention a few. In this article, we discuss the fourth industrial revolution through a focus on the core of this phenomenon, artificial intelligence. We outline what the fourth industrial revolution is, its basis in AI, and how it infiltrates human lives and society, akin to a transcendence. We focus on the potential dangers of AI and the ethical concerns it raises, particularly in developing countries in general and conflict zones in particular, and we offer potential solutions to these dangers. While we acknowledge the importance and potential of AI, we also call for cautious reservations before plunging straight into the exciting world of the future, one we have long heard of only in science fiction movies.

2.
Asian Bioeth Rev ; 16(3): 315-344, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39022380

ABSTRACT

The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted numerous studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion in order to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of the various stakeholders involved in the AI lifecycle.

4.
Ann Biomed Eng ; 52(9): 2319-2324, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38977530

ABSTRACT

AI shaming refers to the practice of criticizing or looking down on individuals or organizations for using AI to generate content or perform tasks. AI shaming has emerged as a recent phenomenon in academia. This paper examines the characteristics, causes, and effects of AI shaming on academic writers and researchers. AI shaming often involves dismissing the validity or authenticity of AI-assisted work, suggesting that using AI is deceitful, lazy, or less valuable than human-only efforts. The paper identifies various profiles of individuals who engage in AI shaming, including traditionalists, technophobes, and elitists, and explores their motivations. The effects of AI shaming are multifaceted, ranging from inhibited technology adoption and stifled innovation to increased stress among researchers and missed opportunities for efficiency. These consequences may hinder academic progress and limit the potential benefits of AI in research and scholarship. Despite these challenges, the paper argues that academic writers and researchers should not be ashamed of using AI when done responsibly and ethically. By embracing AI as a tool to augment human capabilities and by being transparent about its use, academic writers and researchers can lead the way in demonstrating responsible AI integration.


Subjects
Artificial Intelligence, Researchers, Humans, Social Stigma
5.
Theor Med Bioeth ; 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38850486

ABSTRACT

With the growing application of machine learning models in medicine, it has been argued that principlist bioethics needs revision. This paper reflects on the dominant trope in AI ethics of adding a new 'principle of explicability' alongside the traditional four principles of bioethics that make up the theory of principlism. It suggests that the four principles are sufficient and challenges the relevance of explicability as a separate ethical principle by emphasizing the coherentist affinity of principlism. We argue that, through specification, the properties of explicability are already covered by the four bioethical principles. The paper finishes by anticipating the objection that coherent principles cannot facilitate technology-induced change and are not well suited to tackling moral differences.

6.
Sci Eng Ethics ; 30(3): 24, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833207

ABSTRACT

While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines, as a form of written language, can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some form of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and the resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.


Subjects
Artificial Intelligence, Delivery of Health Care, Guidelines as Topic, Trust, Artificial Intelligence/ethics, Humans, Delivery of Health Care/ethics, Morals
7.
JMIR AI ; 3: e51204, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38875585

ABSTRACT

BACKGROUND: The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. However, alongside these advancements, there is growing concern about ethical considerations, such as bias, informed consent, and trust in the development of these technologies. OBJECTIVE: This study aims to assess the role of ethics in the development of AI-based applications in medicine. Furthermore, this study focuses on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians. METHODS: Qualitative content analysis was used to analyze the responses from expert interviews. Experts were selected based on their involvement in the research or practical development of AI-based applications in medicine for at least 5 years, leading to the inclusion of 7 experts in the study. RESULTS: The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development. This variance underscores the subjectivity and complexity of integrating ethics into the development of AI in medicine. Although some experts view ethics as fundamental, others prioritize performance and efficiency, with some perceiving ethics as potential obstacles to technological progress. This dichotomy of perspectives clearly emphasizes the subjectivity and complexity surrounding the role of ethics in AI development, reflecting the inherent multifaceted nature of this issue. CONCLUSIONS: Despite the methodological limitations impacting the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical considerations in AI development for medical applications. 
It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.

8.
Sci Eng Ethics ; 30(3): 22, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801621

ABSTRACT

Health Recommender Systems (HRS) are promising Artificial-Intelligence-based tools for promoting healthy lifestyles and therapy adherence in healthcare and medicine. Active aging (AA) is among the areas they most commonly support. However, current HRS supporting AA raise ethical challenges that still need to be properly formalized and explored. This study proposes to rethink HRS for AA through an autonomy-based ethical analysis. In particular, a brief overview of the technical aspects of HRS allows us to shed light on the ethical risks and challenges they might raise for individuals' well-being as they age. Moreover, the study proposes a categorization, an understanding, and possible preventive/mitigation actions for the elicited risks and challenges by rethinking the AI ethics core principle of autonomy. Finally, elaborating on autonomy-related ethical theories, the paper proposes an autonomy-based ethical framework and shows how it can foster the development of autonomy-enabling HRS for AA.


Subjects
Aging, Ethical Analysis, Personal Autonomy, Humans, Aging/ethics, Artificial Intelligence/ethics, Ethical Theory, Healthy Lifestyle, Delivery of Health Care/ethics, Healthy Aging/ethics
9.
Heliyon ; 10(9): e30696, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38737255

ABSTRACT

Despite the wave of enthusiasm for the role of Artificial Intelligence (AI) in reshaping education, critical voices urge a more tempered approach. This study investigates the less-discussed 'shadows' of AI implementation in educational settings, focusing on potential negatives that may accompany its integration. Through a multi-phased exploration consisting of content analysis and survey research, the study develops and validates a theoretical model that pinpoints several areas of concern. The initial phase, a systematic literature review, yielded 56 relevant studies from which the model was crafted. The subsequent survey with 260 participants from a Saudi Arabian university aimed to validate the model. Findings confirm concerns about human connection, data privacy and security, algorithmic bias, transparency, critical thinking, access equity, ethical issues, teacher development, reliability, and the consequences of AI-generated content. They also highlight correlations between various AI-associated concerns, suggesting intertwined consequences rather than isolated issues. For instance, enhancements in AI transparency could simultaneously support teacher professional development and foster better student outcomes. Furthermore, the study acknowledges the transformative potential of AI but cautions against its unexamined adoption in education. It advocates for comprehensive strategies to maintain human connections, ensure data privacy and security, mitigate biases, enhance system transparency, foster creativity, reduce access disparities, emphasize ethics, prepare teachers, ensure system reliability, and regulate AI-generated content. Such strategies underscore the need for holistic policymaking to leverage AI's benefits while safeguarding against its disadvantages.

10.
Front Psychol ; 15: 1382693, 2024.
Article in English | MEDLINE | ID: mdl-38694439

ABSTRACT

The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point out the foundational requirements for building trustworthy AI and provide pivotal guidance for its development that also involves communication, education, and training for users. We conclude by discussing how the insights in trust research can help enhance AI's trustworthiness and foster its adoption and application.

11.
J Prosthodont ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38655727

ABSTRACT

PURPOSE: Smile design software increasingly relies on artificial intelligence (AI). However, using AI for smile design raises numerous technical and ethical concerns. This study aimed to evaluate these ethical issues. METHODS: An international consortium of experts specialized in AI, dentistry, and smile design was engaged to emulate and assess the ethical challenges raised by the use of AI for smile design. An e-Delphi protocol was used to seek the agreement of the ITU-WHO group on well-established ethical principles regarding the use of AI (wellness, respect for autonomy, privacy protection, solidarity, governance, equity, diversity, expertise/prudence, accountability/responsibility, sustainability, and transparency). Each principle included examples of ethical challenges that users might encounter when using AI for smile design. RESULTS: In the first round of the e-Delphi exercise, participants agreed that seven items should be considered in smile design (diversity, transparency, wellness, privacy protection, prudence, law and governance, and sustainable development), but the remaining four items (equity, accountability and responsibility, solidarity, and respect for autonomy) were rejected and had to be reformulated. After a second round, participants agreed on all items that should be considered when using AI for smile design. CONCLUSIONS: AI development and deployment for smile design should abide by the ethical principles of wellness, respect for autonomy, privacy protection, solidarity, governance, equity, diversity, expertise/prudence, accountability/responsibility, sustainability, and transparency.

12.
Artif Intell Med ; 152: 102873, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38643592

ABSTRACT

The COVID-19 pandemic has given rise to a broad range of research from fields alongside and beyond the core concerns of infectiology, epidemiology, and immunology. One significant subset of this work centers on machine learning-based approaches to supporting medical decision-making around COVID-19 diagnosis. To date, various challenges, including IT issues, have meant that, notwithstanding this strand of research on digital diagnosis of COVID-19, the actual use of these methods in medical facilities remains incipient at best, despite their potential to relieve pressure on scarce medical resources, prevent instances of infection, and help manage the difficulties and unpredictabilities surrounding the emergence of new mutations. The reasons behind this research-application gap are manifold and may imply an interdisciplinary dimension. We argue that the discipline of AI ethics can provide a framework for interdisciplinary discussion and create a roadmap for the application of digital COVID-19 diagnosis, taking into account all disciplinary stakeholders involved. This article proposes such an ethical framework for the practical use of digital COVID-19 diagnosis, considering legal, medical, operational managerial, and technological aspects of the issue in accordance with our diverse research backgrounds and noting the potential of the approach we set out here to guide future research.


Subjects
Artificial Intelligence, COVID-19, COVID-19/diagnosis, Humans, Artificial Intelligence/ethics, SARS-CoV-2, Machine Learning/ethics, Computer-Assisted Diagnosis/ethics, Pandemics
13.
Heliyon ; 10(7): e29048, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601681

ABSTRACT

Current studies on artificial intelligence (AI) ethics focus either on very broad guidelines or on a very specific domain. As a result, their outcomes can hardly be converted into actionable measures or transferred to other domains. Potential correlations between various cases of AI ethics at different granularity levels remain unexplored. To overcome these deficiencies, the authors designed a case-oriented ontological model (COOM) and a hyper-knowledge graph system (HKGS) for the study of collected AI ethics cases. COOM describes criteria for modelling cases by attributes from three perspectives: event attributes, relational attributes, and positional attributes on the value chain. Building on it, HKGS stores the correlations between cases as knowledge and allows advanced visual analysis. The correlations between cases and their dynamic changes along the value chain can be observed and explored. In the implementation part, one of the collected ethics cases is used as an example to demonstrate how to generate a hyper-knowledge graph and analyze it visually. The authors also anticipate how different practitioners of AI ethics can achieve the desired outputs from HKGS in their diverse scenarios.
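The case-attribute-correlation idea described in this abstract can be sketched in a few lines. Note that this is a hypothetical reconstruction for illustration only: the class and function names, the attribute sets, and the rule "shared attribute values induce a correlation edge" are assumptions, not the authors' actual COOM/HKGS schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not the authors' schema): each ethics case carries
# event, relational, and value-chain attributes, and any shared attribute
# value between two cases induces a correlation edge in the graph.

@dataclass
class EthicsCase:
    case_id: str
    event: set[str] = field(default_factory=set)        # e.g. "bias"
    relations: set[str] = field(default_factory=set)     # e.g. "vendor-hospital"
    value_chain: set[str] = field(default_factory=set)   # e.g. "deployment"

def correlation_edges(cases: list[EthicsCase]) -> list[tuple[str, str, set[str]]]:
    """Link every pair of cases that shares at least one attribute value."""
    edges = []
    for i, a in enumerate(cases):
        for b in cases[i + 1:]:
            shared = ((a.event & b.event)
                      | (a.relations & b.relations)
                      | (a.value_chain & b.value_chain))
            if shared:
                edges.append((a.case_id, b.case_id, shared))
    return edges

cases = [
    EthicsCase("c1", event={"bias"}, value_chain={"deployment"}),
    EthicsCase("c2", event={"bias"}, value_chain={"training"}),
    EthicsCase("c3", event={"privacy breach"}, value_chain={"deployment"}),
]
print(correlation_edges(cases))
# c1-c2 correlate via "bias"; c1-c3 via "deployment"; c2-c3 share nothing
```

A real hyper-knowledge graph would of course attach far richer semantics to nodes and edges; the sketch only shows how attribute overlap can turn a flat case collection into an explorable graph.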

14.
J Cancer Educ ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652432

ABSTRACT

This commentary evaluates the use of machine translation for multilingual patient education in oncology. It critically examines the balance between technological benefits in language accessibility and the potential for increasing healthcare disparities. The analysis emphasizes the need for a multidisciplinary approach to translation that incorporates linguistic accuracy, medical clarity, and cultural relevance. Additionally, it highlights the ethical considerations of digital literacy and access, underscoring the importance of equitable patient education. This contribution seeks to advance the discussion on the thoughtful integration of technology in healthcare communication, focusing on maintaining high standards of equity, quality, and patient care.

15.
Ther Innov Regul Sci ; 58(3): 456-464, 2024 May.
Article in English | MEDLINE | ID: mdl-38528278

ABSTRACT

Artificial intelligence (AI)-enabled technologies in the MedTech sector hold the promise to transform healthcare delivery by improving access, quality, and outcomes. As the regulatory contours of these technologies are being defined, there is a notable lack of literature on the key stakeholders such as the organizations and interest groups that have a significant input in shaping the regulatory framework. This article explores the perspectives and contributions of these stakeholders in shaping the regulatory paradigm of AI-enabled medical technologies. The formation of an AI regulatory framework requires the convergence of ethical, regulatory, technical, societal, and practical considerations. These multiple perspectives contribute to the various dimensions of an evolving regulatory paradigm. From the global governance guidelines set by the World Health Organization (WHO) to national regulations, the article sheds light not just on these multiple perspectives but also on their interconnectedness in shaping the regulatory landscape of AI.


Subjects
Artificial Intelligence, Humans, Delivery of Health Care, Biomedical Technology/legislation & jurisprudence, World Health Organization
16.
Front Digit Health ; 6: 1267290, 2024.
Article in English | MEDLINE | ID: mdl-38455991

ABSTRACT

Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products of the IIb risk category in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question with either 0, 0.5, or 1, to rate whether the required information was "unavailable", "partially available", or "fully available". The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects like consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, public documentation of authorized medical AI products in Europe lacks sufficient public transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
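The scoring scheme described in the abstract (each of 55 questions rated 0, 0.5, or 1, with transparency expressed relative to all questions) can be sketched as a small helper. This is a hedged reconstruction: the function name, the input validation, and the rounding are assumptions for illustration, not taken from the study.

```python
# Sketch of the transparency scoring described above (assumed details):
# each of the 55 survey questions is rated 0 ("unavailable"),
# 0.5 ("partially available"), or 1 ("fully available"), and a product's
# transparency score is the sum of ratings relative to all questions.

ALLOWED = {0.0, 0.5, 1.0}

def transparency_score(ratings: list[float], n_questions: int = 55) -> float:
    """Return one product's transparency score as a percentage."""
    if len(ratings) != n_questions:
        raise ValueError(f"expected {n_questions} ratings, got {len(ratings)}")
    if any(r not in ALLOWED for r in ratings):
        raise ValueError("each rating must be 0, 0.5, or 1")
    return round(100 * sum(ratings) / n_questions, 1)

# A product with every question "partially available" scores 50%.
print(transparency_score([0.5] * 55))  # -> 50.0
```

Under this reading, the reported range of 6.4% to 60.9% corresponds to sums of roughly 3.5 to 33.5 points out of 55.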

17.
Public Underst Sci ; 33(5): 654-672, 2024 07.
Article in English | MEDLINE | ID: mdl-38326971

ABSTRACT

The governance of artificial intelligence (AI) is an urgent challenge that requires actions from three interdependent stakeholders: individual citizens, technology corporations, and governments. We conducted an online survey (N = 525) of US adults to examine their beliefs about the governance responsibility of these stakeholders as a function of trust and AI ethics. Different dimensions of trust and different ethical concerns were associated with beliefs in governance responsibility of the three stakeholders. Specifically, belief in the governance responsibility of the government was associated with ethical concerns about AI, whereas belief in governance responsibility of corporations was related to both ethical concerns and trust in AI. Belief in governance responsibility of individuals was related to human-centered values of trust in AI and fairness. Overall, the findings point to the need for an interdependent framework in which citizens, corporations, and governments share governance responsibilities, guided by trust and ethics as the guardrails.


Subjects
Artificial Intelligence, Government, Public Opinion, Trust, Artificial Intelligence/ethics, United States, Humans, Adult, Female, Surveys and Questionnaires, Male, Social Responsibility, Middle Aged
18.
JMIR Med Educ ; 10: e55368, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38285931

ABSTRACT

The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper aims to introduce a comprehensive set of ethical principles for teaching AI ethics in medical education. This dynamic and principle-based approach is designed to be adaptive and comprehensive, addressing not only the current but also emerging ethical challenges associated with the use of AI in medicine. This study conducts a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of these anticipated challenges are illustrated through a focused discussion on "informed consent" in the context of AI in medicine and medical education. This paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics-autonomy, beneficence, nonmaleficence, and justice-and extending them by integrating 3 public health ethics principles-efficiency, common good orientation, and proportionality. The principle-based approach to teaching AI ethics in medical education proposed in this study offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, recommended in the current academic discourse. 
By incorporating the 3 principles of public health ethics, this principle-based approach ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach for teaching AI ethics in medical education provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for the proposed principle-based approach to teaching AI ethics in medical education.


Subjects
Artificial Intelligence, Medical Education, Humans, Medical Ethics, Informed Consent, Beneficence
19.
JMIR Med Educ ; 10: e51247, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38180787

ABSTRACT

BACKGROUND: The use of artificial intelligence (AI) in medicine not only directly impacts the medical profession but is also increasingly associated with various potential ethical aspects. In addition, the expanding use of AI and AI-based applications such as ChatGPT demands a corresponding shift in medical education to adequately prepare future practitioners for the effective use of these tools and address the associated ethical challenges they present. OBJECTIVE: This study aims to explore how medical students from Germany, Austria, and Switzerland perceive the use of AI in medicine and the teaching of AI and AI ethics in medical education in accordance with their use of AI-based chat applications, such as ChatGPT. METHODS: This cross-sectional study, conducted from June 15 to July 15, 2023, surveyed medical students across Germany, Austria, and Switzerland using a web-based survey. This study aimed to assess students' perceptions of AI in medicine and the integration of AI and AI ethics into medical education. The survey, which included 53 items across 6 sections, was developed and pretested. Data analysis used descriptive statistics (median, mode, IQR, total number, and percentages) and either the chi-square or Mann-Whitney U tests, as appropriate. RESULTS: Surveying 487 medical students across Germany, Austria, and Switzerland revealed limited formal education on AI or AI ethics within medical curricula, although 38.8% (189/487) had prior experience with AI-based chat applications, such as ChatGPT. Despite varied prior exposures, 71.7% (349/487) anticipated a positive impact of AI on medicine. There was widespread consensus (385/487, 74.9%) on the need for AI and AI ethics instruction in medical education, although the current offerings were deemed inadequate. Regarding the AI ethics education content, all proposed topics were rated as highly relevant. 
CONCLUSIONS: This study revealed a pronounced discrepancy between the use of AI-based (chat) applications, such as ChatGPT, among medical students in Germany, Austria, and Switzerland and the teaching of AI in medical education. To adequately prepare future medical professionals, there is an urgent need to integrate the teaching of AI and AI ethics into the medical curricula.


Subjects
Medicine, Medical Students, Humans, Cross-Sectional Studies, Artificial Intelligence, Educational Status, Choline O-Acetyltransferase
20.
Trends Plant Sci ; 29(2): 104-107, 2024 02.
Article in English | MEDLINE | ID: mdl-38199829

ABSTRACT

The swiftness of artificial intelligence (AI) progress in plant science begets relevant ethical questions with significant scientific and societal implications. Embracing a principled approach to regulation, ethics review and monitoring, and human-centric interpretable informed AI (HIAI), we can begin to navigate our voyage towards ethical and socially responsible AI.


Subjects
Artificial Intelligence, Artificial Intelligence/ethics, Plants