Results 1 - 5 of 5
1.
Sci Eng Ethics ; 29(3): 21, 2023 May 26.
Article in English | MEDLINE | ID: mdl-37237246

ABSTRACT

Critics currently argue that applied-ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory-practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches to AI ethics translate ethics into practice. To that end, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these approaches by asking how they understand and conceptualize theory and practice. We outline their conceptual strengths as well as their shortcomings: the embedded ethics approach is context-oriented but risks being biased by that context; ethically aligned approaches are principles-oriented but lack justification theories for dealing with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs to be linked to political, legal, and social governance aspects. Against this background, we develop a meta-framework for applied AI ethics conceptions with three dimensions. Drawing on critical theory, we suggest these dimensions as starting points for critically reflecting on the conceptualization of theory and practice. We claim, first, that including the dimension of affects and emotions in the ethical decision-making process stimulates reflection on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict.
Third, we argue that reflecting on the governance dimension in ethical decision-making is important for revealing power structures and for realizing ethical AI and its application, because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory-practice conceptualizations within AI ethics approaches in order to address and overcome their blind spots.


Subjects
Artificial Intelligence, Emotions, Ethical Theory
3.
JMIR Pediatr Parent ; 6: e50765, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38109377

ABSTRACT

Background: Although digital maternity records (DMRs) have been evaluated in the past, no previous work has investigated their usability or acceptance through an observational usability study. Objective: The primary objective was to assess the usability and perception of a DMR smartphone app for pregnant women. The secondary objective was to assess personal preferences and habits during pregnancy related to online information searching, wearable data presentation and interpretation, at-home examination, and sharing data for research purposes. Methods: A DMR smartphone app was developed. Key features such as wearable device integration, study functionalities (eg, questionnaires), and common pregnancy app functionalities (eg, mood tracker) were included. Women who had previously given birth were invited to participate. Participants completed 10 tasks while thinking aloud. Sessions were conducted via Zoom. Video, audio, and the shared screen were recorded for analysis. Task completion times, task success, errors, and self-reported (free text) feedback were evaluated. Usability was measured through the System Usability Scale (SUS) and User Experience Questionnaire (UEQ). Semistructured interviews were conducted to explore the secondary objective. Results: A total of 11 participants (mean age 34.6, SD 2.2 years) were included in the study. A mean SUS score of 79.09 (SD 18.38) was achieved. The app was rated "above average" in 4 of 6 UEQ categories. Sixteen unique features were requested. We found that 5 of 11 participants would only use wearables during pregnancy if requested to by their physician, while 10 of 11 stated they would share their data for research purposes. Conclusions: Pregnant women rely on their medical caregivers for advice, including on the use of mobile and ubiquitous health technology. Clear benefits must be communicated when issuing wearable devices to pregnant women.
Participants who had experienced pregnancy complications in the past were overall more open to the use of wearable devices during pregnancy. Pregnant women hold differing opinions about access to, interpretation of, and reactions to alerts based on wearable data. Future work should investigate personalized concepts covering these aspects.
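For context on the SUS score reported above (mean 79.09): the System Usability Scale aggregates ten 1-5 Likert items into a 0-100 score, where odd-numbered items are positively worded and even-numbered items negatively worded. A minimal sketch of the standard scoring formula (the function name is illustrative, not from the study):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses, each in the range 1-5.

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items (2nd, 4th, ...) contribute (5 - response).
    The summed contributions are scaled by 2.5 to yield 0-100.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires exactly ten responses, each 1-5")
    contributions = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return contributions * 2.5
```

For example, neutral answers (all 3s) yield a score of 50, and the most favorable pattern (5 on odd items, 1 on even items) yields 100.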

4.
AI Ethics ; 2(4): 747-761, 2022.
Article in English | MEDLINE | ID: mdl-35098247

ABSTRACT

Good decision-making is a complex endeavor, particularly so in a health context. The possibilities that AI-driven clinical decision support systems (AI-CDSS) open up for day-to-day clinical practice give rise to fundamental questions about responsibility. The application of AI-CDSS challenges existing attributions of responsibility in causal, moral, and legal terms. In this context, responsibility gaps are often identified as the main problem. Mapping out the changing dynamics and levels of attributing responsibility, we argue in this article that the application of AI-CDSS causes diffusions of responsibility along causal, moral, and legal dimensions. Responsibility diffusion describes the situation in which multiple options and several agents can be considered for attributing responsibility. Using the example of an AI-driven 'digital tumor board', we illustrate how clinical decision-making changes and diffusions of responsibility take place. Rather than denying or attempting to bridge responsibility gaps, we argue that dynamics and ambivalences are inherent in responsibility: it rests on normative considerations, such as avoiding experiences of disregard and protecting the vulnerability of human life, that are inherently accompanied by a moment of uncertainty, and it is characterized by openness to revision. Against this background, and to avoid responsibility gaps, the article concludes with suggestions for managing responsibility diffusions in clinical decision-making with AI-CDSS.

5.
Hastings Cent Rep ; 51(3): 17-22, 2021 May.
Article in English | MEDLINE | ID: mdl-33606288

ABSTRACT

Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High-Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI that does not conceive of trust merely as an accelerator for societal acceptance of AI technologies. Instead, we argue, trust is granted through leaps of faith. For this reason, trust remains precarious, fragile, and resistant to promotion through formulaic approaches. We also highlight the significance of distrust in societal deliberation, as it is relevant to trust in various and intricate ways. Among the fruitful aspects of distrust is that it enables individuals to forgo technology if desired, to constrain its power, and to exercise meaningful human control.


Subjects
Artificial Intelligence, Trust, Altruism, Humans