Results 1 - 20 of 94
1.
Sci Eng Ethics ; 30(4): 27, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888795

ABSTRACT

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.


Subject(s)
Artificial Intelligence, Decision Making, Social Responsibility, Humans, Artificial Intelligence/ethics, Decision Making/ethics, Decision Support Techniques, Judgment, Machine Learning/ethics, Ownership, Robotics/ethics
2.
J Med Internet Res ; 26: e48126, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38888953

ABSTRACT

BACKGROUND: Technological advances in robotics, artificial intelligence, cognitive algorithms, and internet-based coaches have contributed to the development of devices capable of responding to some of the challenges resulting from demographic aging. Numerous studies have explored the use of robotic coaching solutions (RCSs) for supporting healthy behaviors in older adults and have shown their benefits for the quality of life and functional independence of older adults at home. However, the use of RCSs by potentially vulnerable individuals raises many ethical questions. Establishing an ethical framework to guide the development, use, and evaluation of RCSs for older adults therefore seems highly pertinent. OBJECTIVE: The objective of this paper was to highlight the ethical issues related to the use of RCSs for health care purposes among older adults and to draft recommendations for researchers and health care professionals interested in using RCSs for older adults. METHODS: We conducted a narrative review of the literature to identify publications that included an analysis of the ethical dimension and recommendations regarding the use of RCSs for older adults. We used a qualitative analysis methodology inspired by a Health Technology Assessment model. We included all article types, such as theoretical papers, research studies, and reviews, dealing with ethical issues or recommendations for the implementation of these RCSs in a general population, particularly among older adults, in the health care sector, published after 2011 in either English or French. The review was performed between August and December 2021 using the PubMed, CINAHL, Embase, Scopus, Web of Science, IEEE Xplore, SpringerLink, and PsycINFO databases. Selected publications were analyzed using the European Network for Health Technology Assessment Core Model (version 3.0) around 5 ethical topics: benefit-harm balance, autonomy, privacy, justice and equity, and legislation.
RESULTS: In the 25 publications analyzed, the most cited ethical concerns were the risk of accidents, lack of reliability, loss of control, risk of deception, risk of social isolation, data confidentiality, and liability in case of safety problems. Recommendations included collecting the opinion of target users, collecting their consent, and training professionals in the use of RCSs. Proper data management, anonymization, and encryption appeared to be essential to protect RCS users' personal data. CONCLUSIONS: Our analysis supports the interest in using RCSs for older adults because of their potential contribution to individuals' quality of life and well-being. This analysis highlights many ethical issues linked to the use of RCSs for health-related goals. Future studies should consider the organizational consequences of the implementation of RCSs and the influence of cultural and socioeconomic specificities of the context of experimentation. We suggest implementing a scalable ethical and regulatory framework to accompany the development and implementation of RCSs for various aspects related to the technology, individual, or legal aspects.


Subject(s)
Robotics, Humans, Aged, Robotics/ethics, Mentoring/methods, Mentoring/ethics, Quality of Life
3.
Stud Health Technol Inform ; 313: 41-42, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682502

ABSTRACT

The present study aims to describe ethical and social requirements for technical and robotic systems for caregiving from the perspective of users. Users are interviewed in the ReduSys project during the development phase (prospective viewpoint) and after technology testing in the clinical setting (retrospective viewpoint). The preliminary results presented here refer to the prospective viewpoint.


Subject(s)
Robotics, Robotics/ethics, Humans, Morals, Patient Care/ethics
4.
Behav Brain Sci ; 46: e30, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37017043

ABSTRACT

Do people hold robots responsible for their actions? While Clark and Fischer present a useful framework for interpreting social robots, we argue that they fail to account for people's willingness to assign responsibility to robots in certain contexts, such as when a robot performs actions not predictable by its user or programmer.


Subject(s)
Behavior, Psychological Models, Robotics, Humans, Robotics/ethics, Robotics/methods, Emotions, Consciousness
5.
Behav Brain Sci ; 46: e31, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37017056

ABSTRACT

The target article proposes that people perceive social robots as depictions rather than as genuine social agents. We suggest that people might instead view social robots as social agents, albeit agents with more restricted capacities and moral rights than humans. We discuss why social robots, unlike other kinds of depictions, present a special challenge for testing the depiction hypothesis.


Subject(s)
Morals, Robotics, Humans, Robotics/ethics
6.
PLoS One ; 15(7): e0235361, 2020.
Article in English | MEDLINE | ID: mdl-32673326

ABSTRACT

Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media, and policy makers. Previous work showed that even the use of numerical values to express probabilities does not guarantee an accurate understanding by laypeople. We therefore investigate whether words, such as "likely" and "almost certainly not", can be used to communicate probability. We embedded these phrases in the context of the usage of autonomous vehicles. The results show that the association of phrases with percentages is not random and that there is a preferred order of phrases. The association is, however, not as consistent as hoped for. Hence, it would be advisable to complement the use of words with a numerical expression of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use to complement numerical values.


Subject(s)
Brain-Computer Interfaces/trends, Robotics/trends, Brain-Computer Interfaces/ethics, Humans, Probability, Risk Factors, Robotics/ethics
7.
J Alzheimers Dis ; 76(2): 461-466, 2020.
Article in English | MEDLINE | ID: mdl-32568203

ABSTRACT

Socially assistive robots have the potential to improve aged care by providing assistance through social interaction. While some evidence suggests a positive impact of social robots on measures of well-being, the adoption of robotic technology remains slow. One approach to improving technology adoption is involving all stakeholders in the process of technology development using co-creation methods. To capture relevant stakeholders' priorities and perceptions regarding the ethics of robotic companions, we conducted an interactive co-creation workshop at the 2019 Geriatric Services Conference in Vancouver, BC. The participants were presented with different portrayals of robotic companions in popular culture and answered questions about perceptions, expectations, and ethical concerns about the implementation of robotic technology. Our results reveal that the most pressing ethical concerns with robotic technology, such as issues related to privacy, are critical potential barriers to technology adoption. We also found that most participants agreed on the types of tasks that robots should help with, such as domestic chores, communication, and medication reminders. Activities that robots should not help with, according to the stakeholders, included bathing, toileting, and managing finances. The perspectives captured here contribute to a preliminary outline of the areas of importance for geriatric care stakeholders in the process of ethical technology design and development.


Subject(s)
Aging/psychology, Congresses as Topic, Education/methods, Robotics/methods, Social Interaction, Aged, Aging/ethics, British Columbia, Congresses as Topic/ethics, Education/ethics, Feasibility Studies, Humans, Pilot Projects, Robotics/ethics
9.
Cuad Bioet ; 31(101): 87-100, 2020.
Article in Spanish | MEDLINE | ID: mdl-32304201

ABSTRACT

Beyond the utopian or dystopian scenarios that accompany the progressive introduction of robots for care into daily environments, their use in the medical field entails controversies that require alternative forms of ethical responsibility. With this general objective, in this article we propose a series of reflections to articulate an ethical framework capable of orienting the introduction and use of robots in the field of health. The proposal is developed from a series of considerations about robots and care, as a starting point for an ethical framework based on the precautionary principle and measured action. It proposes a non-essentialist conceptualization of robots that emphasizes their relational and contextual nature, understanding robots as heterogeneous artifacts that are constituted in a network of therapeutic relationships and that mediate our care relationships. This approach has a set of implications, which we articulate around measured action as an ethical proposal. Measured action, in our interpretation, responds to the precautionary principle and is configured along four dimensions: (1) institutional commitment; (2) integration of the fears and hopes of all concerned actors; (3) realization through progressive and revocable actions, under continuous monitoring and evaluation; and (4) incorporation into the design process of those actors practicing "good care".


Subject(s)
Bioethical Issues, Delivery of Health Care/ethics, Robotics/ethics, Uncertainty, Humans, Morals
10.
J Alzheimers Dis ; 76(2): 445-455, 2020.
Article in English | MEDLINE | ID: mdl-32250295

ABSTRACT

Due to the high costs of providing long-term care to older adults with cognitive impairment, artificial companions are increasingly considered as a cost-efficient way to provide support. Artificial companions can comfort, entertain, and inform, and even induce a sense of being in a close relationship. Sensors and algorithms are increasingly leading to applications that exude a life-like feel. We focus on a case study of an artificial companion for people with cognitive impairment. This companion is an avatar on an electronic tablet that is displayed as a dog or a cat. Whereas artificial intelligence guides most artificial companions, this application also relies on technicians "behind" the on-screen avatar, who, via surveillance, interact with users. This case is notable because it particularly illustrates the tension between the endless opportunities offered by technology and the ethical issues stemming from limited regulations. Reviewing the case through the lens of biomedical ethics, concerns of deception, monitoring and tracking, as well as informed consent and social isolation are raised by the introduction of this technology to users with cognitive impairment. We provide a detailed description of the case, review the main ethical issues, and present two theoretical frameworks, the "human-driven technology" platform and the emancipatory gerontology framework, to inform the design of future applications.


Subject(s)
Artificial Intelligence/ethics, Cognitive Dysfunction/therapy, Friends, Patient Care Team/ethics, Robotics/ethics, Aged, Animals, Artificial Intelligence/standards, Cats, Cognitive Dysfunction/psychology, Dogs, Friends/psychology, Humans, Patient Care Team/standards, Robotics/standards
11.
AJOB Neurosci ; 11(2): 120-127, 2020.
Article in English | MEDLINE | ID: mdl-32228385

ABSTRACT

The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human levels of autonomy in order to protect us from their potentially destructive power. It is often assumed that to do that, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous" in that their ability to make choices and to act is limited across a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if it becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. We conclude on the basis of these comparisons that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, but that we can only permissibly maintain this approach so long as we do not aim at developing technology with human-level autonomy.


Subject(s)
Artificial Intelligence/ethics, Bioethics, Personal Autonomy, Animals, Humans, Robotics/ethics
12.
Sci Eng Ethics ; 26(1): 141-157, 2020 02.
Article in English | MEDLINE | ID: mdl-30701408

ABSTRACT

This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. The first type is so-called rookie mistakes, which could be addressed by providing these people with the necessary ethical knowledge. The second, more difficult methodological issue concerns areas of peer disagreement in ethics, where no easy solutions are currently available. This paper examines several existing approaches to highlight the ethical pitfalls and challenges involved. Familiarity with these and similar problems can help programmers to avoid pitfalls and build better moral machines. The paper concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral (i.e. prohibiting certain immoral actions) in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.


Subject(s)
Artificial Intelligence/ethics, Decision Making/ethics, Ethical Theory, Morals, Robotics/ethics, Dissent and Disputes, Ethicists, Research Personnel/ethics, Software/ethics
13.
J Gerontol B Psychol Sci Soc Sci ; 75(9): 1996-2007, 2020 10 16.
Article in English | MEDLINE | ID: mdl-31131848

ABSTRACT

OBJECTIVES: Socially assistive robots (SARs) need to be studied from older adults' perspective, given their predicted future ubiquity in aged-care settings. Current ethical discourses on SARs in aged care are uninformed by primary stakeholders' ethical perceptions. This study reports on what community-dwelling older adults in Flanders, Belgium, perceive as ethical issues of SARs in aged care. METHODS: Constructivist grounded theory guided the study of 9 focus groups of 59 community-dwelling older adults (70+ years) in Flanders, Belgium. An open-ended topic guide and a modified Alice Cares documentary focused discussions. The Qualitative Analysis Guide of Leuven (QUAGOL) guided data analysis. RESULTS: Data revealed older adults' multidimensional perceptions of the ethics of SARs, which were structured along three sections: (a) SARs as components of a techno-societal evolution, (b) SARs' embeddedness in aged-care dynamics, (c) SARs as embodiments of ethical considerations. DISCUSSION: These perceptions sociohistorically contextualize the ethics of SAR use through older adults' views on the societal, organizational, and relational contexts in which aged care takes place. These contexts need to inform the ethical criteria for the design, development, and use of SARs. Focusing on older adults' ethical perceptions creates "normativity in place," viewing participants as moral subjects.


Subject(s)
Aging, Independent Living, Robotics, Self-Help Devices, Social Perception/psychology, Aged, Aging/ethics, Aging/psychology, Belgium, Female, Focus Groups, Grounded Theory, Humans, Independent Living/ethics, Independent Living/psychology, Inventions/ethics, Male, Qualitative Research, Robotics/ethics, Robotics/trends, Self-Help Devices/ethics, Self-Help Devices/psychology, Self-Help Devices/trends, Social Evolution
14.
J Med Ethics ; 46(2): 128-136, 2020 02.
Article in English | MEDLINE | ID: mdl-31818967

ABSTRACT

Different embodiments of technology permeate all layers of public and private domains in society. In the public domain of aged care, attention is increasingly focused on the use of socially assistive robots (SARs) supporting caregivers and older adults to guarantee that older adults receive care. The introduction of SARs in aged-care contexts is accompanied by intensive empirical and philosophical research. Although these efforts merit praise, current empirical and philosophical research is still too far separated. Strengthening the connection between these two fields is crucial to a full understanding of the ethical impact of these technological artefacts. To bridge this gap, we propose a philosophical-ethical framework for SAR use, one that is grounded in the dialogue between empirical-ethical knowledge about and philosophical-ethical reflection on SAR use. We highlight the importance of considering the intuitions of older adults and their caregivers in this framework. Grounding philosophical-ethical reflection in these intuitions opens the ethics of SAR use in aged care to its own socio-historical contextualisation. Referring to the work of Margaret Urban Walker, Joan Tronto and Andrew Feenberg, it is argued that this socio-historical contextualisation of the ethics of SAR use already has strong philosophical underpinnings. Moreover, this contextualisation enables us to formulate a rudimentary decision-making process about SAR use in aged care which rests on three pillars: (1) stakeholders' intuitions about SAR use as sources of knowledge; (2) interpretative dialogues as democratic spaces to discuss the ethics of SAR use; (3) the concretisation of ethics in SAR use.


Subject(s)
Decision Making/ethics, Homes for the Aged, Nursing Homes, Robotics/ethics, Social Interaction, Social Isolation, Aged, Aged 80 and over, Caregivers, Communication, Empirical Research, Humans, Intuition, Knowledge, Morals, Philosophy
15.
Camb Q Healthc Ethics ; 29(1): 115-121, 2020 01.
Article in English | MEDLINE | ID: mdl-31858938

ABSTRACT

This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient-physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient-physician relationship, the physician's 'need to care' might be irreplaceable, and robot healthcare workers ('robot carers') might be seen as contributing to dehumanized healthcare practices.


Subject(s)
Artificial Intelligence/ethics, Medical Ethics, Physician-Patient Relations, Artificial Intelligence/legislation & jurisprudence, Confidentiality/ethics, European Union, Humans, Informed Consent, Physicians, Robotics/ethics, Robotics/legislation & jurisprudence
16.
BMC Med Ethics ; 20(1): 98, 2019 12 19.
Article in English | MEDLINE | ID: mdl-31856798

ABSTRACT

BACKGROUND: Advances in artificial intelligence (AI), robotics and wearable computing are creating novel technological opportunities for mitigating the global burden of population ageing and improving the quality of care for older adults with dementia and/or age-related disability. Intelligent assistive technology (IAT) is the umbrella term defining this ever-evolving spectrum of intelligent applications for the older and disabled population. However, the implementation of IATs has been observed to be sub-optimal due to a number of barriers in the translation of novel applications from the designing labs to the bedside. Furthermore, since these technologies are designed to be used by vulnerable individuals with age- and multi-morbidity-related frailty and cognitive disability, they are perceived to raise important ethical challenges, especially when they involve machine intelligence, collect sensitive data or operate in close proximity to the human body. Thus, the goal of this paper is to explore and assess the ethical issues that professional stakeholders perceive in the development and use of IATs in elderly and dementia care. METHODS: We conducted a multi-site study involving semi-structured qualitative interviews with researchers and health professionals. We analyzed the interview data using a descriptive thematic analysis to inductively explore relevant ethical challenges. RESULTS: Our findings indicate that professional stakeholders regard issues of patient autonomy and informed consent, quality of data management, distributive justice, and human contact as ethical priorities. Divergences emerged in relation to how these ethical issues are interpreted, how conflicts between different ethical principles are resolved and what solutions should be implemented to overcome current challenges.
CONCLUSIONS: Our findings indicate a general agreement among professional stakeholders on the ethical promises and challenges raised by the use of IATs among older and disabled users. Yet, notable divergences persist regarding how these ethical challenges can be overcome and what strategies should be implemented for the safe and effective implementation of IATs. These findings provide technology developers with useful information about unmet ethical needs. Study results may guide policy makers with firsthand information from relevant stakeholders about possible solutions for ethically-aligned technology governance.


Subject(s)
Artificial Intelligence/ethics, Self-Help Devices/ethics, Dementia, Europe, Female, Health Personnel/psychology, Humans, Interviews as Topic, Male, Qualitative Research, Research Personnel/psychology, Robotics/ethics, Stakeholder Participation
17.
Methods Inf Med ; 58(S 01): e14-e25, 2019 06.
Article in English | MEDLINE | ID: mdl-31342471

ABSTRACT

BACKGROUND: Health information systems have developed rapidly and considerably during the last decades, taking advantage of many new technologies. Robots used in operating theaters represent an exceptional example of this trend. Yet, the more these systems are designed to act autonomously and intelligently, the more complex and ethical questions arise about serious implications of how future hybrid clinical team-machine interactions ought to be envisioned, in situations where actions and their decision-making are continuously shared between humans and machines. OBJECTIVES: To discuss the many different viewpoints-from surgery, robotics, medical informatics, law, and ethics-that the challenges of novel team-machine interactions raise, together with potential consequences for health information systems, in particular on how to adequately consider what hybrid actions can be specified, and in which sense these do imply a sharing of autonomous decisions between (teams of) humans and machines, with robotic systems in operating theaters as an example. RESULTS: Team-machine interaction and hybrid action of humans and intelligent machines, as is now becoming feasible, will lead to fundamental changes in a wide range of applications, not only in the context of robotic systems in surgical operating theaters. Collaboration of surgical teams in operating theaters as well as the roles, competencies, and responsibilities of humans (health care professionals) and machines (robotic systems) need to be reconsidered. Hospital information systems will in future not only have humans as users, but also provide the ground for actions of intelligent machines. CONCLUSIONS: The expected significant changes in the relationship of humans and machines can only be appropriately analyzed and considered by inter- and multidisciplinary collaboration. 
Fundamentally new approaches are needed to construct the reasonable concepts surrounding hybrid action that will take into account the ascription of responsibility to the radically different types of human versus nonhuman intelligent agents involved.


Subject(s)
Artificial Intelligence, Delivery of Health Care, Operating Rooms, Robotics, Delivery of Health Care/ethics, Humans, Medical Informatics, Operating Rooms/ethics, Robotics/ethics
18.
Trends Cogn Sci ; 23(5): 365-368, 2019 05.
Article in English | MEDLINE | ID: mdl-30962074

ABSTRACT

As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will, plus human likeness and the robot's capacity for harm. We also consider questions of robot rights and moral decision-making.


Subject(s)
Morals, Robotics/ethics, Humans, Personal Autonomy, Social Responsibility
19.
Med Law Rev ; 27(4): 553-575, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-30938445

ABSTRACT

In July 2014, the roboticist Ronald Arkin suggested that child sex robots could be used to treat those with paedophilic predilections in the same way that methadone is used to treat heroin addicts. Taking this on board, it would seem that there is reason to experiment with the regulation of this technology. But most people seem to disagree with this idea, with legal authorities in both the UK and US taking steps to outlaw such devices. In this article, I subject these different regulatory attitudes to critical scrutiny. In doing so, I make three main contributions to the debate. First, I present a framework for thinking about the regulatory options that we confront when dealing with child sex robots. Secondly, I argue that there is a prima facie case for restrictive regulation, but that this is contingent on whether Arkin's hypothesis has a reasonable prospect of being successfully tested. Thirdly, I argue that Arkin's hypothesis probably does not have a reasonable prospect of being successfully tested. Consequently, we should proceed with utmost caution when it comes to this technology.


Subject(s)
Commerce/ethics, Commerce/legislation & jurisprudence, Ethical Analysis, Government Regulation, Pedophilia/therapy, Robotics/ethics, Robotics/legislation & jurisprudence, Adult, Child, Child Sexual Abuse/prevention & control, Humans, Morals, Pedophilia/economics, Play and Playthings, Robotics/economics
20.
Nurs Ethics ; 26(4): 962-972, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29262739

ABSTRACT

The use of social robots in elder care is entering the mainstream as robots become more sophisticated and populations age. While there are many potential benefits to the use of social robots in care for older people, there are ethical challenges as well. This article focuses on the societal consequences of the adoption of social robots in care for people with dementia. Making extensive use of Alasdair MacIntyre's Dependent Rational Animals to discuss issues of unintended consequences and moral hazard, we contend that in choosing to avoid the vulnerability and dependency of human existence, a society blinds itself to the animal reality of humankind. The consequence of this is that a flourishing society, in which each individual is helped to develop the virtues essential to her flourishing, becomes harder to achieve.


Subject(s)
Dementia/therapy, Geriatrics/methods, Morals, Robotics/ethics, Geriatrics/trends, Healthy Aging, Humans, Robotics/trends