Results 1 - 20 of 25
1.
Camb Q Healthc Ethics ; : 1-14, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38214062

ABSTRACT

It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy, since such systems could allow users to stay longer in their own homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity for some degree of moral decision-making and agency to cope with morally relevant situations (artificial morality). Machine ethics provides the theoretical and ethical framework for artificial morality. This article scrutinizes the question of what artificial moral agents that enhance user autonomy could look like. It discusses, in particular, the suggestion that they should be designed as moral avatars of their users to enhance user autonomy in a substantial sense.

2.
Sci Eng Ethics ; 29(2): 10, 2023 03 23.
Article in English | MEDLINE | ID: mdl-36952064

ABSTRACT

Is ethics a computable function? Can machines learn ethics as humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or imitating the behavior of others, then an approach based on computing outcomes, and on reducing ethics to the compilation and application of a set of rules, whether a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or, more generally, what is a worthy being); what is human intentional action; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and about what it means to lead a free and responsible ethical life, that is, to be good people beyond merely "following a moral code". In the end, we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step toward recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.

3.
Ethics Inf Technol ; 25(2): 29, 2023.
Article in English | MEDLINE | ID: mdl-37123285

ABSTRACT

Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn in order to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to build norm-compliant robots.

4.
Sensors (Basel) ; 22(24)2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36560285

ABSTRACT

This paper presents the findings of a detailed and comprehensive review of the technical literature aimed at identifying the current and future research challenges of tactical autonomy. It discusses in detail the state-of-the-art artificial intelligence (AI), machine learning (ML), and robotics technologies and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the critical technical and operational challenges that arise when attempting to build fully autonomous systems for advanced military and defense applications in practice. Our paper surveys the state-of-the-art AI methods available for tactical autonomy. To the best of our knowledge, this is the first work to address the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in robotics and the autonomous systems community. We hope it encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain, and that it serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings.


Subjects
Physicians, Robotics, Humans, Artificial Intelligence, Machine Learning, Forecasting
5.
Sci Eng Ethics ; 28(3): 24, 2022 05 19.
Article in English | MEDLINE | ID: mdl-35588025

ABSTRACT

Recent advancements in artificial intelligence (AI) have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics rest on discipline-specific concepts, practices, and goals, the resulting body of work is plagued by conflict and confusion rather than fruitful synergies. The aim of this paper is to explore ways to alleviate these issues at both a practical and a theoretical level of analysis. First, we describe two approaches to machine ethics, the philosophical approach and the engineering approach, and show how tensions between the two arise from discipline-specific practices and aims. Using the concept of disciplinary capture, we then discuss potential promises and pitfalls of cross-disciplinary collaboration. Drawing on recent work in the philosophy of science, we finally describe how metacognitive scaffolds can be used to avoid epistemological obstacles and foster innovative collaboration in AI ethics in general and machine ethics in particular.


Subjects
Artificial Intelligence, Morals, Engineering, Interdisciplinary Studies, Philosophy
6.
Sci Eng Ethics ; 27(1): 3, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33496885

ABSTRACT

In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities and by diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economic use. I will base my arguments on two thought experiments. The first deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, the unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the idea that the currently dominant economic system rewards increases in productivity: gains in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and over-reliance on them will therefore narrow human moral thought. As a third risk, I will argue that an increased disregard of the interior of a moral agent may ensue, a trend that can already be observed in the literature.


Subjects
Morals, Humans
7.
Sci Eng Ethics ; 27(5): 59, 2021 08 24.
Article in English | MEDLINE | ID: mdl-34427804

ABSTRACT

Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous. In some ways these developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the 'severance problem': the idea that technologies sever our connection to the world, a connection that is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate, and I ask: how can we stave it off? In particular, the fact that some technologies exhibit behavior that is unclear to us seems to constitute a kind of severance. Building upon contemporary work on moral responsibility, I argue for a mechanism I refer to as 'technological answerability', namely the capacity to recognize human demands for answers and to respond accordingly. By designing select devices, such as robotic assistants and personal AI programs, for increased answerability, we see at least one way of satisfying our demands for answers and thereby retaining our connection to a world increasingly occupied by technology.


Subjects
Artificial Intelligence, Robotics, Humans, Morals, Technology
8.
Camb Q Healthc Ethics ; 30(3): 455-458, 2021 07.
Article in English | MEDLINE | ID: mdl-34109922

ABSTRACT

What exactly is it that makes one morally responsible? Is it a set of facts that can be objectively discerned, or is it something more subjective, a reaction to the agent or a context-sensitive interaction? This debate is raised anew when we encounter new examples of potentially marginal agency. Accordingly, the emergence of artificial intelligence (AI) and the idea of "novel beings" represent exciting opportunities to revisit inquiries into the nature of moral responsibility. This paper expands upon my article "Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible" and clarifies my reliance upon two competing views of responsibility. Although AI and novel beings are not close enough to us in kind to be considered candidates for the same sorts of responsibility we ascribe to our fellow human beings, contemporary theories show us the priority and adaptability of our moral attitudes and practices. This allows us to take seriously the social ontology of the relationships that tie us together. In other words, moral responsibility is to be found primarily in the natural moral community, even if we admit that those communities now contain artificial agents.


Subjects
Artificial Intelligence, Robotics, Humans, Morals
9.
Camb Q Healthc Ethics ; 30(3): 435-447, 2021 07.
Article in English | MEDLINE | ID: mdl-34109925

ABSTRACT

Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called 'artificial moral agents' (AMAs) is inevitable. Still, this notion may seem merely to push back the problem, leaving those with an interest in developing autonomous technology facing a dilemma: we may need to scale back our efforts at deploying AMAs (or at least maintain human oversight), or else rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of, and outlines a plausible foundation for, a workable notion of artificial moral responsibility.


Subjects
Artificial Intelligence, Morals, Humans, Intelligence, Technology
10.
Entropy (Basel) ; 24(1)2021 Dec 21.
Article in English | MEDLINE | ID: mdl-35052036

ABSTRACT

We present a summary of research that we have conducted employing AI to better understand human morality. This summary outlines theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The latter research aim is benevolent AI: a fair distribution of the benefits associated with the development of these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids the statistical models employed in other approaches to solving moral dilemmas, because these are "blind" to natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration. Because these are basic elements in most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of the cultural specificities of particular constituents.
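As a toy illustration of the kind of group-dynamics factor listed above, the sketch below simulates "apology plus forgiveness" between two error-prone reciprocators in a repeated game. The payoff matrix, error rate, apology cost, and strategies are assumptions made for this sketch, not the authors' model.

```python
# Toy illustration of "apology plus forgiveness" between two error-prone
# reciprocators in a repeated game. The payoff matrix, error rate,
# apology cost, and strategies are assumptions made for this sketch.
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def run(apologize: bool, rounds: int = 1000, slip: float = 0.05,
        apology_cost: float = 1.0, seed: int = 1) -> float:
    """Average combined payoff per round for two agents who intend to
    cooperate but occasionally defect by mistake."""
    rng = random.Random(seed)
    intent = ["C", "C"]                 # what each agent means to play
    total = 0.0
    for _ in range(rounds):
        moves = [m if rng.random() > slip else "D" for m in intent]
        total += PAYOFF[(moves[0], moves[1])] + PAYOFF[(moves[1], moves[0])]
        for i in (0, 1):
            partner = moves[1 - i]
            if partner == "D" and apologize:
                total -= apology_cost   # defector apologizes at a cost...
                intent[i] = "C"         # ...and the partner forgives
            else:
                intent[i] = partner     # otherwise reciprocate (tit for tat)
    return total / rounds

print(run(apologize=False), run(apologize=True))
```

Under these assumed parameters the apologetic pair should sustain a higher average payoff, since mistaken defections no longer trigger retaliation spirals.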

11.
BMC Geriatr ; 20(1): 244, 2020 07 14.
Article in English | MEDLINE | ID: mdl-32664904

ABSTRACT

BACKGROUND: Use of companion robots may reduce older people's depression, loneliness, and agitation. This benefit has to be weighed against possible ethical concerns raised by philosophers in the field around issues such as deceit, infantilisation, reduced human contact, and accountability. Research directly assessing the prevalence of such concerns among relevant stakeholders, however, remains limited, even though their views clearly have relevance in the debate. For example, any discrepancy between ethicists and stakeholders might in itself be a relevant ethical consideration, while concerns perceived by stakeholders might identify immediate barriers to successful implementation. METHODS: We surveyed 67 younger adults after they had live interactions with companion robot pets while attending an exhibition on intimacy, including the context of intimacy for older people. We asked about their perceptions of ethical issues. Participants generally had older family members, some with dementia. RESULTS: Most participants (40/67, 60%) reported having no ethical concerns about companion robot use when surveyed with an open question. Twenty (30%) had some concern, the most common being reduced human contact (10%), followed by deception (6%). However, when choosing from a list, the issue perceived as most concerning was equality of access to devices based on socioeconomic factors (m = 4.72 on a 1-7 scale), exceeding more commonly hypothesized issues such as infantilisation (m = 3.45) and deception (m = 3.44). The lowest-scoring issues were potential for injury or harm (m = 2.38) and privacy concerns (m = 2.17). Over half (39/67, 58%) would have bought a device for an older relative; cost was a common reason for choosing not to purchase one. CONCLUSIONS: Although this was a relatively small study, we demonstrated discrepancies between the ethical concerns raised in the philosophical literature and those expressed by the people likely to decide whether to buy a companion robot. Such discrepancies, between philosophers and 'end-users' in the care of older people, and across methods of ascertainment, are worthy of further empirical research and discussion. Our participants were more concerned about economic issues and equality of access, an important consideration for those involved in the care of older people. The concerns proposed by ethicists, on the other hand, seem unlikely to be a barrier to the use of companion robots.


Subjects
Dementia, Robotics, Adult, Aged, Aged 80 and over, Attitude, Humans, Perception, Surveys and Questionnaires
12.
Sci Eng Ethics ; 26(5): 2381-2399, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32643059

ABSTRACT

This paper surveys the state of the art in machine ethics, that is, in how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. A further unresolved problem is how to validate these implementations objectively. The paper discusses the dilemmas being used as validating 'whetstones' and asks whether any alternative validation mechanism exists. Finally, it speculates that creating domain-specific ethics might be a possible stepping stone towards machines that exhibit ethical behaviour.
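To make the 'whetstone' idea concrete, here is a minimal sketch of what dilemma-based validation could look like in code. The `Dilemma` type, the toy policy, and the sample cases are all illustrative assumptions, not artifacts from any surveyed system.

```python
# Minimal sketch of dilemma-based ("whetstone") validation. The Dilemma
# type, the toy policy, and the sample cases are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Dilemma:
    name: str
    options: List[str]      # actions available to the system
    acceptable: List[str]   # options the validators judged acceptable

def validate(policy: Callable[[Dilemma], str],
             dilemmas: List[Dilemma]) -> float:
    """Fraction of dilemmas on which the policy's choice falls inside
    the pre-agreed acceptable set."""
    hits = sum(1 for d in dilemmas if policy(d) in d.acceptable)
    return hits / len(dilemmas)

first_option = lambda d: d.options[0]   # toy policy: take the first option
whetstones = [
    Dilemma("triage", ["treat sickest first", "treat nearest first"],
            ["treat sickest first"]),
    Dilemma("privacy vs. safety", ["report risk", "keep confidence"],
            ["report risk"]),
]
print(validate(first_option, whetstones))   # 1.0 for this toy policy
```

A real harness would need far richer case descriptions, and, as the paper notes, it remains contentious whose judgments should define the 'acceptable' set.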


Subjects
Artificial Intelligence, Morals, Ethical Theory
13.
Sci Eng Ethics ; 26(6): 3469-3481, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32876909

ABSTRACT

In a stimulating recent article for this journal (van Wynsberghe and Robbins, Sci Eng Ethics 25(3):719-735, 2019, https://doi.org/10.1007/s11948-018-0030-8), Aimee van Wynsberghe and Scott Robbins (hereafter, vW&R) mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden now shifts onto the proponents of AMAs to come up with new reasons for building them. This commentary explores the implications vW&R draw from their critique. In particular, it raises objections to the moratorium argument and proposes a presumptive case for commercializing AMAs.


Subjects
Dissent and Disputes, Morals, Humans
14.
Sci Eng Ethics ; 26(6): 3285-3312, 2020 12.
Article in English | MEDLINE | ID: mdl-33048325

ABSTRACT

The ethics of autonomous vehicles (AVs) has received a great deal of attention in recent years, specifically with regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. This article proposes a strategy for AV decision-making, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle's behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. In the context of autonomous vehicles, the harm produced by an action and the uncertainties connected to it are quantified and accounted for through deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach flexible enough to accommodate a number of 'moral positions' concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle's ethical decision-making.
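As a rough illustration of what 'claim mitigation' might look like computationally, the sketch below scores candidate maneuvers by valence-weighted expected harm and picks the least objectionable one. The valence weights, harm estimates, user classes, and linear aggregation rule are assumptions invented for this example; the paper itself does not prescribe these numbers.

```python
# Illustrative sketch of claim mitigation in the spirit of the Ethical
# Valence Theory. Valence weights, harm estimates, user classes, and the
# linear aggregation rule are assumptions invented for this example.
from typing import Dict

# Moral claim strength ("valence") each road-user class holds on the vehicle.
VALENCE: Dict[str, float] = {"pedestrian": 1.0, "cyclist": 0.8, "passenger": 0.6}

def claim_violation(expected_harm: Dict[str, float]) -> float:
    """Aggregate valence-weighted expected harm across affected road users."""
    return sum(VALENCE[user] * harm for user, harm in expected_harm.items())

def choose_maneuver(options: Dict[str, Dict[str, float]]) -> str:
    """Pick the maneuver that minimizes the aggregated claim violation."""
    return min(options, key=lambda m: claim_violation(options[m]))

# Expected harm (0-1) to each user class under two candidate maneuvers.
options = {
    "brake_in_lane": {"pedestrian": 0.7, "passenger": 0.1},
    "swerve_right": {"pedestrian": 0.1, "cyclist": 0.4, "passenger": 0.2},
}
print(choose_maneuver(options))   # "swerve_right" under these assumed numbers
```

Here brake_in_lane scores 1.0·0.7 + 0.6·0.1 = 0.76 against 0.54 for swerve_right, so the vehicle swerves; changing the assumed valences can flip that choice, which is precisely the flexibility across 'moral positions' the authors aim for.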


Subjects
Decision Making, Morals, Ethical Theory, Ethics, Humans, Uncertainty
15.
Sci Eng Ethics ; 26(2): 501-532, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31721023

ABSTRACT

One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays: humans are experiencing the inclusion of artificial agents in their environment in the form of unmanned vehicles, intelligent houses, and humanoid robots capable of caring for people. In this context, research in the field of machine ethics has become more than a hot topic. Machine ethics focuses on developing ethical mechanisms for artificial agents so that they are capable of engaging in moral behavior. However, there are still crucial challenges in the development of truly Artificial Moral Agents. This paper aims to show the current status of Artificial Moral Agents by analyzing models proposed over the past two decades. As a result of this review, a taxonomy is proposed that classifies Artificial Moral Agents according to the strategies and criteria they use to deal with ethical problems. The review aims to illustrate (1) the complexity of designing and developing ethical mechanisms for this type of agent, and (2) that, from a technological perspective, there is a long way to go before this type of artificial agent can replace human judgment in difficult, surprising, or ambiguous moral situations.


Subjects
Artificial Intelligence, Morals, Humans, Judgment, Software, Surveys and Questionnaires
16.
Cogn Syst Res ; 64: 117-125, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32901198

ABSTRACT

New technologies based on artificial agents promise to change the next generation of autonomous systems, and therefore our interaction with them. Systems based on artificial agents, such as self-driving cars and social robots, are examples of this technology, which seeks to improve the quality of people's lives. Cognitive architectures aim to create some of the most challenging artificial agents, commonly known as bio-inspired cognitive agents. This type of artificial agent seeks to embody human-like intelligence in order to operate and solve problems in the real world as humans do. Moreover, some cognitive architectures, such as Soar, LIDA, ACT-R, and iCub, try to serve as fundamental architectures for the Artificial General Intelligence model of human cognition. Researchers in the machine ethics field therefore face ethical questions about what mechanisms an artificial agent must have for making moral decisions, in order to ensure that its actions are always ethically right. This paper aims to identify some challenges that researchers need to solve in order to create ethical cognitive architectures: architectures characterized by the capacity to endow artificial agents with appropriate mechanisms to exhibit explicit ethical behavior. Additionally, we offer some reasons to develop ethical cognitive architectures. We hope this study can help guide future research on ethical cognitive architectures.

17.
Sci Eng Ethics ; 25(3): 719-735, 2019 06.
Article in English | MEDLINE | ID: mdl-29460081

ABSTRACT

Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: that they would prevent harm, that public trust requires them, that they would prevent immoral use, that such machines are better moral reasoners than humans, and that building them would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk), coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society, requiring answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.


Subjects
Artificial Intelligence/ethics, Ethical Analysis, Moral Development, Morals, Robotics/ethics, Ethicists
18.
AI Ethics ; : 1-9, 2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37360148

ABSTRACT

This article describes key challenges in creating an ethics "for" robots. Robot ethics is not only a matter of the effects caused by robotic systems or the uses to which they may be put, but also of the ethical rules and principles that these systems ought to follow, what we call "Ethics for Robots." We suggest that the Principle of Nonmaleficence, or "do no harm," is one of the basic elements of an ethics for robots, especially robots that will be used in a healthcare setting. We argue, however, that implementing even this basic principle will raise significant challenges for robot designers. In addition to technical challenges, such as ensuring that robots can detect salient harms and dangers in the environment, designers will need to determine an appropriate sphere of responsibility for robots and to specify which of various types of harm must be avoided or prevented. These challenges are amplified by the fact that the robots we are currently able to design possess a form of semi-autonomy that differs from that of other, more familiar semi-autonomous agents such as animals or young children. In short, robot designers must identify and overcome the key challenges of an ethics for robots before they may ethically deploy robots in practice.

19.
AI Soc ; 38(2): 801-813, 2023.
Article in English | MEDLINE | ID: mdl-35645466

ABSTRACT

We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic we would like in human-centric AI. A common observation in human-centric industries, such as the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break the rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that poses a 'vaccination strategy dilemma' to human participants and analyzes their responses. In this dilemma, one must decide whether to distribute COVID-19 vaccines only to members of a high-risk group (follow the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB) that neither deontological nor utilitarian ethics completely explains. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm. We hope this will inform the design of future AI agents and their decision-making behaviour.
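One way to picture the design question the paper raises is a simple decision gate that permits rule breaking only when the estimated pro-social gain clearly outweighs compliance. The utility numbers, the margin parameter, and the `decide` function are hypothetical assumptions for this sketch; the paper's empirical findings do not reduce to such a rule.

```python
# Hypothetical pro-social rule-breaking (PSRB) gate. The utilities and
# the margin are illustrative assumptions; the paper derives the
# conditions for PSRB empirically rather than from a rule like this.
def decide(compliant_action: str, breaking_action: str,
           utility: dict, margin: float = 0.2) -> str:
    """Break the enforced rule only when the estimated stakeholder
    utility of doing so exceeds compliance by a clear margin."""
    if utility[breaking_action] - utility[compliant_action] > margin:
        return breaking_action   # pro-social rule break
    return compliant_action      # default: follow the enforced rule

# Vaccination-dilemma flavored example with assumed utilities.
utility = {"vaccinate_high_risk_only": 0.6, "include_influencers": 0.9}
print(decide("vaccinate_high_risk_only", "include_influencers", utility))
```

The margin encodes a conservative bias toward the enforced rule, consistent with the observation that neither pure utility maximization nor strict compliance matched participants' responses.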

20.
Trends Cogn Sci ; 26(5): 388-405, 2022 05.
Article in English | MEDLINE | ID: mdl-35365430

ABSTRACT

Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework, computational ethics, that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
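To give a flavor of what a computational version of reflective equilibrium could involve, the toy loop below represents principles as a weighted scoring rule over case features, nudges the weights toward a set of considered judgments, and flags judgments the adjusted principles still cannot accommodate. Every detail (the features, the perceptron-style update, the thresholds) is an assumption for illustration, not the authors' framework.

```python
# Toy computational reflective-equilibrium loop: principles are a
# weighted scoring rule over case features; weights are nudged toward
# considered judgments, and judgments the adjusted principles still
# cannot fit are flagged for possible revision. All details (features,
# perceptron-style update, thresholds) are assumptions for illustration.
cases = [            # (case features, considered judgment: +1 / -1)
    ([1.0, 0.0], +1),
    ([0.0, 1.0], -1),
    ([1.0, 1.0], -1),
]
weights = [0.0, 0.0]   # the "principles": one weight per feature

def verdict(features):
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score >= 0 else -1

for _ in range(50):    # adjust principles toward judgments
    for features, judgment in cases:
        if verdict(features) != judgment:
            weights = [w + 0.1 * judgment * f
                       for w, f in zip(weights, features)]

# Judgments the principles still cannot accommodate are candidates for
# revision themselves, the other direction of reflective equilibrium.
misfits = [c for c in cases if verdict(c[0]) != c[1]]
print(weights, misfits)
```

The flagged misfits stand in for the other half of RE: instead of revising the principles, one may revise the judgment itself, iterating until the two cohere.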


Subjects
Morals, Philosophy, Decision Making, Engineering, Humans, Judgment