Results 1 - 13 of 13
1.
AI Ethics ; : 1-10, 2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36466152

ABSTRACT

Democratic theories assume that citizens have some form of political knowledge in order to vote for representatives or to directly engage in democratic deliberation and participation. However, apart from widespread attention to the phenomenon of fake news and misinformation, less attention has been paid to how citizens are supposed to acquire that knowledge in contexts shaped by artificial intelligence and related digital technologies. While this topic can also be approached from an empirical angle, this paper contributes to supporting concerns about AI and democracy by looking at the issue through the lens of political epistemology, in particular using the concept of epistemic agency. It argues that artificial intelligence (AI) endangers democracy since it risks diminishing the epistemic agency of citizens and thereby undermining the relevant kind of political agency in democracy. It shows that next to fake news and manipulation by means of AI analysis of big data, epistemic bubbles and the defaulting of statistical knowledge endanger the epistemic agency of citizens when they form and wish to revise their political beliefs. AI risks undermining trust in one's own epistemic capacities and hindering the exercise of those capacities. If we want to protect the knowledge basis of our democracies, we must address these problems in education and technology policy.

2.
Sci Eng Ethics ; 28(5): 38, 2022 10.
Article in English | MEDLINE | ID: mdl-36040561

ABSTRACT

To be intrinsically valuable means to be valuable for its own sake. Moral philosophy is often ethically anthropocentric, meaning that it locates intrinsic value within humans. This paper rejects ethical anthropocentrism and asks, in what ways might nonhumans be intrinsically valuable? The paper answers this question with a wide-ranging survey of theories of nonhuman intrinsic value. The survey includes both moral subjects and moral objects, and both natural and artificial nonhumans. Literatures from environmental ethics, philosophy of technology, philosophy of art, moral psychology, and related fields are reviewed, and gaps in these literatures are identified. Although the gaps are significant and much work remains to be done, the survey nonetheless demonstrates that those who reject ethical anthropocentrism have considerable resources available to develop their moral views. Given the many very high-stakes issues involving both natural and artificial nonhumans, and the sensitivity of these issues to how nonhumans are intrinsically valued, this is a vital project to pursue.


Subject(s)
Morals, Philosophy, Humans
4.
Sci Eng Ethics ; 28(2): 16, 2022 03 29.
Article in English | MEDLINE | ID: mdl-35352197

ABSTRACT

Recently there has been more attention to the cultural aspects of social robots. This paper contributes to this effort by offering a philosophical, in particular Wittgensteinian, framework for conceptualizing in what sense and how robots are related to culture, and by exploring what it would mean to create an "Ubuntu Robot". In addition, the paper gestures towards a more culturally diverse and more relational approach to social robotics and emphasizes the role technology can play in addressing the challenges of modernity and in assisting cultural change: it argues that robots can help us to engage in cultural dialogue, reflect on our own culture, and change how we do things. In this way, the paper contributes to the growing literature on cross-cultural approaches to social robotics.


Subject(s)
Robotics, Technology
5.
AI Ethics ; 1(2): 131-138, 2021.
Article in English | MEDLINE | ID: mdl-34790946

ABSTRACT

The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI ethics research. We offer this paper and its bibliography as a resource to the global community of AI ethics researchers who argue for the protection and freedom of this research community. Corporate as well as academic research settings involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors and biases, and individual researchers' work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI ethics researchers and the evolution of standards accepted in our profession, guiding a technological future that makes life better for all.

6.
Sci Eng Ethics ; 26(4): 2051-2068, 2020 08.
Article in English | MEDLINE | ID: mdl-31650511

ABSTRACT

This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of "many things" is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or "patients" of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.


Subject(s)
Artificial Intelligence, Knowledge, Humans
7.
Sci Eng Ethics ; 24(5): 1503-1519, 2018 10.
Article in English | MEDLINE | ID: mdl-28812291

ABSTRACT

In the philosophy of technology after the empirical turn, little attention has been paid to language and its relation to technology. In this programmatic and explorative paper, it is proposed to use the later Wittgenstein, not only to pay more attention to language use in philosophy of technology, but also to rethink technology itself-at least technology in its aspect of tool, technology-in-use. This is done by outlining a working account of Wittgenstein's view of language (as articulated mainly in the Investigations) and by then applying that account to technology-turning around Wittgenstein's metaphor of the toolbox. Using Wittgenstein's concepts of language games and form of life and coining the term 'technology games', the paper proposes and argues for a use-oriented, holistic, transcendental, social, and historical approach to technology which is empirically but also normatively sensitive, and which takes into account implicit knowledge and know-how. It gives examples of interaction with social robots to support the relevance of this project for understanding and evaluating today's technologies, makes comparisons with authors in philosophy of technology such as Winner and Ihde, and sketches the contours of a phenomenology and hermeneutics of technology use that may help us to understand but also to gain a more critical relation to specific uses of concrete technologies in everyday contexts. Ultimately, given the holism argued for, it also promises a more critical relation to the games and forms of life technologies are embedded in-to the ways we do things.


Subject(s)
Language, Philosophy, Technology, Comprehension, Humans, Interpersonal Relations, Knowledge, Play and Playthings, Robotics
8.
Sci Eng Ethics ; 22(1): 47-65, 2016 Feb.
Article in English | MEDLINE | ID: mdl-25894654

ABSTRACT

The use of robots in therapy for children with autism spectrum disorder (ASD) raises issues concerning the ethical and social acceptability of this technology and, more generally, about human-robot interaction. However, philosophical papers on the ethics of human-robot interaction usually do not take into account stakeholders' views; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism therapy community. To support responsible research and innovation in this field, this paper identifies a range of ethical, social, and therapeutic concerns, and presents and discusses the results of an exploratory survey that investigated these issues and explored stakeholders' expectations about this kind of therapy. We conclude that although in general stakeholders approve of using robots in therapy for children with ASD, it is wise to avoid replacing therapists with robots and to develop and use robots that have what we call supervised autonomy. This is likely to create more trust among stakeholders and improve the quality of the therapy. Moreover, our research suggests that issues concerning the appearance of the robot need to be adequately dealt with by the researchers and therapists. For instance, our survey suggests that zoomorphic robots may be less problematic than robots that look too much like humans.


Asunto(s)
Actitud , Trastorno del Espectro Autista/terapia , Robótica , Confianza , Niño , Humanos , Apego a Objetos , Padres , Apariencia Física , Psicoterapia , Robótica/ética , Maestros , Encuestas y Cuestionarios
9.
Theor Med Bioeth ; 36(4): 265-77, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26002636

ABSTRACT

When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines, as artificial agents that take over care tasks, meet these criteria. Particular attention is paid to intuitions about the meaning of 'care', 'agency', and 'taking over', but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using machines in medicine and health care, the idea of them functioning and appearing as 'artificial agents' is problematic and draws our attention to problems in human care that were already present before visions of machine care entered the stage. It is recommended that the discussion about care machines be connected to a broader discussion about the impact of technology on human relations in the context of modernity.


Subject(s)
Biomedical Technology, Delivery of Health Care/methods, Patient Care/methods, Humans, Social Change
10.
Med Health Care Philos ; 16(4): 807-16, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23338289

ABSTRACT

Contemporary health care relies on electronic devices. These technologies are not ethically neutral but change the practice of care. In light of Sennett's work and that of other thinkers (Dewey, Dreyfus, Borgmann), one worry is that "e-care" (care by means of new information and communication technologies) does not promote skilful and careful engagement with patients and hence is conducive neither to the quality of care nor to the virtues of the care worker. Attending to the kinds of knowledge involved in care work and their moral significance, this paper explores what "craftsmanship" means in the context of medicine and health care and discusses whether today the caregiver's craftsmanship is eroded. It is argued that this is a real danger, especially under modern conditions and in the case of telecare, but that whether it happens, and to what extent, depends on whether in a specific practice and given a specific technology e-carers can develop the know-how and skill to engage more intensely with those under their care and to cooperate with their co-workers.


Asunto(s)
Competencia Clínica/normas , Informática Médica/normas , Calidad de la Atención de Salud/normas , Telemedicina/normas , Humanos
11.
Sci Eng Ethics ; 18(1): 35-48, 2012 Mar.
Article in English | MEDLINE | ID: mdl-20862561

ABSTRACT

The standard response to engineering disasters like the Deepwater Horizon case is to ascribe full moral responsibility to individuals and to collectives treated as individuals. However, this approach is inappropriate, since concrete action and experience in engineering contexts seldom meet the criteria of our traditional moral theories. Technological action is often distributed rather than individual or collective, we lack full control of the technology and its consequences, and we lack knowledge and are uncertain about these consequences. In this paper, I analyse these problems by employing Kierkegaardian notions of tragedy and moral responsibility in order to account for experiences of the tragic in technological action. I argue that ascription of responsibility in engineering contexts should be sensitive to personal experiences of lack of control, uncertainty, role conflicts, social dependence, and tragic choice. I conclude that this does not justify evading individual and corporate responsibility, but inspires practices of responsibility ascription that are less 'harsh' on those directly involved in technological action, that listen to their personal experiences, and that encourage them to gain more knowledge about what they are doing.


Asunto(s)
Desastres , Ingeniería/ética , Obligaciones Morales , Contaminación por Petróleo/ética , Rol Profesional , Tecnología/ética , Conducta de Elección , Ética en los Negocios , Humanos , Conducta Social , Incertidumbre
12.
Sci Eng Ethics ; 16(2): 371-85, 2010 Jun.
Article in English | MEDLINE | ID: mdl-19722107

ABSTRACT

Engineering can learn from ethics, but ethics can also learn from engineering. In this paper, I discuss what engineering metaphors can teach us about practical philosophy. Using metaphors such as calculation, performance, and open source, I articulate two opposing views of morality and politics: one that relies on images related to engineering as science, and one that draws on images of engineering practice. I argue that the latter view and its metaphors provide a more adequate way to understand and guide the moral life. Responding to two problems of alienation, and taking into account developments such as the Fab Lab, I then further explore the implications of this view for engineering and society.


Asunto(s)
Ingeniería/ética , Principios Morales , Filosofía , Cambio Social , Simbolismo , Teoría Ética , Humanos , Política , Solución de Problemas/ética , Ciencia/ética , Semántica , Pensamiento/ética
13.
Sci Eng Ethics ; 13(2): 235-48, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17717735

ABSTRACT

An influential approach to engineering ethics is based on codes of ethics and the application of moral principles by individual practitioners. However, to better understand the ethical problems of complex technological systems and the moral reasoning involved in such contexts, we need other tools as well. In this article, we consider the role of imagination and develop a concept of distributed responsibility in order to capture a broader range of human abilities and dimensions of moral responsibility. We show that in the case of Snorre A, a near-disaster at an oil and gas production installation, imagination played a crucial and morally relevant role in how the crew coped with the crisis. For example, we discuss the role of scenarios and images in the moral reasoning and discussion of the platform crew during the crisis. Moreover, we argue that responsibility for increased system vulnerability, turning an undesired event into a near-disaster, should not be ascribed exclusively, for example, to individual engineers alone, but should be understood as distributed between various actors, levels, and times. We conclude that both managers and engineers need imagination to transcend their disciplinary perspectives, in order to improve the robustness of their organisations and to be better prepared for crisis situations. We recommend that education and training programmes be transformed accordingly.


Asunto(s)
Desastres/prevención & control , Ingeniería/ética , Imaginación , Petróleo , Humanos , Noruega