1.
Front Sociol ; 9: 1339834, 2024.
Article in English | MEDLINE | ID: mdl-38912311

ABSTRACT

With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion ("emotional AI"), attention is turning to its capacity for manipulating people by acting on factors that influence a person's decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people's views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public's expectations. Addressing this, we ascertain UK adults' perspectives on the potential of emotional AI technologies to manipulate people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented "emotoys" (where the toy responds to the child's facial and verbal expressions). In both settings, participants expressed concern that emotion profiling covertly exploits users' cognitive or affective weaknesses and vulnerabilities; in the social media setting, participants additionally expressed concern that emotion profiling damages people's capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.

2.
Asian Bioeth Rev ; 15(4): 417-430, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37808444

ABSTRACT

Emotions play a significant role in human relations, decision-making, and the motivation to act on those decisions. There are ongoing attempts to use artificial intelligence (AI) to read human emotions and to predict human behavior or actions that may follow those emotions. However, a person's emotions cannot be easily identified, measured, and evaluated by others, including automated machines and algorithms run by AI. The ethics of emotional AI is an active area of research, and this study examines emotional variables as well as perceptions of emotional AI in two large random groups of college students at an international university in Japan, with heavy representation of Japanese, Indonesian, Korean, Chinese, Thai, Vietnamese, and other Asian nationalities. Surveys with multiple closed-ended questions and an open-ended essay question regarding emotional AI were administered for quantitative and qualitative analysis, respectively. The results demonstrate how ethically questionable outcomes may arise from affective computing and from searching for correlations among a variety of factors in the collected data to classify individuals into categories, thereby aggravating bias and discrimination. Nevertheless, the qualitative study of the students' essays shows a rather optimistic view of the use of emotional AI, which underscores the need to raise awareness of the ethical pitfalls of AI technologies in the complex field of human emotions.

3.
MethodsX ; 10: 102149, 2023.
Article in English | MEDLINE | ID: mdl-37091958

ABSTRACT

Emotional artificial intelligence (AI) is a narrow, weak form of AI that reads, classifies, and interacts with human emotions. This form of smart technology has become an integral layer of our digital and physical infrastructures and will radically transform how we live, learn, and work. Not only will emotional AI provide numerous benefits (e.g., increased attention and awareness, optimized productivity, stress management), but in sensing and interacting with our intimate emotions, it also seeks to surreptitiously modify human behaviors. This study proposes to bring together the Technology Acceptance Model (TAM) and Moral Foundations Theory to study determinants of emotional AI's acceptance under the analytical framework of the Three-pronged Approach (Contexts, Variables, and Statistical models). We argue that to quantitatively study the acceptance of new technologies, it is necessary to leverage two intuitions. The first is that the degree of acceptance increases with how strongly users of smart technology perceive its usefulness and ease of use (formalized in the TAM). The second is that the degree of acceptance decreases when the user perceives the technology as threatening, rather than affirming, social norms and values (formalized in Moral Foundations Theory). This study begins by mapping the ecology of current emotional AI use in contexts such as the workplace, education, healthcare, and personal assistance. It then provides a brief review and critique of current applications of the TAM and Moral Foundations Theory in studying how humans judge smart technologies. Finally, we propose the Three-pronged Analytical Framework, offering recommendations on how future studies of technological acceptance could be conducted, from questionnaire design to building statistical models.
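The abstract does not specify a model form, but the framework it describes lends itself to regressing stated acceptance on TAM and Moral Foundations predictors. The sketch below is a hypothetical illustration only: the variable names, the linear specification, and the simulated data are assumptions, not the authors' method.

```python
# Hypothetical sketch: regressing stated acceptance of an emotional AI
# application on TAM predictors (perceived usefulness, ease of use) and
# Moral Foundations composites. All names and the linear form are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of survey respondents

df = pd.DataFrame({
    # TAM constructs, e.g. 1-7 Likert composites
    "usefulness": rng.uniform(1, 7, n),
    "ease_of_use": rng.uniform(1, 7, n),
    # Moral Foundations composites (care, fairness, loyalty, authority, purity)
    "care": rng.uniform(1, 7, n),
    "fairness": rng.uniform(1, 7, n),
    "loyalty": rng.uniform(1, 7, n),
    "authority": rng.uniform(1, 7, n),
    "purity": rng.uniform(1, 7, n),
})
# Simulated outcome: acceptance rises with TAM scores, falls with perceived moral threat
df["acceptance"] = (
    0.6 * df["usefulness"] + 0.3 * df["ease_of_use"]
    - 0.2 * df["care"] - 0.1 * df["fairness"]
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "acceptance ~ usefulness + ease_of_use + care + fairness + loyalty + authority + purity",
    data=df,
).fit()
print(model.summary())
```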

4.
AI Soc ; : 1-7, 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36776535

ABSTRACT

This paper interrogates the growing pervasiveness of affect recognition tools as an emerging layer of human-centric automated management in the global workplace. While vendors, driven by neoliberal incentives, tout emotion-recognition technology as a pre-eminent tool of workplace wellness, we argue that emotional AI recalibrates the horizons of capital not by expanding outward into the consumer realm (as surveillance capitalism does). Rather, as a new genus of digital Taylorism, it turns inward, passing through the corporeal exterior to extract greater surplus value and managerial control from the affective states of workers. Thus, empathic surveillance signals a profound shift in the ontology of human labor relations. In the emotionally quantified workplace, employees are no longer seen simply as physical capital, but as conduits of actuarial and statistical intelligence gleaned from their most intimate subjective states. As a result, affect-driven automated management means that priority is often given to actuarial rather than human-centered managerial decisions.

5.
AI Soc ; 38(1): 97-119, 2023.
Article in English | MEDLINE | ID: mdl-34776651

ABSTRACT

Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring, monitoring, and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location, software that scrutinizes browser activity and keystrokes, and heat/motion sensors that monitor workstation presence. Today, however, a new generation of biometric devices has emerged that can sense, read, monitor, and evaluate the affective state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology stems from advancements in affective computing. But whereas previous generations of biometric monitoring targeted the exterior physical body of the worker, we argue, in line with the writings of Foucault and Hardt, that emotion-recognition tools signal a far more invasive disciplinary gaze that exposes and makes vulnerable the inner regions of the worker-self. Our paper explores attitudes towards empathic surveillance by analyzing, with Bayesian statistics, a survey of 1,015 responses from future job-seekers in 48 countries. Our findings reveal that affect tools, left unregulated in the workplace, may lead to heightened stress and anxiety among disadvantaged ethnic, gender, and income groups. We also discuss a stark cross-cultural discrepancy whereby East Asian subjects, compared to Western subjects, are more likely to profess a trusting attitude toward emotional AI-enabled automated management. While this emerging technology is driven by neoliberal incentives to optimize the worksite and increase productivity, empathic surveillance may ultimately create more problems in terms of algorithmic bias, opaque decisionism, and the erosion of employment relations. Thus, this paper nuances and extends the emerging literature on emotion-sensing technologies in the workplace, particularly through its highly original cross-cultural study. Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-021-01290-1.
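The abstract reports only that Bayesian statistics were applied to the 1,015 survey responses; the specific models are in the paper and its supplementary material. As a hedged illustration of what a Bayesian comparison of group-level trust might look like, the sketch below uses a conjugate Beta-Binomial model with entirely hypothetical counts and group labels.

```python
# Minimal sketch of a Bayesian comparison of trust in emotional AI between two
# respondent groups, using a conjugate Beta-Binomial model. The counts, group
# labels, and model choice are hypothetical, not the study's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical counts: respondents expressing trust out of each group total
trust_east_asian, n_east_asian = 180, 400
trust_western, n_western = 90, 300

# Beta(1, 1) priors updated with the observed counts
posterior_ea = stats.beta(1 + trust_east_asian, 1 + n_east_asian - trust_east_asian)
posterior_we = stats.beta(1 + trust_western, 1 + n_western - trust_western)

# Monte Carlo estimate of P(trust_EA > trust_Western) and a credible interval
samples_ea = posterior_ea.rvs(100_000, random_state=rng)
samples_we = posterior_we.rvs(100_000, random_state=rng)
diff = samples_ea - samples_we

print(f"P(East Asian trust > Western trust) = {np.mean(diff > 0):.3f}")
print(f"95% credible interval for the difference: "
      f"[{np.quantile(diff, 0.025):.3f}, {np.quantile(diff, 0.975):.3f}]")
```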

6.
Asian Bioeth Rev ; 13(4): 421-433, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34616496

ABSTRACT

To evaluate the moral awareness of college students regarding artificial intelligence (AI) systems, we examined 467 surveys collected from 152 Japanese and 315 non-Japanese students at an international university in Japan. The students were asked to choose the most significant moral problem of future AI applications from a list of ten ethical issues and to write an essay about it. The results show that most of the students (n = 269, 58%) considered unemployment to be the major ethical issue related to AI. The second largest group of students (n = 54, 12%) was concerned with ethical issues related to emotional AI, including the impact of AI on human behavior and emotion and robots' rights and emotions. A relatively small number of students referred to the risk of social control by AI (6%), AI discrimination (6%), increasing inequality (5%), loss of privacy (4%), AI mistakes (3%), malicious AI (3%), and AI security breaches (3%). Calculation of the z score for two population proportions shows that Japanese students were much less concerned about AI control of society (z = -3.1276, p < 0.01) than non-Japanese students, but more concerned about discrimination (z = 2.2757, p < 0.05). Female students were less concerned about unemployment (z = -2.6108, p < 0.01) than male students, but more concerned about discrimination (z = 2.4333, p < 0.05). The study concludes that the moral awareness of college students regarding AI technologies is quite limited, and it recommends including the ethics of AI in the curriculum.
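The reported statistics come from the standard two-proportion z test. A minimal sketch of that calculation is below; the per-group counts behind the published z scores are not given in the abstract, so the example numbers are hypothetical.

```python
# Two-proportion z test (pooled), as described in the abstract.
# The example counts are hypothetical; only the group sizes (152 Japanese,
# 315 non-Japanese) are taken from the abstract.
from math import sqrt
from scipy import stats

def two_proportion_z(successes_1, n_1, successes_2, n_2):
    """Return the pooled two-proportion z statistic and two-sided p value."""
    p1, p2 = successes_1 / n_1, successes_2 / n_2
    pooled = (successes_1 + successes_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p1 - p2) / se
    p_value = 2 * stats.norm.sf(abs(z))
    return z, p_value

# Hypothetical example: 4 of 152 Japanese vs 24 of 315 non-Japanese students
# naming "AI control of society" as the top moral problem.
z, p = two_proportion_z(4, 152, 24, 315)
print(f"z = {z:.4f}, p = {p:.4f}")
```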
