Results 1 - 8 of 8
1.
J Exp Psychol Gen; 153(5): 1309-1335, 2024 May.
Article in English | MEDLINE | ID: mdl-38647480

ABSTRACT

Robots' proliferation throughout society offers many opportunities and conveniences. However, our ability to effectively employ these machines relies heavily on our perceptions of their competence. In six studies (N = 2,660), participants played a competitive game with a robot to learn about its capabilities. After the learning experience, we measured explicit and implicit competence impressions to investigate how they reflected the learning experience. We observed two distinct dissociations between people's implicit and explicit competence impressions. Firstly, explicit impressions were uniquely sensitive to oddball behaviors. Implicit impressions only incorporated unexpected behaviors when they were moderately prevalent. Secondly, after forming a strong initial impression, explicit, but not implicit, impression updating demonstrated a positivity bias (i.e., an overvaluation of competence information). These findings suggest that the same learning experience with a robot is expressed differently at the implicit versus explicit level. We discuss implications from a social cognitive perspective, and how this work may inform emerging work on psychology toward robots. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
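
To make the two updating regimes concrete, here is a toy error-driven sketch (not the authors' model; the learning rates and behavior stream are invented for illustration) in which the "explicit" learner develops a positivity bias after a strong initial impression, while the "implicit" learner keeps updating symmetrically:

```python
# Toy sketch of asymmetric impression updating (illustrative only).
def update(impression, outcome, lr_pos, lr_neg):
    """One delta-rule step; outcome is 1 (competent act) or 0 (failure)."""
    error = outcome - impression
    lr = lr_pos if error > 0 else lr_neg
    return impression + lr * error

outcomes = [1, 1, 1, 1, 0, 0, 1, 1, 0, 1]  # hypothetical robot behavior stream
implicit = explicit = 0.5
for t, o in enumerate(outcomes):
    implicit = update(implicit, o, 0.10, 0.10)  # symmetric updating
    # After a strong first impression (trial 5 onward), the explicit learner
    # down-weights competence-disconfirming evidence: a positivity bias.
    lr_neg = 0.08 if t >= 4 else 0.20
    explicit = update(explicit, o, 0.20, lr_neg)
    print(f"trial {t + 1}: implicit={implicit:.2f}  explicit={explicit:.2f}")
```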


Subjects
Judgment, Robotics, Social Perception, Humans, Robotics/instrumentation, Male, Female, Adult, Young Adult, Learning
3.
Sci Rep; 13(1): 17957, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37864003

ABSTRACT

Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare in networks when it ignores the specific network mechanisms of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally, and stingy bots, which share no resources at all. We also manipulate the bots' network position. We show that reciprocal bots do little to change the unequal distribution of resources among people. Stingy bots, on the other hand, balance structural power and improve collective welfare in human groups when placed in a specific network position, even though they bestow no wealth on people. Our findings highlight the need to incorporate the human tendencies toward reciprocity and relational interdependence when designing machine behavior in sharing networks; conscientious machines do not always work for human welfare, depending on the network structure in which they interact.
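
The sharing game below is a minimal sketch of the mechanics only: the ring topology, tit-for-tat humans, and the multiplier on shared resources are all simplifying assumptions, and the toy is not expected to reproduce the paper's findings, which depend on the exchange structure used there.

```python
# Minimal sketch of a networked resource-sharing game with one bot.
def gini(x):
    """Gini coefficient of a wealth distribution (0 = perfect equality)."""
    x = sorted(x)
    n = len(x)
    cum = sum((i + 1) * v for i, v in enumerate(x))
    return (2 * cum) / (n * sum(x)) - (n + 1) / n

def play(bot_policy, rounds=20, n=6, bot=0):
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring network
    wealth = [10.0] * n
    gave = {(i, j): True for i in range(n) for j in neighbors[i]}  # all start generous
    for _ in range(rounds):
        gifts = []
        for i in range(n):
            for j in neighbors[i]:
                if i == bot:
                    share = bot_policy(gave[(j, i)])
                else:
                    share = gave[(j, i)]  # humans modeled as pure reciprocators
                if share:
                    gifts.append((i, j))
        gave = {k: False for k in gave}
        for i, j in gifts:
            wealth[i] -= 1.0
            wealth[j] += 1.5  # shared resources are assumed to gain value in transfer
            gave[(i, j)] = True
    return gini(wealth)

print("reciprocal bot:", round(play(lambda got: got), 3))    # shares if shared with
print("stingy bot:    ", round(play(lambda got: False), 3))  # never shares
```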


Subjects
Artificial Intelligence, Software, Humans, Cooperative Behavior, Social Networking, Social Security
4.
Sci Rep; 13(1): 5487, 2023 Apr 4.
Article in English | MEDLINE | ID: mdl-37015964

ABSTRACT

Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI's negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions ("smart replies"), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another, in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships: it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected of using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits when its use is overt.
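
As a minimal sketch of what such a recommender does (the canned replies and the positivity heuristic are invented for illustration; the paper does not specify its system's internals):

```python
# Toy smart-reply suggester (illustrative; not the system studied).
POSITIVE = {"great", "thanks", "sure", "happy", "love", "good", "sounds"}

CANNED_REPLIES = [
    "Sounds good, thanks!",
    "Sure, happy to help.",
    "I can't make it.",
    "Let me check and get back to you.",
]

def suggest(message: str, k: int = 3) -> list[str]:
    """Rank canned replies by a crude positivity score, highest first.
    A real system would also condition on the incoming message."""
    def positivity(reply: str) -> int:
        words = reply.lower().replace(",", " ").replace("!", " ").split()
        return sum(w in POSITIVE for w in words)
    return sorted(CANNED_REPLIES, key=positivity, reverse=True)[:k]

print(suggest("Can we meet tomorrow at 10?"))
```

The built-in skew toward positive replies in this toy mirrors the paper's observation that algorithmic responses increase positive emotional language.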


Subjects
Artificial Intelligence, Interpersonal Relations, Humans, Communication, Language, Emotions
5.
JMIR Mhealth Uhealth; 8(12): e21703, 2020 Dec 4.
Article in English | MEDLINE | ID: mdl-33275106

ABSTRACT

BACKGROUND: Inhibitory control, or inhibition, is one of the core executive functions of humans. It contributes to our attention, performance, and physical and mental well-being. Our inhibitory control is modulated by various factors and therefore fluctuates over time. Being able to continuously and unobtrusively assess our inhibitory control and understand the mediating factors may allow us to design intelligent systems that help manage our inhibitory control and, ultimately, our well-being. OBJECTIVE: The aim of this study is to investigate whether we can assess individuals' inhibitory control using an unobtrusive and scalable approach and to identify digital markers that are predictive of changes in inhibitory control. METHODS: We developed InhibiSense, an app that passively collects the following information: users' behaviors based on their phone use and sensor data, the ground truths of their inhibitory control measured with stop-signal tasks (SSTs) and ecological momentary assessments (EMAs), and heart rate information transmitted from a wearable heart rate monitor (Polar H10). We conducted a 4-week in-the-wild study in which participants were asked to install InhibiSense on their phones and wear a Polar H10. We used generalized estimating equation (GEE) and gradient boosting tree models fitted with features extracted from participants' phone use and sensor data to predict their stop-signal reaction time (SSRT), an objective metric used to measure an individual's inhibitory control, and to identify the predictive digital markers. RESULTS: A total of 12 participants completed the study, and 2189 EMA and SST responses were collected. The results from the GEE models suggest that the top digital markers positively associated with an individual's SSRT include phone use burstiness (P=.005), the mean duration between two consecutive phone use sessions (P=.02), the rate of change of the battery level when the phone was not charging (P=.04), and the frequency of incoming calls (P=.03). The top digital markers negatively associated with SSRT include the standard deviation of acceleration (P<.001), the frequency of short phone use sessions (P<.001), the mean duration of incoming calls (P<.001), the mean decibel level of ambient noise (P=.007), and the percentage of time the phone was connected to the internet through a mobile network (P=.001). No significant correlation was found between participants' objective and subjective measurements of inhibitory control. CONCLUSIONS: We identified phone-based digital markers that were predictive of changes in inhibitory control and characterized how they were positively or negatively associated with a person's inhibitory control. The results corroborate the findings of previous studies, which suggest that inhibitory control can be assessed continuously and unobtrusively in the wild. We discuss potential applications of the system and how technological interventions can be designed to help manage inhibitory control.
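
As a sketch of one marker and the analysis: burstiness is computed below with the common (sigma - mu)/(sigma + mu) definition over inter-event gaps, which is an assumption about the paper's exact feature, and the data frame and column names are hypothetical:

```python
# Sketch: a burstiness feature and a GEE fit (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def burstiness(event_times):
    """B in [-1, 1]: -1 ~ periodic, 0 ~ Poisson-like, +1 ~ bursty."""
    gaps = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = gaps.mean(), gaps.std()
    return (sigma - mu) / (sigma + mu)

taps = [0.0, 0.4, 0.5, 3.2, 3.3, 9.0]  # hypothetical phone-use timestamps (s)
print("example burstiness:", round(burstiness(taps), 2))

# Hypothetical per-session features, clustered by participant for the GEE.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "burstiness":  [0.1, 0.3, 0.2, 0.5, 0.6, 0.4, 0.0, 0.2, 0.1],
    "ssrt_ms":     [210, 250, 230, 290, 310, 270, 200, 240, 215],
})
model = sm.GEE.from_formula(
    "ssrt_ms ~ burstiness",
    groups="participant",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```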


Subjects
Inhibition, Psychological, Smartphone, Adolescent, Adult, Ecological Momentary Assessment, Female, Humans, Longitudinal Studies, Male, Mental Health, Telemedicine/methods, Young Adult
6.
Front Psychol; 8: 1366, 2017.
Article in English | MEDLINE | ID: mdl-28912736

ABSTRACT

Robots intended for social contexts are often designed with explicitly humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley" (a phenomenon in which highly humanlike entities provoke aversion in human observers) has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots by manipulating the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and of the visual features that make certain humanlike robots unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, even more so, atypicality provoke aversive responding, shedding light on the visual factors that drive people's discomfort. (3) The Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting that positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.
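
One simple way to probe for a valley in such data is to fit a low-order polynomial of aversion against human similarity and look for an interior peak; the ratings below are fabricated purely to illustrate the shape and are not taken from the study:

```python
# Illustrative valley check: cubic fit of aversion vs. human similarity.
import numpy as np

similarity = np.linspace(0.0, 1.0, 9)  # 0 = clearly mechanical, 1 = clearly human
aversion = np.array([0.30, 0.25, 0.22, 0.28, 0.55, 0.80, 0.60, 0.25, 0.10])  # made up

coeffs = np.polyfit(similarity, aversion, deg=3)
curve = np.poly1d(coeffs)
grid = np.linspace(0.0, 1.0, 201)
peak = grid[np.argmax(curve(grid))]
# A peak in aversion at high-but-not-full similarity is the "valley" in affinity.
print(f"fitted aversion peaks at similarity ~ {peak:.2f}")
```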

7.
GetMobile; 21(2): 22-25, 2017 Jun.
Article in English | MEDLINE | ID: mdl-30923745

ABSTRACT

Previous studies indicate that the way we perceive our bodily signals, such as our heart rate, can influence how we feel. Inspired by these studies, we built EmotionCheck, a wearable device that can change users' perception of their heart rate through subtle vibrations on the wrist. The results of an experiment with 67 participants show that EmotionCheck can help users regulate their anxiety through false feedback of a slow heart rate.
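
A minimal sketch of the false-feedback idea (the vibrate() stub is hypothetical hardware glue; the actual device paces subtle wrist vibrations, not console prints):

```python
# Sketch: pulse a wrist vibration at a fixed slow rate (60 bpm),
# regardless of the wearer's actual heart rate.
import time

SLOW_BPM = 60  # the "false" slow heart rate conveyed to the wearer

def vibrate(duration_s):
    """Hypothetical stand-in for driving a wrist vibration motor."""
    print(f"bzz ({duration_s:.2f}s)")

def false_heartbeat(beats=5):
    period = 60.0 / SLOW_BPM
    for _ in range(beats):
        vibrate(0.10)               # short tap, like a heartbeat
        time.sleep(period - 0.10)   # wait out the rest of the beat

false_heartbeat()
```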

8.
Proc ACM Int Conf Ubiquitous Comput; 2015: 719-730, 2015 Sep.
Article in English | MEDLINE | ID: mdl-30294729

ABSTRACT

Persuasive technologies aim to influence users' behaviors. To be effective, many of the persuasive technologies developed so far rely on users' motivation and ability, which are highly variable and often the reason such technologies fail. In this paper, we present the concept of Mindless Computing, a new approach to persuasive technology design. Mindless Computing brings theories and concepts from psychology and behavioral economics into the design of technologies for behavior change. We show through a systematic review that most current persuasive technologies do not utilize fast, automatic mental processes for behavior change, and that there is an opportunity for persuasive technology designers to develop systems that are less reliant on users' motivation and ability. We describe two examples of mindless technologies and present pilot studies with encouraging results. Finally, we discuss design guidelines and considerations for developing this type of persuasive technology.
