ABSTRACT
One characteristic of human nature is the ability to align our behavior with that of others. Previous research has linked alexithymia to poor communication skills, suggesting that individuals with high alexithymia may not adhere to the principles of social alignment. One form of cognitive alignment is reaching consensus with a group, yet little research attention has been given to the possible link between alexithymia and this form of cognitive social alignment. In this study, we address this gap by investigating the association between consensus-reaching abilities and alexithymia. A sample of 122 participants completed the Toronto Alexithymia Scale and then played a specially designed game called "Consensus under a deadline". In each game, a participant played with either seven bots designed to act rationally and always seek consensus, or with seven other participants; participants were unaware of whom they were playing with. The results confirm the link between alexithymia and impaired cognitive social alignment: the cognitive component of alexithymia, externally oriented thinking (EOT), was associated with a deficit in reaching consensus with humans (who sometimes act irrationally). This association was not evident, however, when the other group members were bots (which always act rationally).
Subject(s)
Affective Symptoms, Attention, Consensus, Humans

ABSTRACT
Accurately tailored support, such as advice or assistance, can increase user satisfaction with smart devices; however, achieving high accuracy requires the device to obtain and exploit private user data, potentially jeopardizing confidential user information. We provide an analysis of this privacy-accuracy trade-off. We assume two positive correlations: a user's utility from a device is positively correlated both with the user's privacy risk and with the quality of the advice or assistance the device offers. The extent of the privacy risk is unknown to the user, so privacy-concerned users might choose not to interact with devices they deem unsafe. We suggest that during the first period of usage the device should not employ its full advisory capabilities, since doing so may deter users from adopting it. Using three analytical propositions, we derive an optimal policy for a smart device's exploitation of private data in its interactions with users.
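The trade-off described in this abstract can be illustrated with a minimal toy model. The functional forms, names, and parameter values below are illustrative assumptions, not the paper's actual formulation: advice quality and privacy risk both increase with the amount of private data exploited, and a user's net utility is the quality benefit minus a risk-weighted privacy cost, so the device's optimal data-exploitation level is lower for more privacy-concerned users.

```python
# Hypothetical toy model of the privacy-accuracy trade-off.
# All functional forms here are illustrative assumptions.

def advice_quality(data_level: float) -> float:
    """Quality of advice grows with the amount of private data used."""
    return data_level  # linear, for simplicity

def privacy_risk(data_level: float) -> float:
    """Privacy risk also grows with data exploitation."""
    return data_level ** 2  # assumed to grow faster than quality

def user_utility(data_level: float, risk_weight: float) -> float:
    """Net utility: benefit from advice minus weighted privacy risk."""
    return advice_quality(data_level) - risk_weight * privacy_risk(data_level)

def best_data_level(risk_weight: float, grid_steps: int = 1000) -> float:
    """Grid-search the data-exploitation level in [0, 1] maximizing utility."""
    candidates = [i / grid_steps for i in range(grid_steps + 1)]
    return max(candidates, key=lambda d: user_utility(d, risk_weight))

# A more privacy-concerned user (higher risk_weight) is best served by a
# device that restrains its data exploitation, consistent with the
# suggestion to limit advisory capability early on.
cautious = best_data_level(risk_weight=2.0)   # → 0.25
trusting = best_data_level(risk_weight=0.5)   # → 1.0
```

With these assumed forms the optimum is analytic, `d* = 1 / (2 * risk_weight)` capped at 1, so the grid search merely makes the comparison concrete: the cautious user's optimal exploitation level (0.25) is well below the trusting user's (1.0).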