ABSTRACT
Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision-making is the human tendency to underestimate large quantities. Previous research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study whether it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects the sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions successfully countered the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are a human tendency to herd, to trust estimates larger than one's own more than smaller ones, and to follow disparate social information less. Using a computational modeling approach, we demonstrate that these effects are indeed key to explaining the experimental results. Overall, our results show that existing knowledge of biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases that threaten collective systems.
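The asymmetric trust reported above (estimates larger than one's own weigh more than smaller ones) can be sketched as a log-space update rule. This is a minimal illustration, not the study's fitted model; the weights `s_up` and `s_down` are made-up placeholders.

```python
import math

def revised_estimate(personal, social, s_up=0.6, s_down=0.3):
    """Revise a personal estimate toward a social estimate in log space.

    The asymmetry s_up > s_down encodes the reported tendency to trust
    estimates larger than one's own more than smaller ones; both weights
    are illustrative, not values fitted in the study.
    """
    s = s_up if social > personal else s_down
    # Geometric interpolation: quantity estimates are handled on a log scale.
    return math.exp((1 - s) * math.log(personal) + s * math.log(social))
```

With these placeholder weights, a social estimate ten times larger than one's own pulls the revision further (in relative terms) than a social estimate ten times smaller, which is the asymmetry that helps counter underestimation.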
Subjects
Bias, Social Media, Decision Making, Humans, Information Dissemination
ABSTRACT
Social information use is widespread in the animal kingdom, helping individuals rapidly acquire useful knowledge and adjust to novel circumstances. In humans, the highly interconnected world provides ample opportunities to benefit from social information but also requires navigating complex social environments with people holding disparate or conflicting views. It is, however, still largely unclear how people integrate information from multiple social sources that (dis)agree with them, and among each other. We address this issue in three steps. First, we present a judgement task in which participants could adjust their judgements after observing the judgements of three peers. We experimentally varied the distribution of this social information, systematically manipulating its variance (extent of agreement among peers) and its skewness (peer judgements clustering either near or far from the participant's judgement). As expected, higher variance among peers reduced their impact on behaviour. Importantly, observing a single peer confirming a participant's own judgement markedly decreased the influence of other, more distant peers. Second, we develop a framework for modelling the cognitive processes underlying the integration of disparate social information, combining Bayesian updating with simple heuristics. Our model accurately accounts for observed adjustment strategies and reveals that people particularly heed social information that confirms personal judgements. Moreover, the model exposes strong inter-individual differences in strategy use. Third, using simulations, we explore the possible implications of the observed strategies for belief updating. These simulations show how confirmation-based weighting can hamper the influence of disparate social information, exacerbate filter bubble effects and deepen group polarization. Overall, our results clarify what aspects of the social environment are, and are not, conducive to changing people's minds.
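A minimal sketch of the confirmation-based weighting described above, assuming (purely for illustration) that each peer judgement is weighted by an exponential kernel of its distance from one's own judgement; the decay scale `tau` is a made-up parameter, not one estimated in the study.

```python
import math

def adjusted_judgement(own, peers, tau=10.0):
    """Weighted mean of own and peer judgements, where a peer's weight
    decays exponentially with its distance from the own judgement.

    This qualitatively reproduces the finding that a confirming peer
    dampens the pull of more distant peers. tau is illustrative.
    """
    weights = [1.0] + [math.exp(-abs(p - own) / tau) for p in peers]
    values = [own] + list(peers)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

With this kernel, adding a peer who agrees exactly with the participant roughly halves the shift produced by two distant peers, because the confirming peer carries full weight while the distant peers are discounted.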
Subjects
Social Environment, Adult, Bayes Theorem, Female, Humans, Judgment, Male
ABSTRACT
In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence increases with the difference between personal and group estimates, and permits us to define five robust behavioral traits. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance.
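The near-Cauchy distribution of log-estimates noted above implies heavy tails, so a robust aggregate such as the median of the log-estimates is a natural collective estimate. A minimal simulation sketch, where the true value and scale are illustrative choices rather than the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-estimates modelled as Cauchy-distributed around the log of the
# true value; location and scale here are illustrative assumptions.
true_value, scale = 1000.0, 0.5
log_estimates = np.log10(true_value) + scale * rng.standard_cauchy(10_000)

# The Cauchy distribution has no finite mean, so the sample mean of the
# raw estimates is unstable; the median of the log-estimates, by contrast,
# sits close to log10(true_value) and makes a robust collective estimate.
collective_estimate = 10 ** np.median(log_estimates)
```

The design point is that with heavy-tailed estimate distributions, aggregation must be done with robust statistics on the log scale rather than with arithmetic means of raw values.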
Subjects
Decision Making, Group Processes, Statistical Models, Social Networking, France, Humans, Japan, Knowledge
ABSTRACT
The recent developments of social networks and recommender systems have dramatically increased the amount of social information shared in human communities, challenging the human ability to process it. As a result, sharing aggregated forms of social information is becoming increasingly popular. However, it is unknown whether sharing aggregated information improves people's judgments more than sharing the full available information. Here, we compare the performance of groups in estimation tasks when social information is fully shared versus when it is first averaged and then shared. We find that improvements in estimation accuracy are comparable in both cases. However, our results reveal important differences in subjects' behaviour: (i) subjects follow the social information more when receiving an average than when receiving all estimates, and this effect increases with the number of estimates underlying the average; (ii) subjects follow the social information more when it is higher than their personal estimate than when it is lower. This effect is stronger when receiving all estimates than when receiving an average. We introduce a model that sheds light on these effects, and confirms their importance for explaining improvements in estimation accuracy in all treatments.
Subjects
Judgment, Social Networking, Humans
ABSTRACT
Competition for social influence is a major force shaping societies, from baboons guiding their troop in different directions, to politicians competing for voters, to influencers competing for attention on social media. Social influence is invariably a competitive exercise, with multiple influencers vying for the same audience. We study which strategy maximizes social influence under competition. Applying game theory to a scenario where two advisers compete for the attention of a client, we find that the rational solution for advisers is to communicate truthfully when favored by the client, but to lie when ignored. Across seven pre-registered studies testing 802 participants, such a strategic adviser consistently outcompeted an honest adviser. Strategic dishonesty outperformed truth-telling in swaying individual voters, the majority vote in anonymously voting groups, and the consensus vote in communicating groups. Our findings help explain the success of political movements that thrive on disinformation, and of vocal underdog politicians with no credible program.
ABSTRACT
A major problem resulting from the massive use of social media is the potential spread of incorrect information. Yet, very few studies have investigated the impact of incorrect information on individual and collective decisions. We performed experiments in which participants had to estimate a series of quantities, before and after receiving social information. Unbeknownst to them, we controlled the degree of inaccuracy of the social information through 'virtual influencers', who provided some incorrect information. We find that a large proportion of individuals only partially follow the social information, thus resisting incorrect information. Moreover, incorrect information can help improve group performance more than correct information, when going against a human underestimation bias. We then design a computational model whose predictions are in good agreement with the empirical data and which sheds light on the mechanisms underlying our results. Besides these main findings, we demonstrate that the dispersion of estimates varies substantially between quantities, and must thus be considered when normalizing and aggregating estimates of quantities that are very different in nature. Overall, our results suggest that incorrect information does not necessarily impair the collective wisdom of groups, and can even be used to dampen the negative effects of known cognitive biases.
Subjects
Social Behavior, Humans
ABSTRACT
In our digital societies, individuals massively interact through digital interfaces whose impact on collective dynamics can be important. In particular, the combination of social media filters and recommender systems can lead to the emergence of polarized and fragmented groups. In some social contexts, such segregation processes of human groups have been shown to share similarities with phase separation phenomena in physics. Here, we study the impact of information filtering on the collective segregation behaviour of human groups. We report a series of experiments where groups of 22 subjects have to perform a collective segregation task that mimics the tendency of individuals to bond with other similar individuals. More precisely, the participants are each assigned a colour (red or blue) unknown to them, and have to regroup with other subjects sharing the same colour. To assist them, they are equipped with an artificial sensory device capable of detecting the majority colour in their 'environment' (defined as their k nearest neighbours, unbeknownst to them), for which we control the perception range, k = 1, 3, 5, 7, 9, 11, 13. We study the separation dynamics (emergence of unicolour groups) and the properties of the final state, and show that the value of k controls the quality of the segregation, although the subjects are entirely unaware of the precise definition of the 'environment'. We also find that there is a perception range k = 7 above which the ability of the group to segregate does not improve. We introduce a model that precisely describes the random motion of a group of pedestrians in a confined space, and which faithfully reproduces the results of the segregation experiments and allows their interpretation. Finally, we discuss the strong and precise analogy between our experiment and the phase separation of two immiscible materials at very low temperature.
This article is part of the theme issue 'Multi-scale analysis and modelling of collective migration in biological systems'.
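The artificial sensory device described in the last abstract can be sketched as a majority vote over a subject's k nearest neighbours. The positions, labels, and Euclidean metric below are illustrative assumptions, not the experimental apparatus itself.

```python
import numpy as np

def sensed_majority(positions, colours, i, k):
    """Return the majority colour among the k nearest neighbours of
    subject i (Euclidean distance, the subject itself excluded).

    With odd k, as in the experiments (k = 1, 3, ..., 13), no tie occurs.
    positions: (n, 2) array of coordinates; colours: length-n 'red'/'blue' labels.
    """
    distances = np.linalg.norm(positions - positions[i], axis=1)
    distances[i] = np.inf                 # a subject is not its own neighbour
    nearest = np.argsort(distances)[:k]   # indices of the k nearest others
    reds = sum(colours[j] == "red" for j in nearest)
    return "red" if reds > k - reds else "blue"
```

Increasing k widens the perception range the device reports on, which is the single parameter the experiments vary to control segregation quality.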