1.
Nat Hum Behav ; 7(11): 1855-1868, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37985914

ABSTRACT

The ability of humans to create and disseminate culture is often credited as the single most important factor in our success as a species. In this Perspective, we explore the notion of 'machine culture', culture mediated or generated by machines. We argue that intelligent machines simultaneously transform the cultural evolutionary processes of variation, transmission and selection. Recommender algorithms are altering social learning dynamics. Chatbots are forming a new mode of cultural transmission, serving as cultural models. Furthermore, intelligent machines are evolving as contributors to the generation of cultural traits, from game strategies and visual art to scientific results. We provide a conceptual framework for studying the present and anticipated future impact of machines on cultural evolution, and present a research agenda for the study of machine culture.


Subject(s)
Cultural Evolution , Hominidae , Humans , Animals , Culture , Learning
2.
Nat Commun ; 13(1): 5821, 2022 Oct 03.
Article in English | MEDLINE | ID: mdl-36192416

ABSTRACT

As Artificial Intelligence (AI) proliferates across important social institutions, many of the most powerful AI systems available are difficult to interpret for end-users and engineers alike. Here, we sought to characterize public attitudes towards AI interpretability. Across seven studies (N = 2475), we demonstrate robust and positive attitudes towards interpretable AI among non-experts that generalize across a variety of real-world applications and follow predictable patterns. Participants value interpretability positively across different levels of AI autonomy and accuracy, and rate interpretability as more important for AI decisions involving high stakes and scarce resources. Crucially, when AI interpretability trades off against AI accuracy, participants prioritize accuracy over interpretability under the same conditions driving positive attitudes towards interpretability in the first place: amidst high stakes and scarce resources. These attitudes could drive a proliferation of AI systems making high-impact ethical decisions that are difficult to explain and understand.


Subject(s)
Artificial Intelligence , Public Opinion , Attitude , Humans
3.
Pers Soc Psychol Bull ; : 1461672221092273, 2022 May 07.
Article in English | MEDLINE | ID: mdl-35532002

ABSTRACT

Helping acts, however well intended and beneficial, sometimes involve immoral means or immoral helpers. Here, we explore whether help recipients consider moral evaluations in their appraisals of gratitude, a possibility that has been neglected by existing accounts of gratitude. Participants felt less grateful and more uneasy when offered immoral help (Study 1, N = 150), and when offered morally neutral help by an immoral helper (Study 2, N = 172). In response to immoral help or helpers, participants were less likely to accept the help and less willing to strengthen their relationship with the helper, even when they accepted it. Study 3 (N = 276) showed that observers perceived recipients who felt grateful when offered immoral help as less likable, less moral, and less suitable as close relationship partners than recipients who felt uneasy. Our results demonstrate that gratitude is morally sensitive and suggest that this sensitivity might be socially adaptive.

4.
Psychol Sci ; 32(11): 1842-1855, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34705578

ABSTRACT

Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual's valuation of others' well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.


Subject(s)
Decision Making , Methylphenidate , Adult , Brain , Dopamine , Humans , Risk-Taking
5.
Bioethics ; 35(9): 932-946, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34464476

ABSTRACT

In a world with limited resources, allocation of resources to certain individuals and conditions inevitably means fewer resources allocated to other individuals and conditions. Should a patient's personal responsibility be relevant to decisions regarding allocation? In this project we combine the normative and the descriptive, conducting an empirical bioethical examination of how both Norwegian and British doctors think about principles of responsibility in allocating scarce healthcare resources. A large proportion of doctors in both countries supported including responsibility for illness in prioritization decisions. This finding was more prominent in zero-sum scenarios, where allocation to one patient means that another patient is denied treatment. There was most support for incorporating prospective responsibility (through patient contracts), and low support for integrating responsibility into co-payments (i.e., requiring patients deemed responsible to pay part of the costs of treatment). Finally, some behaviours were considered more appropriate grounds for deprioritization (smoking, alcohol, drug use), potentially because of the certainty of impact and direct link to ill health. In zero-sum situations, prognosis also influenced prioritization (but did not outweigh responsibility). Ethical implications are discussed. We argue that the role that responsibility constructs appear to play in doctors' decisions indicates a need for more nuanced, and clearer, policy. Such policy should account for the distinctions we draw between responsibility-sensitive and prognostic justifications for deprioritization.


Subject(s)
Physicians , Delivery of Health Care , Health Facilities , Humans , Prospective Studies
7.
J Med Ethics ; 46(12): 815-826, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32978306

ABSTRACT

Controlled Human Infection Model (CHIM) research involves infecting otherwise healthy participants with disease, often for the sake of vaccine development. The COVID-19 pandemic has emphasised the urgency of enhancing CHIM research capability and the importance of having clear ethical guidance for its conduct. The payment of CHIM participants is a controversial issue involving stakeholders across ethics, medicine and policymaking, with circulating allegations of exploitation, coercion and other violations of ethical principles. There are multiple approaches to payment: reimbursement, wage payment and unlimited payment. We introduce a new Payment for Risk Model, which involves paying for time, pain and inconvenience, and for risk associated with participation. We give philosophical arguments based on utility, fairness and avoidance of exploitation to support this. We also examined a cross-section of the UK public and CHIM experts and found that CHIM participants are currently paid variable amounts. A representative sample of the UK public believes CHIM participants should be paid approximately triple the UK minimum wage and should be paid for the risk they endure throughout participation. CHIM experts believe CHIM participants should be paid more than double the UK minimum wage but are divided on payment for risk. The Payment for Risk Model allows risk and pain to be accounted for in payment and could be used to determine ethically justifiable payment for CHIM participants. Although many research guidelines warn against paying large amounts or paying for risk, our findings provide empirical support to the growing number of ethical arguments challenging this status quo. We close by suggesting two ways (value of statistical life or consistency with risk in other employment) by which payment for risk could be calculated.


Subject(s)
Biomedical Research/organization & administration , COVID-19 Vaccines/administration & dosage , COVID-19/epidemiology , COVID-19/prevention & control , Healthy Volunteers/psychology , Attitude , Biomedical Research/ethics , Biomedical Research/standards , Cross-Sectional Studies , Humans , Pandemics , Public Opinion , Remuneration , SARS-CoV-2
8.
HEC Forum ; 31(4): 325-344, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31606869

ABSTRACT

Many parents are hesitant about, or face motivational barriers to, vaccinating their children. In this paper, we propose a type of vaccination policy that could be implemented either in addition to coercive vaccination or as an alternative to it in order to increase paediatric vaccination uptake in a non-coercive way. We propose the use of vaccination nudges that exploit the very same decision biases that often undermine vaccination uptake. In particular, we propose a policy under which children would be vaccinated at school or day-care by default, without requiring parental authorization, but with parents retaining the right to opt their children out of vaccination. We show that such a policy is (1) likely to be effective, at least in cases in which non-vaccination is due to practical obstacles, rather than to strong beliefs about vaccines, (2) ethically acceptable and less controversial than some alternatives because it is not coercive and affects individual autonomy only in a morally unproblematic way, and (3) likely to receive support from the UK public, on the basis of original empirical research we have conducted on the lay public.


Subject(s)
Day Care, Medical/methods , Health Policy , Schools/standards , Vaccination/methods , Anti-Vaccination Movement/psychology , Day Care, Medical/standards , Humans , Schools/trends , Vaccination/psychology , Vaccination/trends
10.
Nat Hum Behav ; 2(8): 573-580, 2018 Aug.
Article in English | MEDLINE | ID: mdl-31209312

ABSTRACT

Uncertainty about how our choices will affect others infuses social life. Past research suggests uncertainty has a negative effect on prosocial behaviour [1-12] by enabling people to adopt self-serving narratives about their actions [1,13]. We show that uncertainty does not always promote selfishness. We introduce a distinction between two types of uncertainty that have opposite effects on prosocial behaviour. Previous work focused on outcome uncertainty (uncertainty about whether or not a decision will lead to a particular outcome). However, as soon as people's decisions might have negative consequences for others, there is also impact uncertainty (uncertainty about how others' well-being will be impacted by the negative outcome). Consistent with past research [1-12], we found decreased prosocial behaviour under outcome uncertainty. In contrast, prosocial behaviour was increased under impact uncertainty in incentivized economic decisions and hypothetical decisions about infectious disease threats. Perceptions of social norms paralleled the behavioural effects. The effect of impact uncertainty on prosocial behaviour did not depend on the individuation of others or the mere mention of harm, and was stronger when impact uncertainty was made more salient. Our findings offer insights into communicating uncertainty, especially in contexts where prosocial behaviour is paramount, such as responding to infectious disease threats.
