Results 1 - 7 of 7

1.
Proc Natl Acad Sci U S A ; 120(18): e2213709120, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37094137

ABSTRACT

The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an AI assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood that participants continue to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.
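The contrast between the two distributive principles at stake can be made concrete with a short simulation. The sketch below is illustrative only: the player names, payoffs, productivities, and the function `allocate` are assumptions for exposition, not the paper's actual task.

```python
def allocate(players, budget, principle):
    """Distribute an AI assistant's help budget over players.

    players: dict name -> {"payoff": current earnings, "yield": return per unit of help}
    principle: "prioritize_worst_off" (Rawlsian maximin) or "maximize_total" (utilitarian)
    """
    players = {k: dict(v) for k, v in players.items()}
    for _ in range(budget):
        if principle == "prioritize_worst_off":
            # Help whoever currently earns the least, regardless of productivity.
            target = min(players, key=lambda p: players[p]["payoff"])
        else:
            # Help whoever converts assistance into the most extra payoff.
            target = max(players, key=lambda p: players[p]["yield"])
        players[target]["payoff"] += players[target]["yield"]
    return {p: players[p]["payoff"] for p in players}

group = {  # hypothetical positions and productivities in the group
    "A": {"payoff": 2, "yield": 1},
    "B": {"payoff": 5, "yield": 2},
    "C": {"payoff": 11, "yield": 4},
}
print(allocate(group, 5, "prioritize_worst_off"))  # narrows the gap
print(allocate(group, 5, "maximize_total"))        # maximizes the group sum
```

Behind the veil, a participant does not know whether they will end up as A, B, or C, which is what makes the maximin principle attractive.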


Subject(s)
Artificial Intelligence, Societies, Humans, Social Justice
2.
BMJ Open ; 14(3): e079105, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38490661

ABSTRACT

INTRODUCTION: For artificial intelligence (AI) to help improve mental healthcare, the design of data-driven technologies needs to be fair, safe, and inclusive. Participatory design can play a critical role in empowering marginalised communities to take an active role in constructing research agendas and outputs. Given the unmet needs of the LGBTQI+ (Lesbian, Gay, Bisexual, Transgender, Queer and Intersex) community in mental healthcare, there is a pressing need for participatory research to include a range of diverse queer perspectives on issues of data collection and use (in routine clinical care as well as for research) as well as on AI design. Here we propose a protocol for a Delphi consensus process for the development of PARticipatory Queer AI Research for Mental Health (PARQAIR-MH) practices, aimed at informing digital health practices and policy.

METHODS AND ANALYSIS: The development of PARQAIR-MH comprises four stages. In stage 1, a review of recent literature and a fact-finding consultation with stakeholder organisations will be conducted to define the terms of reference for stage 2, the Delphi process. Our Delphi process consists of three rounds, where the first two rounds iterate and identify items to be included in the final Delphi survey for consensus ratings. Stage 3 consists of consensus meetings to review and aggregate the Delphi survey responses, leading to stage 4, where we will produce a reusable toolkit to facilitate participatory development of future bespoke LGBTQI+-adapted data collection, harmonisation, and use for data-driven AI applications specifically in mental healthcare settings.

ETHICS AND DISSEMINATION: PARQAIR-MH aims to deliver a toolkit that will help to ensure that the specific needs of LGBTQI+ communities are accounted for in mental health applications of data-driven technologies. The study is expected to run from June 2024 through January 2025, with the final outputs delivered in mid-2025. Participants in the Delphi process will be recruited by snowball and opportunistic sampling via professional networks and social media (but not by direct approach to healthcare service users, patients, specific clinical services, or via clinicians' caseloads). Participants will not be required to share personal narratives and experiences of healthcare or treatment for any condition. Before agreeing to participate, people will be given information about the issues considered to be in scope for the Delphi (eg, developing best practices and methods for collecting and harmonising sensitive characteristics data; developing guidelines for data use/reuse), alongside specific risks of unintended harm from participating that can be reasonably anticipated. Outputs will be made available in open-access peer-reviewed publications, blogs, social media, and on a dedicated project website for future reuse.
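To picture the consensus-rating step of a Delphi round, a toy aggregation is sketched below. The 1-9 rating scale, the 7-9 "agree" band, and the 75% threshold are common Delphi conventions assumed here for illustration, not figures taken from the PARQAIR-MH protocol.

```python
from statistics import median

def delphi_item_status(ratings, lo=7, hi=9, threshold=0.75):
    """Classify one survey item from a round of panellist ratings on a 1-9 scale.

    Consensus to include: at least `threshold` of ratings fall in the agree band [lo, hi].
    Otherwise the item is carried into the next round, with its median fed back to panellists.
    """
    agree = sum(lo <= r <= hi for r in ratings) / len(ratings)
    if agree >= threshold:
        return "include"
    return f"re-rate next round (median so far: {median(ratings)})"

print(delphi_item_status([8, 9, 7, 8, 9, 6, 8, 7]))  # broad agreement -> include
print(delphi_item_status([3, 9, 5, 8, 2, 7, 6, 4]))  # dispersed ratings -> carried forward
```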


Subject(s)
Mental Health, Sexual and Gender Minorities, Female, Humans, Delphi Technique, Artificial Intelligence, Data Collection, Review Literature as Topic
3.
iScience ; 26(9): 107603, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37636048

ABSTRACT

[This corrects the article DOI: 10.1016/j.isci.2023.107256.].

4.
iScience ; 26(8): 107256, 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37520710

ABSTRACT

Artificial intelligence (A.I.) increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This challenges both the deployment of beneficial A.I. technology and the development of deep learning systems that depend on humans for oversight, direction, and regulation. Nine studies (N = 3,300) demonstrate that social-cognitive processes guide human interactions across a diverse range of real-world A.I. systems. Across studies, perceived warmth and competence emerge prominently in participants' impressions of A.I. systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence and autonomy. In particular, participants perceive systems that optimize interests aligned with human interests as warmer and systems that operate independently from human direction as more competent. Finally, a prisoner's dilemma game shows that warmth and competence judgments predict participants' willingness to cooperate with a deep-learning system. These results underscore the generality of intent detection to perceptions of a broad array of algorithmic actors.
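A minimal sketch of the final analysis idea: use warmth and competence ratings of an A.I. system to predict whether a participant cooperates in a prisoner's dilemma. The coefficients, rating scale, and example systems below are invented for illustration; only the direction of the relationship (warmer and more competent systems attract more cooperation) follows the abstract.

```python
import math

def cooperation_probability(warmth, competence, b0=-4.0, b_w=0.6, b_c=0.4):
    """Toy logistic model: ratings on a 1-7 scale -> probability of cooperating.
    The coefficients are illustrative assumptions, not estimates from the paper."""
    z = b0 + b_w * warmth + b_c * competence
    return 1.0 / (1.0 + math.exp(-z))

for system, (w, c) in {
    "assistant aligned with the user": (6, 5),  # perceived as warm and fairly competent
    "fully autonomous trading system": (2, 6),  # perceived as competent but cold
}.items():
    print(f"{system}: P(cooperate) = {cooperation_probability(w, c):.2f}")
```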

5.
Nat Hum Behav ; 7(10): 1787-1796, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37679439

ABSTRACT

Effective approaches to encouraging group cooperation are still an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a 'social planner' capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small, highly cooperative neighbourhoods.
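The conciliatory strategy can be caricatured by a simple rewiring rule: rather than isolating defectors, reconnect them to a small, highly cooperative neighbourhood. The sketch below is a hand-written heuristic standing in for the trained deep RL policy; the graph, cooperation rates, threshold, and neighbourhood size are assumptions.

```python
def recommend_rewiring(adj, coop_rate, neighbourhood_size=3, coop_threshold=0.6):
    """Heuristic stand-in for the learned social planner.

    adj: dict node -> set of neighbours (undirected graph)
    coop_rate: dict node -> fraction of recent rounds in which the node cooperated
    Returns a list of (action, u, v) edge recommendations.
    """
    recommendations = []
    # Cooperators, most cooperative first, form the candidate neighbourhoods.
    cooperators = sorted((n for n in adj if coop_rate[n] >= coop_threshold),
                         key=lambda n: coop_rate[n], reverse=True)
    for node in adj:
        if coop_rate[node] >= coop_threshold:
            continue  # only defectors get moved
        # Break ties between defectors (node < nb avoids listing an edge twice)...
        for nb in adj[node]:
            if coop_rate[nb] < coop_threshold and node < nb:
                recommendations.append(("break", node, nb))
        # ...and place the defector among cooperators rather than isolating it.
        for coop in cooperators[:neighbourhood_size]:
            if coop != node and coop not in adj[node]:
                recommendations.append(("create", node, coop))
    return recommendations

adjacency = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
cooperation = {"a": 0.9, "b": 0.8, "c": 0.2, "d": 0.3}  # hypothetical play history
print(recommend_rewiring(adjacency, cooperation))
```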


Subject(s)
Cooperative Behavior, Game Theory, Humans, Social Behavior, Group Processes
6.
Nat Commun ; 13(1): 7214, 2022 12 06.
Article in English | MEDLINE | ID: mdl-36473833

ABSTRACT

The success of human civilization is rooted in our ability to cooperate by communicating and making joint plans. We study how artificial agents may use communication to better cooperate in Diplomacy, a long-standing AI challenge. We propose negotiation algorithms allowing agents to agree on contracts regarding joint plans, and show they outperform agents lacking this ability. For humans, misleading others about our intentions forms a barrier to cooperation. Diplomacy requires reasoning about our opponents' future plans, enabling us to study broken commitments between agents and the conditions for honest cooperation. We find that artificial agents face a similar problem as humans: communities of communicating agents are susceptible to peers who deviate from agreements. To defend against this, we show that the inclination to sanction peers who break contracts dramatically reduces the advantage of such deviators. Hence, sanctioning helps foster mostly truthful communication, despite conditions that initially favor deviations from agreements.
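The core mechanism, in which a deviator gains in the short run but a community willing to sanction contract breakers erodes that gain, can be sketched with a toy repeated game. The payoff numbers, sanction size, and agent behaviours below are assumptions chosen only to illustrate the sanctioning logic, not the paper's Diplomacy setup.

```python
def play_rounds(n_rounds, deviator_breaks=True, peers_sanction=True,
                coop_payoff=3, betrayal_bonus=2, sanction_penalty=4):
    """Toy repeated game: one focal agent vs. a community of contract-keeping peers.

    Each round both sides agree a joint plan. Keeping it yields coop_payoff to everyone;
    breaking it grants the deviator an extra betrayal_bonus at the partner's expense, but
    sanctioning peers respond by imposing sanction_penalty in every later round.
    """
    focal, peer = 0, 0
    caught = False
    for _ in range(n_rounds):
        if deviator_breaks and not caught:
            focal += coop_payoff + betrayal_bonus   # one-off gain from the broken contract
            peer += coop_payoff - betrayal_bonus    # the betrayed partner loses out
            caught = peers_sanction                 # the deviation is observed
        elif caught:
            focal += coop_payoff - sanction_penalty  # sanctions bite in every later round
            peer += coop_payoff
        else:
            focal += coop_payoff
            peer += coop_payoff
    return focal, peer

print("honest agent:          ", play_rounds(10, deviator_breaks=False))
print("deviator, no sanctions:", play_rounds(10, peers_sanction=False))
print("deviator, sanctioned:  ", play_rounds(10))
```

Under these assumed payoffs, the unsanctioned deviator comes out far ahead of its partner, while sanctioning turns deviation into a losing strategy, mirroring the claim that sanctions dramatically reduce the advantage of agents who break agreements.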


Subject(s)
Artificial Intelligence, Humans