Results 1 - 2 of 2
1.
Cognition; 245: 105741, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38350252

ABSTRACT

Despite the societal relevance of creative ideas, humans favor traditional over more original solutions, conceivably because of the increased uncertainty that comes with trying novel approaches. Here, we tested whether this anti-creativity bias can be counteracted by increasing familiarity with, and confidence in, creative solutions. Participants chose between creative and traditional uses for given objects. In study 1 (N = 67 international adults), these objects repeated either identically or conceptually during the experiment; and in study 2 (N = 68 international adults), choice options were either self-generated or externally provided. Spatial and temporal measures of response selection indicated an implicit bias towards the traditional approach, independent of repetition type (study 1). This attraction towards the norm was also found for self-generated creative ideas, but it was considerably reduced compared to other-generated ideas (study 2). Instead of increasing familiarity, building confidence in creative solutions might thus be the key to reducing the corresponding uncertainty and promoting successful creative ideation.


Subjects
Creativity, Recognition (Psychology), Adult, Humans
2.
Nat Med; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054373

ABSTRACT

Large language models offer novel opportunities to seek digital medical advice. While previous research has primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements has received little attention. In two preregistered studies (n = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice ('AI', 'human physician', 'human + AI'). 'AI'- and 'human + AI'-labeled advice was evaluated as significantly less reliable and less empathetic compared with 'human'-labeled advice. Moreover, participants indicated lower willingness to follow the advice when AI was believed to be involved in advice generation. Our findings point toward an anti-AI bias when receiving digital medical advice, even when AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.
