Results 1 - 4 of 4
1.
Psychol Rev ; 130(6): 1566-1591, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37589709

ABSTRACT

Developing an accurate model of another agent's knowledge is central to communication and cooperation between agents. In this article, we propose a hierarchical framework of knowledge assessment that explains how people construct mental models of their own knowledge and the knowledge of others. Our framework posits that people integrate information about their own and others' knowledge via Bayesian inference. To evaluate this claim, we conduct an experiment in which participants repeatedly assess their own performance (a metacognitive task) and the performance of another person (a type of theory of mind task) on the same image classification tasks. We contrast the hierarchical framework with simpler alternatives that assume different degrees of differentiation between mental models of self and others. Our model accurately captures participants' assessment of their own performance and the performance of others in the task: Initially, people rely on their own self-assessment process to reason about the other person's performance, leading to similar self- and other-performance predictions. As more information about the other person's ability becomes available, the mental model for the other person becomes increasingly distinct from the mental model of self. Simulation studies also confirm that our framework explains a wide range of findings about human knowledge assessment of themselves and others. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
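A minimal sketch of the kind of Bayesian integration the abstract describes, assuming a conjugate Beta-Bernoulli model (the paper's actual hierarchical framework is richer; the accuracies `p_self` and `p_other` and the concentration `kappa` are hypothetical values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

p_self, p_other = 0.8, 0.5  # hypothetical true accuracies on the image task
kappa = 10                  # prior concentration: how strongly self anchors the other-model

# Before any evidence about the other person, the mental model of the other
# is anchored on self-assessment: a Beta prior centered on p_self.
a, b = kappa * p_self, kappa * (1 - p_self)

estimates = []
for _ in range(50):
    correct = rng.random() < p_other       # observe one of the other person's outcomes
    a, b = a + correct, b + (1 - correct)  # conjugate Beta-Bernoulli update
    estimates.append(a / (a + b))          # posterior mean = predicted other-accuracy

print(f"initial prediction {p_self:.2f} -> after 50 observations {estimates[-1]:.2f}")
```

Early predictions for the other person sit near the self-assessment; as evidence accumulates, the posterior mean drifts toward the other person's true accuracy, mirroring the growing self-other differentiation reported above.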


Subjects
Metacognition, Theory of Mind, Humans, Bayes Theorem, Knowledge, Models, Psychological
2.
Perspect Psychol Sci ; : 17456916231181102, 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37439761

ABSTRACT

Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
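The complementarity condition defined above can be stated directly as a predicate; the accuracy figures in the usage lines are hypothetical placeholders:

```python
def is_complementary(perf_human: float, perf_ai: float, perf_team: float) -> bool:
    """Complementarity: the human-AI team outperforms both the unassisted
    human and the AI operating in isolation."""
    return perf_team > max(perf_human, perf_ai)

# Hypothetical accuracies: the team beats either agent alone.
print(is_complementary(0.80, 0.85, 0.91))  # True
print(is_complementary(0.80, 0.85, 0.83))  # False: worse than the AI alone
```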

3.
Psychon Bull Rev ; 30(6): 2049-2066, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37450264

ABSTRACT

Individual difference exploration of cognitive domains is predicated on being able to ascertain how well performance on tasks covaries. Yet, establishing correlations among common inhibition tasks such as Stroop or flanker tasks has proven quite difficult. It remains unclear whether this difficulty occurs because there truly is a lack of correlation or whether analytic techniques to localize correlations perform poorly in real-world contexts because of excessive measurement error from trial noise. In this paper, we explore how well correlations may be localized in large data sets with many people, tasks, and replicate trials. Using hierarchical models to separate trial noise from true individual variability, we show that trial noise in 24 extant tasks is about 8 times greater than individual variability. This degree of trial noise results in massive attenuation in correlations and instability in Spearman corrections. We then develop hierarchical models that account for variation across trials, variation across individuals, and covariation across individuals and tasks. These hierarchical models also perform poorly in localizing correlations. The advantage of these models is not in estimation efficiency, but in providing a sense of uncertainty so that researchers are less likely to misinterpret variability in their data. We discuss possible improvements to study designs to help localize correlations.
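A small simulation of the attenuation the abstract describes, under assumed Gaussian effects: with trial noise 8 times the individual variability, an assumed true between-task correlation of 0.6 shrinks markedly in the observed per-person means, and the Spearman correction divides by the (here equal and known) reliabilities to undo it. All parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sub, n_trials = 200, 100     # participants; trials per task
sigma_theta = 1.0              # SD of true individual effects
sigma_eps = 8.0 * sigma_theta  # trial noise ~8x individual variability, per the abstract
rho = 0.6                      # assumed true correlation between the two tasks

# Correlated true effects for two tasks
cov = [[sigma_theta**2, rho * sigma_theta**2],
       [rho * sigma_theta**2, sigma_theta**2]]
theta = rng.multivariate_normal([0.0, 0.0], cov, size=n_sub)

# Observed scores are per-person trial means, contaminated by averaged trial noise
obs = theta + rng.normal(0.0, sigma_eps / np.sqrt(n_trials), size=(n_sub, 2))

r_obs = np.corrcoef(obs[:, 0], obs[:, 1])[0, 1]

# Spearman disattenuation with equal, known reliabilities: r_obs / sqrt(rel * rel)
rel = sigma_theta**2 / (sigma_theta**2 + sigma_eps**2 / n_trials)
r_corrected = r_obs / rel

print(f"observed r = {r_obs:.2f}, reliability = {rel:.2f}, corrected r = {r_corrected:.2f}")
```

Even with 100 trials per task, the reliability is only about 0.61 here, so the observed correlation understates the true one; in real data the reliabilities must themselves be estimated, which is what makes the correction unstable.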


Subjects
Individuality, Noise, Humans, Inhibition, Psychological, Uncertainty
4.
NPJ Sci Learn ; 7(1): 24, 2022 Oct 04.
Article in English | MEDLINE | ID: mdl-36195645

ABSTRACT

Practice in real-world settings exhibits many idiosyncrasies of scheduling and duration that can only be roughly approximated by laboratory research. Here we investigate 39,157 individuals' performance on two cognitive games on the Lumosity platform over a span of 5 years. The large-scale nature of the data allows us to observe highly varied lengths of uncontrolled interruptions to practice and offers a unique view of learning in naturalistic settings. We enlist a suite of models that grow in the complexity of the mechanisms they postulate and conclude that long-term naturalistic learning is best described with a combination of long-term skill and task-set preparedness. We focus additionally on the nature and speed of relearning after breaks in practice and conclude that those components must operate interactively to produce the rapid relearning that is evident even at exceptionally long delays (over 2 years). Naturalistic learning over long time spans provides a strong test for the robustness of theoretical accounts of learning, and should be more broadly used in the learning sciences.
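One way to picture the two-component account the abstract argues for. This is a toy illustration, not the paper's model: every functional form and parameter here (power-law skill growth, exponential preparedness decay, the blend weight) is an assumption for demonstration only:

```python
import numpy as np

def toy_performance(n_sessions, gap_days, alpha=0.3, tau=30.0, w=0.4):
    """Toy score per session: a long-term skill component that grows as a
    power law of cumulative practice, blended with a task-set preparedness
    component that decays over the break preceding each session."""
    sessions = np.arange(1, n_sessions + 1)
    skill = 1.0 - sessions ** (-alpha)               # slow, durable improvement
    prepared = np.exp(-np.asarray(gap_days) / tau)   # lost during long breaks
    return (1.0 - w) * skill + w * prepared

# Ten sessions: daily practice, then a ~2-year break before session 8.
gaps = [0, 1, 1, 1, 1, 1, 1, 700, 1, 1]
scores = toy_performance(10, gaps)
```

In this toy, the score dips at the first post-break session because preparedness has decayed, but the skill component is fully retained, so performance rebounds within a single session, loosely mirroring the rapid relearning after multi-year delays reported above.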
