Results 1 - 3 of 3
1.
Mem Cognit; 2024 May 30.
Article in English | MEDLINE | ID: mdl-38814385

ABSTRACT

Early in life and without special training, human beings discern resemblance between abstract visual stimuli, such as drawings, and the real-world objects they represent. We used this capacity for visual abstraction as a tool for evaluating deep neural networks (DNNs) as models of human visual perception. Contrasting five contemporary DNNs, we evaluated how well each explains human similarity judgments among line drawings of recognizable and novel objects. For object sketches, human judgments were dominated by semantic category information; DNN representations contributed little additional information. In contrast, DNN features explained significant unique variance in the perceived similarity of abstract drawings. In both cases, a vision transformer trained to blend representations of images and their natural language descriptions showed the greatest ability to explain human perceptual similarity, an observation consistent with contemporary views of semantic representation and processing in the human mind and brain. Together, the results suggest that the building blocks of visual similarity may arise within systems that learn to use visual information, not for specific classification, but in service of generating semantic representations of objects.
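
The kind of analysis described here can be illustrated with a short sketch: relate pairwise similarity computed from DNN embeddings to human similarity judgments, and ask how much unique variance the DNN features explain beyond semantic category. This is not the authors' pipeline; the data below are random placeholders, and the dimensions, category labels, and regression setup are illustrative assumptions.

```python
# Hypothetical variance-partitioning sketch: human similarity judgments
# regressed on category co-membership alone vs. category + DNN similarity.
# All arrays are random stand-ins, not the study's stimuli or ratings.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_dnn_dims = 40, 512          # e.g., 40 drawings, 512-d embeddings

dnn_embeddings = rng.normal(size=(n_items, n_dnn_dims))   # stand-in DNN features
categories = rng.integers(0, 5, size=n_items)             # stand-in category labels
human_similarity = rng.normal(size=(n_items, n_items))    # stand-in judgments
human_similarity = (human_similarity + human_similarity.T) / 2

def pairwise_cosine(x):
    """Cosine similarity between all pairs of rows of x."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

iu = np.triu_indices(n_items, k=1)                         # unique item pairs
dnn_sim = pairwise_cosine(dnn_embeddings)[iu]
same_cat = (categories[:, None] == categories[None, :]).astype(float)[iu]
human_sim = human_similarity[iu]

def r_squared(y, X):
    """Variance in y explained by ordinary least squares on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cat = r_squared(human_sim, same_cat[:, None])
r2_full = r_squared(human_sim, np.column_stack([same_cat, dnn_sim]))
print(f"category-only R^2:   {r2_cat:.3f}")
print(f"unique DNN variance: {r2_full - r2_cat:.3f}")
```

With real embeddings and judgments in place of the placeholders, the difference between the two R^2 values gives a rough estimate of the unique contribution of DNN features over semantic category, in the spirit of the comparison the abstract reports.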

2.
Mem Cognit; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38668991

ABSTRACT

In her 1926 book Measurement of Intelligence by Drawings, Florence Goodenough pioneered the quantitative analysis of children's human-figure drawings as a tool for evaluating their cognitive development. This influential work launched a broad enterprise in cognitive evaluation that continues to the present day, with most clinicians and researchers deploying variants of the checklist-based scoring methods that Goodenough invented. Yet recent work leveraging computational innovations in cognitive science suggests that human-figure drawings possess much richer structure than checklist-based approaches can capture. The current study uses these contemporary tools to characterize structure in the images from Goodenough's original work, then assesses whether this structure carries information about demographic and cognitive characteristics of the participants in that early study. The results show that contemporary methods can reliably extract information about participant age, gender, and mental faculties from images produced over 100 years ago, with no expert training and with minimal human effort. Moreover, the new analyses suggest a different relationship between drawing and mental ability than that captured by Goodenough's highly influential approach, with important implications for the use of drawings in cognitive evaluation in the present day.
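
One plausible version of such a pipeline, sketched below, embeds each scanned drawing with a pretrained vision model and then fits simple predictors of participant age and gender on those embeddings. The abstract does not specify the authors' method; the embeddings here are random stand-ins, and the split, penalty, and classifiers are illustrative assumptions.

```python
# Hypothetical drawing-analysis pipeline: pretrained-model features
# (stand-ins below) -> ridge regression for age, nearest-centroid for gender.
import numpy as np

rng = np.random.default_rng(1)
n_drawings, n_features = 200, 256

embeddings = rng.normal(size=(n_drawings, n_features))    # stand-in features
age_years = rng.uniform(4, 10, size=n_drawings)           # stand-in ages
gender = rng.integers(0, 2, size=n_drawings)               # stand-in labels

train = np.arange(n_drawings) < 150                        # simple split
test = ~train

# Ridge regression for age (closed form), with an assumed penalty of 1.0.
X, y = embeddings[train], age_years[train]
w = np.linalg.solve(X.T @ X + 1.0 * np.eye(n_features), X.T @ y)
age_pred = embeddings[test] @ w
print("age MAE (years):", np.abs(age_pred - age_years[test]).mean())

# Nearest-centroid classifier for gender.
centroids = np.stack([embeddings[train & (gender == g)].mean(axis=0)
                      for g in (0, 1)])
dists = np.linalg.norm(embeddings[test][:, None, :] - centroids[None], axis=2)
gender_pred = dists.argmin(axis=1)
print("gender accuracy:", (gender_pred == gender[test]).mean())
```

The point of the sketch is the shape of the workflow, not the specific models: any pretrained image encoder plus off-the-shelf predictors would instantiate the "minimal human effort, no expert training" claim the abstract makes.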

3.
Psychol Rev; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052340

ABSTRACT

Understanding the mechanisms enabling the learning and flexible use of knowledge in context-appropriate ways has been a major focus of research in the study of both semantic cognition and cognitive control. We present a unified model of semantics and control that addresses these questions from both perspectives. The model provides a coherent view of how semantic knowledge, and the ability to flexibly access and deploy that knowledge to meet current task demands, arises from end-to-end learning of the statistics of the environment. Through a series of behavioral experiments and simulations, we show that the model addresses unresolved issues from both literatures, including how control operates over features that covary with one another and how control representations themselves are structured and emerge through learning. We conclude by discussing the implications of our approach for other fundamental questions in cognitive science, machine learning, and artificial intelligence. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
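
A toy sketch of the general architecture the abstract points to, not the authors' model: a network that receives an item together with a context signal and learns, end to end, to output the features that are relevant in that context. The dimensions, environment statistics, and training details below are illustrative assumptions.

```python
# Toy item-plus-context network trained end to end on assumed environment
# statistics; context acts as a learned control signal over output features.
import numpy as np

rng = np.random.default_rng(2)
n_items, n_contexts, n_features, n_hidden = 8, 3, 12, 16

# Assumed environment statistics: which features each item has in each context.
targets = rng.integers(0, 2, size=(n_items, n_contexts, n_features)).astype(float)

def make_input(item, context):
    """One-hot item and context codes concatenated into one input vector."""
    x = np.zeros(n_items + n_contexts)
    x[item] = 1.0
    x[n_items + context] = 1.0
    return x

W1 = rng.normal(0, 0.1, size=(n_items + n_contexts, n_hidden))
W2 = rng.normal(0, 0.1, size=(n_hidden, n_features))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    for item in range(n_items):
        for context in range(n_contexts):
            x = make_input(item, context)
            h = sigmoid(x @ W1)                  # shared hidden representation
            y = sigmoid(h @ W2)                  # context-appropriate features
            t = targets[item, context]
            dy = (y - t) * y * (1 - y)           # backprop of squared error
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, dy)
            W1 -= lr * np.outer(x, dh)

# After training, the same item yields different feature outputs in
# different contexts, illustrating context-sensitive deployment of knowledge.
print(np.round(sigmoid(sigmoid(make_input(0, 0) @ W1) @ W2), 2))
print(np.round(sigmoid(sigmoid(make_input(0, 1) @ W1) @ W2), 2))
```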
