Results 1 - 7 of 7
1.
Cognition; 212: 104683, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33774508

ABSTRACT

Classic theories emphasize the primacy of first-person sensory experience for learning meanings of words: to know what "see" means, one must be able to use the eyes to perceive. Contrary to this idea, blind adults and children acquire normative meanings of "visual" verbs, e.g., interpreting "see" and "look" to mean with the eyes for sighted agents. Here we ask the flip side of this question: how easily do sighted children acquire the meanings of "visual" verbs as they apply to blind agents? We asked sighted 4-, 6- and 9-year-olds to tell us what part of the body a blind or a sighted agent would use to "see", "look" (and other visual verbs, n = 5), vs. "listen", "smell" (and other non-visual verbs, n = 10). Even the youngest children consistently reported the correct body parts for sighted agents (eyes for "look", ears for "listen"). By contrast, there was striking developmental change in applying "visual" verbs to blind agents. Adults and 9- and 6-year-olds either extended visual verbs to other modalities for blind agents (e.g., "seeing" with hands or a cane) or stated that the blind agent "cannot" "look" or "see". By contrast, 4-year-olds said that a blind agent would use her eyes to "see", "look", etc., even while explicitly acknowledging that the agent's "eyes don't work". Young children also endorsed "she is looking at the dax" descriptions of photographs where the blind agent had the object in her "line of sight", irrespective of whether she had physical contact with the object. This pattern held for leg-motion verbs ("walk", "run") applied to wheelchair users. The ability to modify verb modality for agents with disabilities undergoes developmental change between ages 4 and 6. Despite this, we find that 4-year-olds are sensitive to the semantic distinction between active ("look") and stative ("see"), even when applied to blind agents. These results challenge the primacy of first-person sensory experience and highlight the importance of linguistic input and social interaction in the acquisition of verb meaning.


Subjects
Disabled Persons, Visually Impaired Persons, Adult, Child, Child Preschool, Female, Humans, Learning, Linguistics, Semantics
4.
Proc Natl Acad Sci U S A; 116(23): 11213-11222, 2019 Jun 4.
Article in English | MEDLINE | ID: mdl-31113884

ABSTRACT

How does first-person sensory experience contribute to knowledge? Contrary to the suppositions of early empiricist philosophers, people who are born blind know about phenomena that cannot be perceived directly, such as color and light. Exactly what is learned and how remains an open question. We compared knowledge of animal appearance across congenitally blind (n = 20) and sighted individuals (two groups, n = 20 and n = 35) using a battery of tasks, including ordering (size and height), sorting (shape, skin texture, and color), odd-one-out (shape), and feature choice (texture). On all tested dimensions apart from color, sighted and blind individuals showed substantial albeit imperfect agreement, suggesting that linguistic communication and visual perception convey partially redundant appearance information. To test the hypothesis that blind individuals learn about appearance primarily by remembering sighted people's descriptions of what they see (e.g., "elephants are gray"), we measured verbalizability of animal shape, texture, and color in the sighted. Contrary to the learn-from-description hypothesis, blind and sighted groups disagreed most about the appearance dimension that was easiest for sighted people to verbalize: color. Analysis of disagreement patterns across all tasks suggests that blind individuals infer physical features from non-appearance properties of animals such as folk taxonomy and habitat (e.g., bats are textured like mammals but shaped like birds). These findings suggest that in the absence of sensory access, structured appearance knowledge is acquired through inference from ontological kind.
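As a rough illustration of the agreement analysis described in this abstract, the sketch below computes mean pairwise rank agreement (Kendall's tau) between hypothetical blind and sighted size orderings; the rankings and group sizes are invented for illustration and are not taken from the study.

import numpy as np
from scipy.stats import kendalltau

# Hypothetical size rankings (1 = smallest) over the same four animals,
# from two blind and two sighted raters; all values are invented.
blind_rankings = np.array([[1, 2, 3, 4],
                           [2, 1, 3, 4]])
sighted_rankings = np.array([[1, 2, 3, 4],
                             [1, 2, 3, 4]])

# Between-group agreement: mean Kendall's tau over every blind-sighted pair.
taus = [kendalltau(b, s)[0] for b in blind_rankings for s in sighted_rankings]
print(f"Mean blind-sighted agreement (Kendall's tau): {np.mean(taus):.2f}")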


Subjects
Blindness/physiopathology, Vision, Ocular/physiology, Visual Perception/physiology, Adult, Animals, Ecosystem, Female, Humans, Male, Middle Aged, Young Adult
5.
Cereb Cortex; 29(11): 4803-4817, 2019 Dec 17.
Article in English | MEDLINE | ID: mdl-30767007

ABSTRACT

What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing, but what information do these networks represent? We used multivoxel pattern analysis (MVPA) to investigate whether these networks are sensitive to lexicosemantic distinctions among verbs and among nouns and, if so, whether they are more sensitive to distinctions among words in their preferred grammatical class. Participants heard 4 types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and 4 types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG+) and inferior frontal gyrus (LIFG) responded more to verbs, whereas the inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double-dissociation in lexicosemantic sensitivity: classification was more accurate among verbs than nouns in the LMTG+, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the nonpreferred category in all regions. These results suggest that lexicosemantic information about verbs and nouns is represented in partially nonoverlapping networks.
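The classification step summarized here follows the standard MVPA recipe; the sketch below shows that recipe on simulated data with scikit-learn. The ROI size, trial counts, and choice of classifier are assumptions for illustration, not the authors' pipeline.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Simulated voxel patterns for one ROI: 80 trials x 200 voxels,
# 20 trials per subtype (e.g., light/sound/hand/mouth verbs). Values are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))
y = np.repeat([0, 1, 2, 3], 20)

# Cross-validated decoding accuracy within the ROI; chance level is 0.25.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {acc:.2f} (chance = 0.25)")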


Subjects
Prefrontal Cortex/physiology, Semantics, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Brain Mapping, Cerebral Cortex/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/physiology, Young Adult
6.
Cereb Cortex; 29(9): 3590-3605, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-30272134

ABSTRACT

The brain has separate specialized computational units, located in occipital and temporal cortices, for processing faces and voices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain's response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.
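One common way to test the bimodal-versus-unimodal effect reported here is a paired comparison of ROI responses across subjects; the sketch below runs that kind of test on simulated values. The group size, response magnitudes, and the max-criterion used are assumptions for illustration, not the authors' analysis.

import numpy as np
from scipy.stats import ttest_rel

# Simulated per-subject ROI responses (e.g., mean beta estimates in the rpSTS).
rng = np.random.default_rng(1)
n_subjects = 20
face    = rng.normal(1.0, 0.3, n_subjects)   # face-only response
voice   = rng.normal(0.9, 0.3, n_subjects)   # voice-only response
bimodal = rng.normal(1.4, 0.3, n_subjects)   # combined face-voice response

# Does the bimodal response exceed the stronger of the two unimodal responses?
t, p = ttest_rel(bimodal, np.maximum(face, voice), alternative="greater")
print(f"bimodal > max(unimodal): t = {t:.2f}, p = {p:.4f}")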


Subjects
Brain/physiology, Emotions/physiology, Facial Recognition/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neural Pathways/physiology, Photic Stimulation, Young Adult
7.
Multisens Res; 27(5-6): 271-291, 2014.
Article in English | MEDLINE | ID: mdl-25693297

ABSTRACT

Sensory substitution devices (SSDs) have been developed with the ultimate purpose of supporting sensory deprived individuals in their daily activities. However, more than forty years after their first appearance in the scientific literature, SSDs remain more common in research laboratories than in the daily life of people with sensory deprivation. Here, we seek to identify the reasons behind the limited adoption of SSDs among the blind community by discussing the ergonomic, neurocognitive and psychosocial issues potentially associated with the use of these systems. We stress that these issues should be considered together when developing future devices or improving existing ones, and we provide some examples of how to achieve this by adopting a multidisciplinary and participatory approach. These efforts would help not only to address fundamental theoretical research questions, but also to better understand the everyday needs of blind people and, eventually, to promote the use of SSDs outside laboratories.


Subjects
Blindness/physiopathology, Blindness/rehabilitation, Sensory Deprivation/physiology, Auditory Perception/physiology, Ergonomics, Humans, Touch Perception/physiology