Results 1 - 20 of 30
1.
Cogn Neuropsychol ; : 1-17, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38377394

ABSTRACT

This study investigates factors influencing lexical access in language production across modalities (signed and oral). Data from deaf and hearing signers were reanalyzed (Baus and Costa, 2015, On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Research, 1609(1), 40-53. https://doi.org/10.1016/j.brainres.2015.03.013; Gimeno-Martínez and Baus, 2022, Iconicity in sign language production: Task matters. Neuropsychologia, 167, 108166. https://doi.org/10.1016/j.neuropsychologia.2022.108166), testing the influence of psycholinguistic variables and ERP mean amplitudes on signing and naming latencies. Deaf signers' signing latencies were influenced by sign iconicity in the picture signing task and by spoken psycholinguistic variables in the word-to-sign translation task. Additionally, ERP amplitudes before the response influenced signing but not translation latencies. Hearing signers' latencies, both signing and naming, were influenced by sign iconicity and word frequency, with early ERP amplitudes predicting only naming latencies. These findings highlight general and modality-specific determinants of lexical access in language production.

2.
J Deaf Stud Deaf Educ ; 25(1): 80-90, 2020 01 03.
Article in English | MEDLINE | ID: mdl-31504619

ABSTRACT

In recent years, there has been a significant increase in the number of people learning sign languages. For hearing second language (L2) signers, acquiring a sign language involves acquiring a new language in a different modality. The aim of the present study is to explore how L2 sign perception is accomplished and how newly learned categories are created. In particular, we investigated handshape perception by means of two tasks, identification and discrimination. In two experiments, we compared groups of hearing L2 signers with groups that differed in their knowledge of sign language. Experiment 1 explored three groups of children: hearing L2 signers, deaf signers, and hearing nonsigners. All groups obtained similar results in both identification and discrimination tasks regardless of sign language experience. In Experiment 2, two groups of adults, Catalan Sign Language (LSC) learners and nonsigners, perceived handshapes that were either permissible (as a sign or as a gesture) or not. Both groups obtained similar results in both tasks and performed significantly differently depending on the permissibility of the handshapes. The results obtained here suggest that sign language experience is not a determinant factor in handshape perception and support alternative hypotheses that emphasize gesture experience.


Subjects
Deafness/psychology, Gestures, Sign Language, Adolescent, Adult, Case-Control Studies, Child, Child, Preschool, Comprehension, Discrimination, Psychological, Female, Humans, Learning, Linguistics, Male, Multilingualism, Young Adult
3.
Behav Res Methods ; 48(1): 123-37, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25630312

ABSTRACT

The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.
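To give a rough sense of the kind of coded entries and filtered searches the abstract describes, here is a minimal sketch of a sign record with a few phonological fields and a filter over a list of records. The field names and the filter_entries helper are hypothetical; they are not the actual LSE-Sign schema or API.

```python
# A minimal sketch, assuming a simplified schema; field names are illustrative,
# not the actual LSE-Sign coding scheme.
from dataclasses import dataclass
from typing import List

@dataclass
class SignEntry:
    gloss: str        # citation form / gloss
    is_sign: bool     # True for the 2,400 signs, False for the 2,700 nonsigns
    handshape: str    # dominant-hand configuration
    location: str     # place of articulation
    movement: str     # movement type
    nonmanual: str    # non-manual elements (e.g., mouthing)

def filter_entries(entries: List[SignEntry], **criteria) -> List[SignEntry]:
    """Return entries whose fields match every requested field/value pair."""
    return [e for e in entries
            if all(getattr(e, field) == value for field, value in criteria.items())]

# Example: select real signs articulated at the chin with a flat handshape.
# candidates = filter_entries(all_entries, is_sign=True, location="chin", handshape="flat")
```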


Subjects
Databases, Factual, Sign Language, Humans, Movement, Video Recording
4.
Neurobiol Lang (Camb) ; 5(2): 484-496, 2024.
Article in English | MEDLINE | ID: mdl-38911463

ABSTRACT

Cortical tracking, the synchronization of brain activity to linguistic rhythms, is a well-established phenomenon. However, its nature has been heavily contested: Is it purely epiphenomenal, or does it play a fundamental role in speech comprehension? Previous research has used intelligibility manipulations to examine this topic. Here, we instead varied listeners' language comprehension skills while keeping the auditory stimulus constant. To do so, we tested 22 native English speakers and 22 Spanish/Catalan bilinguals learning English as a second language (SL) in an EEG cortical entrainment experiment and correlated the responses with the magnitude of the N400 component of a semantic comprehension task. As expected, native listeners effectively tracked sentential, phrasal, and syllabic linguistic structures. In contrast, SL listeners exhibited limitations in tracking sentential structures but successfully tracked phrasal and syllabic rhythms. Importantly, the amplitude of the neural entrainment correlated with the amplitude of the detection of semantic incongruities in SL listeners, showing a direct connection between tracking and the ability to understand speech. Together, these findings shed light on the interplay between language comprehension and cortical tracking, identifying neural entrainment as a fundamental principle of speech comprehension.
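One common way to quantify cortical tracking, offered here only as an illustrative sketch rather than the authors' pipeline, is the spectral coherence between the speech amplitude envelope and the EEG signal within a band tied to a given linguistic rhythm. The function names and the 4-8 Hz syllabic band below are assumptions for illustration.

```python
# Illustrative sketch of coherence-based tracking; not the analysis used in the study.
import numpy as np
from scipy.signal import hilbert, coherence

def speech_envelope(audio, fs_audio, fs_eeg):
    """Amplitude envelope of the speech signal, resampled to the EEG rate."""
    env = np.abs(hilbert(audio))
    n_out = int(len(env) * fs_eeg / fs_audio)
    return np.interp(np.linspace(0, len(env) - 1, n_out), np.arange(len(env)), env)

def tracking_in_band(eeg_channel, envelope, fs_eeg, f_lo, f_hi):
    """Mean EEG-envelope coherence within [f_lo, f_hi] Hz."""
    f, coh = coherence(eeg_channel, envelope, fs=fs_eeg, nperseg=int(4 * fs_eeg))
    band = (f >= f_lo) & (f <= f_hi)
    return coh[band].mean()

# e.g., syllabic-rate tracking (assumed 4-8 Hz band) for one EEG channel:
# syll_tracking = tracking_in_band(eeg, speech_envelope(audio, 44100, 250), 250, 4, 8)
```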

5.
Sci Rep ; 13(1): 20037, 2023 11 16.
Article in English | MEDLINE | ID: mdl-37973908

ABSTRACT

When we encounter people, their faces are usually paired with their voices. We know that if the face looks familiar and the voice is high-pitched, the first impression will be positive and trustworthy. But how do we integrate these two multisensory physical attributes? Here, we explore (1) the automaticity of audiovisual integration in shaping first impressions of trustworthiness, and (2) the relative contribution of each modality to the final judgment. We find that, even though participants can focus their attention on one modality to judge trustworthiness, they fail to completely filter out the other modality, for both faces (Experiment 1a) and voices (Experiment 1b). When asked to judge the person as a whole, people rely more on voices (Experiment 2) or faces (Experiment 3). We link this change to the distinctiveness of each cue in the stimulus set rather than to a general property of the modality. Overall, we find that people weigh faces and voices automatically based on cue saliency when forming trustworthiness impressions.


Subjects
Cues (Psychology), Voice, Humans, Attention, Facial Expression, Physical Examination, Trust
6.
Neuropsychologia ; 167: 108166, 2022 03 12.
Article in English | MEDLINE | ID: mdl-35114219

ABSTRACT

The present study explored the influence of iconicity on sign lexical retrieval and whether it is modulated by the task at hand. Lexical frequency was also manipulated to provide an index of lexical processing during sign production. Behavioural and electrophysiological (ERP) measures were collected from 22 Deaf bimodal bilinguals while they performed a picture naming task in Catalan Sign Language (Llengua de Signes Catalana, LSC) and a word-to-sign translation task (Spanish written words to LSC). Iconicity effects were observed in the picture naming task, but not in the word-to-sign translation task, both behaviourally and at the ERP level. In contrast, frequency effects were observed in both tasks, with ERP effects appearing earlier in the word-to-sign translation task than in the picture naming task. These results support the idea that iconicity in sign language is not pervasive but is modulated by task demands. We argue that iconicity effects in sign language are emphasised when naming pictures because sign lexical representations in this task are retrieved via semantic-to-phonological links. Conversely, the attenuated iconicity effects when translating words might result from sign lexical representations being accessed directly from the lexical representations of the word.


Subjects
Semantics, Sign Language, Humans, Language, Linguistics
7.
Cognition ; 227: 105213, 2022 10.
Article in English | MEDLINE | ID: mdl-35803105

ABSTRACT

In this study, we investigated whether people align conceptually when performing a language task together with a robot. In a joint picture-naming task, 24 French native speakers took turns with a robot naming images of objects belonging to fifteen different semantic categories. For a subset of those semantic categories, the robot was programmed to produce the superordinate, semantic category name (e.g., fruit) instead of the more typical basic-level name associated with an object (e.g., pear). Importantly, while semantic categories were shared between the participant and the robot (e.g., fruits), different objects were assigned to each of them (e.g., 'a pear' for the robot and 'an apple' for the participant). Logistic regression models on participants' responses revealed that they aligned with the conceptual choices of the robot, producing over the course of the experiment more superordinate names (e.g., saying 'fruit' to the picture of an 'apple') for objects belonging to the semantic categories for which the robot had produced a superordinate name (e.g., saying 'fruit' to the picture of a 'pear'). These results provide evidence that conceptual alignment affects speakers' word choices as a result of adaptation to the partner, even when the partner is a robot.


Subjects
Pattern Recognition, Visual, Robotics, Humans, Reaction Time/physiology, Semantics, Social Interaction
8.
PLoS One ; 17(11): e0276334, 2022.
Article in English | MEDLINE | ID: mdl-36322568

ABSTRACT

This registered report investigates the role of language as a dimension of social categorization. Our critical aim was to investigate whether categorization based on language occurs even when the languages coexist within the same sociolinguistic context, as is the case in bilingual communities. Bilingual individuals from two bilingual communities, the Basque Country (Spain) and Veneto (Italy), were tested using the memory confusion paradigm in a 'Who said what?' task. In the encoding part of the task, participants were presented with different faces together with auditory sentences. Two languages were used in each study, with half of the faces always associated with one language and the other half with the other language. Spanish and Basque were used in Study 1, and Italian and Venetian dialect in Study 2. In the test phase, the auditory sentences were presented again and participants were required to decide which face had uttered each sentence. As expected, participants' error rates were high. Critically, participants were more likely to confuse faces from the same language category than from the other (different) language category. The results indicate that bilinguals categorize individuals belonging to the same sociolinguistic community based on the language these individuals speak, suggesting that social categorization based on language is an automatic process.
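The critical comparison in this paradigm, sketched below with a hypothetical data layout (the field names are illustrative, not the study's actual coding), is whether misattributed sentences are more often assigned to a face from the same language category than to a face from the other category.

```python
# Minimal sketch of the within- vs. between-category confusion contrast, assuming
# each error trial records whether the wrongly chosen face shared the speaker's
# language category.
def confusion_rates(trials):
    """trials: iterable of dicts with keys 'correct' (bool) and
    'chosen_same_language' (bool, meaningful only on error trials)."""
    errors = [t for t in trials if not t["correct"]]
    within = sum(t["chosen_same_language"] for t in errors)
    return within / len(errors), (len(errors) - within) / len(errors)

# within_rate, between_rate = confusion_rates(trials)
# Language-based categorization predicts within_rate > between_rate.
```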


Subjects
Language, Multilingualism, Humans, Cues (Psychology), Linguistics, Spain
9.
Cogn Sci ; 46(2): e13102, 2022 02.
Article in English | MEDLINE | ID: mdl-35122322

ABSTRACT

How does prior linguistic knowledge modulate learning in verbal auditory statistical learning (SL) tasks? Here, we address this question by assessing to what extent the frequency of syllabic co-occurrences in the learners' native language determines SL performance. We computed the frequency of co-occurrences of syllables in spoken Spanish through a transliterated corpus, and used this measure to construct two artificial familiarization streams. One stream was constructed by embedding pseudowords with high co-occurrence frequency in Spanish ("Spanish-like" condition), the other by embedding pseudowords with low co-occurrence frequency ("Spanish-unlike" condition). Native Spanish-speaking participants listened to one of the two streams, and were tested in an old/new identification task to examine their ability to discriminate the embedded pseudowords from foils. Our results show that performance in the verbal auditory SL (ASL) task was significantly influenced by the frequency of syllabic co-occurrences in Spanish: When the embedded pseudowords were more "Spanish-like," participants were better able to identify them as part of the stream. These findings demonstrate that learners' task performance in verbal ASL tasks changes as a function of the artificial language's similarity to their native language, and highlight how linguistic prior knowledge biases the learning of regularities.
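Here is a minimal sketch of the corpus measure the abstract describes, assuming the corpus is already syllabified (the data layout and toy items are hypothetical): counting how often each ordered syllable pair co-occurs within a word, so that high- and low-frequency pairs can seed the "Spanish-like" and "Spanish-unlike" pseudowords.

```python
# Sketch only: ordered within-word syllable co-occurrence counts from a
# syllabified corpus (the example words are toy items, not the study's corpus).
from collections import Counter

def syllable_cooccurrence(words):
    """words: iterable of syllable lists, e.g. [['me', 'sa'], ['ca', 'sa']]."""
    counts = Counter()
    for syllables in words:
        counts.update(zip(syllables, syllables[1:]))  # adjacent syllable pairs
    return counts

freqs = syllable_cooccurrence([["me", "sa"], ["ca", "sa"], ["ca", "mi", "sa"]])
# freqs[("ca", "sa")] == 1; frequent pairs would build "Spanish-like" pseudowords,
# rare pairs "Spanish-unlike" ones.
```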


Subjects
Learning, Speech Perception, Auditory Perception, Humans, Language, Linguistics, Verbal Learning
10.
Sci Data ; 9(1): 431, 2022 07 21.
Article in English | MEDLINE | ID: mdl-35864133

ABSTRACT

The growing interdisciplinary research field of psycholinguistics is in constant need of new and up-to-date tools that allow researchers to answer complex questions and to expand beyond English, which dominates the field. One such type of tool is the picture dataset, which provides naming norms for everyday objects. However, existing databases tend to be small in terms of the number of items they include and have been normed in a limited number of languages, despite the recent boom in multilingualism research. In this paper we present the Multilingual Picture (Multipic) database, containing naming norms and familiarity scores for 500 coloured pictures in thirty-two languages or language varieties from around the world. The data were validated with the standard methods used for existing picture datasets. This is the first dataset to provide naming norms, and translation equivalents, for such a variety of languages; as such, it will be of particular value to psycholinguists and other interested researchers. The dataset has been made freely available.


Subjects
Multilingualism, Psycholinguistics, Databases, Factual, Humans, Language, Recognition, Psychology
11.
Proc Natl Acad Sci U S A ; 105(42): 16083-8, 2008 Oct 21.
Article in English | MEDLINE | ID: mdl-18852470

ABSTRACT

Human beings differ in their ability to master the sounds of their second language (L2). Phonetic training studies have proposed that differences in phonetic learning stem from differences in psychoacoustic abilities rather than speech-specific capabilities. We aimed to find the origin of individual differences in L2 phonetic acquisition in natural learning contexts, considering two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. For this purpose, event-related potentials (ERPs) were recorded from two groups of early, proficient Spanish-Catalan bilinguals who differed in their mastery of the Catalan (L2) phonetic contrast /e/-/ɛ/. Brain activity in response to acoustic change detection was recorded in three different conditions involving tones of different length (duration condition), frequency (frequency condition), and presentation order (pattern condition). In addition, neural correlates of speech change detection were also assessed for both native (/o/-/e/) and nonnative (/o/-/ö/) phonetic contrasts (speech condition). Participants' discrimination accuracy, reflected electrically as a mismatch negativity (MMN), was similar between the two groups in the three acoustic conditions. Conversely, the MMN was reduced in poor perceivers (PP) when they were presented with speech sounds. Therefore, our results support a speech-specific origin of individual variability in L2 phonetic mastery.


Subjects
Brain/physiology, Language, Phonetics, Speech Perception/physiology, Humans
12.
PLoS One ; 16(7): e0254513, 2021.
Article in English | MEDLINE | ID: mdl-34252169

ABSTRACT

The present pre-registration aims to investigate the role of language as a dimension of social categorization. Our critical aim is to investigate whether language can be used as a dimension of social categorization even when the languages coexist within the same sociolinguistic group, as is the case in bilingual communities where two languages are used in daily social interactions. We will use the memory confusion paradigm (also known as the 'Who said what?' task). In the first part of the task, i.e., encoding, participants will be presented with a face (i.e., a speaker) and will listen to an auditory sentence. Two languages will be used, with half of the faces always associated with one language and the other half with the other language. In the second phase, i.e., recognition, all the faces will be presented on the screen and participants will decide which face uttered which sentence in the encoding phase. Based on previous literature, we expect that participants will be more likely to confuse faces from within the same language category than from the other language category. Participants will be bilingual individuals from two bilingual communities, the Basque Country (Spain) and Veneto (Italy). The two languages of these communities will be used: Spanish and Basque (Study 1), and Italian and Venetian dialect (Study 2). Furthermore, we will explore whether the amount of daily exposure to the two languages modulates the effect of language as a social categorization cue. This research will allow us to test whether bilingual people categorize individuals belonging to the same sociolinguistic community based on the language these individuals are speaking. Our findings may have relevant political and social implications for linguistic policies in bilingual communities.


Subjects
Language, Humans, Italy, Male, Multilingualism, Spain, White People
13.
Lang Cogn Neurosci ; 36(7): 824-839, 2021.
Article in English | MEDLINE | ID: mdl-34485588

ABSTRACT

Speakers learning a second language show systematic differences from native speakers in the retrieval, planning, and articulation of speech. A key challenge in examining the interrelationship between these differences at various stages of production is the need for manual annotation of fine-grained properties of speech. We introduce a new method for automatically analyzing voice onset time (VOT), a key phonetic feature indexing differences between sound systems cross-linguistically. In contrast to previous approaches, our method allows reliable measurement of prevoicing, a dimension of VOT variation used by many languages. Analysis of VOTs, word durations, and reaction times from German-speaking learners of Spanish (Baus et al., 2013) suggests that while there are links between the factors impacting planning and articulation, these two processes also exhibit some degree of independence. We discuss the implications of these findings for theories of speech production and future research in bilingual language processing.
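To make the prevoicing notion concrete, the sketch below spells out the standard VOT convention the abstract relies on; it is not the authors' automatic detector. VOT is the lag from the stop burst to voicing onset, and a negative value, i.e., voicing beginning before the burst, is prevoicing. The 25 ms short-lag/long-lag boundary is a rough, commonly used heuristic, not a value from the paper.

```python
# Sketch of the VOT convention only; the paper's contribution is the automatic
# detection of burst and voicing onset, which is not implemented here.
def vot_ms(burst_time_s: float, voicing_onset_s: float) -> float:
    """VOT in milliseconds; negative values mean voicing starts before the burst."""
    return (voicing_onset_s - burst_time_s) * 1000.0

def classify_vot(vot: float) -> str:
    if vot < 0:
        return "prevoiced"                              # voicing lead
    return "short-lag" if vot < 25 else "long-lag"      # rough heuristic boundary

# classify_vot(vot_ms(burst_time_s=0.512, voicing_onset_s=0.488))  -> "prevoiced"
```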

14.
Sci Rep ; 11(1): 9715, 2021 05 06.
Article in English | MEDLINE | ID: mdl-33958663

ABSTRACT

Does language categorization influence face identification? The present study addressed this question by means of two experiments. First, to establish language categorization of faces, the memory confusion paradigm was used to create two language categories of faces, Spanish and English. Subsequently, participants underwent an oddball paradigm in which faces that had previously been paired with one of the two languages (Spanish or English) were presented. We measured EEG perceptual differences (vMMN) between standard faces and two types of deviant faces: within-language category (faces sharing a language with the standards) and between-language category (faces paired with the other language). Participants were more likely to confuse faces within a language category than between categories, indicating that faces had been categorized by language. At the neural level, early vMMNs were obtained for between-language category faces, but not for within-language category faces. At a later stage, however, larger vMMNs were obtained for faces from the same language category. Our results show that language is a relevant social cue that individuals use to categorize others, and that this categorization subsequently affects face perception.


Subjects
Facial Recognition, Language, Electroencephalography, Female, Humans, Male, Young Adult
15.
Brain Sci ; 9(11)2019 Oct 27.
Article in English | MEDLINE | ID: mdl-31717882

ABSTRACT

Word reduction refers to the shortening of predictable words in features such as duration, intensity, or pitch. However, its origin is still unclear: Are words reduced because it is the second time that conceptual representations are activated, or because the words are articulated twice? If word reduction is conceptually driven, it should be irrelevant whether the same referent is mentioned twice using different words. However, if it is articulatory, using different words for the same referent could prevent word reduction. In the present work, we use bilingualism to explore the conceptual or articulatory origin of word reduction in language production. Word reduction was compared in two conditions: a non-switch condition, where the two mentions of a referent were uttered in the same language, and a switch condition, where the referent was named in both languages. Dyads of participants completed collaborative maps in which words were uttered twice in Catalan or in Spanish, either repeating or switching the language between mentions. Words were equally reduced in duration, intensity, and pitch in the non-switch and switch conditions. Furthermore, the cognate status of the words did not play any role. These findings support the view that word reduction is conceptually driven.

16.
Sci Rep ; 9(1): 414, 2019 01 23.
Article in English | MEDLINE | ID: mdl-30674913

ABSTRACT

We form very rapid personality impressions about speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues when forming social impressions. Here, we aimed to explore how personality impressions for brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), their native and a foreign language, respectively. The results revealed that listeners form very similar personality impressions irrespective of whether the voices belong to the native or the foreign language of the listener. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). Conversely, the acoustical properties that listeners attend to when judging others' personality vary across languages. These results provide evidence that social voice perception contains certain elements that are invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.


Subjects
Attention, Language, Personality, Social Perception, Speech Perception, Adolescent, Adult, Female, Humans, Male
17.
Cognition ; 108(3): 856-65, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18656181

ABSTRACT

This paper investigates whether the semantic and phonological levels in speech production are specific to spoken languages or universal across modalities. We examined semantic and phonological effects during Catalan Sign Language (LSC: Llengua de Signes Catalana) production using an adaptation of the picture-word interference task: native and non-native signers were asked to sign picture names while ignoring signs produced in the background. The results showed semantic interference effects for semantically related distractor signs, phonological facilitation effects when target signs and distractor signs shared either Handshape or Movement, but phonological interference effects when target and distractor shared Location. The results suggest that the general distinction between semantic and phonological levels holds across modalities. However, differences between sign and spoken production become evident in the mechanisms underlying phonological encoding, as shown by the different roles that Location, Handshape, and Movement play during phonological encoding in sign language.


Subjects
Language, Phonetics, Semantics, Sign Language, Verbal Behavior, Adolescent, Adult, Attention, Deafness/psychology, Deafness/rehabilitation, Female, Humans, Male, Middle Aged, Pattern Recognition, Visual, Reaction Time, Spain, Videotape Recording
18.
Acta Psychol (Amst) ; 186: 63-70, 2018 May.
Article in English | MEDLINE | ID: mdl-29704743

ABSTRACT

The information we obtain from how speakers sound, for example their accent, affects how we interpret the messages they convey. A clear example is foreign-accented speech, where reduced intelligibility and the speaker's social categorization (as an out-group member) affect memory for the message and its credibility (e.g., it is judged less trustworthy). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents, that is, accents from a different region than the listener's. We report results from three experiments on immediate memory recognition, immediate credibility assessments, and the illusory truth effect. These revealed no differences between messages conveyed in a local accent (from the same region as the participant) and in regional accents (from native speakers of a different country than the participants). Our results suggest that when a speaker's accent is highly intelligible, social categorization by accent does not seem to negatively affect how we treat the speaker's messages.


Subjects
Cross-Cultural Comparison, Language, Memory/physiology, Phonetics, Speech Perception/physiology, Adolescent, Adult, Cognition/physiology, Cuba/ethnology, Female, Humans, Male, South America/ethnology, Spain/ethnology, Speech/physiology, Young Adult
19.
Front Psychol ; 9: 1032, 2018.
Article in English | MEDLINE | ID: mdl-29988490

ABSTRACT

Bilingual speakers have been suggested to use control processes to avoid linguistic interference from the unintended language. It is debated whether these bilingual language control (BLC) processes are an instantiation of more domain-general executive control (EC) processes. Previous studies inconsistently report correlations between measures of linguistic and non-linguistic control in bilinguals. In the present study, we investigate the extent to which there is cross-talk between these two domains of control for two switch costs, namely the n-1 shift cost and the n-2 repetition cost. We also address an important problem, namely the reliability of the measures used to investigate cross-talk: if the reliability of a measure is low, that measure is ill-suited to testing cross-talk between domains through correlations. We asked participants to perform both a linguistic and a non-linguistic switching task in two sessions about a week apart. The results show a dissociation between the two types of switch costs. Regarding test-retest reliability, we found stronger reliability for the n-1 shift cost than for the n-2 repetition cost within both domains, as measured by correlations across sessions. This suggests that the n-1 shift cost is more suitable for exploring cross-talk between BLC and EC. Moreover, we do find cross-talk for the n-1 shift cost, as demonstrated by a significant cross-domain correlation. This suggests that there are at least some shared processes between the linguistic and non-linguistic tasks.
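For readers unfamiliar with the two indices, here is a minimal sketch, with a hypothetical (task, RT) trial layout rather than the study's analysis code, of how the two costs are typically computed: the n-1 shift cost contrasts trials that switch task with trials that repeat it, and the n-2 repetition cost contrasts returning to the task from two trials back (ABA) with switching to a third task (CBA).

```python
# Illustrative computation of the n-1 shift cost and n-2 repetition cost from a
# trial sequence; assumes at least three tasks so that CBA sequences exist.
def switch_costs(tasks, rts):
    """tasks: list of task labels per trial; rts: matching list of reaction times."""
    n1_switch, n1_repeat, aba, cba = [], [], [], []
    for i in range(2, len(tasks)):
        if tasks[i] != tasks[i - 1]:          # task switched relative to trial n-1
            n1_switch.append(rts[i])
            if tasks[i] == tasks[i - 2]:
                aba.append(rts[i])            # ABA: return to the n-2 task
            else:
                cba.append(rts[i])            # CBA: a third, different task
        else:
            n1_repeat.append(rts[i])
    mean = lambda xs: sum(xs) / len(xs)
    n1_shift_cost = mean(n1_switch) - mean(n1_repeat)
    n2_repetition_cost = mean(aba) - mean(cba)
    return n1_shift_cost, n2_repetition_cost
```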

20.
Front Psychol ; 8: 709, 2017.
Article in English | MEDLINE | ID: mdl-28539898

ABSTRACT

Here we investigated how the language in which a person addresses us, native or foreign, influences subsequent face recognition. In an old/new paradigm, we explored the behavioral and electrophysiological activity associated with face recognition memory. Participants were first presented with faces accompanied by voices speaking either in their native language (NL) or in a foreign language (FL). Faces were then presented in isolation and participants decided whether each face had been presented before (old) or not (new). The results revealed that participants were more accurate at remembering faces previously paired with their NL than with their FL. At the event-related potential (ERP) level, we obtained evidence that faces in the NL condition were encoded differently from those in the FL condition, potentially due to differences in processing demands. During recognition, the frontal old/new effect was present (with a difference in latency) regardless of the language with which a face had been associated, while the parietal old/new effect appeared only for faces associated with the native language. These results suggest that the language of our social interactions has an impact on the memory processes underlying the recognition of individuals.
