Results 1 - 5 of 5
1.
bioRxiv; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38328074

ABSTRACT

Scientific progress depends on reliable and reproducible results. Progress can also be accelerated when data are shared and re-analyzed to address new questions. Current approaches to storing and analyzing neural data typically involve bespoke formats and software that make replication, as well as the subsequent reuse of data, difficult if not impossible. To address these challenges, we created Spyglass, an open-source software framework that enables reproducible analyses and sharing of data and both intermediate and final results within and across labs. Spyglass uses the Neurodata Without Borders (NWB) standard and includes pipelines for several core analyses in neuroscience, including spectral filtering, spike sorting, pose tracking, and neural decoding. It can be easily extended to apply both existing and newly developed pipelines to datasets from multiple sources. We demonstrate these features in the context of a cross-laboratory replication by applying advanced state space decoding algorithms to publicly available data. New users can try out Spyglass on a Jupyter Hub hosted by HHMI and 2i2c: https://spyglass.hhmi.2i2c.cloud/.
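
For readers unfamiliar with the NWB standard that Spyglass builds on, below is a minimal Python sketch of opening an NWB file and inspecting its contents with pynwb, the reference NWB API. The filename is a placeholder and the direct file access is only illustrative; Spyglass itself organizes NWB data behind DataJoint pipeline tables rather than ad hoc reads like this.

# Minimal sketch of reading an NWB file with pynwb. "session.nwb" is a
# placeholder filename, not a file shipped with Spyglass.
from pynwb import NWBHDF5IO

with NWBHDF5IO("session.nwb", mode="r") as io:
    nwbfile = io.read()
    # Top-level session metadata required by the NWB standard.
    print(nwbfile.identifier, nwbfile.session_description)
    # Raw acquired data streams (e.g., extracellular ephys) live under
    # nwbfile.acquisition; derived results live under nwbfile.processing.
    for name, obj in nwbfile.acquisition.items():
        print(name, type(obj).__name__)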

2.
Acta Psychol (Amst); 236: 103923, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37087958

ABSTRACT

For sign languages, transitional movements of the hands are fully visible and may be used to predict upcoming linguistic input. We investigated whether and how deaf signers and hearing nonsigners use transitional information to detect a target item in a string of either pseudosigns or grooming gestures, and whether motor imagery ability was related to this skill. Transitional information between items was either intact (Normal videos), digitally altered so that the hands were selectively blurred (Blurred videos), or edited to show only the frame prior to the transition, frozen for the entire transition period, thereby removing all transitional information (Static videos). For both pseudosigns and gestures, signers and nonsigners had faster target detection times for Blurred than for Static videos, indicating similar use of movement transition cues. For linguistic stimuli (pseudosigns), only signers made use of transitional handshape information, as evidenced by faster target detection times for Normal than for Blurred videos. This result indicates that signers can use their linguistic knowledge to interpret transitional handshapes and predict the upcoming signal. Signers and nonsigners did not differ in motor imagery abilities, but only nonsigners exhibited evidence of using motor imagery as a prediction strategy. Overall, these results suggest that signers use transitional movement and handshape cues to facilitate sign recognition.


Subjects
Gestures, Hearing, Humans, Cues, Linguistics, Sign Language, Perception
3.
Brain Lang; 223: 105044, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34741986

ABSTRACT

In American Sign Language (ASL), spatial relationships are conveyed by the location of the hands in space, whereas English employs prepositional phrases. Using event-related fMRI, we examined comprehension of perspective-dependent (PD; left, right) and perspective-independent (PI; in, on) sentences in ASL and audiovisual English (sentence-picture matching task). In contrast to non-spatial control sentences, PD sentences engaged the superior parietal lobule (SPL) bilaterally for both ASL and English, consistent with a previous study using written English. The ASL-English conjunction analysis revealed bilateral SPL activation for PD sentences, but left-lateralized activation for PI sentences. The direct contrast between PD and PI expressions revealed greater SPL activation for PD expressions only in ASL. The increased SPL activation for ASL PD expressions may reflect the mental transformation required to interpret locations in signing space from the signer's viewpoint. Overall, the results suggest that both overlapping and distinct neural regions support spatial language comprehension in ASL and English.


Subjects
Deafness, Sign Language, Comprehension/physiology, Humans, Language, Magnetic Resonance Imaging, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiology, United States
4.
Acta Psychol (Amst); 208: 103092, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32531500

ABSTRACT

Motor simulation has emerged as a mechanism for both predictive action perception and language comprehension. By deriving a motor command, individuals can predictively represent the outcome of an unfolding action as a forward model. Evidence of simulation can be seen in improved participant performance for stimuli that conform to the participant's individual characteristics (an egocentric bias). There is little evidence, however, from individuals for whom action and language take place in the same modality: sign language users. The present study asked signers and nonsigners to shadow (perform actions in tandem with) various models, and the delay between the model and participant ("lag time") served as an indicator of the strength of the predictive model (shorter lag time = more robust model). This design allowed us to examine the role of (a) motor simulation during action prediction, (b) linguistic status in predictive representations (i.e., pseudosigns vs. grooming gestures), and (c) language experience in generating predictions (i.e., signers vs. nonsigners). An egocentric bias was only observed under limited circumstances: when nonsigners began shadowing grooming gestures. The data do not support strong motor simulation proposals, and instead highlight the roles of (a) production fluency and (b) manual rhythm in signer productions. Signers showed significantly shorter lag times for the highly skilled pseudosign model and greater temporal regularity (i.e., lower standard deviations) compared to nonsigners. We conclude that sign language experience may (a) reduce reliance on motor simulation during action observation, (b) attune users to prosodic cues, and (c) induce temporal regularities during action production.
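
To make the lag-time measure concrete, here is a hypothetical Python sketch: given model and participant movement traces sampled at the same rate, the lag can be estimated as the offset that maximizes their cross-correlation. This is one standard way to operationalize such a delay, not necessarily the procedure used in the study; the signals, sampling rate, and function name are all illustrative assumptions.

# Hypothetical sketch: estimate shadowing "lag time" as the cross-correlation
# peak between two equally sampled movement traces. Not the authors' method;
# the abstract does not specify how lag was computed.
import numpy as np

def estimate_lag(model: np.ndarray, participant: np.ndarray, fs: float) -> float:
    """Lag in seconds; positive means the participant trails the model."""
    model = model - model.mean()
    participant = participant - participant.mean()
    xcorr = np.correlate(participant, model, mode="full")
    # In 'full' mode, index len(model) - 1 corresponds to zero offset.
    return (np.argmax(xcorr) - (len(model) - 1)) / fs

# Synthetic check: a participant copying the model 0.2 s late.
fs = 100.0                                   # assumed 100 Hz motion capture
t = np.arange(0, 5, 1 / fs)
model = np.sin(2 * np.pi * t)                # model's hand trajectory
participant = np.roll(model, 20)             # delayed by 20 samples = 0.2 s
print(estimate_lag(model, participant, fs))  # -> 0.2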


Subjects
Gestures, Sign Language, Cues, Humans, Language, Linguistics
5.
J Deaf Stud Deaf Educ; 24(3): 214-222, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-30856254

ABSTRACT

In ASL spatial classifier expressions, the location of the hands in signing space depicts the relative position of the described objects. When objects are physically present, the arrangement of the hands maps onto the observed position of the objects in the world (Shared Space). For non-present objects, interlocutors must perform a mental transformation to take the signer's perspective (Signer Space). The ASL Spatial Perspective Comprehension Test (ASPCT) was developed to assess comprehension of locative expressions produced in both Shared and Signer Space, viewed from both a canonical Face-to-face angle and a 90° offset angle. Deaf signers (N = 38) performed better with Shared Space than with Signer Space. Viewing angle affected only Signer Space comprehension (90° offset better than 180° Face-to-face). ASPCT performance correlated positively with both nonverbal intelligence and ASL proficiency. These findings indicate that the mental transformation required to understand a signer's perspective is not automatic, takes time, and is cognitively demanding.


Subjects
Comprehension/physiology, Deafness/psychology, Sign Language, Spatial Navigation/physiology, Adult, Analysis of Variance, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Video Recording