Results 1 - 2 of 2
1.
Brain Lang ; 252: 105413, 2024 May.
Article in English | MEDLINE | ID: mdl-38608511

ABSTRACT

Sign languages (SLs) are expressed through different bodily actions, ranging from the re-enactment of physical events (constructed action, CA) to sequences of lexical signs with internal structure (plain telling, PT). Despite the prevalence of CA in signed interactions and its significance for SL comprehension, its neural dynamics remain unexplored. We examined the processing of different types of CA (subtle, reduced, and overt) and of PT in 35 adult deaf or hearing native signers, recording electroencephalographic responses to signed sentences with incongruent targets. Attenuated N300 and early N400 responses were observed for CA in deaf but not in hearing signers. No differences were found between the CA types in either group of signers, suggesting a continuum from PT to overt CA. Deaf signers focused more on body movements; hearing signers more on faces. We conclude that CA is processed less effortlessly than PT, arguably because of its strong focus on bodily actions.


Subject(s)
Comprehension , Deafness , Electroencephalography , Sign Language , Humans , Comprehension/physiology , Adult , Male , Female , Deafness/physiopathology , Young Adult , Brain/physiology , Evoked Potentials/physiology
2.
J Eye Mov Res ; 11(2)2018 May 07.
Article in English | MEDLINE | ID: mdl-33828688

ABSTRACT

Both eye tracking and motion capture are nowadays frequently used in the human sciences, although the two technologies are usually employed separately. Measuring eye and body movements simultaneously, however, would offer great potential for investigating crossmodal interaction in human behavior (e.g. music- and language-related behavior). Here we combined an Ergoneers Dikablis head-mounted eye tracker with a Qualisys Oqus optical motion capture system. To synchronize the recordings of the two devices, we developed a generalizable solution that does not rely on any (cost-intensive) ready-made, company-provided synchronization product. At the beginning of each recording, the participant nods quickly while fixating a target and keeping the eyes open, a motion that yields a sharp vertical displacement in both the mocap and the eye data. This displacement can be reliably detected with a peak-picking algorithm and used to align the mocap and eye data accurately. The method produces accurate synchronization results on clean data and therefore offers an attractive alternative to costly plug-ins, as well as a solution when ready-made synchronization options are unavailable.
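The nod-based alignment described above can be sketched in a few lines. The abstract does not publish the authors' code, so the function names and the peak criterion here are illustrative assumptions: the sharp nod is located in each stream as the sample with the largest vertical velocity (absolute first difference), and the difference between the two event times gives the offset to subtract from one stream's timestamps. A real pipeline would add filtering and sanity checks around this core idea.

```python
import numpy as np

def detect_nod_index(vertical, ):
    """Index of the sharpest vertical displacement in a 1-D trace.

    A quick nod produces a large, brief jump in vertical position, so the
    maximum absolute first difference (a simple peak-picking criterion)
    marks the synchronization event.
    """
    velocity = np.abs(np.diff(np.asarray(vertical, dtype=float)))
    return int(np.argmax(velocity))

def sync_offset_seconds(eye_y, eye_fs, mocap_y, mocap_fs):
    """Offset (seconds) between the nod event in the mocap stream and in
    the eye-tracking stream; subtract it from mocap timestamps to align
    the two recordings. Sampling rates may differ between devices.
    """
    t_eye = detect_nod_index(eye_y) / eye_fs
    t_mocap = detect_nod_index(mocap_y) / mocap_fs
    return t_mocap - t_eye
```

On synthetic traces (e.g. an eye trace at 50 Hz with a nod dip at 2.0 s and a mocap trace at 120 Hz with the same nod at 2.5 s), `sync_offset_seconds` recovers an offset close to 0.5 s, within one sample period of each device.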
