ABSTRACT
Automated speech and language analysis (ASLA) is a promising approach for capturing early markers of neurodegenerative diseases. However, its potential remains underexploited in research and translational settings, partly due to the lack of a unified tool for data collection, encryption, processing, download, and visualization. Here we introduce the Toolkit to Examine Lifelike Language (TELL) v.1.0.0, a web-based app designed to bridge this gap. First, we outline general aspects of its development. Second, we list the steps to access and use the app. Third, we specify its data collection protocol, including a linguistic profile survey and 11 audio recording tasks. Fourth, we describe the outputs the app generates for researchers (downloadable files) and for clinicians (real-time metrics). Fifth, we survey published findings obtained through its tasks and metrics. Sixth, we discuss TELL's current limitations and prospects for expansion. Overall, with its current and planned features, TELL aims to facilitate ASLA for research and clinical purposes in the neurodegeneration arena. A demo version can be accessed here: https://demo.sci.tellapp.org/.
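As a purely hypothetical illustration of the kind of metric an ASLA pipeline can compute from a recording task, the sketch below derives a pause rate via simple energy thresholding. This is not TELL's actual processing code; the file name, frame size, and silence threshold are all assumptions.

```python
# Hypothetical illustration of an ASLA-style timing metric (pause rate).
# Not TELL's actual pipeline; the file name, frame size, and threshold
# are assumptions.
import numpy as np
from scipy.io import wavfile

def pause_rate(path, frame_ms=25, silence_db=-35.0):
    """Fraction of frames whose RMS energy falls below `silence_db` dBFS."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                       # mix stereo down to mono
        x = x.mean(axis=1)
    x /= np.abs(x).max() or 1.0          # peak-normalize
    hop = int(sr * frame_ms / 1000)
    n = len(x) // hop
    frames = x[: n * hop].reshape(n, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    db = 20 * np.log10(rms + 1e-12)      # frame energy in dB re full scale
    return float((db < silence_db).mean())

print(pause_rate("picture_description.wav"))  # e.g. 0.31 -> 31% silent frames
```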
ABSTRACT
Measuring human capabilities to synchronize in time, adapt to perturbations in timing sequences, or reproduce time intervals often requires experimental setups that allow recording response times with millisecond precision. Most setups present auditory stimuli using either MIDI devices or specialized hardware such as Arduino, and are often expensive or require calibration and advanced programming skills. Here, we present in detail an experimental setup that only requires an external sound card and basic electronics skills, works on a conventional PC, is cheaper than the alternatives, and requires almost no programming skills. It is intended for presenting any auditory stimulus and recording tapping response times with 2-ms precision (up to a −2 ms lag). This paper shows why the desired accuracy in recording response times to auditory stimuli is difficult to achieve on conventional computer setups, presents an experimental setup to overcome this, and explains in detail how to set it up and use the provided code. Finally, the code for analyzing the recorded tapping responses was evaluated, showing no spurious or missing events in 94% of the analyzed recordings.
Subject(s)
Time Perception, Computers, Humans, Sound, Time Perception/physiology
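The tap-extraction analysis described in the abstract above can be sketched as follows, assuming the setup records stimulus clicks on one audio channel and tap transients on the other. The threshold and refractory values here are assumptions, not the published parameters.

```python
# Sketch of offline tap-onset extraction, assuming a two-channel recording
# with stimulus clicks on channel 0 and tap transients on channel 1.
# Threshold and refractory values are assumptions, not published parameters.
import numpy as np
from scipy.io import wavfile

def onsets(signal, sr, thresh=0.2, refractory_ms=100):
    """Times (s) where the rectified signal crosses `thresh` after at least
    `refractory_ms` of sub-threshold signal."""
    x = np.abs(signal) / (np.abs(signal).max() or 1.0)
    above = np.flatnonzero(x > thresh)
    times, last = [], -np.inf
    gap = refractory_ms / 1000.0
    for i in above:
        t = i / sr
        if t - last >= gap:   # new event only after a quiet gap
            times.append(t)
        last = t              # any supra-threshold sample extends the burst
    return np.array(times)

sr, data = wavfile.read("session.wav")
stim = onsets(data[:, 0].astype(np.float64), sr)
taps = onsets(data[:, 1].astype(np.float64), sr)
# Asynchrony (ms) of each tap relative to its nearest stimulus onset.
asyn = [(t - stim[np.argmin(np.abs(stim - t))]) * 1000 for t in taps]
```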
ABSTRACT
Several studies have examined how music may affect the evaluation of food and drink, but the vast majority have not observed how this interaction unfolds over time. This is particularly relevant, since both music and the consumer experience of food/drink are time-varying in nature. In the present study we sought to fill this gap, using Temporal Dominance of Sensations (TDS), a method developed to record the dominant sensory attribute at any given moment in time, to examine the impact of music on the wine taster's perception. More specifically, we assessed how the same red wine might be experienced differently when tasters were exposed to various sonic environments (two pieces of music plus a silent control condition). The results revealed distinct patterns of dominant flavours for each sound condition, with significant differences in flavour dominance in each music condition as compared to the silent control condition. Moreover, correspondence analysis revealed that differences in the perceived dominance of acidity and bitterness in the wine were correlated, over the time course of the experience, with changes in basic auditory attributes. Potential implications for the role of attention in auditory flavour modification and opportunities for future studies are discussed.
Subject(s)
Auditory Perception/physiology, Music, Sensation/physiology, Taste Perception/physiology, Taste/physiology, Temporal Lobe/physiology, Wine, Adult, Aged, Female, Humans, Male, Middle Aged, Young Adult
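The dominance curves underlying a TDS analysis like the one above can be computed as in the minimal sketch below: at each time point, the proportion of tasters for whom each attribute is currently dominant. The attribute names, time grid, and data layout are assumptions, not the study's actual materials.

```python
# Sketch of the standard TDS dominance-rate computation: at each time point,
# the proportion of panelists for whom each attribute is dominant.
# Data layout (time-ordered (time_s, attribute) events per panelist) is an
# assumption.
import numpy as np

def dominance_curves(panelists, attributes, t_grid):
    """panelists: list of [(time_s, attribute), ...], sorted by time."""
    rates = {a: np.zeros(len(t_grid)) for a in attributes}
    for events in panelists:
        for j, t in enumerate(t_grid):
            current = None
            for et, attr in events:
                if et <= t:
                    current = attr   # last selection before t is dominant
                else:
                    break
            if current is not None:
                rates[current][j] += 1
    return {a: r / len(panelists) for a, r in rates.items()}

curves = dominance_curves(
    [[(0.0, "fruity"), (12.5, "acid")], [(0.0, "fruity"), (20.0, "bitter")]],
    ["fruity", "acid", "bitter"],
    np.arange(0, 30, 1.0),
)
print(curves["acid"][15])  # 0.5: one of two tasters dominated by acidity at t=15 s
```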
ABSTRACT
What are the features that impersonators select to elicit a speaker's identity? We built a voice database of public figures (targets) and imitations produced by professional impersonators. They produced one imitation based on their memory of the target (caricature) and another after listening to the target audio (replica). A set of naive participants then judged the identity and similarity of pairs of voices. Identity was better evoked by the caricatures, whereas replicas were perceived as closer to the targets in terms of voice similarity. We used these data to map the relevant acoustic dimensions for each task. Our results indicate that speaker identity is mainly associated with vocal tract features, while the perception of voice similarity is related to vocal fold parameters. We thereby show how acoustic caricatures emphasize identity features at the cost of losing similarity, allowing us to draw an analogy with caricatures in the visual domain.
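The contrast drawn above between vocal-tract features (formants) and vocal-fold parameters (F0, jitter) can be illustrated with the parselmouth interface to Praat, as in the sketch below. The file name and analysis parameters are assumptions, and this is not the paper's actual feature-extraction pipeline.

```python
# Sketch contrasting vocal-tract features (formants) with vocal-fold
# features (F0, jitter), using the parselmouth interface to Praat.
# File name and parameter values are assumptions, not the paper's pipeline.
import numpy as np
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("target_voice.wav")

# Vocal-tract side: mean of the first two formants over the recording.
formants = snd.to_formant_burg()
times = np.arange(snd.xmin + 0.025, snd.xmax, 0.01)
f1 = np.nanmean([formants.get_value_at_time(1, t) for t in times])
f2 = np.nanmean([formants.get_value_at_time(2, t) for t in times])

# Vocal-fold side: median F0 and local jitter.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0_med = np.median(f0[f0 > 0])          # ignore unvoiced frames
pp = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter = call(pp, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)

print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz, F0={f0_med:.0f} Hz, jitter={jitter:.4f}")
```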