Results 1 - 20 of 25
1.
BMC Vet Res ; 20(1): 106, 2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38493286

ABSTRACT

BACKGROUND: Feline herpesvirus type 1 (FHV-1) and feline calicivirus (FCV) are the primary co-infecting pathogens that cause upper respiratory tract disease (URTD) in cats. However, no visual detection assays are currently available for on-site testing. Here, we developed an ultrasensitive visual detection method that combines a dual recombinase polymerase amplification (dRPA) reaction with hybrid Cas12a/Cas13a trans-cleavage activities in a one-tube reaction system, referred to as the one-tube dRPA-Cas12a/Cas13a assay. RESULTS: Recombinant plasmid DNAs, crRNAs, and RPA oligonucleotides targeting the FCV ORF1 gene and the FHV-1 TK gene were prepared. Dual RPA reactions were then performed, followed by screening of the essential reaction components for the hybrid CRISPR-Cas12a (targeting the FHV-1 TK gene) and CRISPR-Cas13a (targeting the FCV ORF1 gene) trans-cleavage reactions. As a result, we established an ultrasensitive, visually detectable method for simultaneous detection of FCV and FHV-1 nucleic acids using dRPA and CRISPR/Cas-powered technology in a one-tube reaction system. Visual readouts were displayed using either a fluorescence detector (Fluor-based assay) or lateral flow dipsticks (LDF-based assay). The optimized assay was specific to FHV-1 and FCV, showing no cross-reactivity with other feline pathogens, and achieved limits of detection of 2.4 × 10^-1 copies/µL for the FHV-1 TK gene and 5.5 copies/µL for the FCV ORF1 gene. Furthermore, field testing with the dRPA-Cas12a/Cas13a assay and the reference real-time PCR methods was conducted on 56 clinical samples collected from cats with URTD.
The results of the Fluor-based assay were in excellent concordance with the reference real-time PCR methods, yielding high sensitivity (100% for both FHV-1 and FCV), specificity (100% for both FHV-1 and FCV), and consistency (Kappa values of 1.00 for both FHV-1 and FCV). However, several discordant results for FHV-1 detection were observed with the LDF-based assay, which calls for its prudent use and interpretation in clinical detection. Nevertheless, the dRPA-Cas12a/Cas13a assay with visual readouts will facilitate rapid and accurate detection of FHV-1 and FCV in resource-limited settings. CONCLUSIONS: The one-tube dRPA-Cas12a/Cas13a assay enables simultaneous, ultrasensitive, and visual detection of FHV-1 and FCV in a user-friendly format, providing unparalleled convenience for FHV-1 and FCV co-infection surveillance and decision-making in URTD management.
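The Kappa values of 1.00 reported for the Fluor-based assay reflect perfect assay-versus-PCR agreement. A minimal Cohen's kappa calculation over a 2×2 concordance table illustrates this; the positive/negative split of the 56 samples below is hypothetical, not the study's data.

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa for two binary tests from a 2x2 concordance table."""
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n  # raw agreement
    # Chance agreement expected if the two tests were independent
    p_a = (both_pos + a_only) / n
    p_b = (both_pos + b_only) / n
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_chance) / (1 - p_chance)

# Full concordance between assay and real-time PCR (hypothetical 20/36 split)
print(cohens_kappa(20, 0, 0, 36))  # -> 1.0
```

Any discordant cell (as seen with the LDF-based assay) pulls kappa below 1, which is why the dipstick readout warrants more cautious interpretation.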


Subjects
Feline Calicivirus, Herpesviridae, Varicellovirus, Cats, Animals, Recombinases/genetics, CRISPR-Cas Systems
2.
J Neurophysiol ; 127(6): 1622-1628, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35583972

ABSTRACT

Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory modality and the visual modality. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code.

NEW & NOTEWORTHY Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process was mainly studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects' category membership.
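Time-resolved multivariate pattern analysis of the kind described above trains and tests a decoder on the channel pattern at each EEG time point separately. A toy sketch of one such time point, using a nearest-centroid decoder on made-up two-channel "trials" (all data here are hypothetical, and the study's actual classifier is not specified in the abstract):

```python
def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Accuracy of a nearest-centroid decoder at a single time point.

    train_x/test_x: lists of per-trial feature vectors (one value per channel);
    train_y/test_y: class labels (e.g., object categories).
    """
    classes = sorted(set(train_y))
    centroids = {}
    for c in classes:
        vecs = [x for x, y in zip(train_x, train_y) if y == c]
        # Mean pattern across training trials of this class
        centroids[c] = [sum(col) / len(vecs) for col in zip(*vecs)]
    correct = 0
    for x, y in zip(test_x, test_y):
        dists = {c: sum((a - b) ** 2 for a, b in zip(x, m))
                 for c, m in centroids.items()}
        if min(dists, key=dists.get) == y:
            correct += 1
    return correct / len(test_y)

# Two synthetic "categories" separated along channel 1
train_x = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.1], [0.9, 0.0]]
train_y = [0, 0, 1, 1]
test_x = [[0.05, 0.05], [0.95, 0.05]]
test_y = [0, 1]
print(nearest_centroid_accuracy(train_x, train_y, test_x, test_y))  # -> 1.0
```

Repeating this across every time point yields the decoding time course from which the identity-to-category transition is read off.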


Subjects
Brain Mapping, Brain, Auditory Perception, Cognition, Humans, Visual Pattern Recognition, Photic Stimulation/methods, Ocular Vision
3.
Sensors (Basel) ; 20(17)2020 Sep 03.
Article in English | MEDLINE | ID: mdl-32899441

ABSTRACT

This study explored how the type and visual modality of a recommendation agent's identity affect male university students' (1) self-reported responses to an agent-recommended symbolic brand when evaluating the naturalness of virtual agents, human or artificial intelligence (AI), and (2) early event-related potential (ERP) responses at text- and face-specific scalp locations. Twenty-seven participants (M = 25.26, SD = 5.35) whose consumption was motivated more by symbolic than by functional needs were instructed to perform a visual task evaluating the naturalness of the target stimuli. As hypothesized, the subjective evaluations showed less favorable attitudes and higher perceived unnaturalness when the symbolic brand was recommended by AI rather than by a human. Based on these self-reports, two epochs were segmented for the ERP analysis: human-natural and AI-unnatural. P100 amplitude modulation by the visual modality of the two agents revealed that evaluation relied more on the face image than on the text. This tendency was consistently observed in the N170 amplitude when the agent identity was defined as human. However, when the agent identity was defined as AI, a reversed N170 modulation was observed, indicating that participants referred more to textual than to graphical information to assess the naturalness of the agent.


Subjects
Artificial Intelligence, Evoked Potentials, Intelligence, Adult, Electroencephalography, Face, Humans, Male, Young Adult
4.
Scand J Psychol ; 57(4): 292-7, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27241617

ABSTRACT

An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of presentation modality on the processing of stimuli, and it too has been robustly found in a large body of research. However, no study to date has examined modality effects in ego depletion, an issue the current study addresses. In Experiment 1, all participants completed a handgrip task; participants in one group then completed a visual attention regulation task while those in the other group completed an auditory attention regulation task, after which all participants completed the handgrip task again. The ego depletion phenomenon was observed with both the visual and the auditory attention regulation tasks. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then completed a second visual or auditory attention control task. Depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion.


Subjects
Attention, Auditory Perception, Ego, Self-Control, Visual Perception, Acoustic Stimulation, Affect, Female, Hand Strength, Humans, Male, Photic Stimulation
5.
Data Brief ; 55: 110672, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39071970

ABSTRACT

Diverse traditional machine learning and deep learning models have been designed for multimodal music information retrieval (MIR) applications such as multimodal music sentiment analysis, genre classification, recommender systems, and emotion recognition, making such models indispensable for MIR tasks. However, solving these tasks in a data-driven manner depends on the availability of high-quality benchmark datasets, so datasets tailored for multimodal MIR applications are essential. While a handful of multimodal datasets exist for distinct MIR applications, they are not available in low-resourced languages, such as the Sotho-Tswana languages. In response to this gap, we introduce a novel multimodal MIR dataset for various MIR applications. The dataset centres on Sotho-Tswana musical videos, encompassing the textual, visual, and audio modalities of Sotho-Tswana musical content. The musical videos were downloaded from YouTube, and Python programs were written to process them and extract relevant spectral-based acoustic features using different Python libraries. Annotation of the dataset was done manually by native speakers of Sotho-Tswana languages, who understand the culture and traditions of the Sotho-Tswana people. The dataset is distinctive in that, to our knowledge, no such dataset has been established until now.

6.
Article in English | MEDLINE | ID: mdl-38777989

ABSTRACT

To effectively process the most relevant information, the brain anticipates the optimal timing for allocating attentional resources. Behavior can be optimized by automatically aligning attention with external rhythmic structures, whether visual or auditory. Although the auditory modality is known for its efficacy in representing temporal information, the current body of research has not conclusively determined whether visual or auditory rhythmic presentations have a definitive advantage in entraining temporal attention. The present study directly examined the effects of auditory and visual rhythmic cues on the discrimination of visual targets in Experiment 1 and of auditory targets in Experiment 2. Additionally, the role of endogenous spatial attention was also considered. When and where the target was most likely to occur were cued by unimodal (visual or auditory) and bimodal (audiovisual) signals. A sequence of salient events was employed to elicit rhythm-based temporal expectations, and a symbolic predictive cue served to orient spatial attention. The results suggest a superiority of auditory over visual rhythms, irrespective of spatial attention, whether the spatial cue and rhythm converge or not (unimodal or bimodal), and regardless of the target modality (visual or auditory). These findings are discussed in terms of modality-specific rhythmic orienting, alongside a single, supramodal system operating in a top-down manner for endogenous spatial attention.

7.
Q J Exp Psychol (Hove) ; 76(5): 1086-1097, 2023 May.
Article in English | MEDLINE | ID: mdl-35570680

ABSTRACT

Prospective memory (PM) is the ability to perform an intended action when the appropriate conditions occur. Several features play a role in the successful retrieval of an intention: the activity we are concurrently engaged in, the number of intentions we are maintaining, where our attention is focused (outward vs. toward inner states), and how salient the trigger of the intention is. Another factor that may play a crucial role is sensory modality: do auditory and visual stimuli prompt PM processing in the same way? In this study, we explored for the first time the nature of PM for auditory stimuli and the presence of modality-dependent differences in PM processing. To do so, an identical paradigm composed of multiple PM tasks was administered in two versions, one with auditory stimuli and one with visual stimuli. Each PM task differed in features such as focality, salience, and number of intentions (factors known in the literature to modulate the monitoring and maintenance demands of PM) in order to explore the impact of sensory modality on a broad variety of classical PM tasks. In general, PM processing showed similar patterns between modalities, especially for low-demand prospective instructions. Conversely, substantial differences were found when the prospective load was increased and monitoring demands were enhanced: participants were significantly slower and less accurate with acoustic stimuli. These results represent the first evidence that modality-dependent effects arise in PM processing, especially in its interaction with features such as task difficulty and increased monitoring load.


Subjects
Episodic Memory, Humans, Attention, Auditory Perception, Intention, Acoustic Stimulation
8.
Cogn Res Princ Implic ; 8(1): 62, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37794290

ABSTRACT

The present study examined whether scaling direction and perceptual modality affect children's spatial scaling. Children aged 6-8 years (N = 201) were assigned to a visual, a visuo-haptic, or a haptic condition in which they were presented with colourful, embossed graphics. In the haptic condition, they were asked to wear a blindfold during the test trials. Across several trials, children were asked to learn the position of a target on a map and to place a disc at the same location in a referent space. Scaling factor was manipulated systematically, so that children had to either scale up or scale down spatial information. Their absolute deviations from the correct target location, reversal and signed errors, and response times served as dependent variables. Results revealed higher absolute deviations and response times for the haptic modality than for the visual modality. Children's signed errors, however, showed similar response strategies across the perceptual conditions. It therefore seems that functional equivalence between vision and touch emerges slowly across development for spatial scaling. With respect to scaling direction, absolute deviations were affected by scaling factors, with symmetric increases for scaling up and scaling down in the haptic condition. Conversely, children showed an unbalanced pattern in the visual conditions, with higher accuracy in scaling down than in scaling up. Overall, our findings suggest that visibility factors into children's scaling process.


Subjects
Touch Perception, Touch, Humans, Child, Touch/physiology, Touch Perception/physiology, Learning, Reaction Time, Health Services
9.
Chronobiol Int ; 40(4): 515-528, 2023 04.
Article in English | MEDLINE | ID: mdl-36912022

ABSTRACT

The main objective of the current study was to investigate the effect of time of day on visual and auditory short-term memory (STM) and long-term memory (LTM) distortions using a hybrid Deese-Roediger-McDermott procedure. In Experiment 1 we used semantically related words, whereas in Experiment 2 the words were characterized by phonological similarity. The results showed a relationship between modality and type of stimuli. In STM, more semantic errors were found in the evening for items presented visually, and more errors following auditory presentation for phonologically similar words. In LTM, the analysis revealed a higher rate of semantic distortions in the evening hours for auditorily presented words. For words with phonological similarity, we observed more errors in the evening without an effect of modality. The results support the hypothesis that more reliance is placed on elaborative processing in the evening and on maintenance processing in the morning; however, this is not modality invariant.


Subjects
Circadian Rhythm, Recognition (Psychology), Short-Term Memory, Semantics
10.
Atten Percept Psychophys ; 84(6): 1994-2001, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34725775

ABSTRACT

People can usually estimate the correct position of a moving object even when it temporarily moves behind an occlusion. This type of occluded motion has been studied in the laboratory with prediction motion (PM) tasks. Previous publications have emphasized that people may use mental imagery or the oculomotor system to estimate the arrival of a moving stimulus at a target place. Nevertheless, these two accounts cannot explain the performance differences found under different sets of conditions. Our study tested the role of time structure in a time-to-collision (TTC) task using the visual and auditory modalities. In the visual condition, a moving red bar travelled from left to right and was invisible during the entire course but flashed at the initial and occluded points. The auditory condition was identical to the visual condition, except that the flashes were replaced by clicks at the initial and occluded points. The results illustrated that participants' performance was better in the equal time structure condition. The comparison between the two sensory modalities demonstrated a similar tendency, suggesting that common cognitive processes may operate across the visual and auditory modalities when participants use temporal cues to judge TTC.


Subjects
Motion Perception, Auditory Perception, Cues (Psychology), Humans, Motion (Physics), Photic Stimulation/methods
11.
Heliyon ; 8(5): e09469, 2022 May.
Article in English | MEDLINE | ID: mdl-35647346

ABSTRACT

Prior knowledge of color, such as traffic rules (blue/green means "go" and red means "stop"), can influence reaction times (RTs). Specifically, in a Go/No-go task in which signals were presented by a light-emitting diode (LED) lighting device, RT has been reported to be longer when responding to a red signal and withholding the response to a blue signal (Red Go/Blue No-go task) than when responding to a blue signal and withholding the response to a red signal (Blue Go/Red No-go task). In recent years, driving simulators have been shown to be effective in the evaluation and training of the driving skills of dementia and stroke patients. However, it is unknown whether the change in RT observed with the LED lighting device can be replicated with a monitor presenting signals that differ from real traffic lights in depth and texture. The purpose of this study was to elucidate whether a difference in visual modality (LED vs. monitor) influences the effect of prior knowledge of color on RTs. Fifteen participants performed a simple reaction task (blue and red signals), a Blue Go/Red No-go task, and a Red Go/Blue No-go task. Signals were presented from an LED lighting device (Light condition) and a liquid crystal display (LCD) monitor (Monitor condition). There was no significant difference in simple RT by signal color in either condition. In the Go/No-go tasks, there was a significant interaction between the type of signal presentation device and the color of the signal: RT was significantly longer in the Red Go/Blue No-go task than in the Blue Go/Red No-go task in the Light condition, but there was no significant difference between the two tasks in the Monitor condition. We interpret this to mean that blue and red signals presented from the LCD monitor were insufficient to evoke a perception of traffic lights compared with the LED.
This study suggests that a difference in the presentation modality (LED vs. monitor) of visual information can influence the level of object perception and, consequently, the effect of prior knowledge on behavioral responses.

12.
Cogn Sci ; 46(5): e13133, 2022 05.
Article in English | MEDLINE | ID: mdl-35613353

ABSTRACT

Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.


Subjects
Gestures, Sign Language, Humans, Language, Language Development, Linguistics
13.
Front Neurosci ; 15: 653965, 2021.
Article in English | MEDLINE | ID: mdl-34017235

ABSTRACT

Name recognition plays an important role in self-related cognitive processes and also contributes to a variety of clinical applications, such as autism spectrum disorder diagnosis and consciousness disorder analysis. However, most previous name-related studies adopted noninvasive EEG or fMRI recordings, which are limited by low spatial and low temporal resolution, respectively, so millisecond-level response latencies in precise brain regions could not be measured with these noninvasive recordings. Using invasive stereo-electroencephalography (SEEG) recordings, which have high resolution in both the spatial and temporal domains, the current study distinguished the neural response to one's own name from that to a stranger's name, and explored common active brain regions in the auditory and visual modalities. The neural activities were classified using spatiotemporal features of the high-gamma, beta, and alpha bands. Results showed that different names could be decoded using multi-region SEEG signals, and the best classification performance was achieved in the high-gamma (60-145 Hz) band, where auditory and visual modality-based name classification accuracies were 84.5 ± 8.3% and 79.9 ± 4.6%, respectively. Additionally, some single regions, such as the supramarginal gyrus, middle temporal gyrus, and insula, also achieved remarkable accuracies for both modalities, supporting their roles in the processing of self-related information. The average latency of the difference between the two responses in these regions was 354 ± 63 ms in the auditory modality and 285 ± 59 ms in the visual modality. This study suggests that name recognition is supported by a distributed brain network, and that the subsets with decoding capabilities might be potential implantation regions for awareness detection and cognition evaluation.
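Band-limited features such as the high-gamma (60-145 Hz) power above can be illustrated with a naive DFT band-power computation. This is a toy sketch on a synthetic one-second signal, not the study's actual feature-extraction pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power in [f_lo, f_hi] Hz via a naive one-sided DFT."""
    n = len(signal)
    power = 0.0
    count = 0
    for k in range(n // 2 + 1):
        freq = k * fs / n  # frequency of DFT bin k
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / count if count else 0.0

fs = 1000  # Hz sampling rate
sig = [math.sin(2 * math.pi * 100 * t / fs) for t in range(fs)]  # 100 Hz tone
# The 100 Hz component lies inside the 60-145 Hz high-gamma band,
# so high-gamma power dominates alpha (8-12 Hz) power
print(band_power(sig, fs, 60, 145) > band_power(sig, fs, 8, 12))  # -> True
```

In practice an FFT with windowing would replace the quadratic-time DFT loop; the sketch only shows what "power in a band" means for such features.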

14.
Brain Res ; 1755: 147277, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33422540

ABSTRACT

In the present study, we used an innovative music-rest interleaved fMRI paradigm to investigate the neural correlates of tinnitus distress. Tinnitus is a poorly understood hearing disorder in which individuals perceive sounds in the absence of an external source. Although the great majority of individuals habituate to chronic tinnitus and report few symptoms, a minority report debilitating distress and annoyance. Prior research suggests that a diverse set of brain regions, including the attention, salience, and limbic networks, play key roles in mediating both the perception of tinnitus and its impact on the individual; however, evidence of the degree and extent of their involvement has been inconsistent. Here, we minimally modified conventional resting-state fMRI by interleaving it with segments of jazz music. We found that the functional connectivity between a set of brain regions (including the cerebellum, precuneus, superior/middle frontal gyrus, and primary visual cortex) and seeds in the dorsal attention network, the salience network, and the amygdala was effective in fractionating the tinnitus patients into two subgroups characterized by the severity of tinnitus-related distress. Further, our findings revealed cross-modal modulation of the attention and salience networks by the visual modality during the music segments. On average, the more bothersome the reported tinnitus, the stronger the exhibited inter-network functional connectivity. This study substantiates the essential role of the attention, salience, and limbic networks in tinnitus habituation, and suggests modulation of the attention and salience networks across the auditory and visual modalities as a possible compensatory mechanism for bothersome tinnitus.
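Seed-based functional connectivity of the kind analyzed above is commonly quantified as the Pearson correlation between a seed's time course and another region's time course. A minimal sketch with made-up time series (not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation between two equally long time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Toy "BOLD" time courses: the region tracks the seed with a constant offset,
# so their functional connectivity is maximal
seed = [1, 5, 3, 9, 2, 7]
region = [2, 6, 4, 10, 3, 8]
print(pearson_r(seed, region))  # -> 1.0
```

Comparing such correlations between music and rest segments is one way the reported cross-modal modulation of connectivity could be indexed.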


Subjects
Brain/physiopathology, Emotions/physiology, Nerve Net/physiopathology, Rest/physiology, Tinnitus/physiopathology, Brain Mapping, Humans, Neural Networks (Computer), Neural Pathways/physiopathology
15.
Front Psychol ; 11: 132, 2020.
Article in English | MEDLINE | ID: mdl-32116935

ABSTRACT

The mood and atmosphere of a service setting are essential factors in the way customers evaluate their shopping experience in a retail store environment. Scholars have shown that background music has a strong effect on consumer behavior. Retailers design novel environments in which appropriate music can elevate the shopping experience. While previous findings highlight the effects of background music on consumer behavior, the extent to which recognition of store atmosphere varies with genre of background music in sales spaces is unknown. We conducted an eye tracking experiment to evaluate the effect of background music on the perceived atmosphere of a service setting. We used a 2 (music genre: jazz song with slow tempo vs. dance song with fast tempo) × 1 (visual stimuli: image of coffee shop) within-subject design to test the effect of music genre on visual perception of a physical environment. Results show that the fixation values during the slow tempo music were at least two times higher than the fixation values during the fast tempo music and that the blink values during the fast tempo music were at least two times higher than the blink values during the slow tempo music. Notably, initial and maximum concentration differed by music type. Our findings also indicate that differences in scan paths and locations between the slow tempo music and the fast tempo music changed over time. However, average fixation values were not significantly different between the two music types.

16.
Top Cogn Sci ; 12(3): 875-893, 2020 07.
Article in English | MEDLINE | ID: mdl-31495072

ABSTRACT

Artificial grammar learning (AGL) has become an important tool used to understand aspects of human language learning and whether the abilities underlying learning may be unique to humans or found in other species. Successful learning is typically assumed when human or animal participants are able to distinguish stimuli generated by the grammar from those that are not at a level better than chance. However, the question remains as to what subjects actually learn in these experiments. Previous studies of AGL have frequently introduced multiple potential contributors to performance in the training and testing stimuli, but meta-analysis techniques now enable us to consider these multiple information sources for their contribution to learning, allowing intended and unintended structures to be assessed simultaneously. We present a blueprint for meta-analysis approaches to appraise the effect of learning in human and other animal studies for a series of artificial grammar learning experiments, focusing on studies that examine the auditory and visual modalities. We identify a series of variables that differ across these studies, focusing on both structural and surface properties of the grammar and on characteristics of the training and test regimes, and provide a first step in assessing the relative contribution of these design features of artificial grammars, as well as species-specific effects, to learning.
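A meta-analysis of the kind outlined above ultimately pools per-study effect sizes into one estimate. A minimal inverse-variance (fixed-effect) pooling sketch, with hypothetical effect sizes and sampling variances standing in for real AGL studies:

```python
def fixed_effect_estimate(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error
    (fixed-effect meta-analysis model)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5  # SE of the pooled estimate
    return pooled, se

# Three hypothetical AGL studies: learning effect sizes with their variances
pooled, se = fixed_effect_estimate([0.4, 0.6, 0.5], [0.01, 0.04, 0.02])
print(round(pooled, 3))  # -> 0.457
```

Moderator variables such as grammar structure or modality would enter a fuller (e.g., meta-regression) model; this sketch only shows the basic pooling step.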


Subjects
Learning, Meta-Analysis as Topic, Psycholinguistics, Adult, Animals, Child, Effect Modifier (Epidemiologic), Humans
17.
Front Aging Neurosci ; 11: 165, 2019.
Article in English | MEDLINE | ID: mdl-31316374

ABSTRACT

Exploratory behavior and responsiveness to novelty play an important role in maintaining cognitive function in older adults. Inferences about age- or disease-related differences in neural and behavioral responses to novelty are most often based on results from single experimental testing sessions. There has been very limited research on whether such findings represent stable characteristics of the populations studied, which is essential if investigators are to determine the result of interventions aimed at promoting exploratory behaviors or draw appropriate conclusions about differences in the processing of novelty across diverse clinical groups. The goal of the current study was to investigate the short-term test-retest reliability of event-related potential (ERP) and behavioral responses to novel stimuli in cognitively normal older adults. ERPs and viewing durations were recorded in 70 healthy older adults participating in a subject-controlled visual novelty oddball task during two sessions occurring 7 weeks apart. Mean midline P3 amplitude and latency, mean midline amplitude during successive 50 ms intervals, temporospatial factors derived from principal component analysis (PCA), and viewing duration in response to novel stimuli were measured during each session. Analysis of variance (ANOVA) revealed no reliable differences in any of these measures between Times 1 and 2. Intraclass correlation coefficients (ICCs) between Times 1 and 2 were excellent for mean P3 amplitude (ICC = 0.86), the two temporospatial factors consistent with the P3 components (ICCs of 0.88 and 0.76), and viewing duration of novel stimuli (ICC = 0.81). Reliability was only fair for P3 peak latency (ICC = 0.56). Successive 50 ms mean amplitude measures from 100 to 1,000 ms yielded fair to excellent reliabilities, and all but one of the 12 temporospatial factors identified demonstrated ICCs in the good to excellent range.
We conclude that older adults demonstrate substantial stability in ERP and behavioral responses to novel visual stimuli over a 7-week period. These results suggest that older adults may have a characteristic way of processing novelty that appears resistant to transient changes in their environment or internal states, which can be indexed during a single testing session. The establishment of reliable measures of novelty processing will allow investigators to determine whether proposed interventions have an impact on this important aspect of behavior.
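The ICCs reported above can be illustrated with a small two-session computation of one common variant, the two-way consistency ICC(3,1); the exact ICC model used by the study is not specified here, and the measurements below are hypothetical, not the study's data.

```python
def icc_consistency(sessions):
    """Two-way mixed-model consistency ICC(3,1).

    sessions: one inner list per subject, each holding that subject's
    k repeated measurements (e.g., P3 amplitude at Times 1 and 2).
    """
    n = len(sessions)        # subjects
    k = len(sessions[0])     # sessions per subject
    grand = sum(sum(s) for s in sessions) / (n * k)
    subj_means = [sum(s) / k for s in sessions]
    sess_means = [sum(s[j] for s in sessions) / n for j in range(k)]
    # ANOVA decomposition: between-subjects vs residual mean squares
    ms_rows = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ss_total = sum((x - grand) ** 2 for s in sessions for x in s)
    ss_cols = n * sum((m - grand) ** 2 for m in sess_means)
    ss_err = ss_total - k * sum((m - grand) ** 2 for m in subj_means) - ss_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Perfectly reproduced measurements across two sessions give ICC = 1.0
print(icc_consistency([[10, 10], [12, 12], [14, 14]]))  # -> 1.0
```

Session-to-session noise shrinks the residual-free between-subjects variance and pulls the ICC below 1, which is what the fair-to-excellent range of reported values reflects.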

18.
Front Integr Neurosci ; 10: 13, 2016.
Article in English | MEDLINE | ID: mdl-27013994

ABSTRACT

Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration is dependent on attentional resources. Previous research suggested that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual-task paradigm: participants performed a self-terminated visual search task and a localization task either in separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The tasks interfered considerably. However, participants performed the visual search task faster when the localization task was performed in the tactile modality than when both tasks were performed within the visual modality. This finding indicates that tasks performed in separate sensory modalities rely in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization partly rely on distinct attentional resources, and that optimal visuotactile integration is not dependent on attentional resources.

19.
Top Cogn Sci ; 7(1): 2-11, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25565249

ABSTRACT

For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.


Subjects
Gestures , Language , Pattern Recognition, Visual/physiology , Sign Language , Humans , Language Development
20.
Top Cogn Sci ; 7(1): 36-60, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25472492

ABSTRACT

Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.


Subjects
Gestures , Sign Language , Speech/physiology , Female , Humans , Language , Language Development , Linear Models , Pattern Recognition, Visual/physiology , Persons With Hearing Impairments