Results 1 - 8 of 8
1.
J Eur Acad Dermatol Venereol ; 37(7): 1426-1434, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36950946

ABSTRACT

BACKGROUND: Implicit visual skills play an important role in the recognition of skin-related conditions. OBJECTIVES: We aimed to evaluate the effectiveness and practicability of digital perceptual learning modules (PLMs) in undergraduate dermatology teaching. METHODS: The study comprised four consecutive dermatology courses including 105 medical students. PLMs designed for an online setting were administered before, during and at the end of each course, as well as 6-12 months after the courses (N = 33). We investigated four important outcome measures of perceptual learning: diagnostic accuracy (percentage of correct responses), decision duration (response time), recognized features (decision basis) and student-perceived confidence. RESULTS: Diagnostic accuracy (p < 0.001, effect size ηp² = 0.82), fluency (p < 0.001, ηp² = 0.23) and confidence (p < 0.001, ηp² = 0.74) increased significantly over successive PLMs during the courses. Students classified more visual features and based their diagnoses more on the primary lesion. Accuracy increased in all tasks during the courses, exceeding 90% for diagnoses in the first to third task-difficulty quartiles and reaching 60% in the most difficult quartile. At follow-up, students' performance remained at a high level. Analysis of diagnostic errors showed that specific conditions were systematically confused with each other. CONCLUSIONS: Digital PLMs produced high rates of diagnostic accuracy, fluency and student-perceived confidence in the recognition of skin-related conditions. The long-term consistency of high performance suggests effective learning retention. In the digital setting, PLMs were practicable and easily integrated into traditional teaching. We believe there is extensive potential for wider use of perceptual learning to improve nonanalytical visual skills in dermatology and in medical education in general.


Subject(s)
Dermatology , Skin Diseases , Students, Medical , Humans , Dermatology/education , Cohort Studies , Finland , Learning
2.
Neuroimage ; 263: 119631, 2022 11.
Article in English | MEDLINE | ID: mdl-36113736

ABSTRACT

Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has examined the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in the intensity and category (happy, angry, surprised) of the expression. We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and of models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model, however, did not correlate significantly at any EEG timepoint. Our results show a specific role for the IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.


Subject(s)
Emotions , Magnetic Resonance Imaging , Humans , Emotions/physiology , Magnetic Resonance Imaging/methods , Brain Mapping/methods , Facial Expression , Electroencephalography
3.
Neuroimage ; 209: 116531, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31931156

ABSTRACT

The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating representational dissimilarity matrices (RDMs) derived from multiple pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from primary visual cortex (V1), and later time windows (starting around 190 ms) to match data from the lateral occipital area, fusiform face complex, and temporal-parietal-occipital junction (TPOJ). According to model comparisons, the EEG classification results were based more on low-level visual features than on expression intensities or categories. In fMRI, the model comparisons revealed a change along the processing hierarchy, from low-level visual feature coding in V1 to coding of expression intensity in the right TPOJ. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
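The core RSA step this abstract describes, correlating an RDM from one modality with an RDM from another, can be sketched as follows. The toy RDMs and the function name are illustrative, not the study's actual pipeline; in the study the RDM entries came from pairwise SVM classification of responses to the four expressions.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Toy representational dissimilarity matrices (RDMs) for 4 expression
# conditions (neutral, happy, fearful, angry): one from EEG decoding in a
# given time window, one from fMRI response patterns in a given region.
n_conditions = 4
eeg_rdm = rng.random((n_conditions, n_conditions))
eeg_rdm = (eeg_rdm + eeg_rdm.T) / 2          # dissimilarities are symmetric
np.fill_diagonal(eeg_rdm, 0)                  # zero self-dissimilarity
fmri_rdm = eeg_rdm + 0.1 * rng.random((n_conditions, n_conditions))
fmri_rdm = (fmri_rdm + fmri_rdm.T) / 2
np.fill_diagonal(fmri_rdm, 0)

def rdm_correlation(rdm_a, rdm_b):
    """Spearman correlation between the lower triangles of two RDMs,
    a standard way to compare representational geometries."""
    tri_a = squareform(rdm_a, checks=False)   # condensed vector of pairwise dissimilarities
    tri_b = squareform(rdm_b, checks=False)
    rho, _ = spearmanr(tri_a, tri_b)
    return rho

print(rdm_correlation(eeg_rdm, fmri_rdm))
```

Repeating this correlation for each EEG time window against each fMRI region yields the time-by-region matching pattern reported in the abstract.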


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Electroencephalography , Emotions/physiology , Facial Recognition/physiology , Magnetic Resonance Imaging , Adult , Cerebral Cortex/diagnostic imaging , Female , Humans , Male , Time Factors , Young Adult
4.
Sci Rep ; 9(1): 892, 2019 01 29.
Article in English | MEDLINE | ID: mdl-30696943

ABSTRACT

Simple visual items and complex real-world objects are stored in visual working memory as collections of independent features, not as whole or integrated objects. Storing faces in memory might differ, however, since previous studies have reported a perceptual and memory advantage for whole faces compared with other objects. We investigated whether facial features can be integrated in a statistically optimal fashion and whether memory maintenance disrupts this integration. Observers adjusted a probe, either a whole face or isolated features (the eye or mouth region), to match the identity of a target while viewing both stimuli simultaneously or after a 1.5-second retention period. Precision was better for the whole face than for the isolated features. Perceptual precision was higher than memory precision, as expected, and memory precision declined further as the number of memorized items increased from one to four. Interestingly, whole-face precision was better predicted by models assuming injection of memory noise followed by integration of features than by models assuming integration of features followed by memory noise. The results suggest equally weighted or optimal integration of facial features and indicate that feature information is preserved in visual working memory while remembering faces.
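The statistically optimal integration tested here is maximum-likelihood (inverse-variance-weighted) combination: the precisions of independent feature estimates add, so the whole-face error should be lower than either feature alone. A minimal sketch, with illustrative sigma values rather than the study's data:

```python
# Maximum-likelihood integration of two independent feature estimates.
# Sigma values below are illustrative, not measured thresholds.
sigma_eyes = 2.0    # adjustment error (s.d.) when matching the eye region alone
sigma_mouth = 3.0   # adjustment error (s.d.) when matching the mouth region alone

# Optimal integration: precisions (1/variance) add.
var_whole = 1.0 / (1.0 / sigma_eyes**2 + 1.0 / sigma_mouth**2)
sigma_whole = var_whole**0.5

# The integrated estimate weights each feature by its relative precision.
w_eyes = (1.0 / sigma_eyes**2) / (1.0 / sigma_eyes**2 + 1.0 / sigma_mouth**2)

print(sigma_whole)  # smaller than either single-feature sigma
print(w_eyes)       # the more precise feature gets the larger weight
```

The model comparison in the abstract amounts to asking whether memory noise is added to each feature's variance before this combination step or to the combined estimate afterwards.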


Subject(s)
Facial Expression , Memory , Humans , Models, Theoretical , Photic Stimulation , Recognition, Psychology , Visual Perception
5.
Cereb Cortex ; 28(2): 549-560, 2018 02 01.
Article in English | MEDLINE | ID: mdl-27999122

ABSTRACT

The fronto-parietal attention networks have been extensively studied with functional magnetic resonance imaging (fMRI), but the spatiotemporal dynamics of these networks are not well understood. We measured event-related potentials (ERPs) with electroencephalography (EEG) and collected fMRI data from identical experiments in which participants performed visual and auditory discrimination tasks separately or simultaneously, with or without distractors. To overcome the low temporal resolution of fMRI, we used a novel ERP-based application of multivariate representational similarity analysis (RSA) to parse time-averaged fMRI pattern activity into distinct spatial maps, each corresponding in representational structure to a short temporal ERP segment. Discriminant analysis of ERP-fMRI correlations revealed 8 cortical networks (2 sensory, 3 attention, and 3 other) segregated by 4 orthogonal, temporally multifaceted and spatially distributed functions. We interpret these functions as 4 spatiotemporal components of attention: modality-dependent and stimulus-driven orienting, top-down control, mode transition, and response preparation, selection and execution.


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Nerve Net/physiology , Visual Cortex/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/diagnostic imaging , Auditory Perception/physiology , Female , Humans , Male , Nerve Net/diagnostic imaging , Photic Stimulation/methods , Time Factors , Visual Cortex/diagnostic imaging , Visual Perception/physiology
6.
Brain Res ; 1655: 204-215, 2017 01 15.
Article in English | MEDLINE | ID: mdl-27815094

ABSTRACT

Gaming experience has been suggested to lead to performance enhancements in a wide variety of working memory tasks. Previous studies have, however, mostly focused on adult expert gamers and have not included measurements of both behavioral performance and brain activity. In the current study, 167 adolescents and young adults (aged 13-24 years) with different amounts of gaming experience performed an n-back working memory task with vowels, with the sensory modality of the vowel stream switching between audition and vision at random intervals. We studied the relationship between self-reported daily gaming activity, working memory (n-back) task performance and related brain activity measured using functional magnetic resonance imaging (fMRI). The results revealed that the extent of daily gaming activity was related to enhancements in both performance accuracy and speed during the most demanding (2-back) level of the working memory task. This improved working memory performance was accompanied by enhanced recruitment of a fronto-parietal cortical network, especially the dorsolateral prefrontal cortex. In contrast, during the less demanding (1-back) level of the task, gaming was associated with decreased activity in the same cortical regions. Our results suggest that a greater degree of daily gaming experience is associated with better working memory functioning and task difficulty-dependent modulation in fronto-parietal brain activity already in adolescence and even when non-expert gamers are studied. The direction of causality within this association cannot be inferred with certainty due to the correlational nature of the current study.


Subject(s)
Cerebral Cortex/physiology , Memory, Short-Term/physiology , Video Games/psychology , Adolescent , Analysis of Variance , Brain Mapping , Cerebral Cortex/growth & development , Cohort Studies , Creativity , Cross-Sectional Studies , Female , Humans , Internet , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Self Report , Surveys and Questionnaires , Young Adult
7.
Neuroimage ; 134: 113-121, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27063068

ABSTRACT

The current generation of young people indulges in more media multitasking behavior (e.g., instant messaging while watching videos) in their everyday lives than older generations. Concerns have been raised about how this might affect their attentional functioning, as previous studies have indicated that extensive media multitasking in everyday life may be associated with decreased attentional control. In the current study, 149 adolescents and young adults (aged 13-24 years) performed speech-listening and reading tasks that required maintaining attention in the presence of distractor stimuli in the other modality or dividing attention between two concurrent tasks. Brain activity during task performance was measured using functional magnetic resonance imaging (fMRI). We studied the relationship between self-reported daily media multitasking (MMT), task performance and brain activity during task performance. The results showed that in the presence of distractor stimuli, a higher MMT score was associated with worse performance and increased brain activity in right prefrontal regions. The level of performance during divided attention did not depend on MMT. This suggests that daily media multitasking is associated with behavioral distractibility and increased recruitment of brain areas involved in attentional and inhibitory control, and that media multitasking in everyday life does not translate to performance benefits in multitasking in laboratory settings.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Multitasking Behavior/physiology , Prefrontal Cortex/physiology , Reading , Task Performance and Analysis , Adolescent , Brain Mapping/methods , Female , Humans , Male , Multimedia , Nerve Net/physiology , Young Adult
8.
Atten Percept Psychophys ; 76(7): 1962-74, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24935809

ABSTRACT

A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered increased, the discrimination thresholds rose more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features belonged to the same or separate objects. Hence, simultaneously storing one visual and one auditory feature reduced memory precision as much as simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool shared across modalities.
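The shared-resource account suggested by these results can be sketched with a simple toy model in which a fixed precision budget is divided across all stored features, regardless of modality. The budget value and the even split are assumptions for illustration, not parameters estimated in the study:

```python
# Toy shared-resource model of working memory precision: a fixed total
# precision budget J_TOTAL is split evenly across the N stored features,
# whether they are visual, auditory, or a mix of both.
J_TOTAL = 4.0  # illustrative total precision budget (arbitrary units)

def discrimination_threshold(n_features):
    """Predicted delayed-discrimination threshold when n_features are stored.
    Precision per feature = J_TOTAL / n_features; threshold rises as the
    inverse of that precision."""
    precision_per_feature = J_TOTAL / n_features
    return 1.0 / precision_per_feature

# Storing four features (any mix of visual and auditory) predicts a
# fourfold threshold increase relative to storing one feature.
print(discrimination_threshold(1))  # 0.25
print(discrimination_threshold(4))  # 1.0
```

Because the budget is modality-blind, the model predicts exactly the pattern reported: one visual plus one auditory feature costs as much precision as two features from the same modality.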


Subject(s)
Auditory Perception/physiology , Memory, Short-Term/physiology , Visual Perception/physiology , Acoustic Stimulation , Analysis of Variance , Attention/physiology , Cues , Discrimination, Psychological/physiology , Humans , Orientation/physiology , Photic Stimulation , Sensory Thresholds