Results 1 - 7 of 7
1.
Nat Commun ; 15(1): 6241, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39048577

ABSTRACT

Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.


Subject(s)
Brain Mapping , Brain , Magnetic Resonance Imaging , Metadata , Visual Perception , Humans , Magnetic Resonance Imaging/methods , Visual Perception/physiology , Male , Female , Brain Mapping/methods , Adult , Brain/physiology , Brain/diagnostic imaging , Parietal Lobe/physiology , Parietal Lobe/diagnostic imaging , Young Adult , Photic Stimulation , Video Recording
2.
PLoS Biol ; 22(4): e3002564, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557761

ABSTRACT

Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of high-memorability images, compared to low-memorability images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex, with a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and banks of the superior temporal sulcus. Image memorability magnitude is represented after high-level feature processing in visual regions and reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of the visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.
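Representational similarity analysis, as used here to fuse fMRI (where) with MEG (when), compares the stimulus-by-stimulus dissimilarity structure of two measurement modalities. A minimal sketch in Python/NumPy, with toy data standing in for real recordings (all array shapes and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_fusion(fmri_patterns, meg_patterns_over_time):
    """Correlate an fMRI ROI's RDM with the MEG RDM at each time point,
    yielding a time course of representational similarity for that ROI."""
    fmri_rdm = rdm(fmri_patterns)
    iu = np.triu_indices_from(fmri_rdm, k=1)  # upper triangle, no diagonal
    return np.array([
        np.corrcoef(fmri_rdm[iu], rdm(meg_t)[iu])[0, 1]
        for meg_t in meg_patterns_over_time
    ])

# toy data: 10 conditions x 50 fMRI voxels; 5 MEG time points x 30 sensors,
# derived from the same conditions so the RDMs should agree
rng = np.random.default_rng(0)
conditions = rng.standard_normal((10, 50))
fusion = rsa_fusion(
    conditions,
    [conditions[:, :30] + 0.1 * rng.standard_normal((10, 30)) for _ in range(5)],
)
```

Because the toy MEG patterns are noisy views of the same condition structure, the fusion time course comes out strongly positive at every time point.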


Subject(s)
Brain , Visual Perception , Animals , Humans , Visual Perception/physiology , Brain/physiology , Cerebral Cortex/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Magnetoencephalography/methods , Magnetic Resonance Imaging/methods , Brain Mapping/methods
3.
Elife ; 132024 Apr 30.
Article in English | MEDLINE | ID: mdl-38686919

ABSTRACT

Gait is impaired in musculoskeletal conditions, such as knee arthropathy. Gait analysis is used in clinical practice to inform diagnosis and monitor disease progression or intervention response. However, clinical gait analysis relies on subjective visual observation of walking as objective gait analysis has not been possible within clinical settings due to the expensive equipment, large-scale facilities, and highly trained staff required. Relatively low-cost wearable digital insoles may offer a solution to these challenges. In this work, we demonstrate how a digital insole measuring osteoarthritis-specific gait signatures yields similar results to the clinical gait-lab standard. To achieve this, we constructed a machine learning model, trained on force plate data collected in participants with knee arthropathy and controls. This model was highly predictive of force plate data from a validation set (area under the receiver operating characteristics curve [auROC] = 0.86; area under the precision-recall curve [auPR] = 0.90) and of a separate, independent digital insole dataset containing control and knee osteoarthritis subjects (auROC = 0.83; auPR = 0.86). After showing that digital insole-derived gait characteristics are comparable to traditional gait measurements, we next showed that a single stride of raw sensor time-series data could be accurately assigned to each subject, highlighting that individuals using digital insoles can be identified by their gait characteristics. This work provides a framework for a promising alternative to traditional clinical gait analysis methods, adds to the growing body of knowledge regarding wearable technology analytical pipelines, and supports clinical development of at-home gait assessments, with the potential to improve the ease, frequency, and depth of patient monitoring.
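The model above is scored with auROC and auPR. auROC has a simple rank-based reading, the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, which the following self-contained sketch computes directly (the labels and scores are made-up toy values, not data from the study):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the fraction
    of positive/negative pairs in which the positive case scores higher
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: 1 = knee arthropathy, 0 = control; scores are model outputs
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
score = auroc(labels, scores)  # 3 of the 4 pos/neg pairs ranked correctly -> 0.75
```

In practice one would use a library routine (e.g. scikit-learn's `roc_auc_score`), but the pairwise definition makes clear what an auROC of 0.86 means: an 86% chance that the model ranks a randomly drawn patient above a randomly drawn control.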


The way we walk, our 'gait', is a key indicator of health. Gait irregularities like limping, shuffling or a slow pace can be signs of muscle or joint problems. Assessing a patient's gait is therefore an important element in diagnosing these conditions, and in evaluating whether treatments are working. Gait is often assessed via a simple visual inspection, with patients being asked to walk back and forth in a doctor's office. While quick and easy, this approach is highly subjective and therefore imprecise. 'Objective gait analysis' is a more accurate alternative, but it relies on tests being conducted in specialised laboratories with large-scale, expensive equipment operated by highly trained staff. Unfortunately, this means that gait laboratories are not accessible for everyday clinical use. In response, Wipperman et al. aimed to develop a low-cost alternative to the complex equipment used in gait laboratories. To do this, they harnessed wearable sensor technologies: devices that can directly measure physiological data while embedded in clothing or attached to the user. Wearable sensors have the advantage of being cheap, easy to use, and able to provide clinically useful information without specially trained staff. Wipperman et al. analysed data from classic gait laboratory devices, as well as 'digital insoles' equipped with sensors that captured foot movements and pressure as participants walked. The analysis was first 'trained' on data from gait laboratory devices (called force plates) and then applied to gait measurements obtained from digital insoles worn by either healthy participants or patients with knee problems. Analysis of the pressure data from the insoles confirmed that they could accurately predict which measurements were from healthy individuals, and which were from patients.
The gait characteristics detected by the insoles were also comparable to lab-based measurements; in other words, the insoles provided a similar type and quality of data as a gait laboratory. Further analysis revealed that data from just a single step could reveal additional information about the subject's walking. These results support the use of wearable devices as a simple and relatively inexpensive way to measure gait in everyday clinical practice, without the need for specialised laboratories and visits to the doctor's office. Although the digital insoles will require further analytical and clinical study before they can be widely used, Wipperman et al. hope they will eventually make monitoring muscle and joint conditions easier and more affordable.


Subject(s)
Gait , Machine Learning , Osteoarthritis, Knee , Wearable Electronic Devices , Humans , Gait/physiology , Male , Female , Osteoarthritis, Knee/physiopathology , Osteoarthritis, Knee/diagnosis , Middle Aged , Aged , Gait Analysis/methods , Gait Analysis/instrumentation
4.
Elife ; 122023 07 04.
Article in English | MEDLINE | ID: mdl-37401757

ABSTRACT

The theta rhythm, a quasi-periodic 4-10 Hz oscillation, is observed during memory processing in the hippocampus, with different phases of theta hypothesized to separate independent streams of information related to the encoding and recall of memories. At the cellular level, the discovery of hippocampal memory cells (engram neurons), as well as the modulation of memory recall through optogenetic activation of these cells, has provided evidence that certain memories are stored, in part, in a sparse ensemble of neurons in the hippocampus. In previous research, however, engram reactivation has been carried out using open-loop stimulation at fixed frequencies; the relationship between engram neuron reactivation and ongoing network oscillations has not been taken into consideration. To address this concern, we implemented a closed-loop reactivation of engram neurons that enabled phase-specific stimulation relative to theta oscillations in the local field potential in CA1. Using this real-time approach, we tested the impact of activating dentate gyrus engram neurons during the peak (encoding phase) and trough (recall phase) of theta oscillations. Consistent with previously hypothesized functions of theta oscillations in memory function, we show that stimulating dentate gyrus engram neurons at the trough of theta is more effective in eliciting behavioral recall than either fixed-frequency stimulation or stimulation at the peak of theta. Moreover, phase-specific trough stimulation is accompanied by an increase in the coupling between gamma and theta oscillations in CA1 hippocampus. Our results provide a causal link between phase-specific activation of engram cells and the behavioral expression of memory.
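Closed-loop, phase-specific stimulation requires detecting the target theta phase in the local field potential as it streams in. A toy sketch of the trough-detection step, assuming the signal has already been band-passed to the theta range (the simple three-sample local-minimum rule below stands in for the real-time phase estimator actually used, which the abstract does not specify):

```python
import math

def trough_triggers(filtered_lfp):
    """Return sample indices where the theta-filtered LFP stops falling and
    starts rising, i.e. candidate trough times for triggering stimulation."""
    return [i for i in range(1, len(filtered_lfp) - 1)
            if filtered_lfp[i - 1] > filtered_lfp[i] <= filtered_lfp[i + 1]]

# toy LFP: a clean 8 Hz theta oscillation sampled at 1 kHz for 1 s
fs = 1000
theta = [math.sin(2 * math.pi * 8 * t / fs) for t in range(fs)]
trigs = trough_triggers(theta)  # one trigger per theta cycle
```

On this clean 8 Hz signal the detector fires once per cycle, 125 samples (one theta period) apart; peak-locked stimulation would use the mirrored local-maximum rule.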


Subject(s)
Hippocampus , Neurons , Mice , Animals , Mice, Inbred C57BL , Neurons/physiology , Hippocampus/physiology , Memory/physiology , Theta Rhythm/physiology , Dentate Gyrus/physiology
5.
Cogn Neuropsychol ; 38(7-8): 468-489, 2021.
Article in English | MEDLINE | ID: mdl-35729704

ABSTRACT

How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.


Subject(s)
Brain Mapping , Semantics , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain Mapping/methods , Cochlea , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods
6.
Sci Rep ; 10(1): 4638, 2020 03 13.
Article in English | MEDLINE | ID: mdl-32170209

ABSTRACT

Research at the intersection of computer vision and neuroscience has revealed hierarchical correspondence between layers of deep convolutional neural networks (DCNNs) and the cascade of regions along the human ventral visual cortex. Recently, studies have uncovered the emergence of human-interpretable concepts within DCNN layers trained to identify visual objects and scenes. Here, we asked whether an artificial neural network (with convolutional structure) trained for visual categorization would demonstrate spatial correspondences with human brain regions showing central/peripheral biases. Using representational similarity analysis, we compared activations of convolutional layers of a DCNN trained for object and scene categorization with neural representations in human brain visual regions. Results reveal a brain-like topographical organization in the layers of the DCNN, such that activations of layer-units with central bias were associated with brain regions with foveal tendencies (e.g., fusiform gyrus), and activations of layer-units with selectivity for image backgrounds were associated with cortical regions showing peripheral preference (e.g., parahippocampal cortex). The emergence of a categorical topographical correspondence between DCNNs and brain regions suggests that these models are a good approximation of the perceptual representation generated by biological neural networks.


Subject(s)
Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male , Models, Neurological , Neural Networks, Computer , Photic Stimulation , Visual Cortex/diagnostic imaging , Young Adult
7.
Vision (Basel) ; 3(1)2019 Feb 10.
Article in English | MEDLINE | ID: mdl-31735809

ABSTRACT

To build a representation of what we see, the human brain recruits regions throughout the visual cortex in cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception in high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow for the interpretation of these spatiotemporal dynamics in a broader context.
