Results 1 - 20 of 213
1.
Nat Commun ; 15(1): 4843, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844440

ABSTRACT

Carbon quantum dots (CQDs) have versatile applications in luminescence, but identifying optimal synthesis conditions has been challenging due to numerous synthesis parameters and multiple desired outcomes, which create an enormous search space. In this study, we present a novel multi-objective optimization strategy that uses a machine learning (ML) algorithm to intelligently guide the hydrothermal synthesis of CQDs. Our closed-loop approach learns from limited and sparse data, greatly shortening the research cycle and surpassing traditional trial-and-error methods. It also reveals the intricate links between synthesis parameters and target properties, and unifies the objective function to optimize multiple desired properties simultaneously, such as full-color photoluminescence (PL) wavelength and high PL quantum yield (PLQY). With only 63 experiments, we achieve the synthesis of full-color fluorescent CQDs with PLQY exceeding 60% across all colors. Our study represents a significant advancement in ML-guided CQD synthesis, setting the stage for developing new materials with multiple desired properties.
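The unified, multi-objective selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the Gaussian desirability for wavelength, the geometric-mean scalarization, and the kernel-smoothing surrogate are all assumptions chosen for clarity.

```python
import numpy as np

def unified_objective(wavelength_nm, plqy, target_nm, nm_tol=30.0):
    """Scalarize two goals: hit a target PL wavelength and maximize PLQY.
    Geometric mean of two desirabilities, each in [0, 1]."""
    d_wl = np.exp(-0.5 * ((wavelength_nm - target_nm) / nm_tol) ** 2)
    d_qy = np.clip(plqy, 0.0, 1.0)
    return np.sqrt(d_wl * d_qy)

def propose_next(X_done, y_done, X_pool, length_scale=1.0):
    """Closed-loop step: pick the untried synthesis condition whose
    kernel-smoothed predicted score is highest."""
    X_done, X_pool = np.atleast_2d(X_done), np.atleast_2d(X_pool)
    d2 = ((X_pool[:, None, :] - X_done[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / length_scale ** 2) + 1e-12
    pred = (w * y_done).sum(1) / w.sum(1)
    return int(np.argmax(pred))
```

In a real campaign the proposed condition would be synthesized, measured, appended to `X_done`/`y_done`, and the loop repeated.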

2.
Article in English | MEDLINE | ID: mdl-38722724

ABSTRACT

The olfactory system enables humans to smell different odors, which are closely related to emotions. The high temporal resolution and non-invasiveness of Electroencephalogram (EEG) make it suitable to objectively study human preferences for odors. Effectively learning the temporal dynamics and spatial information from EEG is crucial for detecting odor-induced emotional valence. In this paper, we propose a deep learning architecture called Temporal Attention with Spatial Autoencoder Network (TASA) for predicting odor-induced emotions using EEG. TASA consists of a filter-bank layer, a spatial encoder, a time segmentation layer, a Long Short-Term Memory (LSTM) module, a multi-head self-attention (MSA) layer, and a fully connected layer. We improve upon the previous work by utilizing a two-phase learning framework, using the autoencoder module to learn the spatial information among electrodes by reconstructing the given input with a latent representation in the spatial dimension, which aims to minimize information loss compared to spatial filtering with CNN. The second improvement is inspired by the continuous nature of the olfactory process; we propose to use LSTM-MSA in TASA to capture its temporal dynamics by learning the intercorrelation among the time segments of the EEG. TASA is evaluated on an existing olfactory EEG dataset and compared with several existing deep learning architectures to demonstrate its effectiveness in predicting olfactory-triggered emotional responses. Interpretability analyses with DeepLIFT also suggest that TASA learns spatial-spectral features that are relevant to olfactory-induced emotion recognition.
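The MSA step that intercorrelates EEG time segments is, at its core, scaled dot-product self-attention. A minimal single-head sketch (the projection matrices and segment embeddings here are placeholders, not TASA's trained parameters):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(segments, Wq, Wk, Wv):
    """segments: (T, d) embeddings of T EEG time segments.
    Returns attended segments and the (T, T) inter-segment weights."""
    Q, K, V = segments @ Wq, segments @ Wk, segments @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot products
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights
```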


Subject(s)
Algorithms, Attention, Deep Learning, Electroencephalography, Emotions, Neural Networks (Computer), Odorants, Humans, Electroencephalography/methods, Emotions/physiology, Attention/physiology, Male, Adult, Female, Smell/physiology, Short-Term Memory/physiology, Young Adult
3.
Neuroimage ; 293: 120629, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38697588

ABSTRACT

Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders. Reconstructing CS content by brain-computer interface (BCI) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI. It involved 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, which was mediated by reduced FC towards speech-related regions. These findings revealed the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant for future treatment of self-regulation deficits, speech disorders, and development of BCI speech applications.


Subject(s)
Electroencephalography, Magnetic Resonance Imaging, Speech, Humans, Male, Magnetic Resonance Imaging/methods, Female, Speech/physiology, Adult, Electroencephalography/methods, Young Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods
4.
Article in English | MEDLINE | ID: mdl-38625770

ABSTRACT

This study investigates the effectiveness of repetitive transcranial direct current stimulation (tDCS)-based neuromodulation in augmenting steady-state visual evoked potential (SSVEP) brain-computer interfaces (BCIs), and explores pertinent electroencephalography (EEG) biomarkers for assessing brain states and evaluating tDCS efficacy. EEG data were gathered across three task modes (eyes open, eyes closed, and SSVEP stimulation) and two neuromodulation patterns (sham-tDCS and anodal-tDCS). Brain arousal and brain functional connectivity were measured by extracting fractal EEG features and information flow gain, respectively. Anodal-tDCS led to diminished offsets and enhanced information flow gains, indicating improvements in both brain arousal and information transmission capacity. Additionally, anodal-tDCS markedly enhanced SSVEP-BCI performance, as evidenced by increased amplitudes and accuracies, whereas sham-tDCS was less effective. This study offers valuable insights into the application of neuromodulation methods for bolstering BCI performance and validates two electrophysiological markers for multifaceted characterization of brain states.
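The "offset" feature of fractal EEG is commonly obtained by fitting a line to the power spectrum in log-log space: the intercept tracks broadband power and the slope the 1/f decay. A generic sketch, not necessarily the exact fitting procedure used in the study:

```python
import numpy as np

def aperiodic_offset_and_slope(freqs, psd):
    """Fit log10(PSD) = offset + slope * log10(f).
    offset tracks broadband power; slope tracks the 1/f decay."""
    lf, lp = np.log10(freqs), np.log10(psd)
    slope, offset = np.polyfit(lf, lp, 1)   # highest degree first
    return offset, slope
```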


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Visual Evoked Potentials, Fractals, Transcranial Direct Current Stimulation, Humans, Transcranial Direct Current Stimulation/methods, Visual Evoked Potentials/physiology, Male, Adult, Female, Young Adult, Arousal/physiology, Brain/physiology, Healthy Volunteers, Algorithms
5.
Article in English | MEDLINE | ID: mdl-38652609

ABSTRACT

Emotion recognition from electroencephalogram (EEG) signals is a critical domain in biomedical research, with applications ranging from mental disorder regulation to human-computer interaction. In this paper, we address two fundamental aspects of EEG emotion recognition: continuous regression of emotional states and discrete classification of emotions. While classification methods have garnered significant attention, regression methods remain relatively under-explored. To bridge this gap, we introduce MASA-TCN, a novel unified model that leverages the spatial learning capabilities of Temporal Convolutional Networks (TCNs) for EEG emotion regression and classification tasks. The key innovation is a space-aware temporal layer, which enables the TCN to capture spatial relationships among EEG electrodes, enhancing its ability to discern nuanced emotional states. Additionally, we design a multi-anchor block with attentive fusion, enabling the model to adaptively learn dynamic temporal dependencies within the EEG signals. Experiments on two publicly available datasets show that MASA-TCN outperforms state-of-the-art methods on both EEG emotion regression and classification tasks. The code is available at https://github.com/yi-ding-cs/MASA-TCN.
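The temporal backbone of a TCN is the causal dilated convolution, which lets the receptive field grow while the output at time t never sees the future. A minimal sketch of one such filter (illustrative only; MASA-TCN's actual layers add spatial awareness and attentive fusion):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ... via left zero padding."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # pad the past, never the future
    return np.array([sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])
```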

6.
IEEE Trans Biomed Eng; PP, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38498752

ABSTRACT

OBJECTIVE: Growing attention has been paid recently to electrocardiogram (ECG) based obstructive sleep apnea (OSA) detection, with some progress made on this topic. However, the lack of data, low data quality, and incomplete data labeling hinder the application of deep learning to OSA detection, which in turn affects the overall generalization capacity of the network. METHODS: To address these issues, we propose the ResT-ECGAN framework. It uses a one-dimensional generative adversarial network (ECGAN) for sample generation and integrates it into ResT-Net for OSA detection. ECGAN filters the generated ECG signals by incorporating the concept of fuzziness, effectively increasing the amount of high-quality data. ResT-Net not only alleviates the problems caused by deepening the network but also utilizes multihead attention mechanisms to parallelize sequence processing and extract more valuable OSA detection features by leveraging contextual information. RESULTS: Through extensive experiments, we verify that ECGAN can effectively improve the OSA detection performance of ResT-Net. Using only ResT-Net for detection, the accuracy on the Apnea-ECG and private databases is 0.885 and 0.837, respectively. By adding ECGAN-generated data augmentation, the accuracy is increased to 0.893 and 0.848, respectively. CONCLUSION AND SIGNIFICANCE: Our method outperforms state-of-the-art deep learning methods in terms of accuracy. This study provides a new approach and solution to improve OSA detection in situations with limited labeled samples.
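Fuzziness-based filtering of generated ECG can be pictured as mapping a discriminator's realism score through a membership function and keeping only high-membership samples. The piecewise-linear membership and the 0.5 cutoff below are assumptions for illustration, not ECGAN's published formulation:

```python
import numpy as np

def fuzzy_filter(samples, disc_scores, low=0.4, high=0.9):
    """Map discriminator realism scores to a fuzzy membership in [0, 1]
    and keep only samples whose membership exceeds 0.5."""
    mu = np.clip((np.asarray(disc_scores) - low) / (high - low), 0.0, 1.0)
    keep = mu > 0.5
    return samples[keep], mu
```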

7.
Article in English | MEDLINE | ID: mdl-38329860

ABSTRACT

Graph neural networks (GNNs) have attracted extensive research attention in recent years due to their capability to process graph data, and they have been widely used in practical applications. As societies become increasingly concerned with the need for data privacy protection, GNNs face the need to adapt to this new normal. Moreover, because clients in federated learning (FL) may have relationships, more powerful tools are required to exploit such implicit information to boost performance. This has led to the rapid development of the emerging research field of federated GNNs (FedGNNs). This promising interdisciplinary field is highly challenging for interested researchers to grasp, and the lack of an insightful survey on the topic further exacerbates the entry difficulty. In this article, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a 2-D taxonomy of the FedGNN literature: 1) the main taxonomy provides a clear perspective on the integration of GNNs and FL by analyzing how GNNs enhance FL training as well as how FL assists GNN training and 2) the auxiliary taxonomy provides a view on how FedGNNs deal with heterogeneity across FL clients. Through discussions of key ideas, challenges, and limitations of existing works, we envision future research directions that can help build more robust, explainable, efficient, fair, inductive, and comprehensive FedGNNs.
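The FL side of a FedGNN system ultimately rests on server-side aggregation of client models. A minimal FedAvg sketch (generic FL, not any specific FedGNN variant):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server step of federated averaging: size-weighted mean of client models."""
    sizes = np.asarray(client_sizes, dtype=float)
    w = sizes / sizes.sum()                       # weight by local dataset size
    return sum(wi * cw for wi, cw in zip(w, client_weights))
```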

8.
Neural Netw ; 172: 106108, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38219680

ABSTRACT

Advances in deep learning have shown great promise for high-accuracy electroencephalography (EEG) signal classification in a variety of tasks. However, many EEG-based datasets are plagued by high inter-subject signal variability. Robust deep learning models are notoriously difficult to train under such scenarios, often yielding subpar or widely varying performance across subjects under the leave-one-subject-out paradigm. Recently, the model-agnostic meta-learning framework was introduced as a way to increase a model's ability to generalize to new tasks. While the original framework focused on task-based meta-learning, this research shows that the meta-learning methodology can be modified for subject-based signal classification while maintaining the same task objectives, and can achieve state-of-the-art performance. Namely, we propose a novel few/zero-shot subject-independent meta-learning framework for multi-class inner speech and binary-class motor imagery classification. Compared to current subject-adaptive methods, which utilize a large number of labels from the target, the proposed framework is effective for training zero-calibration and few-shot models for subject-independent EEG classification. The proposed few/zero-shot subject-independent meta-learning mechanism performs well on both small and large datasets and achieves robust, generalized performance across subjects. The results show a significant improvement over the current state-of-the-art, with binary-class motor imagery achieving 88.70% accuracy and multi-class inner speech achieving an average of 31.15%. Code will be made publicly available upon publication.
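Subject-based meta-learning can be illustrated with a first-order (Reptile-style) variant on a toy regression problem where each "subject" has its own slope: the meta-trained initialization settles between subjects so that a few inner steps adapt it to any one of them. This toy is an assumption-laden stand-in for the paper's MAML-based framework, not its implementation:

```python
import numpy as np

def inner_adapt(w, x, y, lr=0.1, steps=5):
    """Few-shot adaptation: gradient descent on one subject's support set."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)   # d/dw of MSE for f(x) = w * x
        w = w - lr * grad
    return w

def meta_train(subject_slopes, meta_lr=0.5, epochs=100, seed=0):
    """Reptile-style outer loop: move the init toward each adapted solution."""
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(epochs):
        a = rng.choice(subject_slopes)        # sample a "subject" (task)
        x = rng.uniform(-1, 1, 20)
        w_adapted = inner_adapt(w, x, a * x)
        w += meta_lr * (w_adapted - w)        # first-order outer update
    return w
```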


Subject(s)
Brain-Computer Interfaces, Humans, Electroencephalography/methods, Calibration, Imagination, Algorithms
9.
Neural Netw ; 172: 106100, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38232427

ABSTRACT

Insufficient data is a long-standing challenge for Brain-Computer Interface (BCI) to build a high-performance deep learning model. Though numerous research groups and institutes collect a multitude of EEG datasets for the same BCI task, sharing EEG data from multiple sites is still challenging due to the heterogeneity of devices. The significance of this challenge cannot be overstated, given the critical role of data diversity in fostering model robustness. However, existing works rarely discuss this issue, predominantly centering their attention on model training within a single dataset, often in the context of inter-subject or inter-session settings. In this work, we propose a hierarchical personalized Federated Learning EEG decoding (FLEEG) framework to surmount this challenge. This innovative framework heralds a new learning paradigm for BCI, enabling datasets with disparate data formats to collaborate in the model training process. Each client is assigned a specific dataset and trains a hierarchical personalized model to manage diverse data formats and facilitate information exchange. Meanwhile, the server coordinates the training procedure to harness knowledge gleaned from all datasets, thus elevating overall performance. The framework has been evaluated in Motor Imagery (MI) classification with nine EEG datasets collected by different devices but implementing the same MI task. Results demonstrate that the proposed framework can boost classification performance up to 8.4% by enabling knowledge sharing between multiple datasets, especially for smaller datasets. Visualization results also indicate that the proposed framework can empower the local models to put a stable focus on task-related areas, yielding better performance. To the best of our knowledge, this is the first end-to-end solution to address this important challenge.
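The hierarchical-personalization idea, in which clients keep dataset-specific layers local while the server averages only the shared layers, can be sketched as follows. This is a schematic stand-in for FLEEG, with hypothetical dict-based "models":

```python
import numpy as np

def aggregate_shared(clients):
    """clients: list of dicts with 'personal' weights (kept local; shapes may
    differ per device/data format) and 'shared' weights (same shape everywhere,
    averaged by the server and broadcast back)."""
    shared_mean = np.mean([c["shared"] for c in clients], axis=0)
    for c in clients:
        c["shared"] = shared_mean.copy()    # global knowledge exchange
    return clients
```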


Subject(s)
Brain-Computer Interfaces, Humans, Knowledge, Electroencephalography, Imagination
10.
J Neural Eng; 21(1), 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38091617

ABSTRACT

Objective. Motor imagery (MI) brain-computer interfaces (BCIs) based on electroencephalogram (EEG) have been developed primarily for stroke rehabilitation; however, due to limited stroke data, current deep learning methods for cross-subject classification rely on healthy data. This study aims to assess the feasibility of applying MI-BCI models pre-trained using data from healthy individuals to detect MI in stroke patients. Approach. We introduce a new transfer learning approach in which features from two-class MI data of healthy individuals are used to detect MI in stroke patients. We compare the results of the proposed method with those obtained from analyses within stroke data. Experiments were conducted using Deep ConvNet and state-of-the-art subject-specific machine learning MI classifiers, evaluated on OpenBMI two-class MI-EEG data from healthy subjects and two-class MI versus rest data from stroke patients. Main results. Results of our study indicate that through domain adaptation of a model pre-trained using healthy subjects' data, an average MI detection accuracy of 71.15% (±12.46%) can be achieved across 71 stroke patients. We demonstrate that the accuracy of the pre-trained model increased by 18.15% after transfer learning (p<0.001). Additionally, the proposed transfer learning method outperforms the subject-specific results achieved by Deep ConvNet and FBCSP, with significant enhancements of 7.64% (p<0.001) and 5.55% (p<0.001) in performance, respectively. Notably, the healthy-to-stroke transfer learning approach achieved similar performance to stroke-to-stroke transfer learning, with no significant difference (p>0.05). Explainable AI analyses using transfer models determined channel relevance patterns indicating contributions from the bilateral motor, frontal, and parietal regions of the cortex towards MI detection in stroke patients. Significance. Transfer learning from healthy to stroke data can enhance the clinical use of BCI algorithms by overcoming the challenge of insufficient clinical data for optimal training.
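The transfer step can be illustrated with a toy fine-tuning sketch: a classifier head pre-trained on a source domain is adapted to target-domain data by continued gradient descent. This is generic fine-tuning with made-up data, not the paper's Deep ConvNet pipeline:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(w, b, X, y, lr=0.5, steps=200):
    """Adapt a pretrained logistic-regression head to target-domain data."""
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)
```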


Subject(s)
Brain-Computer Interfaces, Deep Learning, Stroke, Humans, Healthy Volunteers, Stroke/diagnosis, Imagery (Psychotherapy), Electroencephalography/methods, Algorithms, Imagination
11.
Article in English | MEDLINE | ID: mdl-38048235

ABSTRACT

In electroencephalography (EEG) classification paradigms, data from a target subject are often difficult to obtain, making it hard to train a robust deep learning network. Transfer learning and its variations are effective tools for improving such models when data are scarce. However, many of the proposed variations and deep models rely on a single assumed distribution to represent the latent features, which may not scale well due to inter- and intra-subject variations in the signals. This leads to significant instability in individual subjects' decoding performance. Non-trivial domain differences between different sets of training or transfer learning data cause poorer model generalization towards the target subject, yet detecting these domain differences is difficult because EEG domain features are ill-defined. This study proposes a novel inference model, the Joint Embedding Variational Autoencoder, that offers a conditionally tighter approximation of the estimated spatiotemporal feature distribution through jointly optimised variational autoencoders, yielding optimizable data-dependent inputs as an additional variable for improved overall model optimisation and scaling without sacrificing model tightness. To learn the variational bound, we show that maximising the marginal log-likelihood of only the second embedding section is required to achieve conditionally tighter lower bounds. Furthermore, we show that this model provides state-of-the-art EEG data reconstruction and deep feature extraction. The extracted domains of the EEG signals for each subject illustrate why adaptation efficacy differs between subjects.
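Variational autoencoders of this kind are trained against a KL term between Gaussians, which has a closed form for diagonal covariances. A sketch of that standard building block (the model's joint bound itself is more involved):

```python
import numpy as np

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, exp(logvar1)) || N(mu2, exp(logvar2)) ), summed over dims."""
    v1, v2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)
```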


Subject(s)
Deep Learning, Electroencephalography, Humans
12.
Article in English | MEDLINE | ID: mdl-38082819

ABSTRACT

Electroencephalography (EEG) and lower-limb electromyography (EMG) signals are widely used in lower-limb kinematic classification and regression tasks. Because it directly measures muscle responses, EMG usually performs better. However, EMG signals are susceptible to muscle fatigue, residual myoelectric activity may be insufficient, and precise electrode localization is difficult, so acquiring EMG signals in practice is hard. In contrast, EEG signals are stable and easy to sample. Therefore, in this work, we propose a multimodal training strategy based on supervised contrastive learning. With this strategy, EMG effectively guides the model during the training phase to help it fit gait from the EEG signal, while only the EEG signal is used in the testing phase, obtaining better results than training and testing the model on any single modality. Finally, we compared models trained with the proposed strategy against other models trained with EEG signals; the obtained Pearson's correlation coefficient exceeds that of all baseline models.
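Supervised contrastive training pulls together embeddings that share a label (e.g., EEG and EMG views of the same gait phase) and pushes apart the rest. A minimal single-view sketch of the standard SupCon loss; the paper's multimodal batching is not reproduced here:

```python
import numpy as np

def supcon_loss(z, labels, temp=0.1):
    """Supervised contrastive loss on embeddings z: (N, d), labels: (N,).
    Each anchor must have at least one positive (same-label) partner."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temp
    N = len(z)
    eye = np.eye(N, dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    exp = np.exp(logits) * ~eye                        # exclude self-pairs
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye
    return -np.mean((log_prob * pos).sum(axis=1) / pos.sum(axis=1))
```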


Subject(s)
Electroencephalography, Gait, Electromyography/methods, Electroencephalography/methods, Gait/physiology, Muscle Fatigue/physiology, Lower Extremity
13.
Article in English | MEDLINE | ID: mdl-38083323

ABSTRACT

Emotion recognition from electroencephalogram (EEG) requires computational models to capture the crucial features of the emotional response to external stimulation. Spatial, spectral, and temporal information are relevant features for emotion recognition. However, learning temporal dynamics is a challenging task, and there is a lack of efficient approaches to capture such information. In this work, we present a deep learning framework called MTDN that is designed to capture spectral features with a filterbank module and to learn spatial features with a spatial convolution block. Multiple temporal dynamics are jointly learned with parallel long short-term memory (LSTM) embedding and self-attention modules. The LSTM module is used to embed the time segments, and then the self-attention is utilized to learn the temporal dynamics by intercorrelating every embedded time segment. Multiple temporal dynamics representations are then aggregated to form the final extracted features for classification. We experiment on a publicly available dataset, DEAP, to evaluate the performance of our proposed framework and compare MTDN with existing published results. The results demonstrate improvement over the current state-of-the-art methods on the valence dimension of the DEAP dataset.


Subject(s)
Electroencephalography, Emotions, Long-Term Memory, Recognition (Psychology)
14.
Article in English | MEDLINE | ID: mdl-38083341

ABSTRACT

Effectively learning the spatial topology of EEG channels as well as the temporal contextual information underlying emotions is crucial for EEG emotion regression tasks. In this paper, we represent EEG signals as spatial graphs in a temporal graph (SGTG). A graph-in-graph neural network (GIGN) is proposed to learn the spatial-temporal information from the proposed SGTG for continuous EEG emotion recognition. A spatial graph convolutional network (GCN) with a learnable adjacency matrix is utilized to capture the dynamic relations among EEG channels. To learn the temporal contextual information, we use a GCN with a learnable adjacency matrix to combine the short-time emotional states of the spatial graph embeddings. Experiments on a public dataset, MAHNOB-HCI, show that the proposed GIGN achieves better regression results than recently published methods for the same task. The code of GIGN is available at: https://github.com/yi-ding-cs/GIGN.
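A GCN layer with a learnable adjacency matrix can be sketched as a row-normalized, parameterized aggregation followed by a feature transform. Illustrative only; GIGN's graph-in-graph wiring is more elaborate:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(X, A_logits, W):
    """X: (nodes, feat). A_logits: learnable (nodes, nodes) adjacency scores.
    Row-softmax makes each node's aggregation weights sum to 1."""
    A = softmax(A_logits, axis=1)
    return np.tanh(A @ X @ W)
```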


Subject(s)
Emotions, Learning, Neural Networks (Computer), Recognition (Psychology), Electroencephalography
15.
Article in English | MEDLINE | ID: mdl-38083406

ABSTRACT

The efficacy of electroencephalogram (EEG) classifiers can be augmented by increasing the quantity of available data. In the case of geometric deep learning classifiers, the input consists of spatial covariance matrices derived from EEGs. In order to synthesize these spatial covariance matrices and facilitate future improvements of geometric deep learning classifiers, we propose a generative modeling technique based on state-of-the-art score-based models. The quality of generated samples is evaluated through visual and quantitative assessments using a left/right-hand-movement motor imagery dataset. The exceptional pixel-level resolution of these generated samples highlights the capacity of score-based generative modeling. Additionally, the center (Fréchet mean) of the generated samples aligns with neurophysiological evidence that event-related desynchronization and synchronization occur on electrodes C3 and C4 within the Mu and Beta frequency bands during motor imagery processing. The quantitative evaluation revealed that 84.3% of the generated samples could be accurately predicted by a pre-trained classifier, and that a holdout experiment yielded an improvement of up to 8.7% in average accuracy over ten runs for a specific test subject.
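The inputs to such geometric classifiers are spatial covariance matrices, which are symmetric positive definite (SPD) by construction given enough samples (a small ridge guards against degeneracy). A sketch of computing and checking one:

```python
import numpy as np

def spatial_covariance(epoch):
    """epoch: (channels, samples) -> (channels, channels) SPD covariance."""
    X = epoch - epoch.mean(axis=1, keepdims=True)
    C = X @ X.T / (X.shape[1] - 1)
    return C + 1e-10 * np.eye(len(C))   # tiny ridge keeps it positive definite

def is_spd(C, tol=0.0):
    """Symmetric with strictly positive eigenvalues."""
    return np.allclose(C, C.T) and np.linalg.eigvalsh(C).min() > tol
```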


Subject(s)
Brain-Computer Interfaces, Brain/physiology, Electroencephalography/methods, Imagery (Psychotherapy), Movement/physiology
16.
Brain Sci; 13(11), 2023 Nov 12.
Article in English | MEDLINE | ID: mdl-38002544

ABSTRACT

Research has shown the effectiveness of motor imagery in patient motor rehabilitation. Transcranial electrical stimulation has also been demonstrated to improve patient motor and non-motor performance. However, mixed findings from motor imagery studies that involved transcranial electrical stimulation suggest that current experimental protocols can be further improved towards a unified design for consistent and effective results. This paper reviews, with supporting clinical and neuroscientific findings from the literature, studies of motor imagery coupled with different types of transcranial electrical stimulation and their experiments on healthy and patient subjects. The review also covers the cognitive domains of working memory, attention, and fatigue, which are important for designing consistent and effective therapy protocols. Finally, we propose a theoretical all-inclusive framework that synergizes the three cognitive domains with motor imagery and transcranial electrical stimulation for patient rehabilitation, which holds promise of benefiting patients suffering from neuromuscular and cognitive disorders.

17.
Article in English | MEDLINE | ID: mdl-37725740

ABSTRACT

The motor imagery (MI) classification has been a prominent research topic in brain-computer interfaces (BCIs) based on electroencephalography (EEG). Over the past few decades, the performance of MI-EEG classifiers has seen gradual enhancement. In this study, we amplify the geometric deep-learning-based MI-EEG classifiers from the perspective of time-frequency analysis, introducing a new architecture called Graph-CSPNet. We refer to this category of classifiers as Geometric Classifiers, highlighting their foundation in differential geometry stemming from EEG spatial covariance matrices. Graph-CSPNet utilizes novel manifold-valued graph convolutional techniques to capture the EEG features in the time-frequency domain, offering heightened flexibility in signal segmentation for capturing localized fluctuations. To evaluate the effectiveness of Graph-CSPNet, we employ five commonly used publicly available MI-EEG datasets, achieving near-optimal classification accuracies in nine out of 11 scenarios. The Python repository can be found at https://github.com/GeometricBCI/Tensor-CSPNet-and-Graph-CSPNet.
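Geometric classifiers compare EEG covariance matrices on the SPD manifold; one convenient metric is the log-Euclidean distance, computed through the matrix logarithm. A sketch of that metric (one of several Riemannian options, not necessarily the one inside Graph-CSPNet):

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def log_euclidean_dist(C1, C2):
    """Frobenius distance between matrix logarithms."""
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2))
```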

18.
Front Cardiovasc Med ; 10: 1237043, 2023.
Article in English | MEDLINE | ID: mdl-37692045

ABSTRACT

Accurate heart rate (HR) measurement is crucial for optimal cardiac health, and while conventional methods such as electrocardiography and photoplethysmography are widely used for continuous daily monitoring, they may face practical limitations due to their dependence on external sensors and susceptibility to motion artifacts. In recent years, mechanocardiography (MCG)-based technologies, such as gyrocardiography (GCG) and seismocardiography (SCG), have emerged as promising alternatives to address these limitations. GCG has shown enhanced sensitivity and accuracy for HR detection compared to SCG, although its benefits are often overlooked in the context of the widespread use of accelerometers in HR monitoring applications. In this perspective, we aim to explore the potential and challenges of GCG, while recognizing that other technologies, including photoplethysmography and remote photoplethysmography, also have promising applications for HR monitoring. We propose a roadmap for future research to unlock the transformative capabilities of GCG for everyday heart rate monitoring.
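Whatever the sensing modality (GCG, SCG, or PPG), HR estimation reduces to detecting periodic peaks and converting the mean inter-peak interval to beats per minute. A deliberately simple threshold-plus-refractory sketch on a synthetic signal (real MCG pipelines add filtering and artifact rejection):

```python
import numpy as np

def estimate_hr(signal, fs, min_gap_s=0.4):
    """Count local maxima above the 90th percentile, enforcing a
    refractory gap, then convert the mean inter-peak interval to BPM."""
    thr = np.percentile(signal, 90)
    peaks, last = [], -np.inf
    for i in range(1, len(signal) - 1):
        if (signal[i] > thr and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1] and (i - last) / fs >= min_gap_s):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return 0.0
    return 60.0 / (np.diff(peaks).mean() / fs)
```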

19.
Neuroimage ; 282: 120372, 2023 11 15.
Article in English | MEDLINE | ID: mdl-37748558

ABSTRACT

Source imaging of electroencephalography (EEG) and magnetoencephalography (MEG) provides a noninvasive way of monitoring brain activities with high spatial and temporal resolution. To address this highly ill-posed problem, conventional source imaging models adopted spatio-temporal constraints that assume spatial stability of the source activities, neglecting the transient characteristics of M/EEG. In this work, a novel source imaging method, µ-STAR, combining a microstate analysis with a spatio-temporal Bayesian model, was introduced to address this problem. Specifically, the microstate analysis was applied to automatically determine the time window length with a quasi-stable source activity pattern for optimal reconstruction of source dynamics. A user-specific spatial prior and data-driven temporal basis functions were then utilized to characterize the spatio-temporal information of sources within each state. The source reconstruction was solved with a computationally efficient algorithm based upon variational Bayesian inference and convex analysis. The performance of µ-STAR was first assessed through numerical simulations, where we found that determining and including the optimal temporal length in the spatio-temporal prior significantly improved source reconstruction. More importantly, the µ-STAR model achieved robust performance under various settings (source numbers/areas, SNR levels, and source depths) with fast convergence compared with five widely used benchmark models (wMNE, STV, SBL, BESTIES, and SI-STBF). Additional validations on real data were then performed on two publicly available datasets (a block-design face-processing ERP dataset and continuous resting-state EEG). The reconstructed source activities exhibited neurophysiologically plausible spatial and temporal results consistent with previously revealed neural substrates, further proving the feasibility of the µ-STAR model for source imaging in various applications.


Subject(s)
Brain Mapping, Electroencephalography, Humans, Bayes Theorem, Brain Mapping/methods, Electroencephalography/methods, Magnetoencephalography/methods, Algorithms, Brain/diagnostic imaging, Brain/physiology
20.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15604-15618, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37639415

ABSTRACT

Learning time-series representations when only unlabeled data or few labeled samples are available can be a challenging task. Recently, contrastive self-supervised learning has shown great improvement in extracting useful representations from unlabeled data by contrasting different augmented views of the data. In this work, we propose a novel Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) that learns representations from unlabeled data with contrastive learning. Specifically, we propose time-series-specific weak and strong augmentations and use their views to learn robust temporal relations in the proposed temporal contrasting module, while learning discriminative representations with our proposed contextual contrasting module. Additionally, we conduct a systematic study of time-series data augmentation selection, which is a key part of contrastive learning. We also extend TS-TCC to the semi-supervised learning setting and propose a Class-Aware TS-TCC (CA-TCC) that benefits from the few available labeled data to further improve the representations learned by TS-TCC. Specifically, we leverage the robust pseudo labels produced by TS-TCC to realize a class-aware contrastive loss. Extensive experiments show that linear evaluation of the features learned by our proposed framework performs comparably with fully supervised training. Additionally, our framework shows high efficiency in few-labeled-data and transfer learning scenarios.
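The weak/strong augmentation pair at the heart of TS-TCC-style contrasting can be sketched as jitter-and-scale versus permutation-and-jitter. The specific noise levels and segment counts below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def weak_augment(x, sigma=0.05, rng=None):
    """Jitter-and-scale: small additive noise plus one random global scale."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, 0.1) + rng.normal(0.0, sigma, size=x.shape)

def strong_augment(x, n_segments=5, rng=None):
    """Permutation-and-jitter: shuffle time segments, then add noise."""
    rng = rng or np.random.default_rng()
    segs = np.array_split(x, n_segments)
    rng.shuffle(segs)
    return np.concatenate(segs) + rng.normal(0.0, 0.05, size=x.shape)
```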
