Results 1 - 20 of 757
1.
Article in English | MEDLINE | ID: mdl-39255190

ABSTRACT

Affective data is the basis of emotion recognition and is mainly acquired through extrinsic elicitation. To investigate the enhancing effects of multi-sensory stimuli on emotion elicitation and emotion recognition, we designed an experimental paradigm involving visual, auditory, and olfactory senses. A multimodal emotional dataset (OVPD-II) was created that used either video-only or video-odor patterns as stimulus materials and recorded electroencephalogram (EEG) and electrooculogram (EOG) signals. The feedback reported by subjects after each trial demonstrated that the video-odor pattern outperformed the video-only pattern in evoking individuals' emotions. To further validate the efficiency of the video-odor pattern, a transformer was employed for the emotion recognition task, where the highest accuracy reached 86.65% (66.12%) for the EEG (EOG) modality with the video-odor pattern, an improvement of 1.42% (3.43%) over the video-only pattern. Moreover, a hybrid fusion (HF) method combining the transformer with joint training was developed to improve the performance of the emotion recognition task, achieving classification accuracies of 89.50% and 88.47% for the video-odor and video-only patterns, respectively.
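
To make the hybrid-fusion idea concrete, the sketch below combines EEG and EOG feature vectors with a small transformer encoder and trains fused and per-modality heads jointly. It is a minimal illustration, not the OVPD-II architecture: the feature dimensions, layer sizes, and the summed joint loss are assumptions.

```python
# Hypothetical sketch of a hybrid-fusion emotion classifier; sizes are placeholders.
import torch
import torch.nn as nn

class HybridFusionClassifier(nn.Module):
    def __init__(self, eeg_dim=310, eog_dim=36, d_model=64, n_classes=2):
        super().__init__()
        self.eeg_proj = nn.Linear(eeg_dim, d_model)   # feature-level projection per modality
        self.eog_proj = nn.Linear(eog_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fused_head = nn.Linear(2 * d_model, n_classes)   # fused (feature-level) prediction
        self.eeg_head = nn.Linear(d_model, n_classes)          # per-modality heads for joint training
        self.eog_head = nn.Linear(d_model, n_classes)

    def forward(self, eeg_feat, eog_feat):
        tokens = torch.stack([self.eeg_proj(eeg_feat), self.eog_proj(eog_feat)], dim=1)
        enc = self.encoder(tokens)                     # (batch, 2, d_model)
        fused = enc.flatten(start_dim=1)
        return self.fused_head(fused), self.eeg_head(enc[:, 0]), self.eog_head(enc[:, 1])

def joint_loss(outputs, target, ce=nn.CrossEntropyLoss()):
    # Joint training: sum the fused loss and the two modality-specific losses.
    fused, eeg_out, eog_out = outputs
    return ce(fused, target) + ce(eeg_out, target) + ce(eog_out, target)
```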


Subject(s)
Algorithms , Electroencephalography , Electrooculography , Emotions , Odorants , Humans , Electroencephalography/methods , Emotions/physiology , Male , Female , Young Adult , Electrooculography/methods , Adult , Video Recording , Photic Stimulation , Reproducibility of Results , Healthy Volunteers
2.
PLoS One ; 19(7): e0305902, 2024.
Article in English | MEDLINE | ID: mdl-39024373

ABSTRACT

Eye movement during blinking can be a significant artifact in Event-Related Potential (ERP) analysis. Blinks produce a positive potential in the vertical electrooculogram (VEOG) that spreads in the posterior direction. Two methods are frequently used to suppress VEOGs: linear regression to subtract the VEOG signal from the electroencephalogram (EEG), and Independent Component Analysis (ICA); however, some information is lost in both. The present algorithm, FilterBlink, (1) statistically identifies the position of VEOGs in the frontopolar channels; (2) performs EEG averaging for each channel, which results in 'blink templates'; and (3) subtracts each template from the respective EEG at each VEOG position, but only when the linear correlation index between the template and the segment is greater than a chosen threshold L. Signals from twenty subjects were acquired during a behavioral test and processed with FilterBlink for subsequent ERP analysis. A model was designed to test the method for each subject using twenty copies of the EEG signal from the subject's mid-central channel (with nearly no VEOG) representing the EEG channels and their respective blink templates. At 200 equidistant time points (marks), a signal (2.5 sinusoidal cycles at 1050 ms, emulating an ERP) was mixed with each model channel and the respective blink template of that channel, between 500 and 1200 ms after each mark. According to the model, VEOGs interfered with both ERPs and the ongoing EEG, mainly on the anterior medial leads, and no significant effect was observed on the mid-central channel (Cz). FilterBlink recovered approximately 90% (Fp1) to 98% (Fz) of the original ERP and EEG signals for L = 0.1. On real signals, the method reduced the VEOG effect on the EEG after ERP and blink-artifact averaging. The method is straightforward and effective for VEOG attenuation without significant distortion of the EEG signal and embedded ERPs.
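
The core of the approach, correlation-gated subtraction of a channel-specific blink template, can be sketched as follows; the function name, window convention, and onset handling are illustrative assumptions rather than the published FilterBlink code.

```python
# Minimal sketch of correlation-gated blink-template subtraction for one channel.
import numpy as np

def subtract_blink_template(eeg, blink_onsets, template, threshold_L=0.1):
    """Subtract a channel-specific blink template at each detected VEOG position,
    but only when the segment correlates with the template above threshold L."""
    cleaned = eeg.copy()
    win = len(template)
    for onset in blink_onsets:                     # sample indices of detected VEOGs
        stop = onset + win
        if stop > len(cleaned):
            continue
        segment = cleaned[onset:stop]
        r = np.corrcoef(segment, template)[0, 1]   # linear correlation index
        if r > threshold_L:
            cleaned[onset:stop] = segment - template
    return cleaned
```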


Subject(s)
Algorithms , Artifacts , Blinking , Electroencephalography , Electrooculography , Humans , Electroencephalography/methods , Electrooculography/methods , Blinking/physiology , Male , Female , Adult , Signal Processing, Computer-Assisted , Evoked Potentials/physiology , Young Adult , Eye Movements/physiology
3.
Comput Biol Med ; 179: 108855, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39029432

ABSTRACT

OBJECTIVE: To compare the accuracy and generalizability of an automated deep neural network and the Philips Sleepware G3™ Somnolyzer system (Somnolyzer) for sleep stage scoring using American Academy of Sleep Medicine (AASM) guidelines. METHODS: Sleep recordings from 104 participants were analyzed by a convolutional neural network (CNN), the Somnolyzer, and skilled technicians. Evaluation metrics were derived for different combinations of sleep stages. A further comparison between the Somnolyzer and the CNN model using a single-channel signal as input was also performed. Sleep recordings from 263 participants with a lower prevalence of OSA served as a cross-validation dataset to validate the generalizability of the CNN model. RESULTS: In the 104 participants, the agreement between the CNN and manual scoring exceeded that between the Somnolyzer and manual scoring across various metrics (accuracy: 81.81% vs. 77.07%; F1: 76.36% vs. 73.80%; Cohen's kappa: 0.7403 vs. 0.6848). The results showed that the left electrooculography (EOG) single-channel model had minor advantages over the Somnolyzer. In terms of consistency with manual sleep staging, the CNN model demonstrated superior performance in identifying more pronounced sleep transitions, particularly in the N2 stage and sleep latency metrics. Conversely, the Somnolyzer showed enhanced proficiency in the analysis of REM stages, notably in measuring REM latency. The accuracy in the cross-validation set of 263 participants was also above 80%. CONCLUSIONS: The CNN-based automated deep neural network outperformed the Somnolyzer and is sufficiently accurate for sleep study analyses using the AASM classification criteria.
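
The agreement metrics quoted above can be reproduced from paired hypnograms with standard tooling; the snippet below is a generic illustration using scikit-learn with placeholder epoch labels, not the study's evaluation script.

```python
# Illustrative epoch-level agreement metrics between manual and automated hypnograms.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score

manual = np.array(["W", "N1", "N2", "N2", "N3", "REM", "N2", "W"])      # technician labels
automated = np.array(["W", "N2", "N2", "N2", "N3", "REM", "N1", "W"])   # CNN or Somnolyzer output

print("accuracy:", accuracy_score(manual, automated))
print("macro F1:", f1_score(manual, automated, average="macro"))
print("Cohen's kappa:", cohen_kappa_score(manual, automated))
```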


Subject(s)
Neural Networks, Computer , Polysomnography , Sleep Stages , Humans , Sleep Stages/physiology , Male , Female , Adult , Middle Aged , Polysomnography/methods , Aged , Electrooculography/methods , Signal Processing, Computer-Assisted
4.
Article in English | MEDLINE | ID: mdl-38848223

ABSTRACT

Sleep staging serves as a fundamental assessment for sleep quality measurement and sleep disorder diagnosis. Although current deep learning approaches have successfully integrated multimodal sleep signals and enhanced the accuracy of automatic sleep staging, certain challenges remain: 1) optimizing the utilization of multi-modal information complementarity, 2) effectively extracting both long- and short-range temporal features of sleep information, and 3) addressing the class imbalance problem in sleep data. To address these challenges, this paper proposes a two-stream encoder-decoder network, named TSEDSleepNet, which is inspired by the depth-sensitive attention and automatic multi-modal fusion (DSA2F) framework. In TSEDSleepNet, a two-stream encoder extracts the multiscale features of electrooculogram (EOG) and electroencephalogram (EEG) signals, and a self-attention mechanism fuses the multiscale features, generating multi-modal saliency features. Subsequently, the coarser-scale construction module (CSCM) extracts and constructs multi-resolution features from the multiscale and salient features. Thereafter, a Transformer module captures both long- and short-range temporal features from the multi-resolution features. Finally, the long- and short-range temporal features are restored with low-layer details and mapped to the predicted classification results. Additionally, the Lovász loss function is applied to alleviate the class imbalance problem in sleep datasets. The proposed method was tested on the Sleep-EDF-39 and Sleep-EDF-153 datasets, achieving classification accuracies of 88.9% and 85.2% and Macro-F1 scores of 84.8% and 79.7%, respectively, thus outperforming conventional baseline models. These results highlight the efficacy of the proposed method in fusing multi-modal information, and the method has potential as an adjunct tool for diagnosing sleep disorders.


Subject(s)
Algorithms , Deep Learning , Electroencephalography , Electrooculography , Neural Networks, Computer , Sleep Stages , Humans , Electroencephalography/methods , Sleep Stages/physiology , Electrooculography/methods , Male , Female , Adult , Polysomnography/methods , Signal Processing, Computer-Assisted , Young Adult
5.
Brain Res Bull ; 215: 111017, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38914295

ABSTRACT

Sleep staging plays an important role in the diagnosis and treatment of clinical sleep disorders. The sleep staging standard defines every 30-second segment as a sleep period, which implies that similar brain activity patterns may exist within the same sleep period. Thus, in this work we propose a novel time-related synchronization analysis framework, the time-related multimodal sleep scoring model (TRMSC), to explore the potential time-related patterns of sleep. In the proposed TRMSC, time-related synchronization analysis is first conducted on single-channel electrophysiological signals, i.e., the electroencephalogram (EEG) and electrooculogram (EOG), to explore time-related patterns, and spectral activation features are also extracted by spectrum analysis to obtain multimodal features. With the extracted multimodal features, a feature fusion and selection strategy is utilized to obtain the optimal feature set and achieve robust sleep staging. To verify the effectiveness of the proposed TRMSC, sleep staging experiments were conducted on the Sleep-EDF dataset, and the experimental results indicate that TRMSC achieves better performance than other existing strategies, showing that time-related synchronization features can make up for the shortcomings of traditional spectrum-based strategies and achieve higher classification accuracy. The proposed TRMSC model may be helpful for portable sleep analyzers and provide a new analytical method for clinical sleep research.
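
As one concrete example of the spectral side of such a feature set, the sketch below computes relative band power per 30-second segment of a single EEG or EOG channel; the band edges, epoch length, and sampling rate are assumptions, not the TRMSC settings.

```python
# Relative band-power features per 30-second epoch of a single channel (illustrative).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(signal, fs=100, epoch_sec=30):
    epoch_len = int(fs * epoch_sec)
    n_epochs = len(signal) // epoch_len
    features = []
    for i in range(n_epochs):
        epoch = signal[i * epoch_len:(i + 1) * epoch_len]
        freqs, psd = welch(epoch, fs=fs, nperseg=fs * 4)
        total = np.trapz(psd, freqs)
        row = [np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                        freqs[(freqs >= lo) & (freqs < hi)]) / total
               for lo, hi in BANDS.values()]
        features.append(row)
    return np.asarray(features)        # shape: (n_epochs, n_bands)
```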


Subject(s)
Brain , Electroencephalography , Sleep Stages , Humans , Electroencephalography/methods , Sleep Stages/physiology , Brain/physiology , Electrooculography/methods , Male , Adult , Female , Polysomnography/methods
6.
PLoS One ; 19(5): e0303565, 2024.
Article in English | MEDLINE | ID: mdl-38781127

ABSTRACT

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross-validation test, a classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either P300 activity was also elicited for nonattended streams or the P300 amplitude was small. It was concluded that the number of classes that can be selected in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected via a single ear without the aid of any visual modality.
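
A typical Riemannian-geometry pipeline for this kind of P300 classification can be assembled from the open-source pyriemann package, as sketched below; this is an assumed, generic pipeline (ERP-aware covariance estimation plus a minimum-distance-to-mean classifier) rather than the authors' exact implementation.

```python
# Generic Riemannian P300 classification pipeline with placeholder data.
import numpy as np
from pyriemann.estimation import XdawnCovariances
from pyriemann.classification import MDM
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: EEG epochs (n_trials, n_channels, n_samples); y: 1 = attended target, 0 = other.
X = np.random.randn(200, 64, 700)          # placeholder epochs, 700 ms at 1000 Hz
y = np.random.randint(0, 2, 200)

clf = make_pipeline(XdawnCovariances(nfilter=4), MDM(metric="riemann"))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation, as in the study
print("mean accuracy:", scores.mean())
```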


Subject(s)
Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
7.
Talanta ; 275: 126180, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38703480

ABSTRACT

Organic electrochemical transistors (OECTs) are integral to detecting human bioelectric signals, owing to their distinct electrochemical properties, use of soft materials, compact dimensions, and pronounced biocompatibility. This review traverses the technological evolution of the OECT, highlighting its profound impact on non-invasive detection methodologies within the biomedical field. Four sensor types rooted in OECT technology are introduced: electrocardiogram (ECG), electroencephalogram (EEG), electromyography (EMG), and electrooculography (EOG) sensors, which hold promise for integration into wearable detection systems. The fundamental detection principles, material compositions, and functional attributes of these sensors are examined. Additionally, the performance metrics of assorted physiological electrical detection sensors are discussed and viable optimization strategies are delineated. The overarching goal of this review is to foster deeper insights into the generation, propagation, and modulation of electrophysiological signals, thereby advancing the application and development of OECTs in the medical sciences.


Subject(s)
Transistors, Electronic , Humans , Electromyography/methods , Electrocardiography/methods , Electrochemical Techniques/methods , Electrooculography/methods , Electroencephalography
8.
IEEE J Biomed Health Inform ; 28(9): 5189-5200, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38771683

ABSTRACT

Sleep staging plays a critical role in evaluating the quality of sleep. Currently, most models either suffer dramatic performance drops when coping with varying input modalities or are unable to handle heterogeneous signals. To handle heterogeneous signals and guarantee favorable sleep staging performance when only a single modality is available, a pseudo-siamese neural network that incorporates electroencephalography (EEG) and electrooculography (EOG) characteristics, PSEENet, is proposed. PSEENet consists of two parts: spatial mapping modules (SMMs) and a weight-shared classifier. The SMMs are used to extract high-dimensional features, while joint linkages among modalities are provided by quantifying the similarity of features. Finally, with the cooperation of heterogeneous characteristics, associations among the sleep stages can be established by the classifier. The model is evaluated on two public datasets, the Montreal Archive of Sleep Studies (MASS) and Sleep-EDFX, and one clinical dataset from Huashan Hospital of Fudan University (HSFU). Experimental results show that the model can handle heterogeneous signals, provides superior results with multimodal signals, and shows good performance with a single modality. PSEENet obtains accuracies of 79.1% and 82.1% on Sleep-EDFX with EEG alone and with EEG plus EOG, respectively, and significantly improves the EOG-only accuracy from 73.7% to 76% by introducing similarity information.
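
The pseudo-siamese idea (separate mapping modules feeding a weight-shared classifier, with a similarity term linking the two feature spaces) can be sketched roughly as below; the dimensions, cosine-similarity linkage, and loss weighting are illustrative assumptions, not the published PSEENet design.

```python
# Rough sketch of a pseudo-siamese sleep stager with a weight-shared classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoSiameseStager(nn.Module):
    def __init__(self, in_dim=3000, feat_dim=128, n_stages=5):
        super().__init__()
        # Two mapping modules with separate weights, one per modality.
        self.smm_eeg = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.smm_eog = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One weight-shared classifier used for both modalities.
        self.classifier = nn.Linear(feat_dim, n_stages)

    def forward(self, eeg, eog):
        f_eeg, f_eog = self.smm_eeg(eeg), self.smm_eog(eog)
        similarity = F.cosine_similarity(f_eeg, f_eog, dim=1)  # joint linkage between modalities
        logits = self.classifier(f_eeg) + self.classifier(f_eog)
        return logits, similarity

def loss_fn(logits, similarity, target, alpha=0.1):
    # Encourage agreement between modality features while training the shared classifier.
    return F.cross_entropy(logits, target) + alpha * (1 - similarity).mean()
```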


Subject(s)
Electroencephalography , Electrooculography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Sleep Stages , Humans , Electrooculography/methods , Electroencephalography/methods , Sleep Stages/physiology , Adult , Male , Female , Young Adult , Middle Aged , Algorithms
9.
Physiol Meas ; 45(5)2024 May 15.
Article in English | MEDLINE | ID: mdl-38653318

ABSTRACT

Objective. Sleep staging based on full polysomnography is the gold standard in the diagnosis of many sleep disorders. It is however costly, complex, and obtrusive due to the use of multiple electrodes. Automatic sleep staging based on single-channel electro-oculography (EOG) is a promising alternative, requiring fewer electrodes which could be self-applied below the hairline. EOG sleep staging algorithms are however yet to be validated in clinical populations with sleep disorders. Approach. We utilized the SOMNIA dataset, comprising 774 recordings from subjects with various sleep disorders, including insomnia, sleep-disordered breathing, hypersomnolence, circadian rhythm disorders, parasomnias, and movement disorders. The recordings were divided into train (574), validation (100), and test (100) groups. We trained a neural network that integrated transformers within a U-Net backbone. This design facilitated learning of arbitrary-distance temporal relationships within and between the EOG and hypnogram. Main results. For 5-class sleep staging, we achieved median accuracies of 85.0% and 85.2% and Cohen's kappas of 0.781 and 0.796 for left and right EOG, respectively. The performance using the right EOG was significantly better than using the left EOG, possibly because in the recommended AASM setup, this electrode is located closer to the scalp. The proposed model is robust to the presence of a variety of sleep disorders, displaying no significant difference in performance for subjects with a certain sleep disorder compared to those without. Significance. The results show that accurate sleep staging using single-channel EOG can be done reliably for subjects with a variety of sleep disorders.


Subject(s)
Electrooculography , Sleep Stages , Sleep Wake Disorders , Humans , Sleep Stages/physiology , Electrooculography/methods , Sleep Wake Disorders/diagnosis , Sleep Wake Disorders/physiopathology , Male , Female , Adult , Cohort Studies , Middle Aged , Signal Processing, Computer-Assisted , Neural Networks, Computer , Young Adult , Polysomnography
10.
Article in English | MEDLINE | ID: mdl-38635384

ABSTRACT

Polysomnography (PSG) recordings have been widely used for sleep staging in clinics and contain multiple modality signals (e.g., EEG and EOG). Recently, many studies have combined the EEG and EOG modalities for sleep staging, since they are respectively the most and the second most powerful modalities for sleep staging among PSG recordings. However, EEG is complex to collect and sensitive to environmental noise and other body activities, impeding its use in clinical practice. Comparatively, EOG is much easier to obtain. In order to make full use of the power of EEG and the easy collection of EOG, we propose a novel framework to simplify multimodal sleep staging with a single EOG modality; it still performs well with only the EOG modality in the absence of EEG. Specifically, we first model the correlation between EEG and EOG, and then, based on that correlation, we generate multimodal features with time- and frequency-guided generators by adopting the idea of generative adversarial learning. We collected a real-world sleep dataset containing 67 recordings and used four other public datasets for evaluation. Compared with other existing sleep staging methods, our framework performs the best when solely using the EOG modality. Moreover, under our framework, EOG provides performance comparable to EEG.


Subject(s)
Algorithms , Electroencephalography , Electrooculography , Polysomnography , Sleep Stages , Humans , Electroencephalography/methods , Sleep Stages/physiology , Polysomnography/methods , Electrooculography/methods , Male , Adult , Female , Young Adult
11.
IEEE J Biomed Health Inform ; 28(6): 3466-3477, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38502613

ABSTRACT

Over recent decades, the electroencephalogram (EEG) has become an essential tool in clinical analysis and neurological disease research. However, EEG recordings are notably vulnerable to artifacts during acquisition, especially in clinical settings, which can significantly impede the accurate interpretation of neuronal activity. Blind source separation is currently the most popular method for EEG denoising, but most of the sources it separates contain both artifacts and brain activity, which may lead to substantial information loss if handled improperly. In this paper, we introduce a dual-threshold denoising method combining spatial filtering with frequency-domain filtering to automatically eliminate electrooculogram (EOG) and electromyogram (EMG) artifacts from multi-channel EEG. The proposed method employs a fusion of second-order blind identification (SOBI) and canonical correlation analysis (CCA) to enhance source separation quality, followed by an adaptive threshold to localize the artifact sources and a strict fixed threshold to remove strong artifact sources. The stationary wavelet transform (SWT) is utilized to decompose the weak artifact sources, with subsequent adjustment of the wavelet coefficients in the respective frequency bands tailored to the distinct characteristics of each artifact. Results on synthetic and real datasets show that the proposed method maximally retains the time-domain and frequency-domain information of the EEG during denoising. Compared with existing techniques, the proposed method achieves better denoising performance, which establishes a reliable foundation for subsequent clinical analyses.
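
The wavelet step can be illustrated with PyWavelets: decompose a weak artifact source with the stationary wavelet transform, attenuate detail coefficients in selected bands, and reconstruct. The wavelet, level, and attenuation factors below are assumptions for illustration only.

```python
# Attenuate selected SWT detail bands of a weak artifact source (illustrative).
import numpy as np
import pywt

def attenuate_artifact_source(source, wavelet="sym4", level=4, gains=(0.2, 0.2, 1.0, 1.0)):
    """Scale detail coefficients per decomposition level (gains given finest-first)."""
    n = len(source) - len(source) % (2 ** level)      # SWT needs length divisible by 2**level
    trimmed = source[:n]
    coeffs = pywt.swt(trimmed, wavelet, level=level)  # [(cA_level, cD_level), ..., (cA1, cD1)]
    adjusted = []
    for (ca, cd), gain in zip(coeffs, reversed(gains)):  # coarsest level is listed first
        adjusted.append((ca, cd * gain))
    return pywt.iswt(adjusted, wavelet)

cleaned = attenuate_artifact_source(np.random.randn(1000))
```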


Subject(s)
Artifacts , Electroencephalography , Signal Processing, Computer-Assisted , Humans , Electroencephalography/methods , Algorithms , Electromyography/methods , Adult , Wavelet Analysis , Electrooculography/methods , Male , Young Adult , Female
12.
Comput Biol Med ; 173: 108314, 2024 May.
Article in English | MEDLINE | ID: mdl-38513392

ABSTRACT

Sleep staging is a vital aspect of sleep assessment, serving as a critical tool for evaluating sleep quality and identifying sleep disorders. Manual sleep staging is laborious, while automatic sleep staging is seldom utilized in clinical practice because of the inadequate accuracy and interpretability of the classification results produced by automatic sleep staging models. In this work, a hybrid intelligent model is presented for automatic sleep staging, which integrates data intelligence and knowledge intelligence to attain a balance between accuracy, interpretability, and generalizability in sleep stage classification. Specifically, it is built on any combination of typical electroencephalography (EEG) and electrooculography (EOG) channels and comprises a temporal fully convolutional network based on the U-Net architecture and a multi-task feature mapping structure. The experimental results show that, compared to current interpretable automatic sleep staging models, our model achieves a Macro-F1 score of 0.804 on the ISRUC dataset and 0.780 on the Sleep-EDFx dataset. Moreover, we use knowledge intelligence to address excessive jumps and unreasonable sleep stage transitions in the coarse sleep graphs obtained by the model, and we explore the different ways knowledge intelligence affects coarse sleep graphs by combining different sleep graph correction methods. Our research can offer convenient support for sleep physicians, indicating its significant potential for improving the efficiency of clinical sleep staging.


Subject(s)
Sleep Stages , Sleep , Polysomnography/methods , Electroencephalography/methods , Electrooculography/methods
13.
Adv Healthc Mater ; 13(15): e2303581, 2024 06.
Article in English | MEDLINE | ID: mdl-38386698

ABSTRACT

Abnormal oculomotor movements are known to be linked to various brain disorders, physical/mental shocks to the brain, and other neurological conditions, so their monitoring can be developed into a simple but effective diagnostic tool. To overcome the limitations of current eye-tracking systems and electrooculography, a piezoelectric arrayed sensor system is developed using single-crystalline III-N thin-film transducers, which offer the advantages of mechanical flexibility, biocompatibility, and high electromechanical conversion, for continuous monitoring of oculomotor movements by skin-attachable, safe, and highly sensitive sensors. The flexible piezoelectric eye movement sensor array (F-PEMSA), consisting of three transducers, is attached to the temple area of the face, where it can be worn comfortably and can detect the muscle activity associated with eye motions. The upper, mid, and lower sensors (transducers) on different temple areas generate discernible patterns of output voltage signals, with different combinations of positive/negative signs and relative magnitudes for the various eyeball movements, including eight directional (lateral, vertical, and diagonal) and two rotational movements, which enable various types of saccade and pursuit tests. The F-PEMSA can be used in clinical studies of the brain-eye relationship to evaluate the functional integrity of multiple brain systems and cognitive processes.


Subject(s)
Eye Movements , Humans , Eye Movements/physiology , Wearable Electronic Devices , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods , Electrooculography/instrumentation , Electrooculography/methods
14.
Comput Methods Programs Biomed ; 244: 107992, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38218118

ABSTRACT

BACKGROUND AND OBJECTIVE: Sleep staging is an essential step in sleep disorder diagnosis, and performing it manually is time-intensive and laborious for experts. Automatic sleep stage classification methods not only relieve experts of these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS: A novel multi-channel biosignal-based model combining a 3D convolutional operation and a graph convolutional operation is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and the graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time-domain and frequency-domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch explores the correlations between multi-channel signals and between multi-band waves in each channel over the time series, while the graph convolution branch explores the connections between each channel and each frequency band. In this work, we developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS: Based on the first expert's labels, MixSleepNet yielded accuracy, F1-score and Cohen's kappa scores of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3, and 0.812, 0.786, and 0.756, respectively, for ISRUC-S1. According to the evaluations by the second expert, the accuracies, F1-scores, and Cohen's kappa coefficients for ISRUC-S3 and ISRUC-S1 are 0.837, 0.820, 0.789, and 0.829, 0.791, 0.775, respectively. CONCLUSION: The performance metrics of the proposed method are much better than those of all compared models. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contribution of each module to the classification performance.


Subject(s)
Sleep Stages , Sleep , Sleep Stages/physiology , Time Factors , Electroencephalography/methods , Electrooculography/methods
15.
IEEE Trans Biomed Circuits Syst ; 18(2): 322-333, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37851555

ABSTRACT

Human eye activity has been widely studied in fields such as psychology, neuroscience, medicine, and human-computer interaction engineering. In previous studies, monitoring of human eye activity has mainly depended on the electrooculogram (EOG), which requires contact sensors. This article proposes a novel eye movement monitoring method called the continuous wave Doppler oculogram (cDOG). Unlike conventional EOG-based eye movement monitoring methods, cDOG, based on a continuous wave Doppler radar sensor (cDRS), can remotely measure human eye activity without placing electrodes on the head. To verify the feasibility of using cDOG for eye movement monitoring, we first theoretically analyzed the association between the radar signal and the corresponding eye movements measured with EOG. Afterward, we conducted an experiment comparing EOG and cDOG measurements under eyes-closed and eyes-open conditions. In addition, different eye movement states were considered, including right-left saccades, up-down saccades, eye blinks, and fixation. Several representative time-domain and frequency-domain features obtained from cDOG and from EOG were compared in these states, allowing us to demonstrate the feasibility of using cDOG for monitoring eye movements. The experimental results show that cDOG and EOG are correlated in their time- and frequency-domain features, the average timing error for a single eye movement is less than 280.5 ms, and the accuracy of cDOG in eye movement detection is higher than 92.35% when the distance between the cDRS and the face is 10 cm and the eyes face the radar directly.


Subject(s)
Eye Movements , Radar , Humans , Feasibility Studies , Electrooculography/methods , Blinking
16.
Psychophysiology ; 61(3): e14461, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37855151

ABSTRACT

This study aimed to evaluate the utility and applicability of electrooculography (EOG) when studying ocular activity during complex motor behavior. Due to its lower spatial resolution relative to eye tracking (ET), it is unclear whether EOG can provide valid and accurate temporal measurements such as the duration of the Quiet Eye (QE), that is, the uninterrupted dwell time on the visual target prior to and during action. However, because of its greater temporal resolution, EOG is better suited for temporal-spectral decomposition, a technique that allows us to distinguish between lower and higher frequency activity as a function of time. Sixteen golfers of varying expertise (novices to experts) putted 60 balls to a 4-m distant target on a flat surface while we recorded EOG, ET, performance accuracy, and putter kinematics. Correlational and discrepancy analyses confirmed that EOG yielded valid and accurate QE measurements, but only when using certain processing parameters. Nested cross-validation indicated that, among a set of ET and EOG temporal and spectral oculomotor features, EOG power was the most useful when predicting performance accuracy through robust regression. Follow-up cross-validation and correlational analyses revealed that more accurate performance was preceded by diminished lower-frequency activity immediately before movement initiation and elevated higher-frequency activity during movement recorded from the horizontal channel. This higher-frequency activity was also found to accompany a smoother movement execution. This study validates EOG algorithms (code provided) for measuring temporal parameters and presents a novel approach to extracting temporal and spectral oculomotor features during complex motor behavior.
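
In the spirit of the temporal-spectral decomposition described above, the sketch below contrasts lower- and higher-frequency EOG power before and during movement; the band edges, window lengths, and sampling rate are assumptions, not the study's parameters.

```python
# Lower- vs higher-frequency EOG power around movement initiation (illustrative).
import numpy as np
from scipy.signal import welch

def eog_band_power(segment, fs=500, low=(0.5, 4.0), high=(8.0, 30.0)):
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    def power(band):
        lo, hi = band
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])
    return power(low), power(high)

# Example: compare the 1-s window before a hypothetical movement onset with the movement itself.
heog = np.random.randn(5 * 500)        # placeholder horizontal-EOG trace, 5 s at 500 Hz
onset = 2 * 500                        # hypothetical movement-initiation sample
pre_low, pre_high = eog_band_power(heog[onset - 500:onset])
move_low, move_high = eog_band_power(heog[onset:onset + 1000])
```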


Subject(s)
Algorithms , Eye Movements , Humans , Electrooculography/methods , Eye-Tracking Technology , Biomechanical Phenomena
17.
J Sleep Res ; 33(2): e13977, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37400248

ABSTRACT

Sleep recordings are increasingly being conducted in patients' homes where patients apply the sensors themselves according to instructions. However, certain sensor types such as cup electrodes used in conventional polysomnography are unfeasible for self-application. To overcome this, self-applied forehead montages with electroencephalography and electro-oculography sensors have been developed. We evaluated the technical feasibility of a self-applied electrode set from Nox Medical (Reykjavik, Iceland) through home sleep recordings of healthy and suspected sleep-disordered adults (n = 174) in the context of sleep staging. Subjects slept with a double setup of conventional type II polysomnography sensors and self-applied forehead sensors. We found that the self-applied electroencephalography and electro-oculography electrodes had acceptable impedance levels but were more prone to losing proper skin-electrode contact than the conventional cup electrodes. Moreover, the forehead electroencephalography signals recorded using the self-applied electrodes expressed lower amplitudes (difference 25.3%-43.9%, p < 0.001) and less absolute power (at 1-40 Hz, p < 0.001) than the polysomnography electroencephalography signals in all sleep stages. However, the signals recorded with the self-applied electroencephalography electrodes expressed more relative power (p < 0.001) at very low frequencies (0.3-1.0 Hz) in all sleep stages. The electro-oculography signals recorded with the self-applied electrodes expressed comparable characteristics with standard electro-oculography. In conclusion, the results support the technical feasibility of the self-applied electroencephalography and electro-oculography for sleep staging in home sleep recordings, after adjustment for amplitude differences, especially for scoring Stage N3 sleep.


Subject(s)
Electroencephalography , Sleep , Adult , Humans , Polysomnography/methods , Feasibility Studies , Electrooculography/methods , Sleep Stages , Electrodes
18.
Article in English | MEDLINE | ID: mdl-38088999

ABSTRACT

Gaze estimation, as a technique that reflects individual attention, can be used for disability assistance and for assisting physicians in diagnosing diseases such as autism spectrum disorder (ASD), Parkinson's disease, and attention deficit hyperactivity disorder (ADHD). Various techniques have been proposed for gaze estimation and have achieved high resolution. Among these approaches, electrooculography (EOG)-based gaze estimation, as an economical and effective method, offers a promising solution for practical applications. OBJECTIVE: In this paper, we systematically investigated possible EOG electrode locations spatially distributed around the orbital cavity. Afterward, numerous informative features characterizing the physiological information of eye movement in the temporal-spectral domain were extracted from the seven differential channels. METHODS AND PROCEDURES: To select the optimal channels and relevant features and to eliminate irrelevant information, a heuristic search algorithm (i.e., a forward stepwise strategy) was applied. Subsequently, a comparative analysis of the impact of electrode placement and feature contributions on gaze estimation was carried out with six classic models and 18 subjects. RESULTS: Experimental results showed that promising performance was achieved in both Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) over a wide gaze range from -50° to +50°. The MAE and RMSE can ultimately be improved to 2.80° and 3.74° while using only 10 features extracted from 2 channels. Compared with prevailing EOG-based techniques, the performance improvements in MAE and RMSE range from 0.70° to 5.48° and 0.66° to 5.42°, respectively. CONCLUSION: We proposed a robust EOG-based gaze estimation approach by systematically investigating the optimal channel/feature combination. The experimental results indicate not only the superiority of the proposed approach but also its potential for clinical application. Clinical and translational impact statement: Accurate gaze estimation is a key step in assisting people with disabilities and in the accurate diagnosis of various diseases, including ASD, Parkinson's disease, and ADHD. The proposed approach can accurately estimate the point of gaze via EOG signals and thus has potential for various related medical applications.
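
A forward stepwise search of this kind can be sketched as a greedy loop that adds, at each step, the feature whose inclusion most reduces cross-validated error; the regressor and scoring below are illustrative assumptions, not the models evaluated in the paper.

```python
# Greedy forward stepwise feature selection scored by cross-validated MAE (illustrative).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_features=10):
    """Greedily add the feature that most reduces cross-validated MAE."""
    selected, remaining = [], list(range(X.shape[1]))
    best_mae = np.inf
    while remaining and len(selected) < max_features:
        scores = {}
        for j in remaining:
            cols = selected + [j]
            mae = -cross_val_score(Ridge(), X[:, cols], y,
                                   scoring="neg_mean_absolute_error", cv=5).mean()
            scores[j] = mae
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_mae:       # stop when no candidate improves the error
            break
        best_mae = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_mae
```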


Subject(s)
Autism Spectrum Disorder , Parkinson Disease , Humans , Electrooculography/methods , Autism Spectrum Disorder/diagnosis , Parkinson Disease/diagnosis , Eye Movements , Electrodes
19.
Article in English | MEDLINE | ID: mdl-38083276

ABSTRACT

Human-machine interfaces (HMIs) based on the electro-oculogram (EOG) have been widely explored. However, due to individual variability, it is still challenging for an EOG-based eye movement recognition model to achieve favorable results across subjects. Classical transfer learning methods such as CORrelation ALignment (CORAL), Transfer Component Analysis (TCA), and Joint Distribution Adaptation (JDA) are mainly based on feature transformation and distribution alignment, and they do not consider the similarities/dissimilarities between the target subject and the source subjects. In this paper, the Kullback-Leibler (KL) divergence of the log-Power Spectral Density (log-PSD) features of the horizontal EOG (HEOG) between the target subject and each source subject is calculated to adaptively select the subset of source subjects expected to have distributions similar to the target subject for further training. This not only takes similarity into account but also reduces computational consumption. The results show that the proposed approach is superior to the baseline and to classical transfer learning methods, and it significantly improves the performance of target subjects who perform poorly with the primary classifiers. The best improvement for the Support Vector Machine (SVM) classifier was 13.1% for subject 31 compared with the baseline result. The preliminary results of this study demonstrate the effectiveness of the proposed transfer framework and provide a promising tool for implementing cross-subject eye movement recognition models in real-life scenarios.
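
The subject-selection step can be sketched as follows: estimate a log-PSD profile of the horizontal EOG for each subject, measure the KL divergence between the target profile and each source profile, and keep the closest sources; the normalization scheme and the number of retained subjects are assumptions for illustration.

```python
# KL-divergence-based selection of source subjects from log-PSD profiles (illustrative).
import numpy as np
from scipy.signal import welch
from scipy.special import rel_entr

def psd_distribution(heog, fs=250):
    freqs, psd = welch(heog, fs=fs, nperseg=fs * 2)
    log_psd = np.log(psd + 1e-12)
    p = log_psd - log_psd.min() + 1e-12        # shift to positive values
    return p / p.sum()                          # normalize so it sums to one

def select_sources(target_heog, source_heogs, n_keep=5, fs=250):
    p_target = psd_distribution(target_heog, fs)
    divergences = [rel_entr(p_target, psd_distribution(s, fs)).sum() for s in source_heogs]
    order = np.argsort(divergences)             # smallest KL divergence = most similar subject
    return order[:n_keep], np.asarray(divergences)
```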


Subject(s)
Electroencephalography , Eye Movements , Humans , Electrooculography/methods , Electroencephalography/methods , Movement , Support Vector Machine
20.
Article in English | MEDLINE | ID: mdl-38083601

ABSTRACT

The rise in population and aging has led to a significant increase in the number of individuals affected by common causes of vision loss. Early diagnosis and treatment are crucial to avoid the consequences of visual impairment, yet many visual problems are difficult to detect in their early stages. Visual adaptation can compensate for several visual deficits through adaptive eye movements, and these adaptive eye movements may therefore serve as indicators of vision loss. In this work, we investigate the association between eye movement and blurred vision. Using electrooculography (EOG) to record eye movements, we propose a new tracking model to identify deterioration of refractive power. We verified the technical feasibility of this method by designing a blurred-vision simulation experiment in which six sets of prescription lenses and a pair of flat lenses were used to create different levels of blurring. We analyzed binocular movements through EOG signals and performed a seven-class classification using the ResNet18 architecture. The results revealed an average classification accuracy of 94.7% for the subject-dependent model. However, the subject-independent model performed poorly, with the highest accuracy reaching only 34.5%. The potential of an EOG-based visual quality monitoring system is therefore demonstrated, and our experimental design provides a novel approach to assessing blurred vision.


Subject(s)
Eye Movements , Low Vision , Humans , Electrooculography/methods , Vision Disorders