ABSTRACT
We introduce a generative Bayesian switching dynamical model for action recognition in 3D skeletal data. Our model encodes highly correlated skeletal data into a few sets of low-dimensional switching temporal processes and from there decodes to the motion data and their associated action labels. We parameterize these temporal processes with respect to a switching deep autoregressive prior to accommodate both multimodal and higher-order nonlinear inter-dependencies. This results in a dynamical deep generative latent model that parses meaningful intrinsic states in skeletal dynamics and enables action recognition. These sequences of states provide visual and quantitative interpretations of the motion primitives that gave rise to each action class, which had not been explored previously. In contrast to previous works, which often overlook temporal dynamics, our method explicitly models temporal transitions and is generative. Our experiments on two large-scale 3D skeletal datasets substantiate the superior performance of our model in comparison with state-of-the-art methods. Specifically, our method achieved 6.3% higher action classification accuracy (by incorporating a dynamical generative framework) and 3.5% lower predictive error (by employing a nonlinear second-order dynamical transition model) when compared with the best-performing competitors.
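To make the switching autoregressive idea concrete, the following toy sketch simulates a scalar second-order autoregressive process whose coefficients are selected by a Markov-switching discrete state (a hypothetical illustration only; the function name, parameters, and scalar form are our assumptions, not the paper's model):

```python
import random

def simulate_switching_ar2(T, coeffs, stay_prob, noise_std=0.05, seed=0):
    """Simulate a scalar switching AR(2) process of length T.

    coeffs:    dict state -> (a1, a2) autoregressive coefficients
    stay_prob: dict state -> probability of remaining in that state
    Returns the observation sequence and the hidden state path.
    """
    rng = random.Random(seed)
    states = list(coeffs)
    z = states[0]
    x = [0.0, 0.1]            # two seed values for the 2nd-order recursion
    z_path = [z, z]
    for _ in range(T - 2):
        # Markov switching: stay with stay_prob[z], else jump to another state
        if rng.random() > stay_prob[z]:
            z = rng.choice([s for s in states if s != z])
        a1, a2 = coeffs[z]
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, noise_std))
        z_path.append(z)
    return x, z_path
```

In a full model of this kind, the state path (here simulated) would instead be inferred from data, yielding the interpretable motion-primitive segmentation the abstract describes.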
Subjects
Nonlinear Dynamics , Pattern Recognition, Automated , Bayes Theorem , Humans , Motion (Physics)
ABSTRACT
Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications that require monitoring and interpretation of complex motor behaviors (e.g., those involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classification of basic human actions in video recordings performed using a single RGB camera. Our method addresses challenges associated with tracking multiple human actors and classifying actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. Furthermore, for action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional and interpretable representations of the movement sequences for training a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings.
Subjects
Monitoring, Physiologic , Motor Activity/physiology , Video Recording/methods , Algorithms , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer
ABSTRACT
Affect-biased attention is the phenomenon of prioritizing attention to emotionally salient stimuli and away from goal-directed stimuli. It is thought that affect-biased attention to emotional stimuli is a driving factor in the development of depression. This effect has been well studied in adults, but research shows that it also holds during adolescence, when the severity of depressive symptoms is correlated with the magnitude of affect-biased attention to negative emotional stimuli. Prior studies have shown that trainings to modify affect-biased attention may ameliorate depression in adults, but this research has also been stymied by concerns about reliability and replicability. This study describes a clinical application of augmented reality-guided, EEG-based attention modification ("AttentionCARE") for adolescents who are at highest risk for future depressive disorders (i.e., daughters of depressed mothers). Our results (n = 10) indicated that the AttentionCARE protocol can reliably and accurately provide neurofeedback about adolescent attention to negative emotional distractors that detract from attention to a primary task. Through several within- and cross-study replications, our work addresses concerns about the lack of reliability and reproducibility in brain-computer interface applications, offering insights for future interventions to modify affect-biased attention in high-risk adolescents.
ABSTRACT
Computer vision has achieved great success in interpreting semantic meanings from images, yet estimating the underlying (non-visual) physical properties of an object is often limited to bulk values rather than reconstructing a dense map. In this work, we present our pressure eye (PEye) approach to estimate, at high resolution, the contact pressure between a human body and the surface she is lying on, directly from vision signals. The PEye approach could ultimately enable the prediction and early detection of pressure ulcers in bed-bound patients, which currently depends on the use of expensive pressure mats. Our PEye network is configured in a dual-encoding, shared-decoding form to fuse visual cues and relevant physical parameters in order to reconstruct high-resolution pressure maps (PMs). We also present a pixel-wise resampling approach based on a naive Bayes assumption to further enhance PM regression performance. A percentage of correct sensing (PCS) metric tailored for sensing estimation accuracy evaluation is also proposed, which provides another perspective on performance under varying error tolerances. We tested our approach in a series of extensive experiments using multimodal sensing technologies to collect data from 102 subjects while lying on a bed. An individual's high-resolution contact pressure data could be estimated from their RGB or long-wavelength infrared (LWIR) images with 91.8% and 91.2% estimation accuracy, respectively, under the PCSefs0.1 criterion, superior to state-of-the-art methods in related image regression/translation tasks.
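The spirit of a tolerance-based sensing accuracy metric like PCS can be sketched as the fraction of pixels whose error falls within a tolerance scaled by the ground truth's dynamic range (a minimal interpretation; the normalization and names here are assumptions, not the paper's exact definition):

```python
def percentage_correct_sensing(est, gt, tol):
    """Fraction of elements whose absolute error is within `tol`,
    where `tol` is expressed as a fraction of the ground-truth
    dynamic range (a common way to normalize per-pixel tolerances)."""
    lo, hi = min(gt), max(gt)
    threshold = tol * (hi - lo)          # tolerance scaled by dynamic range
    hits = sum(1 for e, g in zip(est, gt) if abs(e - g) <= threshold)
    return hits / len(gt)
```

Sweeping `tol` from strict to loose then yields an accuracy-vs-tolerance curve, which is the "varying error tolerances" perspective the abstract mentions.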
Subjects
Diagnostic Imaging , Female , Humans , Bayes Theorem
ABSTRACT
The computer vision field has achieved great success in interpreting semantic meanings from images, yet its algorithms can be brittle for tasks with adverse vision conditions and for those suffering from limited data/label pairs. Among these tasks is in-bed human pose monitoring, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings involves pose estimation in complete darkness or full occlusion. The lack of publicly available in-bed pose datasets hinders the applicability of many successful human pose estimation algorithms to this task. In this paper, we introduce our Simultaneously-collected multimodal Lying Pose (SLP) dataset, which includes in-bed pose images from 109 participants captured using multiple imaging modalities, including RGB, long-wave infrared (LWIR), depth, and pressure maps. We also present a physical hyperparameter tuning strategy for ground-truth pose label generation under adverse vision conditions. The SLP design is compatible with mainstream human pose datasets; therefore, state-of-the-art 2D pose estimation models can be trained effectively with the SLP data, with promising performance as high as 95% at PCKh@0.5 on a single modality. The pose estimation performance of these models can be further improved by including additional modalities through the proposed collaborative scheme.
Subjects
Image Interpretation, Computer-Assisted , Posture , Prone Position , Humans , Algorithms
ABSTRACT
Ten percent of adults in the United States have a diagnosis of diabetes and up to a third of these individuals will develop a diabetic foot ulcer (DFU) in their lifetime. Of those who develop a DFU, a fifth will ultimately require amputation with a mortality rate of up to 70% within five years. The human suffering, economic burden, and disproportionate impact of diabetes on communities of color has led to increasing interest in the use of computer vision (CV) and machine learning (ML) techniques to aid the detection, characterization, monitoring, and even prediction of DFUs. Remote monitoring and automated classification are expected to revolutionize wound care by allowing patients to self-monitor their wound pathology, assist in the remote triaging of patients by clinicians, and allow for more immediate interventions when necessary. This scoping review provides an overview of applicable CV and ML techniques. This includes automated CV methods developed for remote assessment of wound photographs, as well as predictive ML algorithms that leverage heterogeneous data streams. We discuss the benefits of such applications and the role they may play in diabetic foot care moving forward. We highlight both the need for, and possibilities of, computational sensing systems to improve diabetic foot care and bring greater knowledge to patients in need.
ABSTRACT
This article presents a highly scalable and rack-mountable wireless sensing system for long-term monitoring (i.e., sensing and estimation) of a small animal's physical state (SAPS), such as changes in location and posture within standard cages. Conventional tracking systems may lack one or more features, such as scalability, cost efficiency, rack-mountability, and insensitivity to lighting conditions, needed to work 24/7 on a large scale. The proposed sensing mechanism relies on the relative changes of multiple resonance frequencies due to the animal's presence over the sensor unit. The sensor unit can track SAPS changes based on changes in the electrical properties of the sensor's near field, which appear in the resonance frequencies, i.e., an electromagnetic (EM) signature, within the 200 MHz-300 MHz frequency range. The sensing unit is located underneath a standard mouse cage and consists of thin layers of a reading coil and six resonators tuned to six distinct frequencies. ANSYS HFSS software is used to model and optimize the proposed sensor unit and to calculate the specific absorption rate (SAR), which is kept under 0.05 W/kg. Multiple prototypes have been implemented to test, validate, and characterize the performance of the design through in vitro and in vivo experiments on mice. The in vitro test results showed a 15 mm spatial resolution in detecting the mouse's location over the sensor array, with maximum frequency shifts of 832 kHz, and posture detection with under 30° resolution. The in vivo experiment on mouse displacement resulted in frequency shifts of up to 790 kHz, indicating the system's capability to detect the mouse's physical state.
Subjects
Laboratory Animal Science , Wireless Technology , Animals , Mice , Animals, Laboratory , Laboratory Animal Science/instrumentation
ABSTRACT
Graph-theoretic approaches to analyzing the spatiotemporal dynamics of brain activity are under-studied but could be a very promising direction in developing effective brain-computer interfaces (BCIs). Many existing BCI systems use electroencephalogram (EEG) signals to record and decode human neural activities noninvasively. Often, however, the features extracted from the EEG signals ignore the topological information hidden in the EEG temporal dynamics. Moreover, existing graph-theoretic approaches are mostly used to reveal the topological patterns of brain functional networks based on synchronization between signals from distinct spatial regions, rather than the interdependence between states at different timestamps. In this study, we present a robust fold-wise hyperparameter optimization framework utilizing a series of conventional graph-based measurements combined with spectral graph features, and investigate its discriminative performance on classification of a designed mental task in 6 participants with amyotrophic lateral sclerosis (ALS). Across all participants, we reached an average accuracy of 71.1%±4.5% for mental task classification by combining the global graph-based measurements and the spectral graph features, higher than the performance of conventional non-graph-based features (67.1%±7.5%). Compared to using either type of graph feature alone (66.3%±6.5% for the eigenvalues and 65.9%±5.2% for the global graph features), our feature combination strategy shows considerable improvement in both accuracy and robustness. Our results indicate the feasibility and advantage of the presented fold-wise optimization framework utilizing graph-based features in BCI systems targeted at end-users.
Subjects
Brain-Computer Interfaces , Humans , Brain , Electroencephalography/methods , Algorithms , Imagination
ABSTRACT
This paper presents an automatic camera-based device to monitor and evaluate the gait speed, standing balance, and 5 times sit-stand (5TSS) tests of the Short Physical Performance Battery (SPPB) and the Timed Up and Go (TUG) test. The proposed design measures and calculates the parameters of the SPPB tests automatically. The SPPB data can be used for physical performance assessment of older patients under cancer treatment. This stand-alone device has a Raspberry Pi (RPi) computer, three cameras, and two DC motors. The left and right cameras are used for the gait speed tests. The center camera is used for the standing balance, 5TSS, and TUG tests and for angle positioning of the camera platform toward the subject, with the DC motors turning the camera left/right and tilting it up/down. The key algorithm for operating the proposed system is developed using Channel and Spatial Reliability Tracking in the cv2 module in Python. Graphical user interfaces (GUIs) on the RPi are developed to run tests and adjust the cameras, controlled remotely via a smartphone and its Wi-Fi hotspot. We tested the implemented camera setup prototype and extracted all SPPB and TUG parameters by conducting several experiments on a human subject population of 8 volunteers (male and female, light and dark complexions) in 69 test runs. The measured data and calculated outputs of the system consist of gait speed tests (0.041 to 1.92 m/s with an average accuracy of >95%), and standing balance, 5TSS, and TUG tests, all with an average time accuracy of >97%.
Subjects
Neoplasms , Walking Speed , Humans , Male , Female , Aged , Reproducibility of Results , Mass Screening , Physical Functional Performance , Postural Balance , Geriatric Assessment , Neoplasms/diagnosis
ABSTRACT
In-bed behavior monitoring is commonly needed for bed-bound patients and has long been confined to wearable devices or expensive pressure mapping systems. Meanwhile, vision-based human pose and posture tracking, despite receiving much attention and success in the computer vision field, has been hindered in terms of usability for in-bed cases due to the substantial privacy concerns surrounding this topic. Moreover, the inference models for mainstream pose and posture estimation often require excessive computing resources, impeding their implementation on edge devices. In this paper, we introduce a privacy-preserving in-bed pose and posture tracking system running entirely on an edge device, with added functionality to detect stable motion as well as to set user-specific alerts for given poses. We evaluated the estimation accuracy of our system on a series of retrospective long-wave infrared (LWIR) images as well as samples from a real-world test environment. Our test results reached over 93.6% estimation accuracy for in-bed poses and over 95.9% accuracy in estimating three in-bed posture categories.
Subjects
Privacy , Wearable Electronic Devices , Algorithms , Humans , Posture , Retrospective Studies
ABSTRACT
Biases in attention to emotional stimuli (i.e., affect-biased attention) contribute to the development and maintenance of depression and anxiety and may be a promising target for intervention. Past attempts to therapeutically modify affect-biased attention have been unsatisfactory due to issues with reliability and precision. Electroencephalogram (EEG)-derived steady-state visual evoked potentials (SSVEPs) provide a temporally sensitive biological index of attention to competing visual stimuli at the level of neuronal populations in the visual cortex. SSVEPs can potentially be used to quantify whether affective distractors vs. task-relevant stimuli have "won" the competition for attention at a trial-by-trial level during neurofeedback sessions. This study piloted a protocol for an SSVEP-based neurofeedback training to modify affect-biased attention using a portable augmented-reality (AR) EEG interface. During neurofeedback sessions with five healthy participants, significantly greater attention was given to the task-relevant stimulus (a Gabor patch) than to affective distractors (negative emotional expressions) across SSVEP indices (p < 0.0001). SSVEP indices exhibited excellent internal consistency, as evidenced by a maximum Guttman split-half coefficient of 0.97 when comparing even to odd trials. Further testing is required, but the findings suggest several SSVEP neurofeedback calculation methods most deserving of additional investigation and support ongoing efforts to develop and implement an SSVEP-guided AR-based neurofeedback training to modify affect-biased attention in adolescent girls at high risk for depression.
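The split-half reliability check reported above can be illustrated with the standard Guttman lambda-4 formula applied to even- vs. odd-trial scores per participant (hypothetical helper names; the data in the usage note are not from the study):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def guttman_split_half(half_a, half_b):
    """Guttman's lambda-4: 2 * (1 - (var(A) + var(B)) / var(A + B)),
    where A and B are the two half-test scores per participant
    (e.g., mean SSVEP index over even trials vs. odd trials)."""
    total = [a + b for a, b in zip(half_a, half_b)]
    return 2.0 * (1.0 - (variance(half_a) + variance(half_b)) / variance(total))
```

When the two halves are perfectly parallel (identical scores), the coefficient reaches its maximum of 1.0; values near 0.97, as reported, indicate the even and odd trials measure nearly the same underlying attentional signal.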
Subjects
Attentional Bias , Augmented Reality , Neurofeedback , Adolescent , Evoked Potentials, Visual , Female , Humans , Reproducibility of Results
ABSTRACT
Decoding neural responses from multimodal information sources, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), has the transformative potential to advance hybrid brain-computer interfaces (hBCIs). However, the modest performance improvements of existing hBCIs might be attributed to the lack of computational frameworks that exploit complementary, synergistic properties in multimodal features. This study proposes a multimodal data fusion framework to represent and decode synergistic multimodal motor imagery (MI) neural responses. We hypothesize that exploiting EEG nonlinear dynamics adds a new informative dimension to the commonly combined EEG-fNIRS features and will ultimately increase the synergy between EEG and fNIRS features toward an enhanced hBCI. The EEG nonlinear dynamics were quantified by extracting graph-based recurrence quantification analysis (RQA) features to complement the commonly used spectral features for an enhanced multimodal configuration when combined with fNIRS. The high-dimensional multimodal features were then passed to a feature selection algorithm based on the least absolute shrinkage and selection operator (LASSO) for fused feature selection. A linear support vector machine (SVM) was then used to evaluate the framework. The mean hybrid classification performance improved by up to 15% and 4% compared to unimodal EEG and fNIRS, respectively. The proposed graph-based framework substantially increased the contribution of EEG features to hBCI classification from 28.16% up to 52.9% when the nonlinear dynamics were introduced, and improved the performance by approximately 2%. These findings suggest that graph-based nonlinear dynamics can increase the synergy between EEG and fNIRS features for an enhanced MI response representation that is not dominated by a single modality.
Subjects
Brain-Computer Interfaces , Imagination/physiology , Nonlinear Dynamics , Electroencephalography/methods , Support Vector Machine
ABSTRACT
We aim to build a system incorporating electroencephalography (EEG) and augmented reality (AR) that is capable of identifying the presence of visual spatial neglect (SN) and mapping the estimated neglected visual field. An EEG-based brain-computer interface (BCI) was used to identify those spatiospectral features that best detect participants with SN among stroke survivors using their EEG responses to ipsilesional and contralesional visual stimuli. Frontal-central delta and alpha, frontal-parietal theta, Fp1 beta, and left frontal gamma were found to be important features for neglect detection. Additionally, temporal analysis of the responses shows that the proposed model is accurate in detecting potentially neglected targets. These targets were predicted using common spatial patterns as the feature extraction algorithm and regularized discriminant analysis combined with kernel density estimation for classification. With our preliminary results, our system shows promise for reliably detecting the presence of SN and predicting visual target responses in stroke patients with SN.
Subjects
Augmented Reality , Brain-Computer Interfaces , Perceptual Disorders , Stroke , Electroencephalography , Humans , Perceptual Disorders/diagnosis , Perceptual Disorders/etiology , Stroke/complications , Stroke/diagnosis
ABSTRACT
Degeneracy in biological systems refers to a many-to-one mapping between physical structures and their functional (including psychological) outcomes. Despite the ubiquity of the phenomenon, traditional analytical tools for modeling degeneracy in neuroscience are extremely limited. In this study, we generated synthetic datasets describing three situations of degeneracy in fMRI data to demonstrate the limitations of the current univariate approach. We describe a novel computational approach for this analysis, referred to as neural topographic factor analysis (NTFA). NTFA is designed to capture variations in neural activity across task conditions and participants. The advantage of this discovery-oriented approach is to reveal whether and how experimental trials and participants cluster into task conditions and participant groups. We applied NTFA to the simulated data, recovering the appropriate degeneracy assumption in all three situations and demonstrating NTFA's utility in uncovering degeneracy. Lastly, we discuss the importance of testing for degeneracy in fMRI data and the implications of applying NTFA to do so.
Subjects
Brain Mapping , Magnetic Resonance Imaging , Humans
ABSTRACT
This paper presents a camera-based device for monitoring walking gait speed. The walking gait speed data will be used for performance assessment of elderly patients with cancer and for calibrating wearable walking gait speed monitoring devices. This standalone device has a Raspberry Pi computer, three cameras (two for finding the trajectory and gait speed of the subject and one for tracking the subject), and two stepper motors. The stepper motors turn the camera platform left and right and tilt it up and down based on video footage from the center camera. The left and right cameras record videos of the person walking. The algorithm for operating the proposed system is developed in Python. The measured data and calculated outputs of the system consist of frame times, distances from the center camera, horizontal angles, distances moved, instantaneous gait speed (frame-by-frame), total distance walked, and average speed. This system covers a large lab area of 134.3 m² and has achieved errors of less than 5% for gait speed calculation.
Clinical Relevance: This project will help specialists adjust the chemotherapy dosage for elderly patients with cancer. The results will also be used to analyze human walking movements for frailty estimation and rehabilitation applications.
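The frame-by-frame speed computation described above reduces to successive distances divided by time deltas; a minimal sketch under assumed input conventions (timestamps in seconds, positions in metres, names hypothetical):

```python
import math

def gait_speeds(timestamps, positions):
    """Instantaneous speed between consecutive frames, total distance
    walked, and average speed over the whole walk.

    timestamps: seconds per frame; positions: (x, y) metres per frame.
    """
    speeds, total_dist = [], 0.0
    for (t0, p0), (t1, p1) in zip(zip(timestamps, positions),
                                  zip(timestamps[1:], positions[1:])):
        d = math.dist(p0, p1)            # straight-line distance this frame
        total_dist += d
        speeds.append(d / (t1 - t0))     # frame-by-frame speed
    avg = total_dist / (timestamps[-1] - timestamps[0])
    return speeds, total_dist, avg
```

In the actual device, positions would come from the stereo geometry of the left and right cameras rather than being given directly.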
Subjects
Neoplasms , Walking Speed , Aged , Algorithms , Gait , Humans , Walking
ABSTRACT
This paper describes a wearable inductive sensing system to monitor (i.e., sense and estimate) walking gait speed. The proposed design relies on a multi-resonance inductive link to quantify the angle between the wearer's legs for calculating walking speed. The walking gait speed can be used to estimate frailty in elderly patients with cancer. We have designed, optimized, and implemented a multi-resonator sensor unit to precisely measure the angle between the legs during walking. The couplings between resonators change with the lateral displacements produced by walking, and a reading coil senses the frequency bifurcations corresponding to the changes in the angle between the legs. The proposed design is optimized using ANSYS HFSS and implemented using copper foil. The specific absorption rate (SAR) in the human body is calculated to be 0.035 W/kg using the developed HFSS model. The operating frequency range of the proposed sensor is 25 MHz to 46 MHz, and it can measure angles up to 90° (-45° to +45°). The measured resolution for estimating the angle shows the sensor's capability to calculate walking speed with a resolution of less than 0.1 m/s.
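Why the inter-leg angle suffices for speed estimation can be illustrated with simple "compass gait" geometry: for legs of length L opened at angle θ, the step length is 2·L·sin(θ/2), and speed is step length times cadence (a hypothetical back-of-the-envelope sketch, not the sensor's calibration procedure):

```python
import math

def speed_from_leg_angle(theta_deg, leg_length_m, steps_per_s):
    """Approximate walking speed from the peak inter-leg angle.

    The step of an isosceles 'compass gait' with leg length L and
    opening angle theta spans 2 * L * sin(theta / 2) on the ground.
    """
    step_len = 2.0 * leg_length_m * math.sin(math.radians(theta_deg) / 2.0)
    return step_len * steps_per_s
```

For example, a 60° peak angle with 0.9 m legs and 1.8 steps/s gives roughly 1.6 m/s, a plausible brisk walking speed.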
Subjects
Frailty , Wearable Electronic Devices , Aged , Gait , Humans , Walking , Walking Speed
ABSTRACT
Despite continuous research, communication approaches based on brain-computer interfaces (BCIs) are not yet efficient and reliable enough for severely disabled patients to depend on. To date, most motor imagery (MI)-based BCI systems use conventional spectral analysis methods to extract discriminative features and classify the associated electroencephalogram (EEG)-based sensorimotor rhythm (SMR) dynamics, which results in relatively low performance. In this study, we investigated the feasibility of using recurrence quantification analysis (RQA) and complex-network-theory graph-based feature extraction methods as a novel way to improve MI-BCI performance. Rooted in chaos theory, these features explore the nonlinear dynamics underlying the MI neural responses as a new informative dimension for classifying MI. METHOD: EEG time series recorded from six healthy participants performing MI-Rest tasks were projected into multidimensional phase-space trajectories in order to construct the corresponding recurrence plots (RPs). Eight nonlinear graph-based RQA features were extracted from the RPs and then compared to classical spectral features through a 5-fold nested cross-validation procedure for parameter optimization using a linear support vector machine (SVM) classifier. RESULTS: Nonlinear graph-based RQA features improved the average performance of the MI-BCI by 5.8% compared to the classical features. SIGNIFICANCE: These findings suggest that RQA and complex network analysis could provide new informative dimensions for the nonlinear characteristics of EEG signals, enhancing MI-BCI performance.
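The first step of RQA, embedding the time series in phase space and thresholding pairwise distances into a recurrence plot, can be sketched generically as follows (a textbook construction; the embedding dimension, delay, and epsilon shown are assumed defaults, not the study's settings):

```python
import math

def recurrence_plot(series, dim=3, delay=1, eps=0.5):
    """Binary recurrence matrix R[i][j] = 1 if the time-delay-embedded
    points i and j lie within eps of each other (Euclidean distance)."""
    n = len(series) - (dim - 1) * delay
    points = [[series[i + k * delay] for k in range(dim)] for i in range(n)]
    return [[1 if math.dist(p, q) <= eps else 0 for q in points]
            for p in points]

def recurrence_rate(rp):
    """Density of recurrent points: the simplest RQA measure."""
    n = len(rp)
    return sum(map(sum, rp)) / (n * n)
```

Graph-based RQA features then treat this binary matrix as the adjacency matrix of an undirected graph and compute network measures (degree, clustering, etc.) on it.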
Subjects
Brain-Computer Interfaces , Electroencephalography , Humans , Imagery, Psychotherapy , Imagination , Support Vector Machine
ABSTRACT
OBJECTIVE: The topological information hidden in the EEG spectral dynamics is often ignored in the majority of existing brain-computer interface (BCI) systems. Moreover, a systematic multimodal fusion of EEG with other informative brain signals, such as functional near-infrared spectroscopy (fNIRS), towards enhancing the performance of BCI systems has not been fully investigated. In this study, we present a robust EEG-fNIRS data fusion framework utilizing a series of graph-based EEG features and investigate their performance on a motor imagery (MI) classification task. METHOD: We first extract the amplitude and phase sequences of users' multi-channel EEG signals from complex Morlet wavelet time-frequency maps, and then convert them into an undirected graph to extract EEG topological features. The graph-based features from EEG are selected by a thresholding method and fused with the temporal features from fNIRS signals, after each set is selected by the least absolute shrinkage and selection operator (LASSO) algorithm. The fused features are then classified as MI task vs. baseline by a linear support vector machine (SVM) classifier. RESULTS: The time-frequency graphs of EEG signals improved the MI classification accuracy by ~5% compared to graphs built on the band-pass-filtered temporal EEG signals. Our proposed graph-based method also showed performance comparable to classical EEG features based on power spectral density (PSD), but with a much smaller standard deviation, demonstrating its robustness for potential use in a practical BCI system. Our fusion analysis revealed a considerable improvement of ~17% over the highest EEG-only average accuracy and ~3% over the highest fNIRS-only accuracy, demonstrating enhanced performance when modality fusion is used relative to single-modality outcomes.
SIGNIFICANCE: Our findings indicate the potential of the proposed data fusion framework utilizing graph-based features in hybrid BCI systems, making the motor imagery inference more accurate and more robust.
Subjects
Brain-Computer Interfaces , Algorithms , Electroencephalography , Imagination , Support Vector Machine
ABSTRACT
Spatial neglect (SN) is a neurological disorder that causes inattention to visual stimuli in the contralesional visual field, stemming from unilateral brain injury such as stroke. The current gold-standard method of SN assessment, the conventional Behavioral Inattention Test (BIT-C), is highly variable and inconsistent in its results. In our previous work, we built an augmented reality (AR)-based BCI to overcome the limitations of the BIT-C and classified between neglected and non-neglected targets with high accuracy. Our previous approach included personalization of the neglect detection classifier, but the process required rigorous retraining from scratch and time-consuming feature selection for each participant. Future steps of our work will require rapid personalization of the neglect classifier; therefore, in this paper, we investigate fine-tuning of a neural network model to hasten the personalization process.
Subjects
Perceptual Disorders , Stroke , Electroencephalography , Functional Laterality , Humans , Perceptual Disorders/diagnosis , Stroke/diagnosis , Visual Fields
ABSTRACT
Spatial neglect (SN) is a neurological syndrome in stroke patients, commonly due to unilateral brain injury. It results in inattention to stimuli in the contralesional visual field. The current gold standard for SN assessment is the Behavioral Inattention Test (BIT), a series of pen-and-paper tests. These tests can be unreliable due to high variability in subtest performance; they are limited in their ability to measure the extent of neglect, and they do not assess patients in a realistic and dynamic environment. In this paper, we present an electroencephalography (EEG)-based brain-computer interface (BCI) that utilizes the Starry Night Test to overcome the limitations of traditional SN assessment tests. Our overall goal with this EEG-based Starry Night neglect detection system is to provide a more detailed assessment of SN, specifically, to detect the presence of SN and its severity. To achieve this goal, as an initial step, we utilize a convolutional neural network (CNN)-based model to analyze EEG data and accordingly propose a neglect detection method to distinguish between stroke patients without neglect and stroke patients with neglect.
Clinical Relevance: The proposed EEG-based BCI can be used to detect neglect in stroke patients with high accuracy, specificity, and sensitivity. Further research will additionally allow for an estimation of a patient's field of view (FOV) for a more detailed assessment of neglect.