1.
Inf Fusion ; 91: 15-30, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37324653

ABSTRACT

In the area of human performance and cognitive research, machine learning (ML) problems become increasingly complex due to limitations in experimental design, resulting in poor predictive models. More specifically, experimental study designs produce very few data instances, have large class imbalances and conflicting ground truth labels, and generate wide data sets due to the diverse array of sensors. From an ML perspective, these problems are further exacerbated in anomaly detection cases, where class imbalances occur and there are almost always more features than samples. Typically, dimensionality reduction methods (e.g., PCA, autoencoders) are used to handle these issues in wide data sets. However, these methods do not always map to a lower-dimensional space appropriately, and they can capture noise or irrelevant information. In addition, when new sensor modalities are incorporated, the entire ML paradigm has to be remodeled because of the dependencies introduced by the new information. Remodeling these ML paradigms is time-consuming and costly due to the lack of modularity in the paradigm design. Furthermore, human performance research experiments at times create ambiguous class labels, because subject-matter experts cannot agree on ground truth annotations, making the ML problem nearly impossible to model. This work pulls insights from Dempster-Shafer theory (DST), stacking of ML models, and bagging to address uncertainty and ignorance in multi-class ML problems caused by ambiguous ground truth, low sample counts, subject-to-subject variability, class imbalances, and wide data sets.
Based on these insights, we propose a probabilistic model fusion approach, Naive Adaptive Probabilistic Sensor (NAPS), which combines ML paradigms built around bagging algorithms to overcome these experimental data concerns while maintaining a modular design for future sensor integration (new features) and conflicting ground truth data. We demonstrate significant overall performance improvements using NAPS (an accuracy of 95.29%) in detecting human task errors (a four-class problem) caused by impaired cognitive states, and a negligible drop in performance in the case of ambiguous ground truth labels (an accuracy of 93.93%), compared to other methodologies (an accuracy of 64.91%). This work potentially sets the foundation for other human-centric modeling systems that rely on human state prediction.
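The bagging-and-fusion idea at the core of this approach can be illustrated with a toy sketch (an illustrative analogue, not the published NAPS implementation): several weak learners are trained on bootstrap resamples, and their class-probability outputs are averaged into one fused prediction.

```python
import random
from collections import Counter

def train_prototype(sample):
    """'Train' a weak learner: store the per-class mean of a 1-D feature."""
    sums, counts = {}, {}
    for x, y in sample:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_proba(prototypes, x):
    """Soft vote: inverse-distance weights to each class prototype."""
    w = {y: 1.0 / (abs(x - mu) + 1e-9) for y, mu in prototypes.items()}
    total = sum(w.values())
    return {y: wi / total for y, wi in w.items()}

def bagged_fusion(data, x, n_models=25, seed=0):
    """Bootstrap-train n_models weak learners, average their probabilities."""
    rng = random.Random(seed)
    fused = Counter()
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        for y, p in predict_proba(train_prototype(sample), x).items():
            fused[y] += p / n_models
    return dict(fused)

# Toy two-class data: class 'low' clusters near 0, class 'high' near 10.
data = [(0.1, "low"), (0.3, "low"), (0.2, "low"),
        (9.8, "high"), (10.1, "high"), (9.9, "high")]
probs = bagged_fusion(data, x=9.5)
print(max(probs, key=probs.get))  # 'high'
```

Because each weak learner is trained independently on its own resample, a new sensor (feature) can be added by training additional learners and folding their probabilities into the same average, which is the modularity argument in the abstract.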

2.
Sci Rep ; 10(1): 3909, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32127579

ABSTRACT

Electroencephalography (EEG) is a method for recording electrical activity from the scalp that is indicative of cortical brain activity. EEG has been used to diagnose neurological diseases and to characterize impaired cognitive states. When the electrical activity of neurons is temporally synchronized, the likelihood that they reach the threshold potential needed for the signal to propagate to the next neuron increases. This phenomenon is typically analyzed as an increase in spectral intensity from the summation of these neurons firing. Non-linear analysis methods (e.g., entropy) have been explored to characterize neuronal firings, but they analyze only temporal information, not the frequency spectrum. By examining temporal and spectral entropic relationships simultaneously, we can better characterize how neurons are isolated (the signal's inability to propagate to adjacent neurons), an indicator of impairment. A novel time-frequency entropic analysis method, referred to as Activation Complexity (AC), was designed to quantify these dynamics from key EEG frequency bands. The data were collected during a cognitive impairment study at NASA Langley Research Center involving hypoxia induction in 49 human test subjects. AC captured significant changes in EEG firing patterns within both explanatory (p < 0.05) and predictive models (a 10% increase in accuracy). The proposed work sets the methodological foundation for quantifying neuronal isolation and introduces a new potential technique for understanding human cognitive impairment across a range of neurological diseases and insults.
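The abstract does not give AC's formula, but its basic ingredient, entropy computed over a signal's frequency content, can be sketched as follows (a generic spectral-entropy illustration, not the AC definition): power concentrated in one band yields low entropy, while power spread across bands yields high entropy.

```python
import cmath
import math

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized one-sided DFT power
    spectrum, DC bin excluded."""
    n = len(signal)
    power = []
    for k in range(1, n // 2 + 1):  # skip the DC bin
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power.append(abs(coeff) ** 2)
    total = sum(power)
    probs = [p / total for p in power]
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 128
# One pure tone vs. a mix of four tones (all at integer bin frequencies).
pure = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
mixed = [sum(math.sin(2 * math.pi * f * t / n) for f in (3, 8, 21, 30))
         for t in range(n)]

# A single tone concentrates spectral power, so its entropy is lower.
print(spectral_entropy(pure) < spectral_entropy(mixed))  # True
```

Applying such an entropy measure per EEG frequency band, and jointly with a temporal entropy, is the kind of time-frequency pairing the abstract describes.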


Subject(s)
Brain/physiopathology , Cognitive Dysfunction/physiopathology , Electroencephalography , Brain/pathology , Cognitive Dysfunction/pathology , Entropy , Humans , Neurons/pathology , Signal Processing, Computer-Assisted
3.
Comput Biol Med ; 103: 198-207, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30384177

ABSTRACT

Heart rate complexity (HRC) is a proven metric for gaining insight into human stress and physiological deterioration. To calculate HRC, detecting the exact instant of each heartbeat, the R-peak, is necessary. Electrocardiogram (ECG) signals can often be corrupted by environmental noise (e.g., from electromagnetic interference or movement artifacts), which can alter the HRC measurement, producing erroneous inputs that feed into decision support models. Current literature has only investigated how HRC is affected by noise when R-peak detection errors occur (false positives and false negatives). However, the numerical methods used to calculate HRC are also sensitive to the specific location of the fiducial point of the R-peak. This raises many questions about how this fiducial point is altered by noise, the resulting impact on the measured HRC, and how we can account for noisy HRC measures as inputs into our decision models. This work uses Monte Carlo simulations to systematically add white and pink noise at different permutations of signal-to-noise ratios (SNRs), time segments, sampling rates, and HRC measurements, characterizing how noise influences the HRC measure by altering the fiducial point of the R-peak. The information generated by these simulations improves decision processes for system design and addresses key concerns; for example, it shows permutation entropy to be a more precise, reliable, less biased, and more sensitive HRC measurement than sample and approximate entropy.
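The core step of such a Monte Carlo simulation, scaling noise so the corrupted signal hits a target SNR, can be sketched like this (white noise only; the paper's pink-noise generation and R-peak re-detection stages are omitted, and the waveform is a stand-in sinusoid rather than real ECG):

```python
import math
import random

def add_noise_at_snr(signal, snr_db, rng):
    """Add white Gaussian noise scaled to the requested SNR (in dB)."""
    sig_power = sum(x * x for x in signal) / len(signal)
    noise_power = sig_power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [x + rng.gauss(0, sigma) for x in signal]

def measured_snr_db(signal, noisy):
    """Check the SNR actually achieved after noise injection."""
    sig_power = sum(x * x for x in signal) / len(signal)
    noise_power = sum((a - b) ** 2
                      for a, b in zip(noisy, signal)) / len(signal)
    return 10 * math.log10(sig_power / noise_power)

rng = random.Random(42)
n = 5000
ecg_like = [math.sin(2 * math.pi * 1.2 * t / 250) for t in range(n)]
noisy = add_noise_at_snr(ecg_like, snr_db=10, rng=rng)
print(round(measured_snr_db(ecg_like, noisy), 1))  # close to 10
```

Repeating this over many random seeds and SNR/sampling-rate permutations, then re-detecting R-peaks on each corrupted realization, yields the distribution of fiducial-point shifts the study characterizes.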


Subject(s)
Electrocardiography/methods , Heart Rate/physiology , Signal Processing, Computer-Assisted , Algorithms , Computer Simulation , Entropy , Humans , Hypoxia/physiopathology , Monte Carlo Method , Signal-To-Noise Ratio
4.
Biomed Opt Express ; 7(3): 979-1002, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-27231602

ABSTRACT

Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise, and thereby increasing state prediction accuracy, remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%).
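The contrast between a static regression and a non-stationary model can be illustrated with a minimal least-mean-squares (LMS) adaptive noise canceller, a generic adaptive-filtering sketch, not the specific model used in this study: the filter re-estimates its weights at every sample, so it can track noise whose statistics drift over time.

```python
import math

def lms_filter(desired, reference, mu=0.01, n_taps=4):
    """LMS adaptive noise canceller: subtract the part of `desired`
    predictable from `reference`, updating weights at every sample."""
    w = [0.0] * n_taps
    cleaned = []
    for i in range(len(desired)):
        x = [reference[i - j] if i - j >= 0 else 0.0
             for j in range(n_taps)]
        y = sum(wj * xj for wj, xj in zip(w, x))  # noise estimate
        e = desired[i] - y                        # error = cleaned sample
        w = [wj + 2 * mu * e * xj for wj, xj in zip(w, x)]
        cleaned.append(e)
    return cleaned

n = 2000
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]      # "brain" signal
noise_ref = [math.sin(2 * math.pi * 50 * t / n) for t in range(n)]  # systemic noise reference
corrupted = [s + 0.8 * r for s, r in zip(signal, noise_ref)]

cleaned = lms_filter(corrupted, noise_ref)
half = n // 2  # score only after the filter has had time to adapt
err_before = sum((c - s) ** 2 for c, s in zip(corrupted[half:], signal[half:]))
err_after = sum((c - s) ** 2 for c, s in zip(cleaned[half:], signal[half:]))
print(err_after < err_before)  # True: adaptation reduces residual noise
```

A static regression would fit one fixed set of weights over the whole recording; the per-sample update above is what lets an adaptive model follow physiological noise that changes during the task.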

5.
Front Hum Neurosci ; 7: 861, 2013.
Article in English | MEDLINE | ID: mdl-24379771

ABSTRACT

The ability to distinguish between high and low levels of task engagement in the real world is important for detecting and preventing performance decrements during safety-critical operational tasks. We therefore investigated whether functional Near Infrared Spectroscopy (fNIRS), a portable brain neuroimaging technique, can be used to distinguish between high and low levels of task engagement during the performance of a selective attention task. A group of participants performed the multi-source interference task (MSIT) while we recorded brain activity with fNIRS from two brain regions. One was a key region of the "task-positive" network, which is associated with relatively high levels of task engagement. The second was a key region of the "task-negative" network, which is associated with relatively low levels of task engagement (e.g., resting and not performing a task). Using activity in these regions as inputs to a multivariate pattern classifier, we were able to predict above chance levels whether participants were engaged in performing the MSIT or resting. We were also able to replicate prior findings from functional magnetic resonance imaging (fMRI) indicating that activity in task-positive and task-negative regions is negatively correlated during task performance. Finally, data from a companion fMRI study verified our assumptions about the sources of brain activity in the fNIRS experiment and established an upper bound on classification accuracy in our task. Together, our findings suggest that fNIRS could prove quite useful for monitoring cognitive state in real-world settings.
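The negative coupling between task-positive and task-negative regions reported above is the kind of relationship a plain Pearson coefficient captures. A minimal sketch on synthetic, anti-correlated series (hypothetical data, not the study's recordings):

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(7)
task_positive = [rng.gauss(0, 1) for _ in range(500)]
# Model task-negative activity as anti-correlated, plus independent noise.
task_negative = [-0.8 * x + rng.gauss(0, 0.5) for x in task_positive]

r = pearson_r(task_positive, task_negative)
print(r < 0)  # True: the two regions' time series are anti-correlated
```

In the study itself the two regional time series also serve as the feature vector for the pattern classifier; the anti-correlation is what makes the pair informative about engagement versus rest.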
