Results 1 - 3 of 3
1.
Inf Fusion; 91: 15-30, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37324653

ABSTRACT

In the area of human performance and cognitive research, machine learning (ML) problems become increasingly complex due to limitations in experimental design, resulting in poor predictive models. More specifically, experimental study designs produce very few data instances, have large class imbalances and conflicting ground truth labels, and generate wide data sets because of the diverse set of sensors involved. From an ML perspective, these problems are further exacerbated in anomaly detection cases, where class imbalances occur and there are almost always more features than samples. Typically, dimensionality reduction methods (e.g., PCA, autoencoders) are used to handle the issues that arise from wide data sets. However, these methods do not always map the data to a lower-dimensional space appropriately, and they can capture noise or irrelevant information. In addition, when new sensor modalities are incorporated, the entire ML paradigm has to be remodeled because of the dependencies introduced by the new information. Remodeling these ML paradigms is time-consuming and costly due to the lack of modularity in the paradigm design. Furthermore, human performance research experiments at times create ambiguous class labels because subject-matter experts cannot agree on the ground truth annotations, making the problem nearly impossible to model. This work pulls insights from Dempster-Shafer theory (DST), stacking of ML models, and bagging to address the uncertainty and ignorance in multi-class ML problems caused by ambiguous ground truth, few samples, subject-to-subject variability, class imbalances, and wide data sets. Based on these insights, we propose a probabilistic model fusion approach, the Naive Adaptive Probabilistic Sensor (NAPS), which combines ML paradigms built around bagging algorithms to overcome these experimental data concerns while maintaining a modular design for future sensors (new feature integration) and conflicting ground truth data. We demonstrate significant overall performance improvements using NAPS (an accuracy of 95.29%) in detecting human task errors (a four-class problem) caused by impaired cognitive states, and a negligible drop in performance in the case of ambiguous ground truth labels (an accuracy of 93.93%), compared to other methodologies (an accuracy of 64.91%). This work potentially sets the foundation for other human-centric modeling systems that rely on human state prediction modeling.
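
The abstract above names Dempster-Shafer theory (DST) as one ingredient of the NAPS fusion approach. As a rough illustration only, the Python sketch below shows Dempster's rule of combination applied to two classifiers' belief masses over a four-class frame; the class labels, mass values, and function names are hypothetical, and this is not the authors' NAPS implementation.

```python
# Minimal sketch of Dempster's rule of combination for fusing two classifiers'
# beliefs over a four-class problem. Illustrative only: the class names, mass
# assignments, and helper function are hypothetical, not the NAPS method itself.
from itertools import product

FRAME = frozenset({"c1", "c2", "c3", "c4"})  # frame of discernment (four classes)

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two hypothetical sensor-specific classifiers; residual mass on the whole
# frame represents ignorance rather than being forced onto a single class.
m_eeg = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.1, FRAME: 0.3}
m_ecg = {frozenset({"c1"}): 0.4, frozenset({"c3"}): 0.2, FRAME: 0.4}

fused = dempster_combine(m_eeg, m_ecg)
for hypothesis, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(hypothesis), round(mass, 3))
```

In this toy example the two sources agree most strongly on class c1, so the fused belief concentrates there while the mass left on the full frame records the remaining ignorance.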

2.
Sci Rep; 10(1): 3909, 2020 Mar 3.
Article in English | MEDLINE | ID: mdl-32127579

ABSTRACT

Electroencephalography (EEG) is a method for recording electrical activity from the scalp that is indicative of cortical brain activity. EEG has been used to diagnose neurological diseases and to characterize impaired cognitive states. When the electrical activity of neurons is temporally synchronized, the likelihood that they reach the threshold potential needed for the signal to propagate to the next neuron increases. This phenomenon is typically analyzed as an increase in spectral intensity resulting from the summation of these neurons firing. Non-linear analysis methods (e.g., entropy) have been explored to characterize neuronal firing, but they only analyze temporal information and not the frequency spectrum. By examining temporal and spectral entropic relationships simultaneously, we can better characterize neuronal isolation (a signal's inability to propagate to adjacent neurons), an indicator of impairment. A novel time-frequency entropic analysis method, referred to as Activation Complexity (AC), was designed to quantify these dynamics from key EEG frequency bands. The data were collected during a cognitive impairment study at NASA Langley Research Center involving hypoxia induction in 49 human test subjects. AC demonstrated significant changes in EEG firing patterns, characterized within explanatory (p < 0.05) and predictive models (10% increase in accuracy). The proposed work sets the methodological foundation for quantifying neuronal isolation and introduces a new potential technique for understanding human cognitive impairment across a range of neurological diseases and insults.


Subject(s)
Brain/physiopathology , Cognitive Dysfunction/physiopathology , Electroencephalography , Brain/pathology , Cognitive Dysfunction/pathology , Entropy , Humans , Neurons/pathology , Signal Processing, Computer-Assisted
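
This entry's abstract describes a time-frequency entropic measure (Activation Complexity) computed over key EEG frequency bands. The sketch below illustrates the general idea with a band-limited spectral-entropy calculation on a synthetic signal; the band limits, sampling rate, window length, and test signal are assumptions, and the code is not the authors' AC definition.

```python
# Sketch of a band-limited time-frequency entropy for a single EEG channel,
# meant only to illustrate the kind of joint temporal/spectral measure the
# abstract describes. Sampling rate, bands, and the synthetic trace are assumed.
import numpy as np
from scipy.signal import spectrogram

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # canonical bands

def band_entropy(signal, fs, band, nperseg=256):
    """Shannon entropy of the normalized power distribution over time and
    frequency, restricted to one frequency band."""
    freqs, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    mask = (freqs >= band[0]) & (freqs < band[1])
    power = sxx[mask]                  # band-limited time-frequency power
    p = power / power.sum()            # normalize to a probability surface
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic 30 s EEG-like trace: 10 Hz alpha rhythm plus broadband noise.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

for name, band in BANDS.items():
    print(name, round(band_entropy(eeg, FS, band), 3))
```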
3.
Comput Biol Med; 103: 198-207, 2018 Dec 1.
Article in English | MEDLINE | ID: mdl-30384177

ABSTRACT

Heart rate complexity (HRC) is a proven metric for gaining insight into human stress and physiological deterioration. To calculate HRC, detection of the exact instant at which the heart beats, the R-peak, is necessary. Electrocardiogram (ECG) signals can often be corrupted by environmental noise (e.g., from electromagnetic interference or movement artifacts), which can alter the HRC measurement and produce erroneous inputs that feed into decision support models. The current literature has only investigated how HRC is affected by noise when R-peak detection errors occur (false positives and false negatives). However, the numerical methods used to calculate HRC are also sensitive to the specific location of the R-peak's fiducial point. This raises many questions regarding how this fiducial point is altered by noise, the resulting impact on the measured HRC, and how we can account for noisy HRC measures as inputs into our decision models. This work uses Monte Carlo simulations to systematically add white and pink noise at different permutations of signal-to-noise ratios (SNRs), time segments, sampling rates, and HRC measurements to characterize how noise influences the HRC measure by altering the fiducial point of the R-peak. The information generated from these simulations supports improved decision processes for system design and addresses key concerns, such as showing that permutation entropy is a more precise, reliable, less biased, and more sensitive HRC measurement than sample and approximate entropy.


Subject(s)
Electrocardiography/methods , Heart Rate/physiology , Signal Processing, Computer-Assisted , Algorithms , Computer Simulation , Entropy , Humans , Hypoxia/physiopathology , Monte Carlo Method , Signal-To-Noise Ratio
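
This entry's abstract describes Monte Carlo simulations that inject noise at controlled SNRs and evaluate entropy-based HRC measures. The sketch below reproduces that general workflow for permutation entropy only, using a textbook Bandt-Pompe implementation on a synthetic RR-interval series; the SNR levels, embedding parameters, and test series are assumptions rather than the paper's settings.

```python
# Monte Carlo sketch: inject white noise into a hypothetical RR-interval series
# at target SNRs and recompute permutation entropy. Textbook implementation for
# illustration; not the paper's pipeline, parameters, or data.
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = x.size - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1     # ordinal pattern of the window
    probs = np.array([c for c in counts.values() if c > 0], dtype=float)
    probs /= probs.sum()
    return float(-(probs * np.log2(probs)).sum() / math.log2(math.factorial(order)))

def add_noise_at_snr(signal, snr_db, rng):
    """Add white Gaussian noise so the result has roughly the requested SNR (dB)."""
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.size)

# Hypothetical clean RR-interval series (seconds): ~60 bpm with mild variability.
rng = np.random.default_rng(1)
rr_clean = (1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300))
            + 0.02 * rng.standard_normal(300))

for snr_db in (30, 20, 10):                        # sweep over SNR levels
    pe_values = [permutation_entropy(add_noise_at_snr(rr_clean, snr_db, rng))
                 for _ in range(50)]               # 50 Monte Carlo realizations
    print(f"SNR {snr_db:>2} dB -> mean permutation entropy {np.mean(pe_values):.3f}")
```

As the SNR drops, the added noise scrambles the ordinal patterns and pushes the normalized permutation entropy toward 1, which is the kind of sensitivity-to-noise behavior the simulations in the abstract characterize.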