1.
Stud Health Technol Inform ; 309: 170-174, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37869833

ABSTRACT

The WHISPER (Widespread Hearing Impairment Screening and PrEvention of Risk) platform was recently developed to screen for hearing loss (HL) and cognitive decline in adults. It includes a battery of tests (a risk factors (RF) questionnaire, a language-independent speech-in-noise test, and cognitive tests) and provides a pass/fail outcome based on the analysis of several features. Earlier studies demonstrated high accuracy of the speech-in-noise test for predicting HL in 350 participants. In this study, preliminary results from the RF questionnaire (137 participants) and from the visual digit span test (DST) (78 participants) are presented. Despite the relatively small sample size, these findings indicate that the RF questionnaire and the DST may provide additional features for characterizing the overall individual profile, adding information on short-term memory performance and on the overall risk of HL and cognitive decline. Future research is needed to expand the number of subjects tested, the number of features analyzed, and the range of algorithms used (including supervised and unsupervised machine learning) in order to identify novel measures that predict individual hearing and cognitive abilities while accounting for individual risk.
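For illustration only, a minimal sketch in Python of how a forward visual digit span score and a pass/fail outcome might be derived; the stopping rule and the cutoff below are hypothetical placeholders, as the abstract does not describe WHISPER's actual scoring.

    # Sketch of forward digit span scoring, assuming sequences of increasing
    # length and a stop-after-two-consecutive-errors rule. The pass threshold
    # is a hypothetical placeholder, not the WHISPER platform's criterion.
    from typing import List, Tuple

    def digit_span_score(trials: List[Tuple[List[int], List[int]]]) -> int:
        """trials: (presented, recalled) digit sequences, ordered by
        increasing length. Returns the longest correctly recalled length
        before two consecutive failures."""
        span, consecutive_errors = 0, 0
        for presented, recalled in trials:
            if recalled == presented:
                span = max(span, len(presented))
                consecutive_errors = 0
            else:
                consecutive_errors += 1
                if consecutive_errors == 2:
                    break
        return span

    def dst_pass(span: int, threshold: int = 5) -> bool:
        # Hypothetical screening cutoff, for illustration only.
        return span >= threshold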


Subject(s)
Cognitive Dysfunction, Deafness, Hearing Loss, Speech Perception, Adult, Humans, Hearing Loss/diagnosis, Hearing Loss/prevention & control, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/prevention & control, Noise
2.
Front Hum Neurosci ; 17: 1286621, 2023.
Article in English | MEDLINE | ID: mdl-38259333

ABSTRACT

Emotions significantly shape decision-making, and targeted emotional elicitation is an important factor in neuromarketing, where it affects advertising effectiveness by capturing potential customers' attention through emotional triggers. Analyzing biometric parameters after stimulus exposure may help in understanding emotional states. This study investigates autonomic and central nervous system responses to emotional stimuli, including images, auditory cues, and their combination, while recording physiological signals: the electrocardiogram, blood volume pulse, galvanic skin response, pupillometry, respiration, and the electroencephalogram. The primary goal of the proposed analysis is to compare emotional stimulation methods and to identify the approach that produces the most distinct physiological patterns. A novel feature selection technique is applied to further optimize the separation of four emotional states, and basic machine learning approaches are used to discern the emotions elicited by the different kinds of stimulation. Electroencephalographic, galvanic skin response, and cardio-respiratory coupling-derived features were the most significant in distinguishing the four emotional states. Further findings highlight the crucial role of auditory stimuli in creating distinct physiological patterns that enhance classification within a four-class problem. When combining all three types of stimulation, a validation accuracy of 49% was achieved. The sound-only and image-only phases resulted in 52% and 44% accuracy, respectively, whereas the combined stimulation of images and sounds led to 51% accuracy. Isolated visual stimuli yielded less distinct patterns, requiring more signals for comparatively inferior performance. This significance is notable given the limited exploration of auditory stimulation in the emotion recognition literature, particularly in contrast with the plethora of studies using visual stimulation. In marketing, auditory components might therefore hold greater potential to influence consumer choices.
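As a sketch of the kind of basic machine-learning evaluation the abstract describes (the actual features, classifier, and validation scheme are not specified, so all choices below are assumptions), a cross-validated four-class pipeline might look like this:

    # Sketch: cross-validated 4-class emotion classification on a feature
    # matrix X (trials x physiological features) with labels y in {0..3}.
    # Classifier and CV scheme are illustrative assumptions.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))      # placeholder feature matrix
    y = rng.integers(0, 4, size=200)    # placeholder emotion labels

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"validation accuracy: {scores.mean():.2f}")  # chance level = 0.25

With four balanced classes, chance accuracy is 25%, which is the baseline against which the reported 44-52% accuracies should be read.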

3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1968-1971, 2022 07.
Article in English | MEDLINE | ID: mdl-36086244

ABSTRACT

Many studies in the literature attempt to recognize emotions from videos or images, but very few have explored the role of sounds in evoking emotions. In this study we devised an experimental protocol for eliciting emotions using, separately and jointly, images and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. During the experiments we recorded skin conductance and pupillary signals and processed them to extract indices linked to the autonomic nervous system, revealing specific patterns of behavior depending on the stimulation modality. Our results show that skin conductance helps discriminate emotions along the arousal dimension, whereas features derived from the pupillary signal discriminate states along both the valence and arousal dimensions. In particular, the pupillary diameter was significantly greater at increasing arousal and during elicitation of negative emotions in the image-only and image-with-sound phases. In the sound-only phase, on the other hand, the power calculated in the high and very high frequency bands of the pupillary diameter was significantly greater at higher valence (valence ratings > 5). Clinical relevance: This study demonstrates the ability of physiological signals to assess specific emotional states by providing different activation patterns depending on whether stimulation is through images, sounds, or images with sounds. The approach has high clinical relevance as it could be extended to evaluate mood disorders (e.g., depression, bipolar disorder, or stress), or to use the physiological patterns found for sounds to study whether hearing aids can increase emotional perception.
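A sketch of how band power of the pupillary diameter signal might be computed from a Welch spectral estimate; the band edges and sampling rate below are illustrative assumptions, not the paper's definitions of the "high" and "very high" frequency bands.

    # Sketch: power of a pupil-diameter signal in a given frequency band,
    # via Welch's PSD estimate. Band edges are assumed for illustration.
    import numpy as np
    from scipy.signal import welch

    def band_power(pupil: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
        f, pxx = welch(pupil, fs=fs, nperseg=min(len(pupil), 256))
        mask = (f >= f_lo) & (f < f_hi)
        return float(np.sum(pxx[mask]) * (f[1] - f[0]))  # rectangle-rule integral

    fs = 60.0                                            # assumed eye-tracker rate (Hz)
    pupil = np.random.default_rng(1).normal(3.5, 0.1, size=int(fs * 30))  # 30 s placeholder
    hf = band_power(pupil, fs, 0.5, 2.0)                 # hypothetical "high" band
    vhf = band_power(pupil, fs, 2.0, 4.0)                # hypothetical "very high" band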


Subject(s)
Emotions, Pupil, Arousal/physiology, Autonomic Nervous System/physiology, Emotions/physiology, Galvanic Skin Response, Pupil/physiology
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 989-992, 2021 11.
Article in English | MEDLINE | ID: mdl-34891454

ABSTRACT

Many studies in the literature successfully use classification algorithms to classify emotions by means of physiological signals. However, there are still important limitations in the interpretability of the results, i.e., a lack of feature-specific characterizations for each emotional state. To this end, our study proposes a feature selection method that determines the most informative subset of features extracted from physiological signals while maintaining their original dimensional space. Results confirm that features from the galvanic skin response are relevant in separating the arousal dimension, especially fear from happiness and relaxation. Furthermore, the average and median values of the galvanic skin response signal, together with the ratio between SD1 and SD2 from the Poincaré analysis of the electrocardiogram signal, were found to be the most important features for discrimination along the valence dimension. A Linear Discriminant Analysis model using the first ten features sorted by importance, as defined by their ability to discriminate emotions with a bivariate approach, led to three-class test accuracies in discriminating happiness, relaxation, and fear of 72%, 67%, and 89%, respectively. Clinical relevance: This study demonstrates the ability of physiological signals to assess the emotional state of different subjects, providing a fast and efficient method to select the most important indices from the autonomic nervous system. The approach has high clinical relevance as it could be extended to assess other emotional states (e.g., stress and pain) characterizing pathological conditions such as post-traumatic stress disorder and depression.
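The SD1/SD2 descriptors come from the standard Poincaré analysis of RR intervals; a minimal sketch of their computation (the RR series below is a placeholder):

    # Sketch: SD1 and SD2 from the Poincaré plot of RR intervals (in s).
    # SD1 reflects short-term variability, SD2 long-term variability;
    # their ratio SD1/SD2 is the feature highlighted in the abstract.
    import numpy as np

    def poincare_sd1_sd2(rr: np.ndarray) -> tuple:
        diff = np.diff(rr)
        sd1 = np.sqrt(0.5 * np.var(diff))                     # spread perpendicular to identity line
        sd2 = np.sqrt(2.0 * np.var(rr) - 0.5 * np.var(diff))  # spread along identity line
        return sd1, sd2

    rr = np.random.default_rng(2).normal(0.8, 0.05, size=300)  # placeholder RR series
    sd1, sd2 = poincare_sd1_sd2(rr)
    ratio = sd1 / sd2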


Subject(s)
Arousal, Galvanic Skin Response, Algorithms, Emotions, Humans
5.
Am J Audiol ; 29(3S): 564-576, 2020 Sep 18.
Article in English | MEDLINE | ID: mdl-32946249

ABSTRACT

Purpose: The aim of this study was to develop and evaluate a novel, automated speech-in-noise test viable for widespread in situ and remote screening. Method: Vowel-consonant-vowel sounds in a multiple-choice consonant discrimination task were used, with recordings from a professional male native English speaker. A novel adaptive staircase procedure was developed, based on the estimated intelligibility of the stimuli rather than on theoretical binomial models. Test performance was assessed in a population of 26 young adults (YAs) with normal hearing and in 72 unscreened adults (UAs), including native and nonnative English listeners. Results: The proposed test provided accurate estimates of the speech recognition threshold (SRT) compared to a conventional adaptive procedure. Consistent outcomes were observed in YAs in test/retest and in controlled/uncontrolled conditions, and in UAs in native and nonnative listeners. The SRT increased with increasing age, hearing loss, and self-reported hearing handicap in UAs. Test duration was similar in YAs and UAs irrespective of age and hearing loss. The test-retest repeatability of SRTs was high (Pearson correlation coefficient = .84), and the pass/fail outcomes of the test were reliable in repeated measures (Cohen's κ = .8). The test was accurate in identifying ears with pure-tone thresholds > 25 dB HL (accuracy = 0.82). Conclusion: This study demonstrated the viability of the proposed test in subjects of varying language backgrounds in terms of accuracy, reliability, and short test time. Further research is needed to validate the test in a larger population across a wider range of languages and degrees of hearing loss, and to identify optimal classification criteria for screening purposes.
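The reported reliability statistics can be reproduced from paired test/retest data with standard tools; a sketch follows, in which the data and the pass/fail cutoff are placeholders and only the metric computations are meaningful.

    # Sketch: test-retest reliability of SRTs (Pearson r) and of pass/fail
    # outcomes (Cohen's kappa), the two statistics quoted in the abstract.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(3)
    srt_test = rng.normal(-8.0, 3.0, size=72)             # SRTs in dB SNR (placeholder)
    srt_retest = srt_test + rng.normal(0.0, 1.0, size=72)

    r, _ = pearsonr(srt_test, srt_retest)

    srt_cutoff = -5.0                                     # hypothetical pass/fail criterion
    pass_test = srt_test < srt_cutoff
    pass_retest = srt_retest < srt_cutoff
    kappa = cohen_kappa_score(pass_test, pass_retest)
    print(f"Pearson r = {r:.2f}, Cohen's kappa = {kappa:.2f}")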


Subject(s)
Hearing Loss/diagnosis, Noise, Speech Perception, Speech Reception Threshold Test/methods, Telemedicine/methods, Adult, Aged, Aged, 80 and over, Automation, Female, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Reproducibility of Results, Young Adult
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 6991-6994, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947447

ABSTRACT

This article introduces a novel automated staircase procedure for a speech-in-noise test based on Vowel-Consonant-Vowel (VCV) stimuli. Conventional staircase procedures rely on predetermined changes in stimulus presentation level under the assumption of a homogeneous stimulus set. The proposed staircase accounts for differences in intelligibility across the set by using level changes that vary with both the stimulus and the presentation level. A preliminary evaluation of the proposed staircase, compared against a conventional staircase, demonstrated test-retest reliability, agreement with the conventional method in terms of speech-in-noise threshold estimation, and shorter test duration. As such, the proposed approach shows promise for implementing rapid and reliable speech-in-noise tests in adults. Further research is needed to assess test performance in a larger sample of participants, including subjects with various mother tongues and subjects with hearing loss.
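To make the core idea concrete, here is a sketch of a 1-down/1-up staircase in which the SNR step is looked up per stimulus and scaled by the current level, rather than being fixed; the step table, level-dependence rule, and threshold estimate are illustrative assumptions, not the published procedure.

    # Sketch: adaptive staircase where the SNR change after each trial
    # depends on the presented VCV stimulus and the current level.
    import random

    # Hypothetical per-stimulus steps (dB): easier stimuli get larger steps.
    step_table = {"aba": 4.0, "ada": 2.0, "aga": 2.0, "apa": 3.0}

    def run_staircase(respond, n_trials: int = 30, snr_db: float = 10.0) -> float:
        reversals, last_direction = [], None
        for _ in range(n_trials):
            stim = random.choice(list(step_table))
            correct = respond(stim, snr_db)      # subject's response (callback)
            step = step_table[stim]
            if snr_db < 0:                       # crude stand-in for level dependence:
                step *= 0.5                      # finer steps at low SNR
            direction = -1 if correct else +1    # 1-down/1-up targets ~50% correct
            if last_direction is not None and direction != last_direction:
                reversals.append(snr_db)
            snr_db += direction * step
            last_direction = direction
        tail = reversals[-6:]                    # threshold: mean of last reversals
        return sum(tail) / len(tail) if tail else snr_db

For a quick smoke test, a deterministic responder such as run_staircase(lambda stim, snr: snr > -2.0) drives the track toward the responder's own threshold.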


Subject(s)
Noise, Speech, Adult, Hearing Loss, Humans, Reproducibility of Results, Speech Perception