Results 1-20 of 69
1.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676050

ABSTRACT

The use of drones has recently gained popularity in a diverse range of applications, such as aerial photography, agriculture, search and rescue operations, the entertainment industry, and more. However, misuse of drone technology can potentially lead to military threats, terrorist acts, and privacy and safety breaches. This emphasizes the need for effective and fast remote detection of potentially threatening drones. In this study, we propose a novel approach for automatic drone detection using both radio frequency communication signals and acoustic signals derived from UAV rotor sounds. In particular, we propose the use of classical and deep machine-learning techniques and the fusion of RF and acoustic features for efficient and accurate drone classification. Distinct types of ML-based classifiers have been examined, including CNN- and RNN-based networks and the classical SVM method. The proposed approach has been evaluated with both frequency and audio features using common drone datasets, demonstrating better accuracy than existing state-of-the-art methods, especially in low-SNR scenarios. The results presented in this paper show a classification accuracy of approximately 91% at an SNR of -10 dB using the LSTM network and fused features.
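The -10 dB evaluation condition mentioned above can be illustrated in a few lines. This is a minimal NumPy sketch, not the authors' code; the helper names (`mix_at_snr`, `snr_db`) and the tone/noise signals are our own stand-ins for how a fixed-SNR test mixture is typically constructed.

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db_target):
    """Scale `noise` so the signal-to-noise ratio of the mixture is `snr_db_target`."""
    p_sig = np.mean(signal ** 2)
    p_noi = np.mean(noise ** 2)
    scale = np.sqrt(p_sig / (p_noi * 10 ** (snr_db_target / 10.0)))
    return signal + scale * noise

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz stand-in "rotor" tone
noise = rng.standard_normal(16000)
noisy = mix_at_snr(clean, noise, -10.0)  # the hardest condition reported above
```

Scaling the noise rather than the signal keeps the target component at its original level across SNR conditions, which makes accuracy figures comparable between SNRs.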

2.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793909

ABSTRACT

Constipation is a common gastrointestinal disorder that impairs quality of life. Evaluating bowel motility via traditional methods, such as MRI and radiography, is expensive and inconvenient. Bowel sound (BS) analysis has been proposed as an alternative, with BS-time-domain acoustic features (BSTDAFs) being effective for evaluating bowel motility via several food and drink consumption tests. However, the effect of BSTDAFs before drink consumption on those after drink consumption is yet to be investigated. This study used BS-based stimulus-response plots (BSSRPs) to investigate this effect on 20 participants who underwent drinking tests. A strong negative correlation was observed between the number of BSs per minute before carbonated water consumption and the ratio of that before and after carbonated water consumption. However, a similar trend was not observed when the participants drank cold water. These findings suggest that when carbonated water is drunk, bowel motility before ingestion affects motor response to ingestion. This study provides a non-invasive BS-based approach for evaluating motor response to food and drink, offering a new research window for investigators in this field.
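The "strong negative correlation" reported above is a plain Pearson coefficient between a baseline count and a response ratio. A small sketch with made-up numbers (not the study's data) shows the computation:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Made-up values: bowel sounds per minute before ingestion, and each
# participant's before/after response ratio.
bs_before = [2, 4, 6, 8, 10, 12]
response_ratio = [3.0, 2.4, 1.9, 1.5, 1.1, 0.8]
r = pearson_r(bs_before, response_ratio)  # strongly negative, as in the abstract
```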


Subjects
Drinking, Gastrointestinal Motility, Humans, Drinking/physiology, Male, Gastrointestinal Motility/physiology, Female, Adult, Young Adult, Constipation/physiopathology, Healthy Volunteers, Carbonated Water
3.
Appl Psychophysiol Biofeedback ; 49(1): 71-83, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38165498

ABSTRACT

Biofeedback therapy is mainly based on the analysis of physiological features to improve an individual's affective state. There are insufficient objective indicators to assess symptom improvement after biofeedback. In addition to psychological and physiological features, speech features can precisely convey information about emotions. The use of speech features can improve the objectivity of psychiatric assessments. Therefore, biofeedback based on subjective symptom scales, objective speech, and physiological features to evaluate efficacy provides a new approach for early screening and treatment of emotional problems in college students. A 4-week, randomized, controlled, parallel biofeedback therapy study was conducted with college students with symptoms of anxiety or depression. Speech samples, physiological samples, and clinical symptoms were collected at baseline and at the end of treatment, and the extracted speech features and physiological features were used for between-group comparisons and correlation analyses between the biofeedback and wait-list groups. Based on the speech features with differences between the biofeedback intervention and wait-list groups, an artificial neural network was used to predict the therapeutic effect and response after biofeedback therapy. Through biofeedback therapy, improvements in depression (p = 0.001), anxiety (p = 0.001), insomnia (p = 0.013), and stress (p = 0.004) severity were observed in college-going students (n = 52). The speech and physiological features in the biofeedback group also changed significantly compared to the waitlist group (n = 52) and were related to the change in symptoms. The energy parameters and Mel-Frequency Cepstral Coefficients (MFCC) of speech features can predict whether biofeedback intervention effectively improves anxiety and insomnia symptoms and treatment response. 
The accuracy of the classification model built using the artificial neural network (ANN) for treatment response and non-response was approximately 60%. The results of this study provide valuable information about biofeedback in improving the mental health of college-going students. The study identified speech features, such as the energy parameters, and MFCC as more accurate and objective indicators for tracking biofeedback therapy response and predicting efficacy. Trial Registration ClinicalTrials.gov ChiCTR2100045542.


Subjects
Sleep Initiation and Maintenance Disorders, Speech, Humans, Biofeedback, Psychology/methods, Students/psychology, Biomarkers, Machine Learning
4.
BMC Med Inform Decis Mak ; 23(1): 45, 2023 03 03.
Article in English | MEDLINE | ID: mdl-36869377

ABSTRACT

OBJECTIVES: Automatic speech and language assessment methods (SLAMs) can help clinicians assess speech and language impairments associated with dementia in older adults. The basis of any automatic SLAMs is a machine learning (ML) classifier that is trained on participants' speech and language. However, language tasks, recording media, and modalities impact the performance of ML classifiers. Thus, this research has focused on evaluating the effects of the above-mentioned factors on the performance of ML classifiers that can be used for dementia assessment. METHODOLOGY: Our methodology includes the following steps: (1) Collecting speech and language datasets from patients and healthy controls; (2) Using feature engineering methods which include feature extraction methods to extract linguistic and acoustic features and feature selection methods to select most informative features; (3) Training different ML classifiers; and (4) Evaluating the performance of ML classifiers to investigate the impacts of language tasks, recording media, and modalities on dementia assessment. RESULTS: Our results show that (1) the ML classifiers trained with the picture description language task perform better than the classifiers trained with the story recall language task; (2) the data obtained from phone-based recordings improves the performance of ML classifiers compared to data obtained from web-based recordings; and (3) the ML classifiers trained with acoustic features perform better than the classifiers trained with linguistic features. CONCLUSION: This research demonstrates that we can improve the performance of automatic SLAMs as dementia assessment methods if we: (1) Use the picture description task to obtain participants' speech; (2) Collect participants' voices via phone-based recordings; and (3) Train ML classifiers using only acoustic features. 
Our proposed methodology will help future researchers to investigate the impacts of different factors on the performance of ML classifiers for assessing dementia.
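Step (2) of the methodology — selecting the most informative features — can be sketched with a simple univariate filter. This is one possible instance of a feature selection step, not the authors' exact method; the data below are synthetic.

```python
import numpy as np

def rank_features(X, y):
    """Rank feature columns by |Pearson correlation| with the label.

    A basic univariate filter: informative columns covary with the
    class label and therefore rank first.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(r))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)      # dementia vs. control labels
X = rng.standard_normal((200, 5))              # five candidate features
X[:, 3] += 2.0 * y                             # make feature 3 informative
order = rank_features(X, y)                    # feature 3 should rank first
```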


Subjects
Dementia, Language, Humans, Aged, Linguistics, Algorithms, Machine Learning
5.
Sensors (Basel) ; 23(19)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37836929

ABSTRACT

Birds play a vital role in the study of ecosystems and biodiversity. Accurate bird identification helps monitor biodiversity, understand the functions of ecosystems, and develop effective conservation strategies. However, previous bird sound recognition methods often relied on single features and overlooked the spatial information associated with these features, leading to low accuracy. Recognizing this gap, the present study proposed a bird sound recognition method that employs multiple convolutional neural networks and a transformer encoder to provide a reliable solution for identifying and classifying birds based on their unique sounds. We manually extracted various acoustic features as model inputs, and feature fusion was applied to obtain the final set of feature vectors. Feature fusion combines the deep features extracted by various networks, resulting in a more comprehensive feature set, thereby improving recognition accuracy. The multiple integrated acoustic features, such as mel frequency cepstral coefficients (MFCC), chroma features (Chroma) and Tonnetz features, were encoded by a transformer encoder. The transformer encoder effectively extracted the positional relationships between bird sound features, resulting in enhanced recognition accuracy. The experimental results demonstrated the exceptional performance of our method with an accuracy of 97.99%, a recall of 96.14%, an F1 score of 96.88% and a precision of 97.97% on the Birdsdata dataset. Furthermore, our method achieved an accuracy of 93.18%, a recall of 92.43%, an F1 score of 93.14% and a precision of 93.25% on the Cornell Bird Challenge 2020 (CBC) dataset.
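The feature fusion described above — combining MFCC, chroma and Tonnetz descriptors into one vector per frame — is, in its simplest form, concatenation along the feature axis. A NumPy sketch with random stand-in matrices (the paper's exact dimensions and pipeline may differ):

```python
import numpy as np

rng = np.random.default_rng(42)
frames = 100

# Stand-ins for per-frame acoustic features of one recording; in the paper
# these would be e.g. 13 MFCCs, 12 chroma bins and 6 Tonnetz dimensions.
mfcc = rng.standard_normal((frames, 13))
chroma = rng.standard_normal((frames, 12))
tonnetz = rng.standard_normal((frames, 6))

# Early fusion: concatenate along the feature axis, giving each frame one
# combined descriptor that a downstream transformer encoder can attend over.
fused = np.concatenate([mfcc, chroma, tonnetz], axis=1)
```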


Subjects
Ecosystem, Recognition, Psychology, Animals, Sound, Acoustics, Birds
6.
Sensors (Basel) ; 23(17)2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37688009

ABSTRACT

Although cochlear implants work well for people with hearing impairment in quiet conditions, it is well-known that they are not as effective in noisy environments. Noise reduction algorithms based on machine learning allied with appropriate speech features can be used to address this problem. The purpose of this study is to investigate the importance of acoustic features in such algorithms. Acoustic features are extracted from speech and noise mixtures and used in conjunction with the ideal binary mask to train a deep neural network to estimate masks for speech synthesis to produce enhanced speech. The intelligibility of this speech is objectively measured using metrics such as Short-time Objective Intelligibility (STOI), Hit Rate minus False Alarm Rate (HIT-FA) and Normalized Covariance Measure (NCM) for both simulated normal-hearing and hearing-impaired scenarios. A wide range of existing features is experimentally evaluated, including features that have not been traditionally applied in this application. The results demonstrate that frequency domain features perform best. In particular, Gammatone features performed best for normal hearing over a range of signal-to-noise ratios and noise types (STOI = 0.7826). Mel spectrogram features exhibited the best overall performance for hearing impairment (NCM = 0.7314). There is a stronger correlation between STOI and NCM than HIT-FA and NCM, suggesting that the former is a better predictor of intelligibility for hearing-impaired listeners. The results of this study may be useful in the design of adaptive intelligibility enhancement systems for cochlear implants based on both the noise level and the nature of the noise (stationary or non-stationary).
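The ideal binary mask used as the training target above has a standard definition: keep a time-frequency unit when its local speech-to-noise ratio exceeds a criterion. A toy NumPy sketch (2x2 magnitudes, not real STFT data):

```python
import numpy as np

def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
    """Ideal binary mask: 1 where the local speech-to-noise ratio of a
    time-frequency unit exceeds the local criterion `lc_db`, else 0."""
    local_snr = 10 * np.log10(np.abs(speech_tf) ** 2 /
                              (np.abs(noise_tf) ** 2 + 1e-12) + 1e-12)
    return (local_snr > lc_db).astype(float)

# Toy 2x2 magnitude spectrograms (time x frequency).
speech = np.array([[4.0, 0.1], [2.0, 0.2]])
noise = np.array([[1.0, 1.0], [1.0, 1.0]])
mask = ideal_binary_mask(speech, noise)  # keeps the speech-dominated units only
```

Applying the mask to the noisy mixture's spectrogram before resynthesis is what yields the enhanced speech whose intelligibility STOI, HIT-FA and NCM then score.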


Subjects
Cochlear Implantation, Cochlear Implants, Humans, Acoustics, Algorithms, Benchmarking
7.
Alzheimers Dement ; 19(10): 4675-4687, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37578167

ABSTRACT

Recent advancements in the artificial intelligence (AI) domain have revolutionized the early detection of cognitive impairments associated with dementia. This has motivated clinicians to use AI-powered dementia detection systems, particularly systems developed based on individuals' and patients' speech and language, for a quick and accurate identification of patients with dementia. This paper reviews articles about developing assessment tools using machine learning and deep learning algorithms trained by vocal and textual datasets.

8.
Behav Res Methods ; 55(3): 1441-1459, 2023 04.
Article in English | MEDLINE | ID: mdl-35641682

ABSTRACT

Emotional prosody is fully embedded in language and can be influenced by the linguistic properties of a specific language. Considering the limitations of existing Chinese auditory stimulus database studies, we developed and validated an emotional auditory stimuli database composed of Chinese pseudo-sentences, recorded by six professional actors in Mandarin Chinese. Emotional expressions included happiness, sadness, anger, fear, disgust, pleasant surprise, and neutrality. All emotional categories were vocalized into two types of sentence patterns, declarative and interrogative. In addition, all emotional pseudo-sentences, except for neutral, were vocalized at two levels of emotional intensity: normal and strong. Each recording was validated with 40 native Chinese listeners in terms of the recognition accuracy of the intended emotion portrayal; finally, 4361 pseudo-sentence stimuli were included in the database. Validation of the database using a forced-choice recognition paradigm revealed high rates of emotional recognition accuracy. The detailed acoustic attributes of vocalization were provided and connected to the emotion recognition rates. This corpus could be a valuable resource for researchers and clinicians to explore the behavioral and neural mechanisms underlying emotion processing of the general population and emotional disturbances in neurological, psychiatric, and developmental disorders. The Mandarin Chinese auditory emotion stimulus database is available at the Open Science Framework ( https://osf.io/sfbm6/?view_only=e22a521e2a7d44c6b3343e11b88f39e3 ).


Subjects
Emotions, Language, Humans, Anger, Happiness, China, Databases as Topic
9.
BMC Psychiatry ; 22(1): 830, 2022 12 27.
Article in English | MEDLINE | ID: mdl-36575442

ABSTRACT

BACKGROUND: Automated speech analysis has gained increasing attention to help diagnose depression. Most previous studies, however, focused on comparing speech in patients with major depressive disorder to that in healthy volunteers. An alternative may be to associate speech with depressive symptoms in a non-clinical sample, as this may help to find early and sensitive markers in those at risk of depression. METHODS: We included n = 118 healthy young adults (mean age: 23.5 ± 3.7 years; 77% women) and asked them to talk about a positive and a negative event in their life. Then, we assessed the level of depressive symptoms with a self-report questionnaire, with scores ranging from 0 to 60. We transcribed speech data and extracted acoustic as well as linguistic features. Then, we tested whether individuals below or above the cut-off of clinically relevant depressive symptoms differed in speech features. Next, we predicted whether someone would be below or above that cut-off as well as the individual scores on the depression questionnaire. Since depression is associated with cognitive slowing or attentional deficits, we finally correlated depression scores with performance in the Trail Making Test. RESULTS: In our sample, n = 93 individuals scored below and n = 25 scored above the cut-off for clinically relevant depressive symptoms. Most speech features did not differ significantly between the two groups, but individuals above the cut-off spoke more than those below it in both the positive and the negative story. In addition, higher depression scores in that group were associated with slower completion time of the Trail Making Test. We were able to predict with 93% accuracy who would be below or above the cut-off. In addition, we were able to predict the individual depression scores with low mean absolute error (3.90), with best performance achieved by a support vector machine.
CONCLUSIONS: Our results indicate that even in a sample without a clinical diagnosis of depression, changes in speech relate to higher depression scores. This should be investigated in more detail in the future. In a longitudinal study, it may be tested whether speech features found in our study represent early and sensitive markers for subsequent depression in individuals at risk.
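The mean absolute error metric used to report score prediction above is a one-liner; a sketch with hypothetical questionnaire scores (not the study's data):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Mean absolute error between true and predicted questionnaire scores."""
    return float(np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))))

# Hypothetical depression scores (0-60 scale) and model predictions.
mae = mean_absolute_error([10, 4, 22, 7], [12, 5, 18, 9])
```

On a 0-60 scale, an MAE of 3.90 as reported above means predictions land within about four points of the self-reported score on average.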


Subjects
Depressive Disorder, Major, Young Adult, Humans, Female, Adult, Male, Depressive Disorder, Major/diagnosis, Depressive Disorder, Major/psychology, Depression/diagnosis, Longitudinal Studies, Speech, Surveys and Questionnaires
10.
Sensors (Basel) ; 22(13)2022 Jun 23.
Article in English | MEDLINE | ID: mdl-35808238

ABSTRACT

In recent years, the use of Artificial Intelligence for emotion recognition has attracted much attention. The industrial applicability of emotion recognition is quite broad and has good development potential. This research applies voice emotion recognition technology to Chinese speech. Its main purpose is to move increasingly popular smart-home voice assistants and AI service robots from touch-based interfaces to voice operation. This research proposed a specifically designed Deep Neural Network (DNN) model to develop a Chinese speech emotion recognition system. In this research, 29 acoustic characteristics from acoustic theory are used as the training attributes of the proposed model. This research also proposes a variety of audio adjustment methods to enlarge datasets and enhance training accuracy, including waveform adjustment, pitch adjustment, and pre-emphasis. This study achieved an average emotion recognition accuracy of 88.9% on the CASIA Chinese sentiment corpus. The results show that the deep learning model and audio adjustment methods proposed in this study can effectively identify the emotions of Chinese short sentences and can be applied to Chinese voice assistants or integrated with other dialogue applications.
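Of the audio adjustment methods listed above, pre-emphasis has a standard closed form: y[n] = x[n] - alpha * x[n-1], which boosts high frequencies before feature extraction. A minimal NumPy sketch (the paper's alpha value is an assumption; 0.97 is the common default):

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """Pre-emphasis filter y[n] = x[n] - alpha * x[n-1]
    (first sample kept as-is), boosting high frequencies."""
    x = np.asarray(x, float)
    return np.concatenate(([x[0]], x[1:] - alpha * x[:-1]))

y = pre_emphasis([1.0, 1.0, 1.0, 1.0])  # a constant (DC) input is almost cancelled
```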


Subjects
Artificial Intelligence, Speech, Acoustics, China, Emotions, Neural Networks, Computer
11.
Sensors (Basel) ; 22(13)2022 Jul 04.
Article in English | MEDLINE | ID: mdl-35808528

ABSTRACT

Humans are able to detect an instantaneous change in correlation, demonstrating an ability to temporally process extremely rapid changes in interaural configurations. This temporal dynamic is correlated with human listeners' ability to store acoustic features in a transient auditory manner. The present study investigated whether the ability of transient auditory storage of acoustic features was affected by the interaural delay, which was assessed by measuring the sensitivity for detecting the instantaneous change in correlation for both wideband and narrowband correlated noise with various interaural delays. Furthermore, whether an instantaneous change in correlation between correlated interaural narrowband or wideband noise was detectable when introducing the longest interaural delay was investigated. Then, an auditory computational description model was applied to explore the relationship between wideband and narrowband simulation noise with various center frequencies in the auditory processes of lower-level transient memory of acoustic features. The computing results indicate that low-frequency information dominated perception and was more distinguishable in length than the high-frequency components, and the longest interaural delay for narrowband noise signals was highly correlated with that for wideband noise signals in the dynamic process of auditory perception.


Subjects
Auditory Perception, Noise, Acoustic Stimulation, Acoustics, Humans
12.
Biomed Eng Online ; 20(1): 114, 2021 Nov 21.
Article in English | MEDLINE | ID: mdl-34802448

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic voice condition analysis systems to detect Parkinson's disease (PD) are generally based on speech data recorded under acoustically controlled conditions and professional supervision. The performance of these approaches in a free-living scenario is unknown. The aim of this research is to investigate the impact of uncontrolled conditions (realistic acoustic environment and lack of supervision) on the performance of automatic PD detection systems based on speech. METHODS: A mobile-assisted voice condition analysis system is proposed to aid in the detection of PD using speech. The system is based on a server-client architecture. In the server, feature extraction and machine learning algorithms are designed and implemented to discriminate subjects with PD from healthy ones. The Android app allows patients to submit phonations and physicians to check the complete record of every patient. Six different machine learning classifiers are applied to compare their performance on two different speech databases. One of them is an in-house database (UEX database), collected under professional supervision by using the same Android-based smartphone in the same room, whereas the other one is an age, sex and health-status balanced subset of mPower study for PD, which provides real-world data. By applying identical methodology, single-database experiments have been performed on each database, and also cross-database tests. Cross-validation has been applied to assess generalization performance and hypothesis tests have been used to report statistically significant differences. RESULTS: In the single-database experiments, a best accuracy rate of 0.92 (AUC = 0.98) has been obtained on UEX database, while a considerably lower best accuracy rate of 0.71 (AUC = 0.76) has been achieved using the mPower-based database. The cross-database tests provided very degraded accuracy metrics. 
CONCLUSION: The results clearly show the potential of the proposed system as an aid for general practitioners to conduct triage or an additional tool for neurologists to perform diagnosis. However, due to the performance degradation observed using data from the mPower study, semi-controlled conditions are encouraged, i.e., voices recorded at home by the patients themselves following a strict recording protocol, with the information about patients controlled by the medical doctor in charge.
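The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen PD recording scores higher than a randomly chosen healthy one. A small sketch with hypothetical classifier scores (not the study's data):

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """ROC AUC as the probability that a random positive outscores a
    random negative (ties count one half)."""
    sp = np.asarray(pos_scores, float)[:, None]
    sn = np.asarray(neg_scores, float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return float(wins / (sp.size * sn.size))

# Hypothetical per-recording scores: PD subjects vs. healthy controls.
a = auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```

This pairwise formulation makes clear why AUC, unlike accuracy, is insensitive to the decision threshold: only the ordering of scores matters.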


Subjects
Parkinson Disease, Algorithms, Humans, Machine Learning, Parkinson Disease/diagnosis, Smartphone, Speech
13.
Sensors (Basel) ; 21(19)2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34640876

ABSTRACT

Rheumatic heart disease (RHD) is one of the most common causes of cardiovascular complications in developing countries. It is a heart valve disease that typically affects children. Impaired heart valves stop functioning properly, resulting in a turbulent blood flow within the heart known as a murmur. This murmur can be detected by cardiac auscultation. However, the specificity and sensitivity of manual auscultation were reported to be low. The other alternative is echocardiography, which is costly and requires a highly qualified physician. Given the disease's current high prevalence rate (the latest reported rate in the study area (Ethiopia) was 5.65%), there is a pressing need for early detection of the disease through mass screening programs. This paper proposes an automated RHD screening approach using machine learning that can be used by non-medically trained persons outside of a clinical setting. Heart sound data was collected from 124 persons with RHD (PwRHD) and 46 healthy controls (HC) in Ethiopia with an additional 81 HC records from an open-access dataset. Thirty-one distinct features were extracted to correctly represent RHD. A support vector machine (SVM) classifier was evaluated using two nested cross-validation approaches to quantitatively assess the generalization of the system to previously unseen subjects. For regular nested 10-fold cross-validation, an f1-score of 96.0 ± 0.9%, recall 95.8 ± 1.5%, precision 96.2 ± 0.6% and a specificity of 96.0 ± 0.6% were achieved. In the imbalanced nested cross-validation at a prevalence rate of 5%, it achieved an f1-score of 72.2 ± 0.8%, recall 92.3 ± 0.4%, precision 59.2 ± 3.6%, and a specificity of 94.8 ± 0.6%. In screening tasks where the prevalence of the disease is small, recall is more important than precision. The findings are encouraging, and the proposed screening tool can be inexpensive, easy to deploy, and has an excellent detection rate. 
As a result, it has the potential for mass screening and early detection of RHD in developing countries.
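The drop from 96% to 59% precision at 5% prevalence in the abstract above is a direct consequence of class imbalance, which a few lines of arithmetic make concrete. The counts below are a toy cohort chosen to roughly mirror the reported sensitivity and specificity, not the study's data:

```python
def screening_metrics(tp, fp, tn, fn):
    """Recall, precision, specificity and F1 from confusion-matrix counts."""
    recall = tp / (tp + fn)        # sensitivity: fraction of diseased cases caught
    precision = tp / (tp + fp)     # fraction of positive calls that are correct
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, specificity, f1

# Toy cohort at ~5% prevalence: 50 diseased, 950 healthy.
recall, precision, specificity, f1 = screening_metrics(tp=46, fp=32, tn=918, fn=4)
```

At low prevalence even a highly specific test produces many false positives per true positive, so precision falls while recall holds — which is why the abstract stresses recall for screening tasks.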


Subjects
Rheumatic Heart Disease, Child, Cross-Sectional Studies, Echocardiography, Heart Auscultation, Humans, Mass Screening, Rheumatic Heart Disease/diagnosis, Rheumatic Heart Disease/epidemiology
14.
Eur Arch Otorhinolaryngol ; 276(6): 1633-1641, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30937559

ABSTRACT

PURPOSE: The present study aimed to investigate the discrimination ability for acoustic cues in individuals with auditory neuropathy spectrum disorder (ANSD) using both behavioral and neural measures, and to compare the results with those of normal-hearing individuals. METHODS: Four naturally produced syllables /ba/, /da/, /ma/ and /pa/ were used to study discrimination skills. They were combined in pairs such that the two syllables differed in acoustic features, that is, place (/ba/-/da/), manner (/ba/-/ma/) and voicing (/ba/-/pa/) cues. Thirty individuals with ANSD and 30 individuals with normal hearing sensitivity participated. Syllable discrimination skill was assessed using behavioral (reaction time, sensitivity and d-prime) and neural (P300) measures. RESULTS: There was prolongation in latency and reduction in amplitude of P300 in individuals with ANSD compared to individuals with normal hearing sensitivity. Individuals with ANSD showed the best discrimination for stimulus pairs differing in manner, followed by place, with voicing information the least well perceived. CONCLUSION: The discrimination ability of individuals with ANSD is affected, as evident on behavioral and neural measures, and it varies with the acoustic features of speech.
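The d-prime sensitivity measure used above is computed from hit and false-alarm rates via the inverse normal CDF. A stdlib-only sketch with hypothetical rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical discrimination rates for one syllable pair.
dp = d_prime(0.84, 0.16)  # roughly 2.0: good discrimination
```

Larger d' means the hit-rate and false-alarm distributions are further apart; chance performance (hit rate equal to false-alarm rate) gives d' = 0.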


Subjects
Discrimination, Psychological/physiology, Hearing Loss, Central/physiopathology, Speech Perception/physiology, Acoustics, Adolescent, Adult, Case-Control Studies, Cues, Female, Hearing Loss, Central/psychology, Hearing Tests, Humans, Male, Middle Aged, Reaction Time, Young Adult
15.
Biom J ; 61(3): 503-513, 2019 05.
Article in English | MEDLINE | ID: mdl-30408226

ABSTRACT

Vocal fold nodules are recognized as an occupational disease for all groups of workers whose activities require sustained and continuous use of the voice. Computer-aided systems based on features extracted from voice recordings have been considered as potential noninvasive and low-cost tools to diagnose some voice-related diseases. A Bayesian decision analysis approach has been proposed to classify university lecturers into three risk levels: low, medium, and high, based on the information provided by acoustic features extracted from healthy controls and people suffering from vocal fold nodules. The proposed risk groups are associated with different treatments. The approach is based on the calculation of posterior probabilities of developing vocal fold nodules and considers utility functions that include the financial cost and the probability of recovery for the corresponding treatment. Maximization of the expected utilities is considered. Using this approach, the risk of having vocal fold nodules is identified for each university lecturer, so that each can be assigned the appropriate treatment. The approach has been applied to university lecturers under the Disease Prevention Program of the University of Extremadura. However, it can also be applied to other voice professionals (singers, speakers, coaches, actors…).
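The decision-analytic core described above — posterior probabilities by Bayes' rule, then the treatment maximizing expected utility — fits in a short sketch. All numbers below are made up for illustration; the paper's priors, likelihood model and utility functions are not reproduced here.

```python
def posterior(prior, likelihoods):
    """Posterior class probabilities by Bayes' rule over discrete risk classes."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def best_treatment(post, utility):
    """Index of the treatment maximizing expected utility;
    utility[t][c] is the utility of treatment t when the true class is c."""
    expected = [sum(u * p for u, p in zip(row, post)) for row in utility]
    return max(range(len(expected)), key=expected.__getitem__)

# Made-up numbers: risk classes (low, medium, high) for vocal fold nodules.
prior = [0.7, 0.2, 0.1]                 # base rates among lecturers
likelihoods = [0.1, 0.3, 0.9]           # P(observed acoustic features | class)
post = posterior(prior, likelihoods)
choice = best_treatment(post, [[1.0, 0.4, 0.0],   # no treatment
                               [0.6, 0.8, 0.3],   # voice therapy
                               [0.2, 0.5, 0.9]])  # intensive treatment
```

Even with a low prior on the high-risk class, strongly diagnostic acoustic features shift the posterior enough to change which treatment has the highest expected utility.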


Subjects
Acoustics, Biometry/methods, Voice Disorders/diagnosis, Bayes Theorem, Female, Humans, Male, Middle Aged, Models, Statistical, Monte Carlo Method, Risk Assessment, Uncertainty
16.
J Psycholinguist Res ; 48(4): 859-876, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30927183

ABSTRACT

Jungian active imagination is a well-known and valuable method in analytical psychology and psychotherapy. The present study assessed, for the first time, the psychological and psycho-acoustical (voice and speech quality) effects of an active imagination experiment in an outdoor forest environment. To analyse voice and speech quality, participants' verbal expressions were recorded before and after the experiment. Psychological observations were based on thirteen features and were rated on the bipolar Comparison Mean Opinion Score scale. The results showed noticeably positive participant experiences after the experiment, connected with themselves, others, their behaviour, other verbal and non-verbal expressions, and their relations towards nature. Voice and speech quality analysis, based on a speech signal processing approach, used fourteen acoustic features. The results showed statistically significantly better voice and speech quality of the participants at the end of the experiment (p < 0.05). Applying the averaging model from Information Integration Theory, we obtained integral evaluative ratings in active imagination for the psychological observations (EAI) and the voice and speech quality observations (EVQ) for each participant. A Pearson's correlation coefficient of R = 0.6385 (p < 0.05) showed a significant correlation between these two ratings. Overall, the results support the starting hypothesis that strong voice and speech correlates of psychological observations exist in the Jungian active imagination experiment.


Subjects
Imagination, Speech Acoustics, Speech/physiology, Voice/physiology, Adult, Aged, Female, Humans, Male, Middle Aged, Psychological Theory
18.
Proc Biol Sci ; 283(1835)2016 Jul 27.
Article in English | MEDLINE | ID: mdl-27466453

ABSTRACT

The expression of bird song is expected to signal male quality to females. 'Quality' is determined by genetic and environmental factors, but, surprisingly, there is very limited evidence if and how genetic aspects of male quality are reflected in song. Here, we manipulated the genetic make-up of canaries (Serinus canaria) via inbreeding, and studied its effects upon song output, complexity, phonetics and, for the first time, song learning. To this end, we created weight-matched inbred and outbred pairs of male fledglings, which were subsequently exposed to the same tutor male during song learning. Inbreeding strongly affected syllable phonetics, but there were little or no effects on other song features. Nonetheless, females discriminated among inbred and outbred males, as they produced heavier clutches when mated with an outbred male. Our study highlights the importance of song phonetics, which has hitherto often been overlooked.


Subjects
Canaries/genetics, Inbreeding, Vocalization, Animal, Animals, Female, Learning, Male
19.
Brain Cogn ; 101: 1-11, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26544602

ABSTRACT

It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01).


Subjects
Auditory Perception/physiology, Brain/physiology, Emotions/physiology, Music/psychology, Acoustic Stimulation, Adolescent, Adult, Aged, Brain Mapping, Electroencephalography, Female, Humans, Male, Middle Aged, Young Adult
20.
Int J Audiol ; 54(11): 852-64, 2015.
Article in English | MEDLINE | ID: mdl-26203722

ABSTRACT

OBJECTIVE: To investigate speech stimuli and background-noise-dependent changes in cortical auditory-evoked potentials (CAEPs) in unaided and aided conditions, and determine amplification effects on CAEPs. DESIGN: CAEPs to naturally produced syllables in quiet and in multi-talker babble were recorded, with and without a hearing aid in the right ear. At least 300 artifact-free trials for each participant were required to measure latencies and amplitudes of CAEPs. Acoustic characteristics of the hearing-aid-transduced stimuli were measured using in-the-canal probe microphone measurements to determine unaided versus aided SNR and to compare stimulus acoustic characteristics to CAEP findings. STUDY SAMPLE: Ten participants with normal hearing, aged 19 to 35 years. RESULTS: CAEP latencies and amplitudes showed significant effects of speech contrast, background noise, and amplification. N1 and P2 components varied differently across conditions. In general, cortical processing in noise was influenced by SNR and the spectrum of the speech stimuli. Hearing-aid-induced spectral and temporal changes to the speech stimuli affected P1-N1-P2 components. Amplification produced complex effects on latencies and amplitudes across speech stimuli and CAEP components, and for quiet versus noise conditions. CONCLUSION: CAEP components reflect spectral and temporal characteristics of speech stimuli and acoustic changes induced by background noise and amplification.


Subjects
Cerebral Cortex/physiology, Evoked Potentials, Auditory, Noise, Speech Perception/physiology, Adult, Female, Healthy Volunteers, Hearing Aids, Humans, Male, Speech Acoustics, Young Adult