Results 1 - 20 of 14,807
1.
J Acoust Soc Am ; 156(2): 865-878, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39120868

ABSTRACT

This study aims to detect bioacoustic signals in the underwater soundscape, specifically those produced by snapping shrimp, using adaptive iterative transfer learning. The proposed network is initially trained with pre-classified snapping shrimp sounds and Gaussian noise, then applied to classify and extract snapping-free ambient noise from field data. This separated ambient noise is subsequently used for transfer learning, and the process is iterated to distinguish more effectively between the characteristics of ambient noise and snapping shrimp sounds, resulting in improved classification. Through iterative transfer learning, significant improvements in precision and recall were observed. Application to field data confirmed that the trained network could detect signals that were difficult to identify using existing threshold-based classification methods. Furthermore, the false-detection rate decreased and the detection probability improved with each stage. This research demonstrates that incorporating the noise characteristics of field data into the trained network via iterative transfer learning can generate more realistic training data, allowing the network to detect signals that are challenging to identify with existing threshold-based methods.
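The iterative scheme described above can be sketched as follows. This is a minimal illustration only: the energy-threshold "classifier," the synthetic clips, and all numeric settings are stand-ins, not the authors' network or data.

```python
import random

random.seed(0)

def make_snap():
    # synthetic snapping-shrimp-like clip: broadband, high energy
    return [random.gauss(0, 5) for _ in range(64)]

def make_noise(std=1.0):
    # synthetic ambient-noise clip
    return [random.gauss(0, std) for _ in range(64)]

def energy(clip):
    # mean-square amplitude, a crude stand-in for a learned score
    return sum(x * x for x in clip) / len(clip)

def fit_threshold(snaps, noises):
    # stand-in "training": threshold halfway between mean class energies
    return (sum(map(energy, snaps)) / len(snaps) +
            sum(map(energy, noises)) / len(noises)) / 2

# Stage 0: train on labeled snaps plus Gaussian noise
snaps  = [make_snap() for _ in range(50)]
noises = [make_noise(1.0) for _ in range(50)]
# "Field" data with a different (louder) noise floor than the training noise
field  = [make_noise(1.5) for _ in range(80)] + [make_snap() for _ in range(20)]

threshold = fit_threshold(snaps, noises)
for stage in range(3):  # iterative transfer learning
    # classify field data; clips below threshold are "snapping-free" noise
    separated = [c for c in field if energy(c) < threshold]
    # retrain with the separated field noise replacing the synthetic noise
    threshold = fit_threshold(snaps, separated)

detected = sum(energy(c) >= threshold for c in field)
```

The point of the iteration is visible even in this toy: after retraining, the threshold reflects the field data's actual noise floor rather than the synthetic Gaussian noise used initially.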


Subject(s)
Acoustics , Animals , Signal Processing, Computer-Assisted , Noise , Sound , Sound Spectrography/methods , Machine Learning , Neural Networks, Computer
2.
Curr Biol ; 34(15): R736-R738, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39106832

ABSTRACT

When recreating outdoors in remote landscapes, people are encouraged to "leave no trace". However, the mere presence of humans on a trail can elicit changes in animal behavior, potentially compromising the effectiveness of protected areas for wildlife conservation.


Subject(s)
Conservation of Natural Resources , Recreation , Animals , Animals, Wild/physiology , Humans , Behavior, Animal/physiology , Sound
3.
PLoS One ; 19(8): e0308481, 2024.
Article in English | MEDLINE | ID: mdl-39121092

ABSTRACT

With increasing demand for building acoustic performance, accurately evaluating the acoustic performance of building walls has become an important research topic. However, existing research has mostly focused on general building materials such as concrete, steel, and glass. For wooden-structure walls, owing to the sound absorption of the materials themselves and the complexity of structural design, analysis of acoustic performance remains relatively weak, and quantitative descriptions of spectral characteristics and acoustic impedance are lacking. To analyze the acoustic performance of wooden-structure building walls, Building Information Modeling (BIM) and the impedance tube method were integrated to construct a wall performance testing system, with testing functions designed for both sound absorption and sound insulation. In the error test, the error range between the experimental and control groups was [0.01, 0.18], indicating high reliability of the experimental results. In measurements of sound insulation for different specimens at different frequencies, at 1600 Hz the sound insulation of the control and experimental groups was 65.30 dB and 70.14 dB, respectively, demonstrating the effectiveness of the design method. These results show the practicality of integrating BIM technology and the impedance tube method for acoustic performance analysis of wooden-structure building walls. This study provides technical support for reducing indoor noise in wooden buildings and improving the comfort of living environments.


Subject(s)
Acoustics , Construction Materials , Wood , Construction Materials/analysis , Wood/chemistry , Sound , Electric Impedance
4.
Mar Pollut Bull ; 206: 116792, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39106628

ABSTRACT

Anthropogenic noise has been identified as one of the most harmful forms of global pollutants impacting both terrestrial and aquatic ecosystems. As global populations continue to increase, coastlines are seeing substantial increases in the level of urbanisation. Although measures are in place to minimise stress on fauna, they rarely consider the impact of anthropogenic noise. In Australia, New South Wales (NSW) estuaries have seen extensive increases in urbanisation in recent years. Yet, there remains minimal baseline data on their soundscapes to determine if noise pollution is a threat. This research provides a first assessment of baseline sounds across a temporal and seasonal scale. Recreational boating was the primary soundscape contributor in estuaries, and estuaries with higher urbanisation levels contained higher sound levels. This research provides useful information for managers of NSW estuaries and is of global relevance in an era of increasing generation of anthropogenic noise in estuarine and coastal systems.


Subject(s)
Environmental Monitoring , Estuaries , Noise , Ships , Urbanization , New South Wales , Ecosystem , Sound
5.
IEEE J Transl Eng Health Med ; 12: 550-557, 2024.
Article in English | MEDLINE | ID: mdl-39155923

ABSTRACT

The objective of this study was to develop a sound recognition-based cardiopulmonary resuscitation (CPR) training system that is accessible, cost-effective, easy to maintain, and provides accurate CPR feedback. Beep-CPR, a novel device with accordion squeakers that emit high-pitched sounds during compression, was developed. The sounds emitted by Beep-CPR were recorded using a smartphone, segmented into 2-second audio fragments, and then transformed into spectrograms. A total of 6,065 spectrograms were generated from approximately 40 minutes of audio data, which were then randomly split into training, validation, and test datasets. Each spectrogram was matched with the depth, rate, and release velocity of the compression measured over the same time interval by the ZOLL X Series monitor/defibrillator. Deep learning models utilizing spectrograms as input were trained using transfer learning based on EfficientNet to predict the depth (Depth model), rate (Rate model), and release velocity (Recoil model) of compressions. Results: The mean absolute error (MAE) of the Depth model was 0.30 cm (95% confidence interval [CI]: 0.27-0.33). The MAE of the Rate model was 3.6/min (95% CI: 3.2-3.9). For the Recoil model, the MAE was 2.3 cm/s (95% CI: 2.1-2.5). External validation of the models demonstrated acceptable performance across multiple conditions, including a newly manufactured device, a fatigued device, and an environment with altered spatial dimensions. We have developed a novel sound recognition-based CPR training system that accurately measures compression quality during training. Significance: Beep-CPR is a cost-effective and easy-to-maintain solution that can improve the efficacy of CPR training by facilitating decentralized at-home training with performance feedback.
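The 2-second segmentation step described above can be sketched as simple fixed-length framing. The paper does not state how partial tail fragments were handled; dropping them, as below, is an assumption.

```python
def segment_audio(samples, sample_rate, seconds=2.0):
    """Split a mono sample sequence into fixed-length fragments.
    Incomplete tail fragments are dropped (an assumption; the
    source does not specify the behavior)."""
    step = int(sample_rate * seconds)
    return [samples[i:i + step]
            for i in range(0, len(samples) - step + 1, step)]

# 5 s of (silent) audio at 8 kHz -> two full 2 s fragments, tail discarded
fragments = segment_audio([0.0] * 40000, 8000)
```

Each fragment would then be converted to a spectrogram before being fed to the EfficientNet-based models.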


Subject(s)
Cardiopulmonary Resuscitation , Cardiopulmonary Resuscitation/education , Cardiopulmonary Resuscitation/instrumentation , Humans , Sound , Sound Spectrography , Signal Processing, Computer-Assisted/instrumentation , Deep Learning , Smartphone , Equipment Design
6.
Cogn Sci ; 48(8): e13486, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39155515

ABSTRACT

Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using the two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations were found to facilitate closing hand gestures, when performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed concerning the meaning of intonational cues. They are suggested to be based on cross-modally representing conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.


Subject(s)
Reaction Time , Humans , Male , Female , Young Adult , Adult , Pitch Perception/physiology , Space Perception/physiology , Gestures , Sound , Acoustic Stimulation , Cues
7.
PLoS One ; 19(8): e0308385, 2024.
Article in English | MEDLINE | ID: mdl-39150934

ABSTRACT

End-stage kidney disease (ESKD) presents a significant public health challenge, with hemodialysis (HD) remaining one of the most prevalent kidney replacement therapies. Ensuring the longevity and functionality of arteriovenous accesses is challenging for HD patients. Blood flow sound, which contains valuable information, has often been neglected in the past. However, machine learning offers a new approach, leveraging data non-invasively and learning autonomously to match the experience of healthcare professionals. This study aimed to devise a model for detecting arteriovenous graft (AVG) stenosis. A smartphone stethoscope was used to record the sound of AVG blood flow at the arterial and venous sides, with each recording lasting one minute. The sound recordings were transformed into mel spectrograms, and a 14-layer convolutional neural network (CNN) was employed to detect stenosis. The CNN comprised six convolution blocks with 3x3 kernel mapping, batch normalization, and rectified linear unit activation functions. We applied contrastive learning to pre-train an audio neural network with unlabeled data through self-supervised learning, followed by fine-tuning. In total, 27,406 dialysis-session blood flow sounds were documented, including 180 stenosis blood flow sounds. Our proposed framework demonstrated a significant improvement (p<0.05) over training from scratch and over a popular pre-trained audio neural networks (PANNs) model, achieving an accuracy of 0.9279, precision of 0.8462, and recall of 0.8077, compared with previous values of 0.8649, 0.7391, and 0.6538. This study illustrates how contrastive learning with unlabeled blood flow sound data can enhance convolutional neural networks for detecting AVG stenosis in HD patients.
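Mel spectrograms, as used above, warp the frequency axis onto the mel scale before pooling spectral energy into filter banks. A minimal sketch of the standard HTK mel mapping (the study does not state which mel variant was used, so this formula is an assumption):

```python
import math

def hz_to_mel(f_hz):
    # standard HTK mel-scale mapping used when building mel filter banks
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # inverse mapping, used to place filter-bank edges back in hertz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

m1000 = hz_to_mel(1000.0)  # the scale is anchored so 1000 Hz is ~1000 mel
```

Spacing filter-bank center frequencies evenly in mel (rather than in hertz) gives finer resolution at low frequencies, which is why mel spectrograms are a common front end for audio CNNs.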


Subject(s)
Neural Networks, Computer , Renal Dialysis , Humans , Male , Female , Constriction, Pathologic , Middle Aged , Kidney Failure, Chronic/therapy , Kidney Failure, Chronic/physiopathology , Aged , Arteriovenous Shunt, Surgical , Machine Learning , Sound , Graft Occlusion, Vascular/physiopathology , Graft Occlusion, Vascular/etiology
8.
PLoS One ; 19(8): e0306812, 2024.
Article in English | MEDLINE | ID: mdl-39146270

ABSTRACT

This investigation into the effects of indoor soundscapes on learning efficiency during home-based online classes amidst the COVID-19 pandemic leveraged a questionnaire survey to gather insights from participants across 32 provinces in China. The survey findings reveal a notable preference among respondents for sounds emanating from nature and culture, alongside an acceptance of sounds inherent to lectures. A significant majority showed a preference for a tranquil soundscape or one enriched with natural and cultural elements, emphasizing that such an environment, coupled with the ability for active communication, is conducive to enhancing learning efficiency. Through semantic differential analysis, the study identified four pivotal factors that influence subjective evaluations of indoor soundscapes: the nature of online classes, relaxation, physical attributes of the soundscape, and aspects related to personal study. Additionally, the analysis delved into gender and regional differences in soundscape perceptions and their impact on learning. A key finding is that complex soundscapes negatively affect the learning process, with 45.7% of respondents reporting a perceived decrease in learning efficiency attributable to the indoor soundscape experienced during home-based online classes. Consequently, this study suggests that optimizing learning efficiency requires creating simpler, lighter, quieter, and more relaxing soundscapes. These insights hold both theoretical and practical value, offering a foundational basis for further research into indoor soundscapes and informing the development and management of online classes. The findings underscore the importance of considering the auditory environment as a critical component of effective online education, highlighting the need for strategies that mitigate auditory distractions and foster an acoustically conducive learning space.


Subject(s)
COVID-19 , Education, Distance , Learning , Humans , Male , Female , COVID-19/epidemiology , COVID-19/prevention & control , Adult , China , Education, Distance/methods , Surveys and Questionnaires , Sound , SARS-CoV-2 , Young Adult , Semantics , Pandemics , Middle Aged
9.
Rev Assoc Med Bras (1992) ; 70(7): e20231599, 2024.
Article in English | MEDLINE | ID: mdl-39166658

ABSTRACT

OBJECTIVE: The objective of this study was to determine the effects of listening to nature sounds alone, and of virtual reality plus nature sounds, on pain and anxiety in hysterosalpingography. METHODS: This three-arm parallel randomized controlled trial included 135 women (45 per group) who underwent hysterosalpingography in Turkey. The virtual reality+nature sounds group viewed a nature video with virtual reality glasses and listened to nature sounds during hysterosalpingography, whereas the nature sounds group only listened to nature sounds. The control group received only routine care. RESULTS: During hysterosalpingography, women in the virtual reality+nature sounds group experienced less pain than those in the control group (p=0.009). After hysterosalpingography, pain levels were lower in both the virtual reality+nature sounds group and the nature sounds group than in the control group (both p<0.001), anxiety levels were lower in the virtual reality+nature sounds group than in the nature sounds and control groups (p=0.018 and p<0.001, respectively), and anxiety levels were lower in the nature sounds group than in the control group (p=0.013). CONCLUSION: Virtual reality with nature content plus nature sounds, and nature sounds alone, are effective in reducing pain and anxiety related to hysterosalpingography procedures in women. Compared with nature sounds alone, virtual reality plus nature sounds further reduced hysterosalpingography-related pain and anxiety.


Subject(s)
Anxiety , Hysterosalpingography , Virtual Reality , Humans , Female , Hysterosalpingography/methods , Hysterosalpingography/adverse effects , Adult , Anxiety/prevention & control , Anxiety/psychology , Sound , Pain Measurement , Pain/psychology , Pain/prevention & control , Young Adult , Turkey
10.
Nat Commun ; 15(1): 6806, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160146

ABSTRACT

Bimodal neuromodulation is emerging as a nonsurgical treatment for tinnitus. Bimodal treatment combining sound therapy with electrical tongue stimulation using the Lenire device is evaluated in a controlled pivotal trial (TENT-A3, NCT05227365) consisting of 6-weeks of sound-only stimulation (Stage 1) followed by 6-weeks of bimodal treatment (Stage 2) with 112 participants serving as their own control. The primary endpoint compares the responder rate observed in Stage 2 versus Stage 1, where a responder exceeds 7 points in the Tinnitus Handicap Inventory. In participants with moderate or more severe tinnitus, there is a clinically superior performance of bimodal treatment (58.6%; 95% CI: 43.5%, 73.6%; p = 0.022) compared to sound therapy alone (43.2%; 95% CI: 29.7%, 57.8%), which is not observed in the full cohort across all severity groups. Consistent results are observed for the secondary endpoint based on the Tinnitus Functional Index (bimodal treatment: 45.5%; 95% CI: 31.7%, 59.9%; sound-only stimulation: 29.6%; 95% CI: 18.2%, 44.2%; p = 0.010), where a responder exceeds 13 points. There are no device related serious adverse events. These positive outcomes led to FDA De Novo approval of the Lenire device for tinnitus treatment.


Subject(s)
Tinnitus , Tongue , Tinnitus/therapy , Humans , Female , Male , Middle Aged , Adult , Treatment Outcome , Aged , Electric Stimulation Therapy/methods , Electric Stimulation Therapy/instrumentation , Acoustic Stimulation/methods , Sound , Combined Modality Therapy/methods
11.
Sci Rep ; 14(1): 19181, 2024 08 19.
Article in English | MEDLINE | ID: mdl-39160202

ABSTRACT

How we move our bodies affects how we perceive sound. For instance, head movements help us to better localize the source of a sound and to compensate for asymmetric hearing loss. However, many auditory experiments are designed to restrict head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded freely moving mice for tracking down an ongoing sound source. Over the course of learning, mice navigated to the sound more efficiently. Next, we asked how sound-seeking was affected by hearing loss induced by surgical removal of the malleus from the middle ear. After bilateral hearing loss, sound-seeking performance drastically declined and did not recover. In striking contrast, after unilateral hearing loss mice were only transiently impaired and recovered their sound-seeking ability over about a week. Throughout recovery, unilateral mice increasingly relied on a movement strategy of sequentially checking potential locations for the sound source. In contrast, the startle reflex (an innate auditory behavior) was preserved after unilateral hearing loss and abolished by bilateral hearing loss, without recovery over time. In sum, mice compensate with body movement for permanent unilateral damage to the peripheral auditory system. Looking forward, this paradigm provides an opportunity to examine how movement enhances perception and enables resilient adaptation to sensory disorders.


Subject(s)
Sound Localization , Animals , Mice , Sound Localization/physiology , Reflex, Startle/physiology , Hearing Loss/physiopathology , Male , Acoustic Stimulation , Mice, Inbred C57BL , Behavior, Animal , Sound , Female
12.
PeerJ ; 12: e17622, 2024.
Article in English | MEDLINE | ID: mdl-38952977

ABSTRACT

Introduction: High-velocity thrust manipulation is commonly used when managing joint dysfunctions. Often, these thrust maneuvers elicit an audible pop. It has been unclear what conclusively causes this audible sound and what its clinical meaning is. This study sought to identify the effect of the audible pop on brainwave activity directly following a prone T7 thrust manipulation in asymptomatic/healthy subjects. Methods: This was a quasi-experimental repeated-measures study in which 57 subjects completed the protocol. Brainwave activity was measured with the Emotiv EPOC+, which collects data at 128 Hz through 14 electrodes. Testing was performed in a controlled environment with minimal electrical interference (as measured with a Gauss meter), temperature variance, lighting variance, sound pollution, and other variables that could have influenced or interfered with pure EEG data acquisition. After accommodation, each subject underwent a prone T7 posterior-anterior thrust manipulation. Immediately afterward, brainwave activity was measured for 10 seconds. Results: The non-audible group (N = 20) was 55% male, and the audible group (N = 37) was 43% male. The non-audible group's EEG data revealed a significant change in brainwave activity under some of the electrodes in the frontal, parietal, and occipital lobes. In the audible group, there was a significant change in brainwave activity under all electrodes in the frontal, parietal, and occipital lobes, but not the temporal lobes. Conclusion: The audible sounds caused by a thoracic high-velocity thrust manipulation did not affect activity in the auditory centers of the temporal region. The results support the hypothesis that thrust manipulation, with or without an audible sound, results in generalized relaxation immediately following the manipulation. The absence of a significant difference in frontal-lobe brainwave activity might indicate that the audible pop does not produce a "placebo" mechanism.


Subject(s)
Manipulation, Spinal , Humans , Male , Female , Adult , Manipulation, Spinal/methods , Brain Waves/physiology , Electroencephalography/methods , Young Adult , Sound
13.
PLoS One ; 19(7): e0303994, 2024.
Article in English | MEDLINE | ID: mdl-38968280

ABSTRACT

In recent years, the relation between Sound Event Detection (SED) and Source Separation (SSep) has received a growing interest, in particular, with the aim to enhance the performance of SED by leveraging the synergies between both tasks. In this paper, we present a detailed description of JSS (Joint Source Separation and Sound Event Detection), our joint-training scheme for SSep and SED, and we measure its performance in the DCASE Challenge for SED in domestic environments. Our experiments demonstrate that JSS can improve SED performance, in terms of Polyphonic Sound Detection Score (PSDS), even without additional training data. Additionally, we conduct a thorough analysis of JSS's effectiveness across different event classes and in scenarios with severe event overlap, where it is expected to yield further improvements. Furthermore, we introduce an objective measure to assess the diversity of event predictions across the estimated sources, shedding light on how different training strategies impact the separation of sound events. Finally, we provide graphical examples of the Source Separation and Sound Event Detection steps, aiming to facilitate the interpretation of the JSS methods.


Subject(s)
Sound , Humans , Algorithms
14.
J Acoust Soc Am ; 156(1): 359-368, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38994905

ABSTRACT

A noise attenuation performance test was conducted on earmuffs using a recoilless weapon launch platform in a confined space, along with two acoustic test fixtures (ATFs). The overpressure at the ATF's effective tympanic membrane comprised direct sound at 185 dB sound pressure level (SPL) and reflected sound at 179 dB SPL. Wearing earmuffs reduced these peaks to 162 dB SPL and 169 dB SPL, respectively. The reflected sound from walls was defined as delayed sound. An analytical model for earmuff noise attenuation simulated their effectiveness. The simulation revealed that when the earmuffs attenuated delayed sound, the acoustic impedance of acoustic leakage and the acoustic impedance of the earmuff material decreased by 96% and 50%, respectively. The negative overpressure zone between direct and delayed sound decreased the earmuffs' fit against the ATF. Additionally, the enclosed volume between the earmuff and the ear canal decreased by 12%. After the installation of bandages on the earmuffs, the overpressure peak of delayed sound was reduced by 5 dB. Furthermore, the acoustic impedance of the earmuff's sound leakage path and the acoustic impedance of the earmuff material deformation path increased by 100% and 809%, respectively.


Subject(s)
Acoustics , Ear Protective Devices , Pressure , Humans , Equipment Design , Noise , Sound , Firearms , Adult , Male , Time Factors , Models, Theoretical
15.
Bioinspir Biomim ; 19(5)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38991522

ABSTRACT

This work examines the acoustically actuated motions of artificial flagellated micro-swimmers (AFMSs) and compares the motility of these micro-swimmers with predictions based on the corrected resistive force theory (RFT) and the bar-joint model proposed in our previous work. The key ingredient in the theory is the introduction of a correction factor K in the drag coefficients of the conventional RFT, so that the dynamics of an acoustically actuated AFMS with a rectangular cross-section can be accurately modeled. Experimentally, such AFMSs can be easily manufactured by digital light processing of ultraviolet (UV)-curable resins. We first determined the viscoelastic properties of a UV-cured resin through dynamic mechanical analysis. In particular, the high-frequency storage moduli and loss factors were obtained under the assumption of time-temperature superposition (TTS) and then applied in theoretical calculations. Although the extrapolation based on TTS implies uncertainty in the high-frequency material response, and there is limited accuracy in determining the head oscillation amplitude, the differences between the measured terminal velocities of the AFMSs and the predicted ones are less than 50%, which we consider well acceptable. These results indicate that the motions of acoustic AFMSs can be predicted, and thus designed, paving the way for their long-awaited applications in targeted therapy.


Subject(s)
Computer Simulation , Equipment Design , Models, Biological , Swimming , Swimming/physiology , Equipment Failure Analysis , Biomimetic Materials/chemistry , Biomimetics/methods , Robotics/methods , Robotics/instrumentation , Sound , Acoustics , Computer-Aided Design , Animals
16.
PLoS One ; 19(7): e0304027, 2024.
Article in English | MEDLINE | ID: mdl-39018315

ABSTRACT

Rhythms are the most natural cue for temporal anticipation because many sounds in our living environment have rhythmic structures. Humans have cortical mechanisms that can predict the arrival of the next sound based on rhythm and periodicity. Herein, we showed that temporal anticipation, based on the regularity of sound sequences, modulates peripheral auditory responses via efferent innervation. The medial olivocochlear reflex (MOCR), a sound-activated efferent feedback mechanism that controls outer hair cell motility, was inferred noninvasively by measuring the suppression of otoacoustic emissions (OAE). First, OAE suppression was compared between conditions in which sound sequences preceding the MOCR elicitor were presented at regular (predictable condition) or irregular (unpredictable condition) intervals. We found that OAE suppression in the predictable condition was stronger than that in the unpredictable condition. This implies that the MOCR is strengthened by the regularity of preceding sound sequences. In addition, to examine how many regularly presented preceding sounds are required to enhance the MOCR, we compared OAE suppression within stimulus sequences with 0-3 preceding tones. The OAE suppression was strengthened only when there were at least three regular preceding tones. This suggests that the MOCR was not automatically enhanced by a single stimulus presented immediately before the MOCR elicitor, but rather that it was enhanced by the regularity of the preceding sound sequences.


Subject(s)
Acoustic Stimulation , Cochlea , Humans , Male , Adult , Female , Young Adult , Cochlea/physiology , Olivary Nucleus/physiology , Reflex/physiology , Sound , Auditory Perception/physiology , Otoacoustic Emissions, Spontaneous/physiology , Reflex, Acoustic/physiology
17.
PLoS One ; 19(7): e0306427, 2024.
Article in English | MEDLINE | ID: mdl-39083499

ABSTRACT

When individuals are exposed to two pure tones with close frequencies presented separately in each ear, they perceive a third sound known as binaural beats (BB), characterized by a frequency equal to the difference between the two tones. Previous research has suggested that BB may influence brain activity, potentially benefiting attention and relaxation. In this study, we hypothesized that the impact of BB on cognition and EEG is linked to the spatial characteristics of the sound. Participants listened to various types of spatially moving sounds (BB, panning and alternate beeps) at 6 Hz and 40 Hz frequencies. EEG measurements were conducted throughout the auditory stimulation, and participants completed questionnaires on relaxation, affect, and a sustained attention task. The results indicated that binaural, panning sounds and alternate beeps had a more pronounced effect on electrical brain activity than the control condition. Additionally, an improvement in relaxation was observed with these sounds at both 6 Hz and 40 Hz. Overall, these findings support our hypothesis that the impact of auditory stimulation lies in the spatial attributes rather than the sensation of beating itself.
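The binaural-beat stimulus described above is simple to construct: present a pure tone to one ear and a tone offset by the beat frequency to the other. A minimal sketch for a 6 Hz beat (the 440/446 Hz carrier pair and sample rate are illustrative choices, not the study's stimuli):

```python
import math

def tone(freq_hz, seconds, sample_rate=44100):
    # pure sine tone as a list of float samples in [-1, 1]
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

SR = 44100
left  = tone(440.0, 1.0, SR)      # left-ear carrier
right = tone(446.0, 1.0, SR)      # right-ear carrier, offset by 6 Hz
beat_hz = 446.0 - 440.0           # perceived beat frequency
stereo = list(zip(left, right))   # per-frame (left, right) sample pairs
```

Because the two tones never mix acoustically, the 6 Hz beat is constructed centrally in the auditory system, which is what distinguishes BB from the panning and alternating-beep controls used in the study.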


Subject(s)
Acoustic Stimulation , Attention , Electroencephalography , Humans , Male , Female , Adult , Young Adult , Attention/physiology , Auditory Perception/physiology , Sound , Sound Localization/physiology , Brain/physiology , Cognition/physiology , Relaxation/physiology
18.
Sci Rep ; 14(1): 17656, 2024 07 26.
Article in English | MEDLINE | ID: mdl-39085282

ABSTRACT

Emotionally expressive vocalizations can elicit approach-avoidance responses in humans and non-human animals. We investigated whether artificially generated sounds have similar effects on humans. We assessed whether subjects' reactions were linked to acoustic properties, and associated valence and intensity. We generated 343 artificial sounds with differing call lengths, fundamental frequencies and added acoustic features across 7 categories and 3 levels of biological complexity. We assessed the hypothetical behavioural response using an online questionnaire with a manikin task, in which 172 participants indicated whether they would approach or withdraw from an object emitting the sound. (1) Quieter sounds elicited approach, while loud sounds were associated with avoidance. (2) The effect of pitch was modulated by category, call length and loudness. (2a) Low-pitched sounds in complex sound categories prompted avoidance, while in other categories they elicited approach. (2b) Higher pitch in loud sounds had a distancing effect, while higher pitch in quieter sounds prompted approach. (2c) Longer sounds promoted avoidance, especially at high frequencies. (3) Sounds with higher intensity and negative valence elicited avoidance. We conclude that biologically based acoustic signals can be used to regulate the distance between social robots and humans, which can provide an advantage in interactive scenarios.


Subject(s)
Acoustic Stimulation , Motivation , Sound , Humans , Male , Female , Adult , Motivation/physiology , Young Adult , Surveys and Questionnaires , Emotions/physiology
19.
PLoS One ; 19(7): e0302497, 2024.
Article in English | MEDLINE | ID: mdl-38976700

ABSTRACT

This paper presents a deep-learning-based method to detect recreational vessels. The method takes advantage of existing underwater acoustic measurements from an Estuarine Soundscape Observatory Network based in the estuaries of South Carolina (SC), USA. The detection method is a two-step searching method, called Deep Scanning (DS), which includes a time-domain energy analysis and a frequency-domain spectrum analysis. In the time domain, acoustic signals with higher energy, measured by sound pressure level (SPL), are labeled for the potential presence of moving vessels. In the frequency domain, the labeled acoustic signals are examined against a predefined training dataset using a neural network. This research builds the training data from diverse vessel sound features obtained from real measurements, with durations between 5.0 and 7.5 seconds and frequencies between 800 Hz and 10,000 Hz. The proposed method was then evaluated on all acoustic data from the years 2017, 2018, and 2021: a total of approximately 171,262 two-minute .wav files from three deployed locations in May River, SC. The DS detections were compared to human-observed detections for each audio file, and the results showed that the method was able to classify the existence of vessels with an average accuracy of around 99.0%.
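The time-domain step above amounts to flagging windows whose SPL exceeds a threshold. A minimal sketch using the standard definition SPL = 20·log10(p_rms / p_ref), with the underwater reference pressure of 1 µPa (the threshold value and window contents below are illustrative, not the paper's settings):

```python
import math

P_REF_UPA = 1.0  # underwater reference pressure: 1 micropascal

def spl_db(pressures_upa):
    # root-mean-square pressure of the window, then SPL re 1 uPa
    rms = math.sqrt(sum(p * p for p in pressures_upa) / len(pressures_upa))
    return 20.0 * math.log10(rms / P_REF_UPA)

def flag_windows(windows, threshold_db):
    # time-domain step: label windows that may contain a moving vessel;
    # only flagged windows go on to the frequency-domain network
    return [i for i, w in enumerate(windows) if spl_db(w) > threshold_db]

quiet = [10.0] * 100      # constant 10 uPa -> 20 dB re 1 uPa
loud  = [10000.0] * 100   # constant 10 mPa -> 80 dB re 1 uPa
flagged = flag_windows([quiet, loud], 60.0)
```

Pre-filtering by energy keeps the expensive spectral classification confined to the small fraction of windows that could plausibly contain a vessel.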


Subject(s)
Acoustics , Deep Learning , Estuaries , Rivers , South Carolina , Humans , Recreation , Sound , Ships
20.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230254, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005038

ABSTRACT

Sound serves as a potent medium for emotional well-being, with phenomena like the autonomous sensory meridian response (ASMR) showing a unique capacity for inducing relaxation and alleviating stress. This study aimed to understand how tingling sensations (and, for comparison, pleasant feelings) that such videos induce relate to acoustic features, using a broader range of ASMR videos as stimuli. The sound texture statistics and their timing predictive of tingling and pleasantness were identified through L1-regularized linear regression. Tingling was well-predicted (r = 0.52), predominantly by the envelope of frequencies near 5 kHz in the 1500 to 750 ms period before the response: stronger tingling was associated with a lower amplitude around the 5 kHz frequency range. This finding was further validated using an independent set of ASMR sounds. The prediction of pleasantness was more challenging (r = 0.26), requiring a longer effective time window, threefold that for tingling. These results enhance our understanding of how specific acoustic elements can induce tingling sensations, and how these elements differ from those that induce pleasant feelings. Our findings have potential applications in optimizing ASMR stimuli to improve quality of life and alleviate stress and anxiety, thus expanding the scope of ASMR stimulus production beyond traditional methods. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
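The L1-regularized linear regression used above can be illustrated with a textbook coordinate-descent lasso on synthetic data. Everything here is a sketch: the feature matrix, dimensions, and regularization strength are toy values, not the study's sound-texture statistics.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the L1 penalty
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=500):
    """Coordinate-descent solver for
    min_w (1/2n)||y - Xw||^2 + alpha*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's current contribution removed
            resid = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ resid
            w[j] = soft_threshold(rho, alpha * n) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # toy predictors
true_w = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ true_w                         # noise-free toy responses
w_hat = lasso_cd(X, y, alpha=0.01)     # recovers the sparse pattern
```

The L1 penalty drives irrelevant coefficients exactly to zero, which is what lets this kind of model single out a narrow band (around 5 kHz in the study) as predictive of tingling.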


Subject(s)
Emotions , Humans , Male , Emotions/physiology , Female , Adult , Young Adult , Pleasure/physiology , Acoustic Stimulation , Sound , Meridians , Auditory Perception , Sensation/physiology