Results 1 - 20 of 14,799
1.
Philos Trans R Soc Lond B Biol Sci ; 379(1908): 20230254, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39005038

ABSTRACT

Sound serves as a potent medium for emotional well-being, with phenomena like the autonomous sensory meridian response (ASMR) showing a unique capacity for inducing relaxation and alleviating stress. This study aimed to understand how tingling sensations (and, for comparison, pleasant feelings) that such videos induce relate to acoustic features, using a broader range of ASMR videos as stimuli. The sound texture statistics and their timing predictive of tingling and pleasantness were identified through L1-regularized linear regression. Tingling was well-predicted (r = 0.52), predominantly by the envelope of frequencies near 5 kHz in the 1500 to 750 ms period before the response: stronger tingling was associated with a lower amplitude around the 5 kHz frequency range. This finding was further validated using an independent set of ASMR sounds. The prediction of pleasantness was more challenging (r = 0.26), requiring a longer effective time window, threefold that for tingling. These results enhance our understanding of how specific acoustic elements can induce tingling sensations, and how these elements differ from those that induce pleasant feelings. Our findings have potential applications in optimizing ASMR stimuli to improve quality of life and alleviate stress and anxiety, thus expanding the scope of ASMR stimulus production beyond traditional methods. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
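The L1-regularized regression step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the band layout, segment count, regularization strength, and synthetic data are all assumptions, with the "5 kHz band" effect wired in to mirror the reported negative relationship.

```python
import numpy as np

def soft_threshold(rho, alpha):
    """Soft-thresholding operator used by the lasso coordinate update."""
    return np.sign(rho) * max(abs(rho) - alpha, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """L1-regularized least squares via cyclic coordinate descent.
    Minimizes (1/2n)||y - Xw||^2 + alpha*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ resid / n
            w[j] = soft_threshold(rho, alpha) / col_sq[j]
    return w

rng = np.random.default_rng(0)
n_segments, n_bands = 200, 8
X = rng.normal(size=(n_segments, n_bands))    # hypothetical band-envelope features
true_w = np.zeros(n_bands)
true_w[4] = -1.0                              # "5 kHz band": lower amplitude, stronger tingling
y = X @ true_w + 0.3 * rng.normal(size=n_segments)

w = lasso_cd(X, y, alpha=0.05)
r = np.corrcoef(X @ w, y)[0, 1]               # prediction-vs-response correlation
```

The L1 penalty drives the weights of uninformative bands to exactly zero, which is why this family of models is convenient for identifying *which* acoustic features (and time lags) carry the effect.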


Subject(s)
Emotions , Humans , Male , Emotions/physiology , Female , Adult , Young Adult , Pleasure/physiology , Acoustic Stimulation , Sound , Meridians , Auditory Perception , Sensation/physiology
2.
Int J Mol Sci ; 25(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38999952

ABSTRACT

Vibration and sound are the shaping matrix of the entire universe. Everything in nature is shaped by energy vibrating and communicating through its own sound trail. Every cell within our body vibrates at defined frequencies, generating its peculiar "sound signature". Mitochondria are dynamic, energy-transforming, biosynthetic, and signaling organelles that actively transduce biological information. Recent research has shown that the mitochondrial function of mammalian cells can be modulated by various energetic stimuli, including sound vibrations. Regarding acoustic vibrations, certain types of music have been reported to have beneficial impacts on human health. In very recent studies, the effects of different sound stimuli and musical styles on cellular function and mitochondrial activity were evaluated and compared in human cells cultured in vitro, investigating the underlying molecular mechanisms. This narrative review takes a multilevel trip from the macro- to the intracellular microenvironment, discussing the intimate vibrational sound activities that shape living matter, delving deeper into the molecular mechanisms underlying the sound modulation of biological systems, and focusing our discussion mainly on novel evidence for the competence of mitochondria to act as energy portals capable of sensing and transducing the subtle informational biofields of sound vibration.


Subject(s)
Cellular Microenvironment , Mitochondria , Sound , Vibration , Humans , Mitochondria/metabolism , Animals , Music , Energy Metabolism
3.
PeerJ ; 12: e17622, 2024.
Article in English | MEDLINE | ID: mdl-38952977

ABSTRACT

Introduction: High-velocity thrust manipulation is commonly used when managing joint dysfunctions. Often, these thrust maneuvers elicit an audible pop. It remains unclear what conclusively causes this audible sound and whether it is clinically meaningful. This study sought to identify the effect of the audible pop on brainwave activity directly following a prone T7 thrust manipulation in asymptomatic/healthy subjects. Methods: This was a quasi-experimental repeated-measures study in which 57 subjects completed the study protocol. Brainwave activity was measured with the Emotiv EPOC+, which has 14 electrodes and samples at 128 Hz. Testing was performed in a controlled environment with minimal electrical interference (as measured with a Gauss meter), temperature variance, lighting variance, sound pollution, and other variables that could have influenced or interfered with pure EEG data acquisition. After accommodation, each subject underwent a prone T7 posterior-anterior thrust manipulation. Immediately after the thrust manipulation, brainwave activity was measured for 10 seconds. Results: The non-audible group (N = 20) was 55% male, and the audible group (N = 37) was 43% male. The non-audible group's EEG data revealed a significant change in brainwave activity under some of the electrodes in the frontal, parietal, and occipital lobes. In the audible group, there was a significant change in brainwave activity under all electrodes in the frontal, parietal, and occipital lobes, but not the temporal lobes. Conclusion: The audible sounds caused by a thoracic high-velocity thrust manipulation did not affect activity in the auditory centers of the temporal brain region. The results support the hypothesis that thrust manipulation, with or without an audible sound, results in generalized relaxation immediately following the manipulation.
The absence of a significant difference in brainwave activity in the frontal lobe in this study might indicate that the audible pop does not produce a "placebo" mechanism.


Subject(s)
Manipulation, Spinal , Humans , Male , Female , Adult , Manipulation, Spinal/methods , Brain Waves/physiology , Electroencephalography/methods , Young Adult , Sound
4.
J Acoust Soc Am ; 156(1): 359-368, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38994905

ABSTRACT

A noise attenuation performance test was conducted on earmuffs using a recoilless weapon launch platform in a confined space, along with two acoustic test fixtures (ATFs). The overpressure at the ATF's effective tympanic membrane comprised direct sound at 185 dB sound pressure level (SPL) and reflected sound at 179 dB SPL. Wearing earmuffs reduced these peaks to 162 dB SPL and 169 dB SPL, respectively. The reflected sound from walls was defined as delayed sound. An analytical model for earmuff noise attenuation simulated their effectiveness. The simulation revealed that when the earmuffs attenuated delayed sound, the acoustic impedance of acoustic leakage and the acoustic impedance of the earmuff material decreased by 96% and 50%, respectively. The negative overpressure zone between direct and delayed sound decreased the earmuffs' fit against the ATF. Additionally, the enclosed volume between the earmuff and the ear canal decreased by 12%. After the installation of bandages on the earmuffs, the overpressure peak of delayed sound was reduced by 5 dB. Furthermore, the acoustic impedance of the earmuff's sound leakage path and the acoustic impedance of the earmuff material deformation path increased by 100% and 809%, respectively.
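As a quick worked example of the attenuation figures above (a sketch of the arithmetic only, not the authors' analysis): an SPL drop in dB maps to a peak-pressure reduction factor via the 20·log10 convention for sound pressure.

```python
def pressure_ratio(attenuation_db):
    """Peak-pressure reduction factor implied by an SPL drop in dB
    (20*log10 convention for sound pressure)."""
    return 10 ** (attenuation_db / 20)

# The abstract's numbers: earmuffs reduced the direct-sound peak from
# 185 to 162 dB SPL and the reflected ("delayed") peak from 179 to 169 dB SPL.
direct_att = 185 - 162    # 23 dB attenuation of the direct wave
delayed_att = 179 - 169   # 10 dB attenuation of the delayed wave
```

So the earmuffs cut the direct-wave peak pressure by a factor of roughly 14, but the delayed wave by only a factor of about 3, which is why the delayed sound and the leakage paths dominate the analysis.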


Subject(s)
Acoustics , Ear Protective Devices , Pressure , Humans , Equipment Design , Noise , Sound , Firearms , Adult , Male , Time Factors , Models, Theoretical
5.
PLoS One ; 19(7): e0303994, 2024.
Article in English | MEDLINE | ID: mdl-38968280

ABSTRACT

In recent years, the relation between Sound Event Detection (SED) and Source Separation (SSep) has received growing interest, in particular with the aim of enhancing SED performance by leveraging the synergies between the two tasks. In this paper, we present a detailed description of JSS (Joint Source Separation and Sound Event Detection), our joint training scheme for SSep and SED, and we measure its performance in the DCASE Challenge for SED in domestic environments. Our experiments demonstrate that JSS can improve SED performance, in terms of the Polyphonic Sound Detection Score (PSDS), even without additional training data. Additionally, we conduct a thorough analysis of JSS's effectiveness across different event classes and in scenarios with severe event overlap, where it is expected to yield further improvements. Furthermore, we introduce an objective measure to assess the diversity of event predictions across the estimated sources, shedding light on how different training strategies affect the separation of sound events. Finally, we provide graphical examples of the Source Separation and Sound Event Detection steps, aiming to facilitate the interpretation of the JSS method.


Subject(s)
Sound , Humans , Algorithms
6.
PLoS One ; 19(7): e0302497, 2024.
Article in English | MEDLINE | ID: mdl-38976700

ABSTRACT

This paper presents a deep-learning-based method to detect recreational vessels. The method takes advantage of existing underwater acoustic measurements from an Estuarine Soundscape Observatory Network based in the estuaries of South Carolina (SC), USA. The detection method is a two-step search called Deep Scanning (DS), which combines a time-domain energy analysis and a frequency-domain spectrum analysis. In the time domain, acoustic signals with higher energy, measured by sound pressure level (SPL), are labeled for the potential presence of moving vessels. In the frequency domain, the labeled acoustic signals are examined against a predefined training dataset using a neural network. This research builds the training data from diverse vessel sound features obtained from real measurements, with durations between 5.0 and 7.5 seconds and frequencies between 800 Hz and 10,000 Hz. The proposed method was then evaluated using all acoustic data from the years 2017, 2018, and 2021: a total of approximately 171,262 two-minute .wav files from three deployed locations in May River, SC. The DS detections were compared to human-observed detections for each audio file, and the results showed the method was able to classify the presence of vessels with an average accuracy of around 99.0%.
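The time-domain energy step of a DS-style scan can be sketched as follows. The sampling rate, reference pressure, threshold, and synthetic signals below are illustrative assumptions, not the study's actual parameters; only windows exceeding the energy criterion would be passed to the frequency-domain neural-network step.

```python
import numpy as np

def spl_db(x, p_ref=1e-6):
    """SPL (dB re 1 uPa, the usual underwater convention) of a
    pressure waveform segment x given in Pa."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms / p_ref)

rng = np.random.default_rng(1)
fs = 20_000
t = np.arange(0, 2.0, 1 / fs)
ambient = 0.01 * rng.normal(size=t.size)                      # quiet background
with_vessel = ambient + 0.2 * np.sin(2 * np.pi * 1_000 * t)   # tone in the 800 Hz-10 kHz band

# Time-domain step: flag segments well above the ambient level for the
# frequency-domain (neural network) classification step.
flagged = spl_db(with_vessel) > spl_db(ambient) + 10.0
```

In practice the scan would slide a short window over each two-minute file and adapt the threshold to the local ambient level; the snippet only shows the SPL criterion itself.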


Subject(s)
Acoustics , Deep Learning , Estuaries , Rivers , South Carolina , Humans , Recreation , Sound , Ships
7.
Bioinspir Biomim ; 19(5)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38991522

ABSTRACT

This work examines the acoustically actuated motions of artificial flagellated micro-swimmers (AFMSs) and compares their motility with predictions based on the corrected resistive force theory (RFT) and the bar-joint model proposed in our previous work. The key ingredient in the theory is the introduction of a correction factor K in the drag coefficients to correct the conventional RFT, so that the dynamics of an acoustically actuated AFMS with a rectangular cross-section can be accurately modeled. Experimentally, such AFMSs can be easily manufactured by digital light processing of ultraviolet (UV)-curable resins. We first determined the viscoelastic properties of a UV-cured resin through dynamic mechanical analysis. In particular, the high-frequency storage moduli and loss factors were obtained under the assumption of time-temperature superposition (TTS) and then applied in the theoretical calculations. Although the extrapolation based on TTS implies uncertainty in the high-frequency material response, and the head oscillation amplitude can be determined only with limited accuracy, the differences between the measured terminal velocities of the AFMSs and the predicted ones are less than 50%, which, to us, is well acceptable. These results indicate that the motions of acoustic AFMSs can be predicted, and thus designed, which paves the way for their long-awaited applications in targeted therapy.


Subject(s)
Computer Simulation , Equipment Design , Models, Biological , Swimming , Swimming/physiology , Equipment Failure Analysis , Biomimetic Materials/chemistry , Biomimetics/methods , Robotics/methods , Robotics/instrumentation , Sound , Acoustics , Computer-Aided Design , Animals
8.
Elife ; 122024 Jul 24.
Article in English | MEDLINE | ID: mdl-39046781

ABSTRACT

Predator-prey arms races have led to the evolution of finely tuned disguise strategies. While the theoretical benefits of predator camouflage are well established, no study has yet been able to quantify its consequences for hunting success in natural conditions. We used high-resolution movement data to quantify how barn owls (Tyto alba) conceal their approach when using a sit-and-wait strategy. We hypothesized that hunting barn owls would modulate their landing force, potentially reducing noise levels in the vicinity of prey. Analysing 87,957 landings by 163 individuals equipped with GPS tags and accelerometers, we show that barn owls reduce their landing force as they approach their prey, and that landing force predicts the success of the following hunting attempt. Landing force also varied with the substrate, being lowest on man-made poles in field boundaries. The physical environment, therefore, affects the capacity for sound camouflage, providing an unexpected link between predator-prey interactions and land use. Finally, hunting strike forces in barn owls were the highest recorded in any bird, relative to body mass, highlighting the range of selective pressures that act on landings and the capacity of these predators to modulate their landing force. Overall, our results provide the first measurements of landing force in a wild setting, revealing a new form of motion-induced sound camouflage and its link to hunting success.
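A minimal sketch of how a peak landing force might be estimated from a body-mounted accelerometer trace. The mass, the synthetic trace, and the gravity-compensation assumption are hypothetical illustrations, not the authors' actual processing chain.

```python
import numpy as np

def peak_landing_force_n(acc_xyz, mass_kg):
    """Peak force (N) implied by a gravity-compensated 3-axis
    accelerometer trace (rows = samples, columns = x/y/z in m/s^2)."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    return mass_kg * magnitude.max()

# Synthetic landing: a single brief deceleration spike on the vertical axis.
acc = np.zeros((100, 3))
acc[50, 2] = 60.0                                   # ~6 g spike at touchdown
force = peak_landing_force_n(acc, mass_kg=0.3)      # barn-owl-scale mass (assumed)
```

Repeating this per landing across many tagged individuals is what makes it possible to relate landing force to substrate and to the outcome of the subsequent hunting attempt.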


Subject(s)
Predatory Behavior , Strigiformes , Animals , Strigiformes/physiology , Predatory Behavior/physiology , Sound , Motion
9.
PLoS One ; 19(7): e0304027, 2024.
Article in English | MEDLINE | ID: mdl-39018315

ABSTRACT

Rhythms are the most natural cue for temporal anticipation because many sounds in our living environment have rhythmic structures. Humans have cortical mechanisms that can predict the arrival of the next sound based on rhythm and periodicity. Herein, we showed that temporal anticipation, based on the regularity of sound sequences, modulates peripheral auditory responses via efferent innervation. The medial olivocochlear reflex (MOCR), a sound-activated efferent feedback mechanism that controls outer hair cell motility, was inferred noninvasively by measuring the suppression of otoacoustic emissions (OAE). First, OAE suppression was compared between conditions in which sound sequences preceding the MOCR elicitor were presented at regular (predictable condition) or irregular (unpredictable condition) intervals. We found that OAE suppression in the predictable condition was stronger than that in the unpredictable condition. This implies that the MOCR is strengthened by the regularity of preceding sound sequences. In addition, to examine how many regularly presented preceding sounds are required to enhance the MOCR, we compared OAE suppression within stimulus sequences with 0-3 preceding tones. The OAE suppression was strengthened only when there were at least three regular preceding tones. This suggests that the MOCR was not automatically enhanced by a single stimulus presented immediately before the MOCR elicitor, but rather that it was enhanced by the regularity of the preceding sound sequences.


Subject(s)
Acoustic Stimulation , Cochlea , Humans , Male , Adult , Female , Young Adult , Cochlea/physiology , Olivary Nucleus/physiology , Reflex/physiology , Sound , Auditory Perception/physiology , Otoacoustic Emissions, Spontaneous/physiology , Reflex, Acoustic/physiology
10.
PLoS One ; 19(7): e0306427, 2024.
Article in English | MEDLINE | ID: mdl-39083499

ABSTRACT

When individuals are exposed to two pure tones with close frequencies presented separately to each ear, they perceive a third sound known as binaural beats (BB), characterized by a frequency equal to the difference between the two tones. Previous research has suggested that BB may influence brain activity, potentially benefiting attention and relaxation. In this study, we hypothesized that the impact of BB on cognition and EEG is linked to the spatial characteristics of the sound. Participants listened to various types of spatially moving sounds (BB, panning, and alternate beeps) at 6 Hz and 40 Hz frequencies. EEG measurements were conducted throughout the auditory stimulation, and participants completed questionnaires on relaxation and affect, as well as a sustained attention task. The results indicated that binaural beats, panning sounds, and alternate beeps had a more pronounced effect on electrical brain activity than the control condition. Additionally, an improvement in relaxation was observed with these sounds at both 6 Hz and 40 Hz. Overall, these findings support our hypothesis that the impact of auditory stimulation lies in the spatial attributes rather than in the sensation of beating itself.
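The beat stimulus itself is simple to construct: a stereo pair of pure tones whose left/right frequencies differ by the desired beat rate. A minimal sketch; the carrier frequency and duration are arbitrary choices for illustration, not the study's stimuli.

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s=1.0, fs=44_100):
    """Stereo signal: left ear gets carrier_hz, right ear gets
    carrier_hz + beat_hz, so the perceived beat rate equals beat_hz."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)

sig = binaural_beat(440.0, 6.0)   # 6 Hz beat, one of the rates used in the study
```

Panning and alternate-beep conditions would instead modulate amplitude between the two channels at the same rate, which is what lets the study separate spatial movement from the beating percept.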


Subject(s)
Acoustic Stimulation , Attention , Electroencephalography , Humans , Male , Female , Adult , Young Adult , Attention/physiology , Auditory Perception/physiology , Sound , Sound Localization/physiology , Brain/physiology , Cognition/physiology , Relaxation/physiology
11.
Sci Rep ; 14(1): 17656, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39085282

ABSTRACT

Emotionally expressive vocalizations can elicit approach-avoidance responses in humans and non-human animals. We investigated whether artificially generated sounds have similar effects on humans. We assessed whether subjects' reactions were linked to acoustic properties, and associated valence and intensity. We generated 343 artificial sounds with differing call lengths, fundamental frequencies and added acoustic features across 7 categories and 3 levels of biological complexity. We assessed the hypothetical behavioural response using an online questionnaire with a manikin task, in which 172 participants indicated whether they would approach or withdraw from an object emitting the sound. (1) Quieter sounds elicited approach, while loud sounds were associated with avoidance. (2) The effect of pitch was modulated by category, call length and loudness. (2a) Low-pitched sounds in complex sound categories prompted avoidance, while in other categories they elicited approach. (2b) Higher pitch in loud sounds had a distancing effect, while higher pitch in quieter sounds prompted approach. (2c) Longer sounds promoted avoidance, especially at high frequencies. (3) Sounds with higher intensity and negative valence elicited avoidance. We conclude that biologically based acoustic signals can be used to regulate the distance between social robots and humans, which can provide an advantage in interactive scenarios.


Subject(s)
Acoustic Stimulation , Motivation , Sound , Humans , Male , Female , Adult , Motivation/physiology , Young Adult , Surveys and Questionnaires , Emotions/physiology
12.
Sci Rep ; 14(1): 16519, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39019952

ABSTRACT

Incidental capture of non-target species poses a pervasive threat to many marine species, with sometimes devastating consequences for both fisheries and conservation efforts. Because of the well-known importance of vocalizations in cetaceans, acoustic deterrents have been used extensively for these species. In contrast, acoustic communication in sea turtles has been considered negligible, and the question has remained largely unexplored. Addressing this challenge therefore requires a comprehensive understanding of sea turtles' responses to sensory signals. In this study, we scrutinized the avenue of auditory cues, specifically the natural sounds produced by green turtles (Chelonia mydas) in Martinique, as a potential tool to reduce bycatch. We recorded 10 sounds produced by green turtles and identified those that appear to correspond to alerts, flight, or social contact between individuals. Subsequently, these turtle sounds, as well as synthetic and natural (earthquake) sounds, were played back to turtles in known foraging areas to assess the behavioral responses of green turtles to these sounds. Our data highlighted that the playback of sounds produced by sea turtles was associated with alert responses or increased vigilance in individuals. This suggests novel opportunities for using sea turtle sounds to deter them from fishing gear or other potentially harmful areas, and highlights the potential of our research to improve the conservation of sea turtle populations.


Subject(s)
Turtles , Vocalization, Animal , Animals , Turtles/physiology , Vocalization, Animal/physiology , Conservation of Natural Resources/methods , Sound
13.
Sci Rep ; 14(1): 17462, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075109

ABSTRACT

Most current soundscape research is limited to the restorative effect of single-element soundscapes, yet combinations of sounds are what is typically encountered in outdoor activities, and there has been no evidence that multi-element natural soundscapes are more restorative. In this study, Zhangjiajie National Forest Park in China was used as the study site: the subjects' physiological indices were collected through electroencephalogram (EEG) signals, and the POMS short-form psychological scale was used to assess their subjective psychological responses to the soundscapes. The results showed that (1) the psychophysiological restorative ability of the natural soundscape of the National Forest Park was confirmed, with the subjects' psychological and physiological indices changing significantly and positively after listening to each segment of the natural soundscape (p = 0.001). (2) The multi-natural-sound combination ranked first overall among the five natural soundscapes, so the multi-element combination did indeed provide better restorative effects than the single-element sounds. (3) Gender generally did not have a significant effect on restoration; only Windy Sound, among the four single-element natural soundscapes and one multi-element combination, showed a significant gender difference. Methodologically, this study used cluster analysis to group the five types of natural soundscapes by psychological and physiological recovery ability, and used ridge regression to construct mathematical models of psychological and physiological recovery for each of four natural soundscapes.
The study of human physiological and psychological recovery from different types of natural soundscapes in China's national forest parks will provide a basis for soundscape planning, design, and policy formulation in national forest parks.


Subject(s)
Forests , Sound , Humans , Female , Male , China , Adult , Parks, Recreational , Psychophysiology , Electroencephalography , Auditory Perception/physiology , Young Adult
14.
Clin Exp Dent Res ; 10(4): e917, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38973208

ABSTRACT

OBJECTIVES: To determine the correlation between the primary implant stability quotient and the implant percussion sound frequency. MATERIALS AND METHODS: A total of 14 pig ribs were scanned using a dental cone-beam computed tomography (CBCT) scanner to classify the bone specimens into three bone-density categories by Hounsfield unit (HU) value: D1 bone: >1250 HU; D2: 850-1250 HU; D3: <850 HU. Then, 96 implants were inserted: 32 implants in D1 bone, 32 in D2 bone, and 32 in D3 bone. The primary implant stability quotient (ISQ) was analyzed, and the percussion sound was recorded with a wireless microphone and analyzed using frequency-analysis software. RESULTS: Statistically significant positive correlations were found between the primary ISQ and the bone-density HU value (r = 0.719; p < 0.001), and between the primary ISQ and the percussion sound frequency (r = 0.606; p < 0.001). Furthermore, significant differences in primary ISQ values and percussion sound frequency were found between D1 and D2 bone, as well as between D1 and D3 bone. However, no significant differences in primary ISQ values or percussion sound frequency were found between D2 and D3 bone. CONCLUSION: The primary ISQ value and the percussion sound frequency are positively correlated.
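The reported statistic is a plain Pearson correlation over paired per-implant measurements. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical paired measurements per implant: primary ISQ and the
# dominant percussion-sound frequency (Hz). Values are for illustration only.
isq = np.array([55, 60, 62, 68, 70, 74, 78, 80], dtype=float)
freq = np.array([1900, 2100, 2050, 2400, 2500, 2600, 2900, 3000], dtype=float)

r = np.corrcoef(isq, freq)[0, 1]   # Pearson correlation coefficient
```

A positive r close to 1 means stiffer (more stable) implants ring at higher percussion frequencies, which is the relationship the study quantifies.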


Subject(s)
Bone Density , Cone-Beam Computed Tomography , Dental Implants , Percussion , Animals , Swine , Percussion/instrumentation , Bone Density/physiology , Sound , Ribs/surgery , Dental Implantation, Endosseous/methods , Dental Implantation, Endosseous/instrumentation , Dental Prosthesis Retention
15.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894232

ABSTRACT

Sound localization is a crucial aspect of human auditory perception. Virtual reality (VR) technologies provide immersive audio platforms that allow human listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms, which are based on a generic head-related transfer function (HRTF), often lack accuracy in individual sound perception and localization, owing to significant individual differences in this function. In this study, we aimed to investigate the disparities between the locations of sound sources perceived by users and the locations generated by the platform, and to determine whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects over six separate training sessions arranged across 2 weeks. We employed three modes of training to assess their effects on sound localization, in particular to study how multimodal error guidance (visual and sound guidance in combination with kinesthetic/postural guidance) affects the effectiveness of the training. We analyzed the collected data in terms of the training effect between pre- and post-sessions, as well as the retention effect between two separate sessions, using subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, particularly when kinesthetic/postural guidance was combined with visual and sound guidance. Conversely, visual error guidance alone was largely ineffective. For the retention effect between two separate sessions, by contrast, we found no meaningful statistical effect for any of the three error-guidance modes over the 2-week course of training.
These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.


Subject(s)
Sound Localization , Humans , Sound Localization/physiology , Female , Male , Adult , Virtual Reality , Young Adult , Auditory Perception/physiology , Sound
16.
Sensors (Basel) ; 24(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38894296

ABSTRACT

Jump height tests are employed to measure the lower-limb muscle power of athletic and non-athletic populations. The most popular instruments for this purpose are jump mats and, in recent years, smartphone apps, which compute jump height through manual annotation of video recordings and, more recently, automatically from the sound produced during the jump, from which the flight time is extracted. In previous work, the authors presented such sound-based systems, in which the take-off and landing events were obtained from audio recordings of jump executions using classical signal processing. In this work, a more precise, noise-immune, and robust system, capable of working in the most unfavorable environments, is presented. The system uses a deep neural network trained specifically for this purpose. More than 300 jumps were recorded to train and validate the network's performance. A jump mat served as ground truth; the new system achieved slightly better accuracy in quiet and moderately quiet environments and excellent accuracy in noisy and complicated ones. The developed audio-based system is a trustworthy instrument for measuring jump height accurately in any kind of environment, providing a measurement tool that can be accessed through a mobile phone in the form of an app.
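The flight-time method underlying such audio systems is a one-line formula: assuming take-off and landing occur in the same body configuration, the centre of mass rises for half the flight time under gravity, so h = g·t²/8. A minimal sketch (the 0.5 s flight time is an illustrative value):

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_m(flight_time_s):
    """Flight-time method: the centre of mass rises for t/2 under
    gravity, giving h = g * t^2 / 8 (assumes matching take-off and
    landing posture)."""
    return G * flight_time_s ** 2 / 8.0

# e.g. a 0.5 s flight time (take-off to landing, as detected in the audio)
h = jump_height_m(0.5)
```

Everything the neural network contributes is the accurate detection of the take-off and landing instants in noisy audio; the height itself follows from this formula.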


Subject(s)
Neural Networks, Computer , Humans , Sound , Mobile Applications , Smartphone , Sports/physiology , Male , Muscle Strength/physiology
17.
PLoS One ; 19(6): e0304913, 2024.
Article in English | MEDLINE | ID: mdl-38900836

ABSTRACT

Research has shown that perceiving the order of successive auditory stimuli could be affected by their nameability. The present research re-examined this hypothesis, using tasks requiring participants to report the order of successively presented (with no interstimulus gaps) environmental (i.e., easily named stimuli) and abstract (i.e., hard-to-name stimuli) sounds of short duration (i.e., 200 ms). Using the same sequences, we also examined the accuracy of the sounds perceived by administering enumeration tasks. Data analyses showed that accuracy in the ordering tasks was equally low for both environmental and abstract sounds, whereas accuracy in the enumeration tasks was higher for the former as compared to the latter sounds. Importantly, overall accuracy in the enumeration tasks did not reach ceiling levels, suggesting some limitations in the perception of successively presented stimuli. Overall, naming fluency seemed to affect sound enumeration, but no effects were obtained for order perception. Furthermore, an effect of each sound's location in a sequence on ordering accuracy was noted. Our results question earlier notions suggesting that order perception is mediated by stimuli's nameability and leave open the possibility that memory capacity limits may play a role.


Subject(s)
Acoustic Stimulation , Auditory Perception , Memory, Short-Term , Sound , Humans , Male , Female , Auditory Perception/physiology , Adult , Memory, Short-Term/physiology , Young Adult , Names
18.
Zh Nevrol Psikhiatr Im S S Korsakova ; 124(5. Vyp. 2): 20-25, 2024.
Article in Russian | MEDLINE | ID: mdl-38934662

ABSTRACT

OBJECTIVE: To test the hypothesis that mean sleep latency (SL) differs across three falling-asleep conditions: (1) accompanied by an audio stimulus embedded with binaural beats (BB); (2) after listening to suggestive body-relaxation instructions; (3) accompanied by an audio stimulus embedded with BB after listening to suggestive body-relaxation instructions (i.e., the combination of 1 and 2). MATERIAL AND METHODS: For the purposes of the study, a dedicated Android application was developed and installed on the subjects' own smartphones. The application used a screen-tapping test to track the process of falling asleep. Data from 63 subjects, presented with the 3 types of sound stimuli above in a counterbalanced scheme, were analyzed. RESULTS: Statistical analysis confirmed the initial hypothesis that SL depends on the type of sound stimulus (p<0.05). Pairwise SL comparison showed a reliable difference between stimulus (3), 1149±113 s, and stimulus (1), 1469±89 s (p<0.01). SL for stimulus (2) had an intermediate value of 1269±112 s (differing from (1) at a trend level). CONCLUSION: The use of background sound embedded with BB enhances the effect of suggestive instructions to improve sleep, but it is the suggestion, as a psychotherapeutic technique, that is determinant.


Subject(s)
Acoustic Stimulation , Humans , Male , Female , Adult , Acoustic Stimulation/methods , Young Adult , Sleep Latency/physiology , Sleep/physiology , Sound , Middle Aged
19.
Eur Rev Med Pharmacol Sci ; 28(11): 3781-3786, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38884513

ABSTRACT

OBJECTIVE: Tinnitus Retraining Therapy (TRT) is a rehabilitation approach for tinnitus that is currently considered an effective treatment with a high response rate. TRT is usually delivered through sound generators; however, these are often difficult to obtain and expensive. Recently, mobile apps have been proposed for TRT. This study aims to verify the effectiveness of TRT performed using mobile apps in reducing the adverse effects of tinnitus on quality of life. PATIENTS AND METHODS: A total of 80 patients with tinnitus in category 0 (mild tinnitus) or category 1 (moderate tinnitus), according to the Jastreboff classification, were included in the study. Patients in each category were divided into two homogeneous groups: the first (Group A) was treated with a traditional sound generator and the second (Group B) with a mobile app. The Italian version of the Tinnitus Handicap Inventory (THI) was used to assess the impact of tinnitus on quality of life in the enrolled patients and to evaluate their response to TRT. RESULTS: A significant improvement in THI scores was found in category 0 patients for both the sound generator and mobile app groups, with no difference between the two delivery technologies (-1.186, p=0.783); conversely, in category 1 patients, tinnitus improvement was reported only for subjects treated with a sound generator (-14.529, p<0.001), while no significant improvement was found in patients treated with the mobile app. CONCLUSIONS: This study confirms the value of TRT, which in patients with mild tinnitus (category 0) can also be delivered through mobile apps with results comparable to traditional sound generators. Further studies are needed to confirm the effects of the different tinnitus treatments available and to improve knowledge on this topic.


Subject(s)
Mobile Applications , Quality of Life , Tinnitus , Tinnitus/therapy , Humans , Male , Female , Middle Aged , Sound , Adult , Surveys and Questionnaires , Aged , Treatment Outcome
20.
Mar Environ Res ; 199: 106600, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38875901

ABSTRACT

Marine ecosystems are increasingly subjected to anthropogenic pressures, which demands urgent monitoring plans. Understanding soundscapes can offer unique insights into ocean status, providing important information and revealing different sounds and their sources. Fishes can be prominent soundscape contributors, making passive acoustic monitoring (PAM) a potential tool for detecting the presence of vocal fish species and monitoring changes in biodiversity. The major goal of this research was to provide a first reference for the marine soundscapes of the Madeira Archipelago, focusing on fish sounds, as a basis for a long-term PAM program. Based on the literature, 102 potentially vocal and 35 vocal fish species were identified. Additionally, 43 putative fish sound types were detected in audio recordings from two marine protected areas (MPAs) in the archipelago: the Garajau MPA and the Desertas MPA. The Garajau MPA exhibited higher fish vocal activity, a greater variety of putative fish sound types and higher fish sound diversity. A lower abundance of sounds was found at night in both MPAs. Acoustic activity revealed a clear distinction between diurnal and nocturnal fish groups and demonstrated daily patterns of fish sound activity, suggesting temporal and spectral partitioning of the acoustic space. Pomacentridae species were proposed as candidates for some of the dominant sound types detected during the day, while scorpionfishes (Scorpaena spp.) were proposed as sources of some of the dominant nocturnal fish sounds. This study provides an important baseline on this community's acoustic behaviour and is a valuable stepping stone for future non-invasive and cost-effective monitoring programs in Madeira.
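The diurnal/nocturnal partitioning described in this abstract reduces, at its simplest, to tallying timestamped detections by hour of day. A minimal sketch on invented data (the detection times, sound-type labels, and the 06:00-18:00 daylight window are all hypothetical illustrations, not the study's detections or definitions):

```python
from collections import Counter

# Hypothetical (hour_of_day, sound_type) pairs standing in for real
# PAM detections from the recordings.
detections = [
    (10, "pomacentrid_pulse"), (11, "pomacentrid_pulse"), (14, "pomacentrid_pulse"),
    (2, "scorpaena_growl"), (3, "scorpaena_growl"), (23, "scorpaena_growl"),
    (12, "unknown_knock"),
]

# Tally detections per hour, then split into day vs night using an
# assumed 06:00-18:00 daylight window.
by_hour = Counter(hour for hour, _ in detections)
diurnal = sum(c for h, c in by_hour.items() if 6 <= h < 18)
nocturnal = sum(c for h, c in by_hour.items() if h < 6 or h >= 18)
print(f"diurnal detections: {diurnal}, nocturnal: {nocturnal}")
```

Grouping the same tallies by sound type rather than in aggregate is what lets an analysis like this one attribute daytime peaks to one candidate group (e.g., pomacentrids) and nocturnal peaks to another (e.g., Scorpaena spp.).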


Subject(s)
Acoustics , Biodiversity , Fishes , Vocalization, Animal , Animals , Fishes/physiology , Atlantic Ocean , Environmental Monitoring/methods , Sound , Ecosystem , Portugal