Results 1 - 20 of 4,574
1.
ESC Heart Fail ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090841

ABSTRACT

AIMS: A fourth heart sound (S4) was reported to be almost never present in patients with amyloid light-chain cardiomyopathy. There have been no reports on S4 in patients with wild-type transthyretin amyloid cardiomyopathy (ATTRwt-CM). This study aimed to clarify the clinical implications of S4 in patients with ATTRwt-CM. METHODS AND RESULTS: Seventy-six patients with ATTRwt-CM (mean age: 80.4 ± 5.4 years, 68 males) who had undergone phonocardiography (PCG) were retrospectively assessed. We measured S4 amplitude on digitally recorded PCG. S4 was considered to be present when its amplitude was 1.0 mm or greater on the PCG. Distinct S4 was defined as S4 with an amplitude of 2.0 mm or greater, which is usually recognizable by auscultation. According to the rhythm and presence or absence of S4, the patients were divided into three groups, namely, sinus rhythm (SR) with S4, SR without S4, and non-SR. Non-SR consisted of atrial fibrillation, atrial flutter, and atrial tachycardia. Thirty-six patients were in SR and the remaining 40 patients were in non-SR. In the 36 patients in SR, S4 was shown by PCG to be present in 17 patients (47%), and distinct S4 was recognized in 7 patients (19%) by auscultation. In patients who were in SR, those with S4 had higher systolic blood pressure (124 ± 15 vs. 99 ± 8 mmHg, P < 0.001), lower level of plasma B-type natriuretic peptide (308 [interquartile range (IQR): 165, 354] vs. 508 [389, 765] pg/mL, P = 0.034) and lower level of high-sensitivity cardiac troponin T (0.068 [0.046, 0.089] vs. 0.109 [0.063, 0.148] ng/mL, P = 0.042) than those without S4. There was no significant difference in left atrium (LA) volume index or LA reservoir strain between patients with S4 and without S4. Patients with S4 had more preserved LA systolic function than those without S4 (peak atrial filling velocity: 53 ± 25 vs. 34 ± 9 cm/s, P = 0.033; LA contractile strain: 4.1 ± 2.1 vs. 1.6 ± 2.0%, P = 0.012). 
Patients in SR without S4 had worse short-term prognosis compared with the other two groups (generalized Wilcoxon test, P = 0.033). CONCLUSIONS: S4 was present in 47% of the patients in SR with ATTRwt-CM. Patients in SR without S4 had more impaired LA systolic function than those in SR with S4. The absence of S4 portends a poor short-term prognosis in patients with ATTRwt-CM.
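The grouping rule described above (S4 "present" at an amplitude of at least 1.0 mm on the phonocardiogram, "distinct" at 2.0 mm or greater, with non-sinus rhythms set aside) can be sketched as a small helper. The function and variable names below are illustrative only, not from the study:

```python
def classify_patient(rhythm: str, s4_amplitude_mm: float) -> str:
    """Assign a patient to one of the three study groups.

    Thresholds follow the abstract: S4 is considered present at an
    amplitude of >= 1.0 mm on the PCG. Names are illustrative only.
    """
    if rhythm != "SR":  # atrial fibrillation, flutter, or atrial tachycardia
        return "non-SR"
    return "SR with S4" if s4_amplitude_mm >= 1.0 else "SR without S4"


def s4_is_distinct(s4_amplitude_mm: float) -> bool:
    """S4 >= 2.0 mm is usually recognizable by auscultation."""
    return s4_amplitude_mm >= 2.0
```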

2.
Front Hum Neurosci ; 18: 1434786, 2024.
Article in English | MEDLINE | ID: mdl-39086377

ABSTRACT

Cochlear implant (CI) systems differ in terms of electrode design and signal processing. It is likely that patients fit with different implant systems will experience different percepts when presented speech via their implant. The sound quality of speech can be evaluated by asking single-sided-deaf (SSD) listeners fit with a CI to modify clean signals presented to their typically hearing ear to match the sound quality of signals presented to their CI ear. In this paper, we describe very close matches to CI sound quality, i.e., similarity ratings of 9.5 to 10 on a 10-point scale, by ten patients fit with a 28 mm electrode array and MED EL signal processing. The modifications required to make close approximations to CI sound quality fell into two groups: one consisted of a restricted frequency bandwidth and spectral smearing, while the second was characterized by a wide bandwidth and no spectral smearing. Both sets of modifications were different from those found for patients with shorter electrode arrays, who chose upshifts in voice pitch and formant frequencies to match CI sound quality. The data from matching-based metrics of CI sound quality document that speech sound quality differs for patients fit with different CIs and among patients fit with the same CI.

3.
Front Neurol ; 15: 1428106, 2024.
Article in English | MEDLINE | ID: mdl-39108653

ABSTRACT

Objectives: Single-sided deafness (SSD) is often accompanied by tinnitus, resulting in a decreased quality of life. Currently, there is a lack of high-level evidence comparing different treatment options for SSD regarding tinnitus reduction. This randomized controlled trial (RCT) evaluated the effect of a cochlear implant (CI), bone conduction device (BCD), contralateral routing of sound (CROS), and no treatment on tinnitus outcomes in SSD patients, with follow-up extending to 24 months. Methods: A total of 120 adult SSD patients were randomized to three groups: CI, or a trial period with first a BCD on a headband and then a CROS, or vice versa. After the trial periods, patients opted for a BCD, CROS, or no treatment. At the start of follow-up, 28 patients were implanted with a CI, 25 patients with a BCD, 34 patients had a CROS, and 26 patients chose no treatment. The Tinnitus Handicap Inventory (THI), Tinnitus Questionnaire (TQ), the Visual Analog Scale (VAS), and the Hospital Anxiety and Depression Scale (HADS) were completed at baseline and at 3, 6, 12, and 24 months of follow-up. Results: The CI and BCD groups showed significantly decreased tinnitus impact scores. The CI group showed the largest decrease, which was already observed at 3 months of follow-up. Compared to baseline, the median THI score decreased by 23 points, the TQ score by 17 points, and the VAS score by 60 points at 24 months. In the BCD group, the TQ score decreased by 9 points, and the VAS decreased by 25 points at 24 months. The HADS anxiety and depression subscales showed no indication of anxiety or depression at baseline or at 24 months in any group. Conclusion: In this RCT, SSD patients treated with a CI or BCD showed an overall decrease in tinnitus impact scores up to 24 months compared to baseline. The CI group reported the largest and most stable reduction.
Cochlear implants appear to be superior to BCDs, CROS devices, and no treatment for achieving partial or complete resolution of tinnitus in patients with SSD. Clinical trial registration: Netherlands Trial Register, www.onderzoekmetmensen.nl/nl/trial/26952, NTR4457, CINGLE trial.

4.
Heliyon ; 10(14): e34067, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39104510

ABSTRACT

In this paper, a new approach has been introduced for classifying music genres. The proposed approach involves transforming an audio signal into a unified representation known as a sound spectrum, from which texture features have been extracted using an enhanced Ridgelet Neural Network (RNN). Additionally, the RNN has been optimized using an improved version of the partial reinforcement effect optimizer (IPREO) that effectively avoids local optima and enhances the RNN's generalization capability. The GTZAN dataset has been utilized in experiments to assess the effectiveness of the proposed RNN/IPREO model for music genre classification. The results show an impressive accuracy of 92 % by incorporating a combination of spectral centroid, Mel-spectrogram, and Mel-frequency cepstral coefficients (MFCCs) as features. This performance significantly outperformed K-Means (58 %) and Support Vector Machines (up to 68 %). Furthermore, the RNN/IPREO model outperformed various deep learning architectures such as Neural Networks (65 %), RNNs (84 %), CNNs (88 %), DNNs (86 %), VGG-16 (91 %), and ResNet-50 (90 %). It is worth noting that the RNN/IPREO model achieved results comparable to well-known deep models like VGG-16, ResNet-50, and RNN-LSTM, sometimes even surpassing their scores. This highlights the strength of its hybrid CNN-bidirectional RNN design in conjunction with the IPREO parameter optimization algorithm for extracting intricate and sequential auditory data.
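As a concrete illustration of one of the features listed above, the spectral centroid can be computed directly with NumPy. This is a generic sketch, not the paper's implementation (Mel-spectrograms and MFCCs are typically obtained via a library such as librosa):

```python
import numpy as np

def spectral_centroid(signal, sr, frame=1024, hop=512):
    """Per-frame spectral centroid (Hz): the magnitude-weighted mean
    frequency of each windowed FFT frame."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    centroids = []
    for start in range(0, len(signal) - frame + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        total = mag.sum()
        centroids.append(freqs @ mag / total if total > 0 else 0.0)
    return np.array(centroids)

# Sanity check: a pure 1 kHz tone should have its centroid near 1 kHz.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
```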

5.
Article in English | MEDLINE | ID: mdl-39110404

ABSTRACT

When a rhythm makes an event predictable, that event is perceived faster and typically more accurately. However, the experiments showing this used simple tasks, and most manipulated temporal expectancy by using periodic or aperiodic precursors unrelated to stimulus and task. Three experiments tested the generality of these observations in a complex task in which rhythm was intrinsic to, rather than a precursor of, the information needed to respond: listeners averaged the laterality of a stream of noise bursts. We varied presentation rate, degree of periodicity, and average lateralisation. Decisions following a probe tone were fastest after periodic stimuli and slowest after the most aperiodic stimuli. Without a probe tone, listeners responded sooner during periodic sequences, thus hearing less information. Periodicity did not benefit accuracy overall. This gain in speed but not accuracy for less information is not reported for simpler tasks. Neural entrainment supplemented by cognitive factors provides a tentative explanation. When the task is inherently complex and demands high attention over long durations, both expected-periodic and unexpected-aperiodic stimuli can increase response amplitude, enhancing stimulus representation, but periodicity increases confidence to respond early. Drift diffusion modelling supports this proposal: aperiodicity modulated the decision threshold, but not the drift rate or non-decision time. Together, these new data and the literature point towards task-dependent effects of temporal expectation on decision-making, showing interactions between rhythmic variance, task complexity, and sources of expectation about stimuli. We suggest the implications are worth exploring to extend our understanding of the effects of rhythmicity on decision-making to everyday situations.
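The drift-diffusion finding above (aperiodicity raising the decision threshold while leaving drift rate and non-decision time unchanged) can be illustrated with a minimal simulation. The parameter values below are illustrative, not fitted to the study:

```python
import numpy as np

def simulate_ddm(drift, threshold, non_decision, n_trials=500,
                 dt=0.001, noise_sd=1.0, seed=0):
    """Simulate a basic two-choice drift-diffusion model.

    Evidence starts at 0 and accumulates toward +/-threshold; response
    time = accumulation time + non-decision time.
    Returns (rts, choices) as arrays.
    """
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        choices.append(1 if x > 0 else 0)
    return np.array(rts), np.array(choices)

# Raising only the threshold slows responses, with drift rate and
# non-decision time held fixed.
rt_low, _ = simulate_ddm(drift=1.0, threshold=0.8, non_decision=0.3)
rt_high, _ = simulate_ddm(drift=1.0, threshold=1.6, non_decision=0.3)
```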

6.
Med Biol Eng Comput ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39098860

ABSTRACT

Heart sound signals are vital for the machine-assisted detection of congenital heart disease. However, diagnostic performance is limited by noise during heart sound acquisition. A limitation of existing noise reduction schemes is that the pathological components of the signal are weak and risk being filtered out with the noise. In this research, a novel approach for classifying heart sounds based on median ensemble empirical mode decomposition (MEEMD), Hurst analysis, improved threshold denoising, and neural networks is presented. In decomposing the heart sound signal into several intrinsic mode functions (IMFs), mode mixing and mode splitting can be effectively suppressed by MEEMD. Hurst analysis is adopted for identifying the noisy content of the IMFs. Then, the noise-dominated IMFs are denoised by an improved threshold function. Finally, the noise-reduced signal is generated by reconstructing the processed components together with the remaining components. A database of 5000 heart sounds from congenital heart disease patients and normal volunteers was constructed. The Mel spectral coefficients of the denoised signals were used as input vectors to a convolutional neural network for classification to verify the effectiveness of the preprocessing algorithm. An accuracy of 93.8%, a specificity of 93.1%, and a sensitivity of 94.6% were achieved for classifying normal cases versus abnormal ones.
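The denoising step can be sketched with classic soft thresholding as a simplified stand-in for the paper's improved threshold function; the helper names, and the use of a boolean mask for the Hurst-flagged IMFs, are assumptions for illustration:

```python
import numpy as np

def soft_threshold(x, thresh):
    """Classic soft-threshold shrinkage: values below `thresh` are
    zeroed, larger ones shrink toward zero by `thresh`."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def denoise(imfs, noisy_mask, thresh=0.5):
    """Threshold only the noise-dominated IMFs (as flagged, e.g., by a
    Hurst-exponent test) and reconstruct the signal by summation."""
    processed = [soft_threshold(imf, thresh) if noisy else imf
                 for imf, noisy in zip(imfs, noisy_mask)]
    return np.sum(processed, axis=0)
```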

7.
HardwareX ; 19: e00555, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39099721

ABSTRACT

The design and characterization of a low-cost, open-source auditory delivery system for delivering high-performance auditory stimuli is presented. The system includes a high-fidelity sound card and audio amplifier devices with low latency and wide bandwidth, targeted at behavioral neuroscience research. The characterization of the individual devices and the entire system is performed, providing thorough audio characterization data across varying frequencies and sound levels. The system implements the open-source Harp protocol, enabling hardware timestamping of devices and seamless synchronization with other Harp devices.

8.
Sci Rep ; 14(1): 18382, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117693

ABSTRACT

This study aims to investigate the potential of integrating natural biochar (BC) derived from eggshell waste into flexible polyurethane (FPU) foam to enhance its mechanical and acoustic performance. The study explores the impact of incorporating BC at various weight ratios (0.1, 0.3, 0.5, and 0.7 wt. %) on the properties of the FPU foam. Additionally, the effects of modifying the BC with (3-aminopropyl)trimethoxysilane (APTMS) at different ratios (10, 20, and 30 wt. %) and the influence of diverse particle sizes of BC on the thermal, mechanical, and acoustic characteristics of the FPU composite are investigated. The functional groups, morphology, and elemental composition of the developed FPU composites are analyzed using Fourier-transform infrared spectroscopy (FTIR), field-emission scanning electron microscopy (FESEM), and energy-dispersive X-ray (EDX) techniques. Characteristics such as density, gel fraction, and porosity were also assessed. The results reveal that the density of FPU foam increased by 4.32% and 7.83% while the porosity decreased to 50.22% and 47.05% with the addition of 0.1 wt. % of unmodified BC and modified BC with 20 wt. % APTMS, respectively, compared to unfilled FPU. Additionally, the gel fraction of the FPU matrix increases by 1.91% and 3.55% with the inclusion of 0.1 wt. % unmodified BC and modified BC with 20 wt. % APTMS, respectively. Furthermore, TGA analysis revealed that all FPU composites demonstrate improved thermal stability compared to unfilled FPU, reaching a peak value of 312.17°C for the FPU sample incorporating BC modified with 20 wt. % APTMS. Compression strength increased with 0.1 wt. % untreated BC but decreased at higher concentrations. Modifying BC with 20% APTMS resulted in an 8.23% increase in compressive strength compared to unfilled FPU. Acoustic analysis showed that the addition of BC improved absorption, and modified BC enhanced absorption characteristics of FPU, reaching Class D with a 20 mm thickness. 
BC modified with APTMS further improved acoustic properties compared to the unfilled FPU sample (Class E), with 20% modification showing the best results. These composites present promising materials for sound absorption applications and address environmental issues related to eggshell waste.

9.
J Voice ; 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39127534

ABSTRACT

PURPOSE: The purpose of the present study was to determine extrinsic laryngeal muscle activity and vocal economy during two different singing conditions (straight-tone vs. vibrato singing) over a physiologically relevant singing range. METHODS: Thirty professional singers or voice coaches participated in the study. The participants sang a sustained /a:/ vowel for approximately 5 seconds, once in the straight-tone condition and once more in vibrato. The target pitches were C3, F3, A3, C4, F4, A4, and C5. Surface electromyographic (sEMG) measures were performed in the infrahyoid (IH) and suprahyoid (SH) muscle regions. Contact quotient (CQ), sound pressure level (SPL), and fundamental frequencies were measured to derive the electroglottographic-based vocal economy parameter quasi-output cost ratio (QOCR). RESULTS: sEMG measures show that IH and SH muscle activity significantly increased with ascending pitch. IH and SH muscle activity was also significantly higher when singing in vibrato than in straight-tone. Moreover, SPL also increased with ascending pitch and when sung in vibrato. CQ increased and QOCR decreased as pitch ascended, but neither changed significantly when sung in vibrato. CONCLUSION: Singing higher pitches was generally associated with higher extrinsic laryngeal muscle activity and lower QOCR values. When comparing the two singing conditions, extrinsic laryngeal muscle activity was higher during vibrato, suggesting that the IH and SH muscles may contribute to the rhythmic pulsations of pitch modulation. Although the QOCR value did not show significant differences between the two singing conditions, a significantly higher SPL during vibrato may offer some acoustical and physiological advantages. Results also indicate that extrinsic muscle activity may not reliably measure vocal economy.

10.
Trends Hear ; 28: 23312165241264466, 2024.
Article in English | MEDLINE | ID: mdl-39106413

ABSTRACT

This study investigated sound localization abilities in patients with bilateral conductive and/or mixed hearing loss (BCHL) when listening with either one or two middle ear implants (MEIs). Sound localization was measured by asking patients to point as quickly and accurately as possible with a head-mounted LED in the perceived sound direction. Loudspeakers, positioned around the listener within a range of +73°/-73° in the horizontal plane, were not visible to the patients. Broadband (500 Hz-20 kHz) noise bursts (150 ms), roved over a 20-dB range in 10-dB steps, were presented. MEIs stimulate the ipsilateral cochlea only, and therefore the localization response was not affected by crosstalk. Sound localization was better with bilateral MEIs than in the unilateral left and unilateral right conditions. Good sound localization performance was found in the bilaterally aided hearing condition in four patients. In two patients, localization abilities equaled normal hearing performance. Interestingly, in the unaided condition, when both devices were turned off, subjects could still localize the stimuli presented at the highest sound level. Comparison with data from patients implanted bilaterally with bone-conduction devices demonstrated that localization abilities with MEIs were superior. The measurements demonstrate that patients with BCHL, who use remnant binaural cues in the unaided condition, are able to process binaural cues when listening with bilateral MEIs. We conclude that implantation with two MEIs, each stimulating only the ipsilateral cochlea without crosstalk to the contralateral cochlea, can result in good sound localization abilities, and that this topic needs further investigation.
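A common way to summarize performance in head-pointing localization tasks like this one is the slope (gain) of the stimulus-response regression: a gain near 1 indicates accurate localization, while a gain near 0 means responses are unrelated to source position. The sketch below is a generic illustration, not the study's analysis code:

```python
import numpy as np

def localization_gain(targets_deg, responses_deg):
    """Slope of the best-fit line relating response azimuth to target
    azimuth, fitted by least squares."""
    slope, _intercept = np.polyfit(targets_deg, responses_deg, 1)
    return slope

# Speaker azimuths spanning the +73/-73 degree range used above.
targets = np.array([-73.0, -40.0, 0.0, 40.0, 73.0])
perfect_gain = localization_gain(targets, targets)   # slope of 1.0
```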


Subject(s)
Acoustic Stimulation , Hearing Loss, Conductive , Hearing Loss, Mixed Conductive-Sensorineural , Ossicular Prosthesis , Sound Localization , Humans , Sound Localization/physiology , Female , Male , Middle Aged , Hearing Loss, Conductive/physiopathology , Hearing Loss, Conductive/surgery , Hearing Loss, Conductive/diagnosis , Hearing Loss, Conductive/rehabilitation , Adult , Hearing Loss, Mixed Conductive-Sensorineural/physiopathology , Hearing Loss, Mixed Conductive-Sensorineural/rehabilitation , Hearing Loss, Mixed Conductive-Sensorineural/surgery , Hearing Loss, Mixed Conductive-Sensorineural/diagnosis , Aged , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/surgery , Treatment Outcome , Prosthesis Design , Cues , Young Adult , Auditory Threshold , Bone Conduction/physiology
11.
Audiol Res ; 14(4): 674-683, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39194413

ABSTRACT

Hearing aids (HAs), especially those with sound generators (SGs), are used in the management of tinnitus. However, their comparative efficacy and long-term outcomes remain unknown. Therefore, we investigated the efficacy and long-term outcomes of tinnitus therapy using various models of HAs with SGs. We retrospectively reviewed 666 patients with chronic tinnitus characterized by persistent symptoms for >6 months. At the initial visit, the patients received educational counselling on tinnitus (Utsunomiya method) and completed a comprehensive questionnaire comprising the tinnitus handicap inventory, a visual analog scale, the state-trait anxiety inventory, and the emotional intelligence scale. The scores were compared among various models of HAs with SGs and SGs alone. The patients underwent follow-ups for up to 2 years. Our results indicated that tinnitus retraining therapy using SGs and conventional HAs effectively managed chronic tinnitus. The prolonged use of conventional HAs appeared to exacerbate tinnitus symptoms, emphasizing the superior long-term effectiveness of HAs with SGs, particularly ZEN (Widex ZEN, WS Audiology, Lynge, Denmark). Our findings indicate that conventional HAs are useful in the first year, but their prolonged use may exacerbate tinnitus symptoms, whereas HAs with SGs are effective in the long term. Future studies should account for variations in tinnitus treatment effects based on the type of sound employed.

12.
Indian J Otolaryngol Head Neck Surg ; 76(4): 3088-3093, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39130335

ABSTRACT

Aim: The aims of the study were (1) to investigate the development of identification of environmental sounds in children with cochlear implants (CIs) within four months of switch-on (i.e., at 0, 2, and 4 months) and (2) to examine the effect of family type on the perception of environmental sounds. Materials and methods: A longitudinal study design was utilized with a total of 18 children using CIs within the chronological age range of 3 to 7 years. All participants underwent a closed-set test of Environmental Sound Perception (ESP) to measure the longitudinal outcomes of ESP at 0 (within 1 week of switch-on), 2 months, and 4 months of implant age. They were asked to identify the sounds by pointing at the picture representing each sound. Results: Results using one-way and two-way ANOVA demonstrated that at 0 months of implant age, the scores were 0%. At 2 months of implant age the scores ranged from 0 to 25%, and at 4 months the scores ranged from 0 to 40%. A statistically significant improvement in ESP was observed at every 2 months of testing from 0 to 4 months of implant age. However, family type revealed no significant differences in performance across implant age. Conclusion: The current study reveals that identification of environmental sounds is one of the foremost benefits and early outcomes of CI in children. The perception of environmental sounds develops constantly but gradually with increasing implant age. This information is useful for predicting the performance of CI during rehabilitation and for setting therapy goals accordingly.

13.
Intensive Crit Care Nurs ; 86: 103777, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39182325

ABSTRACT

BACKGROUND: Hospitalization in the ICU can have long-term physiological and psychological impacts, affecting the functional recovery and quality of life of post-ICU patients. Despite systematic reviews showing the impact of music interventions on physiological and psychological outcomes in ICU patients, their applicability and effectiveness in the post-ICU context remain unclear. AIM: This review aimed to summarize: a) the types and characteristics of music/sound interventions used in the rehabilitation of ICU patients, b) evidence on the feasibility, safety, and acceptability of sound and music interventions for post-ICU survivors, c) the types of post-ICU outcomes explored and the effects of sound and music interventions on any type of outcome in post-ICU survivors, and d) potential mechanisms or theoretical frameworks underlying the effects of sound and music interventions. METHOD: We combined current systematic review search methods with a critical narrative approach to synthesize a diverse body of evidence. RESULTS: Results showed that music interventions positively affect the psychological well-being and health outcomes of post-ICU patients. Outcomes included improvements in stress, anxiety, mood, movement, sleep, and pain, despite differences in patient populations and intervention design. No safety concerns were reported. The identified theoretical frameworks described physiological, neurobiological, and/or psycho-social pathways as key mediators; however, these mechanisms are not completely understood. CONCLUSION: Research evidence supports the positive effects of music interventions in post-ICU patients. Further experimental studies are required, especially in adult post-ICU populations, to elucidate the characteristics, components, feasibility, and long-term effects of sound/music interventions. IMPLICATIONS FOR PRACTICE: 1. Music interventions aid post-ICU patients' recovery, benefiting stress, anxiety, PTSD, mood, movement, sleep, and pain. 2. Integrating theoretical frameworks into music interventions can expand outcome measures to include physiological markers alongside psychological ones, improving quality of life. 3. Further rigorous interventional studies are required to identify the effectiveness of sound and music interventions in post-ICU patients.

14.
Bioinspir Biomim ; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39178899

ABSTRACT

Like other odontocetes, Risso's dolphins actively emit clicks and passively listen to the echoes during echolocation. However, the head anatomy of Risso's dolphins differs from that of other odontocetes by a unique vertical cleft along the anterior surface of the forehead and a differently-shaped lower jaw. In this study, 3D finite-element sound reception and production models were constructed based on CT data of a deceased Risso's dolphin. Our results were verified by finding good agreement with experimental measurements of hearing sensitivity. Moreover, the acoustic pathway for sounds to travel from the seawater into the dolphin's tympanoperiotic complexes (TPCs) was computed. The gular reception mechanism, previously discovered in Delphinus delphis and Ziphius cavirostris, was also found in this species. The received sound pressure levels and relative displacement at TPC surfaces were compared between the cases with and without the mandibular fats or mandible. The results demonstrate a pronounced wave-guiding role of the mandibular fats and a limited bone-conductor role of the mandible. For sound production modelling, we digitally filled the cleft with neighbouring soft tissues, creating a hypothetical "cleftless" head. Comparison between sound travelling through a "cleftless" head vs. an original head indicates that the distinctive cleft plays a limited role in biosonar sound propagation.

15.
Front Neural Circuits ; 18: 1430598, 2024.
Article in English | MEDLINE | ID: mdl-39184455

ABSTRACT

Auditory space has been conceptualized as a matrix of systematically arranged combinations of binaural disparity cues that arise in the superior olivary complex (SOC). The computational code for interaural time and intensity differences utilizes excitatory and inhibitory projections that converge in the inferior colliculus (IC). The challenge is to determine the neural circuits underlying this convergence and to model how the binaural cues encode location. It has been shown that midbrain neurons are largely excited by sound from the contralateral ear and inhibited by sound leading at the ipsilateral ear. In this context, ascending projections from the lateral superior olive (LSO) to the IC have been reported to be ipsilaterally glycinergic and contralaterally glutamatergic. This study used CBA/CaH mice (3-6 months old) and applied unilateral retrograde tracing techniques into the IC in conjunction with immunocytochemical methods with glycine and glutamate transporters (GlyT2 and vGLUT2, respectively) to analyze the projection patterns from the LSO to the IC. Glycinergic and glutamatergic neurons were spatially intermixed within the LSO, and both types projected to the IC. For GlyT2 and vGLUT2 neurons, the average percentage of ipsilaterally and contralaterally projecting cells was similar (ANOVA, p = 0.48). A roughly equal number of GlyT2 and vGLUT2 neurons did not project to the IC. The somatic size and shape of these neurons match the descriptions of LSO principal cells. A minor but distinct population of small (< 40 µm2) neurons that labeled for GlyT2 did not project to the IC; these cells emerge as candidates for inhibitory local circuit neurons. Our findings indicate a symmetric and bilateral projection of glycine and glutamate neurons from the LSO to the IC. 
The differences between our results and those from previous studies suggest that species and habitat differences have a significant role in mechanisms of binaural processing and highlight the importance of research methods and comparative neuroscience. These data will be important for modeling how excitatory and inhibitory systems converge to create auditory space in the CBA/CaH mouse.


Subject(s)
Auditory Pathways , Glutamic Acid , Glycine Plasma Membrane Transport Proteins , Glycine , Inferior Colliculi , Mice, Inbred CBA , Superior Olivary Complex , Animals , Glycine/metabolism , Glycine Plasma Membrane Transport Proteins/metabolism , Mice , Inferior Colliculi/physiology , Inferior Colliculi/metabolism , Inferior Colliculi/cytology , Auditory Pathways/physiology , Auditory Pathways/metabolism , Glutamic Acid/metabolism , Superior Olivary Complex/physiology , Superior Olivary Complex/metabolism , Male , Vesicular Glutamate Transport Protein 2/metabolism , Neurons/metabolism , Neurons/physiology
16.
Cureus ; 16(7): e65394, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39184734

ABSTRACT

The assessment of auscultation using a stethoscope is unsuitable for continuous monitoring. Therefore, we developed a novel acoustic monitoring system that continuously, objectively, and visually evaluates respiratory sounds. In this report, we assess the usefulness of our revised system in a ventilated extremely low birth weight infant (ELBWI) for the diagnosis of pulmonary atelectasis and evaluation of treatment by lung lavage. A female infant was born at 24 weeks of gestational age with a birth weight of 636 g after an emergency cesarean section. The patient received invasive mechanical ventilation immediately after birth in our neonatal intensive care unit (NICU). After obtaining informed consent, we monitored her respiratory status using the respiratory-sound monitoring system by attaching a sound collection sensor to the right anterior chest wall. On day 26, lung-sound spectrograms showed that the breath sounds were attenuated as hypoxemia progressed. Finally, chest radiography confirmed the diagnosis of pulmonary atelectasis. To relieve the atelectasis, surfactant lavage was performed, after which the lung-sound spectrograms returned to normal. Hypoxemia and chest radiographic findings improved significantly. On day 138, the patient was discharged from the NICU without complications. The continuous respiratory-sound monitoring system enabled the visual, quantitative, and noninvasive detection of acute regional lung abnormalities at the bedside. We therefore believe that this system can resolve several problems associated with neonatal respiratory management and save lives.

17.
Environ Pollut ; 360: 124709, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39128604

ABSTRACT

A global increase in offshore windfarm development is critical to our renewable energy future. Yet, widespread construction plans have generated substantial concern about impacts on co-occurring organisms and the communities they form. Pile driving construction, prominent in offshore windfarm development, produces among the highest-amplitude sounds in the ocean, creating widespread concern for a diverse array of taxa. However, studies addressing ecologically key species are generally lacking, and most research is disparate, failing to integrate across response types (e.g., behavior, physiology, and ecological interactions), particularly in situ. The lack of integrative field studies presents major challenges to understanding or mitigating the actual impacts of offshore wind development. Here, we examined critical behavioral, physiological, and antipredator impacts of actual pile driving construction on the giant sea scallop (Placopecten magellanicus). Benthic taxa, including bivalves, are of particular concern because they are sound-sensitive, cannot move appreciable distances away from the stressor, and support livelihoods as one of the world's most economically and socially important fisheries. Overall, pile driving sound impacted scallops across a series of behavioral and physiological assays. Sound-exposed scallops consistently reduced their valve opening (22%), resulting in lowered mantle water oxygen levels available to the gills. Repeated and rapid valve adductions led to a 56% increase in metabolic rates relative to pre-exposure baselines. Consequently, in response to predator stimuli, sound-exposed scallops displayed a suite of significantly weaker antipredator behaviors, including fewer swimming events and shorter time-to-exhaustion. These results show aquatic construction activities can induce metabolic and ecologically relevant changes in a key benthic animal.
As offshore windfarm construction accelerates globally, our field-based study highlights that spatial overlap with benthic taxa may cause substantial metabolic changes, alter important fisheries resources, and ultimately could lead to increased predation.

18.
IEEE J Transl Eng Health Med ; 12: 550-557, 2024.
Article in English | MEDLINE | ID: mdl-39155923

ABSTRACT

The objective of this study was to develop a sound recognition-based cardiopulmonary resuscitation (CPR) training system that is accessible, cost-effective, easy to maintain and provides accurate CPR feedback. Beep-CPR, a novel device with accordion squeakers that emit high-pitched sounds during compression, was developed. The sounds emitted by Beep-CPR were recorded using a smartphone, segmented into 2-second audio fragments, and then transformed into spectrograms. A total of 6,065 spectrograms were generated from approximately 40 minutes of audio data, which were then randomly split into training, validation, and test datasets. Each spectrogram was matched with the depth, rate, and release velocity of the compression measured at the same time interval by the ZOLL X Series monitor/defibrillator. Deep learning models utilizing spectrograms as input were trained using transfer learning based on EfficientNet to predict the depth (Depth model), rate (Rate model), and release velocity (Recoil model) of compressions. Results: The mean absolute error (MAE) for the Depth model was 0.30 cm (95% confidence interval [CI]: 0.27-0.33). The MAE of the Rate model was 3.6/min (95% CI: 3.2-3.9). For the Recoil model, the MAE was 2.3 cm/s (95% CI: 2.1-2.5). External validation of the models demonstrated acceptable performance across multiple conditions, including the utilization of a newly manufactured device, a fatigued device, and evaluation in an environment with altered spatial dimensions. We have developed a novel sound recognition-based CPR training system that accurately measures compression quality during training. Significance: Beep-CPR is a cost-effective and easy-to-maintain solution that can improve the efficacy of CPR training by facilitating decentralized at-home training with performance feedback.
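The front end of the pipeline described above (recording → fixed 2-second fragments → one spectrogram per fragment, fed to a CNN regressor) can be sketched as follows. The sampling rate, FFT size, and overlap here are illustrative assumptions, not the authors' actual preprocessing parameters.

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_spectrograms(audio, fs=8000, segment_s=2.0):
    """Split a recording into non-overlapping 2-second fragments and
    return one log-power spectrogram (freq x time) per fragment,
    suitable as 2-D input images for a CNN regression model."""
    seg_len = int(segment_s * fs)
    n_segments = len(audio) // seg_len  # drop any trailing partial fragment
    specs = []
    for i in range(n_segments):
        chunk = audio[i * seg_len:(i + 1) * seg_len]
        _, _, Sxx = spectrogram(chunk, fs=fs, nperseg=256, noverlap=128)
        specs.append(10 * np.log10(Sxx + 1e-12))  # log scale, as is typical
    return np.stack(specs)  # shape: (n_segments, n_freq_bins, n_frames)
```

Each returned array would then be resized and normalized to the input shape expected by the pretrained backbone (EfficientNet in the paper), with the per-fragment depth, rate, and recoil measurements from the reference monitor serving as regression targets.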


Subject(s)
Cardiopulmonary Resuscitation , Cardiopulmonary Resuscitation/education , Cardiopulmonary Resuscitation/instrumentation , Humans , Sound , Sound Spectrography , Signal Processing, Computer-Assisted/instrumentation , Deep Learning , Smartphone , Equipment Design
19.
Front Psychol ; 15: 1357975, 2024.
Article in English | MEDLINE | ID: mdl-39135868

ABSTRACT

Introduction: This study aimed to explore the arousal and valence that people experience in response to Hangul phonemes based on the gender of an AI speaker, through a comparison of Korean and Chinese cultures. Methods: To achieve this, 42 Hangul phonemes, combinations of three Korean vowels and 14 Korean consonants, were used to explore cultural differences in arousal, valence, and the six foundational emotions based on the gender of an AI speaker. A total of 136 Korean and Chinese women were recruited and randomly assigned to one of two conditions based on voice gender (man or woman). Results and discussion: This study revealed significant differences in arousal levels between Korean and Chinese women when exposed to male voices. Specifically, Chinese women exhibited clear differences in emotional perceptions of male and female voices in response to voiced consonants. These results confirm that arousal and valence may differ with articulation types and vowels due to cultural differences and that voice gender can affect perceived emotions. This principle can be used as evidence for sound symbolism and has practical implications for voice gender and branding in AI applications.

20.
Article in English | MEDLINE | ID: mdl-39137266

ABSTRACT

BACKGROUND: Within cohorts of children with autism spectrum disorder (ASD) there is considerable variation in terms of language ability. In the past, it was believed that children with ASD either had delayed articulation and phonology skills or excelled in those areas compared to other language domains. Very little is known about speech sound ability in relation to language ability and non-verbal ability in Swedish preschool children with ASD. AIM: The current study aimed to describe language variation in a group of 4-6-year-old children with ASD, focusing on in-depth analyses of speech sound error patterns with and without non-phonological language disorder and concomitant non-verbal delays. METHOD & PROCEDURES: We examined and analysed the speech sound skills (including consonant inventory, percentage of correct consonants and speech sound error patterns) in relation to receptive language skills in a sample of preschool children who had screened positive for ASD in a population-based screening at 2.5 years of age. Seventy-three children diagnosed with ASD participated and were divided into subgroups based on their receptive language (i.e., non-phonological language) and non-verbal abilities. OUTCOMES & RESULTS: The subgroup division revealed that 29 children (40%) had language delay/disorder without concurrent non-verbal general cognitive delay (ALD), 27 children (37%) had language delay/disorder with non-verbal general cognitive delay (AGD), and 17 children (23%) had language and non-verbal abilities within the normal range (ALN). Results revealed that children with ALD and children with AGD both had atypical speech sound error patterns significantly more often than the children with ALN. CONCLUSIONS & IMPLICATIONS: This study showed that many children who had screened positive for ASD before age 3 years - with or without non-verbal general cognitive delays - had deficits in language as well as in speech sound ability. 
However, individual differences were considerable. Our results point to speech sound error patterns as a potential clinical marker for language problems (disorder/delay) in preschool children with ASD. WHAT THIS PAPER ADDS: What is already known on the subject Children with autism spectrum disorder (ASD) have deficits in social communication, restricted interests and repetitive behaviour. They show very considerable variation in both receptive and expressive language abilities. Previously, articulation and phonology were viewed as either delayed in children with ASD or superior compared with other (non-phonological) language domains. What this paper adds to existing knowledge Children with ASD and language disorders also have problems with speech sound error patterns. What are the potential or actual clinical implications of this work? About 75% of children with ASD experience language delays/disorders, as well as speech sound problems, related to speech sound error patterns. Understanding/acknowledging these phonological patterns and their implications can help in the diagnosis and intervention of speech sound disorders in children with ASD. Direct intervention targeting phonology might lead to language gains, but more research is needed.
