Results 1 - 15 of 15
1.
J Acoust Soc Am; 153(3): 1776, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37002110

ABSTRACT

In recent years, experimental studies have demonstrated that malfunction of the inner hair cells and their synapses to the auditory nerve is a significant contributor to hearing loss (HL). This study presents a detailed biophysical model of the inner hair cells embedded in an end-to-end computational model of the auditory pathway, with an acoustic signal as input and a prediction of human audiometric thresholds as output. The contribution of the outer hair cells is included in the mechanical model of the cochlea. Different types of HL were simulated by changing mechanical and biochemical parameters of the inner and outer hair cells. The predicted thresholds yielded common audiograms of hearing impairment. Outer hair cell damage could introduce threshold shifts of up to 40 dB only at mid-to-high frequencies. Inner hair cell damage affects low and high frequencies differently: all types of inner hair cell deficits yielded a maximum of 40 dB HL at low frequencies, and only a significant reduction in the number of cilia of the inner hair cells yielded HL of up to 120 dB at high frequencies. Sloping audiograms can be explained by a combination of gradual changes in the number of cilia of the inner and outer hair cells along the cochlear partition from apex to base.


Subject(s)
Deafness; Hearing Loss; Humans; Hair Cells, Auditory, Inner/physiology; Auditory Threshold/physiology; Cochlea; Audiometry; Hair Cells, Auditory, Outer/physiology
2.
J Acoust Soc Am; 151(6): 3719, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35778181

ABSTRACT

Unmanned aerial vehicles (UAVs) are rapidly advancing and becoming ubiquitous in applications ranging from parcel delivery to people transportation. As UAV markets expand, the increased acoustic nuisance to the population becomes a more acute problem. Previous aircraft noise assessments have highlighted the need for a psychoacoustic metric to quantify human auditory perception. This study presents a framework for estimating the probability of auditory detection of a propeller-based UAV by a listener on the ground in a real-life scenario. The detection probability is derived using the free-field measured acoustic background and estimating the UAV detection threshold according to a physiological model of the auditory pathway. The method is demonstrated with results of an exemplar measurement in an anechoic environment with a single two- and a single five-bladed propeller. It was found that the auditory detection probability is primarily affected by the background noise level, whereas the number of blades is a less significant parameter. The significance of the proposed method lies in providing a quantitative evaluation of the probability of auditory detection of a UAV on the ground in the presence of a given soundscape. The results are of practical significance, since the method can aid anyone planning hovering flight operations.
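The mapping from received UAV level and masked threshold to a detection probability can be sketched with a generic psychometric function. The logistic form and the 2 dB slope below are illustrative assumptions, not the study's physiological model:

```python
import math

def detection_probability(uav_level_db: float, masked_threshold_db: float,
                          slope_db: float = 2.0) -> float:
    """Logistic psychometric function: probability of auditory detection of
    the UAV given its received level and the background-masked threshold.
    The logistic shape and the 2 dB slope are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-(uav_level_db - masked_threshold_db) / slope_db))

# At the masked threshold detection is at chance (p = 0.5); well above it,
# detection is near certain.
p_at_threshold = detection_probability(35.0, 35.0)
p_above = detection_probability(45.0, 35.0)
```

Raising the background noise raises `masked_threshold_db`, which shifts the whole curve and lowers the detection probability for a given UAV level, matching the study's qualitative finding.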


Subject(s)
Aircraft; Remote Sensing Technology; Acoustics; Humans; Probability; Remote Sensing Technology/methods; Unmanned Aerial Devices
3.
Assist Technol; 34(1): 11-19, 2022 Jan 2.
Article in English | MEDLINE | ID: mdl-31577190

ABSTRACT

This research focused on examining the sonification properties that can enable people who are blind to distinguish and identify different sounds. The study included 10 participants, all of whom were examined individually. They listened to a sonified scenario generated by an agent-based NetLogo computer model of a gas particle in a container. The participants were asked to identify the different sounds, as opposed to being tested on their ability to identify the values the sounds represented or to understand the scientific phenomenon from hearing the model scenario. The research found that, with regard to complexity levels, the participants were able to identify stimuli that included up to four sounds. The analyses reveal that the participants displayed heightened ability in the second trial. The long-term practical benefits of this research may well influence program developers in education and rehabilitation for people who are blind. A learning environment based on sonified feedback can address a central need among people who are blind, providing access to learning environments equivalent to those available to sighted users and allowing independent interaction with exploratory materials and control of the learning process.


Subject(s)
Auditory Perception; Hearing; Blindness; Humans; Learning; Sound
4.
PLoS Comput Biol; 13(1): e1005338, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28099436

ABSTRACT

Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus to its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing-fundamental stimulus with resolved or unresolved harmonics, or a low- and a high-level stimulus with the same spectral content, all give rise to the same pitch percept. In contrast, the AN representations of these different stimuli are not invariant to these features. In fact, due to the saturation and nonlinearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. It is therefore difficult to explain how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for an arbitrary stimulus. The method is based on a technique known as sparse coding (SC): pitch cues are represented by a few spatiotemporal atoms (templates) chosen from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a nonzero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and unresolved harmonics, missing-fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
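The dictionary-of-atoms idea can be illustrated with a toy example. The sketch below is not the paper's model, which operates on AN spatiotemporal activity through a cochlear model; it merely shows, under simplified assumptions (waveform-domain atoms, one matching step instead of full sparse coding), how correlating a stimulus with harmonic templates recovers a missing-fundamental pitch:

```python
import numpy as np

fs = 8000                      # sampling rate (Hz), illustrative
t = np.arange(800) / fs        # 100 ms of signal

def harmonic_atom(f0: float, n_harmonics: int = 5) -> np.ndarray:
    """Unit-norm dictionary atom: the first few harmonics of f0."""
    atom = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))
    return atom / np.linalg.norm(atom)

candidate_f0 = np.arange(100.0, 400.0, 10.0)      # dictionary of pitch candidates
D = np.stack([harmonic_atom(f0) for f0 in candidate_f0])

# Missing-fundamental stimulus: harmonics 2-5 of 200 Hz, no energy at 200 Hz.
x = sum(np.sin(2 * np.pi * k * 200.0 * t) for k in range(2, 6))

coeffs = D @ x                                    # correlate with every atom
estimated_pitch = candidate_f0[np.argmax(np.abs(coeffs))]   # -> 200.0 Hz
```

The 200 Hz atom wins because it shares the most harmonics with the stimulus, even though the stimulus contains no energy at 200 Hz itself.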


Subject(s)
Cochlear Nerve/physiology; Models, Neurological; Nerve Fibers/physiology; Pitch Perception/physiology; Acoustic Stimulation; Computational Biology; Humans; Music
5.
Arch Womens Ment Health; 20(1): 139-147, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27796596

ABSTRACT

Body image disturbances are a prominent feature of eating disorders (EDs). Our aim was to test and evaluate a computerized assessment of body image (CABI), to compare the body image disturbances in different ED types, and to assess the factors affecting body image. The body image of 22 inpatients with restricting anorexia nervosa (AN-R), 22 with binge/purge AN (AN-B/P), 20 with bulimia nervosa (BN), and 41 healthy controls was assessed using the Contour Drawing Rating Scale (CDRS); the CABI, which simulated the participants' self-image at different levels of weight change; and the Eating Disorder Inventory-2-Body Dissatisfaction (EDI-2-BD) scale. Severity of depression and anxiety was also assessed. Significant differences were found among the three scales assessing body image, although most of their dimensions differentiated between patients with EDs and controls. Our findings support the use of the CABI in comparing body image disturbances in patients with EDs vs. controls. Moreover, the use of different assessment tools allows for a better understanding of the differences in body image disturbances across ED types.


Subject(s)
Anorexia Nervosa/psychology; Body Image; Bulimia Nervosa/psychology; Computers; Self Concept; Adolescent; Adult; Anxiety/complications; Anxiety/psychology; Case-Control Studies; Depression/complications; Depression/psychology; Female; Humans; Image Processing, Computer-Assisted; Israel; Severity of Illness Index; Surveys and Questionnaires; Young Adult
6.
Handb Clin Neurol; 129: 649-65, 2015.
Article in English | MEDLINE | ID: mdl-25726295

ABSTRACT

Multiple sclerosis (MS) is both a focal inflammatory and a chronic neurodegenerative disease. The focal inflammatory component is characterized by destruction of central nervous system myelin, including in the spinal cord; as such, it can impair any central neural system, including the auditory system. Although auditory complaints in MS patients are rare compared to complaints involving other senses, such as vision and proprioception, auditory tests of precise neural timing are never "silent": whenever focal MS lesions are detected involving the pontine auditory pathway, auditory tests requiring precise neural timing are abnormal, while auditory functions not requiring such precise timing are often normal. Azimuth sound localization is accomplished by comparing the timing and loudness of the sound at the two ears; hence, tests of azimuth sound localization must obligatorily involve the central nervous system, and particularly the brainstem. Whenever a focal lesion was localized to the pontine auditory pathway, timing tests were always abnormal, but loudness tests were not. Moreover, a timing test that included only high-frequency sounds was very often abnormal even when there was no detectable focal MS lesion involving the pontine auditory pathway. This test may be a marker for the chronic neurodegenerative aspect of MS and, as such, could complement magnetic resonance imaging in monitoring that aspect of the disease. Studies of MS brainstem lesion location and auditory function have led to advances in understanding how the human brain processes sound: binaural sounds are processed independently for time and level in a two-stage process, with the first stage at the level of the superior olivary complex (SOC) and the second at a level rostral to the SOC.


Subject(s)
Hearing Disorders/etiology; Multiple Sclerosis/complications; Auditory Pathways/pathology; Functional Laterality; Humans; Multiple Sclerosis/pathology
7.
Comput Intell Neurosci; 2014: 575716, 2014.
Article in English | MEDLINE | ID: mdl-24799888

ABSTRACT

The minimum audible angle test, commonly used for evaluating human localization ability, depends on the interaural time delay, interaural level differences, and spectral information about the acoustic stimulus. These physical properties are estimated at different stages along the brainstem auditory pathway. The interaural time delay is ambiguous at certain frequencies; thus, confusion arises as to the direction of the source at these frequencies. It is assumed that in a typical minimum audible angle experiment the brain acts as an unbiased optimal estimator, so that human performance can be predicted by deriving optimal lower bounds. Two types of lower bounds are tested: the Cramer-Rao and the Barankin. The Cramer-Rao bound only takes into account estimates near the true direction of the stimulus; the Barankin bound also considers other possible directions that arise from the ambiguous phase information. These lower bounds are derived at the output of the auditory nerve and of the superior olivary complex, where binaural cues are estimated. Agreement with human experimental data was obtained only when the superior olivary complex was considered and the Barankin lower bound was used. This result suggests that sound localization is estimated by the auditory nuclei using ambiguous binaural information.
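For intuition, the Cramer-Rao bound for a delay parameter can be computed directly at the signal level. The sketch below is a simplified analogue of the study's derivation, which works at the auditory nerve and SOC outputs; the sampling rate, tone frequency, and noise level are illustrative choices, and the Barankin refinement for ambiguous delays is not shown:

```python
import numpy as np

fs = 100_000                   # sampling rate (Hz), illustrative
t = np.arange(1000) / fs       # 10 ms observation window
f = 500.0                      # a low-frequency tone (phase-unambiguous)
sigma = 0.1                    # white-noise level, illustrative

s = np.sin(2 * np.pi * f * t)  # known signal whose delay (the ITD) is estimated
ds = np.gradient(s, 1 / fs)    # derivative with respect to time

# Fisher information for a pure delay parameter in white Gaussian noise:
#   I(tau) = (1 / sigma^2) * integral of s'(t)^2 dt
fisher = np.sum(ds ** 2) / fs / sigma ** 2
crb_std_us = 1e6 / np.sqrt(fisher)   # CR lower bound on ITD std, in microseconds
```

Because the Fisher information grows with the squared signal derivative, higher-frequency tones tighten the bound, but at high frequencies the phase ambiguity the abstract describes makes the Cramer-Rao bound optimistic, which is exactly where the Barankin bound differs.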


Subject(s)
Action Potentials/physiology; Auditory Pathways/cytology; Brain Stem/physiology; Neurons/physiology; Sound Localization/physiology; Acoustic Stimulation; Auditory Pathways/physiology; Cues; Humans; Predictive Value of Tests; Psychoacoustics; Reaction Time/physiology; Stochastic Processes
8.
J Acoust Soc Am; 132(3): 1718-31, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978899

ABSTRACT

A common complaint of the hearing impaired is the inability to understand speech in noisy environments, even with hearing assistive devices. Only a few single-channel algorithms have significantly improved speech intelligibility in noise for hearing-impaired listeners. The current study introduces a cochlear noise reduction algorithm based on a cochlear representation of acoustic signals and real-time derivation of a binary speech mask. The contribution of the algorithm to enhancing word recognition in noise was evaluated on a group of 42 normal-hearing subjects, 35 hearing-aid users, 8 cochlear implant recipients, and 14 participants with bimodal devices. Recognition scores for Hebrew monosyllabic words embedded in Gaussian noise at several signal-to-noise ratios (SNRs) were obtained with processed and unprocessed signals. The algorithm was not effective for the normal-hearing participants, but it yielded a significant improvement in some of the hearing-impaired subjects under different listening conditions; its most impressive benefit appeared among cochlear implant recipients. An improvement of more than 20% in the recognition score for noisy words was obtained by 12, 16, and 26 hearing-impaired participants at SNRs of 30, 24, and 18 dB, respectively. The algorithm has the potential to improve speech intelligibility in background noise, yet further research is required to improve its performance.
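The binary-mask principle can be sketched with the classical ideal binary mask, which assumes access to the clean speech. The study's algorithm instead derives the mask in real time from a cochlear representation, so this is only an illustration of the masking step, with illustrative frame size and threshold:

```python
import numpy as np

def ideal_binary_mask(clean: np.ndarray, noise: np.ndarray,
                      frame: int = 256, snr_db: float = 0.0) -> np.ndarray:
    """Frame-by-frame FFT masking: keep the bins whose local speech-to-noise
    power ratio exceeds snr_db, zero the rest, and resynthesize."""
    n_frames = len(clean) // frame
    out = np.zeros(n_frames * frame)
    thresh = 10.0 ** (snr_db / 10.0)
    for i in range(n_frames):
        sl = slice(i * frame, (i + 1) * frame)
        speech = np.fft.rfft(clean[sl])
        noi = np.fft.rfft(noise[sl])
        mask = np.abs(speech) ** 2 > thresh * np.abs(noi) ** 2
        out[sl] = np.fft.irfft(mask * np.fft.rfft(clean[sl] + noise[sl]))
    return out
```

Zeroing the noise-dominated bins removes most of the noise energy while keeping the speech-dominated bins intact; the practical difficulty, and the study's contribution, is estimating such a mask without access to the clean signal.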


Subject(s)
Algorithms; Cochlear Implants; Correction of Hearing Impairment; Hearing Aids; Noise/adverse effects; Perceptual Masking; Persons With Hearing Impairments/rehabilitation; Recognition, Psychology; Signal Processing, Computer-Assisted; Speech Intelligibility; Speech Perception; Acoustic Stimulation; Adolescent; Adult; Aged; Analysis of Variance; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Comprehension; Correction of Hearing Impairment/psychology; Female; Humans; Male; Middle Aged; Persons With Hearing Impairments/psychology; Signal-To-Noise Ratio; Sound Spectrography; Time Factors; Young Adult
9.
Neural Comput; 21(9): 2524-53, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19548801

ABSTRACT

Neural information is characterized by sets of spiking events that travel within the brain through neuron junctions that receive, transmit, and process streams of spikes. Coincidence detection is one way to describe the functionality of a single neural cell. This letter presents an analytical derivation of the output stochastic behavior of a coincidence detector (CD) cell whose stochastic inputs behave as a nonhomogeneous Poisson process (NHPP) with both excitatory and inhibitory inputs. The derivation, which is based on an efficient breakdown of the cell into basic functional elements, results in an output process whose behavior can be approximated as an NHPP as long as the coincidence interval is much smaller than the refractory period of the cell's inputs. Intuitively, the approximation is valid as long as the processing rate is much faster than the incoming information rate. This type of modeling is a simplified but very useful description of neurons, since it enables analytical derivations. The statistical properties of a single CD cell's output make it possible to integrate and analyze complex neural cells in a feedforward network using the methodology presented here. Accordingly, basic biological characteristics of neural activity are demonstrated, such as a decrease in the spontaneous rate at higher brain levels and an improved signal-to-noise ratio for harmonic input signals.
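The coincidence-detection setup can be illustrated with a small simulation. The sketch below uses homogeneous Poisson inputs (a special case of the NHPP), excitatory inputs only, and an illustrative coincidence window, and checks the simulated output rate against a first-order analytical approximation; it is not the letter's full derivation:

```python
import numpy as np

rng = np.random.default_rng(42)
dt = 1e-4                      # time bin (s), illustrative resolution

def poisson_train(rate_hz: float, duration_s: float) -> np.ndarray:
    """Bernoulli approximation of a Poisson spike train as a 0/1 array
    (a homogeneous special case of an NHPP)."""
    n_bins = int(round(duration_s / dt))
    return (rng.random(n_bins) < rate_hz * dt).astype(int)

def coincidence_output(a: np.ndarray, b: np.ndarray,
                       window_bins: int = 2) -> np.ndarray:
    """The CD cell fires in a bin where input A spikes and input B has
    spiked within the preceding coincidence window."""
    recent_b = np.convolve(b, np.ones(window_bins), mode="same") > 0
    return a * recent_b

duration = 10.0
a = poisson_train(100.0, duration)
b = poisson_train(100.0, duration)
out_rate = coincidence_output(a, b).sum() / duration

# First-order analytical approximation: rate_out ~ rate_a * rate_b * window.
predicted = 100.0 * 100.0 * (2 * dt)
```

The simulated output rate falls near the analytical product approximation, which only holds when the coincidence window is short relative to the input interspike intervals, mirroring the validity condition stated in the abstract.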


Subject(s)
Models, Neurological; Nerve Net/physiology; Neurons/physiology; Stochastic Processes; Action Potentials/physiology; Animals; Neural Inhibition/physiology; Neural Pathways/physiology
10.
J Acoust Soc Am; 125(3): 1567-83, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19275315

ABSTRACT

In the mammalian auditory brainstem, two types of coincidence detector cells are involved in binaural localization: excitatory-excitatory (EE) and excitatory-inhibitory (EI). Using statistics derived from EE and EI spike trains, binaural discrimination abilities for single tones were predicted. The minimum audible angle (MAA), as well as the just noticeable differences of interaural time delay (ITD) and interaural level difference (ILD), were analytically derived for both EE and EI cells on the basis of two possible neural coding schemes: rate coding, which ignores spike timing information, and all-information coding (AIN), which takes spike timing into account. Simulation results for levels below saturation were qualitatively compared to experimental data, which yielded the following conclusions: (1) ITD is primarily estimated by EE cells with AIN coding when the ipsilateral auditory input exhibits a phase delay between 40 and 65 degrees. (2) For ILD, both AIN and rate coding provide identical performance; it is most likely that ILD is primarily estimated by EI cells according to rate coding, and for ILD the information derived from spike timing is redundant. (3) For MAA estimation, the derivation should take into account ambiguous directions of a source signal in addition to its true direction.


Subject(s)
Auditory Perception/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Stochastic Processes; Acoustics; Cochlear Nerve/physiology; Cues; Humans; Models, Biological
11.
Int J Audiol; 46(3): 119-27, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17365065

ABSTRACT

The ear vulnerability of a group of combat soldiers was tested. The study initially included 84 soldiers and lasted two years, during which the soldiers were exposed to the noise of small-arms fire. Measurements included transient-evoked otoacoustic emissions (TEOAE) and pure-tone audiometry; they were initially performed prior to the soldiers' basic training and repeated several times during the study. In general, TEOAE levels (Em) decreased over time. About 57% of the ears developed a slight hearing loss (SHL) after two years of noise exposure, where SHL is defined as a threshold shift of 10 dB or greater in at least one of the audiometric frequencies 1000, 2000, 3000, 4000, or 6000 Hz. About 63% of the tested ears that had a medium TEOAE level (1 < Em < 8 dB SPL) developed SHL, whereas among ears with a high TEOAE level (Em >= 8 dB SPL) less than 30% developed SHL. We suggest a prediction of ear vulnerability on the basis of Em prior to noise exposure.


Subject(s)
Audiometry/methods; Noise/adverse effects; Otoacoustic Emissions, Spontaneous/physiology; Acoustic Stimulation/methods; Adolescent; Auditory Threshold/physiology; Follow-Up Studies; Humans; Male; Military Personnel/statistics & numerical data; Models, Biological
12.
J Basic Clin Physiol Pharmacol; 17(3): 173-85, 2006.
Article in English | MEDLINE | ID: mdl-17598308

ABSTRACT

The firing noise of small arms is characterized by a rapid change in pressure and a sharp peak in sound pressure level of 155-170 dB SPL. In the present study, we examined the behavior of transient-evoked otoacoustic emissions (TEOAE) in a group of soldiers exposed for the first time to the noise of small-arms fire during their basic training. The study included 15 soldiers and lasted 6 months. Measurements were performed before and immediately after two firing sessions, 2 weeks apart, and again after 6 months. There was no significant difference between the audiograms measured prior to the noise exposure and those measured after 6 months. Wide-band TEOAE levels decreased over time, but the most significant decrease occurred between the last two sessions, which were 5.5 months apart. No significant changes in TEOAE levels were observed between measurements taken before and immediately after exposure. However, in the high-frequency range, an increase in TEOAE levels was observed in the third session relative to the previous one. During the 2 weeks after the first exposure, the soldiers were not exposed to noise. This TEOAE property might indicate the existence of a mechanism that protects the ear from traumatic, harmful noise.


Subject(s)
Firearms; Noise/adverse effects; Otoacoustic Emissions, Spontaneous/physiology; Adolescent; Humans; Male; Military Personnel; Noise, Occupational/adverse effects
13.
J Acoust Soc Am; 115(5 Pt 1): 2185-92, 2004 May.
Article in English | MEDLINE | ID: mdl-15139630

ABSTRACT

Recently, significant progress has been made in understanding the contribution of the mammalian cochlear outer hair cells (OHCs) to normal auditory signal processing. In the present paper, an outer hair cell model is incorporated into a complete, time-domain, one-dimensional cochlear model. The two models control each other through cochlear partition movement and pressure. An OHC gain (gamma) is defined to indicate the outer hair cell contribution at each location along the cochlear partition. Its value ranges from 0 to 1: gamma = 0 represents a cochlea with no active OHCs, gamma = 1 represents a nonrealistic cochlea that becomes unstable at resonance frequencies, and gamma = 0.5 represents an ideal cochlea. The model simulations reveal typical normal and abnormal excitation patterns according to the value of gamma. The model output is used to estimate normal and hearing-impaired audiograms. High-frequency loss is predicted by the model when the OHC gain is relatively small at the basal part of the cochlear partition. The model predicts phonal trauma audiograms when the OHC gain varies randomly along the cochlear partition. A maximum threshold shift of about 60 dB is obtained at 4 kHz.


Subject(s)
Endolymph/physiology; Hair Cells, Auditory, Outer/physiology; Perilymph/physiology; Animals; Computer Simulation; Electric Stimulation; Electrophysiology; Humans; Linear Models; Mathematical Computing; Models, Biological; Time Factors
14.
Hear Res; 187(1-2): 63-72, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14698088

ABSTRACT

Binaural processing of sounds in mammals is presumably initiated within the auditory nuclei of the caudal pons. The binaural difference waveform (BD) is derived as the sum of the waveforms evoked by right monaural clicks and by left monaural clicks, minus the waveform evoked by binaural clicks. In adults, the BD's first positive peak (beta) is large only for stimuli with interaural time differences (ITDs) that produce a fused acoustic percept. Humans at birth can localize and discriminate sound sources, but their head circumference is about two-thirds that of an adult. To test whether beta is related to head circumference, we recorded beta in human neonates as a function of ITD. Binaural clicks with ITDs ranging between 0 and 1000 µs were used to derive BD waveforms in 34 neonates. For ITD = 0, beta was detectable in 56% of the newborns; the incidence of beta detection then decreased as ITD increased. Only 9% of the babies had a detectable beta for all ITDs. No correlation was found between the existence of beta and other properties of the monaural or binaural auditory brainstem response. The finding that for some infants beta was present at all ITDs up to 1.0 ms suggests that there is no recalibration of brainstem delay lines with head growth. Our data suggest that the brainstem auditory pathway for detecting interaural time differences in the adult is probably present at birth; maturational factors such as increased myelination and greater firing synchrony probably improve the detectability of beta with age. The second peak in the BD waveform (delta) was highly correlated with the existence of wave VI in the binaural and monaural waveforms.
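The BD derivation above is simple arithmetic on the evoked waveforms and can be written directly; the arrays below are illustrative placeholders, not recorded data:

```python
import numpy as np

def binaural_difference(right: np.ndarray, left: np.ndarray,
                        binaural: np.ndarray) -> np.ndarray:
    """BD waveform: sum of the two monaural responses minus the binaural one.
    A nonzero BD reveals binaural interaction in the brainstem response."""
    return right + left - binaural

# If the binaural response were exactly the sum of the two monaural
# responses, BD would be identically zero; any binaural interaction
# (such as the beta peak) shows up as a deviation from zero.
right_w = np.array([0.1, 0.4, 0.2])
left_w = np.array([0.1, 0.3, 0.2])
bd = binaural_difference(right_w, left_w, right_w + left_w)
```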


Subject(s)
Ear/physiology; Hearing/physiology; Infant, Newborn/physiology; Parturition; Adult; Evoked Potentials, Auditory, Brain Stem; Head/anatomy & histology; Humans; Models, Biological; Reaction Time
15.
Hear Res; 165(1-2): 117-27, 2002 Mar.
Article in English | MEDLINE | ID: mdl-12031521

ABSTRACT

The main purpose of this study was to describe and compare the lateralization of earphone-presented stimuli in younger and older individuals. Lateralization functions, relating perceived location to either interaural time differences (ITDs) or interaural level differences (ILDs), were determined for 78 subjects aged 21-88 years, who responded by pressing one of nine keys to indicate the perceived location of the stimulus. All subjects were healthy, without any history of hearing loss or ear surgery, and within the normal pure-tone audiometric range for their age group. Interaural pure-tone and click thresholds did not differ by more than 5 dB across ears. The ILD lateralization functions, ranging from 10 dB favoring the left ear to 10 dB favoring the right ear, were linear. In contrast, the ITD lateralization functions were S-shaped, with a clear linear component ranging from 750 µs favoring one ear to 750 µs favoring the other and an asymptote from 750 µs to 1 ms. The same general shape of the ITD and ILD lateralization functions was found at all ages, but the linear slope of the ITD lateralization function became shallower with age. The ability to discriminate midline-located click trains (ITD and ILD = 0) from ITD-lateralized click trains deteriorated with age, while the comparable ability to discriminate ILD-lateralized click trains did not change significantly with age. The data support two general conclusions. First, there seems to be an overall reduction in the range of ITD-based lateralization with aging. Second, aging reduces sensitivity to changes from the perceived midline position (ITD and ILD = 0) more when ITD is manipulated than when ILD is manipulated.


Subject(s)
Aging/physiology; Functional Laterality; Sound Localization/physiology; Acoustic Stimulation/methods; Adult; Aged; Aged, 80 and over; Auditory Threshold; Ear/physiology; Female; Hearing/physiology; Humans; Male; Middle Aged; Time Factors