Results 1 - 20 of 31
1.
Mem Cognit ; 51(4): 930-951, 2023 05.
Article in English | MEDLINE | ID: mdl-36239898

ABSTRACT

Previous studies suggest that task-irrelevant changing-state sound interferes specifically with the processing of serial order information in the focal task (e.g., serial recall from short-term memory), whereas a deviant sound in the auditory background is assumed to divert central attention, thus producing distraction in various types of cognitive tasks. Much of the evidence for this distinction rests on the observed dissociations in auditory distraction between serial and non-serial short-term memory tasks. In this study, both the changing-state effect and the deviation effect were contrasted between serial digit recall and mental arithmetic tasks. In three experiments (two conducted online), changing-state sound was found to disrupt serial recall, but it did not lead to a general decrement in performance in different mental arithmetic tasks. In contrast, a deviant voice in the stream of irrelevant speech sounds did not cause reliable distraction in serial recall and simple addition/subtraction tasks, but it did disrupt a more demanding mental arithmetic task. Specifically, the evaluation of math equations (multiplication and addition/subtraction), which was combined with a pair-associate memory task to increase the task demand, was found to be susceptible to auditory distraction in participants who did not serially rehearse the pair-associates. Together, the results support the assumption that the interference produced by changing-state sound is highly specific to tasks that require serial-order processing, whereas auditory deviants may cause attentional capture primarily in highly demanding cognitive tasks (e.g., mental arithmetic) that cannot be solved through serial rehearsal.


Subject(s)
Auditory Perception , Memory, Short-Term , Humans , Mental Recall , Attention , Phonetics
2.
J Acoust Soc Am ; 145(6): 3686, 2019 06.
Article in English | MEDLINE | ID: mdl-31255145

ABSTRACT

Irrelevant speech is known to interfere with short-term memory of visually presented items. Here, this irrelevant speech effect was studied with a factorial combination of three variables: the participants' native language, the language the irrelevant speech was derived from, and the playback direction of the irrelevant speech. We used locally time-reversed speech as well to disentangle the contributions of local and global integrity. German and Japanese speech was presented to German (n = 79) and Japanese (n = 81) participants while they performed a serial-recall task. In both groups, any kind of irrelevant speech impaired recall accuracy as compared to a pink-noise control condition. When the participants' native language was presented, normal speech and locally time-reversed speech with short segment durations, which preserve intelligibility, were the most disruptive. Locally time-reversed speech with longer segment durations and normal or locally time-reversed speech played entirely backward, both lacking intelligibility, were less disruptive. When the unfamiliar, incomprehensible signal was presented as irrelevant speech, no significant difference was found between locally time-reversed speech and its globally inverted version, suggesting that the effect of global inversion depends on the familiarity of the language.
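Locally time-reversed speech of the kind described here can be generated by reversing the waveform within consecutive fixed-length segments: short segments largely preserve intelligibility, while a single segment spanning the whole utterance yields global time reversal. A minimal sketch (illustrative only, not the authors' stimulus-generation code; function name and defaults are mine):

```python
import numpy as np

def locally_time_reverse(signal: np.ndarray, sr: int, segment_ms: float) -> np.ndarray:
    """Reverse an audio signal within consecutive fixed-length segments.

    Short segments preserve intelligibility; a segment spanning the
    whole signal reproduces global time reversal.
    """
    seg_len = max(1, int(sr * segment_ms / 1000))
    out = signal.copy()
    for start in range(0, len(signal), seg_len):
        # Reverse each segment in place; the final segment may be shorter.
        out[start:start + seg_len] = signal[start:start + seg_len][::-1]
    return out
```

With a segment length of one sample the signal is returned unchanged, and with a segment covering the entire signal the output equals the globally reversed waveform, so the two conditions contrasted in the study are the endpoints of the same manipulation.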


Subject(s)
Language , Memory, Short-Term/physiology , Mental Recall/physiology , Speech/physiology , Adult , Attention/physiology , Female , Humans , Male , Noise , Recognition, Psychology/physiology , Young Adult
3.
J Acoust Soc Am ; 145(6): 3625, 2019 06.
Article in English | MEDLINE | ID: mdl-31255159

ABSTRACT

The irrelevant sound effect (ISE) denotes the disruption of short-term memory by task-irrelevant background sound. The ISE is largest for speech. The present study investigated the underlying acoustic properties that cause the ISE. Stimuli contained changes in either the spectral content only, the envelope only, or both. For this purpose, two experiments were conducted and two vocoding strategies were developed to degrade the spectral content and the envelope of speech independently. The first strategy employed a noise vocoder that was based on perceptual dimensions, analyzing the original utterance into 1, 2, 4, 8, or 24 channels (critical bands) and independently manipulating loudness. The second strategy involved a temporal segmentation of the signal, freezing either the spectrum or the level for durations ranging from 50 ms to 14 s. In both experiments, changes in the envelope alone did not have measurable effects on performance, but the ISE was significantly increased when both the spectral content and the envelope varied. Furthermore, when the envelope changes were uncorrelated with the spectral changes, the effect size was the same as with a constant-loudness envelope. This suggests that the ISE is primarily caused by spectral changes, but concurrent changes in level tend to amplify it.


Subject(s)
Memory, Short-Term/physiology , Perceptual Masking/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Female , Humans , Male , Noise , Sound
4.
J Acoust Soc Am ; 143(3): 1283, 2018 03.
Article in English | MEDLINE | ID: mdl-29604715

ABSTRACT

Sound propagation effects need to be considered in studies dealing with the perception of annoying auditory sensations evoked by transportation noise. In a listening test requiring participants to make dissimilarity ratings, vehicle noises synthesized with several feasible propagation models were therefore compared with actual recordings made at a given distance. A model taking into account first-order reflections without any phase term was found to be the most appropriate for simulating road traffic noise propagation in an urban environment from a perceptual point of view.
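A first-order-reflection model without a phase term amounts to an image-source computation in which the direct and the singly reflected contributions are summed energetically. A hypothetical sketch of the geometry (function names, the flat-ground assumption, and the point-source spreading law are mine, not taken from the study):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def first_order_paths(src_h, rcv_h, distance):
    """Direct and single ground-reflection path lengths (m) between a
    source and a receiver over flat ground, via the image-source method:
    the reflected path equals the direct path to the source mirrored
    at the ground plane."""
    direct = math.hypot(distance, rcv_h - src_h)
    reflected = math.hypot(distance, rcv_h + src_h)  # mirrored source
    return direct, reflected

def spreading_loss_db(path_m, ref_m=1.0):
    """Geometric spreading of a point source: 20*log10(r/r0) dB."""
    return 20.0 * math.log10(path_m / ref_m)
```

Arrival delays follow as path length divided by SPEED_OF_SOUND; adding the two contributions by energy, with no phase term, mirrors the kind of model the study found perceptually most appropriate.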

5.
J Acoust Soc Am ; 137(1): EL26-31, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618095

ABSTRACT

There are two competing national standards for the calculation of loudness of steady sounds, DIN 45631 and ANSI S3.4. Their different concepts of critical bands lead to different predictions for broadband sounds. As that discrepancy is neither constant nor linear but highly frequency-dependent, the present study investigates spectral loudness summation in three frequency regions, at various levels, and using two different methods. The results show that both algorithms overestimate loudness; however, DIN 45631 comes closer to the subjective evaluations and often falls within their interquartile range. The overestimation by the standards is particularly large in the frequency range from 2 to 5 kHz.

6.
J Acoust Soc Am ; 138(3): 1561-9, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428793

ABSTRACT

To investigate the mechanisms by which unattended speech impairs short-term memory performance, speech samples were systematically degraded by means of a noise vocoder. For experiment 1, recordings of German and Japanese sentences were passed through a filter bank dividing the spectrum between 50 and 7000 Hz into 20 critical-band channels or combinations of those, yielding 20, 4, 2, or just 1 channel(s) of noise-vocoded speech. Listening tests conducted with native speakers of both languages showed a monotonic decrease in speech intelligibility as the number of frequency channels was reduced. For experiment 2, 40 native German and 40 native Japanese participants were exposed to speech processed in the same manner while trying to memorize visually presented sequences of digits in the correct order. Half of each group was presented with the German and the other half with the Japanese speech samples. The results show large irrelevant-speech effects that increase in magnitude with the number of frequency channels. The effects are slightly larger when subjects are exposed to their own native language. The results are predicted well neither by the speech transmission index nor by psychoacoustic fluctuation strength, most likely because both metrics fail to disentangle amplitude and frequency modulations in the signals.
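The channel manipulation can be approximated with a frame-based noise vocoder: measure the input's level in n frequency bands per frame and impose those band levels on white noise, so that more channels preserve more spectral detail. The sketch below is a deliberate simplification (the study used a critical-band filter bank between 50 and 7000 Hz, not the equal-width FFT bands assumed here):

```python
import numpy as np

def noise_vocode(signal, n_channels, frame_len=256):
    """Crude frame-based noise vocoder: per frame, scale white noise so
    that its RMS magnitude in each of n_channels equal-width FFT bands
    matches that of the input. Illustrative sketch only."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal, dtype=float)
    # Band edges over the rfft bins (frame_len // 2 + 1 of them).
    edges = np.linspace(0, frame_len // 2 + 1, n_channels + 1).astype(int)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        spec_s = np.fft.rfft(signal[start:start + frame_len])
        spec_n = np.fft.rfft(noise[start:start + frame_len])
        for lo, hi in zip(edges[:-1], edges[1:]):
            e_s = np.sqrt(np.mean(np.abs(spec_s[lo:hi]) ** 2))
            e_n = np.sqrt(np.mean(np.abs(spec_n[lo:hi]) ** 2)) + 1e-12
            spec_n[lo:hi] *= e_s / e_n  # impose the input's band level
        out[start:start + frame_len] = np.fft.irfft(spec_n, frame_len)
    return out
```

With n_channels = 1 only the broadband envelope of the input survives; as n_channels grows, the output tracks the input's spectro-temporal detail ever more closely, which is the dimension the experiments varied.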


Subject(s)
Language , Memory, Short-Term/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Noise , Perceptual Masking/physiology , Psychoacoustics , Sound Spectrography , Speech Discrimination Tests , Speech Intelligibility/physiology , Young Adult
7.
Trends Hear ; 28: 23312165241262517, 2024.
Article in English | MEDLINE | ID: mdl-39051688

ABSTRACT

Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association with SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.


Subject(s)
Acoustic Stimulation , Auditory Threshold , Memory, Short-Term , Noise , Perceptual Masking , Recognition, Psychology , Speech Perception , Humans , Noise/adverse effects , Male , Female , Speech Perception/physiology , Young Adult , Memory, Short-Term/physiology , Adult , Speech Intelligibility , Attention/physiology , Adolescent
8.
Crisis ; 44(4): 276-284, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35548882

ABSTRACT

Background: Although suicide prevention programs have been shown to change suicide-related knowledge and attitudes, relatively little is known about their effects on actual behavior. Aims: Therefore, the focus of the present study was on improving participating school staff's practical and communication skills. Method: Suicide prevention workshops for students in grades 8-10 (N = 200) and a gatekeeper training program for school staff (N = 150) were conducted in 12 secondary schools in Germany. Schools were alternately assigned to one of three interventions (staff, students, or both trained) or to a waitlist control group. Results: School staff undergoing the training showed increased action-related knowledge, greater self-efficacy when counseling students in need and augmented counseling skills, and also had more conversations with students in need. Although students participating in the workshops did not seek help more frequently, they provided help to their peers more often in the conditions in which both students and school staff or only the latter had been trained. Limitations: The generalizability of the results is constrained by high dropout rates due to the COVID-19 pandemic and the relatively small sample size. Conclusion: A combination of suicide prevention programs for school staff and students appears to be most effective.


Subject(s)
COVID-19 , Suicide , Teacher Training , Humans , Suicide Prevention , Pandemics , Students/psychology , Program Evaluation
9.
Hum Brain Mapp ; 33(4): 797-811, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21425399

ABSTRACT

Multisensory integration assists us in identifying objects by providing multiple cues with respect to object category and spatial location. We used a semantic audiovisual object matching task to determine the effect of spatial congruency on response behavior and fMRI brain activation. Fifteen subjects indicated, in a four-alternative response paradigm, which visual quadrant contained the object that best matched the sound presented. Realistic sounds based on head-related transfer functions were presented binaurally, with the simulated sound source corresponding to one of the four quadrants. Following a random sequence, the location of the sound corresponded to the quadrant containing the semantically congruent target on half the trials, whereas on the other trials the sound arose from an incongruent location. We examined the effects of spatial congruency on response latencies, hit rates, and fMRI responses. Preliminary behavioral results revealed a significant effect of spatial congruency on response latency and performance for stimuli with added noise. In the fMRI experiment, spatial congruency had a significant effect on the BOLD response. A cluster in the right middle and superior temporal gyrus was more activated when the auditory sound sources were spatially congruent with the semantically matching visual stimulus. In an exploratory post hoc analysis, in which we correlated the BOLD signal with the subjects' ability to locate the sound sources, we found a significant cluster in the left inferior frontal cortex, where the BOLD response increased with sound-source localization performance. Thus, spatial congruency appears to enhance the semantic integration of audiovisual object information in these brain regions.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Photic Stimulation , Young Adult
10.
J Acoust Soc Am ; 130(4): 2063-75, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21973361

ABSTRACT

The binaural auralization of a 3D sound field using spherical-harmonics beamforming (SHB) techniques was investigated and compared with the traditional method using a head-and-torso simulator (HATS). The new procedure was verified by comparing simulated room impulse responses with measured ones binaurally. The objective comparisons show that there is good agreement in the frequency range between 0.1 and 6.4 kHz. A listening experiment was performed to validate the SHB method subjectively and to compare it to the HATS method. Two musical excerpts, pop and classical, were used. Subjective responses were collected in two head rotation conditions (fixed and rotating) and six spatial reproduction modes, including phantom mono, stereo, and surround sound. The results show that subjective scales of width, spaciousness, and preference based on the SHB method were similar to those obtained for the HATS method, although the width and spaciousness of the stimuli processed by the SHB method were judged slightly higher than the ones using the HATS method in general. Thus, binaural synthesis using SHB may be a useful tool to reproduce a 3D sound field binaurally, while saving considerably on measurement time because head rotation can be simulated based on a single recording.


Subject(s)
Acoustic Stimulation/methods , Head Movements , Music , Psychoacoustics , Sound Localization , Acoustic Stimulation/instrumentation , Adult , Amplifiers, Electronic , Audiometry, Pure-Tone , Auditory Threshold , Equipment Design , Female , Humans , Male , Middle Aged , Models, Theoretical , Motion , Pressure , Reproducibility of Results , Rotation , Sound , Time Factors
11.
Front Psychol ; 12: 635557, 2021.
Article in English | MEDLINE | ID: mdl-34045990

ABSTRACT

Continuous magnitude estimation and continuous cross-modality matching with line length can efficiently track the momentary loudness of time-varying sounds in behavioural experiments. These methods are known to be prone to systematic biases but may be checked for consistency using their counterpart, magnitude production. Thus, in Experiment 1, we performed such an evaluation for time-varying sounds. Twenty participants produced continuous cross-modality matches to assess the momentary loudness of fourteen songs by continuously adjusting the length of a line. In Experiment 2, the resulting temporal line length profile for each excerpt was played back like a video together with the given song, and participants were asked to continuously adjust the volume to match the momentary line length. The recorded temporal line length profile, however, was manipulated for segments with durations between 7 and 12 s by eight factors between 0.5 and 2, corresponding to expected differences in adjusted level of -10, -6, -3, -1, 1, 3, 6, and 10 dB according to Stevens's power law for loudness. The average adjustments 5 s after the onset of the change were -3.3, -2.4, -1.0, -0.2, 0.2, 1.4, 2.4, and 4.4 dB. Smaller adjustments than predicted by the power law are in line with magnitude-production results by Stevens and co-workers due to "regression effects." Continuous cross-modality matches of line length turned out to be consistent with current loudness models, and by passing the consistency check with cross-modal productions, demonstrate that the method is suited to track the momentary loudness of time-varying sounds.
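The mapping between line-length factors and expected level differences follows from Stevens's power law for loudness (exponent of roughly 0.3 re intensity, i.e., loudness doubles per 10 dB): if line length is taken as proportional to loudness, a length ratio r corresponds to a level change of 10 * log2(r) dB. A one-function check:

```python
import math

def expected_level_change_db(length_ratio):
    """Level change implied by Stevens's power law for loudness
    (loudness doubling per 10 dB) when the matched line length is
    scaled by length_ratio."""
    return 10.0 * math.log2(length_ratio)
```

The extreme factors 2 and 0.5 map onto +10 and -10 dB, and intermediate factors of 2 raised to +/-0.1, +/-0.3, and +/-0.6 reproduce the +/-1, +/-3, and +/-6 dB steps listed in the abstract.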

12.
Atten Percept Psychophys ; 83(7): 2955-2967, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34105093

ABSTRACT

In their fundamental paper, Luce, Steingrimsson, and Narens (2010, Psychological Review, 117, 1247-1258) proposed that ratio productions constituting a generalization of cross-modality matching may be represented on a single scale of subjective intensity, if they meet "cross-dimensional commutativity." The present experiment is the first to test this axiom by making truly cross-modal adjustments of the type: "Make the sound three times as loud as the light appears bright!" Twenty participants repeatedly adjusted the level of a burst of noise to result in the desired sensation ratio (e.g., to be three times as intense) compared to the brightness emanating from a grayscale square, and vice versa. Cross-modal commutativity was tested by comparing a set of successive ×2×3 productions with a set of ×3×2 productions. When this property was individually evaluated for each of 20 participants and for two possible directions, i.e., starting out with a noise burst or a luminous patch, only seven of the 40 tests indicated a statistically significant violation of cross-modal commutativity. Cross-modal monotonicity, i.e. checking whether ×1, ×2, and ×3 adjustments are strictly ordered, was evaluated on the same data set and found to hold. Multiplicativity, by contrast, i.e., comparing the outcome of a ×1×6 adjustment with ×2×3 sequences, irrespective of order, was violated in 17 of 40 tests, or at least once for all but six participants. This suggests that both loudness and brightness sensations may be measured on a common ratio scale of subjective intensity, but cautions against interpreting the numbers involved at face value.


Subject(s)
Loudness Perception , Sensation , Humans , Psychophysics
13.
Front Robot AI ; 7: 20, 2020.
Article in English | MEDLINE | ID: mdl-33501189

ABSTRACT

For effective virtual realities, "presence," the feeling of "being there" in a virtual environment (VR), is deemed an essential prerequisite. Several studies have assessed the effect of the (non-)availability of auditory stimulation on presence, but due to differences in study design (e.g., virtual realities used, types of sounds included, rendering technologies employed), generalizing the results and estimating the effect of the auditory component is difficult. In two experiments, the influence of an ambient nature soundscape and movement-triggered step sounds were investigated regarding their effects on presence. In each experiment, approximately forty participants walked on a treadmill, thereby strolling through a virtual park environment reproduced via a stereoscopic head-mounted display (HMD), while the acoustical environment was delivered via noise-canceling headphones. In Experiment 1, conditions with the ambient soundscape and the step sounds either present or absent were combined in a 2 × 2 within-subjects design, supplemented with an additional "no-headphones" control condition. For the synchronous playback of step sounds, the probability of a step being taken was estimated by an algorithm using the HMD's sensor data. The results of Experiment 1 show that questionnaire-based measures of presence and realism were influenced by the soundscape but not by the reproduction of steps, which might be confounded with the fact that the perceived synchronicity of the sensor-triggered step sounds was rated rather low. Therefore, in Experiment 2, the step-reproduction algorithm was improved and judged to be more synchronous by participants. Consequently, large and statistically significant effects of both kinds of audio manipulations on perceived presence and realism were observed, with the effect of the soundscape being larger than that of including footstep sounds, possibly due to the remaining imperfections in the reproduction of steps. Including an appropriate soundscape or self-triggered footsteps had differential effects on subscales of presence, in that both affected overall presence and realism, while involvement was improved and distraction reduced by the ambient soundscape only.
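Movement-triggered step playback of this kind can be driven by a threshold detector on a vertical-acceleration signal. The study's algorithm estimated a step probability from the HMD's sensor data; the sketch below is a deliberately simpler, hypothetical variant (the threshold and refractory values are invented for illustration):

```python
def detect_steps(accel_z, sr, threshold=1.5, refractory_s=0.3):
    """Return sample indices of step events, defined as upward
    crossings of a vertical-acceleration threshold (in g), with a
    refractory period to suppress double triggers from one footfall."""
    refractory = int(refractory_s * sr)
    steps, last = [], -refractory
    for i in range(1, len(accel_z)):
        crossed = accel_z[i - 1] < threshold <= accel_z[i]
        if crossed and i - last >= refractory:
            steps.append(i)
            last = i
    return steps
```

Each detected index would trigger immediate playback of a footstep sample; lowering the latency between detection and playback is what the improved algorithm in Experiment 2 effectively achieved in terms of perceived synchronicity.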

14.
J Exp Psychol Hum Percept Perform ; 46(1): 10-20, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31436452

ABSTRACT

Task-irrelevant auditory stimuli such as speech are known to disrupt the retention of serial information held in verbal short-term memory (STM). Although such effects of irrelevant sound are typically very robust, there is evidence suggesting that some forms of auditory distraction are susceptible to cognitive control or auditory attention. In the present study, we tested whether an extensive training of auditory selective attention reduces the degree of interference produced by irrelevant speech in a serial STM task. Participants (n = 38) were trained on an adaptive dichotic-listening task requiring selective processing of a varying list of verbal items presented via headphones while ignoring auditory distractors presented simultaneously either by a different voice or in the irrelevant ear. The number of target items that could be memorized increased throughout 5 training sessions, suggesting improvement of auditory selective attention in a dichotic-listening situation. An active control group (n = 37) was trained on an auditory duration discrimination task for 5 sessions. Prior to the training, task-irrelevant speech was shown to interfere with serial recall in both groups. After the training, however, the irrelevant speech effect was attenuated in the group that was trained on the dichotic-listening task, whereas no reduction of auditory distraction was observed in the active control group. The results show that the interference produced by task-irrelevant speech can be reduced through an extensive dichotic-listening training, suggesting that the irrelevant speech effect is susceptible to auditory selective attention.


Subject(s)
Attention , Auditory Perception , Learning , Speech Perception , Adolescent , Adult , Female , Humans , Male , Memory, Short-Term , Young Adult
15.
Noise Health ; 22(105): 46-55, 2020.
Article in English | MEDLINE | ID: mdl-33380616

ABSTRACT

INTRODUCTION: Two aspects of noise annoyance were addressed in the present laboratory study: (1) the disturbance produced by vehicle pass-by noise while engaging in a challenging non-auditory task, and (2) the evaluative response elicited by the same sounds while imagining relaxing at home in the absence of a primary activity. METHODS AND MATERIAL: In Experiment 1, N = 29 participants were exposed to short (3-6 s) pass-by recordings presented at graded levels between 50 and 70 dB(A). Concurrent with each sound presentation, they performed a visual multiple-object tracking task, and subsequently rated the annoyance of the sounds on a visual analogue scale (VAS). In Experiment 2, N = 30 participants judged the sounds while imagining relaxing, without such a cognitive task. RESULTS AND DISCUSSION: Annoyance was reduced when participants were engaged in the cognitively demanding task in Experiment 1. Furthermore, when occupied with the task, annoyance slightly but significantly increased with task load. Across both experiments, the magnitude of simultaneously recorded skin conductance responses in the first 1-4 s after the onset of stimulation increased significantly with sound pressure level. Annoyance ratings tended to be elevated across all sound levels, though significantly only in Experiment 2, in participants classified as noise-sensitive based on a 52-item questionnaire. CONCLUSIONS: The results suggest that noise annoyance depends on the primary activity the listener is engaged in. They demonstrate that phasic skin conductance responses may serve as an objective correlate of the degree of annoyance experienced. Finally, noise sensitivity is once more shown to augment annoyance ratings in an additive fashion.


Subject(s)
Auditory Perception/physiology , Cognition/physiology , Eye-Tracking Technology/psychology , Galvanic Skin Response/physiology , Noise, Transportation/adverse effects , Adolescent , Adult , Female , Humans , Male , Middle Aged , Pressure , Relaxation/psychology , Sound , Task Performance and Analysis , Young Adult
16.
Psychophysiology ; 56(10): e13418, 2019 10.
Article in English | MEDLINE | ID: mdl-31206737

ABSTRACT

To study whether psychophysiological indicators are suitable measures of user experience in a digital exercise game (exergame), a laboratory study employing both psychophysiological and self-report measures was conducted. Sixty-six participants cycled for 10 min on an ergometer while pupil diameter, skin conductance, and heart rate were measured; afterward, they completed a user experience questionnaire. The participants performed under three experimental conditions varying between subjects: active gaming (participants controlled the altitude of a digital bird by varying their pedal rate in order to catch letters flying across the screen), observing a game (they observed a replay of another participant's game), and no-game (blank screen). Only the gaming condition showed evidence for statistically significant pupil dilations-indicating emotional arousal-in response to game events (catching a letter) or corresponding points in time. The observational condition did not differ statistically from the no-game control condition. Self-reports also indicated that the gaming condition was rated most fun and least demanding. Other psychophysiological indicators (heart rate, skin conductance) showed no systematic effects in response to game events, rather they steadily increased during training. Thus, pupil responses were shown to be suitable indicators of positive emotional reactions to game events and user experience in a (training) game.


Subject(s)
Exercise/physiology , Games, Experimental , Pupil/physiology , Bicycling/physiology , Ergometry , Female , Galvanic Skin Response/physiology , Heart Rate/physiology , Humans , Male , Psychomotor Performance/physiology , Self Report , Video Games/psychology , Young Adult
17.
J Acoust Soc Am ; 123(2): 963-72, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18247899

ABSTRACT

To determine how listeners weight different portions of the signal when integrating level information, they were presented with 1-s noise samples the levels of which randomly changed every 100 ms by repeatedly, and independently, drawing from a normal distribution. A given stimulus could be derived from one of two such distributions, a decibel apart, and listeners had to classify each sound as belonging to the "soft" or "loud" group. Subsequently, logistic regression analyses were used to determine to what extent each of the ten temporal segments contributed to the overall judgment. In Experiment 1, a nonoptimal weighting strategy was found that emphasized the beginning, and, to a lesser extent, the ending of the sounds. When listeners received trial-by-trial feedback, however, they approached equal weighting of all stimulus components. In Experiment 2, a spectral change was introduced in the middle of the stimulus sequence, changing from low-pass to high-pass noise, and vice versa. The temporal location of the stimulus change was strongly weighted, much as a new onset. These findings are not accounted for by current models of loudness or intensity discrimination, but are consistent with the idea that temporal weighting in loudness judgments is driven by salient events.
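The weighting analysis can be reproduced in outline: simulate trials whose ten segment levels come from one of two distributions, let a simulated observer respond via a nonuniform weighted sum plus internal noise, and fit a logistic regression of the responses on the segment levels; the normalized coefficients are the temporal weights. A self-contained sketch with plain gradient descent (all numbers are invented for illustration, and the means are one unit rather than one decibel apart):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_seg = 4000, 10
# Hypothetical "true" observer weights: primacy plus mild recency.
true_w = np.array([3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.5, 2.0])

# Ten segment levels per trial, drawn from a "soft" (mean 0) or "loud"
# (mean 1) distribution.
labels = rng.integers(0, 2, n_trials)
levels = rng.normal(labels[:, None].astype(float), 1.0, (n_trials, n_seg))

# Simulated observer: noisy weighted sum of segment levels -> "loud".
decision = levels @ true_w + rng.normal(0.0, 2.0, n_trials)
resp = (decision > true_w.sum() / 2).astype(float)

# Logistic regression by batch gradient descent (no external ML library).
X = np.hstack([np.ones((n_trials, 1)), levels])  # intercept + 10 segments
w = np.zeros(n_seg + 1)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - resp) / n_trials

weights = w[1:] / w[1:].sum()  # normalized temporal weights
```

The recovered weight profile mirrors the simulated observer: the first (and, to a lesser extent, last) segments carry more weight than the middle ones, which is the kind of primacy-dominated pattern the experiment found before feedback flattened it.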


Subject(s)
Acoustic Stimulation/psychology , Discrimination, Psychological/physiology , Loudness Perception/physiology , Psychoacoustics , Time , Adolescent , Adult , Decision Making , Feedback, Psychological , Female , Humans , Male , Memory, Short-Term/physiology , Middle Aged , Sound Spectrography
18.
J Acoust Soc Am ; 123(2): 910-24, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18247894

ABSTRACT

The potential of spherical-harmonics beamforming (SHB) techniques for the auralization of target sound sources in a background noise was investigated and contrasted with traditional head-related transfer function (HRTF)-based binaural synthesis. A scaling of SHB was theoretically derived to estimate the free-field pressure at the center of a spherical microphone array and verified by comparing simulated frequency response functions with directly measured ones. The results show that there is good agreement in the frequency range of interest. A listening experiment was conducted to evaluate the auralization method subjectively. A set of ten environmental and product sounds were processed for headphone presentation in three different ways: (1) binaural synthesis using dummy head measurements, (2) the same with background noise, and (3) SHB of the noisy condition in combination with binaural synthesis. Two levels of background noise (62, 72 dB SPL) were used and two independent groups of subjects (N=14) evaluated either the loudness or annoyance of the processed sounds. The results indicate that SHB almost entirely restored the loudness (or annoyance) of the target sounds to unmasked levels, even when presented with background noise, and thus may be a useful tool to psychoacoustically analyze composite sources.


Subject(s)
Loudness Perception/physiology , Noise , Psychoacoustics , Recruitment Detection, Audiologic/methods , Sound Localization/physiology , Stress, Psychological/physiopathology , Acoustic Stimulation/psychology , Adult , Algorithms , Computer Simulation , Equipment Design , Female , Humans , Male , Noise/adverse effects , Random Allocation , Recruitment Detection, Audiologic/instrumentation , Stress, Psychological/etiology
19.
J Exp Psychol Hum Percept Perform ; 44(8): 1303-1312, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29629780

ABSTRACT

Task-irrelevant speech and other temporally changing sounds are known to interfere with the short-term memorization of ordered verbal materials, as compared to silence or stationary sounds. It has been argued that this disruption of short-term memory (STM) may be due to (a) interference of automatically encoded acoustical fluctuations with the process of serial rehearsal or (b) attentional capture by salient task-irrelevant information. To disentangle the contributions of these 2 processes, the authors investigated whether the disruption of serial recall is due to the semantic or acoustical properties of task-irrelevant speech (Experiment 1). They found that performance was affected by the prosody (emotional intonation), but not by the semantics (word meaning), of irrelevant speech, suggesting that the disruption of serial recall is due to interference of precategorically encoded changing-state sound (with higher fluctuation strength of emotionally intonated speech). The authors further demonstrated a functional distinction between this form of distraction and attentional capture by contrasting the effect of (a) speech prosody and (b) sudden prosody deviations on both serial and nonserial STM tasks (Experiment 2). Although serial recall was again sensitive to the emotional prosody of irrelevant speech, performance on a nonserial missing-item task was unaffected by the presence of neutral or emotionally intonated speech sounds. In contrast, sudden prosody changes tended to impair performance on both tasks, suggesting an independent effect of attentional capture.


Subject(s)
Attention/physiology , Emotions/physiology , Mental Recall/physiology , Serial Learning/physiology , Speech Perception/physiology , Adult , Aged , Female , Humans , Male , Middle Aged , Semantics , Young Adult