Results 1 - 20 of 25
1.
J Acoust Soc Am; 146(2): EL172, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31472560

ABSTRACT

The influence of loudness on sound recognition was investigated in an explicit memory experiment based on conscious recollection (test phase) of previously encoded information (study phase). Three encoding conditions were compared: semantic (sounds were sorted into three different categories), sensory (sounds were rated for loudness), and control (participants were simply asked to listen to the sounds). Results revealed a significant study-to-test change effect: a loudness change between the study and test phases affects recognition. The effect was not specific to the encoding condition (semantic vs sensory), suggesting that loudness is an important cue for everyday sound recognition.


Subjects
Speech Acoustics, Speech Perception, Adult, Female, Humans, Male, Semantics
2.
J Acoust Soc Am; 143(1): 575, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29390738

ABSTRACT

Two experiments were conducted to investigate how the perceptual organization of a multi-tone mixture interacts with global and partial loudness judgments. Grouping (single-object) and segregating (two-object) conditions were created using frequency modulation, by applying the same or different modulation frequencies to the odd- and even-rank harmonics. While in Experiment 1 (Exp. 1) the two objects had the same loudness, in Experiment 2 (Exp. 2) loudness level differences (LLD) were introduced (LLD = 6, 12, 18, or 24 phons). In the two-object condition, the loudness of each object was not affected by the mixture when LLD = 0 (Exp. 1); otherwise (Exp. 2), the loudness of the softer object was modulated by LLD, and the loudness of the louder object was the same regardless of whether it was presented in or out of the mixture. In both the single- and the two-object conditions, the global loudness of the mixture was close to the loudness of the louder object. Taken together, these results suggest that while partial loudness judgments are dependent on the perceptual organization of the scene, global loudness is not. Yet, both partial and global loudness computations are governed by relative "saliences" between different auditory objects (in the segregating condition) or within a single object (in the grouping condition).

3.
Exp Brain Res; 235(3): 691-701, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27858128

ABSTRACT

The use of continuous auditory feedback for motor control and learning is still understudied and deserves more attention regarding both fundamental mechanisms and applications. This paper presents the results of three experiments studying the contribution of task-, error-, and user-related sonification to visuo-manual tracking and assessing its benefits for sensorimotor learning. The first experiment shows that sonification can help decrease the tracking error and increase the energy in participants' movements. In the second experiment, when feedback presence was alternated, the user-related sonification did not show feedback-dependency effects, contrary to the error- and task-related feedback. In the third experiment, a reduced exposure of 50% diminished the positive effect of sonification on performance, whereas the increase in average movement energy with sound remained significant. In a retention test performed the next day without auditory feedback, movement energy was still higher for the groups previously trained with the feedback. Although performance was not affected by sound, a learning effect was measurable in both sessions, and the user-related group also improved its performance in the retention test. These results confirm that continuous auditory feedback can be beneficial for movement training and also show an interesting effect of sonification on movement energy. User-related sonification can prevent feedback dependency and increase retention. Consequently, sonification of the user's own motion appears to be a promising way to support movement learning with interactive feedback.


Subjects
Auditory Perception/physiology, Feedback, Sensory/physiology, Movement/physiology, Psychomotor Performance/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Analysis of Variance, Female, Humans, Male, Middle Aged, Photic Stimulation, Reaction Time, Young Adult
4.
J Acoust Soc Am; 142(1): 256, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28764470

ABSTRACT

The mechanisms underlying global loudness judgments of rising- or falling-intensity tones were further investigated in two magnitude estimation experiments. By manipulating the temporal characteristics of such stimuli, it was examined whether judgments could be accounted for by an integration of the stimuli's loudest portion over a certain temporal window, combined with a "decay mechanism" that downsizes this integration over time for falling ramps. In experiment 1, 1-kHz intensity ramps were stretched in time between 1 and 16 s, keeping their dynamics (the difference between maximum and minimum levels) unchanged. While the global loudness of rising tones increased up to 6 s, evaluations of falling tones increased at a weaker rate and slightly decayed between 6 and 16 s, resulting in significant differences between the two patterns. In experiment 2, ramps were stretched in time between 2 and 12 s, keeping their slopes (rate of change in dB/s) unchanged. In this context, the main effect of duration became non-significant and the interaction between the two profiles remained, although the decay of falling tones was not significant. These results qualitatively support the view that the global loudness computation of intensity ramps involves an integration of their loudest portions; the presence of a decay mechanism could, however, not be attested.

5.
J Acoust Soc Am; 142(2): 878, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28863587

ABSTRACT

Sounds involving liquid sources are part of everyday life. They form a category of sounds easily identified by human listeners in different experimental studies. Unlike the acoustic models that focus on bubble vibrations, real-life instances of liquid sounds, such as sounds produced by liquids with or without other materials, are very diverse and include water-drop sounds, noisy flows, and even solid vibrations. The process that allows listeners to group these different sounds into the same category remains unclear. This article presents a perceptual experiment based on a sorting task of liquid sounds from a household environment. It seeks to reveal the cognitive subcategories of this set of sounds. The clarification of this perceptual process led to the observation of similarities between the perception of liquid sounds and other categories of environmental sounds. Furthermore, the results provide a taxonomy of liquid sounds on which an acoustic analysis was performed, highlighting the acoustical properties of the categories, including different rates of air-bubble vibration.

6.
J Acoust Soc Am; 139(1): 406-17, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827035

ABSTRACT

Psychophysical studies on loudness have so far examined the temporal weighting of loudness solely in level-discrimination tasks. Typically, listeners were asked to discriminate hundreds of level-fluctuating sounds with regard to their global loudness. Temporal weights, i.e., the importance of each temporal portion of the stimuli for the loudness judgment, were then estimated from listeners' responses. Consistent non-uniform "u-shaped" temporal weighting patterns were observed, with greater weights assigned to the first and last temporal portions of the stimuli, revealing significant primacy and recency effects, respectively. In this study, the question was addressed whether the same weighting pattern could be found in a traditional loudness estimation task. Temporal loudness weights were compared between a level-discrimination (LD) task and an absolute magnitude estimation (AME) task. Stimuli were 3-s broadband noises consisting of 250-ms segments randomly varying in level. Listeners were asked to evaluate the global loudness of the stimuli by classifying them as "loud" or "soft" (LD), or by assigning a number representing their loudness (AME). Results showed non-uniform temporal weighting in both tasks, but also significant differences between the two tasks. An explanation based on the difference in complexity between the evaluation processes underlying each task is proposed.


Subjects
Loudness Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Decision Making/physiology, Female, Humans, Judgment/physiology, Male, Noise, Perceptual Masking/physiology, Psychological Tests, Young Adult
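The weight-estimation step described above (deriving temporal weights from trial-by-trial "loud"/"soft" responses) can be sketched as a small simulation. This is a minimal illustration, not the authors' exact analysis: it assumes a simple weighted-sum decision model for the listener, and estimates each segment's weight as the difference between its mean level on "loud" versus "soft" trials. The segment count and level statistics loosely mirror the 3-s/250-ms stimuli, but all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_segments = 5000, 12            # 3-s noise = 12 segments of 250 ms
true_w = np.full(n_segments, 0.5)
true_w[0], true_w[-1] = 2.5, 2.0           # simulated primacy and recency

# Each trial: segment levels jitter around 60 dB; the simulated listener
# answers "loud" when the weighted sum of levels exceeds its expected value.
levels = rng.normal(60.0, 2.5, (n_trials, n_segments))
loud = levels @ true_w > 60.0 * true_w.sum()

# Estimated temporal weight of a segment: difference between its mean
# level on "loud" trials and on "soft" trials (then normalized to sum to 1).
est_w = levels[loud].mean(axis=0) - levels[~loud].mean(axis=0)
est_w /= est_w.sum()
```

With enough trials, the estimated weights recover the simulated u-shape: the first and last segments receive larger weights than the middle ones.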
7.
J Acoust Soc Am; 139(1): 290-300, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827025

ABSTRACT

Describing complex sounds with words is a difficult task. In fact, previous studies have shown that vocal imitations of sounds are more effective than verbal descriptions [Lemaitre and Rocchesso (2014). J. Acoust. Soc. Am. 135, 862-873]. The current study investigated how vocal imitations of sounds enable their recognition by studying how two expert and two lay participants reproduced four basic auditory features: pitch, tempo, sharpness, and onset. It used four sets of 16 referent sounds (modulated narrowband noises and pure tones), based on one feature or crossing two of the four features. Dissimilarity-rating experiments and multidimensional scaling analyses confirmed that listeners could accurately perceive the four features composing the four sets of referent sounds. The four participants recorded vocal imitations of the four sets of sounds. Analyses identified three strategies: (1) vocal imitations of pitch and tempo faithfully reproduced the absolute value of the feature; (2) vocal imitations of sharpness transposed the feature into the participants' registers; (3) vocal imitations of onsets categorized the continuum of onset values into two discrete morphological profiles. Overall, these results highlight that vocal imitations do not simply mimic the referent sounds, but seek to emphasize their characteristic features within the constraints of human vocal production.


Subjects
Pitch Discrimination/physiology, Recognition (Psychology)/physiology, Voice/physiology, Acoustics, Adolescent, Adult, Analysis of Variance, Female, Humans, Loudness Perception/physiology, Male, Middle Aged, Sound Spectrography, Time Perception/physiology, Young Adult
8.
J Acoust Soc Am; 136(2): EL166-72, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096142

ABSTRACT

The perceived duration of 1-kHz pure tones with increasing or decreasing intensity profiles was measured. The ratio between the down-ramp and up-ramp durations at equal subjective duration was examined as a function of sound duration (50, 100, 200, 500, 1000, and 2000 ms). At 50 and 100 ms, the ratio was constant and equaled about 1.7; it then decreased logarithmically from 100 to 1000 ms to reach a constant value of 1 at 1 and 2 s. The different mechanisms proposed in the literature to explain the perceived-duration asymmetry between up-ramps and down-ramps are discussed in the light of the dependence of this ratio on duration.

9.
J Acoust Soc Am; 134(4): EL321-6, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24116537

ABSTRACT

Using molecular psychophysics, temporal loudness weights were measured for 2-s, 1-kHz tones with flat, increasing, and decreasing time-intensity profiles. While primacy and recency effects were observed for flat-profile stimuli, the so-called "level dominance" effect was observed for both increasing- and decreasing-profile stimuli, fully determining their temporal weights. The weights obtained for these profiles were essentially zero for all but the most intense parts of the sounds. This supports the view that the "level dominance" effect is prominent with intensity-varying sounds and that it persists over time, since the temporal weights are not affected by the direction of intensity change.


Subjects
Loudness Perception, Time Perception, Acoustic Stimulation, Adult, Audiometry, Pure-Tone, Auditory Threshold, Female, Functional Laterality, Humans, Male, Psychoacoustics, Time Factors, Young Adult
10.
JASA Express Lett; 3(8), 2023 Aug 1.
Article in English | MEDLINE | ID: mdl-37566904

ABSTRACT

Temporal and frequency auditory streaming capacities were assessed for non-musician (NM), expert musician (EM), and amateur musician (AM) listeners using a local-global task and an interleaved melody recognition task, respectively. The data replicate differences previously observed between NM and EM, and reveal that while AM exhibit a local-over-global processing change comparable to EM, their performance in segregating a melody embedded in a stream remains as poor as that of NM. The observed group partitioning along the temporal-frequency auditory streaming capacity map suggests a sequential, two-step developmental model of musical learning, whose contributing factors are discussed.


Subjects
Music, Recognition (Psychology)
11.
Sci Rep; 13(1): 6842, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37100849

ABSTRACT

Attention allows the listener to select relevant information from their environment and disregard what is irrelevant. However, irrelevant stimuli sometimes manage to capture attention and stand out from a scene because of bottom-up processes driven by salient stimuli. This attentional capture effect was observed using an implicit approach based on the additional-singleton paradigm. In the auditory domain, it has been shown that sound attributes such as intensity and frequency tend to capture attention during auditory search (a cost to performance) for targets defined on a different dimension such as duration. In the present study, we examined whether a similar phenomenon occurs for attributes of timbre such as brightness (related to the spectral centroid) and roughness (related to the amplitude-modulation depth). More specifically, we characterized the relationship between the variations of these attributes and the magnitude of the attentional capture effect. In experiment 1, the occurrence of a brighter sound (higher spectral centroid) embedded in sequences of successive tones produced significant search costs. In experiments 2 and 3, different values of brightness and roughness confirmed that attentional capture is monotonically driven by the sound features. In experiment 4, the effect was found to be symmetrical: positive or negative, the same difference in brightness had the same negative effect on performance. Experiment 5 suggested that the effects produced by variations of the two attributes are additive. This work provides a methodology for quantifying the bottom-up component of attention and brings new insights into attentional capture and auditory salience.


Subjects
Attention, Sound, Reaction Time
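The brightness attribute manipulated above is commonly operationalized as the spectral centroid: the amplitude-weighted mean frequency of the magnitude spectrum. A minimal sketch of that computation (the two test signals are illustrative, not the study's stimuli):

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float((freqs * mag).sum() / mag.sum())

sr = 16_000
t = np.arange(sr) / sr                               # 1 s of signal
dull = np.sin(2 * np.pi * 440 * t)                   # fundamental only
bright = dull + 0.8 * np.sin(2 * np.pi * 3520 * t)   # add a strong upper partial
```

Boosting upper partials raises the centroid, which is what makes a deviant tone "brighter" and, per the results above, liable to capture attention.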
12.
Sci Rep; 13(1): 5180, 2023 Mar 30.
Article in English | MEDLINE | ID: mdl-36997613

ABSTRACT

Communication between sound and music experts is based on the shared understanding of a metaphorical vocabulary derived from other sensory modalities. Yet the impact of sound expertise on the mental representation of these sound concepts remains unclear. To address this issue, we investigated the acoustic portraits of four metaphorical sound concepts (brightness, warmth, roundness, and roughness) in three groups of participants (sound engineers, conductors, and non-experts). Participants (N = 24) rated a corpus of orchestral instrument sounds (N = 520) using Best-Worst Scaling. With this data-driven method, we sorted the sound corpus for each concept and population. We compared the population ratings and ran machine-learning algorithms to unveil the acoustic portraits of each concept. Overall, the results revealed that sound engineers were the most consistent. We found that roughness is widely shared, while brightness is expertise-dependent. The frequent use of brightness by expert populations suggests that its meaning has been specified through sound expertise. As for roundness and warmth, the importance of pitch and noise in their acoustic definitions seems to be the key to distinguishing them. These results provide crucial information on the mental representations of a metaphorical vocabulary of sound and on whether it is shared or refined by sound expertise.


Subjects
Music, Sound, Humans, Acoustic Stimulation, Noise, Acoustics, Vocabulary
13.
JASA Express Lett; 2(6): 064404, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36154161

ABSTRACT

When designing sound evaluation experiments, researchers rely on listening-test methods such as rating scales (RS). This work investigates the suitability of best-worst scaling (BWS) for the perceptual evaluation of sound qualities. To do so, 20 participants rated the "brightness" of a corpus of instrumental sounds (N = 100) with the RS and BWS methods. The results show that the BWS procedure is the fastest and that RS and BWS are equivalent in terms of performance. Interestingly, participants preferred BWS over RS. BWS is therefore an alternative method that reliably measures perceptual sound qualities and could be used in many-sound paradigms.
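Best-worst scaling responses are typically scored by simple counting: each item's score is the number of times it was chosen as best minus the number of times it was chosen as worst, normalized by how often it appeared. The sketch below uses hypothetical trial data; counting scores are one standard BWS analysis, not necessarily the exact one used in this study.

```python
from collections import Counter

def bws_scores(trials):
    """trials: iterable of (items_shown, best_choice, worst_choice).
    Returns best-minus-worst counts normalized to the range [-1, 1]."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in trials:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical "brightness" judgments over triads of sounds A-D.
trials = [
    (("A", "B", "C"), "A", "C"),
    (("A", "B", "D"), "A", "D"),
    (("B", "C", "D"), "B", "C"),
]
scores = bws_scores(trials)   # A: 1.0, B: 0.33, D: -0.5, C: -1.0
```

Sorting items by these scores yields the perceptual ordering of the corpus along the rated quality, which is how BWS data produce a scale comparable to RS ratings.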

14.
J Acoust Soc Am; 130(5): 2902-16, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22087919

ABSTRACT

The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on particular small sets of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent-rectangular-bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, a correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals.


Subjects
Acoustics, Models, Theoretical, Music, Signal Processing, Computer-Assisted, Software, Cluster Analysis, Fourier Analysis, Programming Languages, Sound Spectrography, Time Factors
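The robust descriptive statistics mentioned above can be sketched as follows: a time-varying descriptor track is summarized by its median and interquartile range, which resist spurious frames far better than the mean and standard deviation. The centroid track below is a hypothetical example with one corrupted frame, not Timbre Toolbox output.

```python
import numpy as np

def robust_summary(frames):
    """Summarize a time-varying descriptor with robust statistics:
    median for central tendency, interquartile range (IQR) for variability."""
    q1, med, q3 = np.percentile(frames, [25, 50, 75])
    return {"median": float(med), "iqr": float(q3 - q1)}

# Hypothetical spectral-centroid track (Hz) with one corrupted frame.
centroid_track = np.array([1000.0, 1020.0, 980.0, 1010.0, 5000.0])
summary = robust_summary(centroid_track)
```

The mean of this track (1802 Hz) is dragged far from the typical value by the single outlier, whereas the median stays at 1010 Hz with an IQR of 20 Hz.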
15.
J Acoust Soc Am; 128(4): EL163-8, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968320

ABSTRACT

Three experiments on the loudness of sounds with linearly increasing levels were performed: global loudness was measured using direct ratings, and loudness change was measured using direct and indirect estimations. Results revealed differences between direct and indirect estimations of loudness change, indicating that the underlying perceptual phenomena are not the same. The effect of ramp size is small for the former and large for the latter. A similar trend according to end level was revealed between global loudness and direct estimations of loudness change, suggesting the two may have been confounded. Measures provided by direct estimations of loudness change are more participant-dependent.


Subjects
Loudness Perception, Acoustic Stimulation, Adolescent, Adult, Audiometry, Pure-Tone, Auditory Threshold, Bias, Humans, Middle Aged, Speech Acoustics, Young Adult
16.
J Acoust Soc Am; 127(3): EL105-10, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329809

ABSTRACT

Simple reaction times (RTs) were used to measure differences in processing time between natural animal sounds and artificial sounds. When the artificial stimuli were sequences of short tone pulses, the animal sounds were detected faster than the artificial sounds. The animal sounds were then compared with acoustically modified versions (white noise modulated by the temporal envelope of the animal sounds). No differences in RTs were observed between the animal sounds and their modified counterparts. These results show that the fast detection observed for natural sounds in the present task can be explained by their acoustic properties.


Subjects
Auditory Perception/physiology, Noise, Reaction Time/physiology, Vocalization, Animal, Acoustic Stimulation/methods, Acoustics, Adult, Animals, Female, Humans, Male, Middle Aged, Recognition (Psychology)/physiology
17.
Sci Rep; 10(1): 16390, 2020 Oct 2.
Article in English | MEDLINE | ID: mdl-33009439

ABSTRACT

The way the visual system processes different scales of spatial information has been widely studied, highlighting the dominant role of global over local processing. Recent studies addressing how the auditory system deals with local-global temporal information suggest a comparable processing scheme, but little is known about how this organization is modulated by long-term musical training, in particular regarding musical sequences. Here, we investigate how non-musicians and expert musicians detect local and global pitch changes in short hierarchical tone sequences structured across temporally segregated triplets made of musical intervals (local scale) forming a melodic contour (global scale) varying either in one direction (monotonic) or in both (non-monotonic). Our data reveal a clearly distinct organization between the two groups. Non-musicians show a global advantage (enhanced performance in detecting global over local modifications) and global-to-local interference effects (interference of global over local processing) only for monotonic sequences, while musicians exhibit the reversed pattern for non-monotonic sequences. These results suggest that the local-global processing scheme depends on the complexity of the melodic contour, and that long-term musical training induces a prominent perceptual reorganization that reshapes the initial global dominance to favour local information processing. This latter result supports the theory of "analytic" processing acquisition in musicians.


Subjects
Auditory Perception/physiology, Pitch Discrimination/physiology, Acoustic Stimulation/methods, Adult, Cognition/physiology, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Music, Pitch Perception/physiology, Reaction Time/physiology, Time Perception/physiology, Young Adult
18.
J Exp Psychol Appl; 14(3): 201-12, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18808274

ABSTRACT

It is well established that subjective judgments of the perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using an RT paradigm with two different concurrent visuomotor tracking tasks simulating driving conditions. Experiments 1 and 2 show that RT decreases as the interonset interval (IOI) decreases, where IOI is defined as the time elapsed from the onset of one sound pulse to the onset of the next. Experiment 3 shows that temporal irregularity between pulses can capture a listener's attention. These findings lead to concrete recommendations: IOI can be used to modulate warning-sound urgency, and temporal irregularity can provoke an arousal effect in listeners. The authors also argue that the RT paradigm provides a useful tool for clarifying some of the factors involved in alarm processing.


Subjects
Arousal, Attention, Auditory Perception, Automobile Driving/psychology, Psychomotor Performance, Reaction Time, Adult, Computer Simulation, Female, Humans, Judgment, Loudness Perception, Male, Motion Perception, Pattern Recognition, Visual, Time Perception
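The IOI manipulation is easy to make concrete: a warning sound is a train of tone pulses, and the IOI is the onset-to-onset spacing between consecutive pulses. A minimal generator (parameters are illustrative, not the study's exact stimuli; the urgency comments reflect the finding above that RT decreases as IOI decreases):

```python
import numpy as np

def pulse_train(ioi, pulse_dur, n_pulses, sr=8000, freq=1000.0):
    """Tone-pulse train; ioi is the onset-to-onset interval in seconds."""
    total = ioi * (n_pulses - 1) + pulse_dur
    out = np.zeros(int(round(sr * total)))
    t = np.arange(int(round(sr * pulse_dur))) / sr
    pulse = np.sin(2 * np.pi * freq * t)
    for k in range(n_pulses):
        start = int(round(k * ioi * sr))
        out[start:start + pulse.size] += pulse
    return out

urgent = pulse_train(ioi=0.15, pulse_dur=0.1, n_pulses=4)   # short IOI: more urgent
relaxed = pulse_train(ioi=0.40, pulse_dur=0.1, n_pulses=4)  # long IOI: less urgent
```

Shrinking `ioi` while keeping `pulse_dur` fixed packs the same pulses closer together, the manipulation that shortened listeners' RTs in Experiments 1 and 2.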
19.
PLoS One; 12(7): e0181786, 2017.
Article in English | MEDLINE | ID: mdl-28750071

ABSTRACT

Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. The first, observational study examined, with manual annotations, a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds). The second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested different roles for vocalizations and gestures: whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants' gestures.


Subjects
Gestures, Metaphor, Sound, Adolescent, Adult, Female, Humans, Male, Middle Aged, Pitch Perception, Sound Spectrography, Young Adult
20.
PLoS One; 11(12): e0168167, 2016.
Article in English | MEDLINE | ID: mdl-27992480

ABSTRACT

Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the vocal apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational "auditory sketches" (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and in some cases even to that with the referent sounds themselves. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. The analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.


Subjects
Imitative Behavior, Sound, Voice/physiology, Acoustic Stimulation, Acoustics, Adult, Animals, Auditory Perception/physiology, Female, Hearing/physiology, Humans, Male, Video Recording, Young Adult