Results 1 - 20 of 48
1.
Front Psychol ; 14: 1287334, 2023.
Article in English | MEDLINE | ID: mdl-38023037

ABSTRACT

Introduction: In musical affect research, there is considerable discussion on the best method to represent affective response. This discussion mainly revolves around the dimensional (valence, tension arousal, energy arousal) and discrete (anger, fear, sadness, happiness, tenderness) models of affect. Here, we compared these models' ability to capture self-reported affect in response to short, affectively ambiguous sounds. Methods: In two online experiments (n1 = 263, n2 = 152), participants rated perceived and induced affect in response to single notes (Exp 1) and chromatic scales (Exp 2), which varied across instrument family and pitch register. Additionally, participants completed questionnaires measuring pre-existing mood, trait empathy, Big-Five personality, musical sophistication, and musical preferences. Results: Rater consistency and agreement were high across all affect scales. Correlation and principal component analyses showed that two dimensions or two affect categories captured most of the variation in affective response. Canonical correlation and regression analyses also showed that energy arousal varied in a manner that was not captured by discrete affect ratings. Furthermore, all sources of individual differences were moderately correlated with all affect scales, particularly pre-existing mood and dimensional affect. Discussion: We conclude that when it comes to single notes and chromatic scales, the dimensions of valence and energy arousal best capture the perceived and induced affective response to affectively ambiguous sounds, although the role of individual differences should also be considered.
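The two-dimension result invites a compact illustration. Below is a minimal sketch of the principal component analysis step over an affect-ratings matrix; the data, dimensionality, and scale names are hypothetical stand-ins, not the study's materials.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical ratings: 200 stimuli rated on eight affect scales
# (three dimensional + five discrete), mirroring the study's design.
rng = np.random.default_rng(0)
scales = ["valence", "tension arousal", "energy arousal",
          "anger", "fear", "sadness", "happiness", "tenderness"]
ratings = rng.normal(size=(200, len(scales)))

# Standardize each scale, then ask how many components are needed
# to capture most of the variation in affective response.
z = StandardScaler().fit_transform(ratings)
pca = PCA().fit(z)
for k, var in enumerate(np.cumsum(pca.explained_variance_ratio_), start=1):
    print(f"{k} component(s): {var:.1%} of variance")
```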

2.
J Acoust Soc Am ; 153(2): 797, 2023 02.
Article in English | MEDLINE | ID: mdl-36859162

ABSTRACT

Timbre provides an important cue to identify musical instruments. Many timbral attributes covary with other parameters like pitch. This study explores listeners' ability to construct categories of instrumental sound sources from sounds that vary in pitch. Nonmusicians identified 11 instruments from the woodwind, brass, percussion, and plucked and bowed string families. In experiment 1, they were trained to identify instruments playing a pitch of C4, and in experiments 2 and 3, they were trained with a five-tone sequence (F#3-F#4), exposing them to the way timbre varies with pitch. Participants were required to reach a threshold of 75% correct identification in training. In the testing phase, successful listeners heard single tones (experiments 1 and 2) or three-tone sequences (A3-D#4; experiment 3) across each instrument's full pitch range to test their ability to generalize identification from the learned sound(s). Identification generalization over pitch varies a great deal across instruments. No significant differences were found between single-pitch and multi-pitch training or testing conditions. Identification rates can be predicted moderately well by spectrograms or modulation spectra. These results suggest that listeners use the most relevant acoustical invariance to identify musical instrument sounds, drawing also on previous experience with the tested instruments.


Subject(s)
Cues (Psychology); Learning; Humans; Generalization, Psychological; Acoustics
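The finding that spectrograms predict identification moderately well can be sketched as a representational-similarity computation. A minimal example with synthetic stand-in tones and librosa's mel spectrogram (the C4 training pitch follows the study; everything else is a hypothetical illustration):

```python
import numpy as np
import librosa

sr = 22050

def tone(f0, dur=1.0):
    """Synthetic harmonic tone standing in for a recorded instrument sound."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    return sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, 9))

def mel_profile(y):
    """Time-averaged log-mel spectrum as a crude spectral summary."""
    m = librosa.feature.melspectrogram(y=y, sr=sr)
    return np.log(m + 1e-10).mean(axis=1)

train = mel_profile(tone(261.6))        # C4, the training pitch
test = mel_profile(tone(523.3))         # C5, a pitch-shifted test tone
r = np.corrcoef(train, test)[0, 1]      # higher r -> better predicted generalization
print(f"spectral similarity across pitch: r = {r:.2f}")
```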
3.
Front Psychol ; 13: 835401, 2022.
Article in English | MEDLINE | ID: mdl-35432077

ABSTRACT

Two experiments were conducted for the derivation of psychophysical scales of the following audio descriptors: spectral centroid, spectral spread, spectral skewness, odd-to-even harmonic ratio, spectral deviation, and spectral slope. The stimulus sets for each audio descriptor were synthesized and (wherever possible) independently controlled through appropriate synthesis techniques. Partition scaling methods were used in both experiments, and the scales were constructed by fitting well-behaved functions to the listeners' ratings. In the first experiment, the listeners' task was the estimation of the relative differences between successive levels of a particular audio descriptor. The median values of listeners' ratings increased with increasing feature values, which confirmed listeners' abilities to estimate intervals. However, the reliability of the derived interval scales varied widely depending on the stimulus spacing in each trial. In the second experiment, listeners had control over the stimulus values and were asked to divide the presented range of values into perceptually equal intervals, which yields a ratio scale. For every descriptor, the reliability of the derived ratio scales was excellent. The unit of a particular ratio scale was assigned empirically so as to facilitate qualitative comparisons between the scales of all audio descriptors. The construction of psychophysical scales based on univariate stimuli allowed for the establishment of cause-and-effect relations between audio descriptors and perceptual dimensions, contrary to past research that has relied on multivariate stimuli and has only examined the correlations between the two. Most importantly, this study provides an understanding of the ways in which the sensation magnitudes of several audio descriptors are apprehended.
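The scale-construction step lends itself to a short sketch: fit a well-behaved function (here a Stevens-type power law, one plausible choice) to median partition-scaling ratings. The descriptor values and ratings below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical partition-scaling data: physical values of one descriptor
# (e.g., spectral centroid in Hz) and median listener ratings.
phys = np.array([500.0, 1000.0, 2000.0, 4000.0, 8000.0])
ratings = np.array([1.0, 2.1, 3.8, 6.2, 9.5])

def power_law(x, a, b):
    # Stevens-type power function, one candidate "well-behaved" form.
    return a * x ** b

(a, b), _ = curve_fit(power_law, phys, ratings, p0=(0.01, 1.0))
print(f"fitted scale: psi = {a:.4f} * value^{b:.2f}")
```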

4.
Front Psychol ; 13: 796422, 2022.
Article in English | MEDLINE | ID: mdl-35432090

ABSTRACT

Audio features such as inharmonicity, noisiness, and spectral roll-off have been identified as correlates of "noisy" sounds. However, such features are likely involved in the experience of multiple semantic timbre categories of varied meaning and valence. This paper examines the relationships of stimulus properties and audio features with the semantic timbre categories raspy/grainy/rough, harsh/noisy, and airy/breathy. Participants (n = 153) rated a random subset of 52 stimuli from a set of 156 approximately 2-s orchestral instrument sounds representing varied instrument families (woodwinds, brass, strings, percussion), registers (octaves 2 through 6, where middle C is in octave 4), and both traditional and extended playing techniques (e.g., flutter-tonguing, bowing at the bridge). Stimuli were rated on the three semantic categories of interest, as well as on perceived playing exertion and emotional valence. Correlational analyses demonstrated a strong negative relationship between positive valence and perceived physical exertion. Exploratory linear mixed models revealed significant effects of extended technique and pitch register on valence, the perception of physical exertion, raspy/grainy/rough, and harsh/noisy. Instrument family was significantly related to ratings of airy/breathy. With an updated version of the Timbre Toolbox (R-2021 A), we used 44 summary audio features, extracted from the stimuli using spectral and harmonic representations, as input for various models built to predict mean semantic ratings for each sound on the three semantic categories, on perceived exertion, and on valence. Random Forest models predicting semantic ratings from audio features outperformed Partial Least-Squares Regression models, consistent with previous results suggesting that non-linear methods are advantageous in timbre semantic predictions using audio features. Relative Variable Importance measures from the models among the three semantic categories demonstrate that although these related semantic categories are associated in part with overlapping features, they can be differentiated through individual patterns of audio feature relationships.
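The Random Forest versus PLSR comparison can be sketched with scikit-learn. The feature matrix below is random stand-in data with a mildly nonlinear target, not Timbre Toolbox output; the point is only the cross-validated comparison.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in: 156 sounds x 44 audio features predicting a mean
# semantic rating (e.g., harsh/noisy) per sound, with a nonlinear component.
rng = np.random.default_rng(1)
X = rng.normal(size=(156, 44))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=156)

models = [("PLSR", PLSRegression(n_components=5)),
          ("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```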

5.
J Acoust Soc Am ; 150(5): 3461, 2021 11.
Article in English | MEDLINE | ID: mdl-34852574

ABSTRACT

Temporal audio features play an important role in timbre perception and sound identification. An experiment was conducted to test whether listeners are able to rank-order synthesized stimuli over a wide range of feature values restricted to the range found in instrument sounds. The following audio descriptors were tested: attack and decay time, temporal centroid with fixed attack and decay time, and inharmonicity. The results indicate that these descriptors are amenable to ordinal scaling. The spectral envelope played an important role when ordering stimuli with various inharmonicity levels, whereas the shape of the amplitude envelope was an important parameter when ordering stimuli with different attack and decay times. Linear amplitude envelopes made the ordering of attack times easier and caused the least confusion among listeners, whereas exponential envelopes were more effective when ordering decay times. Although there were many confusions in ordering short attack and decay times, listeners performed well in ordering temporal centroids even at very short attack and decay times. A meta-analysis of six timbre spaces was then conducted to test the explanatory power of attack time versus the attack temporal centroid along a perceptual dimension. The results indicate that the attack temporal centroid has greater overall explanatory power than attack time itself.


Subject(s)
Auditory Perception; Music; Acoustic Stimulation; Sound
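The temporal centroid at issue is simply the amplitude-weighted mean time of the envelope. A minimal sketch with hypothetical linear and exponential envelopes:

```python
import numpy as np

sr = 44100  # sample rate in Hz

def temporal_centroid(env, sr):
    """Amplitude-weighted mean time of an envelope, in seconds."""
    t = np.arange(len(env)) / sr
    return np.sum(t * env) / np.sum(env)

# One-second envelopes: a linear attack-decay shape and an exponential decay.
t = np.arange(sr) / sr
linear_env = np.concatenate([np.linspace(0, 1, sr // 10),
                             np.linspace(1, 0, sr - sr // 10)])
exp_env = np.exp(-5 * t)

print(f"linear attack/decay envelope: {temporal_centroid(linear_env, sr):.3f} s")
print(f"exponential decay envelope:   {temporal_centroid(exp_env, sr):.3f} s")
```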
6.
Front Psychol ; 12: 732865, 2021.
Article in English | MEDLINE | ID: mdl-34659045

ABSTRACT

Timbre is one of the psychophysical cues that has a great impact on affect perception, although it has not been the subject of much cross-cultural research. Our aim is to investigate the influence of timbre on the perception of affect conveyed by Western and Chinese classical music using a cross-cultural approach. Four listener groups (Western musicians, Western nonmusicians, Chinese musicians, and Chinese nonmusicians; 40 per group) were presented with 48 musical excerpts, which included two musical excerpts (one piece of Chinese and one piece of Western classical music) per affect quadrant from the valence-arousal space, representing angry, happy, peaceful, and sad emotions and played with six different instruments (erhu, dizi, pipa, violin, flute, and guitar). Participants reported ratings of valence, tension arousal, energy arousal, preference, and familiarity on continuous scales ranging from 1 to 9. An ANOVA revealed that participants' cultural backgrounds had a greater impact on affect perception than their musical backgrounds, and that musicians distinguished more clearly between a perceived measure (valence) and a felt measure (preference) than did nonmusicians. We applied linear partial least squares regression to explore the relation between affect perception and acoustic features. The results show that the important acoustic features for valence and energy arousal are similar, relating mostly to spectral variation, the shape of the temporal envelope, and the dynamic range. The important acoustic features for tension arousal describe the shape of the spectral envelope, noisiness, and the shape of the temporal envelope. The similarity of perceived affect ratings across instruments can be explained by shared acoustic features, which arise from the physical characteristics of specific instruments and performing techniques.
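The culture-by-musicianship comparison can be sketched as a two-way ANOVA in statsmodels. The simulated ratings below (with a built-in culture effect) are hypothetical stand-ins for the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2 (culture) x 2 (musicianship) design, 40 listeners per group,
# one aggregate valence rating per listener.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "culture": np.repeat(["Western", "Chinese"], 80),
    "musician": np.tile(np.repeat(["musician", "nonmusician"], 40), 2),
})
df["valence"] = (rng.normal(5, 1, 160)
                 + 0.8 * (df["culture"] == "Western")     # larger cultural effect
                 + 0.2 * (df["musician"] == "musician"))  # smaller training effect

model = smf.ols("valence ~ C(culture) * C(musician)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```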

7.
J Acoust Soc Am ; 149(6): 3785, 2021 06.
Article in English | MEDLINE | ID: mdl-34241417

ABSTRACT

A psychophysical experiment was conducted to perceptually validate several spectral audio features through ordinal scaling: spectral centroid, spectral spread, spectral skewness, odd-to-even harmonic ratio, spectral slope, and harmonic spectral deviation. Several sets of stimuli per audio feature were synthesized at different fundamental frequencies and spectral centroids by controlling (wherever possible) each spectral feature independently of the others, thus isolating the effect that each feature had on the stimulus rankings within each sound set. Overall, listeners were able to order stimuli varying along all of the tested spectral features when presented with an appropriate spacing of feature values. For the specific cases of stimuli in which the ordering task partially failed, psychophysical interpretations are provided to explain listeners' confusions. The results of the ordinal scaling experiment outline trajectories of spectral features that correspond to listeners' perceptions and suggest a number of sound synthesis parameters that could carry timbral contour information.


Subject(s)
Music; Acoustic Stimulation; Sound
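The synthesis idea, controlling one spectral feature while observing its effect, can be sketched with additive synthesis. Below, the spectral slope is fixed and the resulting spectral centroid is reported; all parameter values are hypothetical.

```python
import numpy as np

sr, f0, n_harm = 44100, 220.0, 20
t = np.arange(sr) / sr  # one second

def harmonic_tone(slope_db_per_harm):
    """Additive synthesis with amplitudes following a fixed spectral slope."""
    amps = 10 ** (slope_db_per_harm * np.arange(n_harm) / 20.0)
    wave = sum(a * np.sin(2 * np.pi * f0 * (h + 1) * t)
               for h, a in enumerate(amps))
    return amps, wave

for slope in (-1.0, -3.0, -6.0):
    amps, _ = harmonic_tone(slope)
    freqs = f0 * np.arange(1, n_harm + 1)
    centroid = np.sum(freqs * amps) / np.sum(amps)  # amplitude-weighted mean frequency
    print(f"slope {slope:+.0f} dB/harmonic -> spectral centroid {centroid:.0f} Hz")
```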
8.
Nat Hum Behav ; 5(3): 369-377, 2021 03.
Article in English | MEDLINE | ID: mdl-33257878

ABSTRACT

Humans excel at using sounds to make judgements about their immediate environment. In particular, timbre is an auditory attribute that conveys crucial information about the identity of a sound source, especially for music. While timbre has primarily been considered to occupy a multidimensional space, unravelling the acoustic correlates of timbre remains a challenge. Here we re-analyse 17 datasets from studies published between 1977 and 2016 and observe that the original results are only partially replicable. We use a data-driven computational account to reveal the acoustic correlates of timbre. Human dissimilarity ratings are simulated with metrics learned on acoustic spectrotemporal modulation models inspired by cortical processing. We observe that timbre has both generic and experiment-specific acoustic correlates. These findings provide a broad overview of former studies on musical timbre and identify its relevant acoustic substrates according to biologically inspired models.


Subject(s)
Auditory Perception/physiology; Models, Biological; Music; Acoustics; Adult; Datasets as Topic; Humans
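A common route to a modulation power spectrum, consistent with the representation described here (window sizes and other details are assumptions), is a 2D Fourier transform of the log-magnitude spectrogram:

```python
import numpy as np
from scipy.signal import spectrogram

sr = 22050
y = np.random.default_rng(3).normal(size=sr)  # stand-in for a 1-s instrument tone

# Log-magnitude spectrogram, then a 2D Fourier transform of it: one axis of
# the result indexes spectral modulation, the other temporal modulation.
f, t, S = spectrogram(y, fs=sr, nperseg=512, noverlap=384)
log_S = np.log(S + 1e-10)
mps = np.abs(np.fft.fftshift(np.fft.fft2(log_S - log_S.mean())))
print("modulation power spectrum shape (spectral x temporal):", mps.shape)
```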
9.
Q J Exp Psychol (Hove) ; 72(6): 1422-1438, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30404574

ABSTRACT

Studies examining the formation of melodic and harmonic expectations during music listening have repeatedly demonstrated that a tonal context primes listeners to expect certain (tonally related) continuations over others. However, few such studies have (1) selected stimuli using ready examples of expectancy violation derived from real-world instances of tonal music, (2) provided a consistent account of the influence of sensory and cognitive mechanisms on tonal expectancies by comparing different computational simulations, or (3) combined melodic and harmonic representations in modelling cognitive processes of expectation. To resolve these issues, this study measures expectations for the most recurrent cadence patterns associated with tonal music and then simulates the reported findings using three sensory-cognitive models of auditory expectation. In Experiment 1, participants provided explicit retrospective expectancy ratings both before and after hearing the target melodic tone and chord of the cadential formula. In Experiment 2, participants indicated as quickly as possible whether those target events were in or out of tune relative to the preceding context. Across both experiments, cadences terminating with stable melodic tones and chords elicited the highest expectancy ratings and the fastest and most accurate responses. Moreover, the model simulations supported a cognitive interpretation of tonal processing, in which listeners with exposure to tonal music generate expectations as a consequence of the frequent (co-)occurrence of events on the musical surface.


Subject(s)
Anticipation, Psychological/physiology; Auditory Perception/physiology; Cognition/physiology; Music; Adult; Female; Humans; Male; Young Adult
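The cognitive interpretation rests on expectations learned from event co-occurrence. A toy bigram model in that spirit is sketched below; the study itself used IDyOM, and this corpus and code are purely illustrative.

```python
import math
from collections import defaultdict

# Toy corpus of melodies (pitch-class labels); hypothetical training data.
corpus = [["C", "D", "E", "F", "G", "C"],
          ["E", "F", "G", "D", "C"],
          ["G", "C", "D", "E"]]

counts = defaultdict(lambda: defaultdict(int))
for melody in corpus:
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1

def information_content(prev, nxt):
    """IC = -log2 P(next | previous); higher IC = less expected continuation."""
    total = sum(counts[prev].values())
    p = counts[prev][nxt] / total if total else 0.0
    return -math.log2(p) if p > 0 else float("inf")

print("IC of G -> C (frequent, cadence-like):", round(information_content("G", "C"), 2))
print("IC of G -> D (rarer):                 ", round(information_content("G", "D"), 2))
```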
10.
11.
Front Psychol ; 8: 587, 2017.
Article in English | MEDLINE | ID: mdl-28450846

ABSTRACT

The ability of a listener to recognize sound sources, and in particular musical instruments from the sounds they produce, raises the question of determining the acoustical information used to achieve such a task. It is now well known that the shapes of the temporal and spectral envelopes are crucial to the recognition of a musical instrument. More recently, Modulation Power Spectra (MPS) have been shown to be a representation that potentially explains the perception of musical instrument sounds. Nevertheless, the question of which specific regions of this representation characterize a musical instrument is still open. An identification task was applied to two subsets of musical instruments: tuba, trombone, cello, saxophone, and clarinet on the one hand, and marimba, vibraphone, guitar, harp, and viola pizzicato on the other. The sounds were processed by filtering their spectrotemporal modulations with 2D Gaussian windows, following a "molecular" approach known as the bubbles method. The regions of the MPS most relevant for identification were determined for each instrument. Overall, the instruments were correctly identified, and the lower values of spectrotemporal modulation emerged as the most important regions of the MPS for recognizing instruments. Interestingly, instruments that were confused with each other led to non-overlapping regions and were confused when they were filtered in the most salient region of the other instrument. These results suggest that musical instrument timbres are characterized by specific spectrotemporal modulations, information which could contribute to music information retrieval tasks such as automatic source recognition.
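The bubbles logic can be sketched in a few lines: random 2D Gaussian windows expose parts of the MPS, and a classification image relates exposure to identification success. The grid size, response rule, and "diagnostic" region below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (64, 64)  # stand-in MPS grid: spectral x temporal modulation bins

def gaussian_bubble(shape, center, sigma=6.0):
    """2D Gaussian window exposing one region of the MPS."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

# Simulated trials: identification succeeds when the bubble overlaps a
# hypothetical diagnostic region at low spectrotemporal modulations.
diagnostic = gaussian_bubble(shape, (8, 8), sigma=5.0)
correct_mass = np.zeros(shape)
total_mass = np.zeros(shape)
for _ in range(2000):
    mask = gaussian_bubble(shape, rng.integers(0, 64, size=2))
    total_mass += mask
    if (mask * diagnostic).sum() > 0.5:  # toy response rule
        correct_mass += mask

# Classification image: bins whose exposure predicts correct identification.
ci = correct_mass / (total_mass + 1e-9)
print("most diagnostic MPS bin (spectral, temporal):",
      np.unravel_index(ci.argmax(), ci.shape))
```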

12.
Front Psychol ; 8: 153, 2017.
Article in English | MEDLINE | ID: mdl-28228741

ABSTRACT

Composers often pick specific instruments to convey a given emotional tone in their music, partly due to their expressive possibilities, but also due to their timbres in specific registers and at given dynamic markings. Of interest to both music psychology and music informatics from a computational point of view is the relation between the acoustic properties that give rise to the timbre at a given pitch and the perceived emotional quality of the tone. Musician and nonmusician listeners were presented with 137 tones produced at a fixed dynamic marking (forte) on pitch class D# across each instrument's entire pitch range, using different playing techniques, for standard orchestral instruments drawn from the brass, woodwind, string, and pitched percussion families. They rated each tone on six analogical-categorical scales in terms of emotional valence (positive/negative and pleasant/unpleasant), energy arousal (awake/tired), tension arousal (excited/calm), preference (like/dislike), and familiarity. Linear mixed models revealed interactive effects of musical training, instrument family, and pitch register, with non-linear relations between pitch register and several dependent variables. Twenty-three audio descriptors from the Timbre Toolbox were computed for each sound and analyzed in two ways: linear partial least squares regression (PLSR) and nonlinear artificial neural net modeling. These two analyses converged in terms of the importance of various spectral, temporal, and spectrotemporal audio descriptors in explaining the emotion ratings, but some differences also emerged. Different combinations of audio descriptors make major contributions to the three emotion dimensions, suggesting that they are carried by distinct acoustic properties. Valence is more positive with lower spectral slopes, a greater emergence of strong partials, and an amplitude envelope with a sharper attack and earlier decay. Higher tension arousal is carried by brighter sounds, more spectral variation, and gentler attacks. Greater energy arousal is associated with brighter sounds with higher spectral centroids and a slower decrease of the spectral slope, as well as with greater spectral emergence. The divergences between linear and nonlinear approaches are discussed.
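The linear mixed models described here can be sketched with statsmodels' mixedlm. The simulated design below (random listener intercepts, a register effect) is a hypothetical stand-in for the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical design: ratings of tones varying in instrument family and
# pitch register, with a random intercept per listener.
rng = np.random.default_rng(5)
n_listeners, n_tones = 30, 20
families = rng.choice(["brass", "string", "woodwind", "percussion"], n_tones)
registers = rng.integers(2, 7, n_tones)  # octaves 2-6
df = pd.DataFrame({
    "listener": np.repeat(np.arange(n_listeners), n_tones),
    "family": np.tile(families, n_listeners),
    "register": np.tile(registers, n_listeners),
})
df["valence"] = (5 + 0.3 * df["register"]                                # register effect
                 + np.repeat(rng.normal(0, 0.8, n_listeners), n_tones)   # listener offsets
                 + rng.normal(0, 0.5, len(df)))                          # trial noise

model = smf.mixedlm("valence ~ C(family) * register", df, groups=df["listener"]).fit()
print(model.summary())
```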

13.
Memory ; 25(4): 550-564, 2017 04.
Article in English | MEDLINE | ID: mdl-27314886

ABSTRACT

We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic ones, and this advantage was particularly large in the medial serial position. There was a strong correlation between the correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance under concurrent articulatory suppression, under visual interference, and in a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of the participants' musical training or the type of sounds. Our results suggest that familiarity with sound-source categories and attention play important roles in short-term memory for timbre, which rules out accounts based solely on sensory persistence.


Subject(s)
Attention/physiology; Auditory Perception/physiology; Memory, Short-Term/physiology; Recognition, Psychology; Acoustic Stimulation; Female; Humans; Male; Music; Young Adult
14.
J Acoust Soc Am ; 140(1): 409, 2016 07.
Article in English | MEDLINE | ID: mdl-27475165

ABSTRACT

In two experiments, we examined similarity ratings and categorization performance with recorded impact sounds representing three material categories (wood, metal, glass) manipulated by three different categories of action (drop, strike, rattle). Previous research focusing on single impact sounds suggests that temporal cues related to damping are essential for material discrimination, but spectral cues are potentially more efficient for discriminating materials manipulated by actions that include multiple impacts (e.g., dropping, rattling). Perceived similarity between material categories across different actions was correlated with the distribution of long-term spectral energy (spectral centroid). Similarity between action categories was described by the temporal distribution of envelope energy (temporal centroid) or by the density of impacts. Moreover, perceptual similarity correlated with the pattern of confusions in categorization judgments. Listeners tended to confuse materials with similar spectral centroids, and actions with similar temporal centroids and onset densities. To confirm the influence of these different features, spectral cues were removed by applying the envelopes of the original sounds to a broadband noise carrier. Without spectral cues, listeners retained sensitivity to action categories but not to material categories. Conversely, listeners recognized material but not action categories after envelope scrambling that preserved long-term spectral content.
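The envelope-on-noise manipulation can be sketched with a Hilbert envelope imposed on a broadband carrier; the "impact" below is a synthetic stand-in for the recorded sounds.

```python
import numpy as np
from scipy.signal import hilbert

sr = 44100
rng = np.random.default_rng(6)

# Synthetic stand-in impact: an exponentially damped 440-Hz tone.
t = np.arange(sr) / sr
impact = np.exp(-8 * t) * np.sin(2 * np.pi * 440 * t)

# Extract the amplitude envelope and impose it on broadband noise,
# preserving temporal cues while destroying spectral ones.
envelope = np.abs(hilbert(impact))
noise_version = envelope * rng.normal(size=len(impact))

# The temporal structure survives the carrier swap.
r = np.corrcoef(envelope, np.abs(hilbert(noise_version)))[0, 1]
print(f"envelope correlation across carriers: r = {r:.2f}")
```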

15.
Psychophysiology ; 53(6): 891-904, 2016 06.
Article in English | MEDLINE | ID: mdl-26927928

ABSTRACT

A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions.


Subject(s)
Auditory Perception/physiology; Music/psychology; Psychoacoustics; Acoustic Stimulation; Adult; Face/physiology; Female; Galvanic Skin Response; Heart Rate; Humans; Male; Motor Activity; Respiratory Rate; Young Adult
16.
Exp Brain Res ; 234(4): 1145-58, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26790425

ABSTRACT

Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.


Subject(s)
Acoustic Stimulation/methods; Auditory Perception/physiology; Movement/physiology; Wrist/physiology; Adolescent; Adult; Female; Humans; Male; Physical Stimulation/methods; Young Adult
17.
J Exp Psychol Hum Percept Perform ; 42(4): 594-609, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26594881

ABSTRACT

This research explored the relations between the predictability of musical structure, expressive timing in performance, and listeners' perceived musical tension. Studies analyzing the influence of expressive timing on listeners' affective responses have been constrained by the fact that, in most pieces, the notated durations limit performers' interpretive freedom. To circumvent this issue, we focused on the unmeasured prelude, a semi-improvisatory genre without notated durations. In Experiment 1, 12 professional harpsichordists recorded an unmeasured prelude on a harpsichord equipped with a MIDI console. Melodic expectation was assessed using a probabilistic model (IDyOM [Information Dynamics of Music]) whose expectations have been previously shown to match closely those of human listeners. Performance timing information was extracted from the MIDI data using a score-performance matching algorithm. Time-series analyses showed that, in a piece with unspecified note durations, the predictability of melodic structure measurably influenced tempo fluctuations in performance. In Experiment 2, another 10 harpsichordists, 20 nonharpsichordist musicians, and 20 nonmusicians listened to the recordings from Experiment 1 and rated the perceived tension continuously. Granger causality analyses were conducted to investigate predictive relations among melodic expectation, expressive timing, and perceived tension. Although melodic expectation, as modeled by IDyOM, modestly predicted perceived tension for all participant groups, neither of its components, information content or entropy, was Granger causal. In contrast, expressive timing was a strong predictor and was Granger causal. However, because melodic expectation was also predictive of expressive timing, our results outline a complete chain of influence from predictability of melodic structure via expressive performance timing to perceived musical tension.


Subject(s)
Auditory Perception; Music/psychology; Time Perception; Adult; Female; Humans; Male; Middle Aged; Young Adult
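Granger-causal analysis of the kind reported here can be sketched with statsmodels; the timing and tension series below are simulated with a built-in lag and are not the experiment's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated series: expressive timing (tempo fluctuation) and continuous
# tension ratings, with tension lagging timing by two samples.
rng = np.random.default_rng(7)
timing = rng.normal(size=300).cumsum()
tension = np.roll(timing, 2) + rng.normal(scale=0.5, size=300)

# statsmodels tests whether the second column Granger-causes the first,
# i.e., whether past timing improves prediction of current tension.
data = pd.DataFrame({"tension": tension, "timing": timing})
res = grangercausalitytests(data[["tension", "timing"]], maxlag=4, verbose=False)
print("lag-2 F-test p-value:", res[2][0]["ssr_ftest"][1])
```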
18.
J Acoust Soc Am ; 140(6): EL478, 2016 12.
Article in English | MEDLINE | ID: mdl-28039992

ABSTRACT

Modulation Power Spectra include dimensions of spectral and temporal modulation that contribute significantly to the perception of musical instrument timbres. Nevertheless, it remains unknown whether each instrument's identity is characterized by specific regions in this representation. A recognition task was applied to tuba, trombone, cello, saxophone, and clarinet sounds resynthesized with filtered spectrotemporal modulations. The most relevant parts of this representation for instrument identification were determined for each instrument. In addition, instruments that were confused with each other led to non-overlapping spectrotemporal modulation regions, suggesting that musical instrument timbres are characterized by specific spectrotemporal modulations.

19.
Front Psychol ; 6: 1977, 2015.
Article in English | MEDLINE | ID: mdl-26779086

ABSTRACT

This paper investigates the role of acoustic and categorical information in timbre dissimilarity ratings. Using a Gammatone-filterbank-based sound transformation, we created tones that were rated as less familiar than recorded tones from orchestral instruments and that were harder to associate with an unambiguous sound source (Experiment 1). A subset of transformed tones, a set of orchestral recordings, and a mixed set were then rated on pairwise dissimilarity (Experiment 2A). We observed that recorded instrument timbres clustered into subsets that distinguished timbres according to acoustic and categorical properties. For the subset of cross-category comparisons in the mixed set, we observed asymmetries in the distribution of ratings, as well as a sharp drop in inter-rater agreement. These effects were replicated in a more robust within-subjects design (Experiment 2B) and cannot be explained by acoustic factors alone. We finally introduced a novel model of timbre dissimilarity based on partial least-squares regression that compared the contributions of both acoustic and categorical timbre descriptors. The best model fit (R² = 0.88) was achieved when both types of descriptors were taken into account. These findings are interpreted as evidence for an interplay of acoustic and categorical information in timbre dissimilarity perception.
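The combined acoustic-plus-categorical model can be sketched by appending one-hot category codes to the acoustic descriptors before PLS regression; all data below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in: acoustic descriptors plus a source-category label
# per tone, jointly predicting a mean dissimilarity rating.
rng = np.random.default_rng(8)
n = 120
acoustic = rng.normal(size=(n, 10))
labels = ["string", "wind", "transformed"]
category = rng.choice(labels, size=n)
one_hot = np.column_stack([(category == lab).astype(float) for lab in labels])
X_full = np.hstack([acoustic, one_hot])
y = acoustic[:, 0] + 1.5 * one_hot[:, 2] + rng.normal(scale=0.3, size=n)

for name, feats in [("acoustic only", acoustic), ("acoustic + categorical", X_full)]:
    r2 = cross_val_score(PLSRegression(n_components=4), feats, y, cv=5).mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```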

20.
PLoS One ; 9(12): e112552, 2014.
Article in English | MEDLINE | ID: mdl-25474036

ABSTRACT

The role of the auditory and tactile modalities involved in violin playing and evaluation was investigated in an experiment employing a blind violin evaluation task under three conditions: i) normal playing conditions, ii) playing with auditory masking, and iii) playing with vibrotactile masking. Under each condition, 20 violinists evaluated five violins according to criteria related to violin playing and sound characteristics and rated their overall quality and relative preference. Results show that both auditory and vibrotactile feedback are important in the violinists' evaluations, but that their relative importance depends on the violinist, the violin, and the type of evaluation (criteria ratings versus preference). Notably, the overall quality ratings were accurately predicted by the criteria ratings, which also proved perceptually relevant to violinists, but correlated poorly with the preference ratings; this suggests that the two types of ratings (overall quality versus preference) may stem from different decision-making strategies. Furthermore, the experimental design confirmed that violinists agree more on the importance of criteria in their overall evaluation than on their actual ratings for different violins. In particular, greater agreement was found on the importance of criteria related to the sound of the violin. Nevertheless, this study reveals fundamental differences in the way players interpret and evaluate each criterion, which may explain why correlating physical properties with perceptual properties has so far proven challenging in the field of musical acoustics.


Subject(s)
Cognition/physiology; Music; Touch/physiology; Acoustics; Hand/physiology; Humans; Sound; Vibration