Results 1 - 16 of 16
1.
IEEE Trans Vis Comput Graph ; 29(11): 4472-4482, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782609

ABSTRACT

In immersive Audio Augmented Reality (AAR), a virtual sound source should be indistinguishable from the existing real ones. This property can be evaluated with the co-immersion criterion, which encompasses scenes constituted by arbitrary configurations of real and virtual objects. Accordingly, we introduce the term Audio Augmented Virtuality (AAV) to describe a fully virtual environment consisting of auditory content captured from the real world and augmented by synthetic sound generation. We propose an experimental design in AAV investigating how simplified late reverberation (LR) affects the co-immersion of a sound source. Participants listened to simultaneous virtual speakers dynamically rendered through spatial Room Impulse Responses, and were asked to detect the presence of an impostor, i.e., a speaker rendered with one of two simplified LR conditions. Detection rates were close to chance level, especially for one condition, suggesting that the simplified LR has a limited influence on co-immersion in the evaluated AAV scenes. This methodology can be straightforwardly extended to different acoustic scenes, complexities (i.e., numbers of simultaneous speakers), and rendering parameters, in order to further investigate the requirements for immersive audio technologies in AAR and AAV applications.
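The abstract reports detection rates "close to chance level" without naming a statistical test; a standard way to check whether an observed detection rate is distinguishable from chance is an exact binomial test. A minimal plain-Python sketch (the function name and the 50% chance level are assumptions, not from the paper):

```python
from math import comb

def binom_p_two_sided(k, n, p0=0.5):
    """Exact two-sided binomial test: is a detection rate of k/n
    distinguishable from chance level p0? Sums the probabilities of
    all outcomes no more likely than the observed one."""
    pmf = [comb(n, i) * p0 ** i * (1 - p0) ** (n - i) for i in range(n + 1)]
    return sum(pr for pr in pmf if pr <= pmf[k] * (1 + 1e-12))
```

For example, 5 detections out of 10 trials gives p = 1.0 (indistinguishable from chance), while 10 out of 10 gives p ≈ 0.002.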

2.
Multimed Tools Appl ; 81(22): 32371-32391, 2022.
Article in English | MEDLINE | ID: mdl-35437421

ABSTRACT

This study focuses on the perception of music performances when contextual factors, such as room acoustics and instrument, change. We propose to distinguish the concept of "performance" from that of "interpretation", which expresses the "artistic intention". To assess this distinction, we carried out an experimental evaluation in which 91 subjects listened to various audio recordings created by resynthesizing MIDI data obtained through Automatic Music Transcription (AMT) systems and a sensorized acoustic piano. During the resynthesis, we simulated different contexts and asked listeners to evaluate how much the interpretation changes when the context changes. Results show that: (1) the MIDI format alone cannot completely capture the artistic intention of a music performance; (2) the usual objective evaluation measures based on MIDI data correlate poorly with the average subjective evaluation. To bridge this gap, we propose a novel measure that is meaningfully correlated with the outcome of the tests. In addition, we investigate multimodal machine learning by providing a new score-informed AMT method and propose an approximation algorithm for the p-dispersion problem.
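The abstract does not define its novel measure, but "correlation with the average subjective evaluation" is conventionally quantified with Pearson's r between an objective score and the mean listener rating. A minimal baseline sketch (names and usage are illustrative, not from the paper):

```python
def pearson_r(xs, ys):
    """Pearson correlation between an objective measure computed on
    MIDI data (xs) and average subjective ratings (ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A measure that "presents low correlations" would yield |r| near 0 on such paired data; the proposed measure would yield |r| closer to 1.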

3.
J Acoust Soc Am ; 142(5): 2953, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29195444

ABSTRACT

Two experiments were conducted on an upright and a grand piano, both either producing string vibrations or remaining silent after the initial keypress, while pianists listened to feedback from a synthesizer through insulating headphones. In a quality experiment, participants unaware of the silent mode were asked to play freely and then rate the instrument according to a set of attributes and general preference. Participants preferred the vibrating over the silent setup, and preference ratings were associated with auditory attributes of richness and naturalness in the low and middle ranges. Another experiment on the same setup measured the detection of vibrations at the keyboard while pianists played notes and chords of varying dynamics and duration. Sensitivity to string vibrations was highest in the lowest register and gradually decreased up to note D5. After the percussive transient, the tactile stimuli exhibited spectral peaks of acceleration whose perceptibility was demonstrated by tests conducted in active touch conditions. The two experiments confirm that piano performers perceive vibratory cues from the strings, mediated by spectral and spatial summation in the Pacinian system of their fingertips, and suggest that such cues play a role in the evaluation of the instrument's quality.


Subjects
Auditory Perception, Fingers/innervation, Music, Pacinian Corpuscles/physiology, Touch Perception, Touch, Acoustic Stimulation, Adult, Auditory Threshold, Cues, Female, Humans, Judgment, Loudness Perception, Male, Motion, Pitch Perception, Sound, Time Factors, Vibration
4.
J Acoust Soc Am ; 141(4): EL375, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28464620

ABSTRACT

Stimulus order has been reported to affect perceived loudness. This letter investigates how temporal order affects distance discrimination of receding and approaching pairs of sound sources rendered binaurally in the anechoic near-field. Individual discrimination thresholds for different virtual locations were measured through an adaptive procedure. The threshold values show a bias toward approaching stimuli for closer reference distances (≤50 cm) and toward receding stimuli for farther reference distances (100 cm), but only when absolute intensity cues are available. The results show how an illusion of loudness can translate into an illusion of perceived relative distance.
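The "adaptive procedure" used to measure individual discrimination thresholds is not specified in the abstract; a common choice in such experiments is a transformed up-down staircase, e.g. 2-down/1-up, which converges on the 70.7%-correct point. A sketch with a simulated observer (all parameter values and the psychometric function are assumptions):

```python
import random

def staircase_threshold(correct_prob, start=20.0, step=2.0,
                        n_reversals=8, rng=None):
    """2-down/1-up staircase converging on the ~70.7%-correct point.

    correct_prob(delta) -> probability of a correct response at
    distance difference `delta` (a hypothetical psychometric function).
    Returns the mean of `delta` at the last reversals."""
    rng = rng or random.Random(0)
    delta, n_correct, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if rng.random() < correct_prob(delta):   # simulated correct trial
            n_correct += 1
            if n_correct == 2:                   # two correct -> harder
                n_correct = 0
                if direction == +1:              # descending turn: reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, 0.0)
        else:                                    # one error -> easier
            n_correct = 0
            if direction == -1:                  # ascending turn: reversal
                reversals.append(delta)
            direction = +1
            delta += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

With a steep psychometric function centered at 10 cm, the returned threshold settles near that region.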

5.
J Acoust Soc Am ; 139(5): 2489, 2016 May.
Article in English | MEDLINE | ID: mdl-27250145

ABSTRACT

The scattering around the human pinna that is captured by the Head-Related Transfer Functions (HRTFs) is a complex problem that creates uncertainties in both acoustical measurements and simulations. Within the simulation framework of Finite Difference Time Domain (FDTD) with axis-aligned staircase boundaries resulting from a voxelization process, the voxelization-based uncertainty propagating in the HRTF-captured sound field is quantified for one solid and two surface voxelization algorithms. Simulated results utilizing a laser-scanned mesh of Knowles Electronics Manikin for Acoustic Research (KEMAR) show that in the context of complex geometries with local topology comparable to grid spacing such as the human pinna, the voxelization-related uncertainties in simulations emerge at lower frequencies than the generally used accuracy bandwidths. Numerical simulations show that the voxelization process induces both random error and algorithm-dependent bias in the simulated HRTF spectral features. Frequencies fr below which the random error is bounded by various dB thresholds are estimated and predicted. Particular shortcomings of the used voxelization algorithms are identified and the influence of the surface impedance on the induced errors is studied. Simulations are also validated against measurements.
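The paper's 3-D FDTD scheme and voxelized boundaries are not reproduced here, but the underlying update rule can be illustrated in one dimension. A minimal leapfrog scheme for the 1-D wave equation with rigid (perfectly reflecting) ends; the grid size, Courant number, and initial pulse are illustrative only:

```python
import math

def fdtd_1d(n_cells=200, n_steps=400, c=343.0, dx=0.01):
    """Leapfrog FDTD for the 1-D wave equation with rigid ends.

    Courant number lam = c*dt/dx; stability in 1-D requires lam <= 1.
    Returns the pressure field after n_steps."""
    lam = 1.0                       # at the 1-D stability limit
    p_prev = [0.0] * n_cells
    p = [0.0] * n_cells
    # Gaussian pressure pulse in the middle, zero initial velocity
    for i in range(n_cells):
        p[i] = math.exp(-((i - n_cells // 2) * dx / 0.02) ** 2)
        p_prev[i] = p[i]
    for _ in range(n_steps):
        p_next = [0.0] * n_cells
        for i in range(1, n_cells - 1):
            p_next[i] = (2.0 * p[i] - p_prev[i]
                         + lam ** 2 * (p[i + 1] - 2.0 * p[i] + p[i - 1]))
        # rigid boundary: zero pressure gradient at the walls
        p_next[0], p_next[-1] = p_next[1], p_next[-2]
        p_prev, p = p, p_next
    return p
```

In 3-D, the staircase approximation of curved boundaries (the voxelization step studied in the paper) is exactly where such rigid-wall updates deviate from the true geometry.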


Subjects
Acoustics, Computer Simulation, Ear Auricle/physiology, Head/physiology, Models, Theoretical, Signal Processing, Computer-Assisted, Sound, Algorithms, Ear Auricle/anatomy & histology, Head/anatomy & histology, Humans, Manikins, Monte Carlo Method, Motion, Numerical Analysis, Computer-Assisted, Scattering, Radiation, Time Factors
6.
Exp Brain Res ; 234(4): 1145-58, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26790425

ABSTRACT

Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Movement/physiology, Wrist/physiology, Adolescent, Adult, Female, Humans, Male, Physical Stimulation/methods, Young Adult
7.
Front Psychol ; 6: 1369, 2015.
Article in English | MEDLINE | ID: mdl-26441745

ABSTRACT

Although acoustic frequency is not a spatial property of physical objects, in common language pitch, i.e., the psychological correlate of frequency, is often labeled spatially ("high in pitch" or "low in pitch"). Pitch height is known to modulate (and interact with) participants' responses when they are asked to judge spatial properties of non-auditory stimuli (e.g., visual) in a variety of behavioral tasks. In the current study we investigated whether the modulatory action of pitch height extends to the haptic estimation of the height of a virtual step. We implemented a hardware/software setup that renders virtual 3D objects (stair-steps) haptically through a PHANTOM device and provides real-time continuous auditory feedback depending on the user's interaction with the object. The haptic exploration was associated with a sinusoidal tone whose pitch varied as a function of the interaction point's height within (i) a narrower or (ii) a wider pitch range, or (iii) varied randomly, acting as a control audio condition. Explorations were also performed with no sound (haptic only). Participants were instructed to explore the virtual step freely and to communicate their height estimate by opening their thumb and index finger to mimic the step riser height, or verbally by reporting the height of the step riser in centimeters. We analyzed the role of musical expertise by dividing participants into non-musicians and musicians. Results showed no effect of musical pitch on highly realistic haptic feedback. Overall, there was no difference between the two groups in the proposed multimodal conditions. Additionally, we observed a different haptic response distribution between musicians and non-musicians when estimates in the auditory conditions were matched with estimates in the no-sound condition.
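A minimal sketch of the kind of height-to-pitch mapping the study describes, using an exponential law (equal musical interval per unit of height). The function name, base frequency, and range values are assumptions for illustration, not taken from the paper:

```python
def height_to_pitch(h, h_max=0.1, f_lo=300.0, range_octaves=1.0):
    """Map the haptic interaction point's height h (metres, 0..h_max)
    to the frequency (Hz) of a sine tone, rising exponentially over
    `range_octaves`. A narrower range would use a smaller exponent."""
    h = max(0.0, min(h_max, h))          # clamp to the step's extent
    return f_lo * 2.0 ** (range_octaves * h / h_max)
```

With these values the tone spans one octave (300-600 Hz) over the step; halving `range_octaves` produces the "narrower" condition.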

8.
Comput Intell Neurosci ; 2013: 586138, 2013.
Article in English | MEDLINE | ID: mdl-24382952

ABSTRACT

The goal of this paper is to address a topic that is rarely investigated in the literature of technology-assisted motor rehabilitation: the integration of auditory feedback in the rehabilitation device. After a brief introduction on rehabilitation robotics, the main concepts of auditory feedback are presented, together with relevant approaches, techniques, and technologies available in this domain. Current uses of auditory feedback in the context of technology-assisted rehabilitation are then reviewed. In particular, a comparative quantitative analysis over a large corpus of the recent literature suggests that the potential of auditory feedback in rehabilitation systems is currently largely underexploited. Finally, several scenarios are proposed in which the use of auditory feedback may help overcome some of the main limitations of current rehabilitation systems, in terms of user engagement, development of acute-phase and home rehabilitation devices, learning of more complex motor tasks, and improving activities of daily living.


Subjects
Feedback, Sensory/physiology, Movement/physiology, Robotics/methods, Stroke Rehabilitation, Humans
9.
J Neuroeng Rehabil ; 9: 79, 2012 Oct 10.
Article in English | MEDLINE | ID: mdl-23046683

ABSTRACT

BACKGROUND: This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. METHODS: We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information onto a different sensory channel (the visual channel) yielded effects comparable to those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. RESULTS: Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better than in the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel visuomotor perturbation, whereas controller-task-related sound feedback did not. This result was particularly interesting, as the subjects relied more on auditory augmentation of the visualized target motion (which was altered with respect to arm motion by the visuomotor perturbation) than on sound feedback provided in the controller space, i.e., information directly related to the effective motion of their arm. CONCLUSIONS: Our results indicate that auditory augmentation of visual feedback can be beneficial during the execution of upper limb movement exercises. In particular, we found that continuous task-related information provided through sound, in addition to visual feedback, can improve not only performance but also the learning of a novel visuomotor perturbation. However, error-related information provided through sound did not improve performance and negatively affected learning in the presence of the visuomotor perturbation.


Subjects
Acoustic Stimulation, Feedback, Psychological/physiology, Learning/physiology, Motor Skills/physiology, Psychomotor Performance/physiology, Adult, Algorithms, Biomechanical Phenomena, Computer Systems, Data Interpretation, Statistical, Female, Humans, Male, Photic Stimulation, Young Adult
10.
Exp Brain Res ; 221(1): 33-41, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22733310

ABSTRACT

The arm movement control system often relies on visual feedback to drive motor adaptation and to help specify desired trajectories. Here we studied whether kinematic errors that were indicated with auditory feedback could be used to control reaching in a way comparable with when vision was available. We randomized twenty healthy adult subjects to receive either visual or auditory feedback of their movement trajectory error with respect to a line as they performed timed reaching movements while holding a robotic joystick. We delivered auditory feedback using spatialized pink noise, the loudness and location of which reflected kinematic error. After a baseline period, we unexpectedly perturbed the reaching trajectories using a perpendicular viscous force field applied by the joystick. Subjects adapted to the force field as well with auditory feedback as they did with visual feedback and exhibited comparable after effects when the force field was removed. When we changed the reference trajectory to be a trapezoid instead of a line, subjects shifted their trajectories by about the same amount with either auditory or visual feedback of error. These results indicate that arm motor networks can readily incorporate auditory feedback to alter internal models and desired trajectories, a finding with implications for the organization of the arm motor control adaptation system as well as sensory substitution and motor training technologies.
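The exact loudness/location mapping of the spatialized pink noise is not given in the abstract; one plausible sketch maps the magnitude of the kinematic error to gain and its sign to stereo pan with a constant-power pan law. All names and threshold values here are illustrative assumptions:

```python
import math

def sonify_error(error_m, max_error_m=0.05):
    """Map a signed lateral kinematic error (metres) to (left, right)
    channel gains: |error| -> loudness, sign -> pan side, using a
    constant-power pan law. Silent when the trajectory is on target."""
    e = max(-max_error_m, min(max_error_m, error_m))  # clamp
    gain = abs(e) / max_error_m                       # loudness cue
    pan = e / max_error_m                             # -1 left .. +1 right
    theta = (pan + 1.0) * math.pi / 4.0               # 0 .. pi/2
    return gain * math.cos(theta), gain * math.sin(theta)
```

These gains would scale a pink-noise generator per audio block; zero error yields silence, matching the idea that feedback is driven by the error signal.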


Subjects
Adaptation, Psychological/physiology, Environment, Feedback, Sensory/physiology, Movement/physiology, Psychomotor Performance/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Biomechanical Phenomena, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
11.
J Acoust Soc Am ; 131(1): 897-906, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22280712

ABSTRACT

String and membrane vibrations cannot be considered as linear above a certain amplitude due to the variation in string or membrane tension. A relevant special case is when the tension is spatially constant and varies in time only in dependence of the overall string length or membrane surface. The most apparent perceptual effect of this tension modulation phenomenon is the exponential decay of pitch in time. Pitch glides due to tension modulation are an important timbral characteristic of several musical instruments, including the electric guitar and tom-tom drum, and many ethnic instruments. This paper presents a unified formulation to the tension modulation problem for one-dimensional (1-D) (string) and two-dimensional (2-D) (membrane) cases. In addition, it shows that the short-time average of the tension variation, which is responsible for pitch glides, is approximately proportional to the system energy. This proportionality allows the efficient physics-based sound synthesis of pitch glides. The proposed models require only slightly more computational resources than linear models as opposed to earlier tension-modulated models of higher complexity.
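Following the abstract's key observation, that the short-time average tension (and hence the squared fundamental frequency) tracks the decaying system energy, a pitch glide can be synthesized cheaply by letting the instantaneous frequency decay exponentially toward the nominal pitch. A sketch, with all parameter values illustrative rather than taken from the paper:

```python
import math

def pitch_glide(f0=110.0, glide_cents=80.0, decay_s=0.3,
                dur_s=1.0, sr=44100):
    """Sketch of a tension-modulated pitch glide: the instantaneous
    frequency starts sharp (high tension at large amplitude) and
    relaxes exponentially to f0 as the energy envelope decays."""
    ratio0 = 2.0 ** (glide_cents / 1200.0)    # initial sharpening
    out, phase = [], 0.0
    for n in range(int(dur_s * sr)):
        t = n / sr
        env = math.exp(-t / decay_s)          # ~ normalized energy decay
        f = f0 * (1.0 + (ratio0 - 1.0) * env) # frequency tracks energy
        phase += 2.0 * math.pi * f / sr       # phase accumulation
        out.append(math.sin(phase) * env)
    return out
```

A full physics-based model would drive `env` from the actual string state; tying it to a simple energy estimate is exactly the efficiency gain the abstract describes.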

12.
Neuroimage ; 56(3): 1480-92, 2011 Jun 01.
Article in English | MEDLINE | ID: mdl-21397699

ABSTRACT

When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.


Subjects
Brain/physiology, Motor Skills/physiology, Music/psychology, Psychomotor Performance/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Cerebellum/physiology, Cluster Analysis, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Parahippocampal Gyrus/physiology, Parietal Lobe/physiology, Photic Stimulation, Prefrontal Cortex/physiology, Psychophysics, Temporal Lobe/physiology, Young Adult
13.
IEEE Int Conf Rehabil Robot ; 2011: 5975373, 2011.
Article in English | MEDLINE | ID: mdl-22275577

ABSTRACT

This paper reports on an ongoing research collaboration between the University of Padua and the University of California, Irvine, on the use of continuous auditory feedback in robot-assisted neurorehabilitation of post-stroke patients. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement very basic auditory feedback interfaces. The results of this research show that generating a proper sound cue during robot-assisted movement training can help patients improve engagement, performance, and learning during the exercise.


Subjects
Learning/physiology, Robotics/instrumentation, Stroke Rehabilitation, Feedback, Sensory/physiology, Humans, Robotics/methods
14.
Exp Brain Res ; 198(2-3): 339-52, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19404620

ABSTRACT

We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in the experts' ability to detect asynchrony, especially at slower drumming tempos. In Experiment 2, an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, was attributable only to the novice group. Altogether, the results indicate that through musical practice we learn to ignore variations in stimulus characteristics that would otherwise affect our multisensory integration processes.


Subjects
Auditory Perception, Motion Perception, Music, Acoustic Stimulation, Adult, Analysis of Variance, Educational Status, Humans, Judgment, Linear Models, Male, Normal Distribution, Photic Stimulation, Psychophysics, Signal Detection, Psychological, Sound Spectrography, Time Factors, Video Recording, Young Adult
15.
Med Eng Phys ; 24(7-8): 453-60, 2002.
Article in English | MEDLINE | ID: mdl-12237039

ABSTRACT

A glottal model based on physical constraints is proposed. The model describes the vocal fold as a simple oscillator, i.e. a damped mass-spring system. The oscillator is coupled with a nonlinear block, accounting for fold interaction with the airflow. The nonlinear block is modelled as a regressor-based functional with weights to be identified, and a pitch-synchronous identification procedure is outlined. The model is used to analyse voiced sounds from normal and from pathological voices, and the application of the proposed analysis procedure to voice quality assessment is discussed.
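A minimal sketch of the described structure, a damped mass-spring fold oscillator driven by a nonlinear block. Here the paper's regressor-based functional (whose weights would be identified from recorded voice data) is replaced by a purely illustrative drive term, and the system is integrated with semi-implicit Euler:

```python
import math

def fold_oscillator(n=2000, sr=44100, f0=120.0, zeta=0.05,
                    drive=lambda x, v: 0.3 * max(0.0, 1.0 - x * x)):
    """Vocal-fold displacement from x'' + 2*zeta*w0*x' + w0^2*x = F(x, x').

    `drive` stands in for the regressor-based aerodynamic coupling;
    its form here is illustrative only. Returns the displacement series."""
    w0 = 2.0 * math.pi * f0         # natural (pitch) frequency, rad/s
    dt = 1.0 / sr
    x, v, xs = 0.1, 0.0, []
    for _ in range(n):
        a = -2.0 * zeta * w0 * v - w0 * w0 * x + drive(x, v)
        v += a * dt                 # semi-implicit Euler: velocity first,
        x += v * dt                 # then position with the new velocity
        xs.append(x)
    return xs
```

In the paper's analysis procedure, the identified weights of the nonlinear block (not the hard-coded drive above) are what carry the diagnostic information about voice quality.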


Subjects
Cluster Analysis, Glottis/physiopathology, Models, Biological, Phonation, Signal Processing, Computer-Assisted, Voice Quality, Feedback, Humans, Laryngectomy, Male, Nonlinear Dynamics, Pressure, Reproducibility of Results, Sensitivity and Specificity, Speech Acoustics, Speech Disorders/classification, Speech Disorders/physiopathology
16.
J Acoust Soc Am ; 111(5 Pt 1): 2293-301, 2002 May.
Article in English | MEDLINE | ID: mdl-12051449

ABSTRACT

A quantitative study of discrete-time simulations for a single-reed physical model is presented. It is shown that when the continuous-time model is discretized, a delay-free path is generated in the computation. A general solution to this problem is proposed, which amounts to applying a geometrical transformation to the equations. The transformed equations are discretized using four different numerical methods. The stability properties of each method are assessed through analysis in the frequency domain. By comparing the discrete and continuous frequency responses, it is studied how each method maps the physical parameters into the discrete-time domain. Time-domain simulations are developed by coupling the four digital reeds to an idealized bore model. Quantitative analysis of the simulations shows that the discrete-time systems produced by the four methods behave significantly differently, even at high sampling rates. As a result of this study, a general scheme for accurate and efficient time-domain simulations of the single-reed model is proposed.
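The delay-free-path problem can be illustrated on a toy instantaneous loop y = tanh(u + g·y): after discretization, the current output appears on both sides of the equation. The paper removes the loop with a geometrical transformation of the equations (not reproduced here); the sketch below instead resolves the same implicit dependence numerically with Newton iteration, using a nonlinearity chosen purely for illustration:

```python
import math

def solve_delay_free(u, g=0.5, tol=1e-12):
    """Resolve the instantaneous loop y = tanh(u + g*y) at one sample.

    y[n] depends on itself with no unit delay, so an implicit equation
    must be solved each sample; here by Newton iteration on
    f(y) = y - tanh(u + g*y), whose derivative stays >= 1 - g > 0."""
    y = 0.0
    for _ in range(50):
        t = math.tanh(u + g * y)
        f = y - t
        df = 1.0 - g * (1.0 - t * t)     # f'(y), bounded away from 0
        y_new = y - f / df
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

Schemes that instead transform the equations, as in the paper, avoid this per-sample iteration entirely, which is one reason the four discretizations differ in cost and behavior.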
