Results 1 - 2 of 2
1.
Entropy (Basel) ; 22(11)2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33287059

ABSTRACT

In tonal music, musical tension is strongly associated with musical expression, particularly with expectation and emotion. Most listeners can perceive musical tension subjectively, yet it is difficult to measure objectively, as it is connected with musical parameters such as rhythm, dynamics, melody, harmony, and timbre. Musical tension specifically associated with melodic and harmonic motion is called tonal tension. In this article, we are interested in perceived changes of tonal tension over time for chord progressions, dubbed tonal tension profiles. We propose an objective measure that captures tension profiles from four tonal music parameters: tonal distance, dissonance, voice leading, and hierarchical tension. We performed two experiments to validate the proposed model of tonal tension profiles and compared it against Lerdahl's model and MorpheuS across 12 chord progressions. Our results show that the four tonal parameters contribute differently to the perception of tonal tension. In our model, their relative weights are: dissonance (0.402), hierarchical tension (0.246), tonal distance (0.202), and voice leading (0.193). Our results strongly support the assumption that listeners perceive global changes in tonal tension as prototypical profiles, and our model outperforms the state-of-the-art models.
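The weighted combination described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input format (one dict of precomputed, normalized parameter values per chord) is an assumption, and only the four relative weights are taken from the abstract.

```python
# Hypothetical sketch: combining the four tonal parameters into one tension
# value per chord, using the relative weights reported in the abstract.
# The parameter values are assumed to be precomputed and normalized to [0, 1].

WEIGHTS = {
    "dissonance": 0.402,
    "hierarchical_tension": 0.246,
    "tonal_distance": 0.202,
    "voice_leading": 0.193,
}

def tension_profile(chords):
    """Return one weighted tension value per chord in a progression.

    `chords` is a list of dicts mapping each parameter name to its
    normalized value for that chord (an assumed input format).
    """
    return [
        sum(WEIGHTS[name] * chord[name] for name in WEIGHTS)
        for chord in chords
    ]

# Example: a three-chord progression with made-up parameter values,
# where the middle chord is the most tense.
progression = [
    {"dissonance": 0.1, "hierarchical_tension": 0.2,
     "tonal_distance": 0.1, "voice_leading": 0.3},
    {"dissonance": 0.7, "hierarchical_tension": 0.6,
     "tonal_distance": 0.5, "voice_leading": 0.4},
    {"dissonance": 0.2, "hierarchical_tension": 0.1,
     "tonal_distance": 0.2, "voice_leading": 0.1},
]
profile = tension_profile(progression)  # rises, then falls
```

The resulting list is the tension profile: listeners' perceived rise and fall of tension across the progression.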

2.
Sensors (Basel) ; 18(10)2018 Sep 21.
Article in English | MEDLINE | ID: mdl-30248966

ABSTRACT

The automatic generation of music is an emergent field of research that has attracted the attention of countless researchers and produced a broad spectrum of state-of-the-art work. Many systems have been designed to facilitate collaboration between humans and machines in the generation of valuable music. This research proposes an intelligent system that generates melodies under the supervision of a user, who guides the process through a mechanical device that captures the user's movements and translates them into a melody. The system is based on a Case-Based Reasoning (CBR) architecture, enabling it to learn from previous compositions and to improve its performance over time. The device lets the user adapt the composition to their preferences, adjusting the pace of a melody to a specific context or generating lower or higher notes. Additionally, the device can automatically resist some of the user's movements, so that the user learns how to create a good melody. Several experiments were conducted to analyze the quality of the system and the melodies it generates. According to the users' validation, the proposed system can generate music that follows a concrete style, and most users considered the partial control exerted by the device essential to the quality of the generated music.
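The CBR loop behind such a system can be sketched in its retrieve and adapt steps. Everything here is illustrative, not the paper's implementation: the case representation (a movement-feature vector paired with the melody it produced), the Euclidean similarity, and the transposition-based adaptation are all assumptions.

```python
# Minimal retrieve/adapt sketch of a Case-Based Reasoning loop for melody
# generation. Case format, distance measure, and adaptation are illustrative
# assumptions; the paper does not specify this representation.
import math

# Each case pairs a (hypothetical) movement-feature vector captured from the
# device with the melody it previously produced, as MIDI note numbers.
case_base = [
    {"features": (0.2, 0.8), "melody": [60, 62, 64, 65]},
    {"features": (0.9, 0.1), "melody": [72, 71, 69, 67]},
]

def retrieve(features):
    """Return the stored case whose features are closest (Euclidean)."""
    return min(case_base, key=lambda c: math.dist(c["features"], features))

def adapt(case, pitch_offset):
    """Reuse the retrieved melody, transposed to suit the current context."""
    return [note + pitch_offset for note in case["melody"]]

case = retrieve((0.25, 0.7))   # current gesture -> most similar past case
melody = adapt(case, 12)       # e.g. the user pushes toward higher notes
```

A full CBR cycle would also revise the adapted melody against the user's feedback and retain successful results as new cases, which is how the system improves over time.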
