A Lightweight Deep Learning-Based Approach for Jazz Music Generation in MIDI Format.
Yadav, Prasant Singh; Khan, Shadab; Singh, Yash Veer; Garg, Puneet; Singh, Ram Sewak.
Affiliation
  • Yadav PS; Department of Computer Science and Engineering, Mahamaya Polytechnic of Information Technology (Govt.), Hathras, Uttar Pradesh 204102, India.
  • Khan S; Department of Computer Science & Engineering, Sunder Deep Engineering College, Ghaziabad 201002, Uttar Pradesh, India.
  • Singh YV; Department of Information Technology, ABES Engineering College, Ghaziabad 201009, Uttar Pradesh, India.
  • Garg P; Department of Computer Science, ABES Engineering College, Ghaziabad 201009, Uttar Pradesh, India.
  • Singh RS; Department of Electronics and Communication, School of Electrical Engineering and Computing, Adama Science and Technology University, Adama, Ethiopia.
Comput Intell Neurosci; 2022: 2140895, 2022.
Article in English | MEDLINE | ID: mdl-36035841
ABSTRACT
Estimating the difficulty level of a piece of music is an important part of meaningful music learning; a learner cannot progress without a precise estimate. The problem is not trivial: it is complicated by the subjectivity of musical content and the scarcity of data. This paper proposes a lightweight model that generates original music content using deep learning and can generate music in a specific genre, presenting a lightweight deep learning-based approach for jazz music generation in MIDI format. The chosen genre is jazz, and the selected songs are classic numbers composed by various artists. All songs are in MIDI format and may differ in pace or tone, so it is prudent to ensure that the chosen dataset is free of such differences and resembles the desired final output. A model is trained to take part of a music file as input and produce its continuation, and the generated result should resemble the dataset given as input. Moreover, the proposed model can also generate music for a particular instrument.
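The abstract does not specify the model architecture or toolchain. As a minimal sketch, the code below assumes a common lightweight setup for MIDI continuation: note/chord tokens parsed with music21 and an LSTM sequence model built with Keras, which predicts the next token from a fixed-length input excerpt and writes the continuation back to MIDI. The folder name, sequence length, layer sizes, fixed 0.5-beat note spacing, and the choice of piano as the rendering instrument are illustrative assumptions, not the authors' settings.

```python
# Sketch: MIDI continuation with an LSTM over note/chord tokens (assumed setup).
import glob
import numpy as np
from music21 import converter, instrument, note, chord, stream
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.utils import to_categorical

SEQ_LEN = 50  # assumed length (in tokens) of the input excerpt

def parse_midi(folder="jazz_midi/*.mid"):
    """Flatten each MIDI file into a list of pitch/chord tokens."""
    tokens = []
    for path in glob.glob(folder):
        for element in converter.parse(path).flat.notes:
            if isinstance(element, note.Note):
                tokens.append(str(element.pitch))
            elif isinstance(element, chord.Chord):
                tokens.append(".".join(str(p) for p in element.normalOrder))
    return tokens

def build_dataset(tokens):
    """Slide a SEQ_LEN window over the token stream to form (input, next-token) pairs."""
    vocab = sorted(set(tokens))
    to_int = {t: i for i, t in enumerate(vocab)}
    X, y = [], []
    for i in range(len(tokens) - SEQ_LEN):
        X.append([to_int[t] for t in tokens[i:i + SEQ_LEN]])
        y.append(to_int[tokens[i + SEQ_LEN]])
    X = np.reshape(X, (len(X), SEQ_LEN, 1)) / float(len(vocab))
    return X, to_categorical(y, num_classes=len(vocab)), vocab

def build_model(n_vocab):
    model = Sequential([
        LSTM(256, input_shape=(SEQ_LEN, 1), return_sequences=True),
        Dropout(0.3),
        LSTM(256),
        Dense(n_vocab, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

def generate(model, seed, vocab, length=200):
    """Continue a seed excerpt (list of SEQ_LEN token indices) and write it as MIDI."""
    pattern = list(seed)
    notes_out, offset = [], 0.0
    for _ in range(length):
        x = np.reshape(pattern, (1, len(pattern), 1)) / float(len(vocab))
        idx = int(np.argmax(model.predict(x, verbose=0)))
        tok = vocab[idx]
        if "." in tok or tok.isdigit():          # chord token from normalOrder
            element = chord.Chord([note.Note(int(p)) for p in tok.split(".")])
        else:                                    # single-note token, e.g. "C4"
            element = note.Note(tok)
        element.storedInstrument = instrument.Piano()  # swap for another instrument
        element.offset = offset
        notes_out.append(element)
        offset += 0.5                            # fixed spacing; real timing not modeled
        pattern = pattern[1:] + [idx]
    stream.Stream(notes_out).write("midi", fp="generated_jazz.mid")
```

A typical run would call `parse_midi()` on a folder of jazz MIDI files, train the model on the windows from `build_dataset()`, and pass the first SEQ_LEN token indices of a held-out song to `generate()` so the output continues an existing excerpt, mirroring the input-then-continuation setup described in the abstract.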
Subject(s)
Deep Learning / Music
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Deep Learning / Music Language: English Journal: Comput Intell Neurosci Journal subject: Medical Informatics / Neurology Year: 2022 Document type: Article Country of affiliation: India