Pengembangan Model Deep Learning Long Short-Term Memory untuk Generasi Musik Berbasis Data MIDI

Authors

  • Muhammad Djamaluddin Universitas Hamzanwadi
  • Imam Fathurrahman Universitas Hamzanwadi
  • M. Nurul Wathani Universitas Hamzanwadi

DOI:

https://doi.org/10.29408/jit.v9i1.33165

Keywords:

Deep Learning, Music Generation, Long Short-Term Memory (LSTM), MIDI, Sequence Model

Abstract

This study develops an automatic music generation model based on Long Short-Term Memory (LSTM), using MIDI data as a symbolic representation of classical piano music sequences. The approach is computational and experimental, with a workflow that includes extracting and converting MIDI files using music21, constructing note and chord tokens, forming input–output sequences, designing a three-layer LSTM architecture, and generating music autoregressively. The model is trained for 100 epochs with a batch size of 64 and evaluated using loss, accuracy, top-3 accuracy, and perplexity to assess its predictive capability on unseen data. The experimental results show a consistent decrease in validation loss, with a final value of approximately 2.93 and a validation accuracy of 0.33; the top-3 accuracy reaches 0.53, meaning that for more than half of the predictions the correct token lies among the top three candidates. A perplexity of around 18 suggests that the model has reasonably adequate sequence-prediction ability for symbolic music data. Qualitatively, the model generates simple melodies whose patterns remain coherent with the note distribution in the dataset, although some parts of the compositions still exhibit repetition and limited variation. An important contribution of this study is its systematic methodological documentation of the LSTM-based music generation pipeline, which can serve as a practical reference for future research in deep learning–based music generation.
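Two steps of the pipeline described above can be illustrated compactly: forming fixed-length input–output sequences from a token stream, and deriving perplexity from the cross-entropy loss (perplexity = e^loss, which ties the reported loss of 2.93 to the perplexity of around 18). The sketch below uses a hypothetical token list standing in for the note/chord tokens extracted with music21 (e.g. "C4" for a note, "E4.G4" for a chord); the window size and token values are illustrative assumptions, not the paper's actual configuration.

```python
import math

# Hypothetical token stream standing in for notes/chords extracted from
# MIDI with music21; "E4.G4" denotes a chord token (illustrative only).
tokens = ["C4", "E4", "G4", "C5", "E4.G4", "C4", "E4", "G4", "C5", "B4"]

def make_sequences(tokens, window=4):
    """Slide a fixed-length window over the stream to build
    (input sequence -> next token) training pairs."""
    pairs = []
    for i in range(len(tokens) - window):
        pairs.append((tokens[i:i + window], tokens[i + window]))
    return pairs

pairs = make_sequences(tokens, window=4)
print(len(pairs))                      # 6 pairs from 10 tokens

# Perplexity follows directly from the mean cross-entropy loss:
val_loss = 2.93                        # final validation loss from the abstract
print(round(math.exp(val_loss), 1))    # ≈ 18.7, the "around 18" perplexity
```

During autoregressive generation, the model would repeatedly predict the next token from the last `window` tokens and append it to the sequence; the windowing above is what makes that step possible.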


Published

20-01-2026

How to Cite

Muhammad Djamaluddin, Fathurrahman, I., & M. Nurul Wathani. (2026). Pengembangan Model Deep Learning Long Short-Term Memory untuk Generasi Musik Berbasis Data MIDI. Infotek: Jurnal Informatika Dan Teknologi, 9(1), 197–208. https://doi.org/10.29408/jit.v9i1.33165
