
music-generator

Multi-Instrument music generation using C-RNN-GAN with MIDI format input
Explore the docs »

Demo Music Generation (Google Colab) · Demo Multi-Instrument Music Generation (Google Colab) · Report Bug & Request Feature

Table of Contents
  1. About The Project
  2. Parts
  3. Results
  4. License
  5. Datasets
  6. References
  7. Contact
  8. Roadmap

About The Project

Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.

Recently, generative neural networks have taken the stage for artistic pursuits, such as image generation and photo retouching. Another area where these deep learning networks are beginning to leave a mark is in music generation. In this project, our goal is to explore the use of LSTM and GAN neural networks to generate music that seems as if it were human-made. By treating the notes and chords within MIDI files as discrete sequential data, we were able to train these two models and use them to generate completely new MIDI files.

Listen to our results! 😄

(back to top)

Built With

Major frameworks/libraries used in this project:

(back to top)

Parts

MIDI, an acronym for Musical Instrument Digital Interface, is a technical standard that describes a communications protocol, digital interface, and electrical connectors connecting a wide variety of electronic musical instruments and computers.
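As a quick illustration of the standard (MIDI numbers pitches 0–127, with middle C defined as note 60), here is a small pure-Python helper. The function name is our own, used only for illustration; it is not part of the project code:

```python
# MIDI pitch numbers run 0-127; by convention, note 60 is middle C (C4).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_name(pitch: int) -> str:
    """Convert a MIDI pitch number (0-127) to a note name with its octave."""
    octave = pitch // 12 - 1            # MIDI octave convention: 60 -> octave 4
    return f"{NOTE_NAMES[pitch % 12]}{octave}"

print(midi_to_name(60))  # C4 (middle C)
print(midi_to_name(69))  # A4 (concert A, 440 Hz)
```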

Music21 is a powerful Python library whose tools are very helpful for creating, analyzing, and processing musical material such as songs and melodies. In this project, we used it to convert MIDI files into notes, categorize those notes to prepare the training data, choose the instruments that play the output of our GAN, and convert the result back to MIDI.
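A convention commonly used in music21-based note pipelines (assumed here for illustration; the helper names are our own, not the project's API) is to keep a single note as its pitch name and encode a chord as its pitch classes joined with dots:

```python
def encode_element(pitches):
    """Encode a parsed element: a single note keeps its pitch name,
    a chord becomes its pitch classes joined by dots (e.g. "0.4.7")."""
    if len(pitches) == 1:
        return str(pitches[0])                 # e.g. "E4"
    return ".".join(str(p) for p in pitches)   # e.g. "0.4.7" for C major

def decode_element(token):
    """Invert the encoding: dot-separated tokens are chord pitch classes."""
    if "." in token:
        return [int(p) for p in token.split(".")]
    return token

print(encode_element(["E4"]))     # E4
print(encode_element([0, 4, 7]))  # 0.4.7
print(decode_element("0.4.7"))    # [0, 4, 7]
```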

MIDI Class:

  • Parser
  • Sequence preparation
  • MIDI creation

Parses MIDI files, prepares the training data, and formats it for the C-RNN-GAN network.
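The sequence-preparation step can be sketched as follows. This is a minimal stand-alone version: the function and variable names are ours, and the window length is an arbitrary choice for illustration:

```python
import numpy as np

def prepare_sequences(notes, seq_len=4):
    """Map note/chord tokens to integers and build sliding-window
    (input sequence, next token) training pairs."""
    vocab = sorted(set(notes))
    note_to_int = {n: i for i, n in enumerate(vocab)}
    encoded = [note_to_int[n] for n in notes]

    inputs, targets = [], []
    for i in range(len(encoded) - seq_len):
        inputs.append(encoded[i:i + seq_len])   # window of seq_len tokens
        targets.append(encoded[i + seq_len])    # the token that follows it
    # Normalize inputs to [0, 1], a common step before feeding an RNN.
    X = np.asarray(inputs, dtype=np.float32) / max(len(vocab) - 1, 1)
    return X, np.asarray(targets), note_to_int

X, y, mapping = prepare_sequences(
    ["C4", "E4", "G4", "C4", "E4", "G4", "C5"], seq_len=3)
print(X.shape, y.shape)  # (4, 3) (4,)
```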

Model Class:

  • Discriminator
  • Generator
  • Train
  • Plot loss function
  • Save model
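The adversarial Train step alternates discriminator and generator updates. The toy loop below shows the idea in pure NumPy on 1-D data, with a single affine generator and a logistic-regression discriminator standing in for the project's LSTM networks; it is a simplified sketch of adversarial training, not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator z -> x and discriminator x -> p(real), each one affine unit.
G_w, G_b = 0.5, 0.0
D_w, D_b = 0.1, 0.0
lr_d, lr_g, batch = 0.05, 0.01, 32

for step in range(3000):
    real = rng.normal(loc=3.0, scale=1.0, size=batch)  # "real" data
    z = rng.normal(size=batch)
    fake = G_w * z + G_b

    # Discriminator step: descend -log D(real) - log(1 - D(fake)).
    p_real, p_fake = sigmoid(D_w * real + D_b), sigmoid(D_w * fake + D_b)
    D_w -= lr_d * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    D_b -= lr_d * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: descend the non-saturating loss -log D(fake).
    p_fake = sigmoid(D_w * (G_w * z + G_b) + D_b)
    g = (p_fake - 1) * D_w                  # d(-log D)/d(fake)
    G_w -= lr_g * np.mean(g * z)
    G_b -= lr_g * np.mean(g)

# The generated distribution's mean (G_b) should have drifted toward 3.0.
print(round(float(G_b), 1))
```

As training proceeds, the generator's output distribution shifts toward the real data; the same competition, with LSTMs in place of the affine units and note sequences in place of scalars, is what the C-RNN-GAN training loop implements.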

Generative Adversarial Network (GAN) vs. LSTM

C-RNN-GAN Network Structure

The project was carried out with the aid of GPU computing, using NVIDIA cuDNN and the NVIDIA CUDA Toolkit, which let us run TensorFlow with GPU support. The model was trained on an NVIDIA GeForce GTX 1080 Ti GPU. CUDA is a parallel computing platform and programming interface that allows software developers to use GPUs for machine-learning computations.

(back to top)

Results

piano music.mov
multi-instrument music.mov

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Datasets

  1. GiantMIDI-Piano is a classical piano MIDI dataset containing 10,854 MIDI files from 2,786 composers. The curated subset, obtained by constraining composer surnames, contains 7,236 MIDI files from 1,787 composers. GiantMIDI-Piano is transcribed from live recordings with a high-resolution piano transcription system. Find out more on GitHub.
    • Qiuqiang Kong, Bochen Li, Jitong Chen, and Yuxuan Wang. "GiantMIDI-Piano: A large-scale MIDI dataset for classical piano music." arXiv preprint arXiv:2010.07061 (2020). https://arxiv.org/pdf/2010.07061
  2. Persian MIDI Dataset
  3. The Lakh MIDI Dataset

(back to top)

References

[1] Mogren, Olof. (2016). C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv:1611.09904.

[2] “Generating Music with GANs—An Overview and Case Studies” at ISMIR 2019 (November 4th at Delft, The Netherlands). salu133445.github.io/ismir2019tutorial.

[3] Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. (2014). Generative Adversarial Nets. arXiv:1406.2661.

(back to top)

Contact

Seyedmohammadsaleh Mirzatabatabaei - @seyedsaleh - [email protected]

Salman Amimotlagh - @SMotlaq - [email protected]

Project Link: https://github.com/seyedsaleh/music-generator

(back to top)

Roadmap

  • Multi-instrument by partitioning and joining each part (Music21 Instrument package)
  • Use offset, duration, velocity with pyPianoroll package
  • UI mobile and desktop application to create music
  • Use a CGAN network to avoid false notes

See the open issues for a full list of proposed features (and known issues).

(back to top)

