Project developed for the Computational Intelligence and Deep Learning course of the Master's degree in Artificial Intelligence and Data Engineering at the University of Pisa.
This project consists of the design of a Deep Learning architecture able to generate both single-track monophonic music (a musical texture with a single voice) and single-track polyphonic music (a musical texture consisting of two or more simultaneous, independent melodic lines).
More information about the project is available in the documentation.
The repository is organized as follows:
- Tokenization.ipynb contains the implementation of the tokenizer used to process MIDI files (a minimal, illustrative sketch of this kind of pipeline follows the list)
- DeepMusic_RNN.ipynb contains the implementation of the LSTM-based model
- DeepMusic_transformer.ipynb contains the implementation of the Transformer-based model
- models/ contains the trained models.
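The sketch below is not the repository's actual pipeline; it only illustrates the general idea behind the notebooks above, assuming pitch-only tokens extracted with pretty_midi and a small Keras LSTM. All names (SEQ_LEN, midi_to_tokens, the example file path) are illustrative.

```python
# Minimal sketch: turn a monophonic MIDI track into a sequence of pitch
# tokens and train a next-token LSTM. Library choices (pretty_midi, Keras)
# and all names are illustrative assumptions, not the project's real code.
import numpy as np
import pretty_midi
import tensorflow as tf

SEQ_LEN = 32  # length of the input window fed to the LSTM


def midi_to_tokens(midi_path):
    """Extract the pitch sequence (0-127) of the first instrument track."""
    pm = pretty_midi.PrettyMIDI(midi_path)
    notes = sorted(pm.instruments[0].notes, key=lambda n: n.start)
    return [note.pitch for note in notes]


def make_dataset(tokens, seq_len=SEQ_LEN):
    """Slide a window over the token sequence: each window predicts the next token."""
    x, y = [], []
    for i in range(len(tokens) - seq_len):
        x.append(tokens[i:i + seq_len])
        y.append(tokens[i + seq_len])
    return np.array(x), np.array(y)


# A small embedding + LSTM + softmax head over the 128 MIDI pitches.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=128, output_dim=64),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(128, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Example usage (assumes a MIDI file exists at this illustrative path):
# x, y = make_dataset(midi_to_tokens("dataset/example.mid"))
# model.fit(x, y, epochs=10, batch_size=64)
```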
Authors:
- Tommaso Baldi @balditommaso
- Jacopo Cecchetti @jacopocecch