This chapter shows the importance of Long Short-Term Memory (LSTM) networks in generating longer sequences. We'll see how to use a monophonic Magenta model, the Melody RNN, an LSTM network with lookback and attention configurations. You'll also learn to use two polyphonic models, the Polyphony RNN and the Performance RNN, both LSTM networks using a specific encoding, with the latter supporting note velocity and expressive timing.
- A newer version of this code is available: this branch contains the code for Magenta v1.1.7, which corresponds to the code in the book. For a more recent version, use the updated Magenta v2.0.1 branch.
Before you start, follow the installation instructions for Magenta 1.1.7.
This example shows melody (monophonic) generation using the Melody RNN model and three configurations: basic, lookback, and attention. For the Python script, while in the Magenta environment (`conda activate magenta`):

```bash
# Runs 3 melody rnn generations using the basic, lookback and attention configs
python chapter_03_example_01.py
```
For the Jupyter notebook:

```bash
jupyter notebook notebook.ipynb
```
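All three Melody RNN configurations share the same monophonic encoding: a list of events, one per step, where each value is either a MIDI pitch (note-on), -1 (note-off), or -2 (no event, meaning sustain or silence). The sketch below is an illustrative decoder, not Magenta's actual implementation; the helper name and the 0.125 s step duration (16th-note steps at 120 QPM) are assumptions made for the example.

```python
def decode_primer_melody(events, seconds_per_step=0.125):
    """Convert a Melody RNN event list into (pitch, start, end) tuples."""
    notes = []
    current = None  # (pitch, start_step) of the note currently sounding
    for step, event in enumerate(events):
        if event >= 0:  # note-on: close any sounding note, start a new one
            if current is not None:
                notes.append((current[0], current[1] * seconds_per_step,
                              step * seconds_per_step))
            current = (event, step)
        elif event == -1 and current is not None:  # explicit note-off
            notes.append((current[0], current[1] * seconds_per_step,
                          step * seconds_per_step))
            current = None
        # event == -2: no event, the current note keeps sounding
    if current is not None:  # close the last note at the end of the list
        notes.append((current[0], current[1] * seconds_per_step,
                      len(events) * seconds_per_step))
    return notes

# Four quarter notes: two C4s followed by two G4s
print(decode_primer_melody([60, -2, -2, -2, 60, -2, -2, -2,
                            67, -2, -2, -2, 67, -2, -2, -2]))
# → [(60, 0.0, 0.5), (60, 0.5, 1.0), (67, 1.0, 1.5), (67, 1.5, 2.0)]
```

The same list format is what the Melody RNN models accept as a primer melody to seed the generation.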
This example shows polyphonic generations with the Polyphony RNN model. For the Python script, while in the Magenta environment (`conda activate magenta`):

```bash
# Runs 4 polyphonic generations using polyphony rnn and
# different configurations
python chapter_03_example_02.py
```
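The Polyphony RNN handles multiple simultaneous notes with a step-based encoding inspired by BachBot: within each step, `NEW_NOTE` events start notes, `CONTINUED_NOTE` events carry over notes from the previous step, and `STEP_END` advances the clock; a note that is not re-asserted in a step ends there. Here is a rough illustrative decoder (not Magenta's code; the helper name and the 0.125 s step length are assumptions):

```python
def decode_polyphony(events, seconds_per_step=0.125):
    """Replay a polyphonic event list into (pitch, start, end) note tuples."""
    step = 0
    open_notes = {}    # pitch -> start step of each currently sounding note
    this_step = set()  # pitches asserted (new or continued) in the current step
    notes = []
    for kind, pitch in events:
        if kind == "NEW_NOTE":
            if pitch in open_notes:  # end any same-pitch note before restarting
                notes.append((pitch, open_notes[pitch] * seconds_per_step,
                              step * seconds_per_step))
            open_notes[pitch] = step
            this_step.add(pitch)
        elif kind == "CONTINUED_NOTE":
            this_step.add(pitch)  # carry the note over from the previous step
        elif kind == "STEP_END":
            # any sounding note not re-asserted this step ends here
            for p in list(open_notes):
                if p not in this_step:
                    notes.append((p, open_notes.pop(p) * seconds_per_step,
                                  step * seconds_per_step))
            this_step = set()
            step += 1
    for p, start in open_notes.items():  # close notes still sounding at the end
        notes.append((p, start * seconds_per_step, step * seconds_per_step))
    return notes

# A two-step C4 sounding together with a one-step E4
events = [("NEW_NOTE", 60), ("NEW_NOTE", 64), ("STEP_END", None),
          ("CONTINUED_NOTE", 60), ("STEP_END", None)]
print(sorted(decode_polyphony(events)))
# → [(60, 0.0, 0.25), (64, 0.0, 0.125)]
```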
This example shows polyphonic generations with the Performance RNN model. For the Python script, while in the Magenta environment (`conda activate magenta`):

```bash
# Runs 3 polyphonic generations using performance rnn and
# different configurations
python chapter_03_example_03.py
```
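The Performance RNN drops the fixed step grid entirely: its vocabulary consists of `NOTE_ON`/`NOTE_OFF` events per pitch, `TIME_SHIFT` events in 10 ms increments, and `VELOCITY` events, which is what gives the model its expressive timing and dynamics. The sketch below is an illustrative decoder under simplifying assumptions (the helper name is made up, and velocities are kept as raw MIDI values, whereas Magenta quantizes them into bins):

```python
def decode_performance(events):
    """Replay (kind, value) events into (pitch, velocity, start, end) notes."""
    time = 0.0
    velocity = 64   # default velocity until a VELOCITY event is seen
    sounding = {}   # pitch -> (start_time, velocity)
    notes = []
    for kind, value in events:
        if kind == "TIME_SHIFT":   # advance the clock, 10 ms per unit
            time += value * 0.01
        elif kind == "VELOCITY":   # applies to subsequent NOTE_ONs
            velocity = value
        elif kind == "NOTE_ON":
            sounding[value] = (time, velocity)
        elif kind == "NOTE_OFF" and value in sounding:
            start, vel = sounding.pop(value)
            notes.append((value, vel, start, time))
    return notes

# A 0.5 s C4 followed by a 0.25 s E4, both at velocity 80
print(decode_performance([
    ("VELOCITY", 80), ("NOTE_ON", 60), ("TIME_SHIFT", 50),
    ("NOTE_OFF", 60), ("NOTE_ON", 64), ("TIME_SHIFT", 25),
    ("NOTE_OFF", 64),
]))
# → [(60, 80, 0.0, 0.5), (64, 80, 0.5, 0.75)]
```

Because time advances only through `TIME_SHIFT` events, the model can place notes off the grid, which is how it reproduces human-like rubato in its generations.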