
Add Keras 3 example for "Transformer model for MIDI music generation" #1992

Merged: 6 commits merged into keras-team:master on Nov 27, 2024

Conversation

@johacks (Contributor) commented on Nov 22, 2024

Hi,

This is my first contribution, so apologies in advance for any mistakes!

I saw a call for contributions asking for an example of MIDI generation with transformers, and adapted the referenced code to Keras 3.

Here are some notes on the implementation:

  • All of the MIDI datasets used by the original code are inaccessible, so I switched to the Maestro dataset.
  • I implemented the relative global attention used in the paper, which adds a lot of code compared to an off-the-shelf multi-head attention layer (see the sketch after this list). Using the already implemented CachedMultiHeadAttention layer would make the code more compact, but would probably give worse results.
  • The tokenization used in the code is quite involved and is even hosted in a different repo. For now I have forked that repo and published it to PyPI so it can be easily installed, but I'm not sure this is the most appropriate way to handle it; maybe it would make more sense to have a MIDI tokenizer in keras_hub. An illustrative sketch of the event vocabulary follows this list.
  • Following the advice in the Keras contributing guide, I'm only including the Python script for now, before generating the .md and .ipynb files.
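For context, here is a minimal sketch of the "skew" trick behind relative global attention from the Music Transformer paper. Shapes are assumed static and names are illustrative; this is not the exact code in the PR:

```python
import numpy as np
from keras import ops

def relative_attention_logits(q, k, rel_emb):
    """q, k: (batch, heads, L, d); rel_emb: (L, d) learned relative embeddings.

    The "skew" trick computes Q @ E^T once and shifts it so that entry
    (i, j) scores the relative distance j - i, avoiding an O(L^2 * d)
    relative-position tensor.
    """
    content = ops.einsum("bhid,bhjd->bhij", q, k)    # standard QK^T scores
    rel = ops.einsum("bhid,jd->bhij", q, rel_emb)    # scores vs. relative embeddings
    b, h, l, _ = rel.shape                           # static shapes assumed
    rel = ops.pad(rel, [[0, 0], [0, 0], [0, 0], [1, 0]])  # dummy column on the left
    rel = ops.reshape(rel, (b, h, l + 1, l))         # reshape slides rows into place
    rel = rel[:, :, 1:, :]                           # drop the first (garbage) row
    return (content + rel) / np.sqrt(q.shape[-1])
```

And a toy illustration of the event-based tokenization idea (the real tokenizer lives in the forked PyPI package; names and bin sizes here are made up for illustration):

```python
def tokenize_note(pitch, velocity, duration_ms, time_shift_ms=0):
    """Turn one note into Music-Transformer-style events (illustrative only)."""
    events = []
    if time_shift_ms:
        events.append(f"TIME_SHIFT_{time_shift_ms}")
    events.append(f"VELOCITY_{velocity // 4}")   # 128 MIDI velocities -> 32 bins
    events.append(f"NOTE_ON_{pitch}")
    events.append(f"TIME_SHIFT_{duration_ms}")
    events.append(f"NOTE_OFF_{pitch}")
    return events

print(tokenize_note(pitch=60, velocity=80, duration_ms=500))
# ['VELOCITY_20', 'NOTE_ON_60', 'TIME_SHIFT_500', 'NOTE_OFF_60']
```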

Thanks!

google-cla bot commented Nov 22, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@fchollet (Contributor) left a comment

Thanks for the PR. It looks great!

  • Did you check it works with all backends?
  • Which backend is the fastest on the Colab GPU?
  • Excluding text, how many lines of code do you end up with (are you able to run the rendering script)? You can use scripts/tutobooks.py:count_locs_in_file to count
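A possible invocation from the keras-io repo root (the module path and function signature are assumed, not verified):

```python
import sys

sys.path.append("scripts")  # assumes the keras-io repo root as working directory
from tutobooks import count_locs_in_file  # signature assumed: takes a file path

print(count_locs_in_file("examples/generative/midi_generation_with_transformer.py"))
```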

Three review threads on examples/generative/midi_generation_with_transformer.py (outdated, resolved).
- Update import statement for keras_hub
- Fix documentation for audio dependencies
- Remove unnecessary markdown annotation

Other changes:
- Update last modified date in MIDI generation example
- Fix: Increase training epochs from 1 to 20 in MIDI generation example
- Fix: Use flip for unsupported negative step in torch
- Fix: Remove unused learning rate
@johacks force-pushed the johacks_contribute_midi_example branch 2 times, most recently from 3cdad2f to 1ccf450 on November 26, 2024 at 15:52
@johacks force-pushed the johacks_contribute_midi_example branch from 1ccf450 to e35c507 on November 26, 2024 at 17:53
@johacks (Contributor, Author) commented on Nov 26, 2024

@fchollet
Hi, thanks for the feedback. I have made the corresponding changes based on your comments on the code.

Regarding your other questions:

  • Did you check it works with all backends?
    • I have now tried it in Colab with all backends. I had to make some adjustments to the code:
      • In JAX, repeatedly concatenating at each generation step was too slow, so I changed the approach to filling a padded array of fixed size (see the sketch after this list).
      • In Torch, there were some incompatibilities with negative slicing steps, which are now fixed (also sketched below). Torch also seems to need a lower learning rate, apparently due to numerical stability issues, so I adjusted the script to conditionally set a learning-rate adjustment factor based on the backend.
  • Which backend is the fastest on the Colab GPU?
    • I tried all backends on a T4 GPU. Data preprocessing takes about 11 minutes. Training + generation takes about 40 minutes on Torch, 27 minutes on JAX, and 25 minutes on TensorFlow.
  • Excluding text, how many lines of code do you end up with (are you able to run the rendering script)? You can use scripts/tutobooks.py:count_locs_in_file to count
    • I was at 460 lines; after refactoring I'm down to 350.
    • I can run the rendering script (autogen.py add_example), as long as it is launched with root privileges. I have added the generated files to the PR.
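Here is a minimal sketch of the fixed-size generation buffer that keeps JAX from recompiling (names, shapes, and the greedy decoding are illustrative, not the exact PR code):

```python
from keras import ops

def generate(model, prompt_ids, max_len, end_token_id):
    """Greedy decoding into a pre-padded buffer of constant shape.

    Growing the sequence by concatenation changes the tensor shape at every
    step, which is slow under JAX; writing into a fixed-size array avoids it.
    """
    tokens = ops.zeros((1, max_len), dtype="int32")
    tokens = ops.slice_update(tokens, (0, 0), ops.cast(ops.expand_dims(prompt_ids, 0), "int32"))
    for i in range(len(prompt_ids), max_len):
        logits = model(tokens)  # input shape is (1, max_len) on every step
        next_id = ops.cast(ops.argmax(logits[0, i - 1]), "int32")
        tokens = ops.slice_update(tokens, (0, i), ops.reshape(next_id, (1, 1)))
        if int(next_id) == end_token_id:
            break
    return tokens[0, : i + 1]
```

And the two Torch-related tweaks, sketched with an illustrative learning-rate factor:

```python
import keras
from keras import ops

x = ops.arange(10)
reversed_x = ops.flip(x, axis=0)  # same result as x[::-1], but supported on torch

# Torch appeared to need a smaller learning rate for numerical stability;
# the 0.5 factor is illustrative, not the exact value used in the PR.
lr_factor = 0.5 if keras.backend.backend() == "torch" else 1.0
optimizer = keras.optimizers.Adam(learning_rate=1e-3 * lr_factor)
```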

@fchollet (Contributor) left a comment

Awesome -- thanks a lot for the contribution!

@fchollet fchollet merged commit 275300a into keras-team:master Nov 27, 2024
2 of 3 checks passed