
Add Keras 3 example for "Transformer model for MIDI music generation" #1992

Open · wants to merge 12 commits into master
Conversation


@johacks commented Nov 22, 2024

Hi,

This is my first contribution, so apologies in advance for any mistakes!

I saw a call for contributions for an example of MIDI music generation using transformers, so I adapted the referenced code to Keras 3.

Here are some notes on the implementation:

  • All MIDI datasets used by the original code are inaccessible, so I switched to the Maestro dataset.
  • I implemented the relative global attention used in the paper, which adds a lot of code compared to using the existing multi-head attention. It might be better to use the already implemented CachedMultiHeadAttention layer to keep the code compact, but that would probably give worse results (see the sketch after this list).
  • The tokenization used by the code is substantial and lives in a separate repo. I have forked that repo and published it to PyPI so it can be easily installed, but I'm not sure this is the most appropriate way to handle it. Maybe it would make more sense to have a MIDI tokenizer in keras_hub? (A usage sketch follows below.)
  • Following the advice in the Keras contributing guide, I'm only including the Python script before generating the .md and .ipynb files.
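
For reference, here is a single-head sketch of the skewing trick behind the relative global attention described in the paper (Huang et al., 2018). It assumes a static sequence length equal to the number of relative embeddings; the class and method names are illustrative only, and the PR's actual layer is multi-head and supports masking:

from keras import layers, ops


class RelativeGlobalAttentionSketch(layers.Layer):
    """Single-head sketch of relative global attention (Huang et al., 2018)."""

    def __init__(self, depth, max_len, **kwargs):
        super().__init__(**kwargs)
        self.depth = depth
        # One learned embedding per relative distance, up to max_len.
        self.rel_embeddings = self.add_weight(
            shape=(max_len, depth), name="rel_embeddings"
        )

    def skew(self, qe):
        # qe: (batch, seq, seq). Pad a zero column on the left, reshape so the
        # padding shifts each row, then drop the first row: entry (i, j) ends
        # up holding the logit for the relative distance between i and j.
        seq = ops.shape(qe)[1]
        padded = ops.pad(qe, [[0, 0], [0, 0], [1, 0]])  # (batch, seq, seq + 1)
        reshaped = ops.reshape(padded, (-1, seq + 1, seq))  # (batch, seq + 1, seq)
        return reshaped[:, 1:, :]  # (batch, seq, seq)

    def call(self, q, k, v):
        content = ops.einsum("bqd,bkd->bqk", q, k)
        relative = self.skew(ops.einsum("bqd,ld->bql", q, self.rel_embeddings))
        weights = ops.softmax((content + relative) / self.depth**0.5)
        return ops.einsum("bqk,bkd->bqd", weights, v)

And, assuming the forked package keeps the original processor API (the encode_midi / decode_midi names and signatures here are an assumption worth double-checking), the tokenizer usage would look roughly like:

import midi_neural_processor.processor as midi_tokenizer

tokens = midi_tokenizer.encode_midi("example.mid")  # MIDI file -> list of event ids
midi_tokenizer.decode_midi(tokens, "roundtrip.mid")  # event ids -> MIDI file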

Thanks!


google-cla bot commented Nov 22, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.


@fchollet (Member) left a comment


Thanks for the PR. It looks great!

  • Did you check that it works with all backends?
  • Which backend is the fastest on the Colab GPU?
  • Excluding text, how many lines of code do you end up with (are you able to run the rendering script)? You can use scripts/tutobooks.py:count_locs_in_file to count (see the example below).
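
For reference, one way to run that count from the keras-io repository; the exact function signature and the example's file path here are assumptions, not verified against the script:

cd scripts
python -c "from tutobooks import count_locs_in_file; print(count_locs_in_file('../examples/generative/midi_generation.py'))"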

import midi_neural_processor.processor as midi_tokenizer
import numpy as np
from keras import callbacks, layers, ops, optimizers, utils
from keras_hub.api import layers as hub_layers
@fchollet (Member) commented:
Do from keras_hub import layers as hub_layers; don't import from keras_hub.api.
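
Applied to the snippet above, the corrected import block would read:

import midi_neural_processor.processor as midi_tokenizer
import numpy as np
from keras import callbacks, layers, ops, optimizers, utils
from keras_hub import layers as hub_layers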


def visualize_midi(midi_path, sampling_rate=16000, seconds=15, out_dir=None):
# Requires:
# sudo apt install -y fluidsynth
@fchollet (Member) commented:

Please add the installation instructions in the text rather than in a comment.
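
For instance, the text could state the setup and the function could render audio roughly like this (a sketch assuming pretty_midi with its FluidSynth bindings; the PR's actual implementation may differ). First, install the system synthesizer and the Python bindings:

sudo apt install -y fluidsynth
pip install pretty_midi pyfluidsynth

import pretty_midi


def visualize_midi(midi_path, sampling_rate=16000, seconds=15, out_dir=None):
    # Synthesize the first `seconds` of the MIDI file to a raw waveform.
    # (Saving to out_dir is omitted in this sketch.)
    pm = pretty_midi.PrettyMIDI(midi_path)
    audio = pm.fluidsynth(fs=sampling_rate)[: seconds * sampling_rate]
    return audio  # e.g. pass to IPython.display.Audio(audio, rate=sampling_rate)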

class RelativeGlobalAttention(layers.Layer):
"""
From Music Transformer (Huang et al., 2018)
[paper link](https://arxiv.org/pdf/1809.04281.pdf)
@fchollet (Member) commented:

In a code example class docstring, links are never rendered, so there's no need for markdown here.
