Add Keras 3 example for "Transformer model for MIDI music generation" #1992
base: master
Conversation
Commits:
- …ts and improve readability. Fix generation
- …, num_transformer_blocks to 10, and reduce batch_size to 5
- …. Add topk sampling
- …o 6, batch_size to 6, and epochs to 20. Add top_k parameter for music generation.
- …and enhance documentation for clarity
- …y and consistency, replace mkdtemp with tempfile.mkdtemp, and adjust encoding/decoding function calls
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for the PR. It looks great!
- Did you check that it works with all backends?
- Which backend is the fastest on the Colab GPU?
- Excluding text, how many lines of code do you end up with (are you able to run the rendering script)? You can use scripts/tutobooks.py:count_locs_in_file to count.
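A hypothetical usage sketch of that counter, assuming count_locs_in_file takes the path to the example's .py file and returns the line count; the example path below is made up:

```python
import sys

# Run from the repo root so the scripts/ directory is importable.
sys.path.append("scripts")
from tutobooks import count_locs_in_file

# Hypothetical path to this PR's example script.
print(count_locs_in_file("examples/generative/midi_generation_with_transformer.py"))
```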
```python
import midi_neural_processor.processor as midi_tokenizer
import numpy as np
from keras import callbacks, layers, ops, optimizers, utils
from keras_hub.api import layers as hub_layers
```
Use from keras_hub import layers as hub_layers; don't import from keras_hub.api.
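With that change, the import block above would read:

```python
import midi_neural_processor.processor as midi_tokenizer
import numpy as np
from keras import callbacks, layers, ops, optimizers, utils
from keras_hub import layers as hub_layers  # public path instead of keras_hub.api
```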
```python
def visualize_midi(midi_path, sampling_rate=16000, seconds=15, out_dir=None):
    # Requires:
    # sudo apt install -y fluidsynth
```
Please add the installation instructions in the text rather than in a comment.
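A sketch only, since the function's actual body isn't shown in this diff: the pretty_midi usage here is an assumption about how such a helper might synthesize audio once fluidsynth is installed.

```python
# Assumed dependencies (to be stated in the example's prose, per the
# review comment above):
#   sudo apt install -y fluidsynth
#   pip install pretty_midi pyfluidsynth
import pretty_midi


def visualize_midi(midi_path, sampling_rate=16000, seconds=15, out_dir=None):
    # Render the MIDI file to a waveform with FluidSynth and keep only
    # the first `seconds` of audio for playback or plotting.
    # (out_dir handling, e.g. saving the audio, omitted in this sketch.)
    pm = pretty_midi.PrettyMIDI(midi_path)
    wave = pm.fluidsynth(fs=sampling_rate)[: seconds * sampling_rate]
    return wave
```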
```python
class RelativeGlobalAttention(layers.Layer):
    """
    From Music Transformer (Huang et al., 2018)
    [paper link](https://arxiv.org/pdf/1809.04281.pdf)
```
In a code example class docstring, links are never rendered, so there's no need for markdown here.
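For readers unfamiliar with the layer, the core of relative global attention is the "skewing" trick from the Music Transformer paper. A minimal sketch using keras.ops; the function name and shapes are illustrative, not the PR's actual code:

```python
import numpy as np
from keras import ops


def skew(qe):
    # qe: (batch, heads, seq_len, seq_len), the product of queries with
    # relative-position embeddings. Pad one column on the left, reshape
    # so the padding shifts each row by one step, then drop the first
    # row so each query row is aligned with absolute key positions
    # (the skewing procedure from Huang et al., 2018).
    batch, heads, length = ops.shape(qe)[0], ops.shape(qe)[1], ops.shape(qe)[2]
    padded = ops.pad(qe, [[0, 0], [0, 0], [0, 0], [1, 0]])
    reshaped = ops.reshape(padded, (batch, heads, length + 1, length))
    return reshaped[:, :, 1:, :]


# Shape check: output matches the attention logits' shape.
x = ops.convert_to_tensor(np.random.rand(1, 2, 4, 4).astype("float32"))
print(ops.shape(skew(x)))  # (1, 2, 4, 4)
```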
Hi,
This is my first contribution, so apologies in advance for any mistakes!
I saw a call for contributions for an example of MIDI generation using transformers. I adapted the referenced code to Keras 3.
Here are some notes on the implementation:
- I could have used CachedMultiHeadAttention to compact the code, but probably getting worse results.
- …keras_hub?

Thanks!
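For context on the top-k sampling mentioned in the commit messages, a minimal sketch of such a decoding step; the function name and its integration into the generation loop are hypothetical, not the PR's actual code:

```python
import numpy as np


def sample_top_k(logits, k=10, temperature=1.0):
    # Keep only the k highest-scoring tokens, renormalize, and sample
    # the next event id from that truncated distribution.
    logits = np.asarray(logits, dtype="float64") / temperature
    top_ids = np.argpartition(logits, -k)[-k:]
    top_logits = logits[top_ids]
    probs = np.exp(top_logits - top_logits.max())
    probs /= probs.sum()
    return int(np.random.choice(top_ids, p=probs))
```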