Everything is in 4/4 time #13

Open
xhevahir opened this issue Nov 29, 2019 · 4 comments
@xhevahir

Everything seems to get changed to 4/4 time regardless of the original time signature, which I suppose is because 4/4 MIDIs are so much more numerous in the training data. Would training separately on other time signatures fix this? (I haven't done any training myself.)

@xhevahir

After thinking about this some more, I'm wondering whether this is just the result of quantization in the web app? (I've been using the web app instead of the notebook, since I haven't been able to get the notebook to run yet.)

@xhevahir commented Dec 6, 2019

OK, so after getting the notebooks working (on my slow Ubuntu machine), I see that they do indeed change the time signatures to 4/4: numpy_encode.py has 4/4 set as a default, and learner.py uses 16 subdivisions per bar, which suggests 4/4 time. Is there any easy way of setting a specific time signature in a notebook when using the pre-trained models? Or does one need to account for different time signatures in training?

@bearpelican (Owner)

Currently 4/4 time is hard-coded into the project.

Most of the data I trained on was in 4/4 anyway, so hard-coding it simplified things a bit.
You can try changing the settings here: https://github.com/bearpelican/musicautobot/blob/master/musicautobot/numpy_encode.py
But you'll have to retrain the model, and the web app is also quantized to 4/4.
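For reference, the relevant settings look something like this (a rough sketch from memory; the exact names and derived constants may differ, so check the file before editing):

```python
# Hypothetical sketch of the numpy_encode.py settings; verify the actual
# names in your checkout before relying on them.

BPB = 3                # beats per bar: 3 for 3/4 instead of the default 4
SAMPLE_FREQ = 4        # subdivisions per beat (4 beats * 4 = the 16 steps/bar noted above)
TIMESIG = f'{BPB}/4'   # default time signature stamped onto encoded scores

# Anything derived from bar length has to change too, e.g. a maximum note
# duration covering 8 bars' worth of steps:
MAX_NOTE_DUR = 8 * BPB * SAMPLE_FREQ
```

Even with these changed, the pretrained weights were fit to 4/4 data, so retraining from scratch is unavoidable.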

Do you have a lot of music with different time signatures?

@xhevahir commented Dec 6, 2019

I'm sure the great majority of the MIDI files I've collected are in 4/4. One of the bar charts in this pdf shows the counts in the Lakh MIDI Dataset; 4/4 MIDIs are several times more numerous than the next most popular time signature in that set.

I was thinking about using music21 to sort a big set of MIDIs by time signature, as sketched below. (I've also thought about using pretty_midi to find specifically vocal melodies in karaoke MIDIs, which I've got a lot of, or about getting simplified harmony tracks with music21's chordify or reduction functions. But those are different questions.)

I'm guessing, though, that the sets for other time signatures, being so much smaller, wouldn't generate such good predictions? I don't know much about training, but maybe the handling of meters can be refined in stages: start by training on all the MIDIs, then in later phases train on specific meters? Or maybe it would make sense to group them into more general categories, like duple/triple/quadruple meter, or simple/compound?
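Here's roughly what I had in mind for the sorting step, using music21 (just a sketch; parsing a large collection this way is slow, and reading pretty_midi's time_signature_changes list would probably be faster):

```python
from collections import defaultdict
from pathlib import Path

from music21 import converter, meter

def group_by_time_signature(midi_dir):
    """Bucket MIDI files by the first time signature found in each one."""
    groups = defaultdict(list)
    for path in Path(midi_dir).rglob('*.mid'):
        try:
            score = converter.parse(str(path))
        except Exception:
            continue  # skip files music21 can't parse
        sigs = list(score.recurse().getElementsByClass(meter.TimeSignature))
        key = sigs[0].ratioString if sigs else 'unknown'  # e.g. '4/4', '3/4'
        groups[key].append(path)
    return groups
```

The ratioString keys would also make the coarser groupings easy: map '3/4' and '3/8' to triple meter, '6/8' and '9/8' to compound, and so on.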
