
New MIDI track classifier #12

Open
xhevahir opened this issue Nov 16, 2019 · 2 comments

Comments

@xhevahir

Issue #4 mentioned training solely on chord progressions, and I've wondered about doing the same with melodies. To that end, I'm linking a project I just found that uses a random forest to separate melody, harmony, and bass parts in MIDI files: https://github.com/ruiguo-bio/midi-miner.
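
midi-miner has its own feature set and training pipeline; purely to illustrate the general idea, here's a minimal sketch of random-forest track classification using pretty_midi and scikit-learn. The features, labels, and `labeled_examples` data are placeholders, not midi-miner's actual code:

```python
# Not midi-miner's actual code: a minimal sketch of the same idea, using
# pretty_midi for per-track features and scikit-learn's random forest.
import numpy as np
import pretty_midi
from sklearn.ensemble import RandomForestClassifier

def track_features(inst):
    """Crude per-track features: pitch statistics plus a rough polyphony measure."""
    if not inst.notes:
        return np.zeros(4)
    pitches = np.array([n.pitch for n in inst.notes])
    onsets = np.array([n.start for n in inst.notes])
    # Fraction of notes whose onset coincides with at least one other note.
    shared = np.mean([np.isclose(onsets, t).sum() > 1 for t in onsets])
    return np.array([pitches.mean(), pitches.std(), float(len(pitches)), shared])

def build_dataset(examples):
    """examples: iterable of (midi_path, track_index, label) triples,
    with labels like melody / harmony / bass (hypothetical data)."""
    X, y = [], []
    for path, idx, label in examples:
        pm = pretty_midi.PrettyMIDI(path)
        X.append(track_features(pm.instruments[idx]))
        y.append(label)
    return np.array(X), np.array(y)

# X, y = build_dataset(labeled_examples)   # labeled_examples is placeholder data
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```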

@bearpelican (Owner) commented Dec 6, 2019

Thanks for linking this!

I'm currently using a naive way to separate out the bass. However, it only detects chords whose notes start at exactly the same time.
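
Roughly, the limitation looks like this (an illustrative sketch, not the actual code in this repo):

```python
# Illustrative sketch (not this repo's actual code) of exact-onset chord
# detection: notes count as a chord only if their start times are identical,
# so arpeggiated or loosely quantized chords slip through.
from collections import defaultdict
import pretty_midi

def exact_onset_chords(inst):
    groups = defaultdict(list)
    for note in inst.notes:
        groups[note.start].append(note.pitch)  # exact float equality
    return {t: sorted(ps) for t, ps in groups.items() if len(ps) > 1}

# A more forgiving version would quantize onsets to a grid first,
# e.g. round(note.start / grid) * grid, before grouping.
```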

Feel free to open a PR to use midi-miner instead!

@xhevahir (Author) commented Dec 6, 2019

I'm a total novice at this (I'd never even submitted a pull request, or run a Jupyter notebook, until now), but I'll start experimenting with ways to process MIDI files that may be useful. I've wondered, for instance, whether it would make sense to translate between bass tracks and drum tracks, since drum tracks are easy to identify: they sit on MIDI channel 10. Or maybe generating a bass line from chords plus drums? But that's three different tracks, which is probably a whole different story. Sorting monophonic tracks into melodies and bass lines is probably the more immediately useful application for something like midi-miner; a rough sketch of that sorting step is below.
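
Something like this might be a starting point (a rough sketch using pretty_midi, which flags channel-10 tracks via is_drum; the function names and the melody/bass heuristic are mine):

```python
# Rough sketch of the sorting step, assuming pretty_midi (which marks
# MIDI channel 10 tracks with is_drum). Function names are hypothetical.
import pretty_midi

def is_monophonic(inst):
    """True if no two notes in the track overlap in time."""
    notes = sorted(inst.notes, key=lambda n: n.start)
    return all(a.end <= b.start for a, b in zip(notes, notes[1:]))

def sort_tracks(path):
    pm = pretty_midi.PrettyMIDI(path)
    drums = [i for i in pm.instruments if i.is_drum]
    pitched = [i for i in pm.instruments if not i.is_drum and i.notes]
    mono = [i for i in pitched if is_monophonic(i)]
    poly = [i for i in pitched if not is_monophonic(i)]
    # Crude melody/bass heuristic among the monophonic tracks: the one
    # with the lowest average pitch is a reasonable bass-line candidate.
    mono.sort(key=lambda i: sum(n.pitch for n in i.notes) / len(i.notes))
    return drums, mono, poly
```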

I've thought about learning how to train models, but I've been confused by the advice fast.ai gives beginners: in one place they recommend starting with cloud services, while elsewhere all the talk is about which GPU to buy. I couldn't get anything working right on Windows or WSL (which doesn't support CUDA anyway), and my Ubuntu machine is a pitifully slow thing with integrated graphics. In the video interview you did, you mentioned using a bunch of Tesla GPUs for 36 hours. Did you use something less powerful for experimentation and save the big cloud hardware for the final training?

Anyway, thanks.

Edit: I just remembered jSymbolic, parts of which are implemented in music21. It has lots of analysis functions that could be useful.
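
For instance, pulling individual jSymbolic features through music21 looks something like this (a minimal sketch; the class names come from the music21.features.jSymbolic module, so worth double-checking against the installed version's docs):

```python
# Minimal sketch of extracting jSymbolic-style features via music21.
from music21 import converter, features

s = converter.parse('example.mid')  # placeholder path

for FeatureClass in (features.jSymbolic.AverageMelodicIntervalFeature,
                     features.jSymbolic.MostCommonPitchFeature):
    fe = FeatureClass(s)
    print(fe.name, fe.extract().vector)
```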
