
Dance Dance Convolution

Dance Dance Convolution is an automatic choreography system for Dance Dance Revolution (DDR), converting raw audio into playable dances. Important note: this repository is an extension of Chris Donahue's work by Lindsay Kempen. The rest of this README is from the original project.

This repository contains the code used to produce the dataset and results in the Dance Dance Convolution paper. You can find a live demo of our system here as well as an example video.

The Fraxtil and In The Groove datasets from the paper are amalgamations of three and two StepMania "packs" respectively. Instructions for downloading these packs and building the datasets can be found below.

We are in the process of reimplementing this code (under branch master_v2), primarily to add on-the-fly feature extraction and remove the essentia dependency. However, you can get started with master if you are eager to dance.

Please email me with any issues: cdonahue [@at@] ucsd (.dot.) edu

Attribution

If you use this dataset in your research, please cite it via the following BibTeX:

@inproceedings{donahue2017dance,
  title={Dance Dance Convolution},
  author={Donahue, Chris and Lipton, Zachary C and McAuley, Julian},
  booktitle={Proceedings of the 34th International Conference on Machine Learning},
  year={2017},
}

Requirements

  • tensorflow 0.12.1 (see "Running demo locally")
  • essentia (audio feature extraction; the master_v2 reimplementation removes this dependency)

Directory description

  • dataset/: code to generate the dataset from StepMania files
  • infer/: code to run demo locally
  • learn/: code to train step placement (onset) and selection (sym) models
  • scripts/: shell scripts to build the dataset (smd_*) and train (sml_*)

Running demo locally

The demo (unfortunately) requires tensorflow 0.12.1 and essentia. A virtualenv is recommended.

  1. Install tensorflow 0.12.1
  2. Run server: ./ddc_server.sh
  3. Send server choreography requests: python ddc_client.py $ARTIST_NAME $SONG_TITLE $FILEPATH
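
For example, a minimal session might look like the sketch below. The environment name, artist/title strings, and audio path are placeholders, and the exact install command for this old TensorFlow release may vary with your Python version (essentia must be installed separately):

  # create an isolated environment (name is arbitrary)
  virtualenv ddc-env
  source ddc-env/bin/activate
  pip install tensorflow==0.12.1   # may require an older Python; install essentia per its own instructions

  # start the choreography server in one terminal
  ./ddc_server.sh

  # from another terminal, request a chart for a local audio file (placeholder values)
  python ddc_client.py "Some Artist" "Some Song" ~/music/some_song.ogg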

Building dataset

  1. Make a directory named data under ~/ddc (or change scripts/var.sh to point to a different directory)
  2. Under data, make directories raw, json_raw and json_filt
  3. Under data/raw, make directories fraxtil and itg
  4. Under data/raw/fraxtil, download and unzip the three Fraxtil packs:
  5. Under data/raw/itg, download and unzip the two In The Groove packs:
  6. Navigate to scripts/
  7. Parse .sm files to JSON: ./all.sh ./smd_1_extract.sh
  8. Filter JSON files (removing mines, etc.): ./all.sh ./smd_2_filter.sh
  9. Split dataset 80/10/10: ./all.sh ./smd_3_dataset.sh
  10. Analyze dataset (e.g.): ./smd_4_analyze.sh fraxtil
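
Put together, a full dataset build might look like the following sketch, assuming the default ~/ddc/data location from scripts/var.sh and that the packs have already been unzipped into data/raw/fraxtil and data/raw/itg:

  # create the expected directory layout
  mkdir -p ~/ddc/data/raw/fraxtil ~/ddc/data/raw/itg
  mkdir -p ~/ddc/data/json_raw ~/ddc/data/json_filt

  # build the dataset
  cd scripts
  ./all.sh ./smd_1_extract.sh    # parse .sm files to JSON
  ./all.sh ./smd_2_filter.sh     # filter JSON (remove mines, etc.)
  ./all.sh ./smd_3_dataset.sh    # split 80/10/10
  ./smd_4_analyze.sh fraxtil     # per-dataset statistics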

Running training

  1. Navigate to scripts/
  2. Extract features: ./all.sh ./sml_onset_0_extract.sh
  3. Generate chart .pkl files (this may take a while): ./all.sh ./sml_onset_1_chart.sh
  4. Train a step placement (onset detection) model on a dataset: ./sml_onset_2_train.sh fraxtil
  5. Train a step selection (symbolic) model on a dataset: ./sml_sym_2_train.sh fraxtil
  6. Train and evaluate a Laplace-smoothed 5-gram model on a dataset: ./sml_sym_2_mark.sh fraxtil 5
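
As a concrete end-to-end example for the Fraxtil dataset (the commands simply mirror the steps above):

  cd scripts
  ./all.sh ./sml_onset_0_extract.sh   # extract audio features
  ./all.sh ./sml_onset_1_chart.sh     # generate chart .pkl files (slow)
  ./sml_onset_2_train.sh fraxtil      # train step placement (onset) model
  ./sml_sym_2_train.sh fraxtil        # train step selection (sym) model
  ./sml_sym_2_mark.sh fraxtil 5       # Laplace-smoothed 5-gram baseline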
