
Training a charRNN and using the model in ml5js

Multi-layer Recurrent Neural Networks (LSTM, RNN) for character-level language models, written in Python using TensorFlow and modified to work with TensorFlow.js and ml5js.

Based on char-rnn-tensorflow.

Requirements

A Python environment with TensorFlow installed.
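If you are setting up from scratch, a minimal sketch for an isolated environment looks like the following; the exact TensorFlow version this repository expects isn't listed here, so check its requirements before installing:

# create and activate an isolated Python environment
python3 -m venv venv
source venv/bin/activate

# install TensorFlow (match the version this repository expects)
pip install tensorflow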

Usage

1) Download this repository

Start by downloading or cloning this repository:

git clone https://github.com/ml5js/training-charRNN.git
cd training-charRNN

2) Collect data

RNNs work well when you want to predict sequences or patterns from your inputs. Try to gather as much input data as you can: the more, the better.

Once your data is ready, create a text file (with any name) in the root (or any subdirectory) of this project. I'll assume it's called input.txt for the rest of this documentation.

(A quick tip to concatenate many small disparate .txt files into one large training file: ls *.txt | xargs -L 1 cat >> input.txt)
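For example, if your source files live in a folder called data/ (a placeholder name), a minimal sketch of the same idea is:

# gather every .txt file in data/ into a single training file at the project root
cat data/*.txt > input.txt

# check the corpus size; it will help you pick hyperparameters later (see below)
du -h input.txt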

3) Train

Run the training script with the default settings:

python train.py --data_path=./folder_with_my_custom_data/input.txt

Or you can specify the hyperparameters yourself, depending on your training set, the size of your data, etc.:

python train.py --data_path=./folder_with_my_custom_data/input.txt --rnn_size 128 --num_layers 2 --seq_length 64 --batch_size 32 --num_epochs 1000 --save_model ./models --save_checkpoints ./checkpoints

This will train your model and save a JavaScript-compatible version of it in a folder called ./models, unless you specify a different path with --save_model.

You can also run the script called run.sh:

bash run.sh

This file contains the same parameters as the ones described above:

# These are the hyperparameters you can change to fit your data
python train.py --data_path=./data \
--rnn_size 128 \
--num_layers 2 \
--seq_length 50 \
--batch_size 50 \
--num_epochs 50 \
--save_checkpoints ./checkpoints \
--save_model ./models

4) Use it!

Once the model is ready, you'll just need to point to it in your ml5 sketch:

const charRNN = new ml5.charRNN('./models/your_new_model');

That's it!
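Keep in mind that the browser fetches the model files over HTTP, so the sketch and the models folder need to be served by a web server rather than opened directly from the filesystem. One simple option, assuming you already have Python 3 from the training step, is its built-in server:

# run from the folder that contains your sketch and the models/ directory
python -m http.server 8000
# then open http://localhost:8000 in your browser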

Hyperparameters

Depending on the size of your training dataset, here are some hyperparameters that might work (a concrete example command follows the list):

  • 2 MB:
    • rnn_size 256 (or 128)
    • num_layers 2
    • seq_length 64
    • batch_size 32
    • output_keep_prob 0.75
  • 5-8 MB:
    • rnn_size 512
    • num_layers 2 (or 3)
    • seq_length 128
    • batch_size 64
    • output_keep_prob 0.75 (i.e. dropout 0.25)
  • 10-20 MB:
    • rnn_size 1024
    • num_layers 2 (or 3)
    • seq_length 128 (or 256)
    • batch_size 128
    • output_keep_prob 0.75
  • 25+ MB:
    • rnn_size 2048
    • num_layers 2 (or 3)
    • seq_length 256 (or 128)
    • batch_size 128
    • output_keep_prob 0.75

Note: output_keep_prob 0.75 is equivalent to dropout probability of 0.25.
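As a concrete example, here is how the first preset (a corpus of roughly 2 MB) maps onto the training command. The data path is a placeholder, and --output_keep_prob is assumed to be accepted by train.py because it appears in the list above:

# example: ~2 MB corpus, using the first preset
python train.py --data_path=./input.txt \
--rnn_size 256 \
--num_layers 2 \
--seq_length 64 \
--batch_size 32 \
--output_keep_prob 0.75 \
--save_checkpoints ./checkpoints \
--save_model ./models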
