From 701691911c6e90f19697034b224f27d734747c4f Mon Sep 17 00:00:00 2001
From: Ryo Takahashi
Date: Tue, 9 Jun 2020 06:29:50 +0900
Subject: [PATCH] Update word-level LM arguments in README (#786)

---
 word_language_model/README.md | 45 +++++++++++++++++------------------
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/word_language_model/README.md b/word_language_model/README.md
index db23265e..b2f1cd71 100644
--- a/word_language_model/README.md
+++ b/word_language_model/README.md
@@ -25,29 +25,28 @@ The `main.py` script accepts the following arguments:
 
 ```bash
 optional arguments:
-  -h, --help         show this help message and exit
-  --data DATA        location of the data corpus
-  --model MODEL      type of recurrent net (RNN_TANH, RNN_RELU, LSTM, GRU)
-  --emsize EMSIZE    size of word embeddings
-  --nhid NHID        number of hidden units per layer
-  --nlayers NLAYERS  number of layers
-  --lr LR            initial learning rate
-  --clip CLIP        gradient clipping
-  --epochs EPOCHS    upper epoch limit
-  --batch_size N     batch size
-  --bptt BPTT        sequence length
-  --dropout DROPOUT  dropout applied to layers (0 = no dropout)
-  --decay DECAY      learning rate decay per epoch
-  --tied             tie the word embedding and softmax weights
-  --seed SEED        random seed
-  --cuda             use CUDA
-  --log-interval N   report interval
-  --save SAVE        path to save the final model
-  --onnx-export      path to export the final model in onnx format
-  --transformer_head N   the number of heads in the encoder/decoder of the transformer model
-  --transformer_encoder_layers N   the number of layers in the encoder of the transformer model
-  --transformer_decoder_layers N   the number of layers in the decoder of the transformer model
-  --transformer_d_ff N   the number of nodes on the hidden layer in feed forward nn
+  -h, --help            show this help message and exit
+  --data DATA           location of the data corpus
+  --model MODEL         type of recurrent net (RNN_TANH, RNN_RELU, LSTM, GRU,
+                        Transformer)
+  --emsize EMSIZE       size of word embeddings
+  --nhid NHID           number of hidden units per layer
+  --nlayers NLAYERS     number of layers
+  --lr LR               initial learning rate
+  --clip CLIP           gradient clipping
+  --epochs EPOCHS       upper epoch limit
+  --batch_size N        batch size
+  --bptt BPTT           sequence length
+  --dropout DROPOUT     dropout applied to layers (0 = no dropout)
+  --tied                tie the word embedding and softmax weights
+  --seed SEED           random seed
+  --cuda                use CUDA
+  --log-interval N      report interval
+  --save SAVE           path to save the final model
+  --onnx-export ONNX_EXPORT
+                        path to export the final model in onnx format
+  --nhead NHEAD         the number of heads in the encoder/decoder of the
+                        transformer model
 ```
 
 With these arguments, a variety of models can be tested.
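
A minimal sketch of a training invocation composing the updated flags; the numeric values (epoch count, learning rate, head count) are illustrative assumptions, not prescribed by this patch:

```bash
# Sketch: train the Transformer variant of the word-level LM with CUDA.
# Hyperparameter values below are illustrative, not taken from the README.
python main.py --cuda --epochs 6 --model Transformer --lr 5 --nhead 2
```

Note that `--nhead` only takes effect when `--model Transformer` is selected; the recurrent variants (RNN_TANH, RNN_RELU, LSTM, GRU) ignore it.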