Releases: whr94621/NJUNMT-pytorch

Stable version with all commits until Oct 9, 2018

09 Oct 12:44

New Features - Travis CI Enabled

We add components for Travis CI. All tests now run on CPU only.

New Features - Standalone Logging Module

We consolidate the logging functions into a standalone module. Logging output can now be redirected to files while tqdm progress bars are in use.
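
A minimal sketch of the underlying pattern, not the toolkit's actual module: a `logging.Handler` that writes through `tqdm.write` so progress bars stay intact on the console, alongside a `FileHandler` that redirects the same records to a file (the logger name and "train.log" are placeholder names):

```python
import logging
from tqdm import tqdm

class TqdmHandler(logging.Handler):
    """Send log records through tqdm.write so progress bars are not garbled."""
    def emit(self, record):
        try:
            tqdm.write(self.format(record))
        except Exception:
            self.handleError(record)

logger = logging.getLogger("njunmt")
logger.setLevel(logging.INFO)
logger.addHandler(TqdmHandler())                      # tqdm-safe console output
logger.addHandler(logging.FileHandler("train.log"))   # redirect to a file

for step in tqdm(range(100)):
    if step % 50 == 0:
        logger.info("reached step %d", step)
```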

New Features - Different Ways to Batch

This feature lets you choose how your data is batched. We provide two methods, "samples" and "tokens": "samples" counts how many bi-text pairs (samples) go into one batch, while "tokens" counts how many tokens go into one batch (if a sample contains several sentences, the longest one is counted). Choose between them by setting "batching_key" to "samples" or "tokens".
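
A minimal sketch of the two batching methods, assuming simple (src, tgt) token lists; `batch_iter` and its arguments are hypothetical, not the toolkit's API:

```python
def batch_iter(pairs, batch_size, batching_key="samples"):
    """Yield batches of (src, tgt) token lists.

    batching_key="samples": batch_size counts sentence pairs per batch.
    batching_key="tokens":  batch_size counts tokens per batch, where each
    pair contributes the token count of its longer side (as described above).
    """
    batch, cost = [], 0
    for pair in pairs:
        size = 1 if batching_key == "samples" else max(len(side) for side in pair)
        if batch and cost + size > batch_size:
            yield batch
            batch, cost = [], 0
        batch.append(pair)
        cost += size
    if batch:
        yield batch

pairs = [(["a", "b"], ["x"]), (["c"], ["y", "z", "w"]), (["d", "e"], ["v"])]
print(list(batch_iter(pairs, batch_size=4, batching_key="tokens")))
```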

New Features - Delayed Update

This feature lets you emulate multiple GPUs on a single GPU. By setting update_cycle to a value larger than 1, the model runs the forward pass and accumulates gradients for that many steps before updating the parameters, which behaves like one update step with an effective batch size of update_cycle * batch_size. For example, to use 25,000 tokens per update on a single GTX 1080 GPU (8 GB memory), set batch_size to 1250 and update_cycle to 20. This prevents out-of-memory (OOM) errors.
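
A minimal, self-contained sketch of the delayed-update pattern in PyTorch; the toy model and random data stand in for the real NMT model and batches:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
update_cycle = 4  # accumulate gradients over 4 small batches

optimizer.zero_grad()
for step in range(1, 101):
    x, y = torch.randn(32, 8), torch.randn(32, 1)  # one small batch
    loss = nn.functional.mse_loss(model(x), y)
    (loss / update_cycle).backward()  # scale so the accumulated gradient averages
    if step % update_cycle == 0:      # one real update per cycle, behaving like
        optimizer.step()              # an effective batch size of 4 * 32
        optimizer.zero_grad()
```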

Improvements

  • Put source sentences of similar length into the same batch during inference. This significantly improves beam-search speed (beam size 5) from 497.01 tokens/sec to 1179.05 tokens/sec on a single 1080 Ti GPU; see the sketch after this list.
  • Use SacreBLEU to compute BLEU scores instead of multi-bleu.perl and multi-bleu-detok.perl.
  • Add the AdamW and Adafactor optimizers.
  • Count padding tokens when using "tokens" as the batching key.
  • Add ensemble decoding across different checkpoints.
  • Add length penalty when decoding.
  • Save the k-best models.
  • Add a dim_per_head option for the transformer. dim_per_head * n_head no longer needs to equal d_model.
  • Add exponential moving average (EMA).
  • Support PyTorch 0.4.1.
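
The first improvement above relies on length-sorted batching. A minimal sketch of the idea, not the toolkit's actual code (`length_sorted_batches` is a hypothetical helper): sort sentences by length, cut the sorted order into batches, and keep the original indices so outputs can be restored to input order.

```python
def length_sorted_batches(sources, batch_size):
    """Group sentences of similar length by sorting before batching.

    Returns batches of original indices, so decoded outputs can be put
    back into input order afterwards. Less padding per batch is where
    the beam-search speedup comes from.
    """
    order = sorted(range(len(sources)), key=lambda i: len(sources[i]))
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

sources = [s.split() for s in ["a b c", "a", "a b", "a b c d"]]
for batch in length_sorted_batches(sources, batch_size=2):
    print([sources[i] for i in batch])
```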

Bug Fixes

  • Mask padding in the generator.
  • Fix a memory leak so that RAM no longer keeps increasing during training.
  • Close file handles when shuffling finishes.
  • Fix a typo in Criterion.

API Changes

  • Refactor RNN for data parallelism support.

Release candidate of the September 2018 update

14 Sep 02:11
Pre-release
2018.09-rc0

Update requirements.

NJUNMT-pytorch using PyTorch 0.3.1

16 May 07:53

This is the final version using PyTorch 0.3.1.
We will only provide minimal maintenance and bug fixes.