
🚀 update readmes
nglehuy committed Feb 16, 2021
1 parent bda1fdf commit df538c4
Showing 2 changed files with 4 additions and 4 deletions.
5 changes: 2 additions & 3 deletions README.md
@@ -21,13 +21,11 @@ TensorFlowASR implements some automatic speech recognition architectures such as

## What's New?

- (02/16/2021) Supported TPU training
- (12/27/2020) Supported _naive_ token-level timestamps, see [demo](./examples/demonstration/conformer.py) with flag `--timestamp`
- (12/17/2020) Supported ContextNet [http://arxiv.org/abs/2005.03191](http://arxiv.org/abs/2005.03191)
- (12/12/2020) Added support for masking
- (11/14/2020) Supported gradient accumulation for training with larger batch sizes
- (11/3/2020) Reduced differences between `librosa.stft` and `tf.signal.stft`
- (10/31/2020) Updated DeepSpeech2 and supported Jasper [https://arxiv.org/abs/1904.03288](https://arxiv.org/abs/1904.03288)
- (10/18/2020) Supported Streaming Transducer [https://arxiv.org/abs/1811.06621](https://arxiv.org/abs/1811.06621)

## Table of Contents

@@ -41,6 +39,7 @@ TensorFlowASR implements some automatic speech recognition architectures such as
- [Installation](#installation)
- [Installing via PyPi](#installing-via-pypi)
- [Installing from source](#installing-from-source)
- [Running in a container](#running-in-a-container)
- [Setup training and testing](#setup-training-and-testing)
- [TFLite Conversion](#tflite-convertion)
- [Features Extraction](#features-extraction)
3 changes: 2 additions & 1 deletion vocabularies/README.md
@@ -1,4 +1,5 @@
# Predefined Vocabularies

- `language.characters` files contain all of that language's characters
- `corpus_maxlength_nwords.subwords` files contain subwords generated from corpus transcripts, where the maximum length of a subword is `maxlength` and the number of subwords is `nwords`
- `corpus_maxlength_nwords.max_lengths.json` files contain max lengths calculated from corpus durations and transcripts, used for static training
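As an illustration, the naming convention above could be decoded with a small helper like the sketch below. The function name and the example filename (`librispeech_4_4096.subwords`) are hypothetical, not files shipped with TensorFlowASR:

```python
import re

def parse_subwords_filename(filename):
    """Split a 'corpus_maxlength_nwords.subwords' name into its parts.

    Hypothetical helper illustrating the naming convention; not part
    of the TensorFlowASR codebase.
    """
    match = re.fullmatch(
        r"(?P<corpus>.+)_(?P<maxlength>\d+)_(?P<nwords>\d+)\.subwords",
        filename,
    )
    if match is None:
        raise ValueError(f"unexpected filename: {filename}")
    return match["corpus"], int(match["maxlength"]), int(match["nwords"])

# Made-up example: subwords of length <= 4 from a 4096-entry vocabulary
corpus, maxlength, nwords = parse_subwords_filename("librispeech_4_4096.subwords")
```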
