From 41af7cc1442b939c9455f6932e542eb70b092613 Mon Sep 17 00:00:00 2001
From: Fangjun Kuang
Date: Fri, 21 Jun 2024 11:06:54 +0800
Subject: [PATCH] Fix doc URLs

---
 README.md                                   | 13 +++++++++----
 egs/aidatatang_200zh/ASR/README.md          |  2 +-
 egs/aishell/ASR/tdnn_lstm_ctc/README.md     |  2 +-
 egs/librispeech/ASR/README.md               |  2 +-
 egs/librispeech/ASR/conformer_ctc/README.md |  4 ++--
 egs/timit/ASR/README.md                     |  2 +-
 egs/yesno/ASR/README.md                     |  2 +-
 egs/yesno/ASR/tdnn/README.md                |  2 +-
 8 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 7700661667..31e514606c 100644
--- a/README.md
+++ b/README.md
@@ -16,12 +16,12 @@ Please refer to [document](https://k2-fsa.github.io/icefall/huggingface/spaces.h
 
 # Installation
 
-Please refer to [document](https://icefall.readthedocs.io/en/latest/installation/index.html)
+Please refer to [document](https://k2-fsa.github.io/icefall/installation/index.html)
 for installation.
 
 # Recipes
 
-Please refer to [document](https://icefall.readthedocs.io/en/latest/recipes/index.html)
+Please refer to [document](https://k2-fsa.github.io/icefall/recipes/index.html)
 for more details.
 
 ## ASR: Automatic Speech Recognition
@@ -77,7 +77,7 @@ The [LibriSpeech][librispeech] recipe supports the most comprehensive set of mod
 #### Whisper
 - [OpenAi Whisper](https://arxiv.org/abs/2212.04356) (We support fine-tuning on AiShell-1.)
 
-If you are willing to contribute to icefall, please refer to [contributing](https://icefall.readthedocs.io/en/latest/contributing/index.html) for more details.
+If you are willing to contribute to icefall, please refer to [contributing](https://k2-fsa.github.io/icefall/contributing/index.html) for more details.
 
 We would like to highlight the performance of some of the recipes here.
@@ -343,7 +343,12 @@ We provide a Colab notebook to test the pre-trained model: [![Open In Colab](htt
 
 Once you have trained a model in icefall, you may want to deploy it with C++ without Python dependencies.
 
-Please refer to the [document](https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/librispeech/conformer_ctc.html#deployment-with-c)
+Please refer to
+
+  - https://k2-fsa.github.io/icefall/model-export/export-with-torch-jit-script.html
+  - https://k2-fsa.github.io/icefall/model-export/export-onnx.html
+  - https://k2-fsa.github.io/icefall/model-export/export-ncnn.html
+
 for how to do this.
 
 We also provide a Colab notebook, showing you how to run a torch scripted model in [k2][k2] with C++.
diff --git a/egs/aidatatang_200zh/ASR/README.md b/egs/aidatatang_200zh/ASR/README.md
index b85895a092..035139d17d 100644
--- a/egs/aidatatang_200zh/ASR/README.md
+++ b/egs/aidatatang_200zh/ASR/README.md
@@ -6,7 +6,7 @@ The main repositories are list below, we will update the training and decoding s
 k2: https://github.com/k2-fsa/k2
 icefall: https://github.com/k2-fsa/icefall
 lhotse: https://github.com/lhotse-speech/lhotse
-* Install k2 and lhotse, k2 installation guide refers to https://k2.readthedocs.io/en/latest/installation/index.html, lhotse refers to https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. I think the latest version would be ok. And please also install the requirements listed in icefall.
+* Install k2 and lhotse, k2 installation guide refers to https://k2-fsa.github.io/k2/installation/index.html, lhotse refers to https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. I think the latest version would be ok. And please also install the requirements listed in icefall.
 * Clone icefall(https://github.com/k2-fsa/icefall) and check to the commit showed above.
 ```
 git clone https://github.com/k2-fsa/icefall
diff --git a/egs/aishell/ASR/tdnn_lstm_ctc/README.md b/egs/aishell/ASR/tdnn_lstm_ctc/README.md
index a2d80a7858..c003fd419e 100644
--- a/egs/aishell/ASR/tdnn_lstm_ctc/README.md
+++ b/egs/aishell/ASR/tdnn_lstm_ctc/README.md
@@ -1,4 +1,4 @@
 Please visit
-<https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/aishell/tdnn_lstm_ctc.html>
+<https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/aishell/tdnn_lstm_ctc.html>
 for how to run this recipe.
diff --git a/egs/librispeech/ASR/README.md b/egs/librispeech/ASR/README.md
index 080f81c913..93fef7a079 100644
--- a/egs/librispeech/ASR/README.md
+++ b/egs/librispeech/ASR/README.md
@@ -1,6 +1,6 @@
 # Introduction
 
-Please refer to <https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/librispeech/index.html> for how to run models in this recipe.
+Please refer to <https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/librispeech/index.html> for how to run models in this recipe.
 
 [./RESULTS.md](./RESULTS.md) contains the latest results.
 
diff --git a/egs/librispeech/ASR/conformer_ctc/README.md b/egs/librispeech/ASR/conformer_ctc/README.md
index 37ace42048..1bccccc733 100644
--- a/egs/librispeech/ASR/conformer_ctc/README.md
+++ b/egs/librispeech/ASR/conformer_ctc/README.md
@@ -1,7 +1,7 @@
 ## Introduction
 
 Please visit
-<https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/librispeech/conformer_ctc.html>
+<https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/librispeech/conformer_ctc.html>
 for how to run this recipe.
 
 ## How to compute framewise alignment information
@@ -9,7 +9,7 @@ for how to run this recipe.
 ### Step 1: Train a model
 
 Please use `conformer_ctc/train.py` to train a model.
-See <https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/librispeech/conformer_ctc.html>
+See <https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/librispeech/conformer_ctc.html>
 for how to do it.
 
 ### Step 2: Compute framewise alignment
diff --git a/egs/timit/ASR/README.md b/egs/timit/ASR/README.md
index d493fc4794..f700fab9eb 100644
--- a/egs/timit/ASR/README.md
+++ b/egs/timit/ASR/README.md
@@ -1,3 +1,3 @@
-Please refer to <https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/timit/index.html>
+Please refer to <https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/timit/index.html>
 
 for how to run models in this recipe.
diff --git a/egs/yesno/ASR/README.md b/egs/yesno/ASR/README.md
index 38b491fc67..c9a2b56b1e 100644
--- a/egs/yesno/ASR/README.md
+++ b/egs/yesno/ASR/README.md
@@ -10,5 +10,5 @@ get the following WER:
 ```
 
 Please refer to
-<https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/yesno/index.html>
+<https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/yesno/index.html>
 for detailed instructions.
diff --git a/egs/yesno/ASR/tdnn/README.md b/egs/yesno/ASR/tdnn/README.md
index 2b6116f0ab..1b7ddcaf15 100644
--- a/egs/yesno/ASR/tdnn/README.md
+++ b/egs/yesno/ASR/tdnn/README.md
@@ -2,7 +2,7 @@
 ## How to run this recipe
 
 You can find detailed instructions by visiting
-<https://icefall.readthedocs.io/en/latest/recipes/Non-streaming-ASR/yesno/tdnn.html>
+<https://k2-fsa.github.io/icefall/recipes/Non-streaming-ASR/yesno/tdnn.html>
 
 It describes how to run this recipe and how to use a pre-trained model with `./pretrained.py`.