diff --git a/cm-mlops/automation/list_of_scripts.md b/cm-mlops/automation/list_of_scripts.md
new file mode 100644
index 0000000000..ac822e1db9
--- /dev/null
+++ b/cm-mlops/automation/list_of_scripts.md
@@ -0,0 +1,37 @@
+[ [Back to index](README.md) ]
+
+
+
+This is an automatically generated list of reusable CM scripts being developed
+by the [open taskforce on automation and reproducibility](https://github.com/mlcommons/ck/issues/536)
+to make MLOps and DevOps tools more interoperable, portable, deterministic and reproducible.
+These scripts support the community effort to modularize ML Systems and automate their benchmarking, optimization,
+design space exploration and deployment across continuously changing software and hardware.
+
+# List of CM scripts by categories
+
+
+Table of contents:
+
+* [Platform information](#platform-information)
+
+
+
+
+### Platform information
+
+* [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os)
+
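+Any script in this list can be run directly via the CM interface; below is a minimal sketch for `detect-os`
+(the `--out=json` flag, assumed here for illustration, prints the resolved state):
+
+```
+cm run script "detect os" --out=json
+```
+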
+
+# List of all sorted CM scripts
+
+* [detect-os](https://github.com/mlcommons/ck/tree/master/cm-mlops/script/detect-os)
+
+
+
+
+# Maintainers
+
+* [Open MLCommons taskforce on automation and reproducibility](https://github.com/mlcommons/ck/blob/master/docs/taskforce.md)
diff --git a/cm-mlops/automation/script/README-extra.md b/cm-mlops/automation/script/README-extra.md
index 1a641bd785..e791f4ddde 100644
--- a/cm-mlops/automation/script/README-extra.md
+++ b/cm-mlops/automation/script/README-extra.md
@@ -708,12 +708,12 @@ as shown in the next example.
Instead of adding this flag to all scripts, you can specify it
using `CM_SCRIPT_EXTRA_CMD` environment variable as follows:
```bash
-export CM_SCRIPT_EXTRA_CMD="--adr.python.name.mlperf"
+export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
```
You can even specify min Python version required as follows:
```bash
-export CM_SCRIPT_EXTRA_CMD="--adr.python.name.mlperf --adr.python.version_min=3.9"
+export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf --adr.python.version_min=3.9"
```
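+
+For example, a minimal session could look as follows (the `get python3` script tags are only an assumed
+illustration; any script invoked afterwards picks up the flags from `CM_SCRIPT_EXTRA_CMD`):
+```bash
+# Sketch: make every subsequent CM script reuse the "mlperf" virtual env with Python >= 3.9
+export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf --adr.python.version_min=3.9"
+cm run script "get python3"
+```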
### Assembling pipelines with other artifacts included
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-scc2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-scc2023/README.md
index b5c69bf0e9..d92014a339 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-scc2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-scc2023/README.md
@@ -4,7 +4,7 @@ Under preparation: Reproduce and optimize MLPerf inference benchmarks during Stu
See our [related challenge from 2022](https://access.cknowledge.org/playground/?action=challenges&name=optimize-mlperf-inference-scc2023).
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cKnowledge](https://cKnowledge.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v2.1-2022/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v2.1-2022/README.md
index 59c1d4d1b3..d0ac7cf15b 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v2.1-2022/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v2.1-2022/README.md
@@ -3,7 +3,7 @@
Prepare, optimize and reproduce MLPerf inference v2.1 benchmarks across diverse implementations, software and hardware
using the [MLCommons CK framework](https://github.com/mlcommons/ck).
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.0-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.0-2023/README.md
index 9a1c07d6b7..a273890a27 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.0-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.0-2023/README.md
@@ -9,7 +9,7 @@ using the [MLCommons CK framework](https://github.com/mlcommons/ck):
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/README.md
index 21334f4755..e1eb944728 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/README.md
@@ -3,9 +3,10 @@
Prepare, optimize and reproduce MLPerf inference v3.1 benchmarks across diverse implementations, models, software and hardware.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
-how to use and enhance CK to run and optimize MLPerf inference benchmarks.
+how to use and enhance [CM scripts and workflows](https://github.com/mlcommons/ck/blob/master/docs/README.md)
+to run and optimize MLPerf inference benchmarks on your software/hardware stack.
-## Organizers
+#### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
@@ -22,6 +23,9 @@ For MLPerf inference 3.1 we have the following benchmark tasks
6. Recommendation using DLRM model and Criteo dataset
7. Large Language Model (Pending)
-All the six tasks are applicable to datacenter while all except Recommendation are applicable to edge category. Further, language processing and medical imaging models have a high accuracy variant where the achieved accuracy must be within `99.9%` (`99%` is the default accuracy requirement) of the fp32 reference model. Recommendation task is only having a high accuracy variant. Currently we are not supporting Recommendation task as we are not having a highend server which is a requirement.
+All six tasks are applicable to the datacenter category, while all except Recommendation are also applicable to the edge category.
+Further, the language processing and medical imaging models have a high-accuracy variant where the achieved accuracy
+must be within `99.9%` of the fp32 reference model (`99%` is the default accuracy requirement).
+The Recommendation task has only a high-accuracy variant. We do not currently support the Recommendation task because it requires a high-end server, which we do not have.
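+
+For example, a high-accuracy variant is selected simply by picking the corresponding model name when
+running the benchmark (a representative sketch only - the exact flags for each task, backend and device
+are given in the per-benchmark guides and may differ for your setup):
+
+```
+cm run script --tags=generate-run-cmds,inference --model=bert-99.9 \
+   --implementation=reference --device=cpu --backend=onnxruntime --quiet
+```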
This challenge is integrated with [our platform](https://github.com/ctuning/mlcommons-ck/tree/master/platform)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-3d-unet-submission.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-3d-unet-submission.md
index 8801a2318f..9806c22647 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-3d-unet-submission.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-3d-unet-submission.md
@@ -1,11 +1,19 @@
## Setup
-Please follow the MLCommons CK [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md) to install CM.
-Download the ck repo to get the CM script for MLPerf submission
+
+Please follow this [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
+to install the MLCommons CM reproducibility and automation language in your native environment or Docker container.
+
+Then install the repository with CM automation scripts to run MLPerf benchmarks out-of-the-box
+across different software, hardware, models and data sets:
+
```
cm pull repo mlcommons@ck
```
+Note that you can install a Python virtual environment via CM to avoid contaminating
+your local Python installation, as described [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/automation/script/README-extra.md#using-python-virtual-environments).
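+
+A minimal sketch of such a setup (the `mlperf` environment name is an arbitrary example):
+
+```
+cm run script "install python-venv" --name=mlperf
+export CM_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"
+```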
+
## Run Commands
3d-unet has two variants - `3d-unet-99` and `3d-unet-99.9` - where `99` and `99.9` specify the required accuracy constraint with respect to the reference floating-point model. Both models can be submitted under the edge as well as the datacenter category.
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-bert-submission.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-bert-submission.md
index 088f9d0ce2..c43363c1e9 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-bert-submission.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-bert-submission.md
@@ -1,15 +1,19 @@
## Setup
+
Please follow this [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
-to install the [MLCommons CM scripting language](https://github.com/mlcommons/ck/tree/master/docs#collective-mind-language-cm)
-on your system with minimal dependencies.
+to install the MLCommons CM reproducibility and automation language in your native environment or Docker container.
+
+Then install the repository with CM automation scripts to run MLPerf benchmarks out-of-the-box
+across different software, hardware, models and data sets:
-Download a GitHub repository with [portable and reusable CM scripts](https://github.com/mlcommons/ck/tree/master/cm-mlops/script)
-for unified MLPerf benchmarking and submission:
```
cm pull repo mlcommons@ck
```
+Note that you can install a Python virtual environment via CM to avoid contaminating
+your local Python installation, as described [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/automation/script/README-extra.md#using-python-virtual-environments).
+
## Run Commands
Bert has two variants - `bert-99` and `bert-99.9` - where `99` and `99.9` specify the required accuracy constraint with respect to the reference floating-point model. The `bert-99.9` model is applicable only to datacenter systems.
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-resnet50-submission.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-resnet50-submission.md
index 72964d017c..470930e373 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-resnet50-submission.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-resnet50-submission.md
@@ -1,11 +1,19 @@
## Setup
-Please follow the MLCommons CK [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md) to install CM.
-Download the ck repo to get the CM script for MLPerf submission
+
+Please follow this [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
+to install the MLCommons CM reproducibility and automation language in your native environment or Docker container.
+
+Then install the repository with CM automation scripts to run MLPerf benchmarks out-of-the-box
+across different software, hardware, models and data sets:
+
```
cm pull repo mlcommons@ck
```
+Note that you can install a Python virtual environment via CM to avoid contaminating
+your local Python installation, as described [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/automation/script/README-extra.md#using-python-virtual-environments).
+
## Run Commands
We need the full ImageNet dataset to make image-classification submissions for MLPerf inference. Since this dataset is not publicly available via a URL, please follow the instructions given [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/script/get-dataset-imagenet-val/README-extra.md) to download the dataset and register it in CM.
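
Once downloaded, registering the dataset in CM looks roughly as follows (a hypothetical sketch - the exact tags and the `--input` flag should be taken from the linked README):

```
cm run script --tags=get,dataset,imagenet,validation,original --input=$HOME/imagenet-2012-val
```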
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-retinanet-submission.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-retinanet-submission.md
index b6da0f4cdb..4420462cde 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-retinanet-submission.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-retinanet-submission.md
@@ -1,11 +1,19 @@
## Setup
-Please follow the MLCommons CK [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md) to install CM.
-Download the ck repo to get the CM script for MLPerf submission
+
+Please follow this [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
+to install the MLCommons CM reproducibility and automation language in your native environment or Docker container.
+
+Then install the repository with CM automation scripts to run MLPerf benchmarks out-of-the-box
+across different software, hardware, models and data sets:
+
```
cm pull repo mlcommons@ck
```
+Note that you can install a Python virtual environment via CM to avoid contaminating
+your local Python installation, as described [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/automation/script/README-extra.md#using-python-virtual-environments).
+
## Run Commands
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-rnnt-submission.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-rnnt-submission.md
index b19c096ca5..a6ca069215 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-rnnt-submission.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-rnnt-submission.md
@@ -1,11 +1,19 @@
## Setup
-Please follow the MLCommons CK [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md) to install CM.
-Download the ck repo to get the CM script for MLPerf submission
+
+Please follow this [installation guide](https://github.com/mlcommons/ck/blob/master/docs/installation.md)
+to install the MLCommons CM reproducibility and automation language in your native environment or Docker container.
+
+Then install the repository with CM automation scripts to run MLPerf benchmarks out-of-the-box
+across different software, hardware, models and data sets:
+
```
cm pull repo mlcommons@ck
```
+Note that you can install a Python virtual environment via CM to avoid contaminating
+your local Python installation, as described [here](https://github.com/mlcommons/ck/blob/master/cm-mlops/automation/script/README-extra.md#using-python-virtual-environments).
+
## Run Commands
### TensorRT backend
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-aws-instance.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-aws-instance.md
index e1691c21ac..152c612aad 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-aws-instance.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-aws-instance.md
@@ -1,3 +1,5 @@
+## Setup AWS instance for MLPerf
+
The instructions below are for creating an AWS instance from the CLI. You can also create an instance via the web interface and set up CM on it.
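+
+For example, once the prerequisites below are met, an instance can be created with a single CLI call
+(a sketch with placeholder values - substitute your own AMI, instance type and key pair):
+
+```
+aws ec2 run-instances \
+    --image-id ami-0123456789abcdef0 \
+    --instance-type c5.4xlarge \
+    --key-name my-key \
+    --count 1
+```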
## Prerequisites
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-gcp-instance.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-gcp-instance.md
index a2df720ff3..a3a0e457a1 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-gcp-instance.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-gcp-instance.md
@@ -1,3 +1,5 @@
+## Setup GCP instance for MLPerf
+
The instructions below are for creating a Google Cloud instance from the CLI. You can also create an instance via the web interface and set up CM on it.
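+
+For example, once the prerequisites below are met, an instance can be created with a single CLI call
+(a sketch with placeholder values - substitute your own zone, machine type and disk size):
+
+```
+gcloud compute instances create mlperf-test \
+    --zone=us-central1-a \
+    --machine-type=n2-standard-8 \
+    --boot-disk-size=200GB
+```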
## Prerequisites
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-nvidia-jetson-orin.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-nvidia-jetson-orin.md
index 68db00ea0e..08c0a8eeb0 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-nvidia-jetson-orin.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/setup-nvidia-jetson-orin.md
@@ -1,4 +1,5 @@
## Setup
+
We used an Nvidia Jetson AGX Orin developer kit with 32GB RAM and 64GB eMMC. We also connected a 500GB SSD disk via USB and used a Wi-Fi connection for internet connectivity.
We used the out-of-the-box developer kit image, which was running Ubuntu 20.04 and JetPack 5.0.1 Developer Preview (L4T 34.1.1) with CUDA 11.4. We were also using the default 4k page size (Nvidia recommends 64k for MLPerf inference).
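+
+The page size currently in use can be checked with a standard Linux command:
+
+```
+getconf PAGESIZE   # prints 4096 for the default 4k pages, 65536 for 64k pages
+```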
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amazon-inferentia-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amazon-inferentia-2023/README.md
index 9918c7edbf..9037c658b2 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amazon-inferentia-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amazon-inferentia-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for publicly-available Ama
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amd-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amd-2023/README.md
index fdce204a50..04fc797839 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amd-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-amd-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for AMD-based platforms.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-create-end-to-end-app/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-create-end-to-end-app/README.md
index fdce204a50..04fc797839 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-create-end-to-end-app/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-create-end-to-end-app/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for AMD-based platforms.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-google-tpu-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-google-tpu-2023/README.md
index d0fba98fbe..8ddd375715 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-google-tpu-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-google-tpu-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for publicly-available Goo
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-hugging-face-models-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-hugging-face-models-2023/README.md
index 3e3f0f3362..8e34cfb698 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-hugging-face-models-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-hugging-face-models-2023/README.md
@@ -11,7 +11,7 @@ MLPerf BERT model is available at Hugging Face [here](https://huggingface.co/ctu
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-intel-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-intel-2023/README.md
index e26b7b945c..100414a802 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-intel-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-intel-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for Intel-based platforms.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-kilt-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-kilt-2023/README.md
index 0a64378ae7..545f70b6be 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-kilt-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-kilt-2023/README.md
@@ -6,7 +6,7 @@ using KILT. Compare usability with MLCommons MITL.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-mitl-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-mitl-2023/README.md
index ca4310b50c..7f76ac494b 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-mitl-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-mitl-2023/README.md
@@ -8,7 +8,7 @@ Compare usability and results with KILT.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-nvidia-gpu-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-nvidia-gpu-2023/README.md
index e0022de334..3ff076be81 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-nvidia-gpu-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-nvidia-gpu-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for Nvidia GPUs.
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-qualcomm-ai100-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-qualcomm-ai100-2023/README.md
index 7cebe67590..8e8683729a 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-qualcomm-ai100-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-qualcomm-ai100-2023/README.md
@@ -5,7 +5,7 @@ Prepare and optimize MLPerf inference v3.1 submission for Qualcomm AI100-based p
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-tvm-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-tvm-2023/README.md
index f7a3d9e149..bb7f7f5988 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-tvm-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-tvm-2023/README.md
@@ -9,7 +9,7 @@ using the [MLCommons CK framework](https://github.com/mlcommons/ck):
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to run and optimize MLPerf inference benchmarks.
-## Organizers
+### Organizers
* [Deelvin](https://deelvin.com)
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
diff --git a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-windows-2023/README.md b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-windows-2023/README.md
index 2ebabb9759..2de0db73af 100644
--- a/cm-mlops/challenge/optimize-mlperf-inference-v3.1-windows-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-inference-v3.1-windows-2023/README.md
@@ -3,7 +3,7 @@
Make it possible to run MLPerf inference v3.1 benchmarks on Windows
using the [MLCommons CK framework](https://github.com/mlcommons/ck).
-## Organizers
+### Organizers
* Stanley Mwangi (Microsoft)
* Grigori Fursin (MLCommons, cTuning & cKnowledge)
diff --git a/cm-mlops/challenge/optimize-mlperf-training-v3.0-2023/README.md b/cm-mlops/challenge/optimize-mlperf-training-v3.0-2023/README.md
index 94e0ede863..8989a1678c 100644
--- a/cm-mlops/challenge/optimize-mlperf-training-v3.0-2023/README.md
+++ b/cm-mlops/challenge/optimize-mlperf-training-v3.0-2023/README.md
@@ -7,7 +7,7 @@ Join this public [Discord server](https://discord.gg/JjWNWXKxwT)
to discuss with the community and organizers
how to use and enhance CK to run and optimize MLPerf inference benchmarks.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/optimize-tinymlperf-inference-v3.0-2023/README.md b/cm-mlops/challenge/optimize-tinymlperf-inference-v3.0-2023/README.md
index 223faa7697..172d013359 100644
--- a/cm-mlops/challenge/optimize-tinymlperf-inference-v3.0-2023/README.md
+++ b/cm-mlops/challenge/optimize-tinymlperf-inference-v3.0-2023/README.md
@@ -6,7 +6,7 @@ Join this public [Discord server](https://discord.gg/JjWNWXKxwT)
to discuss with the community and organizers
how to use and enhance CK to run and optimize MLPerf inference benchmarks.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/repro-mlperf-inf-v3.0-orin/README.md b/cm-mlops/challenge/repro-mlperf-inf-v3.0-orin/README.md
index 6d283fec45..9a030030c8 100644
--- a/cm-mlops/challenge/repro-mlperf-inf-v3.0-orin/README.md
+++ b/cm-mlops/challenge/repro-mlperf-inf-v3.0-orin/README.md
@@ -7,7 +7,7 @@ Reproduce MLPerf inference v3.0 benchmark results for Nvidia Jetson Orin
Join this public [Discord server](https://discord.gg/JjWNWXKxwT) to discuss with the community and organizers
how to use and enhance CK to benchmark and optimize ML Systems.
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/repro-mlperf-inference-retinanet-scc2022/README.md b/cm-mlops/challenge/repro-mlperf-inference-retinanet-scc2022/README.md
index d3c371db55..f41ba5bd53 100644
--- a/cm-mlops/challenge/repro-mlperf-inference-retinanet-scc2022/README.md
+++ b/cm-mlops/challenge/repro-mlperf-inference-retinanet-scc2022/README.md
@@ -3,7 +3,7 @@
Reproduce the MLPerf inference RetinaNet benchmark during Student Cluster Competition at SuperComputing'22
using the following [CK2(CM) tutorial](https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf.md).
-## Organizers
+### Organizers
* [MLCommons taskforce on automation and reproducibility](https://cKnowledge.org/mlcommons-taskforce)
* [cTuning foundation](https://cTuning.org)
diff --git a/cm-mlops/challenge/reproduce-and-automate-ipol-paper/README.md b/cm-mlops/challenge/reproduce-and-automate-ipol-paper/README.md
index d188be63dd..b44a440fb1 100644
--- a/cm-mlops/challenge/reproduce-and-automate-ipol-paper/README.md
+++ b/cm-mlops/challenge/reproduce-and-automate-ipol-paper/README.md
@@ -2,7 +2,7 @@
Reproduce and automate IPOL paper (proof-of-concept):
-### Organizers
+#### Organizers
* Jose Hernandez
* Miguel Colom
diff --git a/cm-mlops/script/get-ml-model-bert-base-squad/_cm.json b/cm-mlops/script/get-ml-model-bert-base-squad/_cm.json
index b5f2cb02e9..414020c7a1 100644
--- a/cm-mlops/script/get-ml-model-bert-base-squad/_cm.json
+++ b/cm-mlops/script/get-ml-model-bert-base-squad/_cm.json
@@ -9,7 +9,7 @@
"CM_ML_MODEL_DATASET": "squad-1.1",
"CM_ML_MODEL_MAX_SEQ_LENGTH": "384",
"CM_ML_MODEL_NAME": "MLPERF BERT Base on SQuAD v1.1",
- "CM_ML_MODEL_VOCAB_TXT": "vocab.txt"
+ "CM_TMP_ML_MODEL_REQUIRE_DOWNLOAD": "no"
},
"new_env_keys": [
"CM_ML_MODEL*"
@@ -25,6 +25,27 @@
"language-processing"
],
"uid": "b3b10b452ce24c5f",
+ "prehook_deps": [
+ {
+ "tags": "download-and-extract",
+ "env": {
+ "CM_EXTRACT_EXTRACTED_FILENAME": "<<>>",
+ "CM_DOWNLOAD_FINAL_ENV_NAME": "CM_ML_MODEL_FILE_WITH_PATH",
+ "CM_EXTRACT_FINAL_ENV_NAME": "CM_ML_MODEL_FILE_WITH_PATH"
+ },
+ "update_tags_from_env_with_prefix": {
+ "_url.": [ "CM_PACKAGE_URL" ]
+ },
+ "enable_if_env": {
+ "CM_TMP_ML_MODEL_REQUIRE_DOWNLOAD": "yes"
+ }
+ }
+ ],
+ "post_deps": [
+ {
+ "tags": "get,bert,squad,vocab"
+ }
+ ],
"variations": {
"deepsparse": {
"env": {
@@ -47,8 +68,7 @@
"env": {
"CM_ML_MODEL_F1": "87.89",
"CM_ML_MODEL_FILE": "model.onnx",
- "CM_PRUNING_PERCENTAGE": "95",
- "CM_VOCAB_FILE_URL": "https://zenodo.org/record/3733868/files/vocab.txt"
+ "CM_PRUNING_PERCENTAGE": "95"
}
},
"fp32": {
diff --git a/cm-mlops/script/get-ml-model-bert-base-squad/customize.py b/cm-mlops/script/get-ml-model-bert-base-squad/customize.py
deleted file mode 100644
index c6eed58008..0000000000
--- a/cm-mlops/script/get-ml-model-bert-base-squad/customize.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from cmind import utils
-import os
-
-def preprocess(i):
-
- os_info = i['os_info']
-
- env = i['env']
-
- automation = i['automation']
-
- cm = automation.cmind
-
- path = os.getcwd()
-
- if 'CM_ML_MODEL_FILE_WITH_PATH' in env:
- if 'CM_VOCAB_FILE_URL' in env:
- vocab_url = env['CM_VOCAB_FILE_URL']
- from urllib.parse import urljoin
-
- env['CM_ML_MODEL_BERT_VOCAB_FILE_WITH_PATH']=os.path.join(path, env['CM_ML_MODEL_VOCAB_TXT'])
-
- print ('Downloading vocab file from {}'.format(vocab_url))
- r = cm.access({'action':'download_file',
- 'automation':'utils,dc2743f8450541e3',
- 'url':vocab_url})
- if r['return']>0: return r
-
- return {'return': 0}
-
- url = env['CM_PACKAGE_URL']
- if not url:
- return {'return':1, 'error': 'No valid URL to download the model. Probably an unsupported model variation chosen'}
-
- print ('Downloading from {}'.format(url))
-
- r = cm.access({'action':'download_file',
- 'automation':'utils,dc2743f8450541e3',
- 'url':url})
- if r['return']>0: return r
-
- if env.get('CM_UNTAR') == "yes":
- filename = r['filename']
- r = os.system("tar -xvf "+filename)
- if r > 0:
- return {'return': r, 'error': 'Untar failed'}
-
- filename = env['CM_ML_MODEL_FILE']
- env['CM_ML_MODEL_FILE_WITH_PATH']=os.path.join(path, filename)
-
- else:
- env['CM_ML_MODEL_FILE']=r['filename']
- env['CM_ML_MODEL_FILE_WITH_PATH']=r['path']
- env['CM_ML_MODEL_PATH']=path
-
- if 'CM_VOCAB_FILE_URL' in env:
- vocab_url = env['CM_VOCAB_FILE_URL']
- else:
- from urllib.parse import urljoin
- vocab_url = urljoin(url, env['CM_ML_MODEL_VOCAB_TXT'])
-
- env['CM_ML_MODEL_BERT_VOCAB_FILE_WITH_PATH']=os.path.join(path, env['CM_ML_MODEL_VOCAB_TXT'])
-
- print ('Downloading vocab file from {}'.format(vocab_url))
- r = cm.access({'action':'download_file',
- 'automation':'utils,dc2743f8450541e3',
- 'url':vocab_url})
- if r['return']>0: return r
-
-
- return {'return':0}
-
-def postprocess(i):
-
- env = i['env']
-
- return {'return': 0}
diff --git a/cm-mlops/script/get-ml-model-neuralmagic-zoo/_cm.json b/cm-mlops/script/get-ml-model-neuralmagic-zoo/_cm.json
index 94c3c3cbe2..8c531a57c2 100644
--- a/cm-mlops/script/get-ml-model-neuralmagic-zoo/_cm.json
+++ b/cm-mlops/script/get-ml-model-neuralmagic-zoo/_cm.json
@@ -8,7 +8,8 @@
},
"new_env_keys": [
"CM_ML_MODEL*",
- "CM_MODEL_ZOO_STUB"
+ "CM_MODEL_ZOO_STUB",
+ "CM_GET_DEPENDENT_CACHED_PATH"
],
"tags": [
"get",
diff --git a/cm-mlops/script/get-ml-model-neuralmagic-zoo/customize.py b/cm-mlops/script/get-ml-model-neuralmagic-zoo/customize.py
index 2b1b89e836..46afd8f5e4 100644
--- a/cm-mlops/script/get-ml-model-neuralmagic-zoo/customize.py
+++ b/cm-mlops/script/get-ml-model-neuralmagic-zoo/customize.py
@@ -14,3 +14,13 @@ def preprocess(i):
path = os.getcwd()
return {'return':0}
+
+def postprocess(i):
+
+ os_info = i['os_info']
+
+ env = i['env']
+
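+    # Expose the resolved model path so that scripts depending on this one can locate the cached artifact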
+ env['CM_GET_DEPENDENT_CACHED_PATH'] = env['CM_ML_MODEL_FILE_WITH_PATH']
+
+ return {'return':0}
diff --git a/docs/tutorials/README.md b/docs/tutorials/README.md
index 0812939cdc..9581e84740 100644
--- a/docs/tutorials/README.md
+++ b/docs/tutorials/README.md
@@ -9,6 +9,7 @@
* MLPerf modularization, automation and reproducibility using the CM automation language:
* [Running MLPerf RetinaNet inference benchmark on CPU via CM (Student Cluster Competition'22 tutorial)](sc22-scc-mlperf.md)
* [Running MLPerf BERT inference benchmark on CUDA GPU via CM (official submission)](https://github.com/mlcommons/ck/blob/master/cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/docs/generate-bert-submission.md)
+ * [Running all MLPerf inference benchmarks out-of-the-box for MLPerf inference v3.1 community submission](../../cm-mlops/challenge/optimize-mlperf-inference-v3.1-2023/README.md)
* [Customizing MLPerf inference benchmark and preparing submission](mlperf-inference-submission.md)
* [Measuring power during MLPerf inference benchmarks](mlperf-inference-power-measurement.md)
* [Understanding CM concepts](concept.md)