diff --git a/docs/online-inference-with-maxtext-engine.md b/docs/online-inference-with-maxtext-engine.md
index 95fc84cc..9d3aefe1 100644
--- a/docs/online-inference-with-maxtext-engine.md
+++ b/docs/online-inference-with-maxtext-engine.md
@@ -21,8 +21,8 @@ Follow the steps in [Manage TPU resources | Google Cloud](https://cloud.google.c
 ## Step 1: Download JetStream and the MaxText github repository
 
 ```bash
-git clone -b jetstream-v0.2.1 https://github.com/google/maxtext.git
-git clone -b v0.2.1 https://github.com/google/JetStream.git
+git clone -b jetstream-v0.2.2 https://github.com/google/maxtext.git
+git clone -b v0.2.2 https://github.com/google/JetStream.git
 ```
 
 ## Step 2: Setup MaxText
diff --git a/setup.py b/setup.py
index 3f1211ad..c4efd21e 100644
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@ def parse_requirements(filename):
 
 setup(
     name="google-jetstream",
-    version="0.2.1",
+    version="0.2.2",
     description=(
         "JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome)."
     ),