
Commit

modify readme description
typo
ctlllll committed Sep 10, 2023
1 parent 1438d1b · commit 10e7387
Showing 1 changed file with 3 additions and 3 deletions.
README.md: 6 changes (3 additions & 3 deletions)
@@ -39,7 +39,7 @@ We aim to tackle the three pain points of popular acceleration techniques like speculative decoding
</picture>
<br>
<div align="left" width="80%">
- <em>Medusa adds extra "heads" to LLMs to predict multiple future tokens simultaneously. When augmenting a model with Medusa, the original model stays untouched, these new heads are fine-tuned during training. During generation, these heads each produce multiple likely next words. These options are then combined and sorted out using a tree-based attention mechanism. Finally, a typical acceptance scheme is employed to pick the most plausible sequence for further decoding.</em>
+ <em>Medusa adds extra "heads" to LLMs to predict multiple future tokens simultaneously. When augmenting a model with Medusa, the original model stays untouched, and only the new heads are fine-tuned during training. During generation, these heads each produce multiple likely words for the corresponding position. These options are then combined and processed using a tree-based attention mechanism. Finally, a typical acceptance scheme is employed to pick the longest plausible prefix from the candidates for further decoding.</em>
</div>
<br>
</div>
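The caption above packs the whole pipeline into a few sentences: draft several future tokens with extra heads, score all candidate continuations at once with tree attention, then keep the longest plausible prefix. Below is a minimal, self-contained Python sketch of that flow; all names, sizes, and the acceptance placeholder are illustrative assumptions, not the repo's actual code.

```python
import itertools
import torch

# Illustrative-only sketch of Medusa-style drafting; names, sizes, and the
# acceptance placeholder are assumptions, not the actual repo code.
vocab_size, hidden_size, num_heads, topk = 100, 32, 3, 2
torch.manual_seed(0)
base_head = torch.nn.Linear(hidden_size, vocab_size)  # original LM head
medusa_heads = [torch.nn.Linear(hidden_size, vocab_size) for _ in range(num_heads)]

h = torch.randn(hidden_size)  # stand-in for the last hidden state of the prompt

# 1. Each Medusa head proposes top-k tokens for "its" future position.
proposals = [head(h).topk(topk).indices.tolist() for head in medusa_heads]

# 2. The Cartesian product of the proposals forms the candidate tree; the real
#    system scores all candidates in a single pass via tree-based attention.
candidates = list(itertools.product(*proposals))  # topk**num_heads candidates

# 3. Placeholder acceptance: keep the longest prefix whose tokens the original
#    head also ranks highly (the real scheme is typical acceptance, below).
base_topk = set(base_head(h).topk(topk).indices.tolist())

def accepted_len(cand):
    n = 0
    for tok in cand:
        if tok not in base_topk:
            break
        n += 1
    return n

best = max(candidates, key=accepted_len)
print(f"{len(candidates)} candidates; best accepted prefix: {accepted_len(best)} tokens")
```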
@@ -48,7 +48,7 @@ In a nutshell, we solve the challenges of speculative decoding with the following

- Instead of introducing a new model, we train multiple decoding heads on the *same* model.
- The training is parameter-efficient so that even the GPU-poor can do it. And since there is no additional model, there is no need to adjust the distributed computing setup.
- - Relaxing the requirement of matching the distribution of the original model makes the generation with random sampling even faster than greedy decoding.
+ - Relaxing the requirement of matching the distribution of the original model makes non-greedy generation even faster than greedy decoding (see the sketch below).
<p align="center">
<picture>
<img src="assets/size_speedup.png" width="45%">
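The last bullet refers to the typical-acceptance scheme: a drafted token is kept when the original model assigns it enough probability relative to the entropy of its own next-token distribution, rather than requiring an exact distribution match. A hedged sketch follows; the threshold form and the constants are assumptions for illustration, not the repo's values.

```python
import torch

# Sketch of a typical-acceptance-style criterion; the threshold form and the
# constants below are assumptions for illustration, not the repo's values.
def accepts(p: torch.Tensor, token: int, eps: float = 0.3, delta: float = 0.09) -> bool:
    """Keep `token` if the original model finds it plausible enough: its
    probability must beat a threshold tied to the entropy of the model's own
    distribution, so flat distributions admit more candidates than peaked ones."""
    entropy = -(p * p.clamp_min(1e-9).log()).sum()
    threshold = min(eps, delta * torch.exp(-entropy).item())
    return p[token].item() > threshold

peaked = torch.tensor([0.90, 0.05, 0.03, 0.02])
print(accepts(peaked, 0), accepts(peaked, 1))  # True False
```

With a peaked distribution only the mode clears the threshold, while a flatter distribution admits several candidates, which is what keeps sampled generation fast.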
@@ -88,7 +88,7 @@ pip install -e .
### Model Weights
| Size | Chat Command | Hugging Face Repo |
| ---- | --------------------------------------------- | --------------------------------------------------------------------- |
- | 7B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-7b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3) |
+ | 7B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-7b-v1.3` | [FasterDecoding/medusa-vicuna-7b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-7b-v1.3) |
| 13B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-13b-v1.3` | [FasterDecoding/medusa-vicuna-13b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-13b-v1.3) |
| 33B | `python -m medusa.inference.cli --model FasterDecoding/medusa-vicuna-33b-v1.3` | [FasterDecoding/medusa-vicuna-33b-v1.3](https://huggingface.co/FasterDecoding/medusa-vicuna-33b-v1.3) |
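For programmatic use, something like the following presumably works. Note this is a hypothetical sketch: the `MedusaModel` import path, the `from_pretrained` keyword arguments, and the `get_tokenizer` helper are assumptions about the repo layout, not a verified API; the chat commands in the table are the documented entry point.

```python
import torch

# Hypothetical loading sketch: the class path and helper names below are
# assumptions about the repo layout, not a verified API.
from medusa.model.medusa_model import MedusaModel

model = MedusaModel.from_pretrained(
    "FasterDecoding/medusa-vicuna-7b-v1.3",  # repo id from the table above
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = model.get_tokenizer()  # assumed convenience accessor
```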

