{"pageProps":{"frontmatter":{"title":"SGLang v0.4: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs","author":"The SGLang Team","date":"December 3, 2024","previewImg":"/images/blog/sglang_v0_4/nsys_no_idle.jpg"},"content":"\nWe’re excited to release [SGLang v0.4](https://github.com/sgl-project/sglang), featuring significant performance improvements and new features:\n\n- Zero-overhead batch scheduler: 1.1x increase in throughput. \n- Cache-aware load balancer: up to 1.9x increase in throughput with 3.8x higher cache hit rate. \n- Data parallelism attention for DeepSeek models: up to 1.9x decoding throughput improvement. \n- Fast structured outputs with xgrammar: up to 10x faster.\n\nThis blog provides a walkthrough of these updates. We welcome your feedback and contributions!\n\n## Zero-Overhead Batch Scheduler\n\nWhile LLM inference runs on GPUs, there is substantial work that also needs to be done by the CPU, such as batch scheduling, memory allocation, and prefix matching. An unoptimized inference engine can spend as much as [half of its time on CPU overhead](https://mlsys.wuklab.io/posts/scheduling_overhead/). SGLang has been known for its efficient batch scheduler from the start. In this new version, we pushed it to the extreme and achieved a near zero-overhead batch scheduler. This idea is simple and has been proposed in [NanoFlow](https://arxiv.org/abs/2408.12757). Basically, we can overlap the CPU scheduling with the GPU computation. The scheduler runs one batch ahead and prepares all the metadata required for the next batch. By doing this, we can keep the GPUs always busy and hide expensive overheads such as the radix cache operations. The related code is [here](https://github.com/sgl-project/sglang/blob/85e1a6f3aa5a2288ca85fe3fe922c733b6533fa7/python/sglang/srt/managers/scheduler.py#L399). The implementation details involve resolving dependencies by creating future tokens and carefully scheduling CUDA events and synchronization. Below is an illustration of the overlapped CPU scheduler and GPU worker.\n\n<img src=\"/images/blog/sglang_v0_4/scheduler.jpg\" style=\"display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 100%;\"></img>\n\nWe verified the zero-overhead claim by using the Nsight profiling system. In the figure below, there are 5 consecutive decoding batches, and you can see there is no single idle time on the GPU. (NOTE: This profile is obtained with the Triton attention backend; there is still a minor gap with the FlashInfer backend, which will be resolved in the next FlashInfer release.)\n\n<img src=\"/images/blog/sglang_v0_4/nsys_no_idle.jpg\" style=\"display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 90%;\"></img>\n\nWith this optimization, SGLang v0.4 can now squeeze the last bit of performance from the GPU and achieves a 1.1x speedup against its previous version and a 1.3x speedup against other state-of-the-art baselines. 
We verified the zero-overhead claim with the Nsight profiler. In the figure below, there are 5 consecutive decoding batches, and you can see that there is no idle time on the GPU. (Note: this profile was obtained with the Triton attention backend; a minor gap remains with the FlashInfer backend and will be resolved in the next FlashInfer release.)

<img src="/images/blog/sglang_v0_4/nsys_no_idle.jpg" style="display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 90%;" />

With this optimization, SGLang v0.4 squeezes the last bit of performance out of the GPU, achieving a 1.1x speedup over the previous version and a 1.3x speedup over other state-of-the-art baselines. The speedup is most significant for small models and large tensor parallelism sizes.

<img src="/images/blog/sglang_v0_4/llama_3_2_3b.svg" style="display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 60%;" />

**Usage**: It is turned on by default, so you do not need to change anything!

**Reproduce benchmark**:
```
# zero-overhead batch scheduler (v0.4)
python3 -m sglang.launch_server --model meta-llama/Llama-3.2-3B-Instruct
python3 -m sglang.bench_serving --backend sglang --dataset-name random --num-prompts 500 --random-input 4096 --random-output 2048

# old batch scheduler (v0.3)
python3 -m sglang.launch_server --model meta-llama/Llama-3.2-3B-Instruct --disable-overlap
python3 -m sglang.bench_serving --backend sglang --dataset-name random --num-prompts 500 --random-input 4096 --random-output 2048
```

## Cache-Aware Load Balancer

SGLang v0.4 introduces a cache-aware load balancer for LLM inference engines. The load balancer predicts the prefix KV cache hit rate of each worker and selects the worker with the highest match rate. Testing shows an **up to 1.9x throughput increase and a 3.8x higher hit rate**, with benefits scaling as the worker count increases. The figure below shows how a cache-aware load balancer differs from a naive round-robin load balancer for data parallelism. The cache-aware load balancer maintains an approximation of the actual radix trees on the workers, and the approximate tree is lazily updated with almost no overhead. A simplified sketch of the routing policy follows the figure.

<img src="/images/blog/sglang_v0_4/cache_aware.png" style="display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 90%;" />
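Here is a heavily simplified Python sketch of the routing policy. The shipped router is written in Rust and maintains an approximate radix tree per worker; in this sketch each worker's cache is approximated by the prompts it has already served, and the worker URLs and threshold are illustrative.

```python
# Simplified cache-aware routing: prefer the worker with the longest cached prefix,
# and fall back to the least-loaded worker when no prefix match is good enough.
from collections import defaultdict


def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


class CacheAwareRouter:
    def __init__(self, workers, min_match_ratio=0.5):
        self.workers = list(workers)
        self.served = defaultdict(list)   # worker -> prompts routed to it (approximate cache)
        self.inflight = defaultdict(int)  # worker -> current number of requests
        self.min_match_ratio = min_match_ratio

    def route(self, prompt: str) -> str:
        # Score each worker by the longest prefix it is likely to have cached.
        best_worker, best_match = None, 0
        for worker in self.workers:
            match = max((shared_prefix_len(prompt, p) for p in self.served[worker]), default=0)
            if match > best_match:
                best_worker, best_match = worker, match
        # If no worker has a good enough match, balance the load instead.
        if best_worker is None or best_match < self.min_match_ratio * len(prompt):
            best_worker = min(self.workers, key=lambda w: self.inflight[w])
        self.served[best_worker].append(prompt)
        self.inflight[best_worker] += 1
        return best_worker


router = CacheAwareRouter(["http://worker1:8000", "http://worker2:8000"])
print(router.route("You are a helpful assistant. Question: 1+1=?"))
print(router.route("You are a helpful assistant. Question: 2+2=?"))  # same prefix, same worker
```

Requests that share a long prefix land on the same worker and reuse its radix cache, while unrelated requests are spread across workers to avoid imbalance.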
Here are some benchmark results. The new cache-aware router significantly improves throughput.

| Metric | SGLang v0.4 | SGLang v0.3 |
| :---- | :---- | :---- |
| Throughput (token/s) | 158596 | 82665 |
| Cache hit rate | 75% | 20% |

> The benchmark is conducted on a [workload](https://github.com/sgl-project/sglang/pull/1990) with multiple long prefix groups, where each group is perfectly balanced. Performance will vary with the characteristics of the workload, but the cache hit rate should improve significantly.

The key features of this router include:

- **Multi-Node Support**: Deploy workers across multiple machines and connect a single router to the distributed workers, allowing easy horizontal scaling while preserving cache awareness in a distributed setup.
- **Cache-Aware Routing**: Requests are sent to the workers with higher expected hit rates, and load balancing is performed to avoid imbalance.
- **Communication-Free Design**: No worker synchronization is required for cache state; the router uses the information it already observes to maintain an "approximate tree".
- **High-Performance Implementation**: Built in pure Rust for high concurrency with a low-overhead design, offering a 2x speedup compared to Python-based alternatives.
- **Standalone Package**: Published as "sglang-router", with Python bindings and a CLI for easy usage.

### Usage

Installation:
```
pip install sglang-router
```

1. Co-launch Workers and Router

A drop-in replacement for the existing `--dp-size` parameter:
```
python -m sglang_router.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --dp-size 8
```

2. Router-Only Launch

Ideal for multi-node distributed processing, where workers are launched separately and the router connects to them:
```
python -m sglang_router.launch_router \
  --worker-urls http://worker1:8000 http://worker2:8000
```

### Reproduce benchmark

```
# Hardware: 8x A100 80GB GPUs

# Launch with router
python -m sglang_router.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --dp-size 8

# Launch without router (baseline)
python -m sglang.launch_server \
  --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
  --dp-size 8

# Run the benchmark against either server
python bench_serving.py \
  --host 127.0.0.1 \
  --port 30000 \
  --dataset-name generated-shared-prefix
```

Learn more by reading the [code](https://github.com/sgl-project/sglang/tree/main/rust). There is also a related paper with a different design and implementation, [Preble](https://arxiv.org/abs/2407.00023), which is also built on top of SGLang.

## Data Parallelism Attention for DeepSeek Models

The most common parallelism strategy for inference is tensor parallelism. However, it might not be the most efficient strategy for certain models. For example, DeepSeek models use MLA and have only one KV head. With tensor parallelism on 8 GPUs, the KV cache is duplicated on every GPU, wasting memory.

To overcome this, we have implemented data parallelism (DP) for the multi-head latent attention (MLA) mechanism to improve throughput for DeepSeek models. By adopting DP for the attention component, the KV cache is significantly reduced, allowing for larger batch sizes. In our DP attention implementation, each DP worker handles different types of batches (prefill, decode, idle) independently. The attention output is all-gathered across all workers before entering the Mixture-of-Experts (MoE) layer, and after the MoE layer the data is redistributed back to each worker. The figure below illustrates this idea, and a toy sketch of the data flow follows it.

<img src="/images/blog/sglang_v0_4/dp_attention.svg" style="display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 50%;" />
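The sketch below is a toy, single-process simulation of this data flow; NumPy arrays stand in for GPU tensors, and the concatenate/split calls stand in for the all-gather and redistribution collectives. It shows only where the communication happens, not the actual SGLang kernels, and all sizes are made up.

```python
# Toy simulation of DP attention: per-worker attention, all-gather before the MoE
# layer, then redistribute the MoE output back to each DP worker.
import numpy as np

DP_SIZE = 4            # number of DP attention workers
HIDDEN = 8             # tiny hidden size for illustration
TOKENS_PER_WORKER = 2  # each worker schedules its own prefill/decode/idle batch


def attention(x):
    # Placeholder for MLA attention; each worker only sees its own tokens and KV cache.
    return x + 0.1


def moe(x):
    # Placeholder for the MoE layer; it operates on the full, gathered batch.
    return x * 2.0


rng = np.random.default_rng(0)
local_batches = [rng.standard_normal((TOKENS_PER_WORKER, HIDDEN)) for _ in range(DP_SIZE)]

# 1) Each DP worker runs attention on its own batch, so the KV cache is not replicated.
attn_out = [attention(x) for x in local_batches]

# 2) All-gather: every worker needs the full batch before the MoE layer.
gathered = np.concatenate(attn_out, axis=0)

# 3) The MoE layer processes the gathered tokens.
moe_out = moe(gathered)

# 4) Redistribute: each worker takes back its own slice of the output.
local_outputs = np.split(moe_out, DP_SIZE, axis=0)
print([out.shape for out in local_outputs])  # [(2, 8), (2, 8), (2, 8), (2, 8)]
```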
Here are the benchmark results on 8x H100 80GB GPUs. With this optimization, SGLang v0.4 achieves 1.9x the decoding throughput of SGLang v0.3. We are working on further improving the throughput by integrating expert parallelism for the MoE layers. You can check out the related PRs for [data parallelism](https://github.com/sgl-project/sglang/pull/1970) and [expert parallelism](https://github.com/sgl-project/sglang/pull/2203).

<img src="/images/blog/sglang_v0_4/deepseek_coder_v2.svg" style="display: flex; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 60%;" />

**Usage:** Add the `--enable-dp-attention` option to turn on this feature. Currently, it is only supported for DeepSeek models.

**Reproduce benchmark:**
```
# Hardware: 8x H100 80GB GPUs
# If you see out-of-memory errors, try reducing `--mem-fraction-static` to a smaller value such as 0.75.

# SGLang w/ DP attention (v0.4)
python3 -m sglang.launch_server --model-path neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 --disable-radix-cache --trust-remote-code --tp 8 --enable-dp-attention --mem-fraction-static 0.78
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 1 --random-output 512 --random-range-ratio 1 --num-prompts 10000

# SGLang w/o DP attention (v0.3)
python3 -m sglang.launch_server --model-path neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 --disable-radix-cache --trust-remote-code --tp 8 --mem-fraction-static 0.78
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 1 --random-output 512 --random-range-ratio 1 --num-prompts 10000
```

## Fast Structured Outputs with XGrammar

SGLang has been the fastest inference engine for JSON decoding with its [Compressed Finite State Machine](https://lmsys.org/blog/2024-02-05-compressed-fsm/). With this new release, it becomes even faster by integrating a faster grammar backend, xgrammar. According to the benchmark results, **SGLang + xgrammar can be up to 10x faster than other open-source solutions for JSON decoding tasks**. You can learn more in the [xgrammar blog post](https://blog.mlc.ai/2024/11/22/achieving-efficient-flexible-portable-structured-generation-with-xgrammar).

**Usage**: Add `--grammar-backend xgrammar` when launching the server.
```
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --grammar-backend xgrammar
```

You can then query the server with the OpenAI-compatible API, as in the sketch below. See a full example at [https://sgl-project.github.io/backend/openai_api_completions.html#JSON](https://sgl-project.github.io/backend/openai_api_completions.html#JSON).
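For a quick end-to-end check, the snippet below sends a schema-constrained request through the OpenAI-compatible API using the `openai` Python client. The schema, port, and field names are illustrative; consult the linked documentation for the exact `response_format` payload supported by your SGLang version.

```python
# Request JSON output constrained by a schema from a local SGLang server
# launched with `--grammar-backend xgrammar` (assumed to listen on port 30000).
import json
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="None")

# Illustrative schema: the model must return a name and an integer population.
capital_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "user", "content": "Give the name and population of the capital of France as JSON."}
    ],
    temperature=0,
    max_tokens=128,
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "capital_info", "schema": capital_schema},
    },
)
print(json.loads(response.choices[0].message.content))
```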
## Acknowledgment

The work in this blog post was mainly contributed by Byron Hsu, Ke Bao, Lianmin Zheng, Yineng Zhang, and Ziyi Xu. We thank Zhiqiang Xie, Liangsheng Yin, Shuo Yang, and Yilong Zhao for their discussions on the zero-overhead scheduler; Ying Sheng, Yichuan Wang, and Shiyi Cao for their discussions on the cache-aware load balancer; Jiashi Li for the discussion on data parallelism attention; and Yixin Dong for the amazing xgrammar library.

## Roadmap

It has been a great year, and we delivered many features following our [roadmap](https://github.com/sgl-project/sglang/issues/1487). The community is also growing healthily, with more developers and adoption. The focus of the next release will be on disaggregated prefill-decode, speculative decoding, multi-level radix cache, sequence parallelism, and more!
