From 291ae7904a86cab353f517534883e1e0162321c2 Mon Sep 17 00:00:00 2001
From: DarkLight1337
Date: Mon, 2 Dec 2024 16:02:28 +0000
Subject: [PATCH] Fix heading

Signed-off-by: DarkLight1337
---
 docs/source/usage/generative_models.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/usage/generative_models.rst b/docs/source/usage/generative_models.rst
index 91ad7141ddeca..5eca95099e89c 100644
--- a/docs/source/usage/generative_models.rst
+++ b/docs/source/usage/generative_models.rst
@@ -69,7 +69,7 @@ For example, to search using 5 beams and output at most 16 tokens:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 
 ``LLM.chat``
-^^^^^^^^^^^
+^^^^^^^^^^^^
 
 The :class:`~vllm.LLM.chat` method implements chat functionality on top of :class:`~vllm.LLM.generate`.
 In particular, it accepts input similar to `OpenAI Chat Completions API __