From b27dc196eae32d71231c5f61cd907799dc71a993 Mon Sep 17 00:00:00 2001 From: Luca Beurer-Kellner Date: Fri, 13 Oct 2023 18:32:17 +0200 Subject: [PATCH] docs: fix chat figures --- docs/docs/lib/chat/internal.md | 11 ++++------- docs/docs/lib/chat/overview.md | 17 ++++------------- docs/docs/lib/chat/serving.md | 23 +++++++---------------- 3 files changed, 15 insertions(+), 36 deletions(-) diff --git a/docs/docs/lib/chat/internal.md b/docs/docs/lib/chat/internal.md index 6a638fcf..315cac81 100644 --- a/docs/docs/lib/chat/internal.md +++ b/docs/docs/lib/chat/internal.md @@ -5,13 +5,10 @@ order: 3 While user-facing question-answering is the main goal of LLM-based chatbots, performance can be considerably improved by implementing internal reasoning and reflection mechanisms. In this chapter, we will discuss the implementation of such mechanisms in LMQL Chat. -```{figure} https://github.com/eth-sri/lmql/assets/17903049/cb609b5c-8984-414a-a3b6-b3fa6f8ab6bb -:name: lmql-chat -:alt: A chatbot with internal reasoning capabilities. -:align: center - -A chatbot with internal reasoning capabilities. -``` +
+<figure align="center">
+    <img src="https://github.com/eth-sri/lmql/assets/17903049/cb609b5c-8984-414a-a3b6-b3fa6f8ab6bb" alt="A chatbot that relies on internal reasoning." />
+    <figcaption>A chatbot that relies on internal reasoning.</figcaption>
+</figure>
Building on the simple chat application implemented in [](./overview.md), we extend the chat loop as follows: diff --git a/docs/docs/lib/chat/overview.md b/docs/docs/lib/chat/overview.md index cdd36836..5ed52025 100644 --- a/docs/docs/lib/chat/overview.md +++ b/docs/docs/lib/chat/overview.md @@ -65,19 +65,10 @@ lmql chat chat.lmql Once the server is running, you can access the chatbot at the provided local URL. -```{toctree} -:hidden: - -./chat/overview -``` - -```{figure} https://github.com/eth-sri/lmql/assets/17903049/334e9ab4-aab8-448d-9dc0-c53be8351e27 -:name: lmql-chat -:alt: A simple chatbot using the LMQL chat UI -:align: center - -A simple chatbot using the LMQL Chat UI. -``` +
+<figure align="center">
+    <img src="https://github.com/eth-sri/lmql/assets/17903049/334e9ab4-aab8-448d-9dc0-c53be8351e27" alt="A simple chatbot using the LMQL Chat UI." />
+    <figcaption>A simple chatbot using the LMQL Chat UI.</figcaption>
+</figure>
In this interface, you can interact with your chatbot by typing into the input field at the bottom of the screen. The chatbot will then respond to your input, while also considering the system prompt that you provide in your program. On the right, you can inspect the full internal prompt of your program, including the generated prompt statements and the model output. This allows you, at all times, to understand exactly what input the model received and how it responded to it.

diff --git a/docs/docs/lib/chat/serving.md b/docs/docs/lib/chat/serving.md
index 5d5c3dde..6225ce58 100644
--- a/docs/docs/lib/chat/serving.md
+++ b/docs/docs/lib/chat/serving.md
@@ -15,23 +15,14 @@ To locally serve an LMQL chat endpoint and user interface, simply run the following command:

```
lmql chat .lmql
```

-This will serve a web interface on `http://localhost:8089`, and automatically open it in your default browser. You can now start chatting with your custom LMQL chat application. The internal trace on the right-hand side always displays the complete current prompt, reflecting the current state of your chat application.
+This will serve a web interface on `http://localhost:8089`, and automatically open it in your default browser. You can now start chatting with your custom LMQL chat application. The internal trace on the right-hand side (shown below) always displays the complete conversational prompt, reflecting the current state of your chat application.

Note that changing the `.lmql` file will **not** automatically reload the server, so you will have to restart the server manually to see the changes.

-```{toctree}
-:hidden:
-
-./chat/overview
-```
-
-```{figure} https://github.com/eth-sri/lmql/assets/17903049/334e9ab4-aab8-448d-9dc0-c53be8351e27
-:name: lmql-chat
-:alt: A simple chatbot using the LMQL chat UI
-:align: center
-
-A simple chatbot using the LMQL Chat UI.
-```
+
+<figure align="center">
+    <img src="https://github.com/eth-sri/lmql/assets/17903049/334e9ab4-aab8-448d-9dc0-c53be8351e27" alt="A simple chatbot using the LMQL Chat UI." />
+    <figcaption>A simple chatbot launched via <code>lmql chat</code>.</figcaption>
+</figure>
## Using `chatserver`

Note that when passing a query function directly, you have to always provide a `

Chat relies on [decorator-based](../../language/decorators.md) output streaming. More specifically, only model output variables that are annotated as `@message` are streamed and shown to the user in the chat interface. This allows for a clean separation of model output and chat output, and enables hidden/internal reasoning.

-To use `@message` with your [custom output writer](../output.ipynb), make sure to inherit from `lmql.lib.chat`'s `ChatMessageOutputWriter`, which offers additional methods for specifically handling and streaming `@message` variables.
+To use `@message` with your [custom output writer](../output.html), make sure to inherit from `lmql.lib.chat`'s `ChatMessageOutputWriter`, which offers additional methods for specifically handling and streaming `@message` variables.

## More Advanced Usage

For more advanced serving scenarios, e.g. when integrating Chat into your own web applications, please refer to the minimal implementation of `chatserver` in [`src/lmql/ui/chat/__init__.py`](https://github.com/eth-sri/lmql/blob/main/src/lmql/ui/chat/__init__.py). It is intentionally kept small and can easily be adapted to your own needs and infrastructure. The corresponding web UI is implemented in [`src/lmql/ui/chat/assets/`](https://github.com/eth-sri/lmql/blob/main/src/lmql/ui/chat/assets/) and offers a good starting point for your own implementation and UI adaptations on the client side.

-For other forms of output streaming e.g. via HTTP or SSE, see also the chapter on [Output Streaming](../output.ipynb)
+For other forms of output streaming, e.g. via HTTP or SSE, see also the chapter on [Output Streaming](../output.html).

**Disclaimer**: The LMQL chat server is a simple code template that does not include any security features, authentication or cost control. 
It is intended for local development and testing only, and should not be used as-is in production environments. Before deploying your own chat application, make sure to implement the necessary security measures, cost control and authentication mechanisms. \ No newline at end of file
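The `@message`-based separation of hidden reasoning from user-visible chat output described in the serving docs above can be illustrated in plain Python. This is a hypothetical sketch of the underlying idea only, not the actual LMQL API; the variable names, trace format, and helper function are invented for illustration:

```python
# Hypothetical sketch (NOT the actual LMQL API): only output variables
# annotated as @message are streamed to the chat UI, while unannotated
# variables (internal reasoning) remain hidden from the user.

def visible_messages(trace):
    """Return only the values of variables annotated as @message."""
    return [value for name, value in trace if name.startswith("@message")]

# Example trace with one hidden reasoning variable and one user-facing
# message (names and values are made up for illustration).
trace = [
    ("THOUGHT", "The user greeted me; respond politely."),
    ("@message ANSWER", "Hello! How can I help you today?"),
]

print(visible_messages(trace))
```

A real `ChatMessageOutputWriter` subclass would apply the same filtering idea while streaming tokens incrementally, rather than filtering a completed trace.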