From e34fed3537343a9f60da8f7d37a7fc9f87da8679 Mon Sep 17 00:00:00 2001 From: Luca Beurer-Kellner Date: Fri, 14 Jul 2023 17:07:19 +0200 Subject: [PATCH] update docs --- README.md | 2 +- docs/source/_static/css/lmql-docs.css | 6 +- docs/source/_static/js/lmql-playground.js | 4 +- docs/source/language/overview.md | 126 ++++++++++------------ docs/source/language/scripted_prompts.md | 85 +++++++++------ docs/source/quickstart.md | 20 ++-- 6 files changed, 127 insertions(+), 116 deletions(-) diff --git a/README.md b/README.md index 68655ff2..ae1b6d27 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ LMQL is a programming language for large language models (LLMs) based on a *superset of Python*. LMQL offers a novel way of interweaving traditional programming with the ability to call LLMs in your code. It goes beyond traditional templating languages by integrating LLM interaction natively at the level of your program code. ## Explore LMQL -An LMQL program reads like standard Python, but top-level strings are interpreted as query strings, i.e. they are passed to an LLM , where template variables like `[GREETINGS]` are completed by the model: +An LMQL program reads like standard Python, but top-level strings are interpreted as query strings: They are passed to an LLM, where template variables like `[GREETINGS]` are automatically completed by the model: ```python "Greet LMQL:[GREETINGS]\n" where stops_at(GREETINGS, ".") and not "\n" in GREETINGS diff --git a/docs/source/_static/css/lmql-docs.css b/docs/source/_static/css/lmql-docs.css index f6c026f2..78232926 100644 --- a/docs/source/_static/css/lmql-docs.css +++ b/docs/source/_static/css/lmql-docs.css @@ -23,6 +23,10 @@ html[data-theme="dark"] .highlight.lmql { border: none; } +.highlight.lmql pre span.c1 { + opacity: 0.5 !important; +} + .highlight pre { padding-top: 20pt !important; padding-bottom: 20pt !important; @@ -172,7 +176,7 @@ html[data-theme="dark"] .getting-started>a.primary { } .highlight-model-output::before { - content: "Model Output"; + content: "Model Transcript"; font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif; position: absolute; top: 4pt; diff --git a/docs/source/_static/js/lmql-playground.js b/docs/source/_static/js/lmql-playground.js index 72271ea5..e067c107 100644 --- a/docs/source/_static/js/lmql-playground.js +++ b/docs/source/_static/js/lmql-playground.js @@ -9,10 +9,8 @@ function getPlaygroundUrl(next) { if (host === "docs.lmql.ai") { return "https://lmql.ai/playground"; - } else if (host.startsWith("localhost") || host.startsWith("127.0.0.1")) { - return "http://localhost:3000/playground"; } else { - return "https://lbeurerkellner.github.io/green-gold-dachshund-web/playground"; + return "http://localhost:8081/playground/"; } } diff --git a/docs/source/language/overview.md b/docs/source/language/overview.md index 6aeb07cc..822abfef 100644 --- a/docs/source/language/overview.md +++ b/docs/source/language/overview.md @@ -1,65 +1,41 @@ # Overview -LMQL is a declarative, SQL-like programming language for language model interaction. As an example consider the following query, demonstrating the basic syntax of LMQL: +LMQL is a Python-based programming language for LLM programming with declarative elements. As a simple example consider the following program, demonstrating the basic syntax of LMQL: ```{lmql} name::overview-query -argmax - """Review: We had a great stay. 
Hiking in the mountains was fabulous and the food is really good. - Q: What is the underlying sentiment of this review and why? - A:[ANALYSIS] - Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]""" -from - "openai/text-davinci-003" -where - not "\n" in ANALYSIS and CLASSIFICATION in [" positive", " neutral", " negative"] + +# review to be analyzed +review = """We had a great stay. Hiking in the mountains + was fabulous and the food is really good.""" + +# use prompt statements to pass information to the model +"Review: {review}" +"Q: What is the underlying sentiment of this review and why?" +# template variables like [ANALYSIS] are used to generate text +"A:[ANALYSIS]" where not "\n" in ANALYSIS + +# use constrained variable to produce a classification +"Based on this, the overall sentiment of the message\ + can be considered to be[CLS]" where CLS in [" positive", " neutral", " negative"] + +CLS # positive model-output:: Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.⏎ Q: What is the underlying sentiment of this review and why?⏎ A: [ANALYSIS The underlying sentiment of this review is positive because the reviewer had a great stay, enjoyed the hiking and found the food to be good.]⏎ -Based on this, the overall sentiment of the message can be considered to be [CLASSIFICATION positive] +Based on this, the overall sentiment of the message can be considered to be [CLS positive] ``` -In this program, we use the language model `openai/text-davinci-003` (GPT-3.5) to perform a sentiment analysis on a provided user review. We first ask the model to provide some basic analysis of the review, and then we ask the model to classify the overall sentiment as one of `positive`, `neutral`, or `negative`. The model is able to correctly identify the sentiment of the review as `positive`. - -Overall, the query consists of four main clauses: - -1. **Decoder Clause** First, we specify the decoding algorithm to use for text generation. In this case we use `argmax` decoding, however, LMQL also support branching decoding algorithms like beam search. See [Decoders](./decoders.md) to learn more about this. - -2. **Prompt Clause** - - ```python - """Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good. - Q: What is the underlying sentiment of this review and why? - A:[ANALYSIS] - Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]""" - ``` - - In this part of the program, you specify your prompt. Here, we include the user review, as well as the two questions we want to ask the model. Template variables like `[ANALYSIS]` are automatically completed by the model. Apart from simple textual prompts, LMQL also support multi-part and scripted prompts. To learn more, see [Scripted Prompting](./scripted_prompts.md). +In this program, we program an LLM to perform sentiment analysis on a provided user review. We first ask the model to provide some basic analysis, and then we ask it to classify the overall sentiment as one of `positive`, `neutral`, or `negative`. The model is able to correctly identify the sentiment of the review as `positive`. -3. **Model Clause** +To implement this workflow, we use two template variables `[ANALYSIS]` and `[CLS]`, both of which are constrained using designated `where` expressions. 
- ```python - from "openai/text-davinci-003" - ``` +For `ANALYSIS` we constrain the model to not output any newlines, which prevents it from outputting multiple lines that could potentially break the program. For `CLS` we constrain the model to output one of the three possible values. Using these constraints allows us to decode a fitting answer from the model, where both the analysis and the classification are well-formed and in an expected format. - Next, we specify what model we want to use for text generation. In this case, we use the language model `openai/text-davinci-003`. To learn more about the different models available in LMQL, see [Models](./models.md). - -4. **Constraint Clause** - - ```python - not "\n" in ANALYSIS and CLASSIFICATION in [" positive", " neutral", " negative"] - ``` - - In this part of the query, users can specify logical, high-level constraints on the generated text.
-
-    Here, we specify two constraints: For `ANALYSIS` we constrain the model to not output any newlines, which prevents the model from outputting multiple lines, which could potentially breaking the prompt. For `CLASSIFICATION` we constrain the model to output one of the three possible values. Using these constraints allows us to decode a fitting answer from the model, where both the analysis and the classification are well-formed and in an expected format.
-
-    Without constraints, the prompt above could produce different final classifications, such as `good`, `bad`, or `neutral`. To handle this in an automated way, one would again have to employ some model of language understanding to parse the model's CLASSIFICATION result.
-
-    To learn more about the different types of constraints available in LMQL, see [Constraints](./constraints.md).
+Without constraints, the prompt above could produce different final classifications, such as `good` or `bad`. To handle this in an automated way, one would have to employ ad-hoc parsing of the `CLS` result to obtain a clear classification. Using LMQL's constraints, however, we can simply restrict the model to only output one of the desired values, thereby enabling robust and reliable integration. To learn more about the different types of constraints available in LMQL, see [Constraints](./constraints.md).

### Extracting More Information With Distributions

While the query above allows us to extract the sentiment of a review, we do not learn how confident the model is in its classification. To additionally obtain this information, we can use a `distribution` clause:

```{lmql}
name::sentiment-distribution

argmax
-    """Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.
-    Q: What is the underlying sentiment of this review and why?
-    A:[ANALYSIS]
-    Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]"""
-from
-    "openai/text-davinci-003"
-where
-    not "\n" in ANALYSIS
+    # review to be analyzed
+    review = """We had a great stay. Hiking in the mountains was fabulous and the food is really good."""
+
+    # use prompt statements to pass information to the model
+    "Review: {review}"
+    "Q: What is the underlying sentiment of this review and why?"
+    # template variables like [ANALYSIS] are used to generate text
+    "A:[ANALYSIS]" where not "\n" in ANALYSIS
+
+    # use constrained variable to produce a classification
+    "Based on this, the overall sentiment of the message can be considered to be[CLS]"
distribution
-    CLASSIFICATION in [" positive", " neutral", " negative"]
+    CLS in [" positive", " neutral", " negative"]

model-output::
Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.⏎
Q: What is the underlying sentiment of this review and why?⏎
A: [ANALYSIS The underlying sentiment of this review is positive because the reviewer had a great stay, enjoyed the hiking and found the food to be good.]⏎
Based on this, the overall sentiment of the message can be considered to be [CLS]

P(CLS)
- positive (*) 0.9999244826658527
- neutral 7.513155848720942e-05
- negative 3.8577566019560874e-07
```

**Distribution Clause**

-Instead of constraining `CLASSIFICATION` as part of the `where` clause, we now constrain in the `distribution` clause. In LMQL, the `distribution` clause is used to specify whether we want to additionally obtain the distribution over the possible values for a given variable. 
In this case, we want to obtain the distribution over the possible values for `CLASSIFICATION`.
+Instead of constraining `CLS` with a `where` expression, we now constrain it in the separate `distribution` clause. In LMQL, the `distribution` clause can be used to specify whether we want to additionally obtain the distribution over the possible values for a given variable. In this case, we want to obtain the distribution over the possible values for `CLS`.

-In addition to using the model to perform the `ANALYSIS`, LMQL now also scores each of the individually provided values for `CLASSIFICATION` and normalizes the resulting sequence scores into a probability distribution `P(CLASSIFICATION)` (printed to the Terminal Output of the Playground or Standard Output of the CLI).
+> Note that to use the `distribution` clause, we have to make our choice of decoding algorithm explicit by specifying `argmax` at the beginning of our code (see [Decoding Algorithms](./decoding.md) for more information).
+>
+> In general, indenting your program and explicitly specifying a decoder like `argmax` at the beginning of your code is optional; it is only required if you want to use the `distribution` clause. Throughout the documentation we will make use of both styles.
+
+In addition to using the model to perform the `ANALYSIS`, LMQL now also scores each of the individually provided values for `CLS` and normalizes the resulting sequence scores into a probability distribution `P(CLS)` (printed to the Terminal Output of the Playground or Standard Output of the CLI).

Here, we can see that the model is indeed quite confident in its classification of the review as `positive`, with an overwhelming probability of `99.9%`.

-> Note that currently distribution variables like `CLASSIFICATION` can only occur at the end of a prompt.
+> Note that currently distribution variables like `CLS` can only occur at the end of your program.

### Dynamically Reacting To Model Output

Another way to improve on our initial query is to implement a more dynamic prompt that reacts to the model's classification:

```{lmql}
name::dynamic-analysis

argmax
-    """Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.
+    review = """We had a great stay. Hiking in the mountains
+        was fabulous and the food is really good."""
+    """Review: {review}
     Q: What is the underlying sentiment of this review and why?
-    A:[ANALYSIS]
-    Based on this, the overall sentiment of the message can be considered to be[CLASSIFICATION]"""
-    if CLASSIFICATION == " positive":
+    A:[ANALYSIS]""" where not "\n" in ANALYSIS
+
+    "Based on this, the overall sentiment of the message can be considered to be[CLS]" where CLS in [" positive", " neutral", " negative"]
+
+    if CLS == " positive":
        "What is it that they liked about their stay? [FURTHER_ANALYSIS]"
-    elif CLASSIFICATION == " neutral":
+    elif CLS == " neutral":
        "What is it that could have been improved? [FURTHER_ANALYSIS]"
-    elif CLASSIFICATION == " negative":
+    elif CLS == " negative":
        "What is it that they did not like about their stay? [FURTHER_ANALYSIS]"
-from
-    "openai/text-davinci-003"
where
-    not "\n" in ANALYSIS and CLASSIFICATION in [" positive", " neutral", " negative"] and STOPS_AT(FURTHER_ANALYSIS, ".")
+    STOPS_AT(FURTHER_ANALYSIS, ".")

model-output::
Review: We had a great stay. Hiking in the mountains was fabulous and the food is really good.⏎
Q: What is the underlying sentiment of this review and why?⏎
A: [ANALYSIS The underlying sentiment of this review is positive because the reviewer had a great stay, enjoyed the hiking and found the food to be good.]⏎
Based on this, the overall sentiment of the message can be considered to be [CLS positive]⏎
⏎
What is it that they liked about their stay?⏎
[FURTHER_ANALYSIS The reviewer liked the hiking in the mountains and the food.]
```

As shown here, we can use the `if` statement to dynamically react to the model's output. In this case, we ask the model to provide a more detailed analysis of the review, depending on the overall positive, neutral, or negative sentiment of the review. All intermediate variables like `ANALYSIS`, `CLS` or `FURTHER_ANALYSIS` can be considered the output of the query, and may be processed by a surrounding automated system. To learn more about the capabilities of such control-flow-guided prompts, see [Scripted Prompting](./scripted_prompts.md).
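+
+For instance, a surrounding Python program could collect these variables into a single structured result once the query has run. The following sketch is purely illustrative; the dictionary layout is our own choice, not something LMQL prescribes:
+
+```python
+# hypothetical downstream consumption of the query's template variables,
+# which are plain Python strings after execution
+result = {
+    "sentiment": CLS.strip(),
+    "reasoning": ANALYSIS.strip(),
+    "details": FURTHER_ANALYSIS.strip(),
+}
+```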
+
+As shown here, in addition to inline `where` expressions as seen earlier, you can also provide a global `where` expression at the end of your program, e.g. to specify constraints that should apply to all variables. Depending on your use case, this can be a convenient way to avoid having to repeat the same constraints multiple times, like for `FURTHER_ANALYSIS` in this example.
+
diff --git a/docs/source/language/scripted_prompts.md b/docs/source/language/scripted_prompts.md
index 5134cd2e..a199f108 100644
--- a/docs/source/language/scripted_prompts.md
+++ b/docs/source/language/scripted_prompts.md
@@ -1,19 +1,17 @@
# Scripted Prompting

-In LMQL, prompts are not just static text, as they can also contain control flow (e.g. loops, conditions, function calls). This facilitates dynamic prompt construction and allows LMQL queries to respond dynamically to model output. This scripting mechanic is achieved by a combination of prompt templates, control flow and [output constraining](constraints.md).
+In LMQL, programs are not just static templates of text, as they also contain control flow (e.g. loops, conditions, function calls). This facilitates dynamic prompt construction and allows LMQL programs to respond dynamically to model output. This scripting mechanic is achieved by a combination of prompt templates, control flow and [output constraining](constraints.md).

> Note: LMQL requires special escaping to use `[`, `]`, `{` and `}` in a literal way, see [](Escaping).

-**Packing List** For instance, let's say we want to generate a packing list. One way to do this would be the following query:
+**Packing List** For instance, let's say we want to generate a packing list before going on vacation. One way to do this would be the following query:

```{lmql}
name::list
-sample(temperature=0.8)
-    "A few things not to forget when going to the sea (not travelling): \n"
-    "[LIST]"
-from
-    'openai/text-ada-001'
+
+"A few things not to forget when going to the sea (not travelling): \n"
+"[LIST]"

model-output::
A list of things not to forget when going to the sea (not travelling): 

]
```

+Here, we execute the program to generate a complete list using a single `[LIST]` variable. For increased diversity over the default `argmax` decoding, a `sample` decoding algorithm such as `sample(temperature=0.8)` can additionally be specified at the beginning of the program (cf. [Decoders](decoders.md)).
+
+
This can work well, however, it is unclear if the model will always produce a well-structured list of items. Further, we have to parse the response to separate the various items and process them further. 
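+
+For example, recovering the individual items from the raw `LIST` string would require brittle manual post-processing along these lines (a hypothetical sketch, not part of the query above):
+
+```python
+# manually split the unstructured model output into individual items
+items = [line.lstrip("- ").strip() for line in LIST.split("\n") if line.strip()]
+```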
-**Simple Prompt Templates** To address this, we can provide a more rigid prompt template, where we already provide the output format and let the model only fill in the `THING` variable: +**Simple Prompt Templates** To address this, we can provide a more rigid template, by providing multiple prompt statements, one per item, to let the model only fill-in the `THING` variable: ```{lmql} name::list-multi -sample(temperature=0.8) - "A list of things not to forget when going to the sea (not travelling): \n" - "-[THING]" - "-[THING]" - "-[THING]" - "-[THING]" - "-[THING]" -from - 'openai/text-ada-001' -where - STOPS_AT(THING, "\n") + +"A list of things not to forget when going to the sea (not travelling): \n" +"-[THING]" where STOPS_AT(THING, "\n") +"-[THING]" where STOPS_AT(THING, "\n") +"-[THING]" where STOPS_AT(THING, "\n") +"-[THING]" where STOPS_AT(THING, "\n") +"-[THING]" where STOPS_AT(THING, "\n") model-output:: A list of things not to forget when going to the sea (not travelling): @@ -51,24 +48,19 @@ A list of things not to forget when going to the sea (not travelling): -[THING A has been in the Poconos for/ Entered the Poconos] ``` -Note how we use a stopping condition on `THING`, such that a new line in the model output leads to a continuation of our provided template. Without the stopping condition, simple template filling would not be possible, as the model would generate more than one items for the first variable already. +Note how we use a stopping constraint on each `THING`, such that a new line in the model output makes sure we progress with our provided template, instead of running-on with the model output. Without the stopping condition, simple template filling would not be possible, as the model would generate more than one item for the first variable already. -**Prompt with Control-Flow** Given this prompt template, we can now leverage control flow in our prompt, to further process results, while also guiding text generation. First, we simplify our query and use a `for` loop instead of repeating the variable ourselves: +**Prompt with Control-Flow** Given this prompt template, we can now leverage control flow in our prompt, to further process results and avoid redundancy, while also guiding text generation. First, we simplify our query and use a `for` loop instead of repeating the variable ourselves: ```{lmql} name::list-multi -sample(temperature=0.8) - "A list of things not to forget when going to the sea (not travelling): \n" - backpack = [] - for i in range(5): - "-[THING]" - backpack.append(THING.strip()) - print(backpack) -from - 'openai/text-ada-001' -where - STOPS_AT(THING, "\n") +"A list of things not to forget when going to the sea (not travelling): \n" +backpack = [] +for i in range(5): + "-[THING]" where STOPS_AT(THING, "\n") + backpack.append(THING.strip()) +print(backpack) model-output:: A list of things not to forget when going to the sea (not travelling): @@ -83,11 +75,38 @@ A list of things not to forget when going to the sea (not travelling): Because we decode our list `THING` by `THING`, we can easily access the individual items, without having to think about parsing or validation. We just add them to a `backpack` list of things, which we then can process further. 
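+
+Since `backpack` is a regular Python list, any further processing is plain Python. For instance, we could normalize and deduplicate the collected items (illustrative only):
+
+```python
+# e.g. normalize and deduplicate the collected items
+backpack = sorted({thing.lower() for thing in backpack})
+```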
+
+**Inter-Variable Constraints** Now that we have collected a list of things, we can even extend our program to constrain follow-up LLM calls to choose among the things in our `backpack`:
+
+```{lmql}
+
+name::list-multi-follow
+
+"A list of things not to forget when going to the sea (not travelling): \n"
+backpack = []
+for i in range(5):
+    "-[THING]" where STOPS_AT(THING, "\n")
+    backpack.append(THING.strip())
+
+"The most essential of which is: [ESSENTIAL_THING]" where ESSENTIAL_THING in backpack
+
+model-output::
+A list of things not to forget when going to the sea (not travelling): ⏎
+-[THING Sunscreen]⏎
+-[THING Beach Towels]⏎
+-[THING Beach Umbrella]⏎
+-[THING Beach Chairs]⏎
+-[THING Beach Bag]⏎
+The most essential of which is: [ESSENTIAL_THING Sunscreen]
+```
+
+This can be helpful in guiding the model towards complete and consistent reasoning that is less likely to contradict itself.
+
(Escaping)=
## Escaping `[`, `]`, `{`, `}` 

-Inside prompt strings, the characters `[`, `]`, `{`, and `}` are reserved for template variable use and cannot be used directly. To use them as literals, they need to be escaped as `[[`, `]]`, `{{`, and `}}`, respectively.
+Inside prompt strings, the characters `[`, `]`, `{`, and `}` are reserved for template variable use and cannot be used directly. To use them as literals, they need to be escaped as `[[`, `]]`, `{{`, and `}}`, respectively. Beyond this, the [standard string escaping rules](https://www.w3schools.com/python/gloss_python_escape_characters.asp) for Python strings and [f-strings](https://peps.python.org/pep-0498/#escape-sequences) apply, as all top-level strings in LMQL are interpreted as Python f-strings.

For instance, if you want to use JSON syntax as part of your prompt string, you need to escape the curly braces and square brackets as follows:
diff --git a/docs/source/quickstart.md b/docs/source/quickstart.md
index 4d102bf6..64975b05 100644
--- a/docs/source/quickstart.md
+++ b/docs/source/quickstart.md
@@ -23,13 +23,15 @@ Say 'this is a test': [RESPONSE This is a test.]

This simple LMQL program consists of a single prompt statement and an associated `where` clause:

- **Prompt Statement** `"Say 'this is a test'[RESPONSE]"`: Prompts are constructed using so-called prompt statements that look like top-level strings in Python. Template variables like `[RESPONSE]` are automatically completed by the model. Apart from single-line textual prompts, LMQL also supports multi-part and scripted prompts, e.g. by allowing control flow and branching behavior to control prompt construction. To learn more, see [Scripted Prompting](./language/scripted_prompts.md).

- **Constraint Clause** `where len(RESPONSE) < 10`: In this second part of the statement, users can specify logical, high-level constraints on the output. LMQL uses novel evaluation semantics for these constraints to automatically translate character-level constraints like `len(RESPONSE) < 10` to (sub)token masks that can be eagerly enforced during text generation. To learn more, see [Constraints](./language/constraints.md).
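+
+To get a feel for how such constraints compose, the `where` clause of the statement above could, for instance, combine a length bound with a stopping condition. This variant is purely illustrative and not part of the quickstart program:
+
+```python
+"Say 'this is a test':[RESPONSE]" where len(RESPONSE) < 25 and STOPS_AT(RESPONSE, ".")
+```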

-## 4. Going Further
+## 3. Going Further

-Extending on your first query above, you may want to add a more complex prompt, e.g. by adding a second part to the prompt. Further, you may want to employ a different decoding algorithm, e.g. to sample multiple responses from the model or even sample from a entirely different model. Let's extend our initial query, to allow for these changes:
+Extending on your first query above, you may want to add more complex logic, e.g. by adding a second part to the prompt. Further, you may want to employ a different decoding algorithm, e.g. to sample multiple trajectories of your program or use a different model.
+
+Let's extend our initial query to allow for these changes:

```{lmql}
name::hello-extended
@@ -47,15 +49,15 @@ from

Going beyond what we have seen so far, this LMQL program extends on the above in a few ways:

-- **Decoder Clause** `sample(temperature=1.2)`: Here, we specify the decoding algorithm to use for text generation. In this case we use `sample` decoding with slightly increased temperature. Above, we implicitly relied on deterministic `argmax` decoding, which is the default in LMQL. To learn more about the different supported decoding algorithms in LMQL (e.g. `beam` or `best_k`), please see [Decoders](./language/decoders.md).
+**Decoder Clause** `sample(temperature=1.2)`: Here, we specify the decoding algorithm to use for text generation. In this case we use `sample` decoding with slightly increased temperature (>1.0). Above, we implicitly relied on deterministic `argmax` decoding, which is the default in LMQL. To learn more about the different supported decoding algorithms in LMQL (e.g. `beam` or `best_k`), please see [Decoders](./language/decoders.md).

-- **Prompt Program**: The main body of the program specifies the prompt. As before, we use prompt statements here, however, now we also make use of control-flow and branching behavior.
+**Prompt Program**: The main body of the program specifies the prompt. As before, we use prompt statements here; however, now we also make use of control-flow and branching behavior.

-    At any point during execution, the concatenation of all prompt statements executed so far, form the prompt used to decode a value for the currently active template variable like `RESPONSE`.
-
-    After a prompt statement has been executed, the contained template variables are automatically exposed to the surrounding program context. This allows you to dynamically construct prompts and react to intermediate model output. See [Scripted Prompting](./language/scripted_prompts.md) for more details about this ability.
+On each LLM call, the concatenation of all prompt statements so far forms the prompt used to generate a value for the currently active template variable like `RESPONSE`. This means the LLM is always aware of the full prompt context so far when generating a value for a template variable.
+
+After a prompt statement has been executed, the contained template variables are automatically exposed to the surrounding program context. This allows you to react to model output and incorporate the results in your program logic. To learn more about this form of interactive prompting, please see [Scripted Prompting](./language/scripted_prompts.md).

-- **Model Clause** `from "openai/text-ada-001"`: In this advanced version we now specify a custom model to use for text generation. 
LMQL supports [OpenAI models](https://platform.openai.com/docs/models), like GPT-3.5 variants, ChatGPT, and GPT-4, but also self-hosted models, e.g. via [🤗 Transformers](https://huggingface.co/transformers). For more details, please see [Models](./language/models.md).
+**Model Clause** `from "openai/text-ada-001"`: In this extended version, we now explicitly specify the model to use for text generation. LMQL supports [OpenAI models](https://platform.openai.com/docs/models), like GPT-3.5 variants, ChatGPT, and GPT-4, but also self-hosted models, e.g. via [🤗 Transformers](https://huggingface.co/transformers). For more details, please see [Models](./language/models.md). If not specified otherwise, LMQL relies on `openai/text-davinci-003` by default.

## 4. Enjoy

These basic steps should get you started with LMQL. If you need more inspiration before writing your own queries, you can explore the examples included with the [Playground IDE](https://lmql.ai/playground) or showcased on the [LMQL Website](https://lmql.ai/).