docs: how to add function calling to custom engine
zhudotexe committed Nov 3, 2023
1 parent a445d6e commit a7d7cbc
Showing 4 changed files with 36 additions and 2 deletions.
Binary file added docs/_static/concepts-figure.png
Binary file added docs/_static/function-calling-parsing.png
28 changes: 28 additions & 0 deletions docs/engines.rst
@@ -44,6 +44,34 @@ the underlying model, and kani needs to know about the extra tokens added by this
model.
- :meth:`.BaseEngine.close`: if your engine needs to clean up resources during shutdown.

Adding Function Calling
^^^^^^^^^^^^^^^^^^^^^^^
If you're writing an engine for a model that supports function calling, there are a couple of additional steps you need to take.

Generally, to use function calling, you need to do the following:

1. Tell the model what functions it has available to it
   a. Optional: tell the model what format to output in order to request a function call (if the model is not
      already fine-tuned to do so)
2. Parse the model's requests to call functions from its text generations

To tell the model what functions it has available, you'll need to include them in the model's prompt somehow.
To support this, implement two methods: :meth:`.BaseEngine.predict` and :meth:`.BaseEngine.function_token_reserve`.

:meth:`.BaseEngine.predict` takes in a list of available :class:`.AIFunction`\ s as an argument, which you should use to
build such a prompt. :meth:`.BaseEngine.function_token_reserve` tells kani how many tokens that prompt takes, so the
context window management can ensure it never sends too many tokens.
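As a rough sketch of these two methods' responsibilities, the snippet below builds a prompt fragment from the available functions and estimates its token cost. The ``StubFunction`` class, the helper names, and the one-token-per-word estimate are all illustrative assumptions, not kani's real API; a real engine would use :class:`.AIFunction` and its own tokenizer.

```python
from dataclasses import dataclass

@dataclass
class StubFunction:
    """Illustrative stand-in for kani's AIFunction (name, description, schema)."""
    name: str
    desc: str
    json_schema: dict

def build_function_prompt(functions: list[StubFunction]) -> str:
    """Build a prompt fragment telling the model which functions it can call."""
    lines = ["You can call the following functions:"]
    for f in functions:
        lines.append(f"- {f.name}: {f.desc} (args schema: {f.json_schema})")
    return "\n".join(lines)

def function_token_reserve(functions: list[StubFunction]) -> int:
    """Estimate how many tokens the function prompt consumes.

    A real engine would run the prompt through its own tokenizer; here we
    approximate one token per whitespace-separated word for illustration.
    """
    return len(build_function_prompt(functions).split())
```

In your engine, :meth:`.BaseEngine.predict` would prepend the built fragment to the conversation prompt, and :meth:`.BaseEngine.function_token_reserve` would return the measured length so kani's context window management can account for it.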

Parsing the model's requests to call a function also happens in :meth:`.BaseEngine.predict`. After generating the
model's completion (usually a string, or a list of token IDs that decodes into a string), separate the model's
conversational content from the structured function call:

.. image:: _static/function-calling-parsing.png
:align: center

Finally, return a :class:`.Completion` with the ``.message`` attribute set to a :class:`.ChatMessage` with the
appropriate :attr:`.ChatMessage.content` and :attr:`.ChatMessage.function_call`.

HTTP Client
-----------
If your language model backend exposes an HTTP API, you can create a subclass of :class:`.BaseClient` to interface with
10 changes: 8 additions & 2 deletions docs/kani.rst
@@ -15,6 +15,12 @@ Let's take a look back at the quickstart program:
kani comprises two main parts: the *engine*, which is the interface between kani and the language model,
and the *kani*, which is responsible for tracking chat history, prompting the engine, and handling function calls.

.. image:: _static/concepts-figure.png
:width: 60%
:align: center

In this section, we'll look at how to initialize a Kani instance and cover the library's core concepts.

Kani
----

@@ -100,8 +106,8 @@ This table lists the engines built in to kani:

When you are finished with an engine, release its resources with :meth:`.BaseEngine.close`.

Chat Messages
-------------
Concept: Chat Messages
----------------------
Each message contains the ``role`` (a :class:`.ChatRole`: system, assistant, user, or function) that sent the message
and the ``content`` of the message. Optionally, a user message can also contain a ``name`` (for multi-user
conversations), and an assistant message can contain a ``function_call`` (discussed in :doc:`function_calling`).
