Commit

docs: initial commit
arielweinberger committed Oct 16, 2023
1 parent 1c7ef95 commit 70825f7
Showing 50 changed files with 1,302 additions and 0 deletions.
34 changes: 34 additions & 0 deletions apps/docs/README.md
@@ -0,0 +1,34 @@
# Mintlify Starter Kit

Click `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including:

- Guide pages
- Navigation
- Customizations
- API Reference pages
- Use of popular components

### 👩‍💻 Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview documentation changes locally. To install, run:

```bash
npm i -g mintlify
```

Run the following command at the root of your documentation (where `mint.json` is located):

```bash
mintlify dev
```

### 😎 Publishing Changes

Changes will be deployed to production automatically after pushing to the default branch.

You can also preview changes by opening a PR, which generates a preview link for the docs.

#### Troubleshooting

- Mintlify dev isn't running - Run `mintlify install`; it will re-install dependencies.
- Page loads as a 404 - Make sure you are running in a folder with a `mint.json` file.
3 changes: 3 additions & 0 deletions apps/docs/_snippets/snippet-example.mdx
@@ -0,0 +1,3 @@
## My Snippet

<Info>This is an example of a reusable snippet</Info>
3 changes: 3 additions & 0 deletions apps/docs/api-reference/cache/retrieve-cached-request.mdx
@@ -0,0 +1,3 @@
---
openapi: post /cache/v1/request/retrieve
---
3 changes: 3 additions & 0 deletions apps/docs/api-reference/cache/save-request-to-cache.mdx
@@ -0,0 +1,3 @@
---
openapi: post /cache/v1/request/save
---
3 changes: 3 additions & 0 deletions apps/docs/api-reference/health/performs-a-health-check.mdx
@@ -0,0 +1,3 @@
---
openapi: get /healthz
---
@@ -0,0 +1,3 @@
---
openapi: get /prompts/v2/deployment
---
3 changes: 3 additions & 0 deletions apps/docs/api-reference/reporting/report-a-request.mdx
@@ -0,0 +1,3 @@
---
openapi: post /reporting/v2/request
---
Binary file added apps/docs/client/cache-request-details.png
Binary file added apps/docs/client/cache-requests-list.png
227 changes: 227 additions & 0 deletions apps/docs/client/integrations/openai.mdx
@@ -0,0 +1,227 @@
---
title: "OpenAI Integration"
description: "Learn how to use OpenAI with Pezzo."
---

## Using OpenAI With Pezzo

Ensure that you have the latest version of the Pezzo Client installed, as well as the OpenAI NPM package.

<Tabs>
<Tab title="Node.js">
<CodeGroup>
```bash npm
npm i @pezzo/client openai
```
```bash yarn
yarn add @pezzo/client openai
```
```bash pnpm
pnpm add @pezzo/client openai
```
</CodeGroup>
</Tab>
<Tab title="Python">
<CodeGroup>
```bash pip
pip install pezzo
```
```bash poetry
poetry add pezzo
```
</CodeGroup>
</Tab>
</Tabs>

### Initialize Pezzo and PezzoOpenAI

Learn more about how to initialize the Pezzo Client:
- [Node.js](/client/pezzo-client-node)
- [Python](/client/pezzo-client-python)

### Making Requests to OpenAI

#### Option 1: With Prompt Management (Recommended)

We recommend managing your AI prompts through Pezzo. This lets you version your prompts easily and keep track of your AI requests. [Click here to learn about Prompt Management in Pezzo](platform/prompt-management).

Below is an example of how you can use Pezzo to retrieve a prompt, and then use it to make a request to OpenAI.

<Tabs>
<Tab title="Node.js">
```ts
// Fetch the prompt from Pezzo
const prompt = await pezzo.getPrompt("PromptName");

// Provide the prompt as-is to OpenAI
const response = await openai.chat.completions.create(prompt);

// Or override specific properties if you wish
const customResponse = await openai.chat.completions.create({
  ...prompt,
  model: "gpt-4",
});
```
</Tab>
<Tab title="Python">
```py
from pezzo.client import pezzo
from pezzo.openai import openai

# Fetch prompt from Pezzo
prompt = pezzo.get_prompt("PromptName")

# Provide the prompt to OpenAI
response = openai.ChatCompletion.create(
pezzo_prompt=prompt
)

# You can override specific properties if you wish
response = openai.ChatCompletion.create(
pezzo_prompt=prompt,
model="gpt-4"
)
```
</Tab>
</Tabs>

Congratulations! You now benefit from seamless prompt version management and request tracking. Your request will be visible in the **Requests** page of your Pezzo project.

#### Option 2: Without Prompt Management

If you don't want to manage your prompts through Pezzo, you can still use Pezzo to make requests to OpenAI and benefit from Pezzo's [Observability features](platform/observability/overview).

You will make requests to the OpenAI API exactly as you normally would. The only difference is that you will use the `PezzoOpenAI` instance created above. Here is an example:

<Tabs>
<Tab title="Node.js">
```ts
const response = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
temperature: 0,
messages: [
{
role: "user",
content: "Hey, how are you doing?",
},
],
});
```
</Tab>
<Tab title="Python">
```py
from pezzo.client import pezzo
from pezzo.openai import openai

response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
temperature=0,
messages=[
{
"role": "user",
"content": "Hey, how are you doing?",
}
]
)
```
</Tab>
</Tabs>

You should now be able to see your request in the **Requests** page of your Pezzo project.

### Additional Capabilities

The Pezzo client enhances your developer experience by providing additional functionality on top of the OpenAI API. This functionality is exposed through the second argument of the `chat.completions.create` method.

#### Variables

You can specify variables that will be interpolated by the Pezzo client before sending the request to OpenAI. This is useful if you want to use the same prompt for multiple requests, but with different variables.

<Tabs>
<Tab title="Node.js">
```ts
const response = await openai.chat.completions.create(..., {
variables: {
age: 22,
country: "France"
}
});
```
</Tab>
<Tab title="Python">
```py
response = openai.ChatCompletion.create(
...,
pezzo_options={
"variables": {
"age": 22,
"country": "France"
}
}
)
```
</Tab>
</Tabs>

Notice the variables in the prompt. The Pezzo client will replace them with the values you specified in the `variables` object.
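To illustrate, here is a hypothetical sketch of the kind of interpolation the client performs. It assumes single-brace placeholders such as `{age}`; the actual placeholder syntax is defined by your prompt in the Pezzo Console, and this is not the client's real implementation.

```typescript
// Illustrative only: replace "{name}"-style placeholders with values
// from a variables object, leaving unknown placeholders untouched.
function interpolate(
  template: string,
  variables: Record<string, string | number>
): string {
  return template.replace(/\{(\w+)\}/g, (match: string, name: string) =>
    name in variables ? String(variables[name]) : match
  );
}

const template = "I am {age} years old and live in {country}.";
console.log(interpolate(template, { age: 22, country: "France" }));
// → "I am 22 years old and live in France."
```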

#### Custom Properties

You can also specify custom properties that will be sent to Pezzo. This is useful if you want to add additional information to your request, such as the user ID, or the request ID. This information will be visible in the **Requests** page of your Pezzo project, and you will be able to filter requests based on these properties.

<Tabs>
<Tab title="Node.js">
```ts
const response = await openai.chat.completions.create({
...
}, {
properties: {
userId: "some-user-id",
traceId: "some-trace-id"
}
});
```
</Tab>
<Tab title="Python">
```py
response = openai.ChatCompletion.create(
...,
pezzo_options={
"properties": {
"userId": "some-user-id",
"traceId": "some-trace-id"
}
}
)
```
</Tab>
</Tabs>


#### Request Caching

Utilizing request caching can sometimes save up to 90% of your API costs and execution time. You can enable caching by setting `cache` to `true` in the second argument of the `chat.completions.create` method.

<Tabs>
<Tab title="Node.js">
```ts
const response = await openai.chat.completions.create({
...
}, {
cache: true
});
```
</Tab>
<Tab title="Python">
```py
response = openai.ChatCompletion.create(
...,
pezzo_options={
"cache": True
}
)
```
</Tab>
</Tabs>

To learn more, visit the [Request Caching](/client/request-caching) page.
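Conceptually, a response cache like this keys on the request payload, so identical requests can be served from the cache. A minimal sketch of such a key derivation (illustrative only, not Pezzo's actual implementation):

```typescript
import { createHash } from "node:crypto";

// Recursively sort object keys so that logically identical payloads
// serialize to the same JSON string regardless of key order.
function canonicalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    return Object.fromEntries(
      Object.keys(obj).sort().map((k) => [k, canonicalize(obj[k])])
    );
  }
  return value;
}

// Derive a deterministic cache key from a request payload.
function cacheKey(payload: object): string {
  return createHash("sha256")
    .update(JSON.stringify(canonicalize(payload)))
    .digest("hex");
}
```

Because the keys are sorted before hashing, `{ model, temperature }` and `{ temperature, model }` produce the same key, so a reordered but otherwise identical request still hits the cache.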
105 changes: 105 additions & 0 deletions apps/docs/client/pezzo-client-node.mdx
@@ -0,0 +1,105 @@
---
title: "Pezzo Client - Node.js"
---

The Pezzo client is an NPM package that allows you to easily integrate your application with Pezzo. The client was built with TypeScript and is type-safe.

## Getting Started

### Install the Pezzo Client

Install the [@pezzo/client](https://www.npmjs.com/package/@pezzo/client) NPM package:

<CodeGroup>
```bash npm
npm i @pezzo/client
```
```bash yarn
yarn add @pezzo/client
```
```bash pnpm
pnpm add @pezzo/client
```
</CodeGroup>

### Initialize the Pezzo Client

You only need to initialize the Pezzo client once, and then you can use it throughout your application.

<Tabs>
<Tab title="Configure via environment variables">
Pezzo automatically looks for the following environment variables:
- `PEZZO_API_KEY`: Your Pezzo API key
- `PEZZO_PROJECT_ID`: Your Pezzo project ID
- `PEZZO_ENVIRONMENT`: The environment you want to use (e.g. `Production`, which is the default environment created by Pezzo)

Variables found will be used automatically for configuration.

```ts
import { Pezzo, PezzoOpenAI } from "@pezzo/client";

// Initialize the Pezzo client and export it
export const pezzo = new Pezzo();

// Initialize PezzoOpenAI and export it
export const openai = new PezzoOpenAI(pezzo);
```
</Tab>
<Tab title="Configure manually">
```ts
import { Pezzo, PezzoOpenAI } from "@pezzo/client";

// Initialize the Pezzo client and export it
export const pezzo = new Pezzo({
apiKey: "your-api-key",
projectId: "your-project-id",
environment: "Production",
});

// Initialize PezzoOpenAI and export it
export const openai = new PezzoOpenAI(pezzo);
```
</Tab>
</Tabs>
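The resolution order described above (explicit options take precedence over `PEZZO_*` environment variables) can be sketched roughly as follows. This is an illustration of the documented behavior, not the client's actual source; the defaults shown come from the API Reference below.

```typescript
interface PezzoOptions {
  apiKey?: string;
  projectId?: string;
  environment?: string;
  serverUrl?: string;
}

// Illustrative only: explicit options win, then environment
// variables, then documented defaults.
function resolveOptions(options: PezzoOptions = {}): Required<PezzoOptions> {
  return {
    apiKey: options.apiKey ?? process.env.PEZZO_API_KEY ?? "",
    projectId: options.projectId ?? process.env.PEZZO_PROJECT_ID ?? "",
    environment: options.environment ?? process.env.PEZZO_ENVIRONMENT ?? "Production",
    serverUrl: options.serverUrl ?? "https://api.pezzo.ai",
  };
}
```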


<CardGroup cols={2}>
<Card
title="Use Pezzo with OpenAI"
icon="bolt-lightning"
href="/client/integrations/openai"
>
Learn how to use Pezzo to observe and manage your OpenAI API calls.
</Card>
</CardGroup>

## API Reference

<ResponseField name="Pezzo.constructor(options: PezzoOptions)" type="Function">
<div style={{ marginLeft: 20 }}>
<ParamField path="options" type="PezzoOptions">
<div style={{ marginLeft: 20 }}>
<ParamField path="apiKey" type="string" required="false" default="process.env.PEZZO_API_KEY">
Pezzo API key
</ParamField>
<ParamField path="projectId" type="string" required="false" default="process.env.PEZZO_PROJECT_ID">
Pezzo project ID
</ParamField>
<ParamField path="environment" type="string" required="false" default="process.env.PEZZO_ENVIRONMENT">
Pezzo environment name
</ParamField>
<ParamField path="serverUrl" type="string" required="false" default="https://api.pezzo.ai">
Pezzo server URL
</ParamField>
</div>
</ParamField>
</div>
</ResponseField>

<ResponseField name="Pezzo.getPrompt(promptName: string)" type="Function">
<div style={{ marginLeft: 20 }}>
<ParamField path="promptName" type="string">
The name of the prompt to retrieve. The prompt must be deployed to the current environment specified when initializing the Pezzo client.
</ParamField>
</div>
</ResponseField>