diff --git a/apps/docs/README.md b/apps/docs/README.md index 745062a..c89c478 100644 --- a/apps/docs/README.md +++ b/apps/docs/README.md @@ -8,7 +8,7 @@ Click on `Use this template` to copy the Mintlify starter kit. The starter kit c - API Reference pages - Use of popular components -### 👩‍đŸ’ģ Development +### Development Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command @@ -22,11 +22,9 @@ Run the following command at the root of your documentation (where mint.json is) mintlify dev ``` -### 😎 Publishing Changes +### Publishing Changes -Changes will be deployed to production automatically after pushing to the default branch. - -You can also preview changes using PRs, which generates a preview link of the docs. +Install our GitHub App to automatically propagate changes from your repo to your deployment. Changes will be deployed to production automatically after pushing to the default branch. Find the link to install on your dashboard. #### Troubleshooting diff --git a/apps/docs/api-reference/authentication.mdx b/apps/docs/api-reference/authentication.mdx new file mode 100644 index 0000000..4578f69 --- /dev/null +++ b/apps/docs/api-reference/authentication.mdx @@ -0,0 +1,22 @@ +--- +title: "Authentication" +description: "Example overview page before API endpoints" +--- + +Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas et eros iaculis tortor dapibus cursus. Curabitur quis sapien nec tortor dictum gravida. + +```bash +'Authorization': 'Token ' +``` + +## API Tokens + +Nullam convallis mauris at nunc consectetur, ac imperdiet leo rutrum. Maecenas cursus purus a pellentesque blandit. Pellentesque vitae lacinia libero, non mollis metus. + +Nam id ullamcorper urna, at rutrum enim. [Maecenas vulputate](/introduction) vehicula libero, vitae sodales augue pretium nec. Quisque a magna tempor, semper risus vel, fermentum nunc. Pellentesque fermentum interdum ex, eu convallis massa blandit sed. Aliquam bibendum ipsum vel laoreet auctor. + +### Permissions + +Etiam lobortis ut odio ut fermentum. Nunc odio velit, sollicitudin at consectetur id, tristique eget turpis. Aliquam at risus vitae dolor sodales venenatis. In hac habitasse platea dictumst. + +Aenean consequat diam eget mollis fermentum. [Quisque eu malesuada](/introduction) felis, non dignissim libero.
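For illustration only, here is a minimal sketch of a request that authenticates with the header format shown above. It assumes the example `https://api.mintlify.com/api/user` endpoint used elsewhere in this API reference, and `<token>` is a placeholder for your real API token, not a value defined on this page.

```bash
# Hypothetical example: the API token is passed in the Authorization header.
# Replace <token> with your actual token before running.
curl --request GET 'https://api.mintlify.com/api/user' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Token <token>'
```

If the token is missing or invalid, the API would typically reject the request with an authentication error.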
diff --git a/apps/docs/api-reference/cache/retrieve-cached-request.mdx b/apps/docs/api-reference/cache/retrieve-cached-request.mdx deleted file mode 100644 index 22fcf48..0000000 --- a/apps/docs/api-reference/cache/retrieve-cached-request.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /cache/v1/request/retrieve ---- \ No newline at end of file diff --git a/apps/docs/api-reference/cache/save-request-to-cache.mdx b/apps/docs/api-reference/cache/save-request-to-cache.mdx deleted file mode 100644 index de13e36..0000000 --- a/apps/docs/api-reference/cache/save-request-to-cache.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /cache/v1/request/save ---- \ No newline at end of file diff --git a/apps/docs/api-reference/endpoint/create.mdx b/apps/docs/api-reference/endpoint/create.mdx new file mode 100644 index 0000000..7e7f86b --- /dev/null +++ b/apps/docs/api-reference/endpoint/create.mdx @@ -0,0 +1,84 @@ +--- +title: "Create User" +api: "POST https://api.mintlify.com/api/user" +description: "This endpoint creates a new user" +--- + +### Body + + + This is the current user group token you have for the user group that you want + to rotate. + + +### Response + + + Indicates whether the call was successful. 1 if successful, 0 if not. + + + + +The contents of the user group + + + + + This is the internal ID for this user group. You don't need to record this + information, since you will not need to use it. + + + + This is the user group token (userGroupToken or USER_GROUP_TOKEN) that will be + used to identify which user group is viewing the dashboard. You should save + this on your end to use when rendering an embedded dashboard. + + + + This is the name of the user group provided in the request body. + + + + This is the user_group_id provided in the request body. + + + + This is the environment tag of the user group. Possible values are 'Customer' + and 'Testing'. User group IDs must be unique to each environment, so you can + not create multiple user groups with the same ID. If you have a production + customer and a test user group with the same ID, you will be required to label + one as 'Customer' and another as 'Testing' + + + + + + + + +```bash Example Request +curl --location --request POST 'https://api.mintlify.com/api/user' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Token ' \ +--data-raw '{ + "current_token": "" +}' +``` + + + + + +```json Response +{ + "success": 1, + "user_group": { + "team_id": 3, + "token": "", + "name": "Example 1", + "provided_id": "example_1" + } +} +``` + + diff --git a/apps/docs/api-reference/endpoint/delete.mdx b/apps/docs/api-reference/endpoint/delete.mdx new file mode 100644 index 0000000..cb0eb8b --- /dev/null +++ b/apps/docs/api-reference/endpoint/delete.mdx @@ -0,0 +1,47 @@ +--- +title: "Delete User" +api: "DELETE https://api.mintlify.com/api/user" +description: "This endpoint deletes an existing user." +--- + +### Body + + + The data source ID provided in the data tab may be used to identify the data + source for the user group + + + + This is the current user group token you have for the user group you want to + delete + + +### Response + + + Indicates whether the call was successful. 1 if successful, 0 if not.
+ + + + +```bash Example Request +curl --location --request DELETE 'https://api.mintlify.com/api/user' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Token ' \ +--data-raw '{ + "user_group_id": "example_1", + "current_token": "abcdef" +}' +``` + + + + + +```json Response +{ + "success": 1 +} +``` + + diff --git a/apps/docs/api-reference/endpoint/get.mdx b/apps/docs/api-reference/endpoint/get.mdx new file mode 100644 index 0000000..ce95f65 --- /dev/null +++ b/apps/docs/api-reference/endpoint/get.mdx @@ -0,0 +1,101 @@ +--- +title: "Get User" +api: "GET https://api.mintlify.com/api/user" +description: "This endpoint gets or creates a new user." +--- + +### Body + + + This is the name of the user group. + + + + This is the ID you use to identify this user group in your database. + + + + This is a JSON mapping of schema id to either the data source that this user group should be + associated with or id of the datasource you provided when creating it. + + + + This is a JSON object for properties assigned to this user group. These will be accessible through + variables in the dashboards and SQL editor + + +### Response + + + Indicates whether the call was successful. 1 if successful, 0 if not. + + + + Indicates whether a new user group was created. + + + + +The contents of the user group + + + + + This is the internal ID for this user group. You don't need to record this information, since + you will not need to use it. + + + + This is the user group token (userGroupToken or USER_GROUP_TOKEN) that will be used to identify + which user group is viewing the dashboard. You should save this on your end to use when rendering + an embedded dashboard. + + + + This is the name of the user group provided in the request body. + + + + This is the user_group_id provided in the request body. + + + + This is the properties object if it was provided in the request body + + + + + + + + +```bash Example Request +curl --location --request GET 'https://api.mintlify.com/api/user' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Token ' \ +--data-raw '{ + "user_group_id": "example_1", + "name": "Example 1", + "mapping": {"40": "213", "134": "386"}, + "properties": {"filterValue": "value"} +}' +``` + + + + + +```json Response +{ + "success": 1, + "new_user_group": true, + "user_group": { + "team_id": 3, + "token": "", + "name": "Example 1", + "provided_id": "example_1" + } +} +``` + + diff --git a/apps/docs/api-reference/endpoint/update.mdx b/apps/docs/api-reference/endpoint/update.mdx new file mode 100644 index 0000000..4304987 --- /dev/null +++ b/apps/docs/api-reference/endpoint/update.mdx @@ -0,0 +1,101 @@ +--- +title: "Update User" +api: "PUT https://api.mintlify.com/api/user" +description: "This endpoint updates an existing user." +--- + +### Body + + + This is the name of the user group. + + + + This is the ID you use to identify this user group in your database. + + + + This is a JSON mapping of schema id to either the data source that this user + group should be associated with or id of the datasource you provided when + creating it. + + + + This is a JSON object for properties assigned to this user group. These will + be accessible through variables in the dashboards and SQL editor + + +### Response + + + Indicates whether the call was successful. 1 if successful, 0 if not. + + + + +The contents of the user group + + + + + Indicates whether a new user group was created.
+ + + + This is the user group token (userGroupToken or USER_GROUP_TOKEN) that will be + used to identify which user group is viewing the dashboard. You should save + this on your end to use when rendering an embedded dashboard. + + + + This is the name of the user group provided in the request body. + + + + This is the user_group_id provided in the request body. + + + + This is the properties object if it was provided in the request body + + + + This is the environment tag of the user group. Possible values are 'Customer' + and 'Testing' + + + + + + + + +```bash Example Request +curl --location --request PUT 'https://api.mintlify.com/api/user' \ +--header 'Content-Type: application/json' \ +--header 'Authorization: Token ' \ +--data-raw '{ + "user_group_id": "example_1", + "name": "Example 1", + "mapping": {"40": "213", "134": "386"}, + "properties": {"filterValue": "value"} +}' +``` + + + + + +```json Response +{ + "success": 1, + "user_group": { + "team_id": 113, + "token": "", + "name": "ok", + "provided_id": "6" + } +} +``` + + diff --git a/apps/docs/api-reference/health/performs-a-health-check.mdx b/apps/docs/api-reference/health/performs-a-health-check.mdx deleted file mode 100644 index 4cd1d03..0000000 --- a/apps/docs/api-reference/health/performs-a-health-check.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: get /healthz ---- \ No newline at end of file diff --git a/apps/docs/api-reference/prompts/get-the-deployed-prompt-version-to-a-particular-environment.mdx b/apps/docs/api-reference/prompts/get-the-deployed-prompt-version-to-a-particular-environment.mdx deleted file mode 100644 index b2335cd..0000000 --- a/apps/docs/api-reference/prompts/get-the-deployed-prompt-version-to-a-particular-environment.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: get /prompts/v2/deployment ---- \ No newline at end of file diff --git a/apps/docs/api-reference/reporting/report-a-request.mdx b/apps/docs/api-reference/reporting/report-a-request.mdx deleted file mode 100644 index 22674a4..0000000 --- a/apps/docs/api-reference/reporting/report-a-request.mdx +++ /dev/null @@ -1,3 +0,0 @@ ---- -openapi: post /reporting/v2/request ---- \ No newline at end of file diff --git a/apps/docs/client/cache-request-details.png b/apps/docs/client/cache-request-details.png deleted file mode 100644 index 67cd430..0000000 Binary files a/apps/docs/client/cache-request-details.png and /dev/null differ diff --git a/apps/docs/client/cache-requests-list.png b/apps/docs/client/cache-requests-list.png deleted file mode 100644 index 39dbba0..0000000 Binary files a/apps/docs/client/cache-requests-list.png and /dev/null differ diff --git a/apps/docs/client/integrations/openai.mdx b/apps/docs/client/integrations/openai.mdx deleted file mode 100644 index 6fba51b..0000000 --- a/apps/docs/client/integrations/openai.mdx +++ /dev/null @@ -1,227 +0,0 @@ ---- -title: "OpenAI Integration" -description: "Learn how to use OpenAI with Pezzo." ---- - -## Using OpenAI With Pezzo - -Ensure that you have the latest version of the Pezzo Client installed, as well as the OpenAI NPM package. 
- - - - - ```bash npm - npm i @pezzo/client openai - ``` - ```bash yarn - yarn add @pezzo/client openai - ``` - ```bash pnpm - pnpm add @pezzo/client openai - ``` - - - - - ```bash pip - pip install pezzo - ``` - ```bash poetry - poetry add pezzo - ``` - - - - -### Initialize Pezzo and PezzoOpenAI - -Learn more about how to initialize the Pezzo Client: -- [Node.js](/client/pezzo-client-node) -- [Python](/client/pezzo-client-python) - -### Making Requests to OpenAI - -#### Option 1: With Prompt Management (Recommended) - -We recommend you to manage your AI prompts through Pezzo. This allows you to easily manage your prompts, and keep track of your AI requests. [Click here to learn about Prompt Management in Pezzo](platform/prompt-management). - -Below is an example of how you can use Pezzo to retrieve a prompt, and then use it to make a request to OpenAI. - - - -```ts - // Fech prompt from Pezzo - const prompt = await pezzo.getPrompt("PromptName"); - - // Provide the prompt as-is to OpenAI - const response = await openai.chat.completions.create(prompt); - - // Or you can override specific properties if you wish - const response = await openai.chat.completions.create({ - ...prompt, - model: "gpt-4", - }); -``` - - -```py - from pezzo.client import pezzo - from pezzo.openai import openai - - # Fetch prompt from Pezzo - prompt = pezzo.get_prompt("PromptName") - - # Provide the prompt to OpenAI - response = openai.ChatCompletion.create( - pezzo_prompt=prompt - ) - - # You can override specific properties if you wish - response = openai.ChatCompletion.create( - pezzo_prompt=prompt, - model="gpt-4" - ) -``` - - - -Congratulations! You've about to benefit from seamless prompt version management and request tracking. Your request will now be visible in the **Requests** page of your Pezzo project. - -#### Option 2: Without Prompt Management - -If you don't want to manage your prompts through Pezzo, you can still use Pezzo to make requests to OpenAI and benefit from Pezzo's [Observability features](platform/observability/overview). - -You will consume the make request to the OpenAI exactly as you normally would. The only difference is that you will use the `PezzoOpenAI` instance we created above. Here is an example: - - - -```ts -const response = await openai.chat.completions.create({ - model: "gpt-3.5-turbo", - temperature: 0, - messages: [ - { - role: "user", - content: "Hey, how are you doing?", - }, - ], -}); -``` - - -```py -from pezzo.client import pezzo -from pezzo.openai import openai - -response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - temperature=0, - messages=[ - { - "role": "user", - "content": "Hey, how are you doing?", - } - ] -) -``` - - - -You should now be able to see your request in the **Requests** page of your Pezzo project. - -### Additional Capabilities - -The Pezzo client enhances your developer experience by providing additional functionality to the OpenAI API. This is done through the second argument of the `createChatCompletion` method. - -#### Variables - -You can specify variables that will be interpolated by the Pezzo client before sending the request to OpenAI. This is useful if you want to use the same prompt for multiple requests, but with different variables. 
- - - -```ts -const response = await openai.chat.completions.create(..., { - variables: { - age: 22, - country: "France" - } -}); -``` - - -```py -response = openai.ChatCompletion.create( - ..., - pezzo_options={ - "variables": { - "age": 22, - "country": "France" - } - } -) -``` - - - -Notice the variables in the prompt. The Pezzo client will replace them with the values you specified in the `variables` object. - -#### Custom Properties - -You can also specify custom properties that will be sent to Pezzo. This is useful if you want to add additional information to your request, such as the user ID, or the request ID. This information will be visible in the **Requests** page of your Pezzo project, and you will be able to filter requests based on these properties. - - - -```ts -const response = await openai.chat.completions.create({ - ... -}, { - properties: { - userId: "some-user-id", - traceId: "some-trace-id" - } -}); -``` - - -```py -response = await openai.ChatCompletion.create( - ..., - pezzo_options={ - "properties": { - "userId": "some-user-id", - "traceId": "some-trace-id" - } - } -) -``` - - - - -#### Request Caching - -Utilizing request caching can sometimes save up to 90% on your API costs and execution time. You can enable cache by setting `cache` to `true` in the second argument of the `createChatCompletion` method. - - - -```ts -const response = await openai.chat.completions.create({ - ... -}, { - cache: true -}); -``` - - -```py -response = await openai.ChatCompletion.create( - ..., - pezzo_options={ - "cache": True - } -) -``` - - - -To learn more, visit the [Request Caching](/client/request-caching) page. \ No newline at end of file diff --git a/apps/docs/client/pezzo-client-node.mdx b/apps/docs/client/pezzo-client-node.mdx deleted file mode 100644 index 02e379f..0000000 --- a/apps/docs/client/pezzo-client-node.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: "Pezzo Client - Node.js" ---- - -The Pezzo client is an NPM package that allows you to easily integrate your application with Pezzo. The client was built with TypeScript and is type-safe. - -## Getting Started - -### Intall the Pezzo Client - -Install the [@pezzo/client](https://www.npmjs.com/package/@pezzo/client) NPM package: - - - ```bash npm - npm i @pezzo/client - ``` - ```bash yarn - yarn add @pezzo/client - ``` - ```bash pnpm - pnpm add @pezzo/client - ``` - - -### Initialize the Pezzo Client - -You only need to initialize the Pezzo client once, and then you can use it throughout your application. - - - - Pezzo automatically looks for the following environment variables: - - `PEZZO_API_KEY`: Your Pezzo API key - - `PEZZO_PROJECT_ID`: Your Pezzo project ID - - `PEZZO_ENVIRONMENT`: The environment you want to use (e.g. `Production`, which is the default environment created by Pezzo) - - Variables found will be used automatically for configuration. - - ```ts - import { Pezzo, PezzoOpenAI } from "@pezzo/client"; - - // Initialize the Pezzo client and export it - export const pezzo = new Pezzo(); - - // Initialize PezzoOpenAI and export it - export const openai = new PezzoOpenAI(pezzo); - ``` - - - ```ts - import { Pezzo, PezzoOpenAI } from "@pezzo/client"; - - // Initialize the Pezzo client and export it - export const pezzo = new Pezzo({ - apiKey: "your-api-key", - projectId: "your-project-id", - environment: "Production", - }); - - // Initialize PezzoOpenAI and export it - export const openai = new PezzoOpenAI(pezzo); - ``` - - - - - - - Learn how to use Pezzo to observe and manage your OpenAI API calls. 
- - - -## API Reference - - -
- -
- - Pezzo API key - - - Pezzo project ID - - - Pezzo environment name - - - Pezzo server URL - -
-
-
-
- - -
- - The name of the prompt to retrieve. The prompt must be deployed to the current environment specified when initializing the Pezzo client. - -
-
\ No newline at end of file diff --git a/apps/docs/client/pezzo-client-python.mdx b/apps/docs/client/pezzo-client-python.mdx deleted file mode 100644 index 073a5a9..0000000 --- a/apps/docs/client/pezzo-client-python.mdx +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: "Pezzo Client - Python" ---- - -The Pezzo client is a PyPi that allows you to easily integrate your application with Pezzo. - -## Getting Started - -### Intall the Pezzo Client - -Install the [pezzo](https://pypi.org/project/pezzo/) package from PyPi: - - - ```bash pip - pip install pezzo - ``` - ```bash poetry - poetry add pezzo - ``` - - -### Initialize the Pezzo Client - -To initialize the Pezzo client, simply import it into your script. It will automatically read the configuration from your environment variables: -- `PEZZO_API_KEY`: Your Pezzo API key -- `PEZZO_PROJECT_ID`: Your Pezzo project ID -- `PEZZO_ENVIRONMENT`: The environment you want to use (e.g. `Production`, which is the default environment created by Pezzo) - - -```py main.py -from pezzo.client import pezzo -``` - - -In the above example, we created a `libs/pezzo.ts` file in which we instantiate the Pezzo client and export it. We can then import it in other areas of our application. - - - - Learn how to use Pezzo to observe and manage your OpenAI API calls. - - - -## API Reference - - -
- - The name of the prompt to retrieve. The prompt must be deployed to the current environment specified when initializing the Pezzo client. - -
-
\ No newline at end of file diff --git a/apps/docs/client/request-caching.mdx b/apps/docs/client/request-caching.mdx deleted file mode 100644 index e847a16..0000000 --- a/apps/docs/client/request-caching.mdx +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "Request Caching" ---- - -Pezzo provides you with out-of-the-box request caching capabilities. Caching is useful in several scenarios: -- Your LLM requests are relatively static -- Your LLM requests take a long time to execute -- Your LLM requests are expensive - -Utilizing caching can sometimes reduce your development costs and execution time by over 90%! - -## Usage - -To enable caching, simply set `cache: enabled` in the Pezzo Options parameter. Here is an example: - -```ts -const response = await openai.chat.completions.create({ - model: "gpt-3.5-turbo", - messages: [ - { - role: "user", - message: "Hello, how are you?" - } - ] -}, { - cache: true -}); -``` - -## Cached Requests in the Console - -Cached requests will will be marked in the **Requests** tab in the Pezzo Console: - - - - - -When inspecting requests, you will see whether cache was enabled, and whether there was a cache hit or miss: - - - - - -## Limitations - -Requests will be cached for 3 days by default. This is currently not configurable. \ No newline at end of file diff --git a/apps/docs/development.mdx b/apps/docs/development.mdx new file mode 100644 index 0000000..8783008 --- /dev/null +++ b/apps/docs/development.mdx @@ -0,0 +1,98 @@ +--- +title: 'Development' +description: 'Learn how to preview changes locally' +--- + + + **Prerequisite** You should have installed Node.js (version 18.10.0 or + higher). + + +Step 1. Install Mintlify on your OS: + + + +```bash npm +npm i -g mintlify +``` + +```bash yarn +yarn global add mintlify +``` + + + +Step 2. Go to the directory where the docs are located (where you can find `mint.json`) and run the following command: + +```bash +mintlify dev +``` + +The documentation website is now available at `http://localhost:3000`. + +### Custom Ports + +Mintlify uses port 3000 by default. You can use the `--port` flag to customize the port Mintlify runs on. For example, use this command to run on port 3333: + +```bash +mintlify dev --port 3333 +``` + +You will see an error like this if you try to run Mintlify on a port that's already taken: + +```md +Error: listen EADDRINUSE: address already in use :::3000 +``` + +## Mintlify Versions + +Each CLI is linked to a specific version of Mintlify. Please update the CLI if your local website looks different than production. + + + +```bash npm +npm i -g mintlify@latest +``` + +```bash yarn +yarn global upgrade mintlify +``` + + + +## Deployment + + + Unlimited editors available under the [Startup + Plan](https://mintlify.com/pricing) + + +You should see the following if the deployment went through successfully: + + + + + +## Troubleshooting + +Here's how to solve some common problems when working with the CLI. + + + + Update to Node v18. Run `mintlify install` and try again. + + +Go to the `C:/Users/Username/.mintlify/` directory and remove the `mint` +folder. Then open Git Bash in this location and run `git clone +https://github.com/mintlify/mint.git`. + +Repeat step 3. + + + + Try navigating to the root of your device and delete the ~/.mintlify folder. + Then run `mintlify dev` again. + + + +Curious about what changed in a CLI version?
[Check out the CLI changelog.](/changelog/command-line) diff --git a/apps/docs/essentials/code.mdx b/apps/docs/essentials/code.mdx new file mode 100644 index 0000000..d2a462a --- /dev/null +++ b/apps/docs/essentials/code.mdx @@ -0,0 +1,37 @@ +--- +title: 'Code Blocks' +description: 'Display inline code and code blocks' +icon: 'code' +--- + +## Basic + +### Inline Code + +To denote a `word` or `phrase` as code, enclose it in backticks (`). + +``` +To denote a `word` or `phrase` as code, enclose it in backticks (`). +``` + +### Code Block + +Use [fenced code blocks](https://www.markdownguide.org/extended-syntax/#fenced-code-blocks) by enclosing code in three backticks and follow the leading ticks with the programming language of your snippet to get syntax highlighting. Optionally, you can also write the name of your code after the programming language. + +```java HelloWorld.java +class HelloWorld { + public static void main(String[] args) { + System.out.println("Hello, World!"); + } +} +``` + +````md +```java HelloWorld.java +class HelloWorld { + public static void main(String[] args) { + System.out.println("Hello, World!"); + } +} +``` +```` diff --git a/apps/docs/essentials/images.mdx b/apps/docs/essentials/images.mdx new file mode 100644 index 0000000..4c15177 --- /dev/null +++ b/apps/docs/essentials/images.mdx @@ -0,0 +1,59 @@ +--- +title: 'Images and Embeds' +description: 'Add image, video, and other HTML elements' +icon: 'image' +--- + + + +## Image + +### Using Markdown + +The [markdown syntax](https://www.markdownguide.org/basic-syntax/#images) lets you add images using the following code + +```md +![title](/path/image.jpg) +``` + +Note that the image file size must be less than 5MB. Otherwise, we recommend hosting on a service like [Cloudinary](https://cloudinary.com/) or [S3](https://aws.amazon.com/s3/). You can then use that URL and embed. + +### Using Embeds + +To get more customizability with images, you can also use [embeds](/writing-content/embed) to add images + +```html + +``` + +## Embeds and HTML elements + + + +
+ + + +Mintlify supports [HTML tags in Markdown](https://www.markdownguide.org/basic-syntax/#html). This is helpful if you prefer HTML tags to Markdown syntax, and lets you create documentation with infinite flexibility. + + + +### iFrames + +Loads another HTML page within the document. Most commonly used for embedding videos. + +```html + +``` diff --git a/apps/docs/essentials/markdown.mdx b/apps/docs/essentials/markdown.mdx new file mode 100644 index 0000000..c8ad9c1 --- /dev/null +++ b/apps/docs/essentials/markdown.mdx @@ -0,0 +1,88 @@ +--- +title: 'Markdown Syntax' +description: 'Text, title, and styling in standard markdown' +icon: 'text-size' +--- + +## Titles + +Best used for section headers. + +```md +## Titles +``` + +### Subtitles + +Best use to subsection headers. + +```md +### Subtitles +``` + + + +Each **title** and **subtitle** creates an anchor and also shows up on the table of contents on the right. + + + +## Text Formatting + +We support most markdown formatting. Simply add `**`, `_`, or `~` around text to format it. + +| Style | How to write it | Result | +| ------------- | ----------------- | --------------- | +| Bold | `**bold**` | **bold** | +| Italic | `_italic_` | _italic_ | +| Strikethrough | `~strikethrough~` | ~strikethrough~ | + +You can combine these. For example, write `**_bold and italic_**` to get **_bold and italic_** text. + +You need to use HTML to write superscript and subscript text. That is, add `` or `` around your text. + +| Text Size | How to write it | Result | +| ----------- | ------------------------ | ---------------------- | +| Superscript | `superscript` | superscript | +| Subscript | `subscript` | subscript | + +## Linking to Pages + +You can add a link by wrapping text in `[]()`. You would write `[link to google](https://google.com)` to [link to google](https://google.com). + +Links to pages in your docs need to be root-relative. Basically, you should include the entire folder path. For example, `[link to text](/writing-content/text)` links to the page "Text" in our components section. + +Relative links like `[link to text](../text)` will open slower because we cannot optimize them as easily. + +## Blockquotes + +### Singleline + +To create a blockquote, add a `>` in front of a paragraph. + +> Dorothy followed her through many of the beautiful rooms in her castle. + +```md +> Dorothy followed her through many of the beautiful rooms in her castle. +``` + +### Multiline + +> Dorothy followed her through many of the beautiful rooms in her castle. +> +> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood. + +```md +> Dorothy followed her through many of the beautiful rooms in her castle. +> +> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood. +``` + +### LaTeX + +Mintlify supports [LaTeX](https://www.latex-project.org) through the Latex component. + +8 x (vk x H1 - H2) = (0,1) + +```md +8 x (vk x H1 - H2) = (0,1) +``` diff --git a/apps/docs/essentials/navigation.mdx b/apps/docs/essentials/navigation.mdx new file mode 100644 index 0000000..ca44bb6 --- /dev/null +++ b/apps/docs/essentials/navigation.mdx @@ -0,0 +1,66 @@ +--- +title: 'Navigation' +description: 'The navigation field in mint.json defines the pages that go in the navigation menu' +icon: 'map' +--- + +The navigation menu is the list of links on every website. + +You will likely update `mint.json` every time you add a new page. Pages do not show up automatically. 
+ +## Navigation syntax + +Our navigation syntax is recursive which means you can make nested navigation groups. You don't need to include `.mdx` in page names. + + + +```json Regular Navigation +"navigation": [ + { + "group": "Getting Started", + "pages": ["quickstart"] + } +] +``` + +```json Nested Navigation +"navigation": [ + { + "group": "Getting Started", + "pages": [ + "quickstart", + { + "group": "Nested Reference Pages", + "pages": ["nested-reference-page"] + } + ] + } +] +``` + + + +## Folders + +Simply put your MDX files in folders and update the paths in `mint.json`. + +For example, to have a page at `https://yoursite.com/your-folder/your-page` you would make a folder called `your-folder` containing an MDX file called `your-page.mdx`. + + + +You cannot use `api` for the name of a folder unless you nest it inside another folder. Mintlify uses Next.js which reserves the top-level `api` folder for internal server calls. A folder name such as `api-reference` would be accepted. + + + +```json Navigation With Folder +"navigation": [ + { + "group": "Group Name", + "pages": ["your-folder/your-page"] + } +] +``` + +## Hidden Pages + +MDX files not included in `mint.json` will not show up in the sidebar but are accessible through the search bar and by linking directly to them. diff --git a/apps/docs/essentials/settings.mdx b/apps/docs/essentials/settings.mdx new file mode 100644 index 0000000..ae6e7d6 --- /dev/null +++ b/apps/docs/essentials/settings.mdx @@ -0,0 +1,318 @@ +--- +title: 'Global Settings' +description: 'Mintlify gives you complete control over the look and feel of your documentation using the mint.json file' +icon: 'gear' +--- + +Every Mintlify site needs a `mint.json` file with the core configuration settings. Learn more about the [properties](#properties) below. + +## Properties + + +Name of your project. Used for the global title. + +Example: `mintlify` + + + + + An array of groups with all the pages within that group + + + The name of the group. + + Example: `Settings` + + + + The relative paths to the markdown files that will serve as pages. + + Example: `["customization", "page"]` + + + + + + + + Path to logo image or object with path to "light" and "dark" mode logo images + + + Path to the logo in light mode + + + Path to the logo in dark mode + + + Where clicking on the logo links you to + + + + + + Path to the favicon image + + + + Hex color codes for your global theme + + + The primary color. Used for most often for highlighted content, section + headers, accents, in light mode + + + The primary color for dark mode. Used for most often for highlighted + content, section headers, accents, in dark mode + + + The primary color for important buttons + + + The color of the background in both light and dark mode + + + The hex color code of the background in light mode + + + The hex color code of the background in dark mode + + + + + + + + Array of `name`s and `url`s of links you want to include in the topbar + + + The name of the button. + + Example: `Contact us` + + + The url once you click on the button. Example: `https://mintlify.com/contact` + + + + + + + + + Link shows a button. GitHub shows the repo information at the url provided including the number of GitHub stars. + + + If `link`: What the button links to. + + If `github`: Link to the repository to load GitHub information from. + + + Text inside the button. Only required if `type` is a `link`. + + + + + + + Array of version names. 
Only use this if you want to show different versions + of docs with a dropdown in the navigation bar. + + + + An array of the anchors, includes the `icon`, `color`, and `url`. + + + The [Font Awesome](https://fontawesome.com/search?s=brands%2Cduotone) icon used to feature the anchor. + + Example: `comments` + + + The name of the anchor label. + + Example: `Community` + + + The start of the URL that marks what pages go in the anchor. Generally, this is the name of the folder you put your pages in. + + + The hex color of the anchor icon background. Can also be a gradient if you pass an object with the properties `from` and `to` that are each a hex color. + + + Used if you want to hide an anchor until the correct docs version is selected. + + + Pass `true` if you want to hide the anchor until you directly link someone to docs inside it. + + + One of: "brands", "duotone", "light", "sharp-solid", "solid", or "thin" + + + + + + + Override the default configurations for the top-most anchor. + + + The name of the top-most anchor + + + Font Awesome icon. + + + One of: "brands", "duotone", "light", "sharp-solid", "solid", or "thin" + + + + + + An array of navigational tabs. + + + The name of the tab label. + + + The start of the URL that marks what pages go in the tab. Generally, this + is the name of the folder you put your pages in. + + + + + + Configuration for API settings. Learn more about API pages at [API Components](/api-playground/demo). + + + The base url for all API endpoints. If `baseUrl` is an array, it will enable for multiple base url + options that the user can toggle. + + + + + + The authentication strategy used for all API endpoints. + + + The name of the authentication parameter used in the API playground. + + If method is `basic`, the format should be `[usernameName]:[passwordName]` + + + The default value that's designed to be a prefix for the authentication input field. + + E.g. If an `inputPrefix` of `AuthKey` would inherit the default input result of the authentication field as `AuthKey`. + + + + + + Configurations for the API playground + + + + Whether the playground is showing, hidden, or only displaying the endpoint with no added user interactivity `simple` + + Learn more at the [playground guides](/api-playground/demo) + + + + + + Enabling this flag ensures that key ordering in OpenAPI pages matches the key ordering defined in the OpenAPI file. + + This behavior will soon be enabled by default, at which point this field will be deprecated. + + + + + + + A string or an array of strings of URL(s) or relative path(s) pointing to your + OpenAPI file. + + Examples: + + ```json Absolute + "openapi": "https://example.com/openapi.json" + ``` + ```json Relative + "openapi": "/openapi.json" + ``` + ```json Multiple + "openapi": ["https://example.com/openapi1.json", "/openapi2.json", "/openapi3.json"] + ``` + + + + + + An object of social media accounts where the key:property pair represents the social media platform and the account url. + + Example: + ```json + { + "twitter": "https://twitter.com/mintlify", + "website": "https://mintlify.com" + } + ``` + + + One of the following values `website`, `facebook`, `twitter`, `discord`, `slack`, `github`, `linkedin`, `instagram`, `hacker-news` + + Example: `twitter` + + + The URL to the social platform. 
+ + Example: `https://twitter.com/mintlify` + + + + + + Configurations to enable feedback buttons + + + + Enables a button to allow users to suggest edits via pull requests + + + Enables a button to allow users to raise an issue about the documentation + + + + + + Customize the dark mode toggle. + + + Set if you always want to show light or dark mode for new users. When not + set, we default to the same mode as the user's operating system. + + + Set to true to hide the dark/light mode toggle. You can combine `isHidden` with `default` to force your docs to only use light or dark mode. For example: + + + ```json Only Dark Mode + "modeToggle": { + "default": "dark", + "isHidden": true + } + ``` + + ```json Only Light Mode + "modeToggle": { + "default": "light", + "isHidden": true + } + ``` + + + + + + + + + A background image to be displayed behind every page. See example with + [Infisical](https://infisical.com/docs) and [FRPC](https://frpc.io). + diff --git a/apps/docs/favicon.png b/apps/docs/favicon.png index 7cb1b32..0161cc1 100644 Binary files a/apps/docs/favicon.png and b/apps/docs/favicon.png differ diff --git a/apps/docs/images/background.png b/apps/docs/images/background.png deleted file mode 100644 index 3292ab2..0000000 Binary files a/apps/docs/images/background.png and /dev/null differ diff --git a/apps/docs/images/checks-passed.png b/apps/docs/images/checks-passed.png new file mode 100644 index 0000000..3303c77 Binary files /dev/null and b/apps/docs/images/checks-passed.png differ diff --git a/apps/docs/images/hero-dark.svg b/apps/docs/images/hero-dark.svg new file mode 100644 index 0000000..59ab097 --- /dev/null +++ b/apps/docs/images/hero-dark.svg @@ -0,0 +1,136 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/apps/docs/images/hero-light.svg b/apps/docs/images/hero-light.svg new file mode 100644 index 0000000..9db54d9 --- /dev/null +++ b/apps/docs/images/hero-light.svg @@ -0,0 +1,139 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/apps/docs/introduction.mdx b/apps/docs/introduction.mdx new file mode 100644 index 0000000..d39991a --- /dev/null +++ b/apps/docs/introduction.mdx @@ -0,0 +1,71 @@ +--- +title: Introduction +description: "Welcome to the home of your new documentation" +--- + +Hero Light +Hero Dark + +## Setting up + +The first step to world-class documentation is setting up your editing environnments. + + + + Get your docs set up locally for easy development + + + Preview your changes before you push to make sure they're perfect + + + +## Make it yours + +Update your docs to your brand and add valuable content for the best user conversion. 
+ + + + Customize your docs to your company's colors and brands + + + Automatically generate endpoints from an OpenAPI spec + + + Build interactive features and designs to guide your users + + + Check out our showcase of our favorite documentation + + diff --git a/apps/docs/introduction/banner.png b/apps/docs/introduction/banner.png deleted file mode 100644 index 22015b7..0000000 Binary files a/apps/docs/introduction/banner.png and /dev/null differ diff --git a/apps/docs/introduction/docker-compose.mdx b/apps/docs/introduction/docker-compose.mdx deleted file mode 100644 index c790971..0000000 --- a/apps/docs/introduction/docker-compose.mdx +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: "Running With Docker Compose" -description: "Learn how to run the full Pezzo stack locally with Docker Compose." ---- - - - If you don't wish to run Pezzo locally, you can use [Pezzo - Cloud](https://app.pezzo.ai) instead. Pezzo Cloud is a fully managed version - of Pezzo that is hosted and managed by the Pezzo team. - - -## Clone the repository - -First, clone the Pezzo repository from GitHub and then navigate into the directory: - -```bash -git clone https://github.com/pezzolabs/pezzo.git -cd pezzo -``` - -## Running with Docker Compose - -Run the following command to start the Pezzo stack: - -```bash -docker-compose up -``` - -This command will spin up all infrastructure services (PostgreSQL, Redis, etc.) and the Pezzo components (Server, Console). It will also automatically apply database migration to the PostgreSQL database. - -Once the stack is up and running, you can access the Pezzo Console at http://localhost:4200. 🚀 diff --git a/apps/docs/introduction/tutorial-observability/overview.mdx b/apps/docs/introduction/tutorial-observability/overview.mdx deleted file mode 100644 index 889ab30..0000000 --- a/apps/docs/introduction/tutorial-observability/overview.mdx +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: "Tutorial: Observability" -description: "In just a few lines of code, monitor your AI operations seamlessly." ---- - - -This tutorial is for users who want to use Pezzo for observability and monitoring only. - -If you also want to use Pezzo to manage your prompts, version control and prompt deployment, check out the [Prompt Management tutorial](/introduction/tutorial-prompt-management/overview). - - -**Prefer a video tutorial?** We've prepared a 5-minute video for you! If you want to see the code example, [it's available on Codesandbox](https://codesandbox.io/p/sandbox/pezzo-example-observability-6d2qp6?file=%2Fsrc%2Fapp.ts%3A1%2C1). - - -
- -
- -## What you'll learn - -You're going to learn how to easily use Pezzo to supercharge your AI operations with monitoring and observability. It takes just a few lines of code! - - - This tutorial assumes you already signed up to Pezzo Cloud and created a new - project. If you haven't done so, please [sign up to Pezzo - Cloud](https://app.pezzo.ai). - - -## Install depdendencies - -Install the Pezzo Client and the OpenAI SDK: - -```bash -npm i @pezzo/client openai -``` - -## Making calls to OpenAI - -Here is a code example: - -```ts app.ts -import { Pezzo, PezzoOpenAI } from "@pezzo/client"; - -// Initialize the Pezzo client -const pezzo = new Pezzo({ - apiKey: "", - projectId: "", - environment: "Production", -}); - -// Initialize the OpenAI client -const openai = new PezzoOpenAI(pezzo); - -async function main() { - // Make calls to the OpenAI API as you normally would! - const completion = await openai.chat.completions.create( - { - model: "gpt-3.5-turbo", - temperature: 0, - messages: [ - { - role: "user", - content: "Tell me {numFacts} fun facts about {topic}", - }, - ], - }, - { - variables: { - // You can define variables that will be interpolated during execution. - numFacts: 3, - topic: "Artificial Intelligence", - }, - properties: { - // You can optionally specify custom properties that will be associated with the request. - someProperty: "someValue", - }, - } - ); -} - -main(); - -``` - -[Click here to run this example on CodeSandbox](https://codesandbox.io/p/sandbox/pezzo-example-observability-6d2qp6?file=%2Fsrc%2Fapp.ts%3A1%2C1) - -**Let's explain what's going on here:** - -- First, we initialize the Pezzo client and the OpenAI client. We pass the Pezzo client to the OpenAI client so it can use it to fetch the prompt. -- Then, we make a call to the OpenAI API as we normally would. -- (Optional) We specify additional parameters in the second argument, these are `variables` and `properties`. - -The result will go directly to OpenAI and the response will be reported to Pezzo. - -## Monitoring Requests - -After integrating with Pezzo and making some requests to OpenAI, you should see all historical requests in the **Requests** view in Pezzo. - -If you want to learn more about Pezzo's observability feature, check out the [Observability section in the docs](/platform/observability/overview). 
- - - - diff --git a/apps/docs/introduction/tutorial-prompt-management/before-commit.png b/apps/docs/introduction/tutorial-prompt-management/before-commit.png deleted file mode 100644 index a18faf2..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/before-commit.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/commit-modal.png b/apps/docs/introduction/tutorial-prompt-management/commit-modal.png deleted file mode 100644 index ecafdbd..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/commit-modal.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/empty-prompts-page.png b/apps/docs/introduction/tutorial-prompt-management/empty-prompts-page.png deleted file mode 100644 index 4f00478..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/empty-prompts-page.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/final-app-screenshot.png b/apps/docs/introduction/tutorial-prompt-management/final-app-screenshot.png deleted file mode 100644 index 3dd33f6..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/final-app-screenshot.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/modal-new-prompt.png b/apps/docs/introduction/tutorial-prompt-management/modal-new-prompt.png deleted file mode 100644 index 1d9c5b7..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/modal-new-prompt.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/overview.mdx b/apps/docs/introduction/tutorial-prompt-management/overview.mdx deleted file mode 100644 index 3902ca6..0000000 --- a/apps/docs/introduction/tutorial-prompt-management/overview.mdx +++ /dev/null @@ -1,201 +0,0 @@ ---- -title: "Tutorial: Prompt Management" -description: "Learn how to manage your AI prompts for streamlined delivery with Pezzo in less than 5 minutes!" ---- - -This tutorial is for users who want to manage their AI operations in Pezzo end-to-end. From prompt management, version control and instant delivery, all the way to monitoring and observability. - -If you wish to only use Pezzo for monitoring and observability, check out the [Observability Tutorial](/introduction/tutorial-observability/overview). - - -**Prefer a video tutorial?** We've prepared a 5-minute video for you! If you want to see the code example, [it's available on Codesandbox](https://codesandbox.io/p/sandbox/pezzo-example-prompt-management-qv5f86?file=%2Fsrc%2Fapp.ts%3A1%2C1). - -
- -
- -## What you'll learn -You're going to learn how to manage your AI prompts with Pezzo, so you can streamline delivery and collaborate with your team. This includes: - -- Creating a prompt -- Basic Prompt Engineering -- Testing your prompt in the Pezzo Platform -- Committing and publishing your prompt -- Consuming your prompt using TypeScript - -Are you ready? Let's go! - -## Create your first prompt - -This tutorial assumes you already signed up to Pezzo Cloud and created a new project. If you haven't done so, please [sign up to Pezzo Cloud](https://app.pezzo.ai). - - -In the Prompts page, click the **+ New Prompt** button. This will open a modal where you can create a new Prompt. - -It's best to choose a descriptive name for your prompt. Let's call it `FactGenerator`. - - - - - -Hit **Create** and you will be taken to the Prompt page. - -## Prompt Engineering - -The Prompt Editor is the first screen you will see after selecting/creating a new prompt. This is wehere you will do some prompt engineering, commit and publish your prompts. You can find two main sections in the prompt editor - the **Content** and the **Settings**. - -### Prompt content - -The Prompt Content is where you will write your prompt. In our case, we want to ask the LLM (in our case, OpenAI) to generate facts about a topic. - -Simply copy and paste the following: - -```plaintext -Generate {numFacts} facts about the following topic: "{topic}" -``` - - -You can define variables in the prompt content by using curly braces. In our case, we define the `numFacts` and `topic` variables. - - -### Prompt settings -Depending on the LLM provider, the settings will vary. In our case, we will use the OpenAI API. Feel free to adjust the settings to your liking. - -For this tutorial, make the following adjustments: -- Temperate: `0` -- Max Response Length: `1000` - - -When expecting a structured response or strictly factual data, it's best to set the temperature to 0. This reduces the chance of "hallucinations". - - - - - - -## Testing the prompt -Pezzo allows you to test your prompts before publishing them. This is a great way to quickly iterate on your prompts and make sure they're working as expected. - -To test your prompt, simply click the **Test** button. This will open a modal where you can enter the values for the variables you defined in the prompt content. - -For example, if you want to generate 3 facts about cats, enter the following: - - - - - -Then, hit **Test** and wait for the results. You should see something like this: - - - - - -In the test results you can find plenty of useful information such as: -- Token usage -- Cost -- Status (success, error) -- Duration -- Request body -- Response body - -This is extremely useful when debugging prompts and making sure they're working as expected. Performing tests in Pezzo allows you to ensure that whatever prompt you publish will work as expected, without having to write a single line of code! - -## Commiting and publishing - -Once you're happy with the prompt, there is one more step you need to do before you can consume it in our application. You need to **commit and publish** the prompt. - -Simply click the **Commit** button at the top right, and provide a message, such as `Initial version`. Then, hit **Commit**. - - - - - -Now that you've committed the prompt, go ahead and publish it. Click the **Publish** button at the top right, and choose the desired environment. In our case, we will choose the **Production** environment. - - - - -
- -Every Pezzo project comes built in with a Production environment. You can manage your environments in the **Environments** page of your project. - - -Congratulations! You've just created your first prompt and published it to production. In the next step you'll learn how to consume it in your application. - -## Consuming the prompt - -In this part of the tutorial, you'll learn how to consume the prompt you just published in a TypeScript application. For the full Pezzo Client documentation, you can [click here](http://localhost:3200/client/pezzo-client). - -### Install depdendencies - -Install the Pezzo Client and the OpenAI SDK: - -```bash -npm i @pezzo/client openai -``` - -### Consume prompt via Pezzo Client - -Here is a code example: - -```ts app.ts -import { Pezzo, PezzoOpenAI } from "@pezzo/client"; - -// Initialize the Pezzo client -const pezzo = new Pezzo({ - apiKey: "", - projectId: "", - environment: "Production", -}); - -// Initialize the OpenAI client -const openai = new PezzoOpenAI(pezzo); - -async function main() { - // Get the deployed "GenerateFacts" prompt version - const prompt = await pezzo.getPrompt("GenerateFacts"); - - // Call the OpenAI API, passing the prompt as an argument. You can override parameters if you wish. - const response = await openai.chat.completions.create(prompt, { - variables: { - // You can define variables that will be interpolated during execution. - numFacts: 3, - topic: "Artificial Intelligence", - }, - properties: { - // You can optionally specify custom properties that will be associated with the request. - someProperty: "someValue", - }, - }); - - console.log("response", response); -} - -main(); -``` - -[Click here to run this example on CodeSandbox](https://codesandbox.io/p/sandbox/pezzo-example-prompt-management-qv5f86?file=%2Fsrc%2Fapp.ts%3A1%2C1) - -**Let's explain what's going on here:** - -- First, we initialize the Pezzo client and the OpenAI client. We pass the Pezzo client to the OpenAI client so it can use it to fetch the prompt. -- Then, we fetch the prompt from Pezzo using the `getPrompt` method. This method returns the latest version of the prompt for the desired environment. -- Finally, we call the OpenAI API using the `createChatCompletion` method. This method takes the prompt as an argument and returns the response from the OpenAI API. - -Congratulations! You've just consumed your first prompt using Pezzo. The Pezzo Client has many more useful capabilities such as variables - -## Monitoring Requests - -Now that you've made a request, you should be able to monitor your results in the **Requests** page. [Check it out here](/platform/observability/requests). 
- diff --git a/apps/docs/introduction/tutorial-prompt-management/prompt-test-results.png b/apps/docs/introduction/tutorial-prompt-management/prompt-test-results.png deleted file mode 100644 index 87d5fad..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/prompt-test-results.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/publish-modal.png b/apps/docs/introduction/tutorial-prompt-management/publish-modal.png deleted file mode 100644 index 559ffb4..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/publish-modal.png and /dev/null differ diff --git a/apps/docs/introduction/tutorial-prompt-management/step-2-prompt-engineering.mdx b/apps/docs/introduction/tutorial-prompt-management/step-2-prompt-engineering.mdx deleted file mode 100644 index eadc5ef..0000000 --- a/apps/docs/introduction/tutorial-prompt-management/step-2-prompt-engineering.mdx +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: "Step 2: Prompt Engineering" -description: "Create your first prompt and do some prompt engineering" ---- - -## Create your first Prompt - -In the Prompts page, click the **+ New Prompt** button. This will open a modal where you can create a new Prompt. - -It's best to choose a descriptive name for your prompt. We will call it `FactGenerator`. - - - - - -Hit **Create** and you will be taken to the Prompt page. - - -## The Prompt Editor - -The Prompt Editor is the first screen you will see after selecting/creating a new prompt. This is wehere you will do some prompt engineering, commit and publish your prompts. You can find two main sections in the prompt editor - the **Content** and the **Settings**. - -### Prompt Content - -The Prompt Content is where you will write your prompt. In our case, we want to ask the LLM (in our case, OpenAI) to generate facts about a topic. - -Simply copy and paste the following: - -```plaintext -Generate {numFacts} facts about the following topic: "{topic}" -``` - - -You can define variables in the prompt content by using curly braces. In our case, we define the `numFacts` and `topic` variables. - - -### Prompt Settings -Depending on the LLM provider, the settings will vary. In our case, we will use the OpenAI API. Feel free to adjust the settings to your liking. - -However, for this tutorial, we will make the following adjustments: -- Temperate: `0` -- Max Response Length: `1000` - - -When expecting a structured response or strictly factual data, it's best to set the temperature to 0. This reduces the chance of "hallucinations". - - - - - - -## Testing the Prompt -Pezzo allows you to test your prompts before publishing them. This is a great way to quickly iterate on your prompts and make sure they're working as expected. - -To test your prompt, simply click the **Test** button. This will open a modal where you can enter the values for the variables you defined in the prompt content. - -For example, if we want to generate 3 facts about cats, we will enter the following: - - - - - -Then, hit **Test** and wait for the results. You should see something like this: - - - - - -In the test results you can find plenty of useful information such as: -- Token usage -- Cost -- Status (success, error) -- Duration -- Request body -- Response body - -This is extremely useful when debugging prompts and making sure they're working as expected. Performing tests in Pezzo allows you to ensure that whatever prompt you publish will work as expected, without having to write a single line of code! 
- -## Commiting and Publishing the Prompt - -Once we're happy with the prompt, there is one more step we need to do before we can consume it in our application. We need to **commit and publish** the prompt. - -Simply click the **Commit** button at the top right, and provide a message, such as `Initial version`. Then, hit **Commit**. - - - - - -Now that we've committed the prompt, we can publish it. Click the **Publish** button at the top right, and choose the desired environment. In our case, we will choose the **Production** environment. - - - - -
- -Every Pezzo project comes built in with a Production environment. You can manage your environments in the **Environments** page of your project. - - -Congratulations! You've just created your first prompt and published it to production. In the next step we will learn how to consume it in our application. \ No newline at end of file diff --git a/apps/docs/introduction/tutorial-prompt-management/step-3-consume-prompt.mdx b/apps/docs/introduction/tutorial-prompt-management/step-3-consume-prompt.mdx deleted file mode 100644 index 849cd0b..0000000 --- a/apps/docs/introduction/tutorial-prompt-management/step-3-consume-prompt.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: "Step 3: Consume Prompt" -description: "Consume the prompt in your application" ---- - -Now it's time to consume the prompt in our application. For this tutorial, we'll use Node.js with TypeScript. - -## Install Dependencies - -We'll need to install the Pezzo Client as well as the OpenAI NPM package. - -```bash -npm i @pezzo/client openai -``` - - diff --git a/apps/docs/introduction/tutorial-prompt-management/test-variables-modal.png b/apps/docs/introduction/tutorial-prompt-management/test-variables-modal.png deleted file mode 100644 index 93574d5..0000000 Binary files a/apps/docs/introduction/tutorial-prompt-management/test-variables-modal.png and /dev/null differ diff --git a/apps/docs/introduction/what-is-pezzo.mdx b/apps/docs/introduction/what-is-pezzo.mdx deleted file mode 100644 index 178f080..0000000 --- a/apps/docs/introduction/what-is-pezzo.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: 'What is Pezzo?' ---- - -Pezzo is a powerful open-source toolkit designed to streamline the process of AI development. It empowers developers and teams to leverage the full potential of AI models in their applications with ease. - - - - - -
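Before moving on, here is a hedged sketch of what the Step 3 consumption code could look like in Node.js with TypeScript, using the `openai` package installed above and the settings committed in Step 2. It is an illustration under assumptions — the model name is an arbitrary example, and it calls OpenAI directly rather than showing the `@pezzo/client` API, which additionally fetches the deployed prompt version and reports the request for observability (see the Pezzo SDK pages).

```typescript
import OpenAI from "openai";

// Hedged sketch: calling OpenAI directly with the prompt committed in Step 2.
// In a real app, @pezzo/client would fetch the deployed prompt version and
// report the request for you; its exact API is covered in the SDK docs.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generateFacts(topic: string, numFacts: number) {
  // Interpolated form of: Generate {numFacts} facts about the following topic: "{topic}"
  const prompt = `Generate ${numFacts} facts about the following topic: "${topic}"`;

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // example model choice, not mandated by the tutorial
    messages: [{ role: "user", content: prompt }],
    temperature: 0,   // Step 2 setting: keep factual output deterministic
    max_tokens: 1000, // Step 2 setting: Max Response Length
  });

  return completion.choices[0].message.content;
}

generateFacts("cats", 3).then(console.log);
```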
- -# Key Features -- 🎛ī¸ **Centralized Prompt Management:** Manage all AI prompts in one place for maximum visibility and efficiency. -- 🚀 **Streamlined Prompt Design & Versioning:** Create, edit, test and version prompts with ease. -- 🕜 **Instant Deployments:** Pezzo allows you to publish your prompts instantly, without requiring a full release cycle. -- 🔍 **Observability**: Access detailed prompt execution history, stats and metrics (duration, prompt cost, completion cost, etc.) for better insights. -- 🛠ī¸ **Troubleshooting:** Effortlessly resolve issues with your prompts. Time travel to retroactively fine-tune failed prompts and commit the fix instantly. -- 💰 **Cost Transparency**: Gain comprehensive cost transparency across all prompts and AI models. -- 🧑‍đŸ’ģ **Multiple Clients**: Support for [Node.js](/client/pezzo-client-node) and [Python](/client/pezzo-client-python). - -# Next Steps - - - Learn about Pezzo's robust observability features. - - - Learn how you can streamline your AI delivery with Pezzo. - - - Get started with Pezzo and OpenAI in 5 minutes. - - diff --git a/apps/docs/logo/dark.svg b/apps/docs/logo/dark.svg index 8bca740..db4cf22 100644 --- a/apps/docs/logo/dark.svg +++ b/apps/docs/logo/dark.svg @@ -1,22 +1,26 @@ - - - - - - + + + + + + + - - - - - - - - - - - - + + + + + + + + + + + + + + + diff --git a/apps/docs/logo/light.svg b/apps/docs/logo/light.svg new file mode 100644 index 0000000..c569a65 --- /dev/null +++ b/apps/docs/logo/light.svg @@ -0,0 +1,22 @@ + + + + + + + + + + + + + + + + + + + + + + diff --git a/apps/docs/mint.json b/apps/docs/mint.json index 085dcf6..11dfe67 100644 --- a/apps/docs/mint.json +++ b/apps/docs/mint.json @@ -1,29 +1,32 @@ { "$schema": "https://mintlify.com/schema.json", - "name": "UniLLM", + "name": "Starter Kit", "logo": { "dark": "/logo/dark.svg", "light": "/logo/light.svg" }, "favicon": "/favicon.png", "colors": { - "primary": "#059669", - "light": "#10b981", - "dark": "#065f46" - }, - "modeToggle": { - "default": "dark", - "isHidden": true + "primary": "#9563FF", + "light": "#AE87FF", + "dark": "#0D001D", + "background": { + "dark": "#090014" + }, + "anchors": { + "from": "#FF7F57", + "to": "#9563FF" + } }, "topbarLinks": [ { - "name": "🌟 Pezzo on GitHub", - "url": "https://github.com/pezzolabs/pezzo?ref=docs" + "name": "Support", + "url": "mailto:hi@mintlify.com" } ], "topbarCtaButton": { - "name": "🕹ī¸ Pezzo Console", - "url": "https://app.pezzo.ai?ref=docs" + "name": "Dashboard", + "url": "https://dashboard.mintlify.com" }, "tabs": [ { @@ -31,85 +34,49 @@ "url": "api-reference" } ], - "navigation": [ + "anchors": [ { - "group": "Getting Started", - "pages": [ - "introduction/what-is-pezzo", - "introduction/tutorial-prompt-management/overview", - "introduction/tutorial-observability/overview", - "introduction/docker-compose" - ] + "name": "Documentation", + "icon": "book-open-cover", + "url": "https://mintlify.com/docs" }, { - "group": "Observability", - "pages": [ - "platform/observability/overview", - "platform/observability/requests", - "platform/observability/metrics" - ] + "name": "Community", + "icon": "slack", + "url": "https://mintlify.com/community" }, { - "group": "Prompt Management", - "pages": [ - "platform/prompt-management/overview", - "platform/prompt-management/environments", - "platform/prompt-management/prompt-editor", - "platform/prompt-management/versioning-and-deployments" - ] - }, - { - "group": "Pezzo SDK", - "pages": [ - "client/pezzo-client-node", - "client/pezzo-client-python", - 
"client/integrations/openai", - "client/request-caching" - ] - }, + "name": "Blog", + "icon": "newspaper", + "url": "https://mintlify.com/blog" + } + ], + "navigation": [ { - "group": "Health", - "pages": [ - "api-reference/health/performs-a-health-check" - ] + "group": "Get Started", + "pages": ["introduction", "quickstart", "development"] }, { - "group": "Prompts", - "pages": [ - "api-reference/prompts/get-the-deployed-prompt-version-to-a-particular-environment" - ] + "group": "Essentials", + "pages": ["essentials/markdown", "essentials/code", "essentials/images", "essentials/settings", "essentials/navigation"] }, { - "group": "Reporting", - "pages": [ - "api-reference/reporting/report-a-request" - ] + "group": "API Documentation", + "pages": ["api-reference/authentication"] }, { - "group": "Cache", + "group": "Endpoint Examples", "pages": [ - "api-reference/cache/retrieve-cached-request", - "api-reference/cache/save-request-to-cache" + "api-reference/endpoint/get", + "api-reference/endpoint/create", + "api-reference/endpoint/update", + "api-reference/endpoint/delete" ] } ], "footerSocials": { - "github": "https://github.com/pezzolabs/pezzo", - "discord": "https://discord.gg/h5nBW5ySqQ", - "twitter": "https://twitter.com/pezzoai", - "linkedin": "https://linkedin.com/company/pezzo" - }, - "backgroundImage": "/background.png", - "analytics": { - "gtm": { - "tagId": "GTM-M338ZV95" - } - }, - "api": { - "baseUrl": "https://api.pezzo.ai/api", - "playground": { - "mode": "hide" - } - }, - "openapi": "/openapi.json" -} \ No newline at end of file + "twitter": "https://twitter.com/mintlify", + "github": "https://github.com/mintlify", + "linkedin": "https://www.linkedin.com/company/mintsearch" + } +} diff --git a/apps/docs/openapi.json b/apps/docs/openapi.json deleted file mode 100644 index bbdd707..0000000 --- a/apps/docs/openapi.json +++ /dev/null @@ -1 +0,0 @@ -{"openapi":"3.0.0","paths":{"/healthz":{"get":{"operationId":"HealthController_healthz","summary":"Performs a health check","parameters":[],"responses":{"200":{"description":"Returns the health status and current version"}},"tags":["Health"]}},"/prompts/v2/deployment":{"get":{"operationId":"PromptsController_getPromptDeployment","summary":"Get the deployed Prompt Version to a particular Environment","parameters":[{"name":"name","required":true,"in":"query","description":"The name of the prompt (case sensitive)","example":"PromptName","schema":{"type":"string"}},{"name":"environmentName","required":true,"in":"query","description":"The name of the environment (case sensitive)","example":"Production","schema":{"type":"string"}}],"responses":{"200":{"description":"Deployed prompt version object"},"404":{"description":"Prompt deployment not found for the specific environment name"},"500":{"description":"Internal server error"}},"tags":["Prompts"]}},"/reporting/v2/request":{"post":{"operationId":"ReportingController_reportRequest","summary":"Report a request","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/CreateReportDto"}}}},"responses":{"200":{"description":"Report has been reported successfully"},"500":{"description":"Internal server error"}},"tags":["Reporting"]}},"/cache/v1/request/retrieve":{"post":{"operationId":"CacheController_retrieveCachedRequest","summary":"Retrieve cached 
request","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/RetrieveCacheRequestDto"}}}},"responses":{"200":{"description":"Returns the cached request data."},"404":{"description":"Cached request not found."}},"tags":["Cache"]}},"/cache/v1/request/save":{"post":{"operationId":"CacheController_saveRequestToCache","summary":"Save request to cache","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/CacheRequestDto"}}}},"responses":{"200":{"description":"Returns the cache request result."}},"tags":["Cache"]}}},"info":{"title":"Pezzo API","description":"Specification of the Pezzo REST API, used by various clients.","version":"1.0","contact":{}},"tags":[],"servers":[],"components":{"schemas":{"PromptExecutionMetadataDto":{"type":"object","properties":{"provider":{"type":"string","description":"LLM provider","enum":["OpenAI","Azure","Anthropic"]},"client":{"type":"string","description":"Client name identifier","example":"pezzo-ts"},"clientVersion":{"type":"string","description":"Client version","example":"0.4.11"},"environment":{"type":"string","description":"The name of the Environment (case sensitive)","example":"Production"},"promptId":{"type":"string","description":"The ID of the reported prompt (if managed)","example":"c41jd0s93j000ud7kg7vekhi3"}},"required":["provider","client","clientVersion","environment"]},"ExecutionRequestDto":{"type":"object","properties":{"timestamp":{"format":"date-time","type":"string","description":"Request timestamp","example":"2021-01-01T00:00:00.000Z"},"body":{"type":"object","description":"Raw request body, as sent to the LLM","additionalProperties":true}},"required":["timestamp","body"]},"ExecutionResponseDto":{"type":"object","properties":{"timestamp":{"format":"date-time","type":"string","description":"Response timestamp","example":"2021-01-01T00:00:00.000Z"},"body":{"type":"object","description":"Raw response body, as received from the LLM","additionalProperties":true}},"required":["timestamp","body"]},"CreateReportDto":{"type":"object","properties":{"metadata":{"description":"Metadata","allOf":[{"$ref":"#/components/schemas/PromptExecutionMetadataDto"}]},"properties":{"type":"object","description":"Additional properties to be associated with the report","additionalProperties":true,"example":{"userId":"someUserId","traceId":"traceId"}},"request":{"$ref":"#/components/schemas/ExecutionRequestDto"},"response":{"$ref":"#/components/schemas/ExecutionResponseDto"},"cacheEnabled":{"type":"boolean","description":"Whether caching is enabled for the report","default":false},"cacheHit":{"type":"boolean","description":"Whether the report was generated from a cache hit or not","default":false}},"required":["metadata","request","response"]},"RetrieveCacheRequestDto":{"type":"object","properties":{"request":{"type":"object","description":"The request object to retrieve from cache","additionalProperties":true,"example":{"key1":"value1","key2":"value2"}}},"required":["request"]},"CacheRequestDto":{"type":"object","properties":{"request":{"type":"object","description":"The request object to cache","additionalProperties":true,"example":{"key1":"value1","key2":"value2"}},"response":{"type":"object","description":"The response object to cache","additionalProperties":true,"example":{"key1":"value1","key2":"value2"}}},"required":["request","response"]}}}} \ No newline at end of file diff --git a/apps/docs/platform/observability/filters.png 
b/apps/docs/platform/observability/filters.png deleted file mode 100644 index 90b248c..0000000 Binary files a/apps/docs/platform/observability/filters.png and /dev/null differ diff --git a/apps/docs/platform/observability/metrics.mdx b/apps/docs/platform/observability/metrics.mdx deleted file mode 100644 index 4d3f3ad..0000000 --- a/apps/docs/platform/observability/metrics.mdx +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: 'Metrics' ---- -Pezzo provides a rich set of metrics to help you understand your Generative AI performance. - -# Prompt Metrics - - -We are in the process of moving the Metrics view to be at the Project level. Until then, Metrics are only available for users who manage their prompts with Pezzo. [Learn more about Prompt Management with Pezzo](/platform/prompt-management). - - -If you feel that there are metrics that are missing, please let us know by [creating an issue on GitHub](https://github.com/pezzolabs/pezzo/issues). - - -If you manage your prompts with Pezzo, you can view insights and metrics for a specific prompt. - - - \ No newline at end of file diff --git a/apps/docs/platform/observability/overview.mdx b/apps/docs/platform/observability/overview.mdx deleted file mode 100644 index ba89ac6..0000000 --- a/apps/docs/platform/observability/overview.mdx +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: 'Observability Overview' -sidebarTitle: 'Overview' ---- -Pezzo enables you to easily observe your Generative AI operations. This includes: - -- **Traces and Requests** - Pezzo automatically traces your operations and requests and provides you with a detailed view of the execution of your operations. - **Advanced Filtering** - Filter your traces and requests by any field, including custom fields (e.g. user ID, correlation ID, etc). - **Metrics** - Pezzo automatically collects metrics from your LLM calls (execution time, cost, error rate, and more) and provides you with detailed dashboards. - **Alerts** *(Coming Soon)* - Define alerts on your metrics and get notified when something goes wrong. \ No newline at end of file diff --git a/apps/docs/platform/observability/prompt-metrics.png b/apps/docs/platform/observability/prompt-metrics.png deleted file mode 100644 index 0ea399a..0000000 Binary files a/apps/docs/platform/observability/prompt-metrics.png and /dev/null differ diff --git a/apps/docs/platform/observability/request-inspector.png b/apps/docs/platform/observability/request-inspector.png deleted file mode 100644 index 0564aca..0000000 Binary files a/apps/docs/platform/observability/request-inspector.png and /dev/null differ diff --git a/apps/docs/platform/observability/requests-view.png b/apps/docs/platform/observability/requests-view.png deleted file mode 100644 index 91697a1..0000000 Binary files a/apps/docs/platform/observability/requests-view.png and /dev/null differ diff --git a/apps/docs/platform/observability/requests.mdx b/apps/docs/platform/observability/requests.mdx deleted file mode 100644 index 0034add..0000000 --- a/apps/docs/platform/observability/requests.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: 'Requests & Traces' ---- -Observe your Generative AI operations with Pezzo. - -# Requests View - -Every project on Pezzo has a dedicated Requests view. This view shows you all the requests that have been made to your project. - - - - - -## Filters - -You can filter requests by various criteria. For example, you can filter by the status of the request, by date/time, or by the cost of execution.
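The entries in the Requests view come in through the Reporting API described in the OpenAPI document above. The following is a hedged TypeScript sketch of such a report: the field names mirror the `CreateReportDto` schema and the base URL comes from the previous `mint.json`, while authentication headers are omitted because the spec excerpt does not define them.

```typescript
// Hedged sketch: reporting an LLM request so it appears in the Requests view.
// Shapes follow the CreateReportDto schema from the OpenAPI spec above;
// authentication headers are intentionally left out (not defined in the excerpt).
async function reportRequest(): Promise<void> {
  const report = {
    metadata: {
      provider: "OpenAI",
      client: "pezzo-ts",
      clientVersion: "0.4.11",
      environment: "Production",
    },
    properties: { userId: "someUserId", traceId: "traceId" },
    request: {
      timestamp: new Date().toISOString(),
      body: { messages: [{ role: "user", content: "Generate 3 facts about cats" }] },
    },
    response: {
      timestamp: new Date().toISOString(),
      body: { choices: [{ message: { role: "assistant", content: "..." } }] },
    },
  };

  // Base URL taken from the previous mint.json: https://api.pezzo.ai/api
  await fetch("https://api.pezzo.ai/api/reporting/v2/request", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```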
- - - - -## Inspector - -You can inspect the details of any request by clicking on it. This will open the Inspector view. - - - - \ No newline at end of file diff --git a/apps/docs/platform/prompt-management/commit-modal.png b/apps/docs/platform/prompt-management/commit-modal.png deleted file mode 100644 index b349455..0000000 Binary files a/apps/docs/platform/prompt-management/commit-modal.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/environments-list.png b/apps/docs/platform/prompt-management/environments-list.png deleted file mode 100644 index 610a501..0000000 Binary files a/apps/docs/platform/prompt-management/environments-list.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/environments.mdx b/apps/docs/platform/prompt-management/environments.mdx deleted file mode 100644 index 1bcf9e6..0000000 --- a/apps/docs/platform/prompt-management/environments.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: 'Environments' ---- -Environments provide a way to deploy prompt versions gradually and help you manage your prompts in a more structured way, reducing the risk of breaking changes. - -You can find the environments tab in the left-hand side navigation bar, by clicking the **Environments** item on the menu. - -Here, you will find a list of all environments, or create a new one. - - -Environments are defined at the project level. Every project has a default environment called **Production**. - - - - - \ No newline at end of file diff --git a/apps/docs/platform/prompt-management/overview.mdx b/apps/docs/platform/prompt-management/overview.mdx deleted file mode 100644 index b99e0c6..0000000 --- a/apps/docs/platform/prompt-management/overview.mdx +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 'Prompt Management Overview' -sidebarTitle: 'Overview' ---- -Pezzo provides you with a robust set of tools to streamline your prompt design, testing, version control and delivery. - -- **Prompt Management** - Manage your Generative AI prompts in a single platform, regardless of the provider or model. - **Prompt Editor** - Ideate, design and polish your prompts in the Pezzo platform, before writing a single line of code. - **Prompt Testing** - Test your prompts in the Pezzo platform, to ensure they are working as expected. - **Environments** - Create and manage multiple environments for your prompts, to support development, production and more. - **Version Control** - Manage your prompt versions, and roll back to previous versions if needed. - **Instant Deployment** - Deploy your prompts to production (or any other environment) with a single click, without going through a full release cycle. \ No newline at end of file diff --git a/apps/docs/platform/prompt-management/prompt-editor.mdx b/apps/docs/platform/prompt-management/prompt-editor.mdx deleted file mode 100644 index 32edd30..0000000 --- a/apps/docs/platform/prompt-management/prompt-editor.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: 'Prompt Editor' ---- -The Prompt Editor is where you can edit your prompt content and settings. - -When selecting a prompt from the Prompts page, you will be met with the Prompt view, which contains multiple tabs. The main tab is the **Prompt Editor**. - - - - - -## Prompt Content - -On the left-hand side you will find a text area. This is where you can write the content of your prompt. - - -You can specify variables using curly braces, which allows you to dynamically replace values when calling the LLM provider using the Pezzo Client. For example - `{name}`.
- - -## Prompt Settings - -On the right-hand side you will find the settings for your prompt. The first setting is the provider. Currently, Pezzo supports the following: -- OpenAI Chat Completion -- Azure OpenAI Chat Completion (Coming Soon) -- Anthropic (Coming Soon) - -After selecting the provider, a set of provider-specific settings will appear. You can tune these settings and commit them in a specific version, so they are automatically fetched by the Pezzo Client. \ No newline at end of file diff --git a/apps/docs/platform/prompt-management/prompt-editor.png b/apps/docs/platform/prompt-management/prompt-editor.png deleted file mode 100644 index f4849b2..0000000 Binary files a/apps/docs/platform/prompt-management/prompt-editor.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/prompt-versions.png b/apps/docs/platform/prompt-management/prompt-versions.png deleted file mode 100644 index fb66bf5..0000000 Binary files a/apps/docs/platform/prompt-management/prompt-versions.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/publish-modal.png b/apps/docs/platform/prompt-management/publish-modal.png deleted file mode 100644 index 3f06cb6..0000000 Binary files a/apps/docs/platform/prompt-management/publish-modal.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/version-selector.png b/apps/docs/platform/prompt-management/version-selector.png deleted file mode 100644 index d4ea87f..0000000 Binary files a/apps/docs/platform/prompt-management/version-selector.png and /dev/null differ diff --git a/apps/docs/platform/prompt-management/versioning-and-deployments.mdx b/apps/docs/platform/prompt-management/versioning-and-deployments.mdx deleted file mode 100644 index 80420f3..0000000 --- a/apps/docs/platform/prompt-management/versioning-and-deployments.mdx +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: 'Versioning & Deployments' ---- -Pezzo allows you to create multiple versions of your prompts and deploy them to your environments. - -## Committing a New Version - -After making changes to a prompt, you can commit a new version by clicking the **Commit** button in the top right corner of the prompt editor. A pop-up will appear, where you will provide a commit message. - - - - - -After confirming, a new version is created and ready for deployment. - -## Publishing a Version - -To publish a version, click the **Publish** button in the top right corner of the prompt editor. A pop-up will appear, where you will select the environment you want to deploy to. - - - - - -After clicking **Publish**, the prompt will be deployed to the selected environment instantly. - - -The ability to instantly publish changes is powerful. However, it can lead to issues if you are not careful. It is advised to set up a pre-production environment to test your changes before deploying to Production. [Learn about Environments in Pezzo](/platform/prompt-management/environments). - - -## Rolling Back a Version - -You can always publish an older version of a prompt. To do this, simply click on the version selector in the editor and select the desired version. Then, publish as usual. - - - - - -## Commit History - -When selecting a prompt, you can navigate to the **Versions** tab to see a full list of versions for the prompt. - - - - - -
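Once a version is published, applications fetch it through the deployment endpoint in the OpenAPI document above. Here is a hedged TypeScript sketch of that call; both query parameters are case sensitive, the base URL comes from the previous `mint.json`, and authentication headers are omitted because the spec excerpt does not define them.

```typescript
// Hedged sketch: fetching the prompt version currently deployed to an
// environment, per GET /prompts/v2/deployment in the OpenAPI spec above.
async function getDeployedPrompt(name: string, environmentName: string) {
  const url = new URL("https://api.pezzo.ai/api/prompts/v2/deployment");
  url.searchParams.set("name", name);                       // e.g. "FactGenerator" (case sensitive)
  url.searchParams.set("environmentName", environmentName); // e.g. "Production" (case sensitive)

  const res = await fetch(url);
  if (res.status === 404) {
    throw new Error(`No deployment of "${name}" found for environment "${environmentName}"`);
  }
  return res.json(); // the deployed prompt version object
}
```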
- - -We are working on a feature that will allow you to view the differences between versions. This will be available soon. - \ No newline at end of file diff --git a/apps/docs/quickstart.mdx b/apps/docs/quickstart.mdx new file mode 100644 index 0000000..d7f3486 --- /dev/null +++ b/apps/docs/quickstart.mdx @@ -0,0 +1,86 @@ +--- +title: 'Quickstart' +description: 'Start building awesome documentation in under 5 minutes' +--- + +## Set up your development environment + +Learn how to update your docs locally and deploy them to the public. + +### Edit and preview + + + + During the onboarding process, we created a repository on your Github with + your docs content. You can find this repository on our + [dashboard](https://dashboard.mintlify.com). To clone the repository + locally, follow these + [instructions](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) + in your terminal. + + + Previewing helps you make sure your changes look as intended. We built a + command line interface to render these changes locally. 1. Install the + [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the + documentation changes locally with this command: ``` npm i -g mintlify ``` + 2. Run the following command at the root of your documentation (where + `mint.json` is): ``` mintlify dev ``` + + + +### Deploy your changes + + + + + Our Github app automatically deploys your changes to your docs site, so you + don't need to manage deployments yourself. You can find the link to install on + your [dashboard](https://dashboard.mintlify.com). Once the bot has been + successfully installed, there should be a check mark next to the commit hash + of the repo. + + + [Commit and push your changes to + Git](https://docs.github.com/en/get-started/using-git/pushing-commits-to-a-remote-repository#about-git-push) + so that your changes appear on your docs site. If you push and don't see that + the Github app successfully deployed your changes, you can also manually + update your docs through our [dashboard](https://dashboard.mintlify.com). + + + + +## Update your docs + +Add content directly in your files with MDX syntax and React components. You can use any of our components, or even build your own. + + + + + Add flair to your docs with personalized branding. + + + + Implement your OpenAPI spec and enable API user interaction. + + + + Draw insights from user interactions with your documentation. + + + + Keep your docs on your own website's subdomain. + + +