diff --git a/.github/assets/logo-dark-mode.svg b/.github/assets/logo-dark-mode.svg
new file mode 100644
index 0000000..a6d5359
--- /dev/null
+++ b/.github/assets/logo-dark-mode.svg
@@ -0,0 +1,546 @@
+
diff --git a/.github/assets/logo-light-mode.svg b/.github/assets/logo-light-mode.svg
new file mode 100644
index 0000000..0fb2e2d
--- /dev/null
+++ b/.github/assets/logo-light-mode.svg
@@ -0,0 +1,547 @@
+
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..eb109fb
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,120 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, religion, or sexual identity
+and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our
+community include:
+
+- Demonstrating empathy and kindness toward other people
+- Being respectful of differing opinions, viewpoints, and experiences
+- Giving and gracefully accepting constructive feedback
+- Accepting responsibility and apologizing to those affected by our mistakes,
+ and learning from the experience
+- Focusing on what is best not just for us as individuals, but for the
+ overall community
+
+Examples of unacceptable behavior include:
+
+- The use of sexualized language or imagery, and sexual attention or
+ advances of any kind
+- Trolling, insulting or derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or email
+ address, without their explicit permission
+- Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of
+acceptable behavior and will take appropriate and fair corrective action in
+response to any behavior that they deem inappropriate, threatening, offensive,
+or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, and will communicate reasons for moderation
+decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces, and also applies when
+an individual is officially representing the community in public spaces.
+Examples of representing our community include using an official e-mail address,
+posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported to the community leaders responsible for enforcement at
+hello@pezzo.ai.
+All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the
+reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining
+the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed
+unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing
+clarity around the nature of the violation and an explanation of why the
+behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series
+of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No
+interaction with the people involved, including unsolicited interaction with
+those enforcing the Code of Conduct, for a specified period of time. This
+includes avoiding interactions in community spaces as well as external channels
+like social media. Violating these terms may lead to a temporary or
+permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including
+sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public
+communication with the community for a specified period of time. No public or
+private interaction with the people involved, including unsolicited interaction
+with those enforcing the Code of Conduct, is allowed during this period.
+Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within
+the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.0.
+
+[homepage]: https://www.contributor-covenant.org
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..6febe50
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,164 @@
+# Contributing
+
+We open sourced UniLLM because we believe in the power of community. We believe you can help make UniLLM better!
+We are excited to see what you will build with UniLLM, and we look forward to your contributions. We want to make contributing to this project as easy and transparent as possible, whether that means features, bug fixes, documentation updates, guides, examples, or anything else.
+
+## How can I contribute?
+
+Ready to contribute but seeking guidance? We have several avenues to assist you. Explore the next section for clarity on the kinds of contributions we appreciate and how to jump in. Reach out directly to the UniLLM team on [Discord](https://pezzo.cc/discord) for immediate assistance! Alternatively, you're welcome to raise an issue, and one of our dedicated maintainers will promptly steer you in the right direction!
+
+## Found a bug?
+
+If you find a bug in the source code, you can help us by [creating an issue](https://github.com/pezzolabs/unillm/issues/new) in our GitHub repository. Even better, you can submit a Pull Request with a fix.
+
+## Missing a feature?
+
+So, you've got an awesome feature in mind? Throw it over to us by [creating an issue](https://github.com/pezzolabs/unillm/issues/new) on our GitHub Repo.
+
+Planning to code a feature yourself? We love the enthusiasm, but hang on: it's always good to have a little chinwag with us before you burn that midnight oil. Unfortunately, not every feature will fit into our plans.
+
+- Dreaming big? Kick off by opening an issue and sketch out your cool ideas. It helps us all stay on the same page, avoids doing the same thing twice, and ensures your hard work gels well with the project.
+- Cooking up something small? Just craft it and [shoot it straight as a Pull Request](#submit-pr).
+
+## What do you need to know to help?
+
+If you want to help out with a code contribution, our project uses the following stack:
+
+- TypeScript
+- Node.js
+- Various APIs/SDKs of LLM providers
+
+If you don't feel ready to make a code contribution yet, no problem! You can also improve our documentation.
+
+# How do I make a code contribution?
+
+## Good first issues
+
+Are you new to open source contribution? Wondering how contributions work in our project? Here's a quick rundown.
+
+Find an issue that you're interested in addressing, or a feature that you'd like to add.
+You can use [this view](https://github.com/pezzolabs/unillm/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) which helps new contributors find easy gateways into our project.
+
+## Step 1: Make a fork
+
+Fork the UniLLM repository to your GitHub organization/account. This means that you'll have a copy of the repository under _your-GitHub-username/repository-name_.
+
+## Step 2: Clone the repository to your local machine
+
+```shell
+git clone https://github.com/{your-GitHub-username}/unillm.git
+```
+
+## Step 3: Prepare the development environment
+
+Set up and run the development environment on your local machine:
+
+**BEFORE** you run the following steps, make sure that:
+
+1. You have TypeScript installed globally on your machine: `npm install -g typescript`
+2. You are using node version ^18.16.0 || ^14.0.0
+3. You are using npm version ^8.1.0 || ^7.3.0
+4. You have `docker` installed and running on your machine
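The prerequisites above can be sanity-checked from the terminal. A minimal sketch (it only checks that each tool is on the PATH; actual version output varies by machine):

```shell
# Check that each prerequisite tool is available on the PATH.
for tool in node npm tsc docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```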
+
+```shell
+cd unillm
+npm install
+```
+
+## Step 4: Create a branch
+
+Create a new branch for your changes.
+To keep branch names uniform and easy to understand, please use the following conventions for branch naming.
+Generally speaking, it is a good idea to add a group/type prefix to a branch name.
+Here is a list of good examples:
+
+- for docs changes: `docs/{ISSUE_NUMBER}-{CUSTOM_NAME}`
+- for new features: `feat/{ISSUE_NUMBER}-{CUSTOM_NAME}`
+- for bug fixes: `fix/{ISSUE_NUMBER}-{CUSTOM_NAME}`
+
+```shell
+git checkout -b branch-name-here
+```
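As a sketch, here are some branch names that follow the conventions above; the issue numbers and custom names are made up for illustration:

```shell
# Hypothetical branch names following the {type}/{ISSUE_NUMBER}-{CUSTOM_NAME} convention:
branch_docs="docs/102-fix-readme-typos"
branch_feat="feat/87-azure-openai-examples"
branch_fix="fix/91-handle-empty-messages"

# You would then create and switch to one of them with, e.g.:
# git checkout -b "$branch_feat"
echo "$branch_feat"
```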
+
+## Step 5: Make your changes
+
+Update the code with your bug fix or new feature.
+
+## Step 6: Add the changes that are ready to be committed
+
+Stage the changes that are ready to be committed:
+
+```shell
+git add .
+```
+
+## Step 7: Commit the changes (Git)
+
+Commit the changes with a short message. (See below for more details on how we structure our commit messages)
+
+```shell
+git commit -m "<short summary of the change>"
+```
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
+UniLLM is a library that allows you to call any LLM using the OpenAI API, with 100% type safety.
+
+# Benefits
+- ✨ Integrate with any provider and model using the OpenAI API
+- 💬 Consistent chatCompletion responses and logs across all models and providers
+- 💯 Type safety across all providers and models
+- 🔁 Seamlessly switch between LLMs without rewriting your codebase
+- ✅ If you write tests for your service, you only need to test it once
+- 🔜 (Coming Soon) Request caching and rate limiting
+- 🔜 (Coming Soon) Cost monitoring and alerting
+
+# Usage
+
+## [Check our interactive documentation](https://docs.unillm.ai)
+
+## 💬 Chat Completions
+
+With UniLLM, you can use chat completions even for providers/models that don't natively support it (e.g. Anthropic).
+
+```bash
+npm i unillm
+```
-## What's inside?
-
-This Turborepo includes the following packages/apps:
-
-### Apps and Packages
-
-- `docs`: a [Next.js](https://nextjs.org/) app
-- `web`: another [Next.js](https://nextjs.org/) app
-- `ui`: a stub React component library shared by both `web` and `docs` applications
-- `eslint-config-custom`: `eslint` configurations (includes `eslint-config-next` and `eslint-config-prettier`)
-- `tsconfig`: `tsconfig.json`s used throughout the monorepo
-
-Each package/app is 100% [TypeScript](https://www.typescriptlang.org/).
-
-### Utilities
-
-This Turborepo has some additional tools already setup for you:
-
-- [TypeScript](https://www.typescriptlang.org/) for static type checking
-- [ESLint](https://eslint.org/) for code linting
-- [Prettier](https://prettier.io) for code formatting
-
-### Build
-
-To build all apps and packages, run the following command:
-
-```
-cd my-turborepo
-pnpm build
-```
+
+```ts
+import { UniLLM } from 'unillm';
+
+const uniLLM = new UniLLM();
+
+// OpenAI
+const response = await uniLLM.createChatCompletion("openai:gpt-3.5-turbo", { messages: ... });
+const response = await uniLLM.createChatCompletion("openai:gpt-4", { messages: ... });
+
+// Anthropic
+const response = await uniLLM.createChatCompletion("anthropic:claude-2", { messages: ... });
+const response = await uniLLM.createChatCompletion("anthropic:claude-1-instant", { messages: ... });
+
+// Azure OpenAI
+const response = await uniLLM.createChatCompletion("azure:openai", { messages: ... });
+
+// More coming soon!
+```
+
+Want to see more examples? Check out the **[interactive docs](https://docs.unillm.ai)**.
-
-### Develop
-
-To develop all apps and packages, run the following command:
-
-```
-cd my-turborepo
-pnpm dev
-```
-
-### Remote Caching
-
-Turborepo can use a technique known as [Remote Caching](https://turbo.build/repo/docs/core-concepts/remote-caching) to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.
-
-By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don't have an account you can [create one](https://vercel.com/signup), then enter the following commands:
-
-```
-cd my-turborepo
-npx turbo login
-```
-
-This will authenticate the Turborepo CLI with your [Vercel account](https://vercel.com/docs/concepts/personal-accounts/overview).
-
-Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:
-
-```
-npx turbo link
-```
-
-## Useful Links
+
+## ⚡️ Streaming
+
+To enable streaming, simply provide `stream: true` in the options object. Here is an example:
+
+```ts
+const response = await uniLLM.createChatCompletion("openai:gpt-3.5-turbo", {
+  messages: ...,
+  stream: true
+});
+```
+
+Want to see more examples? Check out the **[interactive docs](https://docs.unillm.ai)**.
+
+# Contributing
+
+We welcome contributions from the community! Please feel free to submit pull requests or create issues for bugs or feature suggestions.
+
+If you want to contribute but are not sure how, join our [Discord](https://pezzo.cc/discord) and we'll be happy to help you out!
+
+Please check out [CONTRIBUTING.md](CONTRIBUTING.md) before contributing.
-
-Learn more about the power of Turborepo:
-
-- [Tasks](https://turbo.build/repo/docs/core-concepts/monorepos/running-tasks)
-- [Caching](https://turbo.build/repo/docs/core-concepts/caching)
-- [Remote Caching](https://turbo.build/repo/docs/core-concepts/remote-caching)
-- [Filtering](https://turbo.build/repo/docs/core-concepts/monorepos/filtering)
-- [Configuration Options](https://turbo.build/repo/docs/reference/configuration)
-- [CLI Usage](https://turbo.build/repo/docs/reference/command-line-reference)
+
+# License
+
+This repository's source code is available under the [MIT license](LICENSE).
\ No newline at end of file
diff --git a/apps/docs/pages/index.mdx b/apps/docs/pages/index.mdx
index 7d415a9..eefb989 100644
--- a/apps/docs/pages/index.mdx
+++ b/apps/docs/pages/index.mdx
@@ -6,12 +6,11 @@ import { DynamicCodeExample } from '../components/DynamicCodeExample'
 UniLLM is a TypeScript library that enables you to interact with any LLM (Large Language Model) via a unified API - the OpenAI API, in a type-safe way.
 
 ## Benefits
-- ✨ Integrate with any provider and model using a unified interface (OpenAI)
+- ✨ Integrate with any provider and model using the OpenAI API
+- 💬 Consistent chatCompletion responses and logs across all models and providers
+- 💯 Type safety across all providers and models
 - 🔁 Seamlessly switch between LLMs without rewriting your codebase
 - ✅ If you write tests for your service, you only need to test it once
-- 🔎 Seamless integration with monitoring and observability tools
-- ⏩ Seamless integration with A/B testing tools without requiring a full release cycle
-- 💬 Consistent chatCompletion responses and logs across all models and providers
 - 🔜 (Coming Soon) Request caching and rate limiting
 - 🔜 (Coming Soon) Cost monitoring and alerting
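The README snippets above elide the `messages` option with `...`. As an illustration of what a filled-in options object might look like, here is a minimal sketch assuming the OpenAI chat-completion message format; the message contents are made up:

```typescript
// Hypothetical, filled-in version of the `messages: ...` placeholder.
// The array shape follows the OpenAI chat-completion message format.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarize the UniLLM README in one sentence." },
];

// The options object passed to createChatCompletion in the examples above:
const options = { messages, stream: false };

console.log(options.messages.length); // 2
```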