Merge branch 'main' into grit-prod
morgante committed Jun 21, 2024
2 parents 0dbd4c0 + 3feaf23 commit e09eaec
Showing 205 changed files with 26,991 additions and 17,878 deletions.
4 changes: 4 additions & 0 deletions .circleci/config.yml
@@ -65,6 +65,7 @@ jobs:
pip install "pydantic==2.7.1"
pip install "diskcache==5.6.1"
pip install "Pillow==10.3.0"
pip install "ijson==3.2.3"
- save_cache:
paths:
- ./venv
@@ -126,6 +127,7 @@ jobs:
            pip install jinja2
            pip install tokenizers
            pip install openai
            pip install ijson
      - run:
          name: Run tests
          command: |
@@ -180,6 +182,7 @@ jobs:
            pip install numpydoc
            pip install prisma
            pip install fastapi
            pip install ijson
            pip install "httpx==0.24.1"
            pip install "gunicorn==21.2.0"
            pip install "anyio==3.7.1"
@@ -202,6 +205,7 @@
            -e REDIS_PORT=$REDIS_PORT \
            -e AZURE_FRANCE_API_KEY=$AZURE_FRANCE_API_KEY \
            -e AZURE_EUROPE_API_KEY=$AZURE_EUROPE_API_KEY \
            -e MISTRAL_API_KEY=$MISTRAL_API_KEY \
            -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            -e AWS_REGION_NAME=$AWS_REGION_NAME \
10 changes: 10 additions & 0 deletions .github/dependabot.yaml
@@ -0,0 +1,10 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
    groups:
      github-actions:
        patterns:
          - "*"
22 changes: 19 additions & 3 deletions .github/workflows/ghcr_deploy.yml
@@ -25,6 +25,11 @@ jobs:
    if: github.repository == 'BerriAI/litellm'
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commit_hash }}
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v3
@@ -41,19 +46,22 @@
        name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: litellm/litellm:${{ github.event.inputs.tag || 'latest' }}
      -
        name: Build and push litellm-database image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          file: Dockerfile.database
          tags: litellm/litellm-database:${{ github.event.inputs.tag || 'latest' }}
      -
        name: Build and push litellm-spend-logs image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          file: ./litellm-js/spend-logs/Dockerfile
          tags: litellm/litellm-spend_logs:${{ github.event.inputs.tag || 'latest' }}
@@ -68,6 +76,8 @@ jobs:
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commit_hash }}
      # Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -92,7 +102,7 @@ jobs:
      - name: Build and push Docker image
        uses: docker/build-push-action@4976231911ebf5f32aad765192d35f942aa48cb8
        with:
          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta.outputs.tags }}-${{ github.event.inputs.release_type }} # if a tag is provided, use that, otherwise use the release tag, and if neither is available, use 'latest'
          labels: ${{ steps.meta.outputs.labels }}
@@ -106,6 +116,8 @@ jobs:
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commit_hash }}

      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -128,7 +140,7 @@
      - name: Build and push Database Docker image
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
          context: .
          file: Dockerfile.database
          push: true
          tags: ${{ steps.meta-database.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta-database.outputs.tags }}-${{ github.event.inputs.release_type }}
@@ -143,6 +155,8 @@
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commit_hash }}

      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -165,7 +179,7 @@
      - name: Build and push Database Docker image
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
          context: .
          file: ./litellm-js/spend-logs/Dockerfile
          push: true
          tags: ${{ steps.meta-spend-logs.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta-spend-logs.outputs.tags }}-${{ github.event.inputs.release_type }}
@@ -176,6 +190,8 @@
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commit_hash }}

      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
1 change: 1 addition & 0 deletions .gitignore
@@ -60,3 +60,4 @@ litellm/proxy/_experimental/out/404/index.html
litellm/proxy/_experimental/out/model_hub/index.html
litellm/proxy/_experimental/out/onboarding/index.html
litellm/tests/log.txt
litellm/tests/langfuse.log
30 changes: 22 additions & 8 deletions .pre-commit-config.yaml
@@ -1,4 +1,19 @@
repos:
  - repo: local
    hooks:
      - id: mypy
        name: mypy
        entry: python3 -m mypy --ignore-missing-imports
        language: system
        types: [python]
        files: ^litellm/
      - id: isort
        name: isort
        entry: isort
        language: system
        types: [python]
        files: litellm/.*\.py
        exclude: ^litellm/__init__.py$
  - repo: https://github.com/psf/black
    rev: 24.2.0
    hooks:
@@ -16,11 +31,10 @@ repos:
        name: Check if files match
        entry: python3 ci_cd/check_files_match.py
        language: system
  - repo: local
    hooks:
      - id: mypy
        name: mypy
        entry: python3 -m mypy --ignore-missing-imports
        language: system
        types: [python]
        files: ^litellm/
      # - id: check-file-length
      #   name: Check file length
      #   entry: python check_file_length.py
      #   args: ["10000"] # set your desired maximum number of lines
      #   language: python
      #   files: litellm/.*\.py
      #   exclude: ^litellm/tests/
28 changes: 28 additions & 0 deletions check_file_length.py
@@ -0,0 +1,28 @@
import sys


def check_file_length(max_lines, filenames):
    bad_files = []
    for filename in filenames:
        with open(filename, "r") as file:
            lines = file.readlines()
            if len(lines) > max_lines:
                bad_files.append((filename, len(lines)))
    return bad_files


if __name__ == "__main__":
    max_lines = int(sys.argv[1])
    filenames = sys.argv[2:]

    bad_files = check_file_length(max_lines, filenames)
    if bad_files:
        bad_files.sort(
            key=lambda x: x[1], reverse=True
        )  # Sort files by length in descending order
        for filename, length in bad_files:
            print(f"{filename}: {length} lines")

        sys.exit(1)
    else:
        sys.exit(0)
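A minimal sketch of invoking the helper directly, assuming the new file is importable as a module (the target path is just an example):

```python
# Hypothetical usage: flag any listed file longer than 10,000 lines
from check_file_length import check_file_length

for name, length in check_file_length(10000, ["litellm/utils.py"]):
    print(f"{name}: {length} lines")
```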
110 changes: 110 additions & 0 deletions docs/my-website/docs/completion/drop_params.md
@@ -0,0 +1,110 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Drop Unsupported Params

Drop OpenAI params that aren't supported by your LLM provider.

## Quick Start

```python
import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

litellm.drop_params = True # 👈 KEY CHANGE

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
)
```


LiteLLM maps all supported OpenAI params by provider + model (e.g. function calling is supported by Anthropic on Bedrock, but not by Titan).

See `litellm.get_supported_openai_params("command-r")` [**Code**](https://github.com/BerriAI/litellm/blob/main/litellm/utils.py#L3584)

If a provider/model doesn't support a particular param, you can drop it.
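As a quick sketch, you can check what LiteLLM will translate for a model before deciding what to drop (the printed list is illustrative, not verbatim output):

```python
import litellm

# Returns the OpenAI params LiteLLM can map for this provider/model
supported = litellm.get_supported_openai_params(model="command-r")
print(supported)  # e.g. ["stream", "temperature", "max_tokens", ...] (illustrative)
```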

## OpenAI Proxy Usage

```yaml
litellm_settings:
  drop_params: true
```

## Pass drop_params in `completion(..)`

Pass `drop_params=True` when calling specific models:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os
# set keys
os.environ["COHERE_API_KEY"] = "co-.."
response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    drop_params=True,
)
```
</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    drop_params: true # 👈 KEY CHANGE
  model_name: my-model
```
</TabItem>
</Tabs>

## Specify params to drop

To drop specific params when calling a provider (e.g. `logit_bias` for vLLM), use `additional_drop_params`:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os
# set keys
os.environ["COHERE_API_KEY"] = "co-.."
response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    additional_drop_params=["response_format"],
)
```
</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    additional_drop_params: ["response_format"] # 👈 KEY CHANGE
  model_name: my-model
```
</TabItem>
</Tabs>

**additional_drop_params**: *list or null* - A list of OpenAI params to drop when making a call to the model.
6 changes: 5 additions & 1 deletion docs/my-website/docs/completion/input.md
@@ -67,6 +67,10 @@ By default, LiteLLM raises an exception if the openai param being passed in isn'

To drop the param instead, set `litellm.drop_params = True` or `completion(..drop_params=True)`.

This **ONLY DROPS UNSUPPORTED OPENAI PARAMS**.

LiteLLM assumes any non-OpenAI param is provider-specific and passes it through as a kwarg in the request body.
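A minimal sketch of that pass-through behavior (`raw_prompting` is a Cohere-specific param, used here purely for illustration):

```python
import litellm

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    raw_prompting=True,  # not an OpenAI param → forwarded in the request body as-is
)
```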

:::

## Input Params
@@ -162,7 +166,7 @@ def completion(

- `function`: *object* - Required.

- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function.
- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.

- `none` is the default when no functions are present. `auto` is the default if functions are present.
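As an illustrative sketch of forcing a particular function (the tool definition and message here are made up for the example):

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for this example
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
```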
