docs(drop_params.md): drop unsupported params
krrishdholakia committed Jun 21, 2024
1 parent 8ab29d7 commit 3feaf23
Showing 2 changed files with 111 additions and 0 deletions.
110 changes: 110 additions & 0 deletions docs/my-website/docs/completion/drop_params.md
@@ -0,0 +1,110 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Drop Unsupported Params

Drop OpenAI params that aren't supported by your LLM provider.

## Quick Start

```python
import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

litellm.drop_params = True # 👈 KEY CHANGE

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
)
```


LiteLLM maps all supported OpenAI params by provider + model (e.g. function calling is supported by Anthropic on Bedrock, but not by Titan).

See `litellm.get_supported_openai_params("command-r")` [**Code**](https://github.com/BerriAI/litellm/blob/main/litellm/utils.py#L3584)

If a provider/model doesn't support a particular param, you can drop it.
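For example, you can check which OpenAI params LiteLLM will pass through for a given model before calling it. A minimal sketch (the printed list below is illustrative, not exhaustive):

```python
import litellm

# List the OpenAI params LiteLLM considers supported for this model
supported = litellm.get_supported_openai_params("command-r")
print(supported)  # e.g. ['stream', 'temperature', 'max_tokens', ...]

# Any OpenAI param outside this list gets dropped when drop_params is enabled
```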

## OpenAI Proxy Usage

```yaml
litellm_settings:
  drop_params: true
```
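
Requests sent through the proxy will then have unsupported params dropped automatically. As a minimal sketch (assuming the proxy is running locally on its default port `4000`, a proxy key of `sk-1234`, and that `command-r` is configured in the proxy's `model_list`), you can call it with the OpenAI SDK:

```python
import openai

# Point the OpenAI client at the LiteLLM proxy (assumed local, default port 4000)
client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},  # dropped by the proxy if the model doesn't support it
)
print(response)
```
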
## Pass drop_params in `completion(..)`

Pass `drop_params=True` when calling specific models:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os
# set keys
os.environ["COHERE_API_KEY"] = "co-.."
response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    drop_params=True
)
```
</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    drop_params: true # 👈 KEY CHANGE
  model_name: my-model
```
</TabItem>
</Tabs>

## Specify params to drop

To drop specific params when calling a provider (e.g. `logit_bias` for vLLM), use `additional_drop_params`:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os
# set keys
os.environ["COHERE_API_KEY"] = "co-.."
response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    additional_drop_params=["response_format"]
)
```
</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    additional_drop_params: ["response_format"] # 👈 KEY CHANGE
  model_name: my-model
```
</TabItem>
</Tabs>

**additional_drop_params**: List or null - A list of OpenAI params to drop when making the call to the model.
1 change: 1 addition & 0 deletions docs/my-website/sidebars.js
@@ -88,6 +88,7 @@ const sidebars = {
},
items: [
"completion/input",
"completion/drop_params",
"completion/prompt_formatting",
"completion/output",
"exception_mapping",
