
Commit 3feaf23

docs(drop_params.md): drop unsupported params
1 parent 8ab29d7

2 files changed (+111 −0 lines)
drop_params.md

Lines changed: 110 additions & 0 deletions
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Drop Unsupported Params

Drop OpenAI params that aren't supported by your LLM provider.

## Quick Start

```python
import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

litellm.drop_params = True # 👈 KEY CHANGE

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
)
```
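For contrast, here's a minimal sketch of what can happen without `drop_params` (the exact exception type varies by LiteLLM version and provider; `litellm.BadRequestError` is an assumption here):

```python
import litellm
import os

os.environ["COHERE_API_KEY"] = "co-.."

# drop_params is False by default, so unsupported params are not filtered out
try:
    litellm.completion(
        model="command-r",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
        response_format={"key": "value"},
    )
except litellm.BadRequestError as e:  # assumed exception type
    print(f"Unsupported param rejected: {e}")
```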
LiteLLM maps the supported OpenAI params for each provider + model (e.g. function calling is supported by Anthropic models on Bedrock, but not by Titan).

See `litellm.get_supported_openai_params("command-r")` [**Code**](https://github.com/BerriAI/litellm/blob/main/litellm/utils.py#L3584)

If a provider/model doesn't support a particular param, you can drop it.
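A quick way to inspect that mapping yourself, using the helper referenced above (the printed list is illustrative):

```python
import litellm

# list the OpenAI params LiteLLM can translate for this model
supported = litellm.get_supported_openai_params("command-r")
print(supported)  # e.g. ['stream', 'temperature', 'max_tokens', ...]

# any param missing from this list is a candidate for dropping
if "response_format" not in supported:
    print("response_format would be dropped when drop_params=True")
```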
## OpenAI Proxy Usage

```yaml
litellm_settings:
  drop_params: true
```
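With that config, every request through the proxy has unsupported params dropped. A hypothetical client call (the base URL, port, and key below are assumptions for a locally running proxy):

```python
from openai import OpenAI

# assumes a local LiteLLM proxy started with the config above
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    # dropped server-side, since drop_params: true is set in litellm_settings
    response_format={"key": "value"},
)
```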
## Pass drop_params in `completion(..)`

Pass `drop_params` only when calling specific models:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    drop_params=True
)
```

</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    drop_params: true # 👈 KEY CHANGE
  model_name: my-model
```

</TabItem>
</Tabs>
## Specify params to drop

To drop specific params when calling a provider (e.g. `logit_bias` for vllm), use `additional_drop_params`:

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    additional_drop_params=["response_format"]
)
```

</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
- litellm_params:
    api_base: my-base
    model: openai/my-model
    additional_drop_params: ["response_format"] # 👈 KEY CHANGE
  model_name: my-model
```

</TabItem>
</Tabs>

**additional_drop_params**: List or null - A list of OpenAI params you want to drop when making a call to the model.
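Conceptually, `additional_drop_params` just filters those keys out of the request before it's sent. A rough sketch of the idea (not LiteLLM's actual internals):

```python
def drop_additional_params(params: dict, to_drop: list) -> dict:
    """Return a copy of params without the keys listed in to_drop."""
    return {k: v for k, v in params.items() if k not in to_drop}

request = {"temperature": 0.7, "response_format": {"key": "value"}}
print(drop_additional_params(request, ["response_format"]))
# {'temperature': 0.7}
```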

docs/my-website/sidebars.js

Lines changed: 1 addition & 0 deletions

```diff
@@ -88,6 +88,7 @@ const sidebars = {
 },
 items: [
   "completion/input",
+  "completion/drop_params",
   "completion/prompt_formatting",
   "completion/output",
   "exception_mapping",
```