
Conversation

Contributor

@Xu-Wenqing Xu-Wenqing commented May 29, 2025

The DeepSeek-R1-0528 model supports function calling; this PR adds a function call chat template.

Usage:

vllm serve ... --enable-auto-tool-choice --tool-call-parser deepseek_v3 --chat-template examples/tool_chat_template_deepseekr1.jinja
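
As a quick sanity check of the template, a minimal client call might look like this (a sketch: the port, served model name, and the get_weather tool are illustrative assumptions, not part of this PR):

from openai import OpenAI

# Sketch only -- adjust base_url and model to your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool for the smoke test
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # expect one populated tool call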

Function Call Test
Used the Berkeley Function Calling Leaderboard (BFCL) to evaluate the function call template.

Evaluation Result:
🦍 Model: DeepSeek-R1-0528
🔍 Running test: simple
✅ Test completed: simple. 🎯 Accuracy: 0.9325
Number of models evaluated: 100%|███████████████████████████████████████████| 1/1 [00:00<00:00, 41.24it/s]
📈 Aggregating data to generate leaderboard score table...
🏁 Evaluation completed. See /Users/xuwenqing/function_call_eval/score/data_overall.csv for overall evaluation results on BFCL V3.

@Xu-Wenqing Xu-Wenqing changed the title Add DeepSeekR1-0528 function call chat template Add DeepSeek-R1-0528 function call chat template May 29, 2025
@mergify mergify bot added documentation Improvements or additions to documentation tool-calling labels May 29, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, a small and essential subset of CI tests that quickly catches errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@Xu-Wenqing Xu-Wenqing marked this pull request as ready for review May 29, 2025 04:57
@Xu-Wenqing
Contributor Author

@aarnphm @DarkLight1337

@Xu-Wenqing Xu-Wenqing requested a review from hmellor as a code owner May 29, 2025 05:06
@Xu-Wenqing
Contributor Author

#18931

@markluofd

I used the following command to start the server:

vllm serve /model --tensor-parallel-size 8 --pipeline-parallel-size 2 --trust-remote-code --gpu-memory-utilization 0.92 --enable-auto-tool-choice --tool-call-parser deepseek_v3 --chat-template /home/work/easyedge/llm/tool_chat_template_deepseekr1.jinja --max-model-len 98304 --host 0.0.0.0 --port 8669 --served-model-name DeepSeek-R1 --uvicorn-log-level info

and curled the server with the following body:

{
    "messages": [
        {
            "content": "",
            "role": "system"
        },
        {
            "content": "what's the magic function of 5?",
            "role": "user"
        }
    ],
    "model": "",
    "max_tokens": 2560,
    "stream": false,
    "temperature": 0.7,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "magic_function",
                "description": "Applies a magic function to an input.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "input": {
                            "type": "integer"
                        }
                    },
                    "required": [
                        "input"
                    ]
                }
            }
        }
    ]
}

but got the following result:

{
    "id": "chatcmpl-83f63afeb0644f1990e2c865cd08f7f2",
    "object": "chat.completion",
    "created": 1748572697,
    "model": "DeepSeek-R1",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "reasoning_content": null,
                "content": "<think>\nWe are given a function called \"magic_function\" that takes an integer input.\n The user query is: \"what's the magic function of 5?\"\n We should call the magic_function with input 5.\n We will output the function call in the required format.\n</think>\n",
                "tool_calls": []
            },
            "logprobs": null,
            "finish_reason": "tool_calls",
            "stop_reason": null
        }
    ],
    "usage": {
        "prompt_tokens": 160,
        "total_tokens": 238,
        "completion_tokens": 78,
        "prompt_tokens_details": null
    },
    "prompt_logprobs": null,
    "kv_transfer_params": null
}

It seems the tool parser failed to construct the tool_calls parameters. Have I used the wrong command?

@Xu-Wenqing
Contributor Author

> It seems the tool parser failed to construct the tool_calls parameters. Have I used the wrong command?

@markluofd You can add tool_choice="required" to your request.

@DarkLight1337 DarkLight1337 requested a review from aarnphm May 30, 2025 06:13
@markluofd

That failed too; it seems the response is not in JSON format 😂
Request:

{
    "messages": [
        {
            "content": "",
            "role": "system"
        },
        {
            "content": "what's the magic function of 5?",
            "role": "user"
        }
    ],
    "model": "",
    "max_tokens": 2560,
    "stream": false,
    "temperature": 0.7,
    "tool_choice": "required",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "magic_function",
                "description": "Applies a magic function to an input.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "input": {
                            "type": "integer"
                        }
                    },
                    "required": [
                        "input"
                    ]
                }
            }
        }
    ]
}

Response:

{
    "object": "error",
    "message": "1 validation error for list[function-wrap[__log_extra_fields__()]]\n  Invalid JSON: EOF while parsing a string at line 1 column 3 [type=json_invalid, input_value='[{\"', input_type=str]\n    For further information visit https://errors.pydantic.dev/2.11/v/json_invalid",
    "type": "BadRequestError",
    "param": null,
    "code": 400
}

I extracted the prompt from the vLLM log:

Received request chatcmpl-95a8d630d8f64a8cad7548c5057c59d5: prompt: '<|begin▁of▁sentence|>\n    You may call one or more functions to assist with the user query.\n\n    Here are the available functions:\n{"type": "function", "function": {"name": "magic_function", "description": "Applies a magic function to an input.", "parameters": {"type": "object", "properties": {"input": {"type": "integer"}}, "required": ["input"]}}}\n    For function call returns, you should first print <|tool▁calls▁begin|>\n    For each function call, you should return object like:\n\n    <|tool▁call▁begin|>function<|tool▁sep|><function_name>\n```json\n<function_arguments_in_json_format>\n```<|tool▁call▁end|>\n    At the end of function call returns, you should print <|tool▁calls▁end|><|end▁of▁sentence|>\n<|User|>what\'s the magic function of 5?    <|Assistant|>\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.95, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=2560, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=GuidedDecodingParams(json={'type': 'array', 'minItems': 1, 'items': {'type': 'object', 'anyOf': [{'properties': {'name': {'type': 'string', 'enum': ['magic_function']}, 'parameters': {'type': 'object', 'properties': {'input': {'type': 'integer'}}, 'required': ['input']}}, 'required': ['name', 'parameters']}]}}, regex=None, choice=None, grammar=None, json_object=None, backend=None, backend_was_auto=False, disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, whitespace_pattern=None, structural_tag=None), extra_args=None), prompt_token_ids: None, prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None
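
For reference, the guided-decoding schema in that log constrains the completion (with tool_choice="required") to a JSON array of {name, parameters} objects, and the 400 above indicates generation was cut off after '[{"' before a valid document was produced. A conforming completion, reconstructed from the logged schema, would parse like this (sketch):

import json

# Reconstructed from the guided-decoding schema in the log above: with
# tool_choice="required" the completion must be a JSON array of calls.
expected = '[{"name": "magic_function", "parameters": {"input": 5}}]'
calls = json.loads(expected)
print(calls[0]["name"], calls[0]["parameters"])  # magic_function {'input': 5}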

Member

@mgoin mgoin left a comment

Seems reasonable to me, thanks

@github-project-automation github-project-automation bot moved this from Backlog to In progress in DeepSeek V3/R1 May 30, 2025
@mgoin mgoin added the ready ONLY add when PR is ready to merge/full CI is needed label May 30, 2025
@NaiveYan

NaiveYan commented May 31, 2025

I'm encountering an issue when trying to use the DeepSeek-R1-0528-Qwen3-8B model. It appears to be unsupported, returning Error 400:

{'object': 'error', 
'message': 'DeepSeek-V3 Tool parser could not locate tool call start/end tokens in the tokenizer! None', 
'type': 'BadRequestError', 
'param': None, 
'code': 400}
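
The deepseek_v3 parser needs DeepSeek's special tool-call marker tokens to be present in the tokenizer, and the Qwen3-based distill apparently ships a tokenizer without them, hence this error. A quick way to check (a sketch, assuming transformers is installed; the marker strings are the ones visible in the prompt logged earlier in this thread):

from transformers import AutoTokenizer

# Sketch: verify whether the tokenizer contains the DeepSeek tool-call
# markers the deepseek_v3 parser looks for.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")
vocab = tok.get_vocab()
for marker in ("<|tool▁calls▁begin|>", "<|tool▁calls▁end|>"):
    print(marker, "present" if marker in vocab else "missing")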

@houseroad
Collaborator

Thanks for adding the support, @Xu-Wenqing. Btw, could you paste the test results in the PR description? Also do you want to include the updated version in this PR? It's fine to have another one to include the updated template.

@alllexx88

I'm experiencing the same issue with DeepSeek-R1-0528-Qwen3-8B as @NaiveYan. I don't know if it helps, but here's an ollama chat template that has tool calling working with this model: https://ollama.com/okamototk/deepseek-r1:8b/blobs/e94a8ecb9327
Thanks

@Xu-Wenqing
Contributor Author

@houseroad @wukaixingxp @markluofd @NaiveYan @alllexx88 Sorry for the late reply. The past few days were the Chinese Dragon Boat Festival, so I didn't check messages. I'll try the chat template on some test datasets again. Meanwhile, it seems DeepSeek has updated the DeepSeek-R1-0528 chat template: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528/commit/4236a6af538feda4548eca9ab308586007567f52#d2h-846292. I will also update the template here.

@Zongru-Wang

If I use a LangChain tool-calling agent, I get the following error: Error code: 400 - {'object': 'error', 'message': 'DeepSeek-V3 Tool parser could not locate tool call start/end tokens in the tokenizer! None', 'type': 'BadRequestError', 'param': None, 'code': 400}

vllm serve /deepseek-ai/DeepSeek-R1-0528-Qwen3-8B --tensor-parallel-size 8 --host 0.0.0.0 --port 10001 --api-key none --rope-scaling '{"factor": 2.0, "original_max_position_embeddings": 32768, "rope_type": "yarn"}' --gpu-memory-utilization 0.9 --enable-reasoning --reasoning-parser deepseek_r1 --guided_decoding_backend guidance --enable-auto-tool-choice --tool-call-parser deepseek_v3 --chat-template /home/ubuntu/wzr/LLM-MODELS/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B/tool_chat_template_deepseekr1.jinja --served-model-name DeepSeek-R1

@Zongru-Wang

I tried --tool-call-parser deepseek_v3 --chat-template examples/tool_chat_template_deepseekr1.jinja and got Error code: 400 - {'object': 'error', 'message': 'DeepSeek-V3 Tool parser could not locate tool call start/end tokens in the tokenizer! None', 'type': 'BadRequestError', 'param': None, 'code': 400}. If I use --tool-call-parser hermes, the vLLM backend shows: The following fields were present in the request but ignored: {'function_call'}.

I am using a LangChain agent to make tool calls; QwQ-32B and the Qwen3 series work fine for me.

Comment on lines 242 to 243
* `deepseek-ai/DeepSeek-V3-0324`
* `deepseek-ai/DeepSeek-R1-0528`
Member

How would this look in the docs?

Suggested change
* `deepseek-ai/DeepSeek-V3-0324`
* `deepseek-ai/DeepSeek-R1-0528`
* `deepseek-ai/DeepSeek-V3-0324` (`--tool-call-parser deepseek_v3 --chat-template examples/tool_chat_template_deepseekv3.jinja`)
* `deepseek-ai/DeepSeek-R1-0528` (`--tool-call-parser deepseek_v3 --chat-template examples/tool_chat_template_deepseekr1.jinja`)

Contributor Author

@Xu-Wenqing Xu-Wenqing Jun 4, 2025

@hmellor Updated the markdown file.

@Xu-Wenqing
Contributor Author

@houseroad Updated the chat template, and added test results in description.

@Xu-Wenqing Xu-Wenqing requested a review from hmellor June 4, 2025 11:04
Collaborator

@houseroad houseroad left a comment

Thanks!

@houseroad houseroad enabled auto-merge (squash) June 4, 2025 11:31
@houseroad houseroad merged commit 02658c2 into vllm-project:main Jun 4, 2025
49 checks passed
@github-project-automation github-project-automation bot moved this from In progress to Done in DeepSeek V3/R1 Jun 4, 2025
@menardorama

Hi,
I have tested this PR with https://huggingface.co/deepseek-ai/DeepSeek-R1-0528 and tool calling never works.

vllm serve deepseek-ai/DeepSeek-R1-0528 --port 8000 --trust-remote-code --tensor-parallel-size 8 --enable-reasoning --reasoning-parser deepseek_r1 --tool-call-parser deepseek_v3 --enable-auto-tool-choice --chat-template /vllm-templates/tool_chat_template_deepseekr1_v2.jinja

Has anybody had success using this model with this PR?

@qdivan

qdivan commented Jun 5, 2025

> Has anybody had success using this model with this PR?

I succeeded with tool calling. Maybe you did not put the template file in the right path.

@menardorama

/vllm-templates/tool_chat_template_deepseekr1_v2.jinja

Well that's an absolute path and I can read the file.

Are you using the same model?

I am running vllm 0.9.0.1 on NVIDIA B200 if that helps...

@qdivan

qdivan commented Jun 5, 2025

> Are you using the same model? I am running vllm 0.9.0.1 on NVIDIA B200 if that helps...

I use Docker and put the template at /data/vllm-extra/tool_chat_template_deepseekr1.jinja,

then used docker run to start the model, and it worked.

You can follow this script:

docker run -d \
  --name vllm-deepseek-r1 \
  --restart unless-stopped \
  --runtime=nvidia \
  --health-cmd="wget -qO- http://localhost:12345/v1/models >/dev/null 2>&1 || exit 1" \
  --health-interval=30s \
  --health-timeout=5s \
  --health-retries=3 \
  --health-start-period=300s \
  --gpus all \
  -p 12345:12345 \
  --ipc=host \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=3 \
  -v /data1/models:/data1/models \
  -v /data/vllm-cache:/data/vllm-cache \
  -v /data/app/vllm/log:/workspace/logs \
  -v /data/vllm-extra/tool_chat_template_deepseekr1.jinja:/vllm-workspace/examples/tool_chat_template_deepseekr1.jinja:ro \
  -e VLLM_CACHE_ROOT=/data/vllm-cache \
  -e VLLM_WORKER_MULTIPROC_METHOD=spawn \
  -e VLLM_MARLIN_USE_ATOMIC_ADD=1 \
  -e HF_HUB_OFFLINE=1 \
  -e VLLM_USE_MODELSCOPE=true \
  -e OMP_NUM_THREADS=1 \
  -e VLLM_USE_V1=1 \
  vllm/vllm-openai:latest \
  --host 0.0.0.0 \
  --port 12345 \
  --model /data1/models/DeepSeek-R1-0528 \
  --served-model-name deepseek-reasoning \
  --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.98 \
  --max-model-len 131071 \
  --max-seq-len-to-capture 8192 \
  --max-num-seqs 16 \
  --enable-chunked-prefill \
  --enable-prefix-caching \
  --enable-auto-tool-choice \
  --tool-call-parser deepseek_v3 \
  --chat-template examples/tool_chat_template_deepseekr1.jinja \
  --trust-remote-code
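
Once the container reports healthy, a tool-call smoke test against it could look like this (a sketch: port and served model name are taken from the docker run command above, and the magic_function tool mirrors the earlier repro in this thread):

import requests

resp = requests.post(
    "http://localhost:12345/v1/chat/completions",
    json={
        "model": "deepseek-reasoning",
        "messages": [{"role": "user", "content": "what's the magic function of 5?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "magic_function",
                "description": "Applies a magic function to an input.",
                "parameters": {
                    "type": "object",
                    "properties": {"input": {"type": "integer"}},
                    "required": ["input"],
                },
            },
        }],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["tool_calls"])  # expect a populated list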

@X1ll

X1ll commented Jul 8, 2025

> Function Call Test: Used the Berkeley Function Calling Leaderboard to evaluate the function call template. … 🎯 Accuracy: 0.9325

Is this test using BFCL FC or Prompt mode?

@bennorris123

Hi all,

It seems this PR was merged even though there was still ambiguity around its validity (see the comments above).

When I run the following command:

vllm serve cognitivecomputations/DeepSeek-R1-0528-AWQ
.......
--trust-remote-code
--tool-call-parser deepseek_v3
--chat-template examples/tool_chat_template_deepseekr1.jinja

The tool call JSON only gets parsed from the model response about 50% of the time. See below for an example failing output:

ChatCompletion(id='chatcmpl-28198aed4f16412a8c6cdaa50504f9d2', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='<think>\nWe are given a tool for web search. The user is asking for gold prices today.\n Since we don\'t have real-time data, we need to perform a web search to get the latest gold prices.\n We can call the web_search function with a query about today\'s gold prices.\n The function requires a single query string. We can use: "current gold prices today".\n Let\'s call the function accordingly.\n</think>\nI\'ll search for the latest gold prices for you. Let me find the most up-to-date information. \n\nI\'ll perform a web search for today\'s gold prices.\n</think>\n```json\n{"query":"gold prices today"}\n```<|tool▁call▁end|><|tool▁calls▁end|>', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None), content_filter_results={'hate': {'filtered': False}, 'self_harm': {'filtered': False}, 'sexual': {'filtered': False}, 'violence': {'filtered': False}, 'jailbreak': {'filtered': False, 'detected': False}, 'profanity': {'filtered': False, 'detected': False}})], created=1755705900, model='DeepSeek-R1-0528', object='chat.completion', service_tier=None, system_fingerprint='', usage=CompletionUsage(completion_tokens=136, prompt_tokens=161, total_tokens=297, completion_tokens_details=None, prompt_tokens_details=None))

As you can see, when using this updated chat template and the recommended tool parser, we still get empty function_call/tool_calls in the response. Can anyone recommend a solution?

Thanks
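
One possible client-side stopgap until the parser handles this reliably (purely illustrative, not part of this PR; the helper name and regex are assumptions) is to fall back to extracting the fenced JSON arguments from message.content whenever tool_calls comes back empty, as in the output above:

import json
import re

def extract_tool_args(content: str):
    # Grab the last ```json ... ``` block the model emitted in plain content;
    # returns the parsed arguments dict, or None if no block is found.
    matches = re.findall(r"```json\s*(\{.*?\})\s*```", content, re.DOTALL)
    return json.loads(matches[-1]) if matches else None

print(extract_tool_args('...```json\n{"query":"gold prices today"}\n```...'))
# -> {'query': 'gold prices today'}

Note this only recovers the arguments; the function name would still have to be inferred from context, since it is missing from the failing output.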


Labels

documentation (Improvements or additions to documentation), ready (ONLY add when PR is ready to merge/full CI is needed), tool-calling

Projects

Status: Done
