
Support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client #13196


Open · wants to merge 9 commits into master

Conversation

matteoserva
Contributor

@matteoserva matteoserva commented Apr 29, 2025

This PR implements handling of additional jinja template parameters, used for example to set enable_thinking in Qwen3 models.

The official template is only partially compatible, so I modified it to use only supported features.
The modified template is here: https://pastebin.com/16ZpCLHk https://pastebin.com/GGuTbFRc
It should be loaded with llama-server --jinja --chat-template-file {template_file}

It fixes #13160 and #13189

Test it with:

  • enable_thinking=false. Expected: {"prompt":"\n<|im_start|>user\nGive me a short introduction to large language models.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"}
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5,
  "chat_template_kwargs": {"enable_thinking": false}
}'
  • enable_thinking=true
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5,
  "chat_template_kwargs": {"enable_thinking": true}
}'
  • enable_thinking undefined
curl http://localhost:8080/apply-template -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-8B",
  "messages": [
    {"role": "user", "content": "Give me a short introduction to large language models."}
  ],
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "max_tokens": 8192,
  "presence_penalty": 1.5
}'
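A detail worth checking before sending any of the requests above: enable_thinking must be a JSON boolean, not the string "false", or the template will see a truthy value. A minimal local sanity check (no server needed; assumes python3 is available, and uses a trimmed-down payload modeled on the requests above):

```shell
# Validate the request body locally and confirm the type of enable_thinking.
payload='{
  "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
  "chat_template_kwargs": {"enable_thinking": false}
}'
echo "$payload" | python3 -c '
import json, sys
d = json.load(sys.stdin)
v = d["chat_template_kwargs"]["enable_thinking"]
print(type(v).__name__)  # a JSON false parses as a boolean, not a string
'
# prints: bool
```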

@matteoserva matteoserva requested a review from ngxson as a code owner April 29, 2025 18:58
@matteoserva matteoserva marked this pull request as draft April 29, 2025 18:58
@rhjdvsgsgks
Contributor

can you add chat_template_kwargs to cli argument as well?

@matteoserva
Contributor Author

matteoserva commented Apr 30, 2025

can you add chat_template_kwargs to cli argument as well?

I added it and tested it with the updated command (you might want to check the escaping of the double quotes):
--chat_template_kwargs "{\"enable_thinking\":false}" --jinja --chat-template-file qwen/qwen3_template.txt
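The double-quote escaping above is only needed because the JSON argument is itself wrapped in double quotes; in a POSIX shell, single-quoting the argument avoids escaping entirely. A quick sketch showing the two forms produce the exact same argument:

```shell
# Escaped double quotes vs. single quotes: the shell passes the program
# an identical string either way.
escaped="{\"enable_thinking\":false}"
plain='{"enable_thinking":false}'
[ "$escaped" = "$plain" ] && echo "identical"
# prints: identical
```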

@matteoserva matteoserva force-pushed the enable_thinking branch 2 times, most recently from d1861c4 to 01b58b5 on April 30, 2025 15:58
@matteoserva matteoserva changed the title [RFC] handling jinja extra template kwargs (Qwen3 enable_thinking feature) Support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client Apr 30, 2025
@matteoserva matteoserva marked this pull request as ready for review April 30, 2025 15:59
@neolee

neolee commented May 1, 2025

Very useful for Qwen3 series. +1 for this feature!

@xiaomi102

The --chat_template_kwargs option does not work from the CLI: error: invalid argument: --chat-template-kwargs
git clone --depth=1 -b enable_thinking https://github.com/matteoserva/llama.cpp

@celsowm

celsowm commented May 9, 2025

@ggerganov is there any reason why this PR has not been accepted and merged yet?

@matteoserva
Contributor Author

Cannot work with the --chat_template_kwargs option from CLI: error: invalid argument: --chat-template-kwargs

This PR is implemented only for llama-server and its webui.

llama-cli has unresolved bugs that prevent me from enabling this feature.

@xiaomi102

Cannot work with the --chat_template_kwargs option from CLI: error: invalid argument: --chat-template-kwargs

This PR is implemented only for llama-server and its webui.

llama-cli has unresolved bugs that prevent me from enabling this feature.

Hope you'll integrate it for the CLI environment soon, thanks!

@matteoserva
Contributor Author

Hope you'll integrate it for the CLI environment soon, thanks!

I'll open a new PR soon for llama-cli. The code is ready but it's blocked by #13402 and #13404

@celsowm

celsowm commented May 15, 2025

It would be nice to have an enable_thinking checkbox or something like that in the llama.cpp webui too.

@strawberrymelonpanda
Contributor

strawberrymelonpanda commented May 15, 2025

@ggerganov is there any reason why this PR has not been accepted and merged yet?

@celsowm Lack of eyes on this area would be my guess.

With 438 open PRs (many obsolete), I've kind of come to accept I'll need to pull in some PRs of interest to me when building.

@neolee

neolee commented May 16, 2025

@ggerganov is there any reason why this PR has not been accepted and merged yet?

@celsowm Lack of eyes on this area would be my guess.

With 438 open PRs (many obsolete), I've kind of come to accept I'll need to pull in some PRs of interest to me when building.

vLLM and SGLang had this feature on the day Qwen3 was released. Meanwhile, many useful enhancement and fix PRs become obsolete simply because of merge delays here in the llama.cpp community. Really sad about that.

@ggerganov ggerganov requested a review from ochafik May 16, 2025 05:28
matteoserva and others added 4 commits May 16, 2025 08:22
Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
coding standard: cosmetic changes

Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
@matteoserva matteoserva requested a review from ggerganov May 16, 2025 06:52
@Neath

Neath commented May 16, 2025

This is so necessary when dealing with Qwen3! Can't wait to see this merged and be able to use the latest version with this <3

@taha-yassine

FYI, now that #13573 is merged, the official template should work as expected and there's no need to use the modified one. We're one step closer to having proper support for Qwen3. One remaining thing is the correct handling of the <think>...</think> part in previous assistant messages.

Collaborator

@ochafik ochafik left a comment


@matteoserva Sorry for only getting to review this now! First couple of thoughts:

  • While being able to pass kwargs will be very useful in general (🎉), I think for the particular case of enable_thinking we will probably want to special-case it, since there are a few thinking models around, some of which force the thinking tag open (a common --disable-thinking flag could close their tags, and set enable_thinking: false for Qwen3).

    Building on #12379 (server: streaming of tool calls and thoughts when --jinja is on) would make this easy, for instance: 506e712

  • We'll want to pass the params even when there are tools (right now this is only set up in common_chat_params_init_without_tools). Ideally after the diffs PR goes in 😅

@@ -73,6 +74,7 @@ struct common_chat_templates_inputs {
bool parallel_tool_calls = false;
bool extract_reasoning = true;
std::chrono::system_clock::time_point now = std::chrono::system_clock::now();
std::map<std::string, std::string> chat_template_kwargs;
Collaborator


This is a map from string keys to stringified JSON values. Why not just store the stringified top-level JSON? (Maybe name it chat_additional_context_json or chat_template_kwargs_json?)

@neolee

neolee commented May 17, 2025

  • While being able to pass kwargs will be very useful in general (🎉), I think for the particular case of enable_thinking we will probably want to special case it, since there's a few thinking models around, some of which force the thinking tag open (a common --disable-thinking flag could close their tags, and set the enable_thinking: false for Qwen3).

Why not accept this PR first, to get the general "pass kwargs to the chat template" feature, and then implement the enable_thinking flag on top of it?


Successfully merging this pull request may close these issues.

Misc. bug: Qwen 3.0 "enable_thinking" parameter not working