Eval bug: Qwen3, failed to parse chat template (jinja) #13178

Closed
@matteoserva

Description

Name and Version

ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon Graphics, gfx1103 (0x1103), VMM: no, Wave Size: 32
version: 5217 (e98b369)
built with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu

Operating systems

Linux

GGML backends

HIP

Hardware

amd 780M, gfx1103

Models

Qwen 3 models: Qwen_Qwen3-30B-A3B-Q5_K_L.gguf

Problem description & steps to reproduce

llama.cpp fails to parse the Qwen3 chat template; the error points at the list slicing syntax `messages[::-1]`, which the template parser does not support.

First Bad Commit

No response

Relevant log output

common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
common_chat_templates_init: failed to parse chat template (defaulting to chatml): Expected value expression at row 18, column 30:
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
                             ^
    {%- set index = (messages|length - 1) - loop.index0 %}

srv          init: initializing slots, n_slots = 1
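For context, the rejected expression `messages[::-1]` is a standard extended slice: Jinja2 follows Python slicing semantics here, so the template itself is valid Jinja. A minimal Python sketch (the message list is hypothetical, just for illustration) of what the template's reverse loop computes:

```python
# Hypothetical chat history, shaped like the `messages` list the
# Qwen3 template iterates over.
messages = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
    {"role": "user", "content": "bye"},
]

# messages[::-1] -- the extended slice the parser rejects.
# It is equivalent to an explicit reversed() copy of the list.
assert messages[::-1] == list(reversed(messages))

# The template body then recovers each message's original index:
#   {%- set index = (messages|length - 1) - loop.index0 %}
for loop_index0, message in enumerate(messages[::-1]):
    index = (len(messages) - 1) - loop_index0
    assert messages[index] is message
```

Jinja2 proper also provides the `reverse` filter (`messages | reverse`), which may serve as a workaround in template engines that do not implement extended slices.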

Metadata

Labels

bug (Something isn't working)
