### Prerequisites

- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new and useful enhancement to share.
### Feature Description

**Summary**

The current `convert_hf_to_gguf.py` script fails to convert HuggingFace models that require `trust_remote_code=True`, especially those using custom tokenizers or architectures (e.g., `TikTokenTokenizer`). This results in runtime errors or interactive prompts that break automation and scripting workflows. Adding support for `trust_remote_code` (via a CLI flag or built-in toggle) would enable full compatibility with a growing set of modern models.

#### 🔥 Encountered Issues
During conversion of `huihui-ai/Moonlight-16B-A3B-Instruct-abliterated`, the following blockers were hit:

- Broken `tokenizer_config.json` → `JSONDecodeError`
- Script lacks support for `trust_remote_code=True` → no `--trust-remote-code` CLI flag exists
- Script attempts `.vocab` on `TikTokenTokenizer` → should use `tokenizer.model.n_vocab` or a hardcoded value

#### 🧠 Suggested Fixes
(Flexible depending on preference)

- Add a `--trust-remote-code` CLI flag and pass it through to `AutoTokenizer.from_pretrained()` and `AutoModelForCausalLM.from_pretrained()`
- Guard the `.vocab` access with a fallback for tokenizers that do not expose it
- Optional: auto-enable `trust_remote_code` if `tokenizer_class` is custom?
- Document the flag in `README` or `docs/gguf.md`
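One possible shape for the flag, as a minimal sketch (argument wiring only; this is not llama.cpp's actual code, and the forwarding to the HF loaders is shown as comments since it needs a real model directory):

```python
import argparse

# Hypothetical sketch of how convert_hf_to_gguf.py could expose the flag.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--trust-remote-code", action="store_true",
    help="allow execution of custom code shipped with the model",
)
# Simulate invoking the script with the flag for illustration.
args = parser.parse_args(["--trust-remote-code"])

# The flag would then be forwarded to the HuggingFace loaders, e.g.:
#   AutoTokenizer.from_pretrained(model_dir, trust_remote_code=args.trust_remote_code)
#   AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=args.trust_remote_code)
print(args.trust_remote_code)  # True when the flag is passed
```

Using `action="store_true"` keeps the default safe (remote code disabled unless explicitly requested).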
#### 🧪 Repro Steps

```bash
git clone https://huggingface.co/huihui-ai/Moonlight-16B-A3B-Instruct-abliterated
./convert_hf_to_gguf.py --verbose --outfile moonlight.gguf --outtype bf16 huihui-ai/Moonlight-16B-A3B-Instruct-abliterated
# fails with prompt + crashes if not patched
```
#### ✅ Workaround (What I Did)

Patched the script locally, replacing the `.vocab` call with `tokenizer.model.n_vocab`.
#### 🙏 Why This Matters

Many newer models are starting to:

- Use custom tokenizers (`TikToken`, `JinjaTokenizer`, etc.)
- Require remote code execution
- Break current conversion pipelines

Fixing this once will unlock dozens of models for GGUF and llama.cpp.

Happy to PR this if helpful. Thanks again for all the work. Your tooling is incredible.
### Motivation

Support for `trust_remote_code=True` is increasingly important as more HuggingFace models rely on custom tokenizers or architectures. Without it, `convert_hf_to_gguf.py` cannot load or convert these models, blocking compatibility with llama.cpp.

### Possible Implementation

_No response_