Granite Four #13550
Conversation
* ggml : improve ggml_mul speed when masking recurrent states
* ggml : make the ggml_mul fast broadcast path more consistently formatted
The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires workarounds to work correctly.
The max index is 31, so trimming the arguments is necessary.
Whoops, this is needed for the offset in the concatenated output.
This was initially added because states were masked with ggml_mul, but this is no longer done and so this "optimisation" is no longer necessary, or at least not worth the additional code complexity.
This makes the weight buft detection in src/llama.cpp simpler.
* convert : transpose Mamba-2 A, D and reshape SSM_NORM
This breaks existing conversions of Mamba-2 models to avoid some reshapes. Not sure if it's a good idea, but it makes the graph slightly cleaner.
* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks
And also fix multi-user inference for recurrent models by using cell_id instead of i as the kv cell index when populating s_copy.
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
This re-uses the Bamba code paths heavily and simply adds the missing parts for loading MoE and the shared expert. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
…_mamba*_layer Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
… impl to use mixins The challenge here is to give both the non-hybrid classes (llm_build_mamba and llm_build_granite) AND the hybrid class (llm_build_hybrid_mamba) access to the same intermediate "base class" functionality (build_mamba*_layer, build_granite_attention_layer) without running into trouble with diamond inheritance of llm_graph_context. Due to the non-trivial initialization that happens in llm_graph_context, diamond inheritance results in multiple initializations of the common base which cause problems around the unique ptrs. I wanted to get away from `self->` everywhere, but this is still a bit cleaner than making those methods static I think. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
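To make the diamond problem concrete, here is a minimal sketch (stand-in class names, not the real llama.cpp hierarchy) of why non-virtual inheritance of a base with non-trivial initialization bites here:

```cpp
// Minimal sketch of the diamond-inheritance pitfall described above.
// All class names are stand-ins, not the real llama.cpp types.
#include <cstdio>

struct graph_context {                    // stand-in for llm_graph_context
    graph_context() { std::puts("graph_context initialized"); }
};

struct mamba_mixin   : graph_context {};  // would hold build_mamba*_layer
struct granite_mixin : graph_context {};  // would hold build_granite_attention_layer

// Non-virtual diamond: the hybrid builder gets TWO graph_context subobjects,
// so the common base (and anything it owns, e.g. unique_ptrs) is set up twice.
struct hybrid_builder : mamba_mixin, granite_mixin {};

int main() {
    hybrid_builder h;  // prints "graph_context initialized" twice
    (void) h;
}
```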
…r builders This follows the pattern where the type of input is pinned to the type of memory and that is used to dispatch to the correct version of `build_rs` / `build_attn`. There's a lot of code duplication that can hopefully be pulled into common functions in the graph later. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
I've gone back and forth a lot about how/if to try to implement reuse of the "child model" layer types for hybrid models. At the end of the day, I think hybrid models are their own beast and even if their layers are inspired by other models, they should maintain control of their own layer building (in other words, the copy-paste method). Given that, the name should reflect that this is not a generic hybrid model builder, but rather a granite-specific hybrid model builder that can do MoE (granite 4) or dense (bamba). As part of this, I also cleaned up dangling comments from previous attempts at using static methods for reusability. Branch: GraniteFour Signed-off-by: Gabe Goodhart <[email protected]>
Subclasses of llm_graph_context cannot have extra fields, because the called destructor is not the one from the subclass. This otherwise would cause problems when running Mamba-(1|2) inference when compiled with -DGGML_SANITIZE_ADDRESS=ON
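A minimal sketch of the failure mode (assumed, simplified types rather than the actual llm_graph_context): with a non-virtual base destructor, destroying through a base pointer never runs the subclass destructor, so any extra members the subclass added are never cleaned up.

```cpp
// Simplified illustration of why subclass-only fields are unsafe here.
// graph_context_base stands in for llm_graph_context; not the real code.
#include <memory>

struct graph_context_base {
    ~graph_context_base() = default;      // non-virtual destructor
};

struct graph_context_hybrid : graph_context_base {
    std::unique_ptr<int> extra = std::make_unique<int>(42);  // subclass-only field
};

int main() {
    graph_context_base * ctx = new graph_context_hybrid();
    // Only ~graph_context_base runs: `extra` is never released (undefined
    // behaviour in general), which is what ASan flags with -DGGML_SANITIZE_ADDRESS=ON.
    delete ctx;
}
```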
* origin/compilade/mamba2: (29 commits)
  mamba : fix mismatched new and delete size for llm_build_mamba
  cuda : implement ssm scan for Mamba2
  ggml-cpu : reorder SVE FMA for consistency with other SIMD arches
  ggml : fix mamba2 ssm scan when compiled with SVE
  graph : fix recurrent state copies when avoiding copies
  kv-cache : allow context shift for recurrent models
  convert : avoid AutoConfig for Mamba and Mamba2 hparams
  kv-cache : remove const_cast when setting inputs for s_copy
  metal : single-user mamba2 inference works
  metal : add missing args for nb references in ssm_scan_f32_group
  metal : fix confusion between ; and ,
  convert : fix flake8 lint
  ggml : avoid multiply by D in GGML_OP_SSM_SCAN
  ggml : remove unused fast broadcast path in GGML_MUL
  metal : fix wrong number of tokens per sequence in SSM_SCAN
  metal : fix SSM_SCAN state head offset
  metal : add back n_seqs to SSM_SCAN args
  metal : remove unused arguments for SSM_SCAN
  metal : use log and exp instead of log1pf and expf in SSM_SCAN
  metal : fix SSM_SCAN pipeline scope
  ...
* mamba2-sync: (22 commits)
  recurrent : call balloc split_reset() in init_batch() (ggml-org#14414)
  ggml : add ggml_set_rows (ggml-org#14274)
  convert : fix broken sentencepiece vocab (ggml-org#14416)
  mamba : fix mismatched new and delete size for llm_build_mamba
  model : gemma3n text-only (ggml-org#14400)
  cmake: regen vulkan shaders when shaders-gen sources change (ggml-org#14398)
  llama : return mistral-v7-tekken as default template only (ggml-org#14390)
  metal : add special-case mat-vec mul for ne00 == 4 (ggml-org#14385)
  metal : batch rows copy in a single threadgroup (ggml-org#14384)
  docs: update s390x documentation + add faq (ggml-org#14389)
  musa: enable fp16 mma (all) and cublas on qy2 (ggml-org#13842)
  ggml-cpu: enable IBM NNPA Vector Intrinsics (ggml-org#14317)
  ggml : do not output unprintable characters on GGUF load failure (ggml-org#14381)
  sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (ggml-org#13973)
  opencl: ref count `ggml_backend_opencl_context` and refactor profiling (ggml-org#14254)
  batch : fix check for empty sequences in memory (ggml-org#14364)
  cmake : use LLAMA_BUILD_NUMBER when defining LLAMA_INSTALL_VERSION (ggml-org#14362)
  server : move no API key doc to /health (ggml-org#14352)
  main : honor --verbose-prompt on interactive prompts (ggml-org#14350)
  jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (ggml-org#14349)
  ...
@compilade @ggerganov @AnmolS1 I'm going to move the conversation about hybrid cache seg faults over here to avoid cluttering the
I think the issue may be a logic bug that is somehow being triggered due to the parallel prefill when running without
So, the question in my mind is how the two statuses could end up different and what (if any) problems this would cause for the hybrid cache.
If I put a conditional check on |
The problem is probably caused when
It definitely sounds like the recurrent cache's
I wonder if it would be simpler with a separate
Not sure if there's a situation where the same problem could happen with
Ah, yeah, so that makes sense that we could solve this in the recurrent cache as well by simply making it a no-op if status is |
I'm wondering if the |
Ok, yeah, the logic in kv_self_update would definitely trigger this if the two caches had different statuses. I think it makes sense to contain this within the
I'll open a standalone PR to fix this on
There are conditions where the two child contexts can end up with different status values based on the logic in the init_update constructor for llama_kv_cache_unified_context, which can conditionally set status to either LLAMA_MEMORY_STATUS_SUCCESS or LLAMA_MEMORY_STATUS_NO_UPDATE. See full discussion: ggml-org#13550 (comment) Branch: HybridCacheApplyLogic Signed-off-by: Gabe Goodhart <[email protected]>
Fix PR: #14428
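For illustration only, a hedged sketch of the kind of reconciliation discussed here. This is not the actual code in #14428; the function name and `LLAMA_MEMORY_STATUS_FAILED_PREPARE` are assumptions, while the SUCCESS / NO_UPDATE values come from the commit message above.

```cpp
// Hypothetical sketch: combine the statuses of the two child contexts of a
// hybrid memory so a NO_UPDATE child does not mask a SUCCESS sibling.
enum llama_memory_status {
    LLAMA_MEMORY_STATUS_SUCCESS,
    LLAMA_MEMORY_STATUS_NO_UPDATE,
    LLAMA_MEMORY_STATUS_FAILED_PREPARE,   // assumed failure value for illustration
};

static llama_memory_status combine_status(llama_memory_status attn, llama_memory_status recr) {
    // any failure wins
    if (attn == LLAMA_MEMORY_STATUS_FAILED_PREPARE || recr == LLAMA_MEMORY_STATUS_FAILED_PREPARE) {
        return LLAMA_MEMORY_STATUS_FAILED_PREPARE;
    }
    // only report NO_UPDATE when neither child has work to do
    if (attn == LLAMA_MEMORY_STATUS_NO_UPDATE && recr == LLAMA_MEMORY_STATUS_NO_UPDATE) {
        return LLAMA_MEMORY_STATUS_NO_UPDATE;
    }
    return LLAMA_MEMORY_STATUS_SUCCESS;
}
```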
* origin/master:
  metal : disable fast-math for some cpy kernels (ggml-org#14460)
  ggml-cpu: sycl: Re-enable exp f16 (ggml-org#14462)
  test-backend-ops : disable llama test (ggml-org#14461)
  cmake : Remove redundant include path in CMakeLists.txt (ggml-org#14452)
  scripts : make the shell scripts cross-platform (ggml-org#14341)
  server : support jinja extra template kwargs (Qwen3 enable_thinking feature), from command line and from client (ggml-org#13196)
  server : fix appearance of the chats list context menu for Safari (ggml-org#14322)
  SYCL: disable faulty fp16 exp kernel (ggml-org#14395)
  ggml : fix unmerged GGML_FPxx_TO_FPxx refactoring (ggml-org#14443)
  ggml : implement REGLU/GEGLU/SWIGLU ops (ggml-org#14158)
  vulkan: Add fusion support for RMS_NORM+MUL (ggml-org#14366)
  CUDA: add bf16 and f32 support to cublas_mul_mat_batched (ggml-org#14361)
  vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (ggml-org#14378)
  vulkan: lock accesses of pinned_memory vector (ggml-org#14333)
  model : add support for ERNIE 4.5 0.3B model (ggml-org#14408)
  fix async_mode bug (ggml-org#14432)
  ci : fix windows build and release (ggml-org#14431)
  vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (ggml-org#14427)
  graph : make llm_graph_context destructor virtual (ggml-org#14410)
* origin/gg/memory-is-fail: memory : correctly handle failure in apply()
* origin/master: memory : correctly handle failure in apply() (ggml-org#14438)
* origin/master:
  Add Vulkan images to docker.md (ggml-org#14472)
  CANN: update aclnnGroupedMatmulV2 to aclnnGroupedMatmulV3 (ggml-org#14411)
  vulkan: Split large mul_mat_id to fit in shared memory (ggml-org#14451)
  add GELU_ERF (ggml-org#14455)
  ggml : remove trailing whitespace (#0)
  sync : ggml
  ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
  ggml-quants : rename best_mad to best_error (ggml/1283)
  opencl : add GEGLU, REGLU, SWIGLU (ggml-org#14456)
  Add Conv2d for CPU (ggml-org#14388)
Description
This PR is the end-point for architecture support for Granite 4.0 (#13269). It incorporates a number of changes from other in-flight branches that will need to be merged first:
Additionally, this PR replaces some work done on other PRs / branches:
- `Bamba` support: Bamba architecture #10810
- `Bamba` support: https://github.com/gabe-l-hart/llama.cpp/tree/BambaArchitectureRefactor
- `Granite 4.0` support: https://github.com/gabe-l-hart/llama.cpp/tree/GraniteFourDraft
  - Like the `Bamba` work, this will also be abandoned in favor of this PR
- `Jamba`: llama : support Jamba hybrid Transformer-Mamba models #7531
  - Not merged to `master`.
  - I had planned to tackle `Jamba` support in this branch, but on further inspection, it looks like the `Jamba` architecture has some additional bells-and-whistles (e.g. sliding-window attention) that would need further work, so my plan is to leave `Jamba` off for now and possibly tackle it later (hopefully it's much easier than the original branch!)

Outstanding Questions
Besides the upstream PRs, there are a few questions to answer before this PR is merge ready:
- There are changes to `llama-kv-cache` beyond those in feat: Hybrid unified/recurrent cache #13276, but they depend on the addition of `hparams.recurrent_layer_arr`, which is only populated correctly if there is a valid model architecture to check against. Should I move all of these changes to the hybrid cache PR or keep them here where the model architectures become real?
- Is there a better way to implement `hparams.recurrent_layer_arr`? Using a max-layer-size `std::array` doesn't feel quite right (see the sketch after this list).
- Outputs differ slightly for `Bamba` and `granite-4.0-tiny-shared-preview` on this branch vs the respective draft branches, so I need to determine if this is due to changes in the attention implementation (i.e. "working as expected") or a bug somewhere.
- Using `dynamic_cast` to get the right cache type could be expensive (though it's likely negligible relative to the tensor math). Should we do something more clever to handle different cache types in `llama-graph`?
- The `switch` statement for determining the type of KV cache to allocate in `llama-model.cpp` seems redundant with `llama_model_is_recurrent` and `llama_model_is_hybrid`. Should we use those functions instead and eliminate the duplicate logic and additional place to tweak for new recurrent / hybrid models?
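As referenced in the `hparams.recurrent_layer_arr` question above, the shape of the idea is roughly the following (a sketch with an assumed struct name and an assumed `MAX_LAYERS` bound, not the actual llama.cpp definition):

```cpp
// Sketch of the max-layer-size std::array approach being questioned above.
// MAX_LAYERS and hparams_sketch are assumptions for illustration only.
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t MAX_LAYERS = 512;

struct hparams_sketch {
    uint32_t n_layer = 0;
    // one flag per possible layer: fixed-size and trivially copyable,
    // but wastes space for models with far fewer layers
    std::array<bool, MAX_LAYERS> recurrent_layer_arr = {};

    bool recurrent_layer(uint32_t il) const {
        return il < n_layer && recurrent_layer_arr[il];
    }
};
```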
Testing
To test out this branch, I've been using the following models:
- `granite-4.0-tiny-preview`: https://huggingface.co/ibm-granite/granite-4.0-tiny-preview
- `Bamba-9B-v1`: https://huggingface.co/ibm-ai-platform/Bamba-9B-v1
- `mamba2-370m-hf`: https://huggingface.co/AntonV/mamba2-370m-hf

Details
This PR has a lot of changes in it, some of which are isolated in the prereq-PRs above. In addition to the general `mamba2` and `llama_kv_cache_hybrid` changes, this PR does the following:

python side

- Conversion support for `BambaForCausalLM` and `GraniteMoeHybridForCausalLM`
- A change to `gguf_writer.py` that allows duplicate key/value pairs through `add_key_value` if (and only if) they match both value and type with the existing key. This is a convenience for hybrid models so that the converter doesn't need to rewrite the hparam conversion from multiple parents.
- A new `HybridAttention` section under `Keys` in `constants.py` to hold `attention.layer_indices`. OPEN QUESTION: Should this just go under `Attention`?

c++ side
- Add `llama_model_is_hybrid` akin to `llama_model_is_recurrent`
- Refactor `llama_model_is_recurrent` into `llm_arch_is_*` implemented in `llama-arch.*` and `llama_model_is_*` implemented in `llama-model.*`. This was done so that they could be used during model initialization before the model itself can be passed as the argument, specifically to determine how to populate `hparams.recurrent_layer_arr` (see below).
- Add `hparams.recurrent_layer_arr` and support parsing it
  - For non-recurrent architectures, `hparams.n_embd_k_s` / `hparams.n_embd_v_s` simply evaluate to `0`. This should be fine since none of those places interact with the hybrid caching.
- Add `hparams.recurrent_layer(uint32_t)` to check whether a given layer is recurrent
- Add `bamba` and `granitemoeshared` in `llama-arch.*` (the boring part!)
- Pass `hparams` as an additional argument to the `llama_model.create_memory` method
- In `llama-graph`, anywhere that a specific cache type needs to be fetched, it is grabbed using new methods `get_recurrent_cache` / `get_unified_cache`. These methods use `dynamic_cast` to handle both non-hybrid caches and hybrid caches (see the sketch after this list).
- Hybrid cache allocation in `llama-model.cpp`
- Model implementations for `bamba` and `granitemoehybrid` in `llama-model`
- Convert `build_mamba_layer` / `build_mamba2_layer` from `llm_build_mamba` and `build_attention_layer` / `build_layer_ffn` from `llm_build_granite` into `static` methods on their respective classes. This makes for some gross function signatures where member data needs to be explicitly passed, but it allows the hybrid model architecture(s) to use these methods without complex inheritance.
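Finally, a rough sketch of the `dynamic_cast`-based cache lookup referenced in the list above. The two getter names follow this PR's description, but the surrounding types are simplified stand-ins rather than the real llama.cpp classes:

```cpp
// Simplified stand-in types; only the shape of the dynamic_cast dispatch matters here.
struct memory_base { virtual ~memory_base() = default; };

struct kv_cache_unified   : memory_base { /* attention KV state   */ };
struct kv_cache_recurrent : memory_base { /* SSM / recurrent state */ };

// A hybrid cache exposes both children; a non-hybrid cache *is* one of the two.
struct kv_cache_hybrid : memory_base {
    kv_cache_unified   * unified   = nullptr;
    kv_cache_recurrent * recurrent = nullptr;
};

static kv_cache_recurrent * get_recurrent_cache(memory_base * mem) {
    if (auto * hyb = dynamic_cast<kv_cache_hybrid *>(mem)) {
        return hyb->recurrent;                       // hybrid: use the recurrent child
    }
    return dynamic_cast<kv_cache_recurrent *>(mem);  // non-hybrid: cast directly (or nullptr)
}

static kv_cache_unified * get_unified_cache(memory_base * mem) {
    if (auto * hyb = dynamic_cast<kv_cache_hybrid *>(mem)) {
        return hyb->unified;                         // hybrid: use the attention child
    }
    return dynamic_cast<kv_cache_unified *>(mem);    // non-hybrid: cast directly (or nullptr)
}
```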