GGML_ASSERT(cur_p->size > 0) failed, or gibberish on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU #13461


Closed
Panchovix opened this issue May 11, 2025 · 7 comments


@Panchovix

Panchovix commented May 11, 2025

Hi there! I hit this issue when trying to use higher values of -b and -ub with DeepSeek V3, since raising them increases prompt-processing (PP) performance a lot. After getting the error in the title I set the batch sizes back to the defaults, but the issue still happens. The flags in question are sketched below.
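
For reference, a rough sketch of the relevant flags (the second line shows what I believe are the llama.cpp defaults; the model path and tensor-override flags are shortened here):

./llama-server -m <model.gguf> ... -b 4096 -ub 4096    # raised logical/physical batch sizes
./llama-server -m <model.gguf> ...                     # defaults: -b 2048 -ub 512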

The setup is an RTX 5090 + 2x RTX 4090 + an RTX A6000, a Ryzen 7 7800X3D, 192 GB RAM, on Fedora 42 (llama.cpp built with GCC 14).

The log is:

pancho@fedora:/run/media/pancho/4C4643C74643B10E/ChatIAs/llama.cpp/lenux/bin$ ./llama-server -m '/run/media/pancho/14D6DF2AD6DF0B3E/models_llm/DeepSeek-V3-0324-UD-Q2_K_XL-00001-of-00006.gguf' -c 16384 --no-mmap -ngl 999 -ot "blk.(0|1|2|3|4|5|6|7).ffn.=CUDA0" -ot "blk.(8|9|10|11).ffn.=CUDA1" -ot "blk.(12|13|14|15).ffn.=CUDA2" -ot "blk.(16|17|18|19|20|21|22|23).ffn.=CUDA3" -ot "ffn.*=CPU" -fa -mg 0 -ub 4096 -b 4096
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
  Device 1: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 2: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
  Device 3: NVIDIA RTX A6000, compute capability 8.6, VMM: yes
build: 5349 (9a390c48) with gcc-14 (GCC) 14.2.1 20250210 (Red Hat 14.2.1-8) for x86_64-redhat-linux
system info: n_threads = 8, n_threads_batch = 8, total_threads = 16

system_info: n_threads = 8 (n_threads_batch = 8) / 16 | CUDA : ARCHS = 860,890,1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | FA_ALL_QUANTS = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

main: binding port with default address family
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 15
main: loading model
srv    load_model: loading model '/run/media/pancho/14D6DF2AD6DF0B3E/models_llm/DeepSeek-V3-0324-UD-Q2_K_XL-00001-of-00006.gguf'
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 29819 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA GeForce RTX 4090) - 23666 MiB free
llama_model_load_from_file_impl: using device CUDA2 (NVIDIA GeForce RTX 4090) - 23698 MiB free
llama_model_load_from_file_impl: using device CUDA3 (NVIDIA RTX A6000) - 48281 MiB free
llama_model_loader: additional 5 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 64 key-value pairs and 1086 tensors from /run/media/pancho/14D6DF2AD6DF0B3E/models_llm/DeepSeek-V3-0324-UD-Q2_K_XL-00001-of-00006.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = deepseek2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Deepseek-V3-0324
llama_model_loader: - kv   3:                            general.version str              = V3-0324
llama_model_loader: - kv   4:                           general.basename str              = Deepseek-V3-0324
llama_model_loader: - kv   5:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   6:                         general.size_label str              = 256x20B
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   9:                   general.base_model.count u32              = 1
llama_model_loader: - kv  10:                  general.base_model.0.name str              = DeepSeek V3 0324
llama_model_loader: - kv  11:               general.base_model.0.version str              = V3-0324
llama_model_loader: - kv  12:          general.base_model.0.organization str              = Deepseek Ai
llama_model_loader: - kv  13:              general.base_model.0.repo_url str              = https://huggingface.co/deepseek-ai/De...
llama_model_loader: - kv  14:                               general.tags arr[str,4]       = ["deepseek_v3", "deepseek", "unsloth"...
llama_model_loader: - kv  15:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  16:                      deepseek2.block_count u32              = 61
llama_model_loader: - kv  17:                   deepseek2.context_length u32              = 163840
llama_model_loader: - kv  18:                 deepseek2.embedding_length u32              = 7168
llama_model_loader: - kv  19:              deepseek2.feed_forward_length u32              = 18432
llama_model_loader: - kv  20:             deepseek2.attention.head_count u32              = 128
llama_model_loader: - kv  21:          deepseek2.attention.head_count_kv u32              = 1
llama_model_loader: - kv  22:                   deepseek2.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  23: deepseek2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  24:                deepseek2.expert_used_count u32              = 8
llama_model_loader: - kv  25:        deepseek2.leading_dense_block_count u32              = 3
llama_model_loader: - kv  26:                       deepseek2.vocab_size u32              = 129280
llama_model_loader: - kv  27:            deepseek2.attention.q_lora_rank u32              = 1536
llama_model_loader: - kv  28:           deepseek2.attention.kv_lora_rank u32              = 512
llama_model_loader: - kv  29:             deepseek2.attention.key_length u32              = 576
llama_model_loader: - kv  30:           deepseek2.attention.value_length u32              = 512
llama_model_loader: - kv  31:         deepseek2.attention.key_length_mla u32              = 192
llama_model_loader: - kv  32:       deepseek2.attention.value_length_mla u32              = 128
llama_model_loader: - kv  33:       deepseek2.expert_feed_forward_length u32              = 2048
llama_model_loader: - kv  34:                     deepseek2.expert_count u32              = 256
llama_model_loader: - kv  35:              deepseek2.expert_shared_count u32              = 1
llama_model_loader: - kv  36:             deepseek2.expert_weights_scale f32              = 2.500000
llama_model_loader: - kv  37:              deepseek2.expert_weights_norm bool             = true
llama_model_loader: - kv  38:               deepseek2.expert_gating_func u32              = 2
llama_model_loader: - kv  39:             deepseek2.rope.dimension_count u32              = 64
llama_model_loader: - kv  40:                deepseek2.rope.scaling.type str              = yarn
llama_model_loader: - kv  41:              deepseek2.rope.scaling.factor f32              = 40.000000
llama_model_loader: - kv  42: deepseek2.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv  43: deepseek2.rope.scaling.yarn_log_multiplier f32              = 0.100000
llama_model_loader: - kv  44:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  45:                         tokenizer.ggml.pre str              = deepseek-v3
llama_model_loader: - kv  46:                      tokenizer.ggml.tokens arr[str,129280]  = ["<|begin▁of▁sentence|>", "<�...
llama_model_loader: - kv  47:                  tokenizer.ggml.token_type arr[i32,129280]  = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  48:                      tokenizer.ggml.merges arr[str,127741]  = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv  49:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  50:                tokenizer.ggml.eos_token_id u32              = 1
llama_model_loader: - kv  51:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  52:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  53:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  54:                    tokenizer.chat_template str              = {% if not add_generation_prompt is de...
llama_model_loader: - kv  55:               general.quantization_version u32              = 2
llama_model_loader: - kv  56:                          general.file_type u32              = 10
llama_model_loader: - kv  57:                      quantize.imatrix.file str              = DeepSeek-V3-0324-GGUF/imatrix_unsloth...
llama_model_loader: - kv  58:                   quantize.imatrix.dataset str              = unsloth_calibration_DeepSeek-V3-0324.txt
llama_model_loader: - kv  59:             quantize.imatrix.entries_count i32              = 720
llama_model_loader: - kv  60:              quantize.imatrix.chunks_count i32              = 60
llama_model_loader: - kv  61:                                   split.no u16              = 0
llama_model_loader: - kv  62:                        split.tensors.count i32              = 1086
llama_model_loader: - kv  63:                                split.count u16              = 6
llama_model_loader: - type  f32:  361 tensors
llama_model_loader: - type q8_0:  122 tensors
llama_model_loader: - type q2_K:  122 tensors
llama_model_loader: - type q3_K:   54 tensors
llama_model_loader: - type q4_K:  389 tensors
llama_model_loader: - type q5_K:   23 tensors
llama_model_loader: - type q6_K:   15 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q2_K - Medium
print_info: file size   = 233.18 GiB (2.98 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 818
load: token to piece cache size = 0.8223 MB
print_info: arch             = deepseek2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 163840
print_info: n_embd           = 7168
print_info: n_layer          = 61
print_info: n_head           = 128
print_info: n_head_kv        = 1
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_swa_pattern    = 1
print_info: n_embd_head_k    = 576
print_info: n_embd_head_v    = 512
print_info: n_gqa            = 128
print_info: n_embd_k_gqa     = 576
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 18432
print_info: n_expert         = 256
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = yarn
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 0.025
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 671B
print_info: model params     = 671.03 B
print_info: general.name     = Deepseek-V3-0324
print_info: n_layer_dense_lead   = 3
print_info: n_lora_q             = 1536
print_info: n_lora_kv            = 512
print_info: n_embd_head_k_mla    = 192
print_info: n_embd_head_v_mla    = 128
print_info: n_ff_exp             = 2048
print_info: n_expert_shared      = 1
print_info: expert_weights_scale = 2.5
print_info: expert_weights_norm  = 1
print_info: expert_gating_func   = sigmoid
print_info: rope_yarn_log_mul    = 0.1000
print_info: vocab type       = BPE
print_info: n_vocab          = 129280
print_info: n_merges         = 127741
print_info: BOS token        = 0 '<|begin▁of▁sentence|>'
print_info: EOS token        = 1 '<|end▁of▁sentence|>'
print_info: EOT token        = 1 '<|end▁of▁sentence|>'
print_info: PAD token        = 2 '<|▁pad▁|>'
print_info: LF token         = 201 'Ċ'
print_info: FIM PRE token    = 128801 '<|fim▁begin|>'
print_info: FIM SUF token    = 128800 '<|fim▁hole|>'
print_info: FIM MID token    = 128802 '<|fim▁end|>'
print_info: EOG token        = 1 '<|end▁of▁sentence|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 61 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 62/62 layers to GPU
load_tensors:        CUDA0 model buffer size = 22188.53 MiB
load_tensors:        CUDA1 model buffer size = 17471.11 MiB
load_tensors:        CUDA2 model buffer size = 17472.86 MiB
load_tensors:        CUDA3 model buffer size = 34533.53 MiB
load_tensors:          CPU model buffer size = 147110.06 MiB
....................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 16384
llama_context: n_ctx_per_seq = 16384
llama_context: n_batch       = 4096
llama_context: n_ubatch      = 4096
llama_context: causal_attn   = 1
llama_context: flash_attn    = 1
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 0.025
llama_context: n_ctx_per_seq (16384) < n_ctx_train (163840) -- the full capacity of the model will not be utilized
llama_context:  CUDA_Host  output buffer size =     0.49 MiB
llama_kv_cache_unified: kv_size = 16384, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 1, padding = 256
llama_kv_cache_unified:      CUDA0 KV buffer size =   510.00 MiB
llama_kv_cache_unified:      CUDA1 KV buffer size =   408.00 MiB
llama_kv_cache_unified:      CUDA2 KV buffer size =   408.00 MiB
llama_kv_cache_unified:      CUDA3 KV buffer size =   748.00 MiB
llama_kv_cache_unified: KV self size  = 2074.00 MiB, K (f16): 1098.00 MiB, V (f16):  976.00 MiB
llama_context:      CUDA0 compute buffer size =  3571.00 MiB
llama_context:      CUDA1 compute buffer size =  3064.02 MiB
llama_context:      CUDA2 compute buffer size =  3064.02 MiB
llama_context:      CUDA3 compute buffer size =  3064.03 MiB
llama_context:  CUDA_Host compute buffer size =   368.05 MiB
llama_context: graph nodes  = 4782
llama_context: graph splits = 436 (with bs=4096), 214 (with bs=1)
common_init_from_params: setting dry_penalty_last_n to ctx_size = 16384
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv          init: initializing slots, n_slots = 1
slot         init: id  0 | task -1 | new slot n_ctx_slot = 16384
main: model loaded
...
slot launch_slot_: id  0 | task 0 | processing task
que    start_loop: update slots
srv  update_slots: posting NEXT_RESPONSE
que          post: new task, id = 1, front = 0
slot update_slots: id  0 | task 0 | new prompt, n_ctx_slot = 16384, n_keep = 0, n_prompt_tokens = 3596
slot update_slots: id  0 | task 0 | kv cache rm [0, end)
slot update_slots: id  0 | task 0 | prompt processing progress, n_past = 3596, n_tokens = 3596, progress = 1.000000
slot update_slots: id  0 | task 0 | prompt done, n_past = 3596, n_tokens = 3596
srv  update_slots: decoding batch, n_tokens = 3596
set_embeddings: value = 0
clear_adapter_lora: call
/run/media/pancho/4C4643C74643B10E/ChatIAs/llama.cpp/src/llama-sampling.cpp:204: GGML_ASSERT(cur_p->size > 0) failed
[New LWP 111814]
[New LWP 111813]
[New LWP 111812]
[New LWP 111811]
[New LWP 111810]
[New LWP 111809]
[New LWP 111808]
[New LWP 111093]
[New LWP 111092]
[New LWP 111091]
[New LWP 111090]
[New LWP 111089]
[New LWP 111088]
[New LWP 111087]
[New LWP 111086]
[New LWP 111085]
[New LWP 111084]
[New LWP 111083]
[New LWP 111082]
[New LWP 111081]
[New LWP 111080]
[New LWP 111079]
[New LWP 111078]
[New LWP 111077]
[New LWP 111076]
[New LWP 111075]
[New LWP 111074]
[New LWP 111073]
[New LWP 111072]
[New LWP 111071]
[New LWP 111070]
[New LWP 111069]
[New LWP 111068]

This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.fedoraproject.org/>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
Function(s) ^std::(move|forward|as_const|(__)?addressof) will be skipped when stepping.
Function(s) ^std::(shared|unique)_ptr<.*>::(get|operator) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|(forward_)?list|(unordered_|flat_)?(multi)?(map|set)|span)<.*>::(c?r?(begin|end)|front|back|data|size|empty) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|span)<.*>::operator.] will be skipped when stepping.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f1e86a876c2 in __syscall_cancel_arch () from /lib64/libc.so.6
#0  0x00007f1e86a876c2 in __syscall_cancel_arch () from /lib64/libc.so.6
#1  0x00007f1e86a7b9da in __internal_syscall_cancel () from /lib64/libc.so.6
#2  0x00007f1e86a7ba24 in __syscall_cancel () from /lib64/libc.so.6
#3  0x00007f1e86aeb5af in wait4 () from /lib64/libc.so.6
#4  0x00007f1e984b6fb6 in ggml_abort () from libggml-base.so
#5  0x00007f1e9874ca5e in llama_sampler_softmax_impl(llama_token_data_array*) () from libllama.so
#6  0x00007f1e98754d35 in llama_sampler_dist_apply(llama_sampler*, llama_token_data_array*) () from libllama.so
#7  0x00007f1e9874f50b in llama_sampler_chain_apply(llama_sampler*, llama_token_data_array*) () from libllama.so
#8  0x00000000005e1ea2 in common_sampler_sample(common_sampler*, llama_context*, int, bool) ()
#9  0x0000000000492993 in server_context::update_slots() ()
#10 0x000000000046083f in server_queue::start_loop() ()
#11 0x0000000000428dbd in main ()
[Inferior 1 (process 111063) detached]
Aborted (core dumped)
@Panchovix Panchovix changed the title GGML_ASSERT(cur_p->size > 0) failed when using -b and -ub 4096 on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU GGML_ASSERT(cur_p->size > 0) failed when using -b and -ub 3072 on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU May 11, 2025
@Panchovix Panchovix changed the title GGML_ASSERT(cur_p->size > 0) failed when using -b and -ub 3072 on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU GGML_ASSERT(cur_p->size > 0) failed on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU May 11, 2025
@Panchovix
Author

Just an update: it seems to happen with any batch size/ubatch size now.

@Panchovix
Author

Okay, I found the commit where the issue started: 0208355.

Reverting that commit makes it work fine; a rough sketch of what I did is below.
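
Roughly the revert-and-rebuild steps (a sketch only; the exact cmake flags will differ per setup, mine is a CUDA build):

git revert --no-edit 0208355            # back out the suspect commit on top of master
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
# rerunning the same llama-server command above no longer hits the assert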

@Panchovix Panchovix changed the title GGML_ASSERT(cur_p->size > 0) failed on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU GGML_ASSERT(cur_p->size > 0) failed, or gibberish on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU after https://github.com/ggml-org/llama.cpp/commit/0208355f42bdab88a08507ead4a6302790a08323 May 12, 2025
@Panchovix Panchovix changed the title GGML_ASSERT(cur_p->size > 0) failed, or gibberish on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU after https://github.com/ggml-org/llama.cpp/commit/0208355f42bdab88a08507ead4a6302790a08323 GGML_ASSERT(cur_p->size > 0) failed, or gibberish on DeepSeek V3 0324 (Q2_K_XL), CUDA + CPU May 12, 2025
@MikeLP

MikeLP commented May 12, 2025

Could it be related to #12878?

@terribleplan

terribleplan commented May 12, 2025

I am seeing the same issue. I am not able to reproduce it reliably: the same input that caused it to crash will work once I restart the server, but after a few more requests it will crash again.

Edit: Also, it seems that R1 is suddenly doing a lot more reasoning (e.g. a dozen lines of "Alternatively, ..." in a row). Could be placebo, or could be related to the "gibberish" output idea.

Edit 2: Okay, I am seeing outright gibberish in some reasoning/responses too; here are some excerpts from a response where it seemingly lost the plot entirely:

[...], like in the  a
... (interrupted) 

**Final Answer**
[...]
4. **The  Y. a. a. a.  (You have a. in the name but it's an acronym for something else, like Y. a. is the  a. in a. the. a.  (You have a. in the name but it's an acronym for something else, like Y. a. in the a. for  the. a.  (You have a. in the name but it's an acronym for something else.  (This is confusing, maybe.  (Perhaps  to use the  the a. in the name of the  the. a.  the. a. (This is  to the  the  the. a.  (This is too convoluted. Let's think of specific examples.  How about:
[...]
</think>
[...]
1. **The H.  (as in, the  a.  (The  a.  (The  a.  (But in the  a.  (The  a.  (The  a.  (The  a.  (The  a.  (This is not helpful. The user is a  a.  (The  a.  (The  a.  (The  a.  (This is not.  (But this is too.  (The  a.  (The  a.  (This is impossible. Let's think of  a.  (The  a.  (This is a dead end. 
[...]
5. **The E.  (a. in the  (But in the. a.  (But the a.  (The  a. in the name is a.  (But in the a.  (But the  a.  (This is a.  (H.  (H.  (But in the a.  (H.  (But this is a.  (The a.  (This is too convoluted.  
[...]

Crash log:

...
slot update_slots: id  1 | task 9031 | prompt done, n_past = 825, n_tokens = 473
slot      release: id  1 | task 9031 | stop processing: n_past = 872, truncated = 0
slot print_timing: id  1 | task 9031 |
prompt eval time =   20058.80 ms /   473 tokens (   42.41 ms per token,    23.58 tokens per second)
       eval time =    5732.18 ms /    48 tokens (  119.42 ms per token,     8.37 tokens per second)
      total time =   25790.98 ms /   521 tokens
srv  update_slots: all slots are idle
slot launch_slot_: id  1 | task 9080 | processing task
slot update_slots: id  1 | task 9080 | new prompt, n_ctx_slot = 32768, n_keep = 0, n_prompt_tokens = 872
slot update_slots: id  1 | task 9080 | kv cache rm [825, end)
slot update_slots: id  1 | task 9080 | prompt processing progress, n_past = 872, n_tokens = 47, progress = 0.053899
slot update_slots: id  1 | task 9080 | prompt done, n_past = 872, n_tokens = 47
/home/ml/llama.cpp/src/llama-sampling.cpp:204: GGML_ASSERT(cur_p->size > 0) failed
[New LWP 59885]
...
[New LWP 60114]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f2d20ad85ff in wait4 () from /lib64/libc.so.6
#0  0x00007f2d20ad85ff in wait4 () from /lib64/libc.so.6
#1  0x00007f2d233211a1 in ggml_abort () from /home/ml/llama.cpp/build-2025-05-11/bin/libggml-base.so
#2  0x00007f2d235e2740 in llama_sampler_softmax_impl(llama_token_data_array*) () from /home/ml/llama.cpp/build-2025-05-11/bin/libllama.so
#3  0x00007f2d235e9ad5 in llama_sampler_dist_apply(llama_sampler*, llama_token_data_array*) () from /home/ml/llama.cpp/build-2025-05-11/bin/libllama.so
#4  0x00007f2d235e41f3 in llama_sampler_chain_apply(llama_sampler*, llama_token_data_array*) () from /home/ml/llama.cpp/build-2025-05-11/bin/libllama.so
#5  0x00000000005ea620 in common_sampler_sample(common_sampler*, llama_context*, int, bool) ()
#6  0x00000000004bb0d9 in server_context::update_slots() ()
#7  0x000000000046a0d4 in server_queue::start_loop() ()
#8  0x00000000004392f5 in main ()
[Inferior 1 (process 59884) detached]
./run_r1.sh: line 3: 59884 Aborted                 (core dumped)

My command line for llama.cpp is:

CUDA_VISIBLE_DEVICES="1,2" ./bin/llama-server \
  --host 0.0.0.0 \
  --model ./models/gguf/unsloth_DeepSeek-R1-UD-Q4_K_XL.gguf \
  --ctx-size 131072 \
  --parallel 4 \
  -ngl 999 \
  -dev CUDA0,CUDA1 \
  --flash-attn \
  --split-mode layer \
  --no-mmap \
  --numa numactl \
  -ot "ffn_.*_exps.=CPU"

This was built against master (9a390c4) as follows:

CUDACXX=/usr/local/cuda/bin/nvcc cmake .. -DLLAMA_CUDA=ON -DGGML_CUDA=ON -DGGML_RPC=ON

cmake --build . --config Release -j 64

@segmond

segmond commented May 12, 2025

I'm seeing the same issue; it has nothing to do with any parameters, it's a bug in the code. This is with DeepSeek V3 as well, but the Q3_K_XL quant.

main: server is listening on http://0.0.0.0:8089 - starting the main loop
srv update_slots: all slots are idle
srv params_from_: Chat format: Content-only
slot launch_slot_: id 0 | task 0 | processing task
slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 16128, n_keep = 0, n_prompt_tokens = 88
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 88, n_tokens = 88, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 88, n_tokens = 88
/home/seg/llama.cpp/src/llama-sampling.cpp:204: GGML_ASSERT(cur_p->size > 0) failed
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
dsv3b.sh: line 8: 68577 Aborted (core dumped) ~/llama.cpp/build/bin/llama-server -ngl 62 --host 0.0.0.0 --path ~/llama.cpp/examples/server/public -m /llmzoo/models/DeepSeek-V3-0324-UD-Q3_K_XL.gguf --port 8089 --override-tensor "blk.([0-4]).ffn_(up|down)exp.=CUDA0,blk.([1][0257]|[5]).ffn(up|down)exp.=CUDA1,blk.([2][0257]|[6]).ffn(up|down)exp.=CUDA2,blk.([3][0257]|[7]).ffn(up|down)exp.=CUDA3,blk.([4][0257]|[6][01]).ffn(up|down)exp.=CUDA4,blk.([5][02579]|[6][2]).ffn(up|down)exp.=CUDA5,blk.([8-9]|[1-9][0-9]).ffn.exp.=CPU" -md ~/models/draft/DeepSeek-V3-0324-DRAFT-0.5B-Q8_0.gguf -ngld 127 -devd CUDA2 -cd 16000 -fa -mg 4 --no-mmap -c 16000

@terribleplan

I am having better results running without GPUs for now. With CUDA_VISIBLE_DEVICES="" I don't get the weird garbage output, and I haven't had a crash since I started testing.

So the CUDA FA change mentioned by @Panchovix seems a plausible culprit, at least. A rough sketch of the CPU-only invocation I'm using is below.
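
Roughly what I'm running now (a sketch only; same model as my command above, with the GPU-specific flags dropped since no CUDA devices are visible anyway):

CUDA_VISIBLE_DEVICES="" ./bin/llama-server \
  --host 0.0.0.0 \
  --model ./models/gguf/unsloth_DeepSeek-R1-UD-Q4_K_XL.gguf \
  --ctx-size 131072 \
  --parallel 4 \
  --no-mmap \
  --numa numactl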

@Panchovix
Author

I just tested, and so far PR #13469 seems to have fixed the issue. It is now merged into master.

Closing this, as @JohannesGaessler commented that it should be fixed as well.
