
cudaMemcpyAsync invalid argument for CUDA 12.1, Ubuntu 22.04 #4638

Description

Problem

A CUDA API "invalid argument" error occurs when running Mixtral on the latest commit a206137.
I am not sure which commit caused it, but an older commit from the mixtral branch, e1241d9, works.

The line that causes the problem, according to the error message below:
https://github.com/yhyu13/llama.cpp/blob/master/ggml-cuda.cu#L8196
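
For context, a minimal, hedged sketch (not the actual llama.cpp code) of the failing call pattern: a device-to-device cudaMemcpyAsync wrapped in an error-check macro loosely modeled on the one in ggml-cuda.cu. The CUDA_CHECK name and the buffer size here are illustrative assumptions. An "invalid argument" from this call typically means one of the pointers is not a device allocation the driver recognizes (for example, an unmapped virtual-memory reservation handed out by a pool) or the size/kind arguments are inconsistent; the same-device copy below is valid and runs cleanly.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Error-check wrapper in the spirit of llama.cpp's CUDA error handling.
#define CUDA_CHECK(call)                                                      \
    do {                                                                      \
        cudaError_t err_ = (call);                                            \
        if (err_ != cudaSuccess) {                                            \
            fprintf(stderr, "CUDA error: %s: %s at %s:%d\n",                  \
                    #call, cudaGetErrorString(err_), __FILE__, __LINE__);     \
            exit(1);                                                          \
        }                                                                     \
    } while (0)

int main() {
    const size_t nbytes = 1 << 20;
    float *src = nullptr;
    float *dst = nullptr;
    cudaStream_t stream;

    CUDA_CHECK(cudaSetDevice(0));
    CUDA_CHECK(cudaStreamCreate(&stream));
    CUDA_CHECK(cudaMalloc((void **) &src, nbytes));
    CUDA_CHECK(cudaMalloc((void **) &dst, nbytes));

    // Both pointers come from the same device, so the arguments are valid and
    // the asynchronous device-to-device copy succeeds.
    CUDA_CHECK(cudaMemcpyAsync(dst, src, nbytes, cudaMemcpyDeviceToDevice, stream));
    CUDA_CHECK(cudaStreamSynchronize(stream));

    CUDA_CHECK(cudaFree(src));
    CUDA_CHECK(cudaFree(dst));
    CUDA_CHECK(cudaStreamDestroy(stream));
    printf("ok\n");
    return 0;
}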

Script

./build/bin/main \
    -m Mixtral-8x7B-Instruct-v0.1-GGUF/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf \
    -p "Image what will AI be like in the year 1010 A.D. \nANSWER:\n" \
    -n 32000 \
    -e \
    --n-gpu-layers 999 \
    --temp 0.0 \
    --top-k 0 \
    --top-p 1.0 \
    | tee ./log.txt

Log

Log start
main: build = 1700 (a206137)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1703571339
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
  Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 26 key-value pairs and 995 tensors from /media/hangyu5/Home/Documents/Hugging-Face/Mixtral-8x7B-Instruct-v0.1-GGUF/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = mistralai_mixtral-8x7b-instruct-v0.1
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:                         llama.expert_count u32              = 8
llama_model_loader: - kv  10:                    llama.expert_used_count u32              = 2
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:                          general.file_type u32              = 17
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  20:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:   32 tensors
llama_model_loader: - type q8_0:   64 tensors
llama_model_loader: - type q5_K:  833 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 8
llm_load_print_meta: n_expert_used    = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q5_K - Medium
llm_load_print_meta: model params     = 46.70 B
llm_load_print_meta: model size       = 30.02 GiB (5.52 BPW) 
llm_load_print_meta: general.name     = mistralai_mixtral-8x7b-instruct-v0.1
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size       =    0.38 MiB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: system memory used  =   86.32 MiB
llm_load_tensors: VRAM used           = 30649.55 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 64.00 MB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
llama_new_context_with_model: compute buffer total size = 117.72 MiB
llama_new_context_with_model: VRAM scratch buffer: 114.53 MiB
llama_new_context_with_model: total VRAM used: 30828.09 MiB (model: 30649.55 MiB, context: 178.53 MiB)
CUDA error: cudaMemcpyAsync(src1_ddq_i, src1_ddq_i_source, src1_ncols*src1_padded_col_size*q8_1_ts/q8_1_bs, cudaMemcpyDeviceToDevice, stream): invalid argument
  in function ggml_cuda_op_mul_mat at /home/hangyu5/Documents/Git-repoMy/llama.cpp/ggml-cuda.cu:8196
GGML_ASSERT: /home/hangyu5/Documents/Git-repoMy/llama.cpp/ggml-cuda.cu:239: !"CUDA error"
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.

The ptrace configuration file mentioned above, /etc/sysctl.d/10-ptrace.conf:

# The PTRACE system is used for debugging.  With it, a single user process
# can attach to any other dumpable process owned by the same user.  In the
# case of malicious software, it is possible to use PTRACE to access
# credentials that exist in memory (re-using existing SSH connections,
# extracting GPG agent information, etc).
#
# A PTRACE scope of "0" is the more permissive mode.  A scope of "1" limits
# PTRACE only to direct child processes (e.g. "gdb name-of-program" and
# "strace -f name-of-program" work, but gdb's "attach" and "strace -fp $PID"
# do not).  The PTRACE scope is ignored when a user has CAP_SYS_PTRACE, so
# "sudo strace -fp $PID" will work as before.  For more details see:
# https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace
#
# For applications launching crash handlers that need PTRACE, exceptions can
# be registered by the debugee by declaring in the segfault handler
# specifically which process will be using PTRACE on the debugee:
#   prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
#
# In general, PTRACE is not needed for the average running Ubuntu system.
# To that end, the default is to set the PTRACE scope to "1".  This value
# may not be appropriate for developers or servers with only admin accounts.
kernel.yama.ptrace_scope = 1
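
For reference, the exception mechanism described in the file above is what a program would use if it wants an external debugger (such as the one llama.cpp tries to attach on GGML_ASSERT) to be able to attach while ptrace_scope stays at 1. A minimal, hedged sketch, not llama.cpp code, assuming Linux with the Yama LSM enabled:

#include <cstdio>
#include <sys/prctl.h>
#include <unistd.h>

// Fallback definitions for older headers; the values match linux/prctl.h.
#ifndef PR_SET_PTRACER
#define PR_SET_PTRACER 0x59616d61
#endif
#ifndef PR_SET_PTRACER_ANY
#define PR_SET_PTRACER_ANY ((unsigned long) -1)
#endif

int main() {
    // Allow any process (or pass a specific debugger PID instead of
    // PR_SET_PTRACER_ANY) to ptrace-attach to this process, even when
    // kernel.yama.ptrace_scope is 1.
    if (prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_PTRACER)");
        return 1;
    }
    printf("external debuggers may now attach to pid %d\n", (int) getpid());
    return 0;
}

Alternatively, running sudo sysctl -w kernel.yama.ptrace_scope=0 relaxes the restriction system-wide until the next boot.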

Nvidia-smi

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        On  | 00000000:82:00.0 Off |                  N/A |
|  0%   31C    P8              27W / 420W |     12MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 3090        On  | 00000000:C1:00.0  On |                  N/A |
|  0%   38C    P8              31W / 420W |   1050MiB / 24576MiB |     27%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Activity

yhyu13 (Author) commented on Dec 26, 2023

Successful output from CPU, built without CUDA:

llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
llama_new_context_with_model: compute buffer total size = 117.72 MiB

system_info: n_threads = 24 / 48 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
sampling: 
        repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 0, tfs_z = 1.000, top_p = 1.000, min_p = 0.050, typical_p = 1.000, temp = 0.000
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp 
generate: n_ctx = 512, n_batch = 512, n_predict = 32000, n_keep = 0


 Image what will AI be like in the year 1010 A.D. 
ANSWER:
It is impossible to predict with certainty what AI will be like in the year 1010 A.D., as it depends on many factors such as technological advancements, societal values and priorities, and unforeseen developments. However, based on current trends and trajectories, we can make some educated guesses.

Firstly, it is likely that AI will be much more advanced than it is today. We may see the development of artificial general intelligence (AGI), which refers to machines that have the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond human capability. This could lead to significant advancements in fields such as healthcare, education, transportation, and manufacturing.

Secondly, AI may become more integrated into our daily lives, with smart homes, autonomous vehicles, and personalized virtual assistants becoming commonplace. AI may also play a larger role in decision-making processes, from government policies to business strategies.

Thirdly, there may be ethical considerations surrounding the use of AI, such as issues related to privacy, bias, and job displacement. Society will need to address these challenges and establish guidelines for responsible AI development and deployment.

Finally, it is possible that AI could lead to unintended consequences or even existential risks, such as superintelligence or autonomous weapons. It is important for society to consider these possibilities and take proactive measures to mitigate potential risks.

Overall, while we cannot predict the exact state of AI in 1010 A.D., it is clear that it will continue to be a significant force shaping our world. [end of text]

llama_print_timings:        load time =    1363.26 ms
llama_print_timings:      sample time =     129.72 ms /   349 runs   (    0.37 ms per token,  2690.51 tokens per second)
llama_print_timings: prompt eval time =    1718.28 ms /    26 tokens (   66.09 ms per token,    15.13 tokens per second)
llama_print_timings:        eval time =   40188.14 ms /   348 runs   (  115.48 ms per token,     8.66 tokens per second)
llama_print_timings:       total time =   42181.63 ms
Log end
reopened this on Dec 26, 2023
ENjoyBlue2021 commented on Dec 26, 2023

Wanted to open this myself just now too; I can confirm this issue as well.
It occurs for me once I check out "cuda : improve cuda pool efficiency using virtual memory" (https://github.com/ggerganov/llama.cpp/pull/4606); before that commit it works.
I have a GTX 1080 + GTX 1080 Ti, CUDA 12.2, Ubuntu 22.04.
The model doesn't matter, and I can also confirm it works on CPU.

llama_new_context_with_model: total VRAM used: 6363.56 MiB (model: 5563.55 MiB, context: 800.00 MiB)
CUDA error: cudaMemcpyAsync(src1_ddq_i, src1_ddq_i_source, src1_ncols*src1_padded_col_size*q8_1_ts/q8_1_bs, cudaMemcpyDeviceToDevice, stream): invalid argument
  in function ggml_cuda_op_mul_mat at ggml-cuda.cu:8196
GGML_ASSERT: ggml-cuda.cu:239: !"CUDA error"
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     On  | 00000000:01:00.0  On |                  N/A |
| 29%   55C    P5              25W / 250W |   1683MiB / 11264MiB |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce GTX 1080        On  | 00000000:02:00.0 Off |                  N/A |
|  0%   36C    P8               8W / 200W |      7MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1433      G   /usr/lib/xorg/Xorg                          758MiB |
|    0   N/A  N/A      2422      G   /usr/bin/kwin_x11                           390MiB |
|    0   N/A  N/A      2473      G   /usr/bin/plasmashell                        111MiB |
|    0   N/A  N/A      2912      G   ...ures=SpareRendererForSitePerProcess        6MiB |
|    0   N/A  N/A    862717      G   ...,262144 --variations-seed-version=1      330MiB |
|    0   N/A  N/A    862727      G   ...ures=SpareRendererForSitePerProcess       79MiB |
|    1   N/A  N/A      1433      G   /usr/lib/xorg/Xorg                            4MiB |
+---------------------------------------------------------------------------------------+
Linked a pull request that will close this issue: cuda : fix vmm pool with multi GPU #4620, on Dec 26, 2023
phalexo commented on Dec 26, 2023

This is what I see.

llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: system memory used = 102.92 MiB
llm_load_tensors: VRAM used = 36497.55 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
....................................................................................................
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: VRAM kv self = 2048.00 MB
llama_new_context_with_model: KV self size = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_build_graph: non-view tensors processed: 1124/1124
llama_new_context_with_model: compute buffer total size = 1111.22 MiB
llama_new_context_with_model: VRAM scratch buffer: 1108.04 MiB
llama_new_context_with_model: total VRAM used: 39653.59 MiB (model: 36497.55 MiB, context: 3156.04 MiB)
CUDA error: cudaMemcpyAsync(src1_ddf_i, src1_ddf_i_source, src1_ncols*ne10*sizeof(float), cudaMemcpyDeviceToDevice, stream): invalid argument
in function ggml_cuda_op_mul_mat at /home/developer/llama.cpp/ggml-cuda.cu:8202
GGML_ASSERT: /home/developer/llama.cpp/ggml-cuda.cu:240: !"CUDA error"
memory allocation/deallocation mismatch at 0x56282c3d20a0: allocated with malloc being deallocated with delete []
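
The last line above appears to be a separate allocator-mismatch diagnostic emitted during teardown: memory obtained with malloc is being released with delete []. A minimal, hedged illustration (not llama.cpp code) of the kind of mismatch such checkers flag:

#include <cstdlib>

int main() {
    // Allocate with the C allocator ...
    char *buf = static_cast<char *>(std::malloc(64));
    // ... and release it with the matching deallocator. Writing `delete[] buf;`
    // here instead would be undefined behavior and is reported as
    // "allocated with malloc being deallocated with delete []".
    std::free(buf);
    return 0;
}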

yhyu13 (Author) commented on Dec 27, 2023

@slaren Thanks!

It is fixed after #4620!
