
vulkan: scalar flash attention implementation #13324


Merged
merged 10 commits on May 10, 2025

Conversation

jeffbolznv
Collaborator

With so many issues like #13217 being due to lack of FA support, I ported the FA shader to use scalar math. Perf is pretty good for cases where there are few rows (e.g. during token gen), but it will still be slower than -fa 0 for cases where -fa 0 uses KHR_coopmat.
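
For readers who want to see what the scalar path computes, here is a minimal CPU-side sketch of the flash attention recurrence for a single query row (streamed over the KV cache with an online softmax, so the full attention matrix is never materialized). This is an illustration only, not the GLSL from this PR:

```cpp
// Minimal scalar flash attention for one query row (illustration only).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> flash_attn_row(const std::vector<float> & q,               // [d]
                                  const std::vector<std::vector<float>> & K,  // [n_kv][d]
                                  const std::vector<std::vector<float>> & V,  // [n_kv][d]
                                  float scale) {
    const size_t d = q.size();
    std::vector<float> o(d, 0.0f);
    float m = -INFINITY; // running max of the logits
    float l = 0.0f;      // running sum of exp(logit - m)

    for (size_t j = 0; j < K.size(); ++j) {
        float s = 0.0f;
        for (size_t k = 0; k < d; ++k) s += q[k] * K[j][k];
        s *= scale;

        const float m_new = std::max(m, s);
        const float corr  = std::exp(m - m_new); // rescale previous accumulators
        const float p     = std::exp(s - m_new);

        for (size_t k = 0; k < d; ++k) o[k] = o[k] * corr + p * V[j][k];
        l = l * corr + p;
        m = m_new;
    }
    for (size_t k = 0; k < d; ++k) o[k] /= l;
    return o;
}
```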

I'd appreciate some help testing (including perf testing) on non-NVIDIA GPUs. And if anybody knows a good placeholder value for shader_core_count for Intel or how to query it, that would be good too.
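
For reference, I'm not aware of a portable query for this. A rough sketch of what can be queried today via vendor extensions is below (VK_NV_shader_sm_builtins on NVIDIA, VK_AMD_shader_core_properties on AMD); the fallback value for devices without either extension, e.g. Intel, is just a placeholder assumption:

```cpp
// Sketch: best-effort shader core count via vendor extensions (not the PR's code).
#include <vulkan/vulkan.h>
#include <cstdint>
#include <cstring>
#include <vector>

static uint32_t guess_shader_core_count(VkPhysicalDevice dev) {
    uint32_t n = 0;
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &n, nullptr);
    std::vector<VkExtensionProperties> exts(n);
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &n, exts.data());
    auto has = [&](const char * name) {
        for (const auto & e : exts) if (strcmp(e.extensionName, name) == 0) return true;
        return false;
    };

    VkPhysicalDeviceShaderSMBuiltinsPropertiesNV sm = {};
    sm.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SM_BUILTINS_PROPERTIES_NV;
    VkPhysicalDeviceShaderCorePropertiesAMD amd = {};
    amd.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_CORE_PROPERTIES_AMD;

    VkPhysicalDeviceProperties2 props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    void ** tail = &props2.pNext;
    if (has(VK_NV_SHADER_SM_BUILTINS_EXTENSION_NAME))      { *tail = &sm;  tail = &sm.pNext; }
    if (has(VK_AMD_SHADER_CORE_PROPERTIES_EXTENSION_NAME)) { *tail = &amd; tail = &amd.pNext; }
    vkGetPhysicalDeviceProperties2(dev, &props2);

    if (sm.shaderSMCount > 0) {
        return sm.shaderSMCount; // NVIDIA: number of SMs
    }
    if (amd.shaderEngineCount > 0) { // AMD: total CU count
        return amd.shaderEngineCount * amd.shaderArraysPerEngineCount * amd.computeUnitsPerShaderArray;
    }
    return 16; // no vendor extension (e.g. Intel): placeholder guess
}
```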

@jeffbolznv jeffbolznv requested a review from 0cc4m May 6, 2025 00:53
@github-actions github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on May 6, 2025
@nalf3in

nalf3in commented May 6, 2025

Umm, unfortunately it doesn't look like there's an improvement for my setup, even for the RTX 2070 GPU.

Short version:

With Flash Attention (-fa)

| GPU | Prompt Eval Time (ms) | ms/token | Tokens/sec | Eval Time (ms) | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- |
| RX 480 | 7402 | 5.5 | 182 | 19191 | 48 | 21 |
| RTX 2070 | 4953 | 3.7 | 272 | 11552 | 31 | 33 |

Without Flash Attention

| GPU | Prompt Eval Time (ms) | ms/token | Tokens/sec | Eval Time (ms) | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- |
| RX 480 | 4892 | 3.64 | 275 | 17451 | 48 | 21 |
| RTX 2070 | 4907 | 3.65 | 274 | 8579 | 29 | 34 |
Long version:

Hardware Configuration

  • CPU: Intel Xeon E5-2620 v3 (6 cores, 12 threads, Haswell)
  • Memory: Quad-channel DDR4 @ 1866 MHz
  • GPUs:
    • CUDA0: NVIDIA RTX 2070 (CUDA backend)
    • VULKAN0: AMD RX 480 8GB (Vulkan backend)
    • VULKAN1: NVIDIA RTX 2070 (Vulkan backend)

Repository Status (Sanity Check)

git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.

git log -1
# commit 6c7443cbcfc34c9247166a3f9ed9cfe762441a43 (HEAD -> master)
# vulkan: scalar flash attention implementation

Test Prompt

  • Length: 1984 tokens
  • Source: Default SillyTavern conversation prompt

Performance Results


1. Normal Setup

1.1 With Flash Attention (-fa)

| Command | GPU | Prompt Eval Time | Tokens | ms/token | Tokens/sec | Eval Time | Tokens | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa | RX 480 (VULKAN0) | 7402.49 ms | 1345 | 5.50 | 181.70 | 19191.17 ms | 400 | 47.98 | 20.84 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa -ctk q4_0 -ctv q4_0 | RX 480 (VULKAN0) | 37511.76 ms | 1345 | 27.89 | 35.86 | 45465.21 ms | 391 | 116.28 | 8.60 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: -fa | RTX 2070 (VULKAN1) | 4952.98 ms | 1345 | 3.68 | 271.55 | 11552.00 ms | 377 | 30.64 | 32.64 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: -fa -ctk q4_0 -ctv q4_0 | RTX 2070 (VULKAN1) | 41505.56 ms | 1345 | 30.86 | 32.41 | 58276.78 ms | 400 | 145.69 | 6.86 |

1.2 Without Flash Attention

| Command | GPU | Prompt Eval Time | Tokens | ms/token | Tokens/sec | Eval Time | Tokens | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ./build/bin/llama-server -m /share/Qwen -dev Vulkan0 -ngl 99 -c 8192 --host :: | RX 480 (VULKAN0) | 4891.92 ms | 1345 | 3.64 | 274.94 | 17451.06 ms | 362 | 48.21 | 20.74 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: | RTX 2070 (VULKAN1) | 4906.50 ms | 1345 | 3.65 | 274.13 | 8579.09 ms | 292 | 29.38 | 34.04 |

2. Experimental Setup (Patch for Issue #13164)

  • Patch applied to increase matrix multiplication size limit from 3072 to 8192
--- a/ggml/src/ggml-vulkan/ggml-vulkan.cpp
+++ b/ggml/src/ggml-vulkan/ggml-vulkan.cpp
-    GGML_ASSERT(nei0 * nei1 <= 3072);
+    GGML_ASSERT(nei0 * nei1 <= 8192);
--- a/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp
+++ b/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp
- shared u16vec2 row_ids[3072];
+ shared u16vec2 row_ids[8192];

2.1 With Flash Attention (-fa)

| Command | GPU | Prompt Eval Time | Tokens | ms/token | Tokens/sec | Eval Time | Tokens | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ./build/bin/llama-server -dev CUDA0,Vulkan0 -ngl 99 -c 8192 -m /share/Qwen3-30B-A3B-UD-Q3_K_XL.gguf -fa --batch-size 1200 --host :: | RTX 2070 (CUDA0) + RX 480 (VULKAN0) | 47275.80 ms | 1306 | 36.20 | 27.63 | 19982.30 ms | 400 | 49.96 | 20.02 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN0 -ngl 99 -c 8192 --host :: -fa | RX 480 (VULKAN0) | 7390.74 ms | 1345 | 5.49 | 181.98 | 18178.16 ms | 380 | 47.84 | 20.90 |

2.2 Without Flash Attention

| Command | GPU | Prompt Eval Time | Tokens | ms/token | Tokens/sec | Eval Time | Tokens | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ./build/bin/llama-server -dev CUDA0,Vulkan0 -ngl 99 -c 8192 -m /share/Qwen3-30B-A3B-UD-Q3_K_XL.gguf --batch-size 1200 --host :: | RTX 2070 (CUDA0) + RX 480 (VULKAN0) | 46707.90 ms | 1345 | 34.73 | 28.80 | 17597.43 ms | 400 | 43.99 | 22.73 |
| ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN0 -ngl 99 -c 8192 --host :: | RX 480 (VULKAN0) | 4915.57 ms | 1345 | 3.65 | 273.62 | 11767.99 ms | 244 | 48.23 | 20.73 |

@jeffbolznv
Collaborator Author

On the 2070 system, does it report coopmat1 or coopmat2 support? If it's coopmat2 then FA is already accelerated.

-ctk q4_0 -ctv q4_0

I didn't add support for quantized KV yet (it's probably not a ton of work, just didn't think it was critical for the first version), so these tests will continue to fall back to CPU.
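
For context, the fallback happens through the backend's op-support check: if the Vulkan backend reports that it can't run FLASH_ATTN_EXT for a given K/V type, ggml schedules that node on the CPU backend instead. A rough, hypothetical sketch of that gate (the function name and structure here are illustrative, not the actual ggml-vulkan code):

```cpp
// Hypothetical sketch of the op-support gate (not the real ggml-vulkan code).
#include "ggml.h"

static bool vk_supports_flash_attn_ext(const struct ggml_tensor * op) {
    const struct ggml_tensor * k = op->src[1];
    const struct ggml_tensor * v = op->src[2];
    // The first version of the scalar FA shader only handles F16 K/V, so a
    // quantized KV cache (e.g. -ctk q4_0 -ctv q4_0) is rejected here and the
    // node falls back to the CPU backend.
    return k->type == GGML_TYPE_F16 && v->type == GGML_TYPE_F16;
}
```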

@nalf3in

nalf3in commented May 6, 2025

does it report coopmat1 or coopmat2 support?

Not sure how to confirm this. Coopmat is not mentioned in stdout when running llama-server. llama.cpp was built with
cmake -B build -DGGML_VULKAN=ON -DGGML_CUDA=ON

I didn't add support for quantized KV

Ah I see, good idea

@jeffbolznv
Collaborator Author

llama-server should print something like this when using the vulkan backend:

ggml_vulkan: 0 = NVIDIA GeForce RTX 4070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2

What does yours say for matrix cores?

@daniandtheweb
Contributor

daniandtheweb commented May 6, 2025

I've just tested it on both the Radeon RX 7800 XT and the Radeon RX 5700 XT and the performance is pretty close to non FA.

RX 7800 XT

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (RADV NAVI32) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 1241.50 ± 15.37
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 111.19 ± 0.59
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1248.76 ± 5.05
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 110.08 ± 0.12

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2090.51 ± 7.00
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 96.95 ± 2.41
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1729.53 ± 5.07
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 76.70 ± 0.11

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2074.83 ± 6.27
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 97.45 ± 0.36
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1727.79 ± 8.97
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 76.40 ± 0.16

RX 5700 XT

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (RADV NAVI10) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 470.39 ± 0.35
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 65.21 ± 0.21
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 422.59 ± 0.26
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 64.23 ± 0.06

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 441.33 ± 0.45
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 64.52 ± 0.74
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 377.49 ± 0.14
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 52.63 ± 0.01

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 565.87 ± 0.53
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 71.49 ± 0.04
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 543.53 ± 0.31
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 66.61 ± 0.02

I'm not sure if there are specific FA tests in test-backend-ops (haven't used it in a while), but if you need more performance data I can run the tests too.

@nalf3in

nalf3in commented May 6, 2025

llama-server should print something like this when using the vulkan backend

It looks like it's not enabled:

ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none
ggml_vulkan: 1 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

build: 5288 (6c7443cb) with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu

Looking a bit closer, I think that's coming from the build config. I'm using a standard Debian server installation. From what I understand, the libvulkan version shipped with Debian Bookworm (1.3.239) is probably too old to support these "new" extensions.

cmake -B build -DGGML_VULKAN=ON ...
-- Found Vulkan: /usr/lib/x86_64-linux-gnu/libvulkan.so (found version "1.3.239") found components: glslc glslangValidator 
-- Vulkan found
-- GL_KHR_cooperative_matrix not supported by glslc
-- GL_NV_cooperative_matrix2 not supported by glslc
-- GL_EXT_integer_dot_product not supported by glslc
-- GL_EXT_bfloat16 not supported by glslc
-- Including Vulkan backend

In any case, I see that it's working on my desktop with Arch Linux and a 3080 Ti:

-- Found Vulkan: /lib/libvulkan.so (found version "1.4.309") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix supported by glslc
-- GL_NV_cooperative_matrix2 supported by glslc
-- GL_EXT_integer_dot_product supported by glslc
-- GL_EXT_bfloat16 not supported by glslc
-- Including Vulkan backend
Full debian server cmake -B build -DGGML_VULKAN=ON output:

cmake -B build -DGGML_VULKAN=ON
-- The C compiler identification is GNU 12.2.0
-- The CXX compiler identification is GNU 12.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.39.5")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- Found Vulkan: /usr/lib/x86_64-linux-gnu/libvulkan.so (found version "1.3.239") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix not supported by glslc
-- GL_NV_cooperative_matrix2 not supported by glslc
-- GL_EXT_integer_dot_product not supported by glslc
-- GL_EXT_bfloat16 not supported by glslc
-- Including Vulkan backend
-- Found CURL: /usr/lib/x86_64-linux-gnu/libcurl.so (found version "7.88.1")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/joe/ai/temp/llama.cpp/build
Full debian server llama-server output:

./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 2070, compute capability 7.5, VMM: yes
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none
ggml_vulkan: 1 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none
build: 5288 (6c7443cb) with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
system info: n_threads = 6, n_threads_batch = 6, total_threads = 12

system_info: n_threads = 6 (n_threads_batch = 6) / 12 | CUDA : ARCHS = 750 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |

main: binding port with default address family
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 11
main: loading model
srv load_model: loading model '/share/Qwen3-4B-UD-Q4_K_XL.gguf'
llama_model_load_from_file_impl: using device Vulkan1 (NVIDIA GeForce RTX 2070) - 8192 MiB free
llama_model_loader: loaded meta data with 32 key-value pairs and 398 tensors from /share/Qwen3-4B-UD-Q4_K_XL.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3-4B
llama_model_loader: - kv 3: general.basename str = Qwen3-4B
llama_model_loader: - kv 4: general.quantized_by str = Unsloth
llama_model_loader: - kv 5: general.size_label str = 4B
llama_model_loader: - kv 6: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 7: qwen3.block_count u32 = 36
llama_model_loader: - kv 8: qwen3.context_length u32 = 40960
llama_model_loader: - kv 9: qwen3.embedding_length u32 = 2560
llama_model_loader: - kv 10: qwen3.feed_forward_length u32 = 9728
llama_model_loader: - kv 11: qwen3.attention.head_count u32 = 32
llama_model_loader: - kv 12: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 13: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 14: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 15: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 16: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 18: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 22: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 25: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - kv 27: general.file_type u32 = 15
llama_model_loader: - kv 28: quantize.imatrix.file str = Qwen3-4B-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv 29: quantize.imatrix.dataset str = unsloth_calibration_Qwen3-4B.txt
llama_model_loader: - kv 30: quantize.imatrix.entries_count i32 = 252
llama_model_loader: - kv 31: quantize.imatrix.chunks_count i32 = 32
llama_model_loader: - type f32: 145 tensors
llama_model_loader: - type q4_K: 154 tensors
llama_model_loader: - type q5_K: 30 tensors
llama_model_loader: - type q6_K: 49 tensors
llama_model_loader: - type iq4_xs: 20 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 2.37 GiB (5.05 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 2560
print_info: n_layer = 36
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 9728
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 4B
print_info: model params = 4.02 B
print_info: general.name = Qwen3-4B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors: Vulkan1 model buffer size = 2422.70 MiB
load_tensors: CPU_Mapped model buffer size = 304.28 MiB
...............................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 8192
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (8192) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host output buffer size = 0.58 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified: Vulkan1 KV buffer size = 1152.00 MiB
llama_kv_cache_unified: KV self size = 1152.00 MiB, K (f16): 576.00 MiB, V (f16): 576.00 MiB
llama_context: Vulkan1 compute buffer size = 554.00 MiB
llama_context: Vulkan_Host compute buffer size = 21.01 MiB
llama_context: graph nodes = 1374
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 8192
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 8192
main: model loaded
main: chat template, chat_template: {%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within XML tags:\n" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": , "arguments": }\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for forward_message in messages %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- set message = messages[index] %}
{%- set tool_start = '<tool_response>' %}
{%- set tool_start_length = tool_start|length %}
{%- set start_of_message = message.content[:tool_start_length] %}
{%- set tool_end = '</tool_response>' %}
{%- set tool_end_length = tool_end|length %}
{%- set start_pos = (message.content|length) - tool_end_length %}
{%- if start_pos < 0 %}
{%- set start_pos = 0 %}
{%- endif %}
{%- set end_of_message = message.content[start_pos:] %}
{%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is defined and message.reasoning_content is not none %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '' in message.content %}
{%- set content = (message.content.split('')|last).lstrip('\n') %}
{%- set reasoning_content = (message.content.split('')|first).rstrip('\n') %}
{%- set reasoning_content = (reasoning_content.split('')|last).lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n\n' + reasoning_content.strip('\n') + '\n\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '\n\n\n\n' }}
{%- endif %}
{%- endif %}, example_format: '<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
'
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv update_slots: all slots are idle

@jeffbolznv
Collaborator Author

OK, so your RTX 2070 should have been using flash attention. Were all your test results with this change? Did you try using flash attention without this change? It would have fallen back to the CPU.

@nalf3in

nalf3in commented May 6, 2025

Yes, all tests were made with this change, more specifically with this commit cherry-picked on top of the latest commit at the time of writing (9070365):

git log
commit 6c7443cbcfc34c9247166a3f9ed9cfe762441a43 (HEAD -> master)
Author: Jeff Bolz <[email protected]>
Date:   Mon May 5 19:34:23 2025 -0500

    vulkan: scalar flash attention implementation

commit 907036502070ba608bdb2aaebf802092d4cfba07 (tag: b5287, origin/master, origin/HEAD)
Author: Johannes Gäßler <[email protected]>
Date:   Mon May 5 22:32:13 2025 +0200

    CUDA: fix logic for clearing padding with -ngl 0 (#13320)

Just did the same test again without this commit, and indeed it falls back to the CPU and is very slow:

9070365

./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa

prompt eval time =   38053.10 ms /  1930 tokens (   19.72 ms per token,    50.72 tokens per second)
       eval time =   45318.10 ms /   345 tokens (  131.36 ms per token,     7.61 tokens per second)

359a92f691ff74f7fc89cf12cac744bb18ab98df (this pr commit)

./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa

prompt eval time =   11606.94 ms /  1930 tokens (    6.01 ms per token,   166.28 tokens per second)
       eval time =   17867.61 ms /   365 tokens (   48.95 ms per token,    20.43 tokens per second)

@netrunnereve
Collaborator

netrunnereve commented May 6, 2025

@nalf3in I think there's something wrong with your setup, as your numbers already don't make sense for the non-FA case. First of all, even if you have no DP4A and no matrix cores, the 2070 should easily beat the 480 in prompt processing. Your inference speeds are also really low for a Q4 4B model.

Can you run a regular llama-bench on each GPU separately using the same model? The server and SillyTavern might be messing things up.

@nalf3in

nalf3in commented May 7, 2025

I didn't use llama-bench previously because it doesn't support the -dev option, which allows specifying which GPU I want to use. From what I understand it isn't possible using llama-bench command-line arguments, but I was able to do it anyway using bwrap (see below for the full command-line args).

Short version of the results:

| Commit | GPU | fa | pp512 t/s | tg128 t/s |
| --- | --- | --- | --- | --- |
| 141a908 | AMD RX 480 | 0 | 294.5 ± 0.7 | 37.8 ± 0.1 |
| 141a908 | AMD RX 480 | 1 | 140.6 ± 0.6 | 26.5 ± 0.1 |
| 141a908 | NVIDIA RTX 2070 | 0 | 461.6 ± 2.7 | 63.0 ± 1.5 |
| 141a908 | NVIDIA RTX 2070 | 1 | 140.0 ± 1.1 | 35.9 ± 1.0 |
| 005756a | AMD RX 480 | 0 | 294.4 ± 0.7 | 37.9 ± 0.5 |
| 005756a | AMD RX 480 | 1 | 230.3 ± 0.3 | 31.3 ± 0.1 |
| 005756a | NVIDIA RTX 2070 | 0 | 461.0 ± 1.6 | 63.1 ± 2.2 |
| 005756a | NVIDIA RTX 2070 | 1 | 444.6 ± 1.6 | 59.7 ± 0.7 |

It seems that the RTX 2070 is still around 1.6x faster than the RX 480 using Vulkan. CUDA is much faster for prompt ingestion (Vulkan doesn't use KHR_coopmat there, though).

CUDA without flash attention, for reference:

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B CUDA 99 pp512 806.22 ± 5.90
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B CUDA 99 tg128 71.77 ± 4.07
Long version Commit 141a908

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 294.50 ± 0.66
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 37.79 ± 0.11

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 140.58 ± 0.57
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 26.48 ± 0.07

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 461.58 ± 2.73
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 63.01 ± 1.47

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 139.97 ± 1.06
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 35.87 ± 1.01

build: 141a908 (5298)


Commit 005756a

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 294.41 ± 0.74
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 37.87 ± 0.49

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 230.25 ± 0.30
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 31.31 ± 0.07

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 460.96 ± 1.61
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 63.14 ± 2.21

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 444.55 ± 1.55
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 59.65 ± 0.71

build: bd417ee8 (5299)

@netrunnereve
Collaborator

netrunnereve commented May 7, 2025

Anyway, I went and tried this out on my RX 470. With FA turned on, prompt processing becomes slower and inference becomes faster when I make it generate a lot of text. I guess there's a tradeoff here, and this should be quite useful for those new thinking models.

model size params backend ngl threads main_gpu sm fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 0 pp512 183.57 ± 1.11
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 1 pp512 174.75 ± 1.23
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 0 tg128 33.85 ± 0.06
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 1 tg128 33.52 ± 0.03
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 0 pp2000 177.40 ± 0.20
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 1 pp2000 85.46 ± 0.12
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 0 tg2000 18.45 ± 0.00
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 8 1 none 1 tg2000 30.28 ± 0.39

The FA tests are passing on my RX 470, but they're failing on my W8100 when prec=def, as the shaders are trying to do FP16 math on a chip that doesn't support it. The prec=f32 tests are passing on the W8100. In this case we'll either need to disable these FA shaders when the GPU doesn't support FP16, or have two sets of shaders like how it's done for mul mat and mat vec.
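
A rough, self-contained sketch of the "two sets of shaders" option (all names below are made up for illustration, not the real ggml-vulkan symbols): keep an FP16-accumulation and an FP32-accumulation pipeline, and pick FP32 whenever the device lacks FP16 support or the op requests prec=f32:

```cpp
// Illustrative pipeline selection based on device FP16 support (hypothetical names).
#include <cstdint>

struct vk_pipeline_handle { uint64_t id; };  // stand-in for a compiled compute pipeline

struct vk_fa_pipelines {
    vk_pipeline_handle f16_acc;  // FP16 accumulation: faster, needs shaderFloat16
    vk_pipeline_handle f32_acc;  // FP32 accumulation: works everywhere
};

struct vk_device_caps {
    bool shader_float16;         // e.g. from VkPhysicalDeviceVulkan12Features::shaderFloat16
};

static vk_pipeline_handle select_fa_pipeline(const vk_device_caps & caps,
                                             const vk_fa_pipelines & pipes,
                                             bool prec_f32_requested) {
    // A W8100-style device reports no FP16, so the F16 variant must never be chosen;
    // the alternative is to not advertise FA support at all on such devices.
    if (prec_f32_requested || !caps.shader_float16) {
        return pipes.f32_acc;
    }
    return pipes.f16_acc;
}
```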

@netrunnereve
Collaborator

I didn't use llama-bench previously because it doesn't support the -dev option, which allows specifying which GPU I want to use.

Oh you can just use the -mg option to set your GPU number and then set -sm none to make it only run on a single GPU.

It seems that the RTX 2070 is still around 1.6x faster than the RX 480 using Vulkan. CUDA is much faster for prompt ingestion (Vulkan doesn't use KHR_coopmat there, though).

Yeah those numbers make more sense now 😉. If you get coopmat2 working it should be much closer to CUDA but I think it's still going to be a bit slower.

@jeffbolznv
Collaborator Author

I didn't use llama-bench previously because it doesn't support the -dev option, which allows specifying which GPU I want to use.

Oh you can just use the -mg option to set your GPU number and then set -sm none to make it only run on a single GPU.

You can also set the env var GGML_VK_VISIBLE_DEVICES=0 or 1 to hide the other device.

@jeffbolznv
Collaborator Author

The FA tests are passing on my RX 470 but they're failing on my W8100 when prec=def as the shaders are trying to do FP16 math on a chip that doesn't support it.

I hadn't realized this was happening; it's the leftover ACC_TYPE in the shader that's barely used. I've changed the logic to always select the f32 variant for scalar.

@jeffbolznv jeffbolznv changed the title from "vulkan: scalar flash attention implementation" to "draft: vulkan: scalar flash attention implementation" on May 7, 2025
@jeffbolznv
Collaborator Author

Set to draft, I have a bit more perf tuning I want to try.

@0cc4m
Collaborator

0cc4m commented May 7, 2025

This is very exciting. I'll test it across my devices within the next few days.

@Mushoz

Mushoz commented May 7, 2025

I have some 7900XTX results to share with the radv vulkan driver with the Qwen3 32B Q4_K_S model. I am seeing very nice speedups for token generation at longer context depths, but unfortunately prompt processing drops off a cliff:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 331.03 ± 0.71
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 35.68 ± 0.04
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d128 326.19 ± 0.23
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d128 35.46 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d256 322.68 ± 0.42
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d256 35.34 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d512 311.54 ± 0.32
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d512 34.89 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d1024 294.65 ± 7.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d1024 34.56 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d2048 296.63 ± 0.19
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d2048 33.10 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d4096 269.95 ± 0.39
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d4096 29.74 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d8192 236.21 ± 0.25
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d8192 24.72 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 pp512 @ d16384 182.46 ± 0.17
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 0 tg128 @ d16384 18.59 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 313.83 ± 0.34
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 35.60 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d128 300.44 ± 0.44
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d128 35.68 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d256 300.73 ± 0.30
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d256 35.10 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d512 282.28 ± 0.27
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d512 34.89 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d1024 260.11 ± 0.39
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d1024 34.27 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d2048 222.80 ± 0.15
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d2048 33.24 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d4096 157.15 ± 0.30
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d4096 31.13 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d8192 96.81 ± 0.06
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d8192 27.96 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d16384 53.02 ± 0.05
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d16384 23.11 ± 0.02

@0cc4m
Collaborator

0cc4m commented May 7, 2025

I have some 7900XTX results to share with the radv vulkan driver with the Qwen3 32B Q4_K_S model. I am seeing very nice speedups for token generation at longer context depths, but unfortunately prompt processing drops off a cliff:

That is expected for you, since the new flash attention shader doesn't use coopmat1 for matrix core acceleration, which your GPU supports and uses for non-FA prompt processing; that's why it's slower.

I'll look into a coopmat1 version that would fix this at some point, if nobody else gets to it first.

@Mushoz

Mushoz commented May 7, 2025

ROCm numbers for reference, in case they are useful:

ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32

model size params backend ngl fa test t/s
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 744.73 ± 1.10
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 26.07 ± 0.03
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d128 725.08 ± 14.51
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d128 25.98 ± 0.02
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d256 724.20 ± 12.35
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d256 25.69 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d512 713.54 ± 0.97
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d512 24.93 ± 0.07
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d1024 689.74 ± 2.81
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d1024 24.21 ± 0.05
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d2048 648.16 ± 0.61
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d2048 24.39 ± 0.03
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d4096 579.65 ± 1.06
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d4096 22.81 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d8192 472.54 ± 0.75
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d8192 20.25 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 pp512 @ d16384 338.52 ± 0.47
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B ROCm 99 1 tg128 @ d16384 16.57 ± 0.00

@Mushoz

Mushoz commented May 7, 2025

I'll look into a coopmat1 version that would fix this at some point, if nobody else gets to it first.

That is honestly great to hear, thank you! I think once KV cache quantization is in place and the prompt processing performance has been resolved, there really isn't much reason left to use ROCm over Vulkan. Vulkan has shown great token generation performance compared to ROCm.

@netrunnereve
Collaborator

I hadn't realized this was happening, it's the leftover ACC_TYPE in the shader that's barely used. I've changed the logic to always select the f32 variant for scalar.

Thanks it's passing now!

@github-actions github-actions bot added the devops (improvements to build systems and github actions) label on May 8, 2025
@Mushoz

Mushoz commented May 8, 2025

Here are my new results on my 7900XTX. Prompt processing has gotten a really nice performance boost, especially at higher depths, so that's really nice! Unfortunately, token generation has seen a pretty noticeable regression at low to medium depths. For example, going from 34.27 tokens/sec at 1024 depth to 32.24 tokens/sec, which is a >6% performance regression.

Interestingly, at high depths the regression disappears and turns into a slight performance lead going from 23.11 tokens/sec to 24.45 tokens/sec at 16k depth.

With prompt caching, token generation is arguably more important than prompt processing, so really hopeful the cause of the regression at low to medium depths can be identified and fixed. Full result here:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 319.02 ± 0.42
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 34.12 ± 0.04
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d128 307.34 ± 0.28
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d128 34.07 ± 0.03
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d256 309.05 ± 0.47
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d256 33.14 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d512 301.31 ± 0.25
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d512 32.23 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d1024 286.87 ± 0.33
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d1024 32.24 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d2048 256.16 ± 0.29
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d2048 31.20 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d4092 203.79 ± 0.33
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d4092 30.03 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d8192 139.46 ± 0.05
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d8192 27.86 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d16384 82.86 ± 0.20
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d16384 24.45 ± 0.01

@jeffbolznv
Collaborator Author

It was probably the tile size change, going from 4 to 8 rows when tg only needs one row. I've pushed a change that only uses 1 row when that's all that's needed. I verified this fixed a small regression when running llama-2-7b.Q4_0.gguf. When I ran qwen3 I hit #13164; were you working around that in your tests?
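
In other words, the tile row count can be picked from the number of query rows, so single-token generation doesn't pay for the wider tile. A hypothetical sketch of that selection (names are illustrative, not the shader's actual constants):

```cpp
// Hypothetical row-count selection for the FA workgroup tile.
#include <cstdint>

static uint32_t fa_rows_per_workgroup(uint32_t n_q_rows) {
    if (n_q_rows == 1) {
        return 1;  // token generation: a single query row, use the 1-row variant
    }
    return 8;      // prompt processing: many query rows, use the wide tile
}
```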

I also fixed an issue where the last round of optimizations had reintroduced usage of Float16.

@Mushoz

Mushoz commented May 8, 2025

I've pushed a change that only uses 1 row when that's all that's needed. I verified this fixed a small regression when running llama-2-7b.Q4_0.gguf.

Perfect! Recompiling now to retest on my 7900XTX as well. Will let you know as soon as I have the results.

When I ran qwen3 I hit #13164, were you working around that in your tests?

I am testing with the dense 32B model, which is unaffected by that bug. It only impacts the 30B MOE model.

@Mushoz

Mushoz commented May 8, 2025

The benchmark only just started running and will take a while to fully complete, but the initial tests show worse performance for both prompt processing and token generation compared to the previous build. So the token generation regression seems to have gotten worse, and the prompt processing improvements have been reduced. Will edit this post with the full results as soon as it's done, but just wanted to share some initial numbers:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 315.60 ± 0.89
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 33.89 ± 0.03
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d128 303.15 ± 0.14
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d128 33.86 ± 0.02
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d256 305.06 ± 0.26
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d256 33.13 ± 0.02
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d512 298.26 ± 0.47
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d512 32.25 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d1024 283.61 ± 0.36
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d1024 32.16 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d2048 254.35 ± 0.52
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d2048 31.11 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d4092 207.06 ± 0.10
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d4092 29.81 ± 0.01
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d8192 143.00 ± 0.17
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d8192 27.33 ± 0.00
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 @ d16384 85.52 ± 0.18
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 @ d16384 23.68 ± 0.00

@jeffbolznv
Collaborator Author

Hmm, I don't know what's going on there. I tried Qwen3-14B-Q4_K_M.gguf on my RTX 4070 using the KHR_coopmat path, and I see an improvement vs yesterday with both pp512 @ d1024 and tg128 @ d1024.

@wbruna
Contributor

wbruna commented May 8, 2025

On my Ryzen 5 3400G iGPU, most tests get a little bit slower, a few improve slightly; the difference seems to be less than the variation between consecutive runs.

ggml_vulkan: 0 = AMD Radeon Vega 11 Graphics (RADV RAVEN) (radv) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

Qwen3-30B-A3B-UD-Q4_K_XL at e660942 :

type_k type_v fa test t/s
q8_0 q8_0 1 pp512 24.57 ± 0.28
q8_0 q8_0 1 tg128 15.39 ± 0.08
q8_0 q8_0 1 pp2048 22.65 ± 0.04
q8_0 q8_0 1 tg128 15.70 ± 0.13
f16 f16 1 pp512 22.81 ± 0.17
f16 f16 1 tg128 15.08 ± 0.10
f16 f16 1 pp2048 22.56 ± 0.08
f16 f16 1 tg128 15.72 ± 0.03
f16 f16 0 pp512 22.97 ± 0.17
f16 f16 0 tg128 15.72 ± 0.07
f16 f16 0 pp2048 23.48 ± 0.09
f16 f16 0 tg128 16.27 ± 0.04

llama-3.2-1b-instruct-q8_0 at 20a6246 :

type_k type_v fa test t/s
q8_0 q8_0 1 pp8192 232.88 ± 5.47
q8_0 q8_0 1 tg128 30.29 ± 0.21
f16 f16 1 pp8192 239.87 ± 3.24
f16 f16 1 tg128 29.98 ± 0.08
f16 f16 0 pp8192 268.65 ± 3.35
f16 f16 0 tg128 30.01 ± 0.09

@Mushoz

Mushoz commented May 8, 2025

Updated my table above with the full results. Observations:

  1. Token generation has regressed at all (most?) depths versus yesterday's version unfortunately. Especially noticeable at 16k
  2. Prompt processing has regressed at the lower depths versus yesterday's version
  3. Prompt processing has improved at high depths versus yesterday's version

The differences of my setup versus yours:

  1. I am using Q4_K_S versus your Q4_K_M
  2. I am using the 32B model versus your 14B
  3. I am using AMD versus your Nvidia

@daniandtheweb
Contributor

daniandtheweb commented May 8, 2025

I've just retested the latest changes on my cards:

Radeon RX 5700 XT

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (RADV NAVI10) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 468.06 ± 0.65
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 65.19 ± 0.02
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 444.67 ± 0.34
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 65.28 ± 0.04

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 437.86 ± 0.49
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 64.57 ± 0.03
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 257.83 ± 0.33
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 63.34 ± 0.01

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 561.11 ± 0.40
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 71.29 ± 0.03
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 400.18 ± 0.14
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 70.71 ± 0.05

Radeon RX 7800 XT

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (RADV NAVI32) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 1236.76 ± 19.36
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 111.39 ± 0.08
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1187.56 ± 4.26
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 113.31 ± 0.03

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2062.42 ± 14.64
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 98.06 ± 0.31
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1210.71 ± 1.74
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 95.14 ± 0.39

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2047.26 ± 20.53
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 97.43 ± 0.32
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1209.53 ± 1.19
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 94.67 ± 0.28

Token generation has improved in almost every scenario; however, there seems to be a constant performance penalty in prompt processing on amdvlk and vulkan_pro. This hardly affects Linux, but since those drivers are based on the Windows ones, the regression may be present there as well.

@jeffbolznv
Collaborator Author

Token generation has improved on almost any scenario, however there seems to be a constant performance penalty in prompt processing

Do you just mean that the scalar FA is slower than the KHR_coopmat alternative? This is expected.

I'm going to have very limited availability over the next week, and I don't think anybody has reported a serious performance problem. So I suggest we merge this as-is (after any review fixes) and further tuning can happen later.

@daniandtheweb
Contributor

daniandtheweb commented May 9, 2025

What I mean is that I compared my first results with today's, and prompt processing performance got worse on both amdvlk and vulkan_pro on Linux. I'm just pointing this out since these drivers behave almost identically to the AMD driver on Windows (Linux uses radv by default, so it's not an issue there).

This is the result from 005756a:

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2074.83 ± 6.27
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 97.45 ± 0.36
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1727.79 ± 8.97
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 76.40 ± 0.16

And this is from 20a6246:

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

model size params backend ngl fa test t/s
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 pp512 2047.26 ± 20.53
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 0 tg128 97.43 ± 0.32
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 pp512 1209.53 ± 1.19
llama 7B Q4_0 3.56 GiB 6.74 B Vulkan 100 1 tg128 94.67 ± 0.28

The performance hit seems to have started with a6c940b and got worse with further commits. Overall it's amazing that we finally have a flash attention implementation on Vulkan for non-coopmat2 hardware. I'm just commenting about this so there's initial data for some future tuning.

@0cc4m
Collaborator

0cc4m commented May 9, 2025

ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none

model size params backend ngl fa test t/s
llama 8B Q4_0 4.33 GiB 8.03 B Vulkan 99 0 pp512 661.44 ± 1.31
llama 8B Q4_0 4.33 GiB 8.03 B Vulkan 99 0 tg128 64.35 ± 0.14
llama 8B Q4_0 4.33 GiB 8.03 B Vulkan 99 1 pp512 605.52 ± 0.75
llama 8B Q4_0 4.33 GiB 8.03 B Vulkan 99 1 tg128 58.28 ± 0.12

ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 707.30 ± 4.01 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 31.12 ± 0.02 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 230.01 ± 0.13 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 22.36 ± 0.01 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 4280.88 ± 76.19 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.67 ± 5.79 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 4581.71 ± 17.68 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 105.89 ± 0.15 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 3133.75 ± 26.72 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.28 ± 5.72 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 2992.94 ± 3.63 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 101.36 ± 0.18 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 1932.35 ± 4.55 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.88 ± 4.29 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 1907.39 ± 5.47 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 100.69 ± 0.15 |

Performance is good in my tests.
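For anyone reproducing the three RTX 3090 variants above from a single build: as far as I know the Vulkan backend can be forced off the coopmat paths with environment variables (treat the exact names as an assumption if your ggml version differs):

```sh
# default path (NV_coopmat2 on recent NVIDIA drivers)
./build/bin/llama-bench -m model.gguf -ngl 99 -fa 0,1

# fall back to KHR_coopmat
GGML_VK_DISABLE_COOPMAT2=1 ./build/bin/llama-bench -m model.gguf -ngl 99 -fa 0,1

# scalar path (no matrix cores)
GGML_VK_DISABLE_COOPMAT=1 GGML_VK_DISABLE_COOPMAT2=1 ./build/bin/llama-bench -m model.gguf -ngl 99 -fa 0,1
```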

@0cc4m
Collaborator

0cc4m commented May 9, 2025

It would be cool if we could figure out the performance regression on the AMD non-Mesa drivers, but I wouldn't hold up the PR over it. They constantly cause issues. At least performance with them seems pretty good at this point, apart from this problem.

@Mushoz

Mushoz commented May 9, 2025

ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none
| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 661.44 ± 1.31 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 64.35 ± 0.14 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 605.52 ± 0.75 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 58.28 ± 0.12 |

Your AMD tests are also showing a token generation performance drop after enabling FA. I see the same with the latest build, but that wasn't the case in the earlier version; see here:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 | 331.03 ± 0.71 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 | 35.68 ± 0.04 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 | 313.83 ± 0.34 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 | 35.60 ± 0.01 |

Latest build with regressions included:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 | 315.60 ± 0.89 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 | 33.89 ± 0.03 |

@0cc4m
Collaborator

0cc4m commented May 9, 2025

Yes, but my initial concern is just that there are no issues with the output and performance is roughly in line with expected numbers. Performance tuning can happen in follow-up PRs.

@Mushoz

Mushoz commented May 9, 2025

Yes, but my initial concern is just that there are no issues with the output and performance is roughly in line with expected numbers. Performance tuning can happen in follow-up PRs.

Fair enough. It does seem better not to hold this up any further, since even with this regression it's a massive improvement and finally lets me drop ROCm completely; KV cache quantization was the only thing preventing me from moving over to Vulkan.

Just hoping we can get back to the same token generation performance as the earlier version of this PR in a follow-up PR :)

Great job on this massive step forward for the Vulkan backend!

@LostRuins
Collaborator

Seems to be working very well. Thanks!

@remon-nashid

Can't wait to test this out once merged!

@netrunnereve
Collaborator

Compared to my last run, pp2000 is around 30% faster with these new changes, while everything else is pretty close to before. As the others mentioned, optimizations will come eventually, and I think this is good enough to merge.

| model | size | params | backend | ngl | threads | main_gpu | sm | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp512 | 173.22 ± 0.47 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg128 | 34.46 ± 0.03 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp2000 | 113.37 ± 0.19 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg2000 | 30.54 ± 0.47 |
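For reference, a sketch of the llama-bench flags that map to the extra columns in the table above (threads, main_gpu, split mode); the model path is a placeholder:

```sh
./build/bin/llama-bench -m /path/to/llama-7b-q4_0.gguf -ngl 100 -t 8 -mg 1 -sm none -fa 1 \
    -p 512,2000 -n 128,2000
```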

@0cc4m
Collaborator

0cc4m commented May 10, 2025

ggml_vulkan: 0 = AMD Radeon RX 6800 XT (RADV NAVI21) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 1455.98 ± 2.36 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 87.02 ± 0.08 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 1385.49 ± 0.29 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 81.38 ± 0.12 |

@0cc4m 0cc4m merged commit dc1d2ad into ggml-org:master May 10, 2025
44 checks passed
@Nindaleth
Contributor

Nindaleth commented May 10, 2025

Thanks for fixing #12526! Tests were run with Qwen2.5-Coder-1.5B-Instruct-Q8_0.gguf and Qwen2.5-Coder-14B-Instruct-Q4_K_L.gguf.

ggml_vulkan: 0 = AMD Radeon RX 6700 XT (RADV NAVI22) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 0 | pp512 | 4693.34 ± 4.48 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 0 | tg128 | 114.12 ± 0.07 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan | 99 | 0 | pp512 | 1991.92 ± 1.50 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan | 99 | 0 | tg128 | 98.12 ± 0.64 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan main | 99 | 1 | pp512 | 632.78 ± 24.03 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan main | 99 | 1 | tg128 | 9.96 ± 0.07 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan PR | 99 | 1 | pp512 | 1875.01 ± 1.59 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan PR | 99 | 1 | tg128 | 89.87 ± 0.15 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 1 | pp512 | 3467.94 ± 2.05 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 1 | tg128 | 101.82 ± 0.89 |

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 0 | pp512 | 409.84 ± 0.56 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 0 | tg128 | 27.27 ± 0.11 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan | 99 | 0 | pp512 | 248.96 ± 0.52 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan | 99 | 0 | tg128 | 29.09 ± 0.17 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan main | 99 | 1 | pp512 | 111.39 ± 2.66 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan main | 99 | 1 | tg128 | 12.33 ± 0.08 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan PR | 99 | 1 | pp512 | 238.57 ± 1.41 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan PR | 99 | 1 | tg128 | 27.16 ± 0.13 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 1 | pp512 | 355.37 ± 2.31 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 1 | tg128 | 26.23 ± 0.08 |

@soerenkampschroer

Just wanted to let you know that on macOS (Intel CPU/AMD GPU) this doesn't seem to work. I tried using flash attention and I'm getting the following error:

common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
ggml_vulkan: Compute pipeline creation failed for flash_attn_f32_f16_D128_aligned_f32accf16
ggml_vulkan: vk::Device::createComputePipeline: ErrorInitializationFailed
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found
[1]    44288 abort      ./llama-server --port 2108 -m  --n-gpu-layers 200 -fa

I'm using MoltenVK v1.3.0 and Vulkan SDK v1.4.313 with an RX 6800, macOS 15.4.1.

./test-backend-ops also crashes here:

FLASH_ATTN_EXT(hsk=64,hsv=64,nh=4,nr=1,kv=512,nb=1,mask=1,max_bias=0.000000,logit_softcap=0.000000,prec=f32,type_KV=f16,permute=[0,1,2,3]): ggml_vulkan: Compute pipeline creation failed for flash_attn_f32_f16_D64_aligned_f32acc_smallrowsf16
ggml_vulkan: vk::Device::createComputePipeline: ErrorInitializationFailed
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found
[1]    44482 abort      ./test-backend-ops

I can provide more logs here or open a separate issue if you want me to.

@jeffbolznv
Collaborator Author

Yeah, please file a new issue to track this. Do the validation layers report anything?
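In case it helps: a common way to capture validation output for a run like this is to enable the Khronos validation layer through the loader (the layer has to be installed, e.g. via the Vulkan SDK; exact behaviour under MoltenVK is an assumption on my side):

```sh
# enable the validation layer for a single run and keep its messages
VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_validation ./test-backend-ops 2> validation.log
```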
