Name and Version
C:\Users\xeden\Downloads\llama-b5255-bin-win-vulkan-x64>llama-cli --version
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon(TM) 8060S Graphics (AMD proprietary driver) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
version: 5255 (d24d592)
built with MSVC 19.43.34808.0 for x64
Operating systems
Windows
GGML backends
Vulkan
Hardware
CPU: AMD Ryzen AI MAX 395; Memory: 128 GB (64 GB CPU, 64 GB GPU)
Models
Qwen2.5-VL-3B-Instruct-f16.gguf
Problem description & steps to reproduce
Device
CPU AMD Ryzen AI MAX 395
Memory
128 GB (64 GB GPU, 64 GB CPU)
Operating system
Windows 11
Used llama.cpp version
llama-b5255-bin-win-vulkan-x64
Since AMD does not provide ROCm support for the Ryzen AI MAX 395, I used Vulkan as the backend. Most LLM models, including DeepSeek, run without problems.
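For context, a run with the Vulkan build looks roughly like the following. This is a sketch, not the reporter's exact command: the prompt text and token count are placeholders, and `-ngl 99` simply requests that all layers be offloaded to the Vulkan device.

```shell
# Hypothetical invocation (placeholders for prompt/length); assumes the
# llama-b5255-bin-win-vulkan-x64 binaries and the GGUF model are in the
# current directory. -ngl 99 offloads all model layers to the Vulkan GPU.
llama-cli -m Qwen2.5-VL-3B-Instruct-f16.gguf -ngl 99 -p "Hello" -n 128
```

On startup, the Vulkan backend prints the device line shown above (`ggml_vulkan: 0 = AMD Radeon(TM) 8060S Graphics ...`), which confirms the GPU was detected.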
First Bad Commit
No response
Relevant log output
The text was updated successfully, but these errors were encountered: