Name and Version
$./llama-cli --version
load_backend: loaded CPU backend from /app/libggml-cpu-icelake.so
version: 5280 (27aa259)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
GGML backends
CPU
Hardware
8x RTX 3090
Models
Meta Llama-3.2-1B-Instruct-F16.gguf
Problem description & steps to reproduce
I tried to quantize the F16 GGUF of Llama-3.2-1B-Instruct to Q4_K_M, but I get a segmentation fault during quantization. Could you advise how to fix this error?
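For reference, this is a minimal sketch of the quantization step that crashes, using the stock llama-quantize tool shipped with llama.cpp (the exact file names are assumptions based on the model listed above):

$ ./llama-quantize Meta-Llama-3.2-1B-Instruct-F16.gguf Meta-Llama-3.2-1B-Instruct-Q4_K_M.gguf Q4_K_M

The segmentation fault occurs while this command is running.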