Fail to install llama #2036

Closed as not planned
@Deeffyy

Description

  1. I installed llama-cpp-python with this command: CMAKE_ARGS="-DLLAMA_AVX2=OFF -DLLAMA_F16C=OFF -DLLAMA_FMA=OFF" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

  2. After starting the wheel build, I got this error:

    Building wheels for collected packages: llama-cpp-python
    Building wheel for llama-cpp-python (pyproject.toml) ... error
    error: subprocess-exited-with-error
    error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [101 lines of output]
*** scikit-build-core 0.11.5 using CMake 3.22.1 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmppv4hm970/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/x86_64-linux-gnu-gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/x86_64-linux-gnu-g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
CMake Warning at vendor/llama.cpp/ggml/CMakeLists.txt:305 (message):
GGML build version fixed at 1 likely due to a shallow clone.

  INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  INSTALL TARGETS - target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  INSTALL TARGETS - target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  INSTALL TARGETS - target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  -- Configuring done
  -- Generating done
  CMake Warning:
    Manually-specified variables were not used by the project:
  
      LLAMA_AVX2
      LLAMA_F16C
      LLAMA_FMA
  
  
  -- Build files have been written to: /tmp/tmppv4hm970/build
  *** Building project with Ninja...
  [1/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-threading.cpp
  [2/79] ccache /usr/bin/x86_64-linux-gnu-gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-alloc.c
  [3/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-opt.cpp
  [4/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-backend.cpp
  [5/79] ccache /usr/bin/x86_64-linux-gnu-gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml.c
  [6/79] ccache /usr/bin/x86_64-linux-gnu-gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c
  [7/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu.cpp
  [8/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-hbm.cpp
  [9/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-backend-reg.cpp
  [10/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/gguf.cpp
  [11/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-traits.cpp
  [12/79] ccache /usr/bin/x86_64-linux-gnu-gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-quants.c
  [13/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp
  [14/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp
  [15/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp
  [16/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/vec.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/vec.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/vec.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/vec.cpp
  [17/79] ccache /usr/bin/x86_64-linux-gnu-gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -std=gnu11 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-quants.c
  [18/79] : && /usr/bin/x86_64-linux-gnu-g++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml-base.so -o bin/libggml-base.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o  -Wl,-rpath,"\$ORIGIN"  -lm && :
  [19/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/binary-ops.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/binary-ops.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/binary-ops.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/binary-ops.cpp
  [20/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/llamafile/sgemm.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/llamafile/sgemm.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/llamafile/sgemm.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/llamafile/sgemm.cpp
  [21/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/unary-ops.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/unary-ops.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/unary-ops.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/unary-ops.cpp
  [22/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama.cpp
  [23/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-batch.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-batch.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-batch.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-batch.cpp
  [24/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-adapter.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-adapter.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-adapter.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-adapter.cpp
  [25/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-arch.cpp
  [26/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-chat.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-chat.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-chat.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-chat.cpp
  [27/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-hparams.cpp
  [28/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -std=gnu++17 -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ops.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ops.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ops.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/ggml-cpu/ops.cpp
  [29/79] : && /usr/bin/x86_64-linux-gnu-g++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml-cpu.so -o bin/libggml-cpu.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/binary-ops.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/unary-ops.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/vec.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ops.cpp.o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/llamafile/sgemm.cpp.o  -Wl,-rpath,"\$ORIGIN"  bin/libggml-base.so  /usr/lib/gcc/x86_64-linux-gnu/11/libgomp.so  /usr/lib/x86_64-linux-gnu/libpthread.a && :
  [30/79] : && /usr/bin/x86_64-linux-gnu-g++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libggml.so -o bin/libggml.so vendor/llama.cpp/ggml/src/CMakeFiles/ggml.dir/ggml-backend-reg.cpp.o  -Wl,-rpath,"\$ORIGIN"  -ldl  bin/libggml-cpu.so  bin/libggml-base.so && :
  [31/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-io.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-io.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-io.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-io.cpp
  [32/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-context.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-context.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-context.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-context.cpp
  [33/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-memory.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-memory.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-memory.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-memory.cpp
  [34/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-impl.cpp
  [35/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-graph.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-graph.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-graph.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-graph.cpp
  [36/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-mmap.cpp
  [37/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-kv-cache.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-kv-cache.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-kv-cache.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-kv-cache.cpp
  [38/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model-loader.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model-loader.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model-loader.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-model-loader.cpp
  [39/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-grammar.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-grammar.cpp
  [40/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode-data.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/unicode-data.cpp
  [41/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-model.cpp
  FAILED: vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o
  ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-model.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-model.cpp
  x86_64-linux-gnu-g++: fatal error: Killed signal terminated program cc1plus
  compilation terminated.
  [42/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-quant.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-quant.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-quant.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-quant.cpp
  [43/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-vocab.cpp
  [44/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-sampling.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/llama-sampling.cpp
  [45/79] ccache /usr/bin/x86_64-linux-gnu-g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/. -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/../include -I/tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/unicode.cpp.o -c /tmp/pip-install-r1petiuv/llama-cpp-python_ffc9333e9d634de390999713f2d0f70f/vendor/llama.cpp/src/unicode.cpp
  ninja: build stopped: subcommand failed.
  
  *** CMake build failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
`

CPU info:

flags		: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl xtopology cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm cpuid_fault pti
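The `Killed signal terminated program cc1plus` line at step [41/79] usually means the compiler process was killed by the kernel OOM killer while building the large `llama-model.cpp` translation unit. A possible workaround, assuming the machine is simply low on RAM, is to force a single-job build (the `CMAKE_BUILD_PARALLEL_LEVEL` environment variable is honored by scikit-build-core) and, if needed, add temporary swap:

```shell
# Optional: add temporary swap if RAM is tight (run as root; sketch only)
# fallocate -l 4G /swapfile && chmod 600 /swapfile && mkswap /swapfile && swapon /swapfile

# Retry the install with a single parallel compile job so only one
# cc1plus instance is resident at a time
CMAKE_ARGS="-DLLAMA_AVX2=OFF -DLLAMA_F16C=OFF -DLLAMA_FMA=OFF" \
CMAKE_BUILD_PARALLEL_LEVEL=1 \
FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```

This is a sketch, not a confirmed fix for this issue; the swap commands and the 4G size are illustrative and depend on available disk space.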
