Tags: lyhiving/LocalAI
test/fix: OSX Test Repair (mudler#1843)
* test with gguf instead of ggml; update testPrompt to match
* add a debugging line to the Dockerfile that has proven helpful recently
* fix testPrompt slightly
* sad experiment: test the GH runner without Metal
* break apart CGO_LDFLAGS (see the cgo sketch after this entry)
* switch runner
* upstream llama.cpp disables Metal on GitHub CI!
* missed a dir from clean-tests
* CGO_LDFLAGS
* tmate failure + NO_ACCELERATE
* whisper.cpp has a Metal fix
* do the exact opposite of the name of this branch, but keep it around for unrelated fixes
* add back newlines
* add tmate to linux for testing
* update fixtures
* timeout for tmate
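Several bullets in this entry revolve around cgo link flags (CGO_LDFLAGS, Metal, Accelerate). As a minimal sketch of what "breaking apart CGO_LDFLAGS" touches, the Go file below shows the two places cgo collects linker flags on macOS. The framework names mirror the commit message, but the file itself is illustrative and is not LocalAI's actual build wiring.

```go
// metal_flags_darwin.go - illustrative only; not LocalAI's real build file.
//
// cgo gathers linker flags from two sources, and both land on the same
// link line, so they can be split apart or overridden independently:
//
//  1. #cgo directives compiled into the source (below), and
//  2. the CGO_LDFLAGS environment variable at build time, e.g.
//     CGO_LDFLAGS="-framework Accelerate" go build ./...
//
// Disabling Metal on a CI runner (as upstream llama.cpp does on GitHub CI)
// then amounts to dropping "-framework Metal -framework MetalKit" from
// whichever of the two places carries it.
package main

/*
#cgo darwin LDFLAGS: -framework Foundation -framework Metal -framework MetalKit
#include <stdlib.h>
*/
import "C"

func main() {}
```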
⬆️ Update ggerganov/llama.cpp (mudler#1840)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
⬆️ Update ggerganov/llama.cpp (mudler#1750)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
fix(python): pin exllama2 (mudler#1711)
fix(python): pin python deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
⬆️ Update mudler/go-stable-diffusion (mudler#1674)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
⬆️ Update ggerganov/llama.cpp (mudler#1655)
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
feat(grpc): backend SPI pluggable in embedding mode (mudler#1621)
* run server
* grpc backend embedded support
* backend providable
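This entry makes the gRPC backend SPI "providable" when LocalAI is embedded as a library rather than run as a standalone server. The sketch below is hypothetical: the Backend interface, Register, and Lookup names are invented to illustrate what a pluggable backend provider could look like, and are not LocalAI's actual API surface.

```go
// Hypothetical sketch of a pluggable backend SPI; Backend, Register, and
// Lookup are invented names for illustration, not LocalAI's real API.
package backend

import (
	"context"
	"fmt"
	"sync"
)

// Backend is the contract an embedded gRPC backend would satisfy.
type Backend interface {
	Load(ctx context.Context, modelPath string) error
	Predict(ctx context.Context, prompt string) (string, error)
}

var (
	mu       sync.RWMutex
	registry = map[string]func() Backend{}
)

// Register lets an embedding application provide its own backend
// implementation under a name, instead of spawning an external process.
func Register(name string, factory func() Backend) {
	mu.Lock()
	defer mu.Unlock()
	registry[name] = factory
}

// Lookup resolves a provided backend by name.
func Lookup(name string) (Backend, error) {
	mu.RLock()
	defer mu.RUnlock()
	factory, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("backend %q not registered", name)
	}
	return factory(), nil
}
```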
feat(extra-backends): Improvements, adding mamba example (mudler#1618)
* feat(extra-backends): improvements
  * vllm: add max_tokens, wire up stream event
  * mamba: fixups, adding examples for mamba-chat
* examples(mamba-chat): add
* docs: update
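Since this change wires up max_tokens and streaming for the vllm backend, a minimal client request exercises both through LocalAI's OpenAI-compatible chat endpoint. In the sketch below, the base URL, port, and model name are assumptions (the model name follows the mamba-chat example's naming), not verified defaults.

```go
// Minimal sketch of a streamed chat-completion request against a local
// LocalAI instance; base URL and model name are assumptions.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":      "mamba-chat", // assumed name from the mamba-chat example
		"max_tokens": 128,          // parameter wired up by this change
		"stream":     true,         // streamed server-sent events
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})

	resp, err := http.Post(
		"http://localhost:8080/v1/chat/completions", // assumed local address
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// With stream=true the reply arrives as "data: {...}" SSE lines.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```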