Tags: utilityai/llama-cpp-rs
0.1.122: Update llama.cpp to b6482 (3d4053f) (#831, admiralakber)
0.1.121: Expose the separator token (#820, emilnorsker)
0.1.120: Multimodality improvements (#819, fellhorn)
0.1.119: Add KV cache type (K/V) configuration to LlamaContextParams (#815, mediest)
0.1.118: Multimodality support (#790, fellhorn)
0.1.117: Fix: create a fresh batch for each line in the embedding example (#802, tmetsch)
0.1.116: Bump clap from 4.5.41 to 4.5.42 (#795, dependabot)
0.1.115: Update `llama.cpp` to the latest version supporting `gpt-oss` (#797, caer)
0.1.114: Bump bindgen from 0.69.5 to 0.72.0 (#792, Kakadus)
0.1.113: Bump llama.cpp to b6002 (#786, fellhorn)
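For reference, pinning one of the tags above in a downstream project would look like the following Cargo manifest fragment. This is a sketch: the crate name `llama-cpp-2` and the optional feature flag are assumptions not stated in the tag list, so verify them against the repository's README before use.

```toml
# Cargo.toml (sketch): pin the bindings to the 0.1.122 tag listed above.
# The crate name `llama-cpp-2` and the `cuda` feature name are assumptions;
# check the utilityai/llama-cpp-rs README for the exact coordinates.
[dependencies]
llama-cpp-2 = "0.1.122"

# If the crate exposes a GPU-offload feature, it could be enabled like:
# llama-cpp-2 = { version = "0.1.122", features = ["cuda"] }
```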