OpenAI API client for Kotlin with multiplatform capabilities
Python bindings for llama.cpp
Structured outputs for LLMs
LLM Frontend for Power Users
Lightweight package to simplify LLM API calls
Run local LLMs on any device; open-source
Port of Facebook's LLaMA model in C/C++
Self-hosted, community-driven, local OpenAI compatible API
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Integrate cutting-edge LLM technology quickly and easily into your app
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Open-source, high-performance AI model with advanced reasoning
Distribute and run LLMs with a single file
Replace OpenAI GPT with another LLM in your app
Powerful AI language model (MoE) optimized for efficiency/performance
An RWKV management and startup tool; fully automated, only 8 MB
Zep: A long-term memory store for LLM / Chatbot applications
A high-throughput and memory-efficient inference and serving engine
Phi-3.5 for Mac: Locally-run Vision and Language Models
Low-code app builder for RAG and multi-agent AI applications
Seamlessly integrate LLMs into scikit-learn
Framework and no-code GUI for fine-tuning LLMs
Revolutionizing Database Interactions with Private LLM Technology
⚡ Building applications with LLMs through composability ⚡
LLM abstractions that aren't obstructions