vLLM is a fast and easy-to-use library for LLM inference and serving. It provides high-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more.
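As a quick illustration, here is a minimal sketch of offline batch inference. The `LLM` and `SamplingParams` entry points are assumed from the `vllm` Python package, and the model name is only an example; this page itself does not document the API.

```python
from vllm import LLM, SamplingParams

# Example prompts and sampling settings (values are illustrative).
prompts = ["The capital of France is", "vLLM is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Load a HuggingFace model and generate completions for the whole
# batch in one call; the model name here is just an example.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```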
Features
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Optimized CUDA kernels
- Seamless integration with popular HuggingFace models
- Tensor parallelism support for distributed inference (see the sketch after this list)
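The sketch below shows how tensor parallelism and parallel sampling might be combined, assuming the `tensor_parallel_size` argument on `LLM` and the `n` parameter on `SamplingParams`; treat both names and the model name as assumptions rather than documented here.

```python
from vllm import LLM, SamplingParams

# Shard the model across 2 GPUs (tensor parallelism) and request
# 4 parallel samples per prompt. Parameter names are assumed from
# the vllm Python API; the model name is illustrative.
llm = LLM(model="facebook/opt-6.7b", tensor_parallel_size=2)
params = SamplingParams(n=4, temperature=0.8)

outputs = llm.generate(["Once upon a time"], params)
for completion in outputs[0].outputs:
    print(completion.text)
```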
License
Apache License 2.0