
Pinned

  1. vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 60.4k stars · 10.6k forks

  2. llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.1k stars · 260 forks

  3. recipes

    Common recipes to run vLLM

    Jupyter Notebook · 169 stars · 58 forks

Repositories

Showing 10 of 26 repositories
  • vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 60,378 stars · Apache-2.0 · 10,626 forks · 1,848 open issues (32 need help) · 1,178 open pull requests · Updated Oct 18, 2025
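As a quick orientation to what this repository provides, the commands below sketch one common way to try vLLM: installing it and launching its OpenAI-compatible HTTP server. This is a minimal example, not the only deployment path; the model ID is a placeholder (any supported Hugging Face model can be substituted), and a suitable accelerator environment is assumed.

```shell
# Install vLLM (assumes a Linux host with a supported GPU;
# see the project docs for CPU and other hardware backends)
pip install vllm

# Start an OpenAI-compatible API server.
# "Qwen/Qwen2.5-0.5B-Instruct" is an example model ID, not a requirement.
vllm serve Qwen/Qwen2.5-0.5B-Instruct

# In another terminal, query the standard chat completions route:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the server speaks the OpenAI API, existing OpenAI client libraries can be pointed at it by changing only the base URL.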
  • vllm-gaudi

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 12 stars · Apache-2.0 · 51 forks · 1 open issue · 54 open pull requests · Updated Oct 18, 2025
  • guidellm

    Evaluate and enhance your LLM deployments for real-world inference needs

    Python · 645 stars · Apache-2.0 · 89 forks · 84 open issues (5 need help) · 20 open pull requests · Updated Oct 18, 2025
  • tpu-inference

    TPU inference for vLLM, with unified JAX and PyTorch support

    Python · 97 stars · Apache-2.0 · 9 forks · 5 open issues · 30 open pull requests · Updated Oct 18, 2025
  • vllm-spyre

    Community maintained hardware plugin for vLLM on Spyre

    Python · 35 stars · Apache-2.0 · 26 forks · 5 open issues · 15 open pull requests · Updated Oct 17, 2025
  • ci-infra

    Code for the vLLM CI and performance benchmark infrastructure

    HCL · 23 stars · 42 forks · 1 open issue · 21 open pull requests · Updated Oct 17, 2025
  • speculators

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 60 stars · Apache-2.0 · 11 forks · 4 open issues (2 need help) · 17 open pull requests · Updated Oct 17, 2025
  • aibrix

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,303 stars · Apache-2.0 · 468 forks · 232 open issues (21 need help) · 20 open pull requests · Updated Oct 17, 2025
  • llm-compressor

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2,102 stars · Apache-2.0 · 260 forks · 60 open issues (12 need help) · 41 open pull requests · Updated Oct 18, 2025
  • flash-attention (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 96 stars · BSD-3-Clause · 2,066 forks · 0 open issues · 17 open pull requests · Updated Oct 17, 2025