Pinned
- flash-attention (forked from ROCm/flash-attention): Fast and memory-efficient exact attention. Python.
- snowflakedb/ArcticInference: vLLM plugin for high-throughput, low-latency inference.
- Repeerc/flash-attention-v2-RDNA3-minimal: A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA environments.