Pinned

  1. flash-attention (Public)

    Forked from ROCm/flash-attention

    Fast and memory-efficient exact attention

    Python

  2. nano-vllm (Public)

    Forked from GeeeekExplorer/nano-vllm

    Nano vLLM

    Python

  3. snowflakedb/ArcticInference (Public)

    ArcticInference: vLLM plugin for high-throughput, low-latency inference

    Python · 280 stars · 37 forks

  4. Repeerc/flash-attention-v2-RDNA3-minimal (Public)

    A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA environments.

    Python · 48 stars · 7 forks