Stars
Compress and Attend Transformers (CATs) 😸
Emerge-Lab / PufferDrive
Forked from PufferAI/PufferLib
1 million FPS multi-agent driving simulator
Stable Diffusion web UI
The definitive Web UI for local AI, with powerful features and easy setup.
Open-source framework for the research and development of foundation models.
Minimal pretraining script for language modeling in PyTorch, supporting torch.compile and DDP. Includes a model implementation and a data-preprocessing script.
Inference-time scaling for LLMs-as-a-judge.
🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc.
A Survey of Reinforcement Learning for Large Reasoning Models
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
Lightweight coding agent that runs in your terminal
M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
ddidacus / llama-titans
Forked from lucidrains/titans-pytorch
Adaptation of titans-pytorch to Llama models on Hugging Face
Unofficial implementation of Titans, SOTA memory for transformers, in Pytorch
Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation
Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"
🔥 A minimal training framework for scaling FLA models
Fully open reproduction of DeepSeek-R1
A system monitoring tool powered by LLMs that provides real-time insights about your system's performance
The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention
🚀 Efficient implementations of state-of-the-art linear attention models
AlphaFold 3 inference pipeline.