GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Run Local LLMs on Any Device. Open-source and available for commercial use
Agentic, Reasoning, and Coding (ARC) foundation models
Robust Speech Recognition via Large-Scale Weak Supervision
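A minimal transcription sketch for this kind of speech-recognition model, assuming the openai-whisper Python package and ffmpeg are installed; the checkpoint name "base" and the path "audio.mp3" are placeholders.

```python
# Minimal sketch, assuming the openai-whisper package (pip install openai-whisper)
# and ffmpeg are available; "base" and "audio.mp3" are placeholder choices.
import whisper

model = whisper.load_model("base")      # load a pretrained checkpoint
result = model.transcribe("audio.mp3")  # run speech-to-text on the audio file
print(result["text"])                   # plain-text transcript
```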
Contexts Optical Compression
Open-source, high-performance AI model with advanced reasoning
Powerful AI language model (MoE) optimized for efficiency and performance
ChatGLM-6B: An Open Bilingual Dialogue Language Model
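A minimal chat sketch in the style of the published quickstart, assuming the transformers library, a CUDA GPU, and network access to the THUDM/chatglm-6b weights; the prompt is a placeholder.

```python
# Minimal sketch, assuming transformers, a CUDA GPU, and access to the
# THUDM/chatglm-6b checkpoint; the prompt below is a placeholder.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# The model is bilingual, so either Chinese or English prompts work here.
response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
print(response)
```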
⚡ Building applications with LLMs through composability ⚡
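A small composability sketch, assuming the langchain-core and langchain-openai packages and an OPENAI_API_KEY in the environment; the model name "gpt-4o-mini" is a placeholder.

```python
# Minimal sketch of composing prompt -> model -> output parser with the pipe
# operator; assumes langchain-core, langchain-openai, and an OPENAI_API_KEY.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"text": "Composable pieces chain prompts, models, and parsers."}))
```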
Renderer for the harmony response format to be used with gpt-oss
Unified Multimodal Understanding and Generation Models
The no-nonsense RAG chunking library
MCP integration platforms that let AI agents use tools at any scale
Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Ongoing research on training transformer models at scale
Code for the paper "Language Models are Unsupervised Multitask Learners"
Inference code for CodeLlama models
Implementation of the AudioLM audio generation model in PyTorch
An AI-powered security review GitHub Action using Claude
Central interface to connect your LLMs with external data
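A minimal sketch of pointing an LLM at local files with LlamaIndex, assuming the llama-index package, an OPENAI_API_KEY, and a local data/ directory; the query string is a placeholder.

```python
# Minimal sketch, assuming the llama-index package, an OPENAI_API_KEY, and a
# local "data/" folder of documents; the query string is a placeholder.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # ingest external data
index = VectorStoreIndex.from_documents(documents)     # index it for retrieval
query_engine = index.as_query_engine()                 # connect the LLM to the index

print(query_engine.query("What do these documents cover?"))
```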
Dataset of GPT-2 outputs for research in detection, biases, and more
Pushing the Limits of Mathematical Reasoning in Open Language Models
Access large language models from the command-line
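Besides the command line, the same tool can be driven from Python; a minimal sketch assuming the llm package and a configured API key, with a placeholder model ID.

```python
# Minimal sketch using the llm package's Python API rather than the CLI;
# assumes an API key has been configured and "gpt-4o-mini" is a placeholder
# model ID.
import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Summarize what a large language model is in one sentence.")
print(response.text())
```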
Phi-3.5 for Mac: Locally-run Vision and Language Models
TextWorld is a sandbox learning environment for the training and evaluation of reinforcement learning agents on text-based games