Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
CodeGeeX2: A More Powerful Multilingual Code Generation Model
DeepSeek Coder: Let the Code Write Itself
Ling is an MoE LLM developed and open-sourced by InclusionAI
An AI-powered security review GitHub Action using Claude
GLM-4 series: Open Multilingual Multimodal Chat LMs
Tool for exploring and debugging transformer model behaviors
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline
Chinese LLaMA-2 & Alpaca-2 Large Model Phase II Project
FAIR Sequence Modeling Toolkit 2
Designed for text embedding and ranking tasks
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Chinese and English multimodal conversational language model
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
A state-of-the-art open visual language model
Open-source large language model family from Tencent Hunyuan
Chat & pretrained large vision language model
Chat & pretrained large audio language model proposed by Alibaba Cloud
A series of math-specific large language models based on the Qwen2 series
NVIDIA Isaac GR00T N1.5 is an open foundation model for generalized humanoid robot reasoning and skills
Qwen2.5-VL is the multimodal large language model series
A Family of Open Foundation Models for Code Intelligence
GPT-4V-level open-source multimodal model based on Llama3-8B
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning
Towards Real-World Vision-Language Understanding