Ling is a Mixture-of-Experts (MoE) large language model (LLM) developed and open-sourced by inclusionAI. The project offers multiple model sizes (Ling-lite, Ling-plus) and emphasizes flexibility and efficiency: the models scale across sizes, activate experts sparsely, and perform well across a range of natural language and reasoning tasks. The codebase includes model and inference code (e.g. integration with Transformers), example scripts and inference pipelines, documentation, and model download infrastructure. As more developers and researchers engage with the project, this collaborative approach should accelerate development, keep the models improving, and enable increasingly sophisticated applications.
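To illustrate what "sparse expert activation" means in an MoE layer, the sketch below routes each token through only the top-k experts chosen by a learned router. This is a simplified, hypothetical illustration, not Ling's actual implementation; all module names, sizes, and the top-k value are assumptions for demonstration.

```python
# Simplified illustration of sparse (top-k) expert routing in an MoE layer.
# Not Ling's actual code; names and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size)
        scores = self.router(x)                             # (tokens, experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out
```

Only the selected experts run for each token, which is why compute cost grows with the number of *active* experts rather than the total parameter count.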
Features
- MoE architecture with sparse expert activation
- Multiple model sizes (Ling-lite, Ling-plus)
- Strong reasoning and instruction-following performance with efficient compute usage
- Model inference and API code (e.g. integration with Transformers; see the usage sketch after this list)
- Example scripts, inference pipelines, and documentation
- Open-source model weights released under the MIT license
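The sketch below shows a minimal way to run inference through the Hugging Face Transformers integration mentioned above. The model ID `inclusionAI/Ling-lite` and the generation settings are assumptions; check the model card for the exact repository name and recommended parameters.

```python
# Minimal sketch of chat-style inference via Hugging Face Transformers.
# The repo ID and generation settings below are assumptions, not official values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-lite"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",      # let Transformers pick the checkpoint dtype
    device_map="auto",       # spread layers across available devices
    trust_remote_code=True,  # MoE models often ship custom modeling code
)

# Build a chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```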