Stars
Turn any PDF or image document into structured data for your AI. A powerful, lightweight OCR toolkit that bridges the gap between images/PDFs and LLMs. Supports 100+ languages.
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications built on Langchain with LLMs such as ChatGLM, Qwen, and Llama, for question answering over local knowledge bases.
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 2 hours! 🌏
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (V…
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
TradingAgents: Multi-Agents LLM Financial Trading Framework
Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud.
PyTorch implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder.
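The core move in a Vision Transformer — cutting an image into fixed-size patches and flattening each into a token — can be sketched with numpy alone (a toy illustration of the idea, not this repository's API; shapes are arbitrary):

```python
import numpy as np

def image_to_patches(img, patch):
    # img: (H, W, C). Split into non-overlapping patch x patch tiles,
    # then flatten each tile into one token vector of length patch*patch*C.
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    tiles = img.reshape(H // patch, patch, W // patch, patch, C)
    tiles = tiles.transpose(0, 2, 1, 3, 4)       # (H/p, W/p, p, p, C)
    return tiles.reshape(-1, patch * patch * C)  # (num_patches, token_dim)

tokens = image_to_patches(np.zeros((256, 256, 3)), 32)
# 256/32 = 8 patches per side -> 64 tokens, each of length 32*32*3 = 3072
```

In the full model these tokens are linearly projected, given position embeddings, and fed to a standard transformer encoder.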
Train transformer language models with reinforcement learning.
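The policy-gradient idea behind RL fine-tuning can be shown on a toy problem — REINFORCE on a 3-armed bandit with a softmax policy (a minimal numpy sketch of the principle, not this library's training loop; rewards and learning rate are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(3)                        # softmax policy parameters
true_reward = np.array([0.1, 0.2, 0.9])     # arm 2 is best

for _ in range(2000):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(3, p=p)                  # sample an action
    r = true_reward[a]
    grad = -p; grad[a] += 1.0               # grad of log pi(a | logits)
    logits += 0.1 * r * grad                # REINFORCE ascent step

best = int(np.argmax(logits))               # converges to the best arm
```

RLHF-style training applies the same gradient estimator, with a language model as the policy and a reward model scoring its generations.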
End-to-End Object Detection with Transformers
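A key ingredient of DETR-style detection is one-to-one matching of predictions to ground-truth boxes via the Hungarian algorithm; scipy's assignment solver illustrates it (cost values here are invented for the sketch):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows: predicted boxes, cols: ground-truth boxes; lower cost = better match
cost = np.array([[0.9, 0.1, 0.5],
                 [0.2, 0.8, 0.4],
                 [0.6, 0.3, 0.05]])

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
```

In the real model the cost combines classification score and box-overlap terms, and the matched pairs define the training loss.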
A Chinese financial trading framework based on multi-agent LLMs - an enhanced Chinese edition of TradingAgents.
ChatGLM3 series: Open Bilingual Chat LLMs
PyTorch implementation of the U-Net for image semantic segmentation with high quality images
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
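Whole word masking means that when one WordPiece of a word is masked, every piece of that word is masked together. A stdlib-only sketch of the grouping logic (a toy illustration, not this repository's preprocessing code):

```python
import random

def whole_word_mask(tokens, mask_prob=0.3, seed=0):
    # Group WordPiece tokens so '##' continuations stay with their head word.
    words, cur = [], []
    for t in tokens:
        if t.startswith("##") and cur:
            cur.append(t)
        else:
            if cur:
                words.append(cur)
            cur = [t]
    if cur:
        words.append(cur)
    # Mask whole words at once: all pieces of a word share one fate.
    rng = random.Random(seed)
    out = []
    for w in words:
        if rng.random() < mask_prob:
            out.extend(["[MASK]"] * len(w))
        else:
            out.extend(w)
    return out

out = whole_word_mask(["play", "##ing", "is", "fun", "to", "##day"])
```

Plain token-level masking would happily mask "##ing" while leaving "play" visible; whole word masking removes that shortcut, which is what the repository applies to Chinese words segmented into multiple characters.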
Matplotlib styles for scientific plotting
Pytorch implementation of convolutional neural network visualization techniques
GLM-4 series: Open Multilingual Multimodal Chat LMs
pycorrector is a toolkit for text error correction. It applies models such as Kenlm, T5, MacBERT, ChatGLM3, and Qwen2.5 to correction scenarios, ready to use out of the box.
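The simplest baseline for text error correction — pick the vocabulary entry closest to the misspelled token — fits in a few lines of stdlib Python (a toy baseline for intuition, not pycorrector's API; the vocabulary is invented):

```python
import difflib

vocab = ["transformer", "tokenizer", "attention"]  # hypothetical lexicon
typo = "tranformer"

# get_close_matches ranks candidates by SequenceMatcher similarity
fix = difflib.get_close_matches(typo, vocab, n=1)
# -> ["transformer"]
```

Real correctors like those in the toolkit go further, using language-model probabilities and confusion sets so that context, not just edit distance, decides the replacement.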
This repository contains hand-curated resources for Prompt Engineering, with a focus on Generative Pre-trained Transformer (GPT) models, ChatGPT, PaLM, etc.
📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉
Efficient fine-tuning of ChatGLM-6B with PEFT.
Automatic question answering over local knowledge bases, based on LangChain and LLMs such as ChatGLM-6B.
Fine-tuning the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.
Papers and implementations of UNet-related models.
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family