- Peking University
- Shenzhen (UTC +08:00)
- https://yulonghui.github.io/
- https://scholar.google.com/citations?user=3eHjDDgAAAAJ&hl=zh-CN&oi=ao
- @scut_longhui
Stars
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
Ideas for projects related to Tinker
This repository contains the code for the paper "The Open Proof Corpus: Building a Large-Scale, Human-Validated Dataset of LLM-Generated Proofs".
Reference PyTorch implementation and models for DINOv3
🚀 MassGen is an open-source multi-agent scaling system that runs in your terminal, autonomously orchestrating frontier models and agents to collaborate, reason, and produce high-quality results. | …
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
Kimi K2 is the large language model series developed by the Moonshot AI team
slime is an LLM post-training framework for RL scaling.
An open-source coding LLM for software engineering tasks
Technical report of Kimina-Prover Preview.
FlashMLA: Efficient Multi-head Latent Attention Kernels
MoBA: Mixture of Block Attention for Long-Context LLMs
🤗 smolagents: a barebones library for agents that think in code.
An AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, web scraping, and large language models. The goal of this repo is to provide the si…
The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL25]
A Python module to repair invalid JSON from LLMs
AnchorAttention: Improved attention for long-context training of LLMs
Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models
Implementing Wordware's Twitter Personality Analysis (https://twitter.wordware.ai/) using APPL: A Prompt Programming Language (https://appl-team.github.io/appl/)
A series of math-specific large language models built on our Qwen2 models.
Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024]

