Stars
An open-source, lightweight, high-performance inference framework for Hailo devices
🦜🔗 The platform for reliable agents.
AI-App / OpenDevin.OpenDevin
Forked from OpenHands/OpenHands. 🐚 OpenDevin: Code Less, Make More
Integrated AI debugger and AI coder. Uses AI to write code and drive a runtime debugger
Understand the evolution of large software systems with LLMs
An AI agent built on Ollama, capable of executing Linux commands through natural language and invoking kernel hooks to delve into the underlying system.
Production-grade client-side tracing, profiling, and analysis for complex software systems.
Official inference framework for 1-bit LLMs
Lightweight, standalone C++ inference engine for Google's Gemma models.
A curated list of awesome DeSci resources, projects, events, articles and more
Nodejs extension host for vim & neovim, load extensions like VSCode and host language servers.
[EMNLP 2025 Demo] PDF scientific paper translation with preserved formats - AI-based full-text bilingual translation of PDF documents with layout fully preserved; supports services such as Google/DeepL/Ollama/OpenAI; provides CLI/GUI/MCP/Docker/Zotero
Quickly build and run kernels inside a virtualized snapshot of your live system
Minimal reproduction of DeepSeek R1-Zero
A fast, lightweight, append-only file system for NAND flash in low-power embedded systems
OSS-Fuzz - continuous fuzzing for open source software.
Learning eBPF, published by O'Reilly - out now! Here's where you'll find a VM config for the examples, and more
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.
Provides powerful tools for seccomp analysis
Development resources from World of Warcraft
git mirror of the user interface source code for World of Warcraft
NTFS filesystem userspace utilities