Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
🦜🔗 The platform for reliable agents.
A high-throughput and memory-efficient inference and serving engine for LLMs
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
🌟 The Multi-Agent Framework: the first AI software company, working towards natural language programming
The simplest, fastest repository for training/finetuning medium-sized GPTs.
LlamaIndex is the leading framework for building LLM-powered agents over your data.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
TensorFlow code and pre-trained models for BERT
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Qlib is an AI-oriented Quant investment platform that aims to use AI tech to empower Quant Research, from exploring ideas to implementing productions. Qlib supports diverse ML modeling paradigms, i…
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Code for the paper "Language Models are Unsupervised Multitask Learners"
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Graph Neural Network Library for PyTorch
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Fast and memory-efficient exact attention
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Zipline, a Pythonic Algorithmic Trading Library
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
A very simple framework for state-of-the-art Natural Language Processing (NLP)
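The tiktoken entry above describes a BPE (byte-pair encoding) tokeniser. As an illustrative sketch only — this is not tiktoken's API or its actual (Rust-based) implementation — the core BPE idea of repeatedly merging the most frequent adjacent pair can be written in pure Python:

```python
from collections import Counter

def byte_pair_merge(tokens, num_merges):
    """Toy BPE training loop: repeatedly merge the most frequent
    adjacent token pair into a single token. Illustrative sketch of
    the algorithm tiktoken implements, not its real code or API."""
    merges = []
    for _ in range(num_merges):
        # Count every adjacent pair in the current token sequence.
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair occurs more than once; nothing left to merge
        merges.append((a, b))
        # Rewrite the sequence, replacing each (a, b) pair with a + b.
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

# Classic worked example: "aaabdaaabac" compresses via learned merges.
out, merges = byte_pair_merge(list("aaabdaaabac"), 3)
```

Real tokenisers operate on bytes with a learned merge table rather than training per input, but the merge step is the same idea.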

