PyTorch library of curated Transformer models and their components
OpenMMLab Model Deployment Framework
Operating LLMs in production
Uplift modeling and causal inference with machine learning algorithms
Library for OCR-related tasks powered by Deep Learning
Private Open AI on Kubernetes
DoWhy is a Python library for causal inference
Uncover insights, surface problems, monitor, and fine-tune your LLM
Deep learning optimization library: makes distributed training easy
Database system for building simpler and faster AI-powered applications
Large Language Model Text Generation Inference
A library for accelerating Transformer models on NVIDIA GPUs
Standardized Serverless ML Inference Platform on Kubernetes
Run 100B+ language models at home, BitTorrent-style
A unified framework for scalable computing
A Pythonic framework to simplify AI service building
Multi-Modal Neural Networks for Semantic Search, based on Mid-Fusion
Data manipulation and transformation for audio signal processing
State-of-the-art Parameter-Efficient Fine-Tuning
State-of-the-art diffusion models for image and audio generation
A lightweight vision library for performing large-scale object detection
Unified Model Serving Framework
LLMFlows - Simple, Explicit and Transparent LLM Apps
MII makes low-latency and high-throughput inference possible
Bring the notion of Model-as-a-Service to life