Open Source Python Artificial Intelligence Software - Page 11

Python Artificial Intelligence Software


Browse free open source Python Artificial Intelligence Software and projects below. Use the toggles on the left to filter open source Python Artificial Intelligence Software by OS, license, language, programming language, and project status.

  • 1
    HunyuanDiT

    Diffusion Transformer with Fine-Grained Chinese Understanding

    HunyuanDiT is a high-capability text-to-image diffusion transformer with bilingual (Chinese/English) understanding and multi-turn dialogue capability. It trains a diffusion model in latent space using a transformer backbone and integrates a Multimodal Large Language Model (MLLM) to refine captions and support conversational image generation. Adapters such as LoRA, ControlNet (pose, depth, canny), and IP-Adapter extend control over generation, and distilled versions allow it to run under constrained VRAM. The project integrates with Gradio for web demos and is compatible with the diffusers library and a command-line interface (a brief diffusers sketch follows this entry). It also supports multi-turn text-to-image (T2I) interaction, so users can iteratively refine images via dialogue.
    Downloads: 5 This Week
    Last Update:
    See Project
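
A minimal text-to-image sketch via the diffusers integration mentioned above, assuming a recent diffusers release that ships HunyuanDiTPipeline; the checkpoint id is an assumption and should be replaced with the Diffusers-format weights you actually use.

```python
# Hedged sketch: generate one image with HunyuanDiT through diffusers.
# The repo id below is assumed; swap in the Diffusers-format checkpoint you use.
import torch
from diffusers import HunyuanDiTPipeline

pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

# Prompts may be Chinese or English; this one asks for "a cat wearing an astronaut suit".
image = pipe(prompt="一只穿着宇航服的猫").images[0]
image.save("hunyuan_dit_sample.png")
```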
  • 2
    HunyuanVideo

    HunyuanVideo: A Systematic Framework For Large Video Generation Model

    HunyuanVideo is a cutting-edge framework designed for large-scale video generation, leveraging advanced AI techniques to synthesize videos from various inputs. It is implemented in PyTorch, providing pre-trained model weights and inference code for efficient deployment. The framework aims to push the boundaries of video generation quality, incorporating multiple innovative approaches to improve the realism and coherence of the generated content. FP8 model weights have been released to reduce GPU memory usage and improve efficiency, and parallel inference code speeds up sampling; utilities and tests are included.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 3
    InferSent

    InferSent sentence embeddings

    InferSent is a supervised sentence embedding method that learns universal representations from Natural Language Inference data and transfers well to many downstream tasks. It uses a BiLSTM encoder with max-pooling to produce fixed-length sentence vectors that capture semantics beyond bag-of-words statistics. Trained on large NLI datasets, the embeddings generalize across tasks like sentiment analysis, entailment, paraphrase detection, and semantic similarity with simple linear classifiers. The repository provides pretrained vectors, training scripts, and clear examples for evaluating transfer on a wide suite of benchmarks. Because the encoder is compact and language-agnostic at the interface level, it’s easy to drop into production pipelines that need robust semantic features. InferSent helped popularize the idea that supervised objectives (like NLI) can yield strong general-purpose sentence encoders, and it remains a reliable baseline against which to compare newer models.
    Downloads: 5 This Week
    Last Update:
    See Project
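
A brief encoding sketch for the InferSent encoder described above, following the pattern in the project README; the checkpoint and word-vector paths are placeholders and assume the V2 (fastText) model has been downloaded.

```python
# Hedged sketch: load a pretrained InferSent encoder and embed a few sentences.
# Paths are placeholders; `models.py` comes from the InferSent repository.
import torch
from models import InferSent

params = {"bsize": 64, "word_emb_dim": 300, "enc_lstm_dim": 2048,
          "pool_type": "max", "dpout_model": 0.0, "version": 2}
model = InferSent(params)
model.load_state_dict(torch.load("encoder/infersent2.pkl"))   # pretrained weights (placeholder path)
model.set_w2v_path("fastText/crawl-300d-2M.vec")               # fastText vectors for the V2 model

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
model.build_vocab(sentences, tokenize=True)
embeddings = model.encode(sentences, tokenize=True)            # fixed-length sentence vectors
print(embeddings.shape)
```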
  • 4
    IntentKit

    An open and fair framework for everyone to build AI agents

    IntentKit is a natural language understanding (NLU) library focused on intent recognition and entity extraction, enabling developers to build conversational AI applications.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 5
    LLM CLI

    Access large language models from the command-line

    A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
    Downloads: 5 This Week
    Last Update:
    See Project
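
A short sketch of the Python-library side of the CLI tool above (installable with pip install llm); the model alias is an assumption and requires the corresponding API key or local-model plugin to be configured.

```python
# Hedged sketch: prompt a model through the llm Python API.
# "gpt-4o-mini" is an assumed alias; `llm models` lists what is available locally.
import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt("In one sentence, why use a command-line LLM tool?")
print(response.text())

# Roughly equivalent shell usage:
#   llm -m gpt-4o-mini "In one sentence, why use a command-line LLM tool?"
```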
  • 6
    LLMFlows

    LLMFlows - Simple, Explicit and Transparent LLM Apps

    LLMFlows is a framework for building simple, explicit, and transparent applications utilizing Large Language Models (LLMs). It emphasizes clarity and control in the development process, allowing developers to create LLM-powered applications with well-defined workflows and interactions. LLMFlows supports various LLMs and provides tools to manage prompts, responses, and application logic effectively.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 7
    LangChain

    ⚡ Building applications with LLMs through composability ⚡

    Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications.
    Downloads: 5 This Week
    Last Update:
    See Project
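
A hedged composability sketch in the LCEL "pipe" style: a prompt template piped into a chat model and a string parser. The split packages (langchain-core, langchain-openai) and the model name reflect recent releases and are assumptions if you are on an older version.

```python
# Hedged sketch: compose prompt -> chat model -> string output with LangChain.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # requires OPENAI_API_KEY in the environment

prompt = ChatPromptTemplate.from_template("Explain {topic} in two sentences.")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"topic": "retrieval-augmented generation"}))
```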
  • 8
    LightFM

    A Python implementation of LightFM, a hybrid recommendation algorithm

    LightFM is a Python implementation of a number of popular recommendation algorithms for both implicit and explicit feedback, including efficient implementation of BPR and WARP ranking losses. It's easy to use, fast (via multithreaded model estimation), and produces high-quality results. It also makes it possible to incorporate both item and user metadata into the traditional matrix factorization algorithms. It represents each user and item as the sum of the latent representations of their features, thus allowing recommendations to generalize to new items (via item features) and to new users (via user features).
    Downloads: 5 This Week
    Last Update:
    See Project
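
A small sketch of the hybrid-recommender workflow described above, using the MovieLens fetcher bundled with LightFM and the WARP ranking loss.

```python
# Hedged sketch: train LightFM with a WARP loss and rank items for one user.
import numpy as np
from lightfm import LightFM
from lightfm.datasets import fetch_movielens

data = fetch_movielens(min_rating=4.0)          # downloads MovieLens 100k on first run

model = LightFM(loss="warp", no_components=30)
model.fit(data["train"], epochs=10, num_threads=4)

user_id = 3
scores = model.predict(user_id, np.arange(data["train"].shape[1]))
top_items = np.argsort(-scores)[:5]             # highest-scoring item ids for this user
print("Top item ids for user", user_id, ":", top_items)
```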
  • 9
    Logfire MCP

    The Logfire MCP Server is here

    The Logfire MCP Server is a Model Context Protocol server that allows AI applications to access OpenTelemetry traces and metrics sent to Logfire. It enables retrieval and analysis of telemetry data, enhancing debugging and observability workflows.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 10
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, new SOTA text to video generator

    Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in PyTorch. They combine pseudo-3D convolutions (axial convolutions) and temporal attention and show much better temporal fusion. Pseudo-3D convolutions aren't a new concept; they have been explored before in other contexts, for example for protein contact prediction as "dimensional hybrid residual networks". The gist of the paper comes down to: take a SOTA text-to-image model (here they use DALL-E 2, but the same learning points would easily apply to Imagen), make a few minor modifications for attention across time and other ways to skimp on the compute cost, do frame interpolation correctly, and get a great video model out. When passing in images (if one were to pretrain on images first), both temporal convolution and attention are automatically skipped. In other words, you can use this straightforwardly in your 2D U-Net and then port it over to a 3D U-Net once that phase of training is done.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 11
    Mirascope

    LLM abstractions that aren't obstructions

    Mirascope is a powerful, flexible, and user-friendly library that simplifies the process of working with LLMs through a unified interface that works across various supported providers, including OpenAI, Anthropic, Mistral, Gemini, Groq, Cohere, LiteLLM, Azure AI, Vertex AI, and Bedrock. Whether you're generating text, extracting structured information, or developing complex AI-driven agent systems, Mirascope provides the tools you need to streamline your development process and create powerful, robust applications.
    Downloads: 5 This Week
    Last Update:
    See Project
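
A hedged sketch of Mirascope's decorator-based call pattern against the OpenAI provider; the module path, decorator, and response attribute follow recent 1.x releases and may differ in older versions.

```python
# Hedged sketch: one decorator-style call via Mirascope's unified interface.
from mirascope.core import openai  # requires the openai extra and OPENAI_API_KEY


@openai.call("gpt-4o-mini")  # assumed model name
def recommend_book(genre: str) -> str:
    # The returned string becomes the prompt sent to the provider.
    return f"Recommend a {genre} book in one sentence."


response = recommend_book("science fiction")
print(response.content)
```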
  • 12
    Model Context Protocol Python SDK

    The official Python SDK for Model Context Protocol servers and clients

    The official Python SDK for the Model Context Protocol provides utilities for building both MCP servers and clients, enabling applications to expose tools, resources, and prompts to AI models and to communicate with MCP servers over the protocol.
    Downloads: 5 This Week
    Last Update:
    See Project
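
A minimal server sketch using the FastMCP helper that ships with the official SDK; the tool itself is a made-up example, and the server speaks the protocol over stdio by default.

```python
# Hedged sketch: expose one tool from a Model Context Protocol server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b


if __name__ == "__main__":
    mcp.run()  # stdio transport by default; MCP clients can now call `add`
```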
  • 13
    Nexa SDK

    Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML

    Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models. It supports text generation, image generation, vision-language models (VLM), speech-to-text (ASR), and text-to-speech (TTS) capabilities. Additionally, it offers an OpenAI-compatible API server with JSON schema mode for function calling and streaming support, as well as a user-friendly Streamlit UI. Users can run Nexa SDK on any device with a Python environment, and GPU acceleration is supported, including CUDA, Metal, and ROCm. An executable version is also available.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 14
    Onyx

    Gen-AI Chat for Teams

    Onyx is an AI platform designed to integrate seamlessly with your company's documents, applications, and team members. It offers a feature-rich chat interface and supports integration with various Large Language Models (LLMs). Onyx ensures synchronized knowledge and access controls across over 40 connectors, including Google Drive, Slack, Confluence, and Salesforce. Users can create custom AI agents with unique prompts and actions, and deploy Onyx securely on various platforms, from laptops to cloud services.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 15
    OpenAI Harmony

    Renderer for the harmony response format to be used with gpt-oss

    Harmony is a response format developed by OpenAI for use with the gpt-oss model series. It defines a structured way for language models to produce outputs, including regular text, reasoning traces, tool calls, and structured data. By mimicking the OpenAI Responses API, Harmony provides developers with a familiar interface while enabling more advanced capabilities such as multiple output channels, instruction hierarchies, and tool namespaces. The format is essential for ensuring gpt-oss models operate correctly, as they are trained to rely on this structure for generating and organizing their responses. For users accessing gpt-oss through third-party providers like HuggingFace, Ollama, or vLLM, Harmony formatting is handled automatically, but developers building custom inference setups must implement it directly. With its flexible design, Harmony serves as the foundation for creating more interpretable, controlled, and extensible interactions with open-weight language models.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 16
    OpenAssistant

    Chat-based assistant that understands tasks

    OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. In the same way that Stable Diffusion helped the world make art and images in new ways, we want to improve the world by providing amazing conversational AI. We are in the early stages of development, working from established research in applying RLHF to large language models. Open Assistant is a project organized by LAION and individuals around the world interested in bringing this technology to everyone. The code and models are licensed under the Apache 2.0 license. Open Assistant will be free to use and modify. There will be versions which will be runnable on consumer hardware. You do not need to run the project locally unless you are contributing to the development process. The website link above will take you to the public website where you can use the data collection app.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 17
    OpenCLIP

    An open source implementation of CLIP

    The goal of this repository is to enable training models with contrastive image-text supervision and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. Specifically, a ResNet-50 model trained with our codebase on OpenAI's 15 million image subset of YFCC achieves 32.7% top-1 accuracy on ImageNet. OpenAI's CLIP model reaches 31.3% when trained on the same subset of YFCC. For ease of experimentation, we also provide code for training on the 3 million images in the Conceptual Captions dataset, where a ResNet-50x4 trained with our codebase reaches 22.2% top-1 ImageNet accuracy. This codebase is work in progress, and we invite all to contribute in making it more accessible and useful. In the future, we plan to add support for TPU training and release larger models. We hope this codebase facilitates and promotes further research.
    Downloads: 5 This Week
    Last Update:
    See Project
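
A zero-shot classification sketch with OpenCLIP, closely following the project's README; the pretrained tag is one of several published weight sets and is an assumption here.

```python
# Hedged sketch: score an image against two captions with OpenCLIP.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"  # assumed pretrained tag
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)   # any local test image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", probs)
```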
  • 18
    OpenVINO Training Extensions

    Trainable models and NN optimization tools

    OpenVINO™ Training Extensions provide a convenient environment to train deep learning models and convert them using the OpenVINO™ toolkit for optimized inference. When ote_cli is installed in the virtual environment, you can use the ote command-line interface to perform various actions for templates related to the chosen task type, such as running, training, evaluating, and exporting. ote train trains a model (a particular model template) on a dataset and saves the results in two files. ote optimize optimizes a pre-trained model using NNCF or POT, depending on the model format: NNCF optimization is used for trained snapshots in a framework-specific format, while POT optimization is used for models exported in the OpenVINO IR format.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 19
    PokemonGo-Bot

    The Pokemon Go Bot, backed by the community

    PokemonGo-Bot is a project created by the PokemonGoF team. Since no public API is available for now, a patch to use a HASH-Server was applied. PokemonGoF is not part of the HASH-Server dev team and has no connection with it. Based on Python for botting on any operating system: Windows, macOS, and Linux. Multi-bot is supported. The bot can be adjusted once a certain level has been reached. A custom hash service provider can be used, if any. GPS location configuration. Search and spin Pokestops/Gyms. Diverse options for humanlike behavior, from movement to overall gameplay. Ability to add multiple coordinates to select between your favorite botting locations. Supports self-defined paths/routes. Advanced catch, evolve, and transfer configuration using our PokemonOptimizer settings. Determine which Pokeball to use. Rules to determine the use of Razz and Pinap Berries. Exchange, evolve, and catch Pokemon based on pre-configured rules. Transfer Pokemon in bulk. Auto switch mode.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 20
    PraisonAI

    PraisonAI application combines AutoGen and CrewAI or similar framework

    PraisonAI combines AutoGen, CrewAI, or similar agent frameworks into a low-code, centralized solution for building and managing multi-agent LLM systems, with a focus on simplicity, customization, and efficient human-agent collaboration. It lets you chat with your entire codebase and is designed to simplify the creation and orchestration of multi-agent systems for a wide range of LLM applications.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 21
    RealtimeSTT

    A robust, efficient, low-latency speech-to-text library

    RealtimeSTT is a Python-based realtime speech-to-text engine emphasizing low latency, wake-word detection, voice activity detection, and automatic speech segmentation. It provides asynchronous callbacks, nanosecond-precision timestamps, and CLI tools, suitable for building voice assistants, meeting transcribers, or live caption systems.
    Downloads: 5 This Week
    Last Update:
    See Project
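
A minimal live-transcription loop, assuming the AudioToTextRecorder entry point from the project's README; constructor options such as model size, wake words, and VAD thresholds vary by release and are omitted here.

```python
# Hedged sketch: print finalized speech segments from the default microphone.
from RealtimeSTT import AudioToTextRecorder


def on_text(text: str) -> None:
    print("Transcribed:", text)


if __name__ == "__main__":
    recorder = AudioToTextRecorder()   # defaults: local speech model, built-in VAD
    print("Speak now...")
    while True:
        recorder.text(on_text)         # blocks until a segment is finalized, then calls back
```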
  • 22
    STORM

    An LLM-powered knowledge curation system that researches topics

    STORM is an open-source, LLM-powered knowledge curation system from Stanford's OVAL lab. Given a topic, it researches it through Internet search, asks questions from multiple perspectives, organizes what it finds into an outline, and generates a full-length, Wikipedia-style report with citations.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 23
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included. Write a training script (e.g., train.py), then define a container with a Dockerfile that includes the training script and any dependencies; a skeletal example follows this entry.
    Downloads: 5 This Week
    Last Update:
    See Project
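
A skeletal train.py, as referenced in the entry above, showing the environment variables the toolkit exposes inside the container; the model logic is a placeholder, and the accompanying Dockerfile (not shown) would install sagemaker-training and set SAGEMAKER_PROGRAM to this script.

```python
# train.py -- hedged sketch of a SageMaker-compatible training script.
# The toolkit launches this inside the container and injects locations via env vars.
import json
import os


def main() -> None:
    # Input channel, model output directory, and hyperparameters injected by SageMaker.
    train_dir = os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training")
    model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    hyperparams = json.loads(os.environ.get("SM_HPS", "{}"))

    print(f"Training on files in {train_dir} with hyperparameters {hyperparams}")
    # ... fit an actual model here (placeholder) ...

    # Anything written under model_dir is uploaded as the model artifact.
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, "model.txt"), "w") as f:
        f.write("trained-model-placeholder")


if __name__ == "__main__":
    main()
```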
  • 24
    Sapiens

    High-resolution models for human tasks

    Sapiens is a family of high-resolution vision models from Meta AI for human-centric tasks, covering 2D pose estimation, body-part segmentation, depth estimation, and surface-normal prediction. The models are vision transformers pretrained with self-supervision on a large corpus of human images and then fine-tuned per task, operating natively at high (1K) input resolution. The repository provides pretrained checkpoints at multiple scales together with inference and fine-tuning code, and the models are designed to generalize to in-the-wild images of people.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 25
    Scrapling

    An undetectable, powerful, flexible, high-performance Python library

    Scrapling is a Python scraping framework built for the modern web, combining high-performance fetchers with a rapid parsing engine to handle dynamic sites and anti-bot countermeasures. It emphasizes being “undetectable,” flexible, and fast, offering an approachable API for both experienced scrapers and newcomers. The library targets the full scraping pipeline: session handling, fetching, rendering when needed, parsing, and export—while keeping ergonomics front and center. Community posts and guides show active usage patterns, packaging tips, and frequent releases that iterate on speed and resilience. The repository positions Scrapling as a batteries-included alternative to stitching together many small libraries. In short, it aims to make tough targets tractable while keeping scripts readable and maintainable.
    Downloads: 5 This Week
    Last Update:
    See Project