Open Source Python Artificial Intelligence Software for Linux - Page 3

Python Artificial Intelligence Software for Linux


Browse free open source Python Artificial Intelligence Software for Linux and projects below. Use the toggles on the left to filter open source Python Artificial Intelligence Software for Linux by OS, license, language, programming language, and project status.

  • 1
    Hunyuan3D-2.1

    From Images to High-Fidelity 3D Assets

    Hunyuan3D-2.1 is Tencent Hunyuan’s advanced 3D asset generation system that produces high-fidelity 3D models with Physically Based Rendering (PBR) textures. It is fully open source, with released model weights and training and inference code. It improves on prior versions with a PBR texture synthesis pipeline that models realistic material effects such as reflections and subsurface scattering, and it allows community fine-tuning and extension. The system provides both a shape generation module (mesh geometry) and a texture generation module, and it runs cross-platform (macOS, Windows, Linux) via Python / PyTorch, including diffusers-style APIs.
    Downloads: 23 This Week
  • 2
    Tabnine

    Vim client for TabNine

    Tabnine is an AI-powered code completion extension trusted by millions of developers around the world. Whether you’re just getting started as a developer or have been doing it for decades, Tabnine helps you code twice as fast with half the keystrokes, all in your favorite IDE. Whether you call it IntelliSense, IntelliCode, autocomplete, AI-assisted code completion, AI-powered code completion, AI copilot, AI code snippets, code suggestion, code prediction, code hinting, or content assist, you probably already know that it can save you a great deal of time. Powered by sophisticated machine learning models trained on billions of lines of trusted open source code from GitHub, Tabnine is the most advanced AI-powered code completion copilot available today. Like GitHub, it is an essential tool for professional developers.
    Downloads: 22 This Week
  • 3
    openpilot

    Open source driver assistance system

    openpilot is an open-source driver assistance system designed to improve upon the existing driver assistance of most modern cars today. openpilot gives you Tesla Autopilot-like functionality, with features such as Adaptive Cruise Control (ACC), Automated Lane Centering (ALC), Forward Collision Warning (FCW), and Lane Departure Warning (LDW), all at the push of a button. While engaged, openpilot also provides camera-based driver monitoring that works both day and night to alert distracted or sleepy drivers when their eyes are not on the road ahead. openpilot supports over 85 car makes and models of various years, and the list of supported vehicles continues to grow, including community-supported cars and features. Thousands of drivers trust openpilot and have rediscovered the joy of driving with it.
    Downloads: 22 This Week
  • 4
    SAM 3D Objects

    Models for object and human mesh reconstruction

    SAM 3D Objects is a foundation model that reconstructs full 3D geometry, texture, and spatial layout of objects and scenes from a single image. Given one RGB image and object masks (for example, from the Segment Anything family), it can generate a textured 3D mesh for each object, including pose and approximate scene layout. The model is specifically designed to be robust in real-world images with clutter, occlusions, small objects, and unusual viewpoints, where many earlier 3D-from-image systems struggle. It supports both single-object and multi-object generation, allowing you to reconstruct entire scenes rather than just isolated items. The repository provides code to run inference, a quickstart demo.py script, and environment setup instructions that connect to hosted checkpoints and configuration files. Outputs are aimed at downstream usability: the reconstructed assets are textured meshes suitable for further editing, rendering, or integration into 3D pipelines and engines.
    Downloads: 21 This Week
  • 5
    VibeVoice

    Open-source multi-speaker long-form text-to-speech model

    VibeVoice-1.5B is Microsoft’s frontier open-source text-to-speech (TTS) model designed for generating expressive, long-form, multi-speaker conversational audio such as podcasts. Unlike traditional TTS systems, it excels in scalability, speaker consistency, and natural turn-taking for up to 90 minutes of continuous speech with as many as four distinct speakers. A key innovation is its use of continuous acoustic and semantic speech tokenizers operating at an ultra-low frame rate of 7.5 Hz, enabling high audio fidelity with efficient processing of long sequences. The model integrates a Qwen2.5-based large language model with a diffusion head to produce realistic acoustic details and capture conversational context. Training involved curriculum learning with increasing sequence lengths up to 65K tokens, allowing VibeVoice to handle very long dialogues effectively. Safety mechanisms include an audible disclaimer and imperceptible watermarking in all generated audio to mitigate misuse risks.
    Downloads: 21 This Week
  • 6
    Basic Pitch

    A lightweight audio-to-MIDI converter with pitch bend detection

    Basic Pitch is a Python library for Automatic Music Transcription (AMT), using a lightweight neural network developed by Spotify's Audio Intelligence Lab. It's small, easy to use, pip install-able, and npm install-able via its sibling repo. Basic Pitch may be simple, but it is far from "basic": it is efficient and easy to use, and its multipitch support, its ability to generalize across instruments, and its note accuracy compete with much larger and more resource-hungry AMT systems. Provide a compatible audio file and basic-pitch will generate a MIDI file, complete with pitch bends. Basic Pitch is instrument-agnostic and supports polyphonic instruments, so you can freely enjoy transcription of all your favorite music, no matter what instrument is used. Basic Pitch works best on one instrument at a time.
    Downloads: 20 This Week
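    A minimal usage sketch of the Basic Pitch Python API described above; the audio and MIDI file names are placeholders, and basic-pitch is assumed to be installed via pip.

        # Transcribe an audio file to MIDI with Basic Pitch (file names are placeholders).
        from basic_pitch.inference import predict

        # predict() returns the raw model output, a PrettyMIDI object, and note events.
        model_output, midi_data, note_events = predict("guitar_riff.wav")

        # Save the transcription, pitch bends included, as a standard MIDI file.
        midi_data.write("guitar_riff.mid")
        print(f"Transcribed {len(note_events)} notes")

    The command-line tool provides the same transcription if you prefer not to write any Python.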
  • 7
    ImageAI

    A python library built to empower developers

    ImageAI is an easy-to-use Computer Vision Python library that empowers developers to easily integrate state-of-the-art Artificial Intelligence features into their new and existing applications and systems. It is used by thousands of developers, students, researchers, tutors, research groups, and businesses around the world. The project documentation lists the supported features along with links to official guides and articles on ImageAI. ImageAI provides an API to recognize 1,000 different objects in a picture using pre-trained models that were trained on the ImageNet-1000 dataset; the model implementations provided are SqueezeNet, ResNet, InceptionV3, and DenseNet. It also provides an API to detect, locate, and identify the 80 most common everyday objects in a picture using pre-trained models that were trained on the COCO dataset.
    Downloads: 20 This Week
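    A minimal sketch of the COCO object detection API described above, assuming ImageAI and its deep learning backend are installed; the RetinaNet weights file and image paths are placeholders that would need to be downloaded or supplied separately.

        # Detect common COCO objects in a photo with ImageAI (paths are placeholders).
        from imageai.Detection import ObjectDetection

        detector = ObjectDetection()
        detector.setModelTypeAsRetinaNet()
        detector.setModelPath("retinanet_pretrained_weights.pth")  # placeholder weights file
        detector.loadModel()

        detections = detector.detectObjectsFromImage(
            input_image="street.jpg",
            output_image_path="street_annotated.jpg",
        )
        for d in detections:
            print(d["name"], d["percentage_probability"], d["box_points"])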
  • 8
    OpenVoice

    Instant voice cloning by MIT and MyShell. Audio foundation model

    OpenVoice is a versatile instant voice cloning system that can replicate a speaker’s tone color from just a short audio clip and then generate speech in multiple languages. It is designed not only to match the timbre of the reference voice, but also to give granular control over style parameters such as emotion, accent, rhythm, pauses, and intonation. The model supports cross-lingual and even zero-shot cross-lingual voice cloning, so a speaker recorded in one language can be made to speak naturally in others. Architecturally, OpenVoice separates “tone color” cloning from style control, which makes it easier to keep a consistent identity while flexibly changing prosody or language. The project provides open-weight models, inference code, and examples, making it suitable both for research and for building production voice experiences. It is actively developed by MyShell, which also integrates OpenVoice into broader agent and entertainment workflows.
    Downloads: 20 This Week
  • 9
    python-telegram-bot

    A Python wrapper you can't refuse

    python-telegram-bot is a library that provides a pure Python interface for the Telegram Bot API. It supports all types and methods of Bot API 4.8 and is compatible with all Python versions 3.5+ as well as PyPy. Apart from the pure API implementation, python-telegram-bot also offers several high-level classes contained in the telegram.ext submodule, which make bot development much easier and more straightforward. python-telegram-bot is free and open source, fun to use, and fast and easy to install. Visit https://github.com/python-telegram-bot/python-telegram-bot/blob/master/examples/README.md to see official examples, or the project’s wiki at https://github.com/python-telegram-bot/python-telegram-bot/wiki/Examples to see other community-built bots.
    Downloads: 20 This Week
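    A minimal bot sketch using the telegram.ext classes mentioned above, written against the v12/v13-era API that matches Bot API 4.8; the bot token is a placeholder.

        # Reply to /start messages using python-telegram-bot's high-level telegram.ext classes.
        from telegram.ext import Updater, CommandHandler

        def start(update, context):
            update.message.reply_text("Hello! I am a python-telegram-bot example.")

        updater = Updater("YOUR_BOT_TOKEN", use_context=True)  # token is a placeholder
        updater.dispatcher.add_handler(CommandHandler("start", start))
        updater.start_polling()
        updater.idle()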
  • 10

    Presage

    the intelligent predictive text entry platform

    Presage (formerly Soothsayer) is an intelligent predictive text entry system. Presage generates predictions by modelling natural language as a combination of redundant information sources. Presage computes probabilities for words which are most likely to be entered next by merging predictions generated by the different predictive algorithms. Presage's modular and extensible architecture allows its language model to be extended and customized to utilize statistical, syntactic, and semantic predictive algorithms. Presage's predictive capabilities are implemented by predictive plugins. Predictive plugins use services provided by the platform to implement multiple prediction techniques.
    Downloads: 307 This Week
  • 11
    DeepSeek-OCR

    Contexts Optical Compression

    DeepSeek-OCR is an open-source optical character recognition solution built as part of the broader DeepSeek AI vision-language ecosystem. It is designed to extract text from images, PDFs, and scanned documents, and integrates with multimodal capabilities that understand layout, context, and visual elements beyond raw character recognition. The system treats OCR not simply as “read the text” but as “understand what the text is doing in the image”—for example distinguishing captions from body text, interpreting tables, or recognizing handwritten versus printed words. It supports local deployment, enabling organizations concerned about privacy or latency to run the pipeline on-premises rather than send sensitive documents to third-party cloud services. The codebase is written in Python with a focus on modularity: you can swap preprocessing, recognition, and post-processing components as needed for custom workflows.
    Downloads: 19 This Week
  • 12
    DeepSpeech

    Open source embedded speech-to-text engine

    DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine that can run in real time on devices ranging from a Raspberry Pi 4 to high-power GPU servers. Its model is trained with machine learning techniques based on Baidu's Deep Speech research paper, and the project uses Google's TensorFlow to make the implementation easier. A pre-trained English model (along with other important inference material) is available and can be downloaded from the DeepSpeech releases page, following the instructions in the usage docs.
    Downloads: 19 This Week
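    A minimal sketch of offline transcription with the deepspeech Python package; the model, scorer, and audio file names are placeholders, and the audio is assumed to be 16 kHz, 16-bit mono WAV, which is what the pre-trained English model expects.

        # Offline speech-to-text with a pre-trained DeepSpeech model (file names are placeholders).
        import wave
        import numpy as np
        import deepspeech

        model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
        model.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional language model scorer

        with wave.open("recording_16k_mono.wav", "rb") as wav:
            audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

        print(model.stt(audio))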
  • 13
    Dream Textures

    Stable Diffusion built-in to Blender

    Create textures, concept art, background assets, and more with a simple text prompt. Use the 'Seamless' option to create textures that tile perfectly with no visible seam, and texture entire models and scenes with 'Project Dream Texture' and depth to image. Re-style animations with the Cycles render pass, and run the models on your own machine to iterate without slowdowns from a service. Learn how to use the various configuration options to get exactly what you're looking for. Inpaint to fix up images and convert existing textures into seamless ones automatically, or outpaint to increase the size of an image by extending it in any direction. Perform style transfer and create novel animations with Stable Diffusion as a post-processing step. Dream Textures has been tested with CUDA and Apple Silicon GPUs; over 4GB of VRAM is recommended.
    Downloads: 19 This Week
  • 14
    Mistral Vibe CLI

    Minimal CLI coding agent by Mistral

    Mistral Vibe is an AI-powered “vibe-coding” command-line interface (CLI) and coding-assistant framework built by Mistral AI that lets developers write, refactor, search, and manage code through natural language and context-aware automation rather than manual typing alone. It aims to take developers out of repetitive boilerplate and keep them “in the flow”: you can ask the tool to generate functions, refactor code, search across the codebase, manipulate files, commit changes via Git, or run commands, all from a unified CLI. Behind the scenes, it leverages Mistral’s coding-optimized LLM stack (including models tuned for code understanding and generation) with project-wide context awareness: it scans your file structure, Git status, and recent history to inform suggestions so that generated code aligns with existing context.
    Downloads: 19 This Week
  • 15
    MoneyPrinterTurbo

    Generate short videos with one click using AI LLM

    MoneyPrinterTurbo is an AI-driven tool that enables users to generate high-definition short videos with minimal input. By providing a topic or keyword, the system automatically creates video scripts, sources relevant media assets, adds subtitles, and incorporates background music, resulting in a polished video ready for distribution.
    Downloads: 19 This Week
  • 16
    edge-tts

    Use Microsoft Edge's online text-to-speech service from Python

    edge-tts is a Python module and command-line tool that gives you direct access to Microsoft Edge’s online text-to-speech service without needing the Edge browser, Windows, or any API key. It wraps the same cloud voices used by Edge, exposing them through a simple CLI (edge-tts, edge-playback) and a Python API, so you can script high-quality speech generation in your own applications. The tool lets you list available voices, specify locale and voice name, and generate audio files in common formats like MP3 or WAV. It also supports generating subtitle files (such as SRT or VTT) alongside the speech, which is handy for video narration, e-learning, or accessibility workflows. From the CLI you can adjust parameters such as speaking rate, volume, and pitch, giving you some control over prosody without diving into SSML. The library is asynchronous under the hood, which makes it efficient for batch jobs or web services that need to synthesize many utterances concurrently.
    Downloads: 19 This Week
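    A minimal sketch of the Python API described above; the voice name and output file are placeholders, and an internet connection is required since synthesis happens in Microsoft's cloud.

        # Synthesize speech with Microsoft Edge's online TTS voices via edge-tts.
        import asyncio
        import edge_tts

        async def main():
            communicate = edge_tts.Communicate(
                "Hello from edge-tts!", "en-US-AriaNeural", rate="+10%"
            )
            await communicate.save("hello.mp3")

        asyncio.run(main())

    From the shell, the edge-tts command with the --list-voices flag prints the available voices and locales.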
  • 17
    HexStrike AI MCP Agents

    HexStrike AI MCP Agents is an advanced MCP server

    HexStrike AI is an MCP server that lets LLM agents autonomously operate a large catalog of offensive-security tools. Its goal is to bridge “language models” and practical pentest workflows—enumeration, exploitation, vulnerability discovery, and bug bounty reconnaissance—under safe, auditable controls. The server exposes typed tools and guardrails so agent prompts translate to concrete, parameterized actions rather than brittle shell strings. It ships with curated tool adapters, task orchestration, and guidance for connecting popular agent clients (Claude, GPT, Copilot) to a hardened execution environment. Documentation highlights the breadth of supported utilities and positions HexStrike as a research and red-team aid, not a point-and-click exploit kit. A public site and active repository activity signal an expanding community around autonomous security research agents.
    Downloads: 18 This Week
  • 18
    VGGFace2

    VGGFace2 Dataset for Face Recognition

    VGGFace2 is a large-scale face recognition dataset developed to support research on facial recognition across variations in pose, age, illumination, and identity. It consists of 3.31 million images covering 9,131 subjects, with an average of over 360 images per subject. The dataset was collected from Google Image Search, ensuring a wide diversity in ethnicity, profession, and real-world conditions. It is split into a training set with 8,631 identities and a test set with 500 identities, making it suitable for benchmarking and large-scale model training. Alongside the dataset, the repository provides pre-trained models based on ResNet-50 and SE-ResNet-50 architectures, trained with both MS-Celeb-1M pretraining and fine-tuning on VGGFace2. These models achieve strong verification performance on benchmarks such as IJB-B and include variants with lower-dimensional embeddings for compact feature representation. The project also includes preprocessing tools, face detection scripts, and more.
    Downloads: 18 This Week
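    Since the dataset is simply images grouped by identity, a plain PyTorch loader is enough to start experimenting; this sketch assumes the training split has been downloaded and extracted to ./vggface2/train with one sub-directory per identity, the layout used by the original release.

        # Load VGGFace2 images for training with a standard ImageFolder pipeline.
        from torch.utils.data import DataLoader
        from torchvision import datasets, transforms

        transform = transforms.Compose([
            transforms.Resize((224, 224)),      # input size used by the ResNet-50 baselines
            transforms.ToTensor(),
        ])
        train_set = datasets.ImageFolder("./vggface2/train", transform=transform)
        loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

        print(f"{len(train_set)} images across {len(train_set.classes)} identities")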
  • 19
    TorchRL

    A modular, primitive-first, python-first PyTorch library

    TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch. TorchRL provides PyTorch and python-first, low and high-level abstractions for RL that are intended to be efficient, modular, documented, and properly tested. The code is aimed at supporting research in RL. Most of it is written in Python in a highly modular way, such that researchers can easily swap components, transform them, or write new ones with little effort.
    Downloads: 17 This Week
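    A minimal sketch of the modular building blocks described above, assuming torchrl and a Gym/Gymnasium backend are installed; the environment name is just an example.

        # Collect a short random rollout from a Gym environment wrapped by TorchRL.
        from torchrl.envs import GymEnv

        env = GymEnv("Pendulum-v1")
        td = env.reset()                      # returns a TensorDict with the initial observation
        rollout = env.rollout(max_steps=10)   # uses a random policy when none is provided
        print(rollout)                        # a TensorDict of stacked transitions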
  • 20
    gpt-oss

    gpt-oss-120b and gpt-oss-20b are two open-weight language models

    gpt-oss is OpenAI’s open-weight family of large language models designed for powerful reasoning, agentic workflows, and versatile developer use cases. The series includes two main models: gpt-oss-120b, a 117-billion parameter model optimized for general-purpose, high-reasoning tasks that can run on a single H100 GPU, and gpt-oss-20b, a lighter 21-billion parameter model ideal for low-latency or specialized applications on smaller hardware. Both models use a native MXFP4 quantization for efficient memory use and support OpenAI’s Harmony response format, enabling transparent full chain-of-thought reasoning and advanced tool integrations such as function calling, browsing, and Python code execution. The repository provides multiple reference implementations—including PyTorch, Triton, and Metal—for educational and experimental use, as well as example clients and tools like a terminal chat app and a Responses API server.
    Downloads: 17 This Week
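    A hedged sketch of loading one of the open-weight checkpoints through Hugging Face Transformers rather than the repository's own reference implementations; the model id openai/gpt-oss-20b and the chat-style pipeline call are assumptions based on the published weights, and a recent Transformers version plus substantial GPU memory are required.

        # Generate a reply from gpt-oss-20b via the Transformers text-generation pipeline.
        from transformers import pipeline

        pipe = pipeline(
            "text-generation",
            model="openai/gpt-oss-20b",   # model id is an assumption
            torch_dtype="auto",
            device_map="auto",
        )
        messages = [{"role": "user", "content": "Summarize what MXFP4 quantization is."}]
        out = pipe(messages, max_new_tokens=128)
        print(out[0]["generated_text"][-1])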
  • 21
    HunyuanWorld 1.0

    Generating Immersive, Explorable, and Interactive 3D Worlds

    HunyuanWorld-1.0 is an open-source, simulation-capable 3D world generation model developed by Tencent Hunyuan that creates immersive, explorable, and interactive 3D environments from text or image inputs. It combines the strengths of video-based diversity and 3D-based geometric consistency through a novel framework using panoramic world proxies and semantically layered 3D mesh representations. This approach enables 360° immersive experiences, seamless mesh export for graphics pipelines, and disentangled object representations for enhanced interactivity. The architecture integrates panoramic proxy generation, semantic layering, and hierarchical 3D reconstruction to produce high-quality scene-scale 3D worlds from both text and images. HunyuanWorld-1.0 surpasses existing open-source methods in visual quality and geometric consistency, demonstrated by superior scores in BRISQUE, NIQE, Q-Align, and CLIP metrics.
    Downloads: 16 This Week
  • 22
    Qwen

    The official repo of Qwen chat & pretrained large language model

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making it a versatile tool for developers looking to integrate advanced AI functionalities into their applications.
    Downloads: 16 This Week
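    A minimal sketch of running one of the chat checkpoints through Hugging Face Transformers, following the pattern used in the project's README; the Qwen/Qwen-7B-Chat model id and the custom chat() helper come from the repository's remote code, which is why trust_remote_code is required.

        # Chat with Qwen-7B-Chat using the repository's custom generation helper.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            "Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True
        ).eval()

        response, history = model.chat(tokenizer, "Give me a one-line summary of Qwen.", history=None)
        print(response)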
  • 23
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 15 This Week
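    A hedged sketch of the "few lines of code" idea: Alpa's parallelize decorator wraps an ordinary JAX training step and handles the distributed execution. The toy linear model, loss, and SGD update here are illustrative, not part of Alpa.

        # Parallelize a simple JAX training step with Alpa's @parallelize decorator.
        import alpa
        import jax
        import jax.numpy as jnp

        @alpa.parallelize
        def train_step(params, batch):
            def loss_fn(p):
                preds = batch["x"] @ p["w"]
                return jnp.mean((preds - batch["y"]) ** 2)
            grads = jax.grad(loss_fn)(params)
            # Plain SGD update; Alpa decides how to shard the computation across devices.
            return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)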
  • 24
    RAGFlow

    RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine

    RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities backed by well-founded citations drawn from a variety of complex formatted data.
    Downloads: 15 This Week
  • 25
    WhisperLive

    A nearly-live implementation of OpenAI's Whisper

    WhisperLive is a “nearly live” implementation of OpenAI’s Whisper model focused on real-time transcription. It runs as a server–client system in which the server hosts a Whisper backend and clients stream audio to be transcribed with very low delay. The project supports multiple inference backends, including Faster-Whisper, NVIDIA TensorRT, and OpenVINO, allowing you to target GPUs and different CPU architectures efficiently. It can handle microphone input, pre-recorded audio files, and network streams such as RTSP and HLS, making it flexible for live events, monitoring, or accessibility workflows. Configuration options let you control the number of clients, maximum connection time, and threading behavior so the server can be tuned for different deployment environments. On the client side, you can set the language, whether to translate into English, model size, voice activity detection, and output recording behavior.
    Downloads: 15 This Week
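    A hedged client-side sketch of the server–client setup described above; the TranscriptionClient import and its keyword arguments are assumptions based on the project's README, and a WhisperLive server is assumed to be running on localhost:9090.

        # Stream a local audio file to a running WhisperLive server for transcription.
        from whisper_live.client import TranscriptionClient

        client = TranscriptionClient(
            "localhost", 9090,       # host and port of the WhisperLive server
            lang="en",
            translate=False,
            model="small",
            use_vad=True,
        )
        client("meeting_recording.wav")   # call with no argument to stream from the microphone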