Browse free open source Python Generative AI projects below.

  • 1
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now; you can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT, and would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number of dimensions. You can use it by importing the HierarchicalCausalTransformer. For autoregressive (AR) modeling of high-resolution images, vector quantization (VQ) represents an image as a sequence of discrete codes. A short sequence length is important for an AR model because it reduces the computational cost of modeling long-range interactions between codes. However, we postulate that previous VQ cannot both shorten the code sequence and generate high-fidelity images in terms of the rate-distortion trade-off.
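
    A minimal sketch of the usage hinted at above. The class name comes from the project's own description, but every constructor argument shown here is an illustrative assumption rather than the library's confirmed signature:

    ```python
    # Hypothetical sketch -- argument names below are assumptions, not the confirmed API.
    import torch
    from rq_transformer import HierarchicalCausalTransformer

    model = HierarchicalCausalTransformer(
        num_tokens = 16384,       # assumed: size of the VQ codebook
        dim = 512,                # assumed: model width
        depth = (4, 2),           # assumed: depth of the spatial and depth stages
        max_seq_len = (1024, 4)   # assumed: spatial positions x residual quantizers
    )

    codes = torch.randint(0, 16384, (1, 1024, 4))  # a batch of residual VQ code grids
    loss = model(codes, return_loss = True)        # assumed training interface
    loss.backward()
    ```
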
  • 2
    Recurrent Interface Network (RIN)

    Implementation of Recurrent Interface Network (RIN)

    Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in Pytorch. The author unwittingly reinvented the induced set-attention block from the set transformers paper. They also combine this with the self-conditioning technique from the Bit Diffusion paper, specifically for the latents. The last ingredient seems to be a new noise function based around the sigmoid, which the author claims is better than the cosine schedule for larger images. The big surprise is that the generations can reach this level of fidelity. Will need to verify this on my own machine. Additionally, we will try adding an extra linear attention on the main branch as well as self-conditioning in the pixel space. The insight of being able to self-condition on any hidden state of the network, as well as the newly proposed sigmoid noise schedule, are the two main findings.
  • 3
    Reliable Metrics for Generative Models

    Code base for the precision, recall, density, and coverage metrics

    Reliable Fidelity and Diversity Metrics for Generative Models (ICML 2020). Devising indicative evaluation metrics for the image generation task remains an open problem. The most widely used metric for measuring the similarity between real and generated images has been the Fréchet Inception Distance (FID) score. Because it does not differentiate the fidelity and diversity aspects of the generated images, recent papers have introduced variants of precision and recall metrics to diagnose those properties separately. In this paper, we show that even the latest version of the precision and recall (Kynkäänniemi et al., 2019) metrics are not reliable yet. For example, they fail to detect the match between two identical distributions, they are not robust against outliers, and the evaluation hyperparameters are selected arbitrarily. We propose density and coverage metrics that solve the above issues.
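
    The metrics ship as the accompanying prdc package. A minimal sketch with random vectors standing in for real embeddings (in practice you would use e.g. Inception features of real and generated images):

    ```python
    import numpy as np
    from prdc import compute_prdc  # pip install prdc

    real_features = np.random.normal(size=(1000, 2048))  # stand-ins for embeddings
    fake_features = np.random.normal(size=(1000, 2048))

    metrics = compute_prdc(real_features=real_features,
                           fake_features=fake_features,
                           nearest_k=5)
    print(metrics)  # dict with 'precision', 'recall', 'density', 'coverage'
    ```
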
  • 4
    SentenceTransformers

    Multilingual sentence & image embeddings with BERT

    SentenceTransformers is a Python framework for state-of-the-art sentence, text and image embeddings. The initial work is described in our paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. You can use this framework to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared, e.g. with cosine similarity, to find sentences with a similar meaning. This can be useful for semantic textual similarity, semantic search, or paraphrase mining. The framework is based on PyTorch and Transformers and offers a large collection of pre-trained models tuned for various tasks. Further, it is easy to fine-tune your own models. Our models are evaluated extensively and achieve state-of-the-art performance on various tasks. Further, the code is tuned to provide the highest possible speed.
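
    A minimal sketch of the embed-and-compare workflow described above, using one of the published pre-trained models:

    ```python
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # a small pre-trained model

    sentences = ["A man is eating food.",
                 "Someone is having a meal.",
                 "The sky is blue today."]
    embeddings = model.encode(sentences)

    # Cosine similarity between the first sentence and the other two;
    # the paraphrase scores much higher than the unrelated sentence.
    print(util.cos_sim(embeddings[0], embeddings[1:]))
    ```
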
  • 5
    Seq2seq Chatbot for Keras

    This repository contains a new generative chatbot model

    This repository contains a new generative chatbot model based on seq2seq modeling. The trained model available here used a small dataset composed of ~8K pairs of context (the last two utterances of the dialogue up to the current point) and respective response. The data were collected from dialogues of English courses online. This trained model can be fine-tuned on a closed-domain dataset for real-world applications. The canonical seq2seq model became popular in neural machine translation, a task that has different prior probability distributions for the words belonging to the input and output sequences, since the input and output utterances are written in different languages. The architecture presented here assumes the same prior distributions for input and output words. Therefore, it shares an embedding layer (GloVe pre-trained word embeddings) between the encoding and decoding processes through the adoption of a new model.
  • 6
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    Simple Pytorch implementation of Stylegan2 that can be completely trained from the command-line, no coding needed. You will need a machine with a GPU and CUDA installed. You can also specify the location where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if the training gets cut off, it will automatically resume from the last checkpointed file. Once you have finished training, you can generate images from your latest checkpoint. If a previous checkpoint contained a better generator (which often happens, as generators can start degrading towards the end of training), you can load from a previous checkpoint with another flag. A technique used in both StyleGAN and BigGAN is truncating the latent values so that they fall close to the mean. The smaller the truncation value, the better the samples will appear, at the cost of sample variety.
  • 7
    Stable Diffusion in Docker

    Run the Stable Diffusion releases in a Docker container

    Run the Stable Diffusion releases from Huggingface in a GPU-accelerated Docker container, with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint support. By default, the pipeline uses the full model and weights, which requires a CUDA-capable GPU with 8GB+ of VRAM. It should take a few seconds to create one image. On less powerful GPUs you may need to modify some of the options; see the Examples section for more details. If you lack a suitable GPU you can set the options --device cpu and --onnx instead. Since it downloads the model from Huggingface, you will need to create a user access token in your Huggingface account. Save the user access token in a file called token.txt and make sure it is available when building the container. You can create an image from an existing image and a text prompt, or modify an existing image with its depth map and a text prompt.
  • 8
    StudioGAN

    StudioGAN is a Pytorch library providing implementations of networks

    StudioGAN is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea. Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
  • 9
    Swirl

    Swirl queries any number of data sources with APIs

    Swirl queries any number of data sources with APIs and uses spaCy and NLTK to re-rank the unified results without extracting and indexing anything! Includes zero-code configs for Apache Solr, ChatGPT, Elasticsearch, OpenSearch, PostgreSQL, Google BigQuery, RequestsGet, Google PSE, NLResearch.com, Miro & more! SWIRL adapts and distributes queries to anything with a search API - search engines, databases, noSQL engines, cloud/SaaS services etc. - and uses AI (Large Language Models) to re-rank the unified results without extracting and indexing anything. It's intended for use by developers and data scientists who want to solve multi-silo search problems, from enterprise search to new monitoring & alerting solutions that push information to users continuously. Built on the Python/Django/RabbitMQ stack, SWIRL includes connectors to Apache Solr, ChatGPT, Elasticsearch, OpenSearch, PostgreSQL, and Google BigQuery, plus a generic HTTP/GET/JSON connector with configurations for premium services.
  • 10
    Synthetic Data Vault (SDV)

    Synthetic Data Generation for tabular, relational and time series data

    The Synthetic Data Vault (SDV) is a Synthetic Data Generation ecosystem of libraries that allows users to easily learn single-table, multi-table and time-series datasets and later generate new Synthetic Data with the same format and statistical properties as the original dataset. Synthetic data can then be used to supplement, augment and in some cases replace real data when training Machine Learning models. Additionally, it enables the testing of Machine Learning or other data-dependent software systems without the risk of exposure that comes with data disclosure. Under the hood it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, we employ unique hierarchical generative modeling and recursive sampling techniques.
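
    A minimal single-table sketch, assuming the SDV 1.x API (metadata detection plus a Gaussian copula synthesizer); the toy DataFrame is illustrative:

    ```python
    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer

    real_data = pd.DataFrame({
        "age": [23, 45, 31, 52, 38],
        "income": [32000, 81000, 45000, 99000, 60000],
    })

    # Infer column types, then fit a synthesizer and sample new rows
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real_data)

    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_data)
    synthetic_data = synthesizer.sample(num_rows=100)  # same schema, new values
    ```
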
  • 11
    TFKit

    Handle multiple NLP tasks in one pipeline

    TFKit is a toolkit mainly for language generation. It leverages transformers for many tasks with different models in this all-in-one framework; all you need is a small change of config. You can use tfkit for model training and evaluation with tfkit-train and tfkit-eval. The key to combining different tasks is a shared data format: tfkit uses CSV for every task. Normally a file has two columns; the first column is the model input and the second column is the model output. Plain text needs no tokenization - there is no need to tokenize text before training or recompute tokenization, tfkit handles it for you. No header row is needed.
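
    A minimal sketch of preparing the two-column, headerless CSV described above (the file name and rows are illustrative):

    ```python
    import csv

    # First column: model input. Second column: model output.
    # Plain text, no tokenization, no header row.
    pairs = [
        ("translate this sentence", "übersetze diesen Satz"),
        ("summarize the article in one line", "a one-line summary"),
    ]

    with open("train.csv", "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(pairs)
    ```
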
  • 12
    TGAN

    Generative adversarial training for generating synthetic tabular data

    We are happy to announce that our new model for synthetic data, called CTGAN, is open-sourced. The new model is simpler and gives better performance on many datasets. TGAN is a tabular data synthesizer. It can generate fully synthetic data from real data. Currently, TGAN can generate numerical columns and categorical columns. TGAN has been developed and runs on Python 3.5, 3.6 and 3.7. Also, although it is not strictly required, the use of a virtualenv is highly recommended to avoid interfering with other software installed on the system where TGAN is run. For development, you can use make install-develop instead, in order to install all the required dependencies for testing and code linting. In order to sample new synthetic data, TGAN first needs to be fitted to existing data.
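
    A minimal sketch of the fit-then-sample workflow, assuming the import path and argument style from the project's documentation (the dataset and column indices are illustrative):

    ```python
    import pandas as pd
    from tgan.model import TGANModel  # assumed import path

    data = pd.read_csv("census.csv")   # illustrative real dataset
    continuous_columns = [0, 5]        # assumed: indices of the numerical columns

    tgan = TGANModel(continuous_columns)
    tgan.fit(data)                     # fit to real data first
    samples = tgan.sample(1000)        # then draw synthetic rows
    ```
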
  • 13
    Texar-PyTorch

    Integrating the Best of TF into PyTorch, for Machine Learning

    Texar-PyTorch is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The tool is designed for both researchers and practitioners for fast prototyping and experimentation. Texar-PyTorch was originally developed, and is actively maintained, by Petuum and CMU in collaboration with other institutes. A mirror of this repository is maintained by Petuum Open Source. Texar-PyTorch integrates many of the best features of TensorFlow into PyTorch, delivering highly usable and customizable modules superior to PyTorch native ones. Texar-PyTorch (this repo) and Texar-TF have mostly the same interfaces; both combine the best design of TF and PyTorch, covering data processing, model architectures, loss functions, training and inference algorithms, evaluation, and more.
  • 14
    Text Gen

    Almost state-of-the-art text generation library

    Almost state-of-the-art text generation library. Text gen is a Python library that allows you to build a custom text generation model with ease. Something sweet built with Tensorflow and Pytorch (coming soon). Load your data; it must be in a text format. Download the example data from the example folder. Tune your model to find the best optimizer and activation method to use.
  • 15
    TextBox

    A text generation library with pre-trained language models

    TextBox 2.0 is an up-to-date text generation library based on Python and PyTorch focusing on building a unified and standardized pipeline for applying pre-trained language models to text generation. From a task perspective, we consider 13 common text generation tasks such as translation, story generation, and style transfer, and their corresponding 83 widely-used datasets. From a model perspective, we incorporate 47 pre-trained language models/modules covering the categories of general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight models (modules). From a training perspective, we support 4 pre-training objectives and 4 efficient and robust training strategies, such as distributed data parallel and efficient generation. Compared with the previous version of TextBox, this extension mainly focuses on building a unified, flexible, and standardized framework for better supporting PLM-based text generation models.
  • 16
    TextGen

    textgen, Text Generation models

    Implementation of Text Generation models. textgen implements a variety of text generation models, including UDA, GPT2, Seq2Seq, BART, T5, SongNet and other models, out of the box. UDA: non-core word replacement. EDA: a simple data augmentation technique using synonym replacement and random word insertion, deletion, and replacement. This project draws on Google's UDA (non-core word replacement) algorithm and the EDA algorithm, using TF-IDF to replace unimportant words in a sentence with synonyms, or to randomly insert, delete, and replace words, generating new text and implementing text augmentation. The project also implements back-translation based on the Baidu translation API: Chinese sentences are first translated into English, and the English is then translated into new Chinese. It implements training and prediction of Seq2Seq, ConvSeq2Seq, and BART models based on PyTorch, which can be used for text generation tasks such as translation.
  • 17
    TorchGAN

    Research Framework for easy and efficient training of GANs

    The torchgan package consists of various generative adversarial networks and utilities that have been found useful in training them. This package provides an easy-to-use API which can be used to train popular GANs as well as develop newer variants. The core idea behind this project is to facilitate easy and rapid generative adversarial model research. TorchGAN is a Pytorch-based framework for designing and developing Generative Adversarial Networks. The framework provides building blocks for popular GANs and also allows customization for cutting-edge research. TorchGAN's modular structure makes it easy to mix and match these components, as sketched below.
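
    A condensed sketch of the dict-based Trainer pattern; the exact config keys and model arguments here are best-effort assumptions rather than verbatim from the docs:

    ```python
    from torch.optim import Adam
    from torchgan.models import DCGANGenerator, DCGANDiscriminator
    from torchgan.losses import MinimaxGeneratorLoss, MinimaxDiscriminatorLoss
    from torchgan.trainer import Trainer

    network = {
        "generator": {
            "name": DCGANGenerator,
            "args": {"out_channels": 3, "step_channels": 32},   # assumed args
            "optimizer": {"name": Adam, "args": {"lr": 1e-4}},
        },
        "discriminator": {
            "name": DCGANDiscriminator,
            "args": {"in_channels": 3, "step_channels": 32},    # assumed args
            "optimizer": {"name": Adam, "args": {"lr": 3e-4}},
        },
    }
    losses = [MinimaxGeneratorLoss(), MinimaxDiscriminatorLoss()]

    trainer = Trainer(network, losses, sample_size=64, epochs=20)
    # trainer(dataloader)  # a torch DataLoader yielding real images
    ```
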
  • 18
    bert4keras

    Keras implementation of transformers for humans

    Our light reimplementation of BERT for Keras. A cleaner, lighter version of BERT for Keras. This is the Keras version of the transformer model library re-implemented by the author, committed to combining transformers and Keras with code as clean as possible. The original intention of this project is the convenience of modification and customization, so it may be updated frequently. Load the pre-trained weights of BERT/RoBERTa/ALBERT for fine-tuning. Implements the attention masks required by language models and seq2seq. Pre-training code from scratch (supports TPU and multi-GPU; see the pre-training docs). Compatible with keras and tf.keras.
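
    A minimal sketch of loading pre-trained BERT weights and encoding a sentence; the checkpoint paths are placeholders for files you download yourself:

    ```python
    import numpy as np
    from bert4keras.models import build_transformer_model
    from bert4keras.tokenizers import Tokenizer

    config_path = "bert_config.json"     # placeholder paths to a BERT checkpoint
    checkpoint_path = "bert_model.ckpt"
    dict_path = "vocab.txt"

    tokenizer = Tokenizer(dict_path, do_lower_case=True)
    model = build_transformer_model(config_path, checkpoint_path)

    token_ids, segment_ids = tokenizer.encode("language models are fun")
    out = model.predict([np.array([token_ids]), np.array([segment_ids])])
    print(out.shape)  # (1, sequence_length, hidden_size)
    ```
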
  • 19
    cerche

    Experimental search engine for conversational AI such as parl.ai

    This is an experimental search engine for conversational AI such as parl.ai, large language models such as OpenAI GPT3, and humans (maybe).
  • 20
    commit-autosuggestions

    A tool that automatically recommends commit messages using AI

    This is an implementation of CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model. CommitBERT was accepted at the ACL workshop NLP4Prog. Have you ever hesitated to write a commit message? Now get a commit message from Artificial Intelligence! CodeBERT: A Pre-Trained Model for Programming and Natural Languages introduces a model pre-trained on a combination of Programming Language and Natural Language (PL-NL). It also introduces the problem of converting code into natural language (Code Documentation Generation). We can use CodeBERT to create a model that generates a commit message when code is added. However, most code changes do not only add code; some parts of the code are deleted as well. We plan to slowly conquer languages that are not currently supported. To run this project, you need a flask-based inference server (GPU) and a client (commit module). If you don't have a GPU, don't worry, you can use it through Google Colab.
  • 21
    hebrew-gpt_neo

    Hebrew text generation models based on EleutherAI's gpt-neo

    Hebrew text generation models based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8, which was made available to me via the TPU Research Cloud Program. Training data came from OSCAR, the Open Super-large Crawled ALMAnaCH coRpus: a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
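
    The released checkpoints can be loaded with the Hugging Face transformers pipeline; the model id below is an assumption based on the project's published models, not confirmed by this listing:

    ```python
    from transformers import pipeline

    # Model id is an assumption -- substitute the checkpoint you want to use.
    generator = pipeline("text-generation", model="Norod78/hebrew-gpt_neo-small")
    print(generator("שלום, קוראים לי", max_length=50, do_sample=True)[0]["generated_text"])
    ```
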
  • 22
    hexabot

    Hexabot is an open-source AI chatbot / agent builder.

    Hexabot is an open-source AI chatbot / agent solution. It allows you to create and manage multi-channel and multilingual chatbots / agents with ease. Hexabot is designed for flexibility and customization, offering powerful text-to-action capabilities. Originally a closed-source project (version 1), we've now open-sourced version 2 to contribute to the community and enable developers to customize and extend the platform with extensions.
  • 23
    langchain-prefect

    Tools for using Langchain with Prefect

    Large Language Models (LLMs) are interesting and useful - building apps that use them responsibly feels like a no-brainer. Tools like Langchain make it easier to build apps using LLMs. We need to know details about how our apps work, even when we want to use tools with convenient abstractions that may obfuscate those details. Prefect is built to help data people build, run, and observe event-driven workflows wherever they want. It provides a framework for creating deployments on a whole slew of runtime environments (from Lambda to Kubernetes) and is cloud-agnostic (with first-class support for AWS, GCP, and Azure). For this reason, it could be a great fit for observing apps that use LLMs. RecordLLMCalls is a ContextDecorator that can be used to track LLM calls made by Langchain LLMs as Prefect flows, and several LLM calls made via a Langchain agent can be run as Prefect subflows, as sketched below.
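
    A minimal sketch of wrapping Langchain calls with RecordLLMCalls so they surface as Prefect flows; the plugins module path is assumed from the project README:

    ```python
    from langchain.llms import OpenAI
    from langchain_prefect.plugins import RecordLLMCalls  # assumed module path

    # Every LLM call inside the context manager is tracked as a Prefect flow run.
    with RecordLLMCalls():
        llm = OpenAI(temperature=0)
        llm("What is the meaning of life?")
    ```
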
  • 24
    marqo

    Tensor search for humans

    A tensor-based search and analytics engine that seamlessly integrates with your applications, websites, and workflows. Marqo is a versatile and robust search and analytics engine that can be integrated into any website or application. Due to horizontal scalability, Marqo provides lightning-fast query times, even with millions of documents. Marqo helps you configure deep-learning models like CLIP to pull semantic meaning from images. It can seamlessly handle image-to-image, image-to-text and text-to-image search and analytics. Marqo adapts and stores your data in a fully schemaless manner. It combines tensor search with a query DSL that provides efficient pre-filtering. Tensor search allows you to go beyond keyword matching and search based on the meaning of text, images and other unstructured data. Be a part of the tribe and help us revolutionize the future of search. Whether you are a contributor, a user, or simply have questions about Marqo, we've got your back.
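
    A minimal sketch against a locally running Marqo instance (the default local endpoint is assumed; recent versions may require creating the index explicitly, as shown):

    ```python
    import marqo

    mq = marqo.Client(url="http://localhost:8882")  # assumed default local endpoint

    mq.create_index("my-first-index")
    mq.index("my-first-index").add_documents(
        [{"Title": "The Travels of Marco Polo",
          "Description": "A 13th-century travelogue describing Polo's journeys"}],
        tensor_fields=["Description"],  # fields embedded for tensor search
    )

    results = mq.index("my-first-index").search("journeys through Asia")
    print(results["hits"][0]["Title"])
    ```
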
  • 25
    min(DALL·E)

    min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch

    This is a fast, minimal port of Boris Dayma's DALL·E Mini (with mega weights). It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow and torch. The required models will be downloaded to models_root if they are not already there. Set the dtype to torch.float16 to save GPU memory. If you have an Ampere-architecture GPU you can use torch.bfloat16. Set the device to either "cuda" or "cpu". Once everything has finished initializing, call generate_image with some text as many times as you want. Use a positive seed for reproducible results. Higher values for supercondition_factor result in better agreement with the text but a narrower variety of generated images. Every image token is sampled from the top_k most probable tokens. The largest logit is subtracted from the logits to avoid infs, and the logits are then divided by the temperature. If is_seamless is true, the image grid will be tiled in token space, not pixel space.
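
    A minimal sketch assembling the options described above (models_root, dtype, device, seed, grid_size, is_seamless, temperature, top_k, supercondition_factor); the is_mega and is_reusable flags are assumptions beyond what this listing states:

    ```python
    import torch
    from min_dalle import MinDalle

    model = MinDalle(
        models_root="./pretrained",  # weights are downloaded here if missing
        dtype=torch.float16,         # saves GPU memory; bfloat16 on Ampere GPUs
        device="cuda",               # or "cpu"
        is_mega=True,                # assumed flag: use the mega weights
        is_reusable=True,            # assumed flag: keep models in memory
    )

    image = model.generate_image(
        text="a castle floating in the clouds",
        seed=42,                     # positive seed for reproducible results
        grid_size=3,                 # 3x3 grid of samples
        is_seamless=False,           # True tiles the grid in token space
        temperature=1.0,
        top_k=256,                   # sample from the 256 most probable tokens
        supercondition_factor=32,    # higher = closer to the text, less variety
    )
    image.save("generated.png")
    ```
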