112 projects for "python code generator" with 3 filters applied:

  • 1
    verl

    Volcano Engine Reinforcement Learning for LLMs

    VERL is a reinforcement-learning–oriented toolkit designed to train and align modern AI systems, from language models to decision-making agents. It brings together supervised fine-tuning, preference modeling, and online RL into one coherent training stack so teams can move from raw data to aligned policies with minimal glue code. The library focuses on scalability and efficiency, offering distributed training loops, mixed precision, and replay/buffering utilities that keep accelerators busy...
    Downloads: 0 This Week
  • 2
    Style Aligned

    Official code for Style Aligned Image Generation via Shared Attention

    StyleAligned is a diffusion-model editing technique and codebase that preserves the visual “style” of an original image while applying new semantic edits driven by text. Instead of fully re-generating an image—and risking changes to lighting, texture, or rendering choices—the method aligns internal features across denoising steps so the target edit inherits the source style. This alignment acts like a constraint on the model’s evolution, steering composition, palette, and brushwork even as...
    Downloads: 0 This Week
  • 3
    PPTAgent

    PPTAgent: Generating and Evaluating Presentations

    PPTAgent is a research system for generating and evaluating slide decks that goes beyond simple text-to-slides. It follows a two-stage, edit-based workflow: first it analyzes reference presentations to infer slide roles and structure, then it drafts an outline and iteratively performs editing actions to produce new slides. The project includes both the generation agent and an evaluation framework, PPTEval, to score content quality, design, and coherence. The repository highlights the EMNLP...
    Downloads: 0 This Week
  • 4
    Detic

    Code release for "Detecting Twenty-thousand Classes using Image-level Supervision"

    Detic (“Detecting Twenty-thousand Classes using Image-level Supervision”) is a large-vocabulary object detector that scales beyond fully annotated datasets by leveraging image-level labels. It decouples localization from classification, training a strong box localizer on standard detection data while learning classifiers from weak supervision and large image-tag corpora. A shared region proposal backbone feeds a flexible classification head that can expand to tens of thousands of categories...
    Downloads: 3 This Week
  • 5
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    DiT (Diffusion Transformer) is a powerful architecture that applies transformer-based modeling directly to diffusion generative processes for high-quality image synthesis. Unlike CNN-based diffusion models, DiT represents the diffusion process in the latent space and processes image tokens through transformer blocks with learned positional encodings, offering scalability and superior sample quality. The model architecture parallels large language models but for image tokens—each block...
    Downloads: 2 This Week
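    To make the token-based design concrete, here is a small, self-contained PyTorch sketch of a transformer that denoises patch tokens of a latent image. This is not the official DiT implementation; the class name, layer sizes, and the simple additive timestep conditioning are illustrative assumptions (DiT itself uses adaptive layer-norm conditioning).

# Conceptual sketch only (not the official DiT code): a transformer over patch
# tokens of a latent image, with learned positional embeddings, predicting noise.
import torch
import torch.nn as nn

class TinyDiffusionTransformer(nn.Module):
    def __init__(self, latent_channels=4, patch=2, dim=256, depth=4, heads=4, grid=16):
        super().__init__()
        self.patchify = nn.Conv2d(latent_channels, dim, kernel_size=patch, stride=patch)
        self.pos_emb = nn.Parameter(torch.zeros(1, grid * grid, dim))
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=depth)
        self.time_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.head = nn.Linear(dim, latent_channels * patch * patch)
        self.patch = patch
        self.latent_channels = latent_channels

    def forward(self, z, t):
        # z: (B, C, H, W) latent image; t: (B,) diffusion timestep
        tokens = self.patchify(z).flatten(2).transpose(1, 2)          # (B, N, dim) patch tokens
        tokens = tokens + self.pos_emb[:, : tokens.shape[1]]          # learned positional encoding
        tokens = tokens + self.time_emb(t.float().view(-1, 1)).unsqueeze(1)  # toy timestep conditioning
        tokens = self.blocks(tokens)                                  # transformer blocks over image tokens
        out = self.head(tokens)                                       # per-token noise patch
        B, N, _ = out.shape
        h = w = int(N ** 0.5)
        out = out.view(B, h, w, self.latent_channels, self.patch, self.patch)
        out = out.permute(0, 3, 1, 4, 2, 5).reshape(B, self.latent_channels, h * self.patch, w * self.patch)
        return out  # predicted noise, same shape as z

noise_pred = TinyDiffusionTransformer()(torch.randn(2, 4, 32, 32), torch.tensor([10, 500]))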
  • 6
    OpenCV

    Open Source Computer Vision Library

    The Open Source Computer Vision Library has >2500 algorithms, extensive documentation, and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android, and iOS, as well as in the browser through JavaScript. Languages: C++, Python, Julia, JavaScript.
    Homepage: https://opencv.org
    Q&A forum: https://forum.opencv.org/
    Documentation: https://docs.opencv.org
    Source code: https://github.com/opencv
    Please pay special attention to our tutorials! https://docs.opencv.org/master...
    Downloads: 4,169 This Week
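    A minimal Python usage sketch (opencv-python bindings; the input filename is a placeholder):

# Read an image, convert to grayscale, and run Canny edge detection with OpenCV.
import cv2

img = cv2.imread("input.jpg")                    # BGR image as a NumPy array (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # reduce noise before edge detection
edges = cv2.Canny(blurred, 100, 200)             # Canny edge map
cv2.imwrite("edges.png", edges)                  # save the result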
  • 7
    Armadillo

    fast C++ library for linear algebra & scientific computing

    .../download.html
    * Documentation: http://arma.sourceforge.net/docs.html
    * Bug reports: http://arma.sourceforge.net/faq.html
    * Git repo: https://gitlab.com/conradsnicta/armadillo-code
    Downloads: 2,197 This Week
  • 8
    ChatGPT Plugins Collection

    An unofficial collection of Plugins for ChatGPT

    ... highlights practical applications of plugins across domains such as productivity, data access, and automation. The project also serves as a starting point for developers interested in building their own custom plugins, offering inspiration and code samples. With its open structure, it encourages collaboration and knowledge sharing in the growing ecosystem of ChatGPT extensions.
    Downloads: 3 This Week
  • 9
    LM Human Preferences

    Code for the paper Fine-Tuning Language Models from Human Preferences

    ... learning (or related techniques) guided by that reward model. The code is provided “as is,” and the authors note that it may no longer run out of the box due to dependency and dataset migrations. It was tested on the smallest GPT-2 model (124M parameters) in a specific environment (TensorFlow 1.x with particular CUDA/cuDNN combinations). It includes utilities for launching experiments, sampling from policies, and simple experiment orchestration.
    Downloads: 1 This Week
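    The core reward-modeling idea can be written down compactly. The repository itself is TensorFlow 1.x; the snippet below is only a conceptual PyTorch sketch of the pairwise preference objective, with made-up reward scores:

# Conceptual sketch of a pairwise (Bradley-Terry style) preference loss for a
# reward model: maximize the log-probability that the human-preferred
# completion scores higher than the rejected one. Scores here are invented.
import torch
import torch.nn.functional as F

preferred_scores = torch.tensor([1.2, 0.3, 0.9])   # reward-model scores for preferred completions
rejected_scores = torch.tensor([0.4, 0.5, -0.1])   # scores for the rejected alternatives

loss = -F.logsigmoid(preferred_scores - rejected_scores).mean()
print(float(loss))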
  • 10
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 15 This Week
  • 11
    FLUX.1 Krea

    Powerful open source image generation model

    FLUX.1 Krea [dev] is an open-source 12-billion parameter image generation model developed collaboratively by Krea and Black Forest Labs, designed to deliver superior aesthetic control and high image quality. It is a rectified-flow model distilled from the original Krea 1, providing enhanced sampling efficiency through classifier-free guidance distillation. The model supports generation at resolutions between 1024 and 1280 pixels with recommended inference steps between 28 and 32 for optimal...
    Downloads: 5 This Week
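    A hedged sketch of sampling the model through Hugging Face Diffusers' FluxPipeline; the repository id and guidance value below are assumptions to verify against the model card, while the step count follows the 28–32 range noted above:

# Assumed usage via Hugging Face Diffusers; check the model card for the exact
# repository id, license gating, and recommended settings.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",  # assumed model id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    height=1024,
    width=1024,
    num_inference_steps=28,   # within the recommended 28-32 step range
    guidance_scale=4.5,       # assumed value; tune per the model card
).images[0]
image.save("flux_krea_sample.png")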
  • 12
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1 requires...
    Downloads: 4 This Week
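    The "activate only a fraction of the weights per token" behavior comes from Mixture-of-Experts routing. The snippet below is a conceptual PyTorch sketch of top-2-of-8 expert routing, not xAI's JAX release; sizes are toy values.

# Conceptual MoE routing sketch: each token is dispatched to only its top-2 of
# 8 experts, so most expert weights stay inactive for any given token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                  # x: (tokens, dim)
        gate = F.softmax(self.router(x), dim=-1)           # routing probabilities per token
        weights, idx = gate.topk(self.top_k, dim=-1)       # keep only the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                      # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoELayer()(tokens).shape)   # torch.Size([16, 64])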
  • 13
    HunyuanVideo-I2V

    A Customizable Image-to-Video Model based on HunyuanVideo

    HunyuanVideo-I2V is a customizable image-to-video generation framework developed by Tencent, extending the capabilities of HunyuanVideo. It allows for high-quality video creation from still images, using PyTorch and providing pre-trained model weights, inference code, and customizable training options. The system includes LoRA training code for adding special effects and enhancing video realism, aiming to offer versatile and scalable solutions for generating videos from static image inputs.
    Downloads: 2 This Week
  • 14
    Qwen2.5-Coder

    Qwen2.5-Coder is the code version of Qwen2.5, the large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It includes multiple model sizes—ranging from 0.5B to 32B parameters—providing solutions for a wide array of coding needs. The model supports over 92 programming languages and offers exceptional performance in generating code, debugging, and mathematical problem-solving. Qwen2.5-Coder, with its long context length of 128K tokens, is ideal...
    Downloads: 1 This Week
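    A hedged sketch of prompting one of the instruct checkpoints through Hugging Face Transformers; the repository id is an assumption to check on the Hub, and smaller or larger sizes follow the same pattern:

# Assumed Hugging Face usage; verify the exact checkpoint name on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"   # assumed id; other sizes exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))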
  • 15
    ChatGLM2-6B

    An Open Bilingual Chat LLM | Open Source Bilingual Conversation LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English languages, making it a versatile tool for various multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and...
    Downloads: 0 This Week
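    A short loading sketch, assuming the THUDM/chatglm2-6b checkpoint on Hugging Face and a CUDA GPU; trust_remote_code pulls in the repository's custom modeling code, which exposes a chat() helper:

# Assumed setup: the THUDM/chatglm2-6b Hugging Face checkpoint and a GPU with
# enough memory for the half-precision model.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() returns the reply plus the running conversation history.
response, history = model.chat(tokenizer, "Hello, please introduce yourself briefly.", history=[])
print(response)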
  • 16
    GLM-4-32B-0414

    Open Multilingual Multimodal Chat LMs

    GLM-4-32B-0414 is a powerful open-source large language model featuring 32 billion parameters, designed to deliver performance comparable to leading models like OpenAI’s GPT series. It supports multilingual and multimodal chat capabilities with an extensive 32K token context length, making it ideal for dialogue, reasoning, and complex task completion. The model is pre-trained on 15 trillion tokens of high-quality data, including substantial synthetic reasoning datasets, and further enhanced...
    Downloads: 0 This Week
  • 17
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ... competition across channels. The result is a convnet that competes strongly with transformer architectures on recognition benchmarks while being efficient and hardware-friendly. The repository provides official PyTorch implementations for multiple model sizes (Atto, Femto, Pico, up through Huge), conversion from JAX weights, code for pretraining/fine-tuning, and pretrained checkpoints. It supports both self-supervised pretraining and supervised fine-tuning.
    Downloads: 4 This Week
  • 18
    UnionML

    Build and deploy machine learning microservices

    Creating ML apps should be simple and frictionless. UnionML is an open-source Python framework built on top of Flyte™, unifying the complex ecosystem of ML tools into a single interface. Combine the tools that you love using a simple, standardized API so you can stop writing so much boilerplate and focus on what matters: the data and the models that learn from them. Fit the rich ecosystem of tools and frameworks into a common protocol for machine learning. Using industry-standard machine...
    Downloads: 0 This Week
  • 19
    d2l-zh

    Chinese-language edition of Dive into Deep Learning

    d2l-zh is the Chinese-language edition of Dive into Deep Learning, an interactive, open-source deep learning textbook that combines code, math, and explanatory text. It features runnable Jupyter notebooks compatible with multiple frameworks (e.g., PyTorch, MXNet, TensorFlow), comprehensive theoretical analysis, and exercises. It has been adopted in over 70 countries and is used by more than 500 universities to teach deep learning.
    Downloads: 0 This Week
  • 20
    ConvNeXt

    Code release for ConvNeXt model

    ConvNeXt is a modernized convolutional neural network (CNN) architecture designed to rival Vision Transformers (ViTs) in accuracy and scalability while retaining the simplicity and efficiency of CNNs. It revisits classic ResNet-style backbones through the lens of transformer design trends—large kernel sizes, inverted bottlenecks, layer normalization, and GELU activations—to bridge the performance gap between convolutions and attention-based models. ConvNeXt’s clean, hierarchical structure...
    Downloads: 2 This Week
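    Besides the official repository, ConvNeXt variants also ship with torchvision; a minimal inference sketch using pretrained ImageNet weights (the image path is a placeholder):

# Inference with torchvision's ConvNeXt-Tiny and its bundled preprocessing.
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ConvNeXt_Tiny_Weights.DEFAULT
model = models.convnext_tiny(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("photo.jpg")                       # (C, H, W) uint8 tensor (placeholder path)
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))    # (1, 1000) ImageNet class scores
print(weights.meta["categories"][logits.argmax().item()])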
  • 21
    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction. The repository contains demonstration models of different widths, fine-tuned variants (e.g...
    Downloads: 1 This Week
  • 22
    Guided Diffusion

    Codebase for "Diffusion Models Beat GANs on Image Synthesis"

    The guided-diffusion repository is centered on diffusion models for image synthesis, with a focus on classifier guidance and improvements over earlier diffusion frameworks. It is derived from OpenAI’s improved-diffusion work, enhanced to include guided generation where a classifier (or other guidance mechanism) can steer sampling toward desired classes or attributes. The code provides model definitions (UNet, diffusion schedules), sampling and training scripts, and utilities for guidance...
    Downloads: 1 This Week
  • 23
    Mask2Former

    Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"

    Mask2Former is a unified segmentation architecture that handles semantic, instance, and panoptic segmentation with one model and one training recipe. Its core idea is to cast segmentation as mask classification: a transformer decoder predicts a set of mask queries, each with an associated class score, eliminating the need for task-specific heads. A pixel decoder fuses multi-scale features and feeds masked attention in the transformer so each query focuses computation on its current spatial...
    Downloads: 2 This Week
  • 24
    ReinventCommunity

    Jupyter Notebook tutorials for REINVENT 3.2

    This repository is a collection of useful Jupyter notebooks, code snippets, and example JSON files illustrating the use of REINVENT 3.2.
    Downloads: 0 This Week
  • 25
    PyCls

    Codebase for Image Classification Research, written in PyTorch

    pycls is a focused PyTorch codebase for image classification research that emphasizes reproducibility and strong, transparent baselines. It popularized families like RegNet and supports classic architectures (ResNet, ResNeXt) with clean implementations and consistent training recipes. The repository includes highly tuned schedules, augmentations, and regularization settings that make it straightforward to match reported accuracy without guesswork. Distributed training and mixed precision are...
    Downloads: 4 This Week