Showing 299 open source projects for "nvidia"

  • 1
    NVIDIA NeMo

    Toolkit for conversational AI

    NVIDIA NeMo, part of the NVIDIA AI platform, is a toolkit for building new state-of-the-art conversational AI models. NeMo has separate collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) models. Each collection consists of prebuilt modules that include everything needed to train on your data. Every module can easily be customized, extended, and composed to create new conversational AI model architectures. Conversational AI...
    Downloads: 1 This Week
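    A rough usage sketch of the collection-based API described above: load a pretrained model from the ASR collection and transcribe a local file. The model name and audio path are illustrative placeholders, and exact method signatures vary by NeMo release.

      import nemo.collections.asr as nemo_asr

      # Load a pretrained checkpoint from the ASR collection (model name is illustrative).
      asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_conformer_ctc_small")

      # Transcribe a local WAV file (path is a placeholder).
      transcripts = asr_model.transcribe(["sample.wav"])
      print(transcripts[0])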
  • 2
    NVIDIA AgentIQ

    The NVIDIA AgentIQ toolkit is an open-source library

    NVIDIA AgentIQ is an open-source toolkit designed to efficiently connect, evaluate, and accelerate teams of AI agents. It provides a framework-agnostic platform that integrates seamlessly with various data sources and tools, enabling developers to build composable and reusable agentic workflows. By treating agents, tools, and workflows as simple function calls, AgentIQ facilitates rapid development and optimization of AI-driven applications, enhancing collaboration and efficiency in complex...
    Downloads: 0 This Week
  • 3
    NVIDIA Merlin

    Library providing end-to-end GPU-accelerated recommender systems

    NVIDIA Merlin is an open-source library that accelerates recommender systems on NVIDIA GPUs. The library enables data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools to address common feature engineering, training, and inference challenges. Each stage of the Merlin pipeline is optimized to support hundreds of terabytes of data, which is all accessible through easy-to-use APIs. For more information, see NVIDIA Merlin...
    Downloads: 0 This Week
  • 4
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build a secure, privacy-preserving offering for distributed multi-party collaboration. NVIDIA FLARE is built on a componentized architecture that allows you to take federated learning...
    Downloads: 0 This Week
  • 5
    NVIDIA GPU Exporter

    NVIDIA GPU exporter for Prometheus using the nvidia-smi binary

    NVIDIA GPU exporter for Prometheus, using the nvidia-smi binary to gather metrics. There are many NVIDIA GPU exporters out there; however, they have problems such as not being maintained, not providing pre-built binaries, depending on Linux and/or Docker, targeting enterprise setups (DCGM), and so on.
    Downloads: 6 This Week
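    For context, the sketch below shows the kind of nvidia-smi CSV query such an exporter wraps; the exporter itself exposes these values as Prometheus metrics rather than printing them, and the queried fields are only a sample.

      import csv
      import subprocess

      # Query per-GPU metrics from nvidia-smi in machine-readable CSV form.
      query = "index,name,utilization.gpu,memory.used,memory.total,temperature.gpu"
      out = subprocess.run(
          ["nvidia-smi", f"--query-gpu={query}", "--format=csv,noheader,nounits"],
          capture_output=True, text=True, check=True,
      ).stdout

      # One CSV row per GPU, in the same order as the query fields.
      for row in csv.reader(out.splitlines()):
          index, name, util, mem_used, mem_total, temp = [c.strip() for c in row]
          print(f"gpu{index} ({name}): util={util}% mem={mem_used}/{mem_total} MiB temp={temp} C")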
  • 6
    NVIDIA GPU Operator

    NVIDIA GPU Operator creates/configures/manages GPUs atop Kubernetes

    Kubernetes provides access to special hardware resources such as NVIDIA GPUs, NICs, InfiniBand adapters, and other devices through the device plugin framework. However, configuring and managing nodes with these hardware resources requires configuring multiple software components such as drivers, container runtimes, and other libraries, which is difficult and error-prone. The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA...
    Downloads: 3 This Week
  • 7
    NVIDIA Linux Open GPU Kernel Module

    NVIDIA Linux open GPU kernel module source

    This is the source release of the NVIDIA Linux open GPU kernel modules, version 530.41.03. Note that the kernel modules built here must be used with GSP firmware and user-space NVIDIA GPU driver components from a corresponding 530.41.03 driver release. Currently, the kernel modules can be built for x86_64 or aarch64. If cross-compiling, set the appropriate variables on the make command line. Any reasonably modern version of GCC or Clang can be used to build the kernel modules. Note that the kernel...
    Downloads: 2 This Week
  • 8
    NVIDIA device plugin for Kubernetes

    The NVIDIA device plugin for Kubernetes is a DaemonSet that allows you to automatically expose the number of GPUs on each node of your cluster, keep track of the health of your GPUs, and run GPU-enabled containers in your Kubernetes cluster.
    Downloads: 0 This Week
  • 9
    NVIDIA Container Toolkit

    Build and run Docker containers leveraging NVIDIA GPUs

    The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers. The toolkit includes a container runtime library and utilities to automatically configure containers to leverage NVIDIA GPUs. Make sure you have installed the NVIDIA driver and Docker engine for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system; only the NVIDIA driver needs to be installed. The NVIDIA Container Toolkit supports different container engines...
    Downloads: 1 This Week
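    A minimal sketch, assuming Docker 19.03+ with the toolkit's runtime configured: launch a CUDA base image with all host GPUs visible and run nvidia-smi inside the container. The image tag is illustrative.

      import subprocess

      # Expose all host GPUs to the container via Docker's --gpus flag and
      # verify they are visible by running nvidia-smi inside it.
      subprocess.run(
          [
              "docker", "run", "--rm", "--gpus", "all",
              "nvidia/cuda:12.2.0-base-ubuntu22.04",  # illustrative image tag
              "nvidia-smi",
          ],
          check=True,
      )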
  • 10
    Downloads: 39 This Week
  • 11
    This repository includes patched legacy NVIDIA drivers for newer Linux kernels (5.8 - 6.6). Works on all Linux distros.
    Downloads: 16 This Week
  • 12
    AimAhead

    The fastest AI powered Aimbot

    AimAhead is an AI-powered aim assist tool designed for high-speed target acquisition. It captures the screen, processes the image through a selected AI model to detect enemies, and then aims towards them. Optimized for NVIDIA graphics cards, AimAhead converts ONNX models to TensorRT engine files for enhanced performance, achieving between 100 and 200 cycles per second depending on the model used.
    Downloads: 289 This Week
  • 13
    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40X faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers...
    Downloads: 39 This Week
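    A minimal sketch of the typical ONNX-to-engine flow with the TensorRT Python API. The calls follow the TensorRT 8.x-era interface and the file names are placeholders; details vary by version.

      import tensorrt as trt

      logger = trt.Logger(trt.Logger.WARNING)
      builder = trt.Builder(logger)
      network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
      parser = trt.OnnxParser(network, logger)

      # Parse a trained ONNX model (path is a placeholder).
      with open("model.onnx", "rb") as f:
          if not parser.parse(f.read()):
              raise RuntimeError([parser.get_error(i) for i in range(parser.num_errors)])

      # Build a serialized engine, allowing FP16 kernels for lower-precision inference.
      config = builder.create_builder_config()
      config.set_flag(trt.BuilderFlag.FP16)
      engine_bytes = builder.build_serialized_network(network, config)
      with open("model.engine", "wb") as f:
          f.write(engine_bytes)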
  • 14
    XMRig

    RandomX, KawPow, CryptoNight, AstroBWT and GhostRider unified miner

    XMRig is a high-performance, open-source, cross-platform RandomX, KawPow, CryptoNight, and AstroBWT unified CPU/GPU miner, RandomX benchmark, and stratum proxy. Official binaries are available for Windows, Linux, macOS, and FreeBSD. The preferred way to configure the miner is the JSON config file, as it is more flexible and human-friendly. The command-line interface...
    Downloads: 40 This Week
  • 15
    NVTOP

    GPU & Accelerator process monitoring for AMD, Apple, Huawei, Intel

    NVTOP stands for Neat Videocard TOP, an (h)top-like task monitor for GPUs and accelerators. It can handle multiple GPUs and print information about them in an htop-familiar way. Currently supported vendors are AMD (Linux AMD GPU driver), Apple (limited M1 & M2 support), Huawei (Ascend), Intel (Linux i915 driver), NVIDIA (Linux proprietary drivers), and Qualcomm Adreno (Linux MSM driver).
    Downloads: 24 This Week
  • 16
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ... where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Support for a variety of frameworks, operating systems and hardware platforms. Built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 18 This Week
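    A minimal sketch of GPU-preferred inference with the onnxruntime Python package. The model path and input shape are placeholders; the CUDA provider is only used if a GPU-enabled build is installed, otherwise the session falls back to CPU.

      import numpy as np
      import onnxruntime as ort

      # Prefer the CUDA execution provider and fall back to CPU if it is unavailable.
      session = ort.InferenceSession(
          "model.onnx",
          providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
      )

      # Feed a dummy tensor using the model's own input name and print the output shape.
      input_name = session.get_inputs()[0].name
      x = np.random.rand(1, 3, 224, 224).astype(np.float32)
      outputs = session.run(None, {input_name: x})
      print(outputs[0].shape)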
  • 17
    Jellyfin Android TV

    Android TV Client for Jellyfin

    Jellyfin Android TV is a Jellyfin client for Android TV, Nvidia Shield, and Amazon Fire TV devices. We welcome all contributions and pull requests! If you have a larger feature in mind please open an issue so we can discuss the implementation before you start. Jellyfin is the volunteer-built media solution that puts you in control of your media. Stream to any device from your own server, with no strings attached. Your media, your server, your way. Jellyfin enables you to collect, manage...
    Downloads: 25 This Week
  • 18
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate...
    Downloads: 11 This Week
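    A minimal sketch of the ahead-of-time compile step described above, assuming a CUDA-capable GPU and torchvision for a sample model; argument names follow recent torch_tensorrt releases and may differ in yours.

      import torch
      import torch_tensorrt
      from torchvision import models

      model = models.resnet18(weights=None).eval().cuda()

      # Explicit AOT compile: fix the input spec and allow FP16 TensorRT kernels.
      trt_module = torch_tensorrt.compile(
          model,
          inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
          enabled_precisions={torch.float16},
      )

      out = trt_module(torch.randn(1, 3, 224, 224, device="cuda"))
      print(out.shape)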
  • 19
    Sunshine

    Self-hosted game stream host for Moonlight

    Sunshine is an open-source self‑hosted cloud gaming server that implements NVIDIA’s GameStream protocol. Compatible with Moonlight clients across platforms, it supports low‑latency streaming via software or hardware encoding (AMD/Intel/NVIDIA) and offers a browser‑based control UI for pairing.
    Downloads: 9 This Week
  • 20
    Zenith

    Sort of like top or htop but with zoom-able charts, CPU, GPU

    In-terminal graphical metrics for your *nix system, written in Rust. The Makefile provides for building fully static versions on Linux against the musl C library. It requires musl-gcc to be installed on the system. Install the "musl-tools" package on Debian/Ubuntu derivatives, "musl-gcc" on Fedora, and the equivalent on other distributions from their standard repos. If one needs to build with NVIDIA support in a virtual environment, then it requires some more setup, since typically the VM software...
    Downloads: 4 This Week
  • 21
    Nvitop

    An interactive NVIDIA-GPU process viewer and beyond

    nvitop is an interactive NVIDIA device and process monitoring tool. It has a colorful and informative interface that continuously updates the status of the devices and processes. As a resource monitor, it includes many features and options, such as tree-view, environment variable viewing, process filtering, process metrics monitoring, etc. Beyond that, the package also ships a CUDA device selection tool nvisel for deep learning researchers. It also provides handy APIs that allow developers...
    Downloads: 4 This Week
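    The developer APIs mentioned above can be used roughly as follows; this is a sketch based on nvitop's Device class, and method names may vary by version.

      from nvitop import Device

      # Enumerate visible NVIDIA devices and print a few live metrics per GPU.
      for device in Device.all():
          print(
              device.index,
              device.name(),
              f"util={device.gpu_utilization()}%",
              f"mem={device.memory_used_human()}/{device.memory_total_human()}",
          )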
  • 22
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    .... InvokeAI offers an industry-leading web interface and an interactive command-line interface, and also serves as the foundation for multiple commercial products. This fork is supported across Linux, Windows, and Macintosh. Linux users can use either an NVIDIA-based card (with CUDA support) or an AMD card (using the ROCm driver). We do not recommend the GTX 1650 or 1660 series video cards. They are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
    Downloads: 15 This Week
  • 23
    waifu2x ncnn Vulkan

    waifu2x converter, ncnn version; runs fast on GPUs with Vulkan

    ncnn implementation of the waifu2x converter. Runs fast on Intel/AMD/NVIDIA/Apple Silicon GPUs with the Vulkan API. waifu2x-ncnn-vulkan uses the ncnn project as the universal neural network inference framework.
    Downloads: 10 This Week
  • 24
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++ API...
    Downloads: 3 This Week
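    A minimal sketch of the PyTorch-facing API, assuming an FP8-capable (Hopper-class) GPU; the recipe arguments are illustrative and vary by Transformer Engine version.

      import torch
      import transformer_engine.pytorch as te
      from transformer_engine.common import recipe

      # A TE Linear layer executed under FP8 autocast with a delayed-scaling recipe.
      fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)
      layer = te.Linear(1024, 1024, bias=True).cuda()
      inp = torch.randn(16, 1024, device="cuda")

      with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
          out = layer(inp)
      print(out.shape)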
  • 25
    enhancr

    Video Frame Interpolation & Super Resolution using NVIDIA's TensorRT

    ... inference by NVIDIA, which can speed up AI processes significantly. Pre-packaged, without the need to install Docker or WSL (Windows Subsystem for Linux), and NCNN inference by Tencent, which is lightweight and runs on NVIDIA, AMD, and even Apple Silicon, in contrast to the mammoth of an inference engine that PyTorch is, which only runs on NVIDIA GPUs.
    Downloads: 74 This Week