Open Source Python Neural Network Libraries for Linux

Browse free open source Python neural network libraries and related projects for Linux below. Use the toggles on the left to filter results by OS, license, language, programming language, and project status.

  • 1
    AIMET

    AIMET is a library that provides advanced quantization and compression techniques for trained neural network models

    Qualcomm Innovation Center (QuIC) is at the forefront of enabling low-power inference at the edge through its pioneering model-efficiency research. Its mission is to help migrate the ecosystem toward fixed-point inference. To that end, QuIC presents the AI Model Efficiency Toolkit (AIMET), a library that provides advanced quantization and compression techniques for trained neural network models. AIMET enables neural networks to run more efficiently on fixed-point AI hardware accelerators. Quantized inference is significantly faster than floating point inference; for example, models run on the Qualcomm® Hexagon™ DSP rather than on the Qualcomm® Kryo™ CPU have shown a 5x to 15x speedup. An 8-bit model also has a 4x smaller memory footprint relative to a 32-bit model. However, quantizing a model (e.g., from 32-bit floating point to 8-bit fixed point) often sacrifices accuracy; AIMET's techniques aim to minimize that loss.
    Downloads: 24 This Week
    See Project
  • 2
    Alpa

    Training and serving large-scale neural networks

    Alpa is a system for training and serving large-scale neural networks. Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these models requires complicated distributed systems techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code; a minimal sketch of its parallelize decorator follows this entry.
    Downloads: 21 This Week
    See Project
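    A minimal sketch of the idea, assuming Alpa is installed alongside JAX; the toy training step and parameter names below are illustrative, not code from the project:

        import alpa
        import jax
        import jax.numpy as jnp

        @alpa.parallelize  # Alpa decides how to shard this step across the available devices
        def train_step(params, batch):
            def loss_fn(p):
                pred = batch["x"] @ p["w"] + p["b"]
                return jnp.mean((pred - batch["y"]) ** 2)
            grads = jax.grad(loss_fn)(params)
            return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g, params, grads)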
  • 3
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    Implementation of Imagen, Google's text-to-image neural network that beats DALL-E2, in PyTorch. It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E2: it consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (an attention network). It also contains dynamic clipping for improved classifier-free guidance, noise-level conditioning, and a memory-efficient U-Net design. It appears neither CLIP nor a prior network is needed after all. And so research continues. For simpler training, you can directly supply text strings instead of precomputing text encodings (although for scaling purposes, you will definitely want to precompute the textual embeddings and masks).
    Downloads: 14 This Week
    See Project
  • 4
    spaCy

    Industrial-strength Natural Language Processing (NLP)

    spaCy is a library built on the very latest research for advanced Natural Language Processing (NLP) in Python and Cython. Since its inception it has been designed for real-world applications: building real products and gathering real insights. It comes with pretrained statistical models and word vectors, convolutional neural network models, easy deep learning integration, and much more. spaCy is the fastest syntactic parser in the world according to independent benchmarks, with accuracy within 1% of the best available. It's blazing fast, easy to install, and comes with a simple and productive API; a minimal usage sketch follows this entry.
    Downloads: 10 This Week
    See Project
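    A minimal usage sketch, assuming the small English pipeline has been installed (python -m spacy download en_core_web_sm):

        import spacy

        nlp = spacy.load("en_core_web_sm")
        doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
        for token in doc:
            print(token.text, token.pos_, token.dep_)  # part-of-speech and dependency labels
        for ent in doc.ents:
            print(ent.text, ent.label_)                # named entities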
  • 5
    MMDeploy

    OpenMMLab Model Deployment Framework

    MMDeploy is an open-source deep learning model deployment toolset and part of the OpenMMLab project. Models can be exported and run in several backends, with more to be supported over time. All kinds of modules in the SDK can be extended, such as Transform for image processing, Net for neural network inference, and Module for postprocessing. To get started, install and build your target backend; ONNX Runtime, for example, is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Please read getting_started for the basic usage of MMDeploy.
    Downloads: 7 This Week
    See Project
  • 6
    Stock prediction deep neural learning

    Predicting stock prices using a TensorFlow LSTM

    Predicting stock prices is a challenging task: prices are influenced by factors such as market trends, political events, and economic indicators, and the fluctuations driven by supply and demand can be unpredictable. Deep learning can nevertheless be used to identify patterns and trends in the data. One of the most effective techniques for series forecasting is the LSTM (long short-term memory) network, a type of recurrent neural network (RNN) designed for sequence modeling and prediction and capable of remembering information over long periods of time, which makes it well suited to stock-price prediction; a toy TensorFlow sketch follows this entry.
    Downloads: 5 This Week
    See Project
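    A toy Keras/TensorFlow sketch of the sliding-window LSTM approach described above; random data stands in for real prices, and this is not the project's own script:

        import numpy as np
        import tensorflow as tf

        window = 30
        prices = np.cumsum(np.random.randn(1000)).astype("float32")  # stand-in for a real price series
        X = np.array([prices[i:i + window] for i in range(len(prices) - window)])[..., None]
        y = prices[window:]

        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(64, input_shape=(window, 1)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=5, batch_size=32, verbose=0)
        print(model.predict(X[-1:]))  # next-step prediction for the most recent window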
  • 7
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data-parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data-parallel workers; these ideas are encapsulated in the FullyShardedDataParallel (FSDP) wrapper provided by fairscale. Fairseq can be extended through user-supplied plug-ins: models define the neural network architecture and encapsulate all of the learnable parameters, criterions compute the loss function given the model outputs and targets, and tasks store dictionaries and provide helpers for loading and iterating over datasets, initializing the model/criterion, and calculating the loss. A sketch of loading a pretrained model follows this entry.
    Downloads: 4 This Week
    See Project
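    A minimal sketch of loading a pretrained translation model through torch.hub; the model name and tokenizer/BPE settings follow fairseq's published examples and may change between releases:

        import torch

        # Downloads a pretrained WMT'19 English-German transformer on first use.
        en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model',
                               tokenizer='moses', bpe='fastbpe')
        print(en2de.translate('Hello world!'))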
  • 8
    Fast Artificial Neural Network Library (FANN) is a free open source neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings to more than 15 programming languages are available. An easy-to-read introduction article and a reference manual accompany the library, with examples and recommendations on how to use it. Several graphical user interfaces are also available for the library.
    Downloads: 19 This Week
    See Project
  • 9
    'Lightweight' GAN

    Implementation of 'lightweight' GAN, proposed in ICLR 2021

    Implementation of the 'lightweight' GAN proposed in ICLR 2021, in PyTorch. The main contribution of the paper is a skip-layer excitation in the generator, paired with autoencoding self-supervised learning in the discriminator. Quoting the one-line summary: "converge on single gpu with few hours' training, on 1024 resolution sub-hundred images". Augmentation is essential for Lightweight GAN to work effectively in a low-data setting. You can test and see how your images will be augmented before they pass into the neural network (if you use augmentation). The general recommendation is to use augmentations suitable for your data, and as many as possible; then, after some training, disable the most destructive (for the images) ones. You can turn on automatic mixed precision with the single flag --amp, which should be about 33% faster and save up to 40% memory. The project also integrates with Aim, an open-source experiment tracker that logs your training runs and provides a UI to compare them.
    Downloads: 2 This Week
    See Project
  • 10
    PyG

    Graph Neural Network Library for PyTorch

    PyG (PyTorch Geometric) is a library built upon PyTorch to easily write and train Graph Neural Networks (GNNs) for a wide range of applications related to structured data. It consists of various methods for deep learning on graphs and other irregular structures, also known as geometric deep learning, drawn from a variety of published papers. In addition, it provides easy-to-use mini-batch loaders for operating on many small graphs or a single giant graph, multi-GPU support, DataPipe support, distributed graph learning via Quiver, a large number of common benchmark datasets (with simple interfaces to create your own), the GraphGym experiment manager, and helpful transforms, both for learning on arbitrary graphs and on 3D meshes or point clouds. All it takes is 10-20 lines of code to get started with training a GNN model; a sketch follows this entry.
    Downloads: 2 This Week
    See Project
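    A compact sketch of a two-layer GCN on the Cora citation dataset, in the style of PyG's introductory example:

        import torch
        import torch.nn.functional as F
        from torch_geometric.datasets import Planetoid
        from torch_geometric.nn import GCNConv

        dataset = Planetoid(root='/tmp/Cora', name='Cora')  # downloads Cora on first use
        data = dataset[0]

        class GCN(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.conv1 = GCNConv(dataset.num_node_features, 16)
                self.conv2 = GCNConv(16, dataset.num_classes)

            def forward(self, x, edge_index):
                x = F.relu(self.conv1(x, edge_index))
                return self.conv2(x, edge_index)

        model = GCN()
        optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
        model.train()
        for epoch in range(200):
            optimizer.zero_grad()
            out = model(data.x, data.edge_index)
            loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
            loss.backward()
            optimizer.step()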
  • 11
    Python Outlier Detection

    A Python toolbox for scalable outlier detection

    PyOD is a comprehensive and scalable Python toolkit for detecting outlying objects in multivariate data. This exciting yet challenging field is commonly referred to as outlier detection or anomaly detection. PyOD includes more than 30 detection algorithms, from the classical LOF (SIGMOD 2000) to the latest COPOD (ICDM 2020) and SUOD (MLSys 2021). Since 2017, PyOD [AZNL19] has been successfully used in numerous academic research projects and commercial products [AZHC+21, AZNHL19]. PyOD has multiple neural network-based models, e.g., AutoEncoders, implemented in both PyTorch and TensorFlow, and it contains multiple models that also exist in scikit-learn. It is possible to train and predict with a large number of detection models in PyOD by leveraging the SUOD framework. A benchmark is supplied for selected algorithms to provide an overview of the implemented models; in total, 17 benchmark datasets are used for comparison, which can be downloaded at ODDS. A minimal usage sketch follows this entry.
    Downloads: 2 This Week
    See Project
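    A minimal usage sketch with a single detector (LOF); the synthetic data is illustrative only:

        import numpy as np
        from pyod.models.lof import LOF

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (190, 2)),    # inliers
                       rng.uniform(-6, 6, (10, 2))])  # a few scattered outliers

        clf = LOF(contamination=0.05)
        clf.fit(X)
        print(clf.labels_[:10])           # 0 = inlier, 1 = outlier (on the training data)
        print(clf.decision_scores_[:10])  # raw outlier scores; higher means more anomalous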
  • 12
    Stanza

    Stanford NLP Python library for many human languages

    Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing. Stanza is a Python natural language analysis package. It contains tools, which can be used in a pipeline, to convert a string of human-language text into lists of sentences and words, to generate base forms of those words along with their parts of speech and morphological features, to produce a syntactic dependency parse, and to recognize named entities. The toolkit is designed to be parallel among more than 70 languages, using the Universal Dependencies formalism. Stanza is built with highly accurate neural network components that also enable efficient training and evaluation with your own annotated data. A minimal pipeline sketch follows this entry.
    Downloads: 2 This Week
    See Project
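    A minimal pipeline sketch; the English models are downloaded once before first use:

        import stanza

        stanza.download('en')  # one-time model download
        nlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse,ner')
        doc = nlp("Barack Obama was born in Hawaii.")
        for sent in doc.sentences:
            for word in sent.words:
                print(word.text, word.lemma, word.upos, word.deprel)
            print([(ent.text, ent.type) for ent in sent.ents])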
  • 13
    AdaNet

    Fast and flexible AutoML with learning guarantees

    AdaNet is a lightweight TensorFlow-based framework for fast and flexible AutoML with learning guarantees: it automatically learns high-quality models with minimal expert intervention. AdaNet builds on recent AutoML efforts to be fast and flexible while providing learning guarantees. Importantly, AdaNet provides a general framework not only for learning a neural network architecture but also for learning to ensemble them to obtain even better models. At each iteration, it measures the ensemble loss for each candidate and selects the best one to move on to the next iteration. It offers adaptive neural architecture search and ensemble learning in a single train call; regression, binary and multi-class classification, and multi-head task support; and a tf.estimator.Estimator API for training, evaluation, prediction, and serving models.
    Downloads: 1 This Week
    See Project
  • 14
    Neural Network Intelligence

    AutoML toolkit to automate the machine learning lifecycle

    Neural Network Intelligence (NNI) is an open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning. NNI is a lightweight but powerful toolkit that helps users automate feature engineering, neural architecture search, hyperparameter tuning, and model compression. The tool manages automated machine learning (AutoML) experiments and dispatches and runs the trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyperparameters in different training environments, such as a local machine, remote servers, OpenPAI, Kubeflow, FrameworkController on Kubernetes (AKS etc.), DLWorkspace (aka DLTS), AML (Azure Machine Learning), and other cloud options. NNI provides a command-line tool as well as a user-friendly web UI to manage training experiments; a sketch of the trial-side API follows this entry.
    Downloads: 1 This Week
    See Project
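    A sketch of the trial-side API for hyperparameter tuning; the training loop is a placeholder, and a separate experiment configuration and search space are assumed:

        import nni

        # The tuner proposes hyperparameters for this trial run.
        params = nni.get_next_parameter()  # e.g. {"lr": 0.01}, per the configured search space
        lr = params.get("lr", 0.01)

        accuracy = 0.0
        for epoch in range(10):
            accuracy = min(0.99, 0.5 + 0.04 * epoch + lr)  # placeholder for real training/evaluation
            nni.report_intermediate_result(accuracy)       # streamed to the NNI web UI
        nni.report_final_result(accuracy)                  # final metric consumed by the tuner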
  • 15
    PennyLane

    A cross-platform Python library for differentiable programming

    A cross-platform Python library for differentiable programming of quantum computers: train a quantum computer the same way as a neural network. It offers built-in automatic differentiation of quantum circuits and can use near-term quantum devices directly. You can combine multiple quantum devices with classical processing arbitrarily, with support for hybrid quantum-classical models compatible with existing machine learning libraries. Quantum circuits can be set up to interface with NumPy, PyTorch, JAX, or TensorFlow, allowing hybrid CPU-GPU-QPU computations, and the same quantum circuit model can be run on different devices. Install plugins to run your computational circuits on more devices, including Strawberry Fields, Amazon Braket, Qiskit and IBM Q, Google Cirq, Rigetti Forest, and the Microsoft QDK. A minimal sketch follows this entry.
    Downloads: 1 This Week
    See Project
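    A minimal sketch of a differentiable circuit optimized like a neural network, using the built-in simulator:

        import pennylane as qml
        from pennylane import numpy as np

        dev = qml.device("default.qubit", wires=2)  # built-in simulator backend

        @qml.qnode(dev)
        def circuit(params):
            qml.RX(params[0], wires=0)
            qml.RY(params[1], wires=1)
            qml.CNOT(wires=[0, 1])
            return qml.expval(qml.PauliZ(1))

        params = np.array([0.1, 0.2], requires_grad=True)
        opt = qml.GradientDescentOptimizer(stepsize=0.4)
        for _ in range(50):
            params = opt.step(circuit, params)  # gradient descent through the quantum circuit
        print(circuit(params))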
  • 16
    Sonnet

    TensorFlow-based neural network library

    Sonnet is a neural network library built on top of TensorFlow, designed to provide simple, composable abstractions for machine learning research. Sonnet can be used to build neural networks for many different purposes (supervised, unsupervised, and reinforcement learning). Sonnet's programming model revolves around a single concept: modules. Modules can hold references to parameters, other modules, and methods that apply some function to user input. A number of predefined modules ship with Sonnet, making it powerful yet simple, and users are also encouraged to build their own. Sonnet is designed to be extremely unopinionated about how you use modules; it is simple to understand, and offers clear and focused code. A minimal sketch follows this entry.
    Downloads: 1 This Week
    See Project
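    A minimal sketch of Sonnet 2 modules; parameters are created lazily on the first call:

        import sonnet as snt
        import tensorflow as tf

        mlp = snt.Sequential([
            snt.Linear(128), tf.nn.relu,
            snt.Linear(10),
        ])
        x = tf.random.normal([8, 784])
        logits = mlp(x)  # variables are created here, on first use
        print(logits.shape, len(mlp.trainable_variables))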
  • 17
    A neural network module written in Python. The aim of the project is to provide a large set of neural network types accessible through an API that is easy to use and powerful.
    Downloads: 1 This Week
    See Project
  • 18
    RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition. Full installation and usage instructions are given at http://sourceforge.net/p/rnnl/wiki/Home/
    Downloads: 1 This Week
    See Project
  • 19
    DeepXDE

    A library for scientific machine learning & physics-informed learning

    DeepXDE is a library for scientific machine learning and physics-informed learning. It implements the physics-informed neural network (PINN) approach and includes the following algorithms for solving different problems: solving forward/inverse ordinary/partial differential equations (ODEs/PDEs) [SIAM Rev.]; solving forward/inverse integro-differential equations (IDEs) [SIAM Rev.]; fPINN for solving forward/inverse fractional PDEs (fPDEs) [SIAM J. Sci. Comput.]; NN-arbitrary polynomial chaos (NN-aPC) for solving forward/inverse stochastic PDEs (sPDEs) [J. Comput. Phys.]; PINN with hard constraints (hPINN) for solving inverse design/topology optimization [SIAM J. Sci. Comput.]; residual-based adaptive sampling [SIAM Rev., arXiv]; gradient-enhanced PINN (gPINN) [Comput. Methods Appl. Mech. Eng.]; and PINN with multi-scale Fourier features [Comput. Methods Appl. Mech. Eng.]. A small PINN sketch follows this entry.
    Downloads: 0 This Week
    See Project
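    A small PINN sketch for the toy ODE y'(t) = -y(t) with y(0) = 1; the API names follow recent DeepXDE releases and may differ slightly between versions:

        import numpy as np
        import deepxde as dde

        geom = dde.geometry.TimeDomain(0, 2)

        def ode(t, y):
            # Residual of y'(t) + y(t) = 0
            return dde.grad.jacobian(y, t) + y

        def boundary(_, on_initial):
            return on_initial

        ic = dde.icbc.IC(geom, lambda t: np.ones_like(t), boundary)  # y(0) = 1
        data = dde.data.PDE(geom, ode, ic, num_domain=64, num_boundary=2)
        net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot uniform")
        model = dde.Model(data, net)
        model.compile("adam", lr=1e-3)
        model.train(iterations=5000)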
  • 20
    Feed-forward neural network for Python
    ffnet is a fast and easy-to-use feed-forward neural network training solution for Python. Many nice features are implemented: arbitrary network connectivity, automatic data normalization, very efficient training tools, and network export to Fortran code. ffnet now also has a GUI called ffnetui.
    Downloads: 0 This Week
    See Project
  • 21
    Forecasting Best Practices

    Time Series Forecasting Best Practices & Examples

    Time series forecasting is one of the most important topics in data science. Almost every business needs to predict the future in order to make better decisions and allocate resources more effectively. This repository provides examples and best-practice guidelines for building forecasting solutions. Its goal is to offer a comprehensive set of tools and examples that leverage recent advances in forecasting algorithms to build solutions and operationalize them. Rather than creating implementations from scratch, we draw from existing state-of-the-art libraries and build additional utilities around processing and featurizing the data, optimizing and evaluating models, and scaling up to the cloud. The examples and best practices are provided as Python Jupyter notebooks and R Markdown files, along with a library of utility functions.
    Downloads: 0 This Week
    See Project
  • 22
    Haiku

    JAX-based neural network library

    Haiku is a simple neural network library built on top of JAX, designed to provide simple, composable abstractions for machine learning research. It lets users write familiar object-oriented models while retaining full access to JAX's pure function transformations. Haiku is designed to make common tasks, such as managing model parameters and other model state, simpler, and it is similar in spirit to the Sonnet library that has been widely used across DeepMind: it preserves Sonnet's module-based programming model for state management while retaining access to JAX's function transformations. Haiku can be expected to compose with other libraries and to work well with the rest of JAX. Like Sonnet modules, Haiku modules are Python objects that hold references to their own parameters, other modules, and methods that apply functions to user inputs. A minimal sketch follows this entry.
    Downloads: 0 This Week
    See Project
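    A minimal sketch of hk.transform, which turns an object-oriented forward pass into pure init/apply functions:

        import haiku as hk
        import jax
        import jax.numpy as jnp

        def forward(x):
            mlp = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(10)])
            return mlp(x)

        model = hk.transform(forward)
        rng = jax.random.PRNGKey(42)
        x = jnp.ones([8, 784])
        params = model.init(rng, x)           # pure function that builds the parameter pytree
        logits = model.apply(params, rng, x)  # pure function that applies those parameters
        print(logits.shape)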
  • 23
    MetaNet

    Free portable library for meta neural network research

    MetaNet provides a free library for meta neural network research. The library contains a feed-forward neural net implementation and several integrated datasets (e.g., MNIST).
    Downloads: 0 This Week
    See Project
  • 24
    Minkowski Engine

    Auto-diff neural network library for high-dimensional sparse tensors

    The Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers such as convolution, pooling, unpooling, and broadcasting operations for sparse tensors, and it supports various functions that can be built on a sparse tensor. The project lists a few popular network architectures and applications; to run the examples, install the package and run the command in the package root directory. Compressing a neural network to speed up inference and minimize memory footprint has been studied widely. One popular technique for model compression is pruning the weights in convnets, also known as sparse convolutional networks. Such parameter-space sparsity used for model compression compresses networks that still operate on dense tensors, and all intermediate activations of these networks are also dense tensors.
    Downloads: 0 This Week
    See Project
  • 25
    NLP Architect

    A model library for exploring state-of-the-art deep learning

    NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing and Natural Language Understanding neural networks. The library includes our past and ongoing NLP research and development efforts as part of Intel AI Lab. NLP Architect is designed to be flexible for adding new models, neural network components, and data-handling methods, and for easily training and running models. It is a model-oriented library designed to showcase novel and different neural network optimizations. The library contains NLP/NLU models per task, different neural network topologies (which are used in the models), procedures for simplifying workflows in the library, pre-defined data processors and dataset loaders, and miscellaneous utilities. It is designed to be a tool for model development: data pre-processing, building a model, training, validating, inferring, and saving or loading a model.
    Downloads: 0 This Week
    See Project