Browse free open source Machine Learning software and projects for Mac below. Use the toggles on the left to filter open source Machine Learning software by OS, license, language, programming language, and project status.

  • 1
    Armadillo

    fast C++ library for linear algebra & scientific computing

    * Fast C++ library for linear algebra (matrix maths) and scientific computing
    * Easy to use functions and syntax, deliberately similar to Matlab / Octave
    * Uses template meta-programming techniques to increase efficiency
    * Provides user-friendly wrappers for OpenBLAS, Intel MKL, LAPACK, ATLAS, ARPACK, SuperLU and FFTW libraries
    * Useful for machine learning, pattern recognition, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://arma.sourceforge.net/download.html
    * Documentation: http://arma.sourceforge.net/docs.html
    * Bug reports: http://arma.sourceforge.net/faq.html
    * Git repo: https://gitlab.com/conradsnicta/armadillo-code
    Downloads: 3,123 This Week
    Last Update:
    See Project
  • 2
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. It supports a variety of frameworks, operating systems, and hardware platforms, with built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training. A minimal Python inference sketch follows this entry.
    Downloads: 60 This Week
    Last Update:
    See Project
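For illustration, a minimal inference sketch in Python; the model path, input shape, and CPU provider below are placeholders, not taken from the project's documentation.

```python
# Minimal ONNX Runtime inference sketch (assumes an exported model at "model.onnx").
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name                 # name of the graph's first input
x = np.random.rand(1, 3, 224, 224).astype(np.float32)     # dummy image-shaped batch
outputs = session.run(None, {input_name: x})               # None returns all model outputs
print(outputs[0].shape)
```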
  • 3
    Netron

    Visualizer for neural network, deep learning, machine learning models

    Netron is a viewer for neural network, deep learning and machine learning models. Netron supports ONNX, Keras, TensorFlow Lite, Caffe, Darknet, Core ML, MNN, MXNet, ncnn, PaddlePaddle, Caffe2, Barracuda, Tengine, TNN, RKNN, MindSpore Lite, and UFF. Netron has experimental support for TensorFlow, PyTorch, TorchScript, OpenVINO, Torch, Arm NN, BigDL, Chainer, CNTK, Deeplearning4j, MediaPipe, ML.NET, scikit-learn, and TensorFlow.js. There is an extensive variety of sample model files to download or open using the browser version. It runs on macOS, Windows, and Linux, as a Python server, and in the browser. A short sketch of the Python server mode follows this entry.
    Downloads: 47 This Week
    Last Update:
    See Project
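As a rough sketch of the Python server mode (the model path is a placeholder):

```python
# Serve a model in Netron's browser-based visualizer from Python.
import netron

netron.start("model.onnx")  # starts a local server and opens the viewer in a browser
```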
  • 4
    Java Neural Network Framework Neuroph
    Neuroph is a lightweight Java neural network framework that can be used to develop common neural network architectures. A small number of basic classes corresponding to core NN concepts, together with a GUI editor, makes it easy to learn and use.
    Downloads: 143 This Week
    Last Update:
    See Project
  • 5
    OpenPose

    Real-time multi-person keypoint detection library for body, face, etc.

    OpenPose is the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images. It is authored by Ginés Hidalgo, Zhe Cao, Tomas Simon, Shih-En Wei, Yaadhav Raaj, Hanbyul Joo, and Yaser Sheikh, and is maintained by Ginés Hidalgo and Yaadhav Raaj. OpenPose would not be possible without the CMU Panoptic Studio dataset. We would also like to thank all the people who have helped OpenPose in any way. It offers 15-, 18-, or 25-keypoint body/foot keypoint estimation, including 6 foot keypoints, with runtime invariant to the number of detected people; 2x21-keypoint hand keypoint estimation and 70-keypoint face keypoint estimation, whose runtime depends on the number of detected people. Input: image, video, webcam, Flir/Point Grey, IP camera, plus support for adding your own custom input source (e.g., a depth camera).
    Downloads: 28 This Week
    Last Update:
    See Project
  • 6
    dlib C++ Library
    Dlib is a C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems.
    Downloads: 100 This Week
    Last Update:
    See Project
  • 7
    TensorFlow

    TensorFlow is an open source library for machine learning

    Originally developed by Google for internal use, TensorFlow is an open source platform for machine learning. Available across all common operating systems (desktop, server, and mobile), TensorFlow provides stable APIs for Python and C, as well as third-party APIs and APIs without backwards-compatibility guarantees for a variety of other languages. The platform can be easily deployed on multiple CPUs, GPUs, and Google's proprietary chip, the tensor processing unit (TPU). TensorFlow expresses its computations as dataflow graphs, with each node in the graph representing an operation. Nodes take tensors (multidimensional arrays) as input and produce tensors as output. The framework allows these computations to run in C++ for better performance, while multiple levels of APIs let the user choose how high or low a level of abstraction to work at in the models produced. TensorFlow can also be used for research and production with TensorFlow Extended. A minimal Keras sketch follows this entry.
    Downloads: 19 This Week
    Last Update:
    See Project
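As a minimal illustration of the Python API (a tiny Keras model; the layer sizes and data shapes are arbitrary):

```python
# A tiny Keras model: tensors flow through two Dense ops in a forward pass.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = tf.random.normal((8, 4))   # a batch of 8 four-feature examples
y = model(x)                   # forward pass yields an (8, 3) tensor of class probabilities
print(y.shape)
```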
  • 8
    Gradio

    Create UIs for your machine learning model in Python in 3 minutes

    Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it, anywhere! Gradio can be installed with pip, and creating a Gradio interface only requires adding a couple of lines of code to your project (a minimal sketch follows this entry). You can choose from a variety of interface types for your function. Gradio can be embedded in Python notebooks or presented as a webpage. A Gradio interface can automatically generate a public link you can share with colleagues that lets them interact with the model on your computer remotely from their own devices. Once you've created an interface, you can permanently host it on Hugging Face: Hugging Face Spaces will host the interface on its servers and provide you with a link you can share. One of the best ways to share your machine learning model, API, or data science workflow with others is to create an interactive demo that allows your users or colleagues to try it out in their browsers.
    Downloads: 15 This Week
    Last Update:
    See Project
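A minimal sketch of such an interface; greet() is a stand-in for your own model's prediction function:

```python
# Wrap a Python function in a shareable web UI with Gradio.
import gradio as gr

def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()   # launch(share=True) would also generate a temporary public link
```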
  • 9
    Rasa

    Open source machine learning framework to automate text conversations

    Rasa is an open source machine learning framework to automate text- and voice-based conversations. With Rasa, you can build contextual assistants on Facebook Messenger, Slack, Google Hangouts, Webex Teams, Microsoft Bot Framework, Rocket.Chat, Mattermost, Telegram, and Twilio, or on your own custom conversational channels. Rasa helps you build contextual assistants capable of having layered conversations with lots of back-and-forth. For a human to have a meaningful exchange with a contextual assistant, the assistant needs to be able to use context to build on things that were previously discussed; Rasa enables you to build assistants that can do this in a scalable way. Rasa uses Poetry for packaging and dependency management, so if you want to build it from source you have to install Poetry first. By default, Poetry will try to use the currently activated Python version to create the virtual environment for the current project automatically.
    Downloads: 14 This Week
    Last Update:
    See Project
  • 10
    AlphaZero.jl

    A generic, simple and fast implementation of DeepMind's AlphaZero

    Beyond its much-publicized success in attaining superhuman level at games such as chess and Go, DeepMind's AlphaZero algorithm illustrates a more general methodology of combining learning and search to explore large combinatorial spaces effectively. We believe that this methodology can have exciting applications in many different research areas. Because AlphaZero is resource-hungry, successful open-source implementations (such as Leela Zero) are written in low-level languages (such as C++) and optimized for highly distributed computing environments, which makes them hardly accessible to students, researchers, and hackers. Many simple Python implementations can be found on GitHub, but none of them is able to beat a reasonable baseline on games such as Othello or Connect Four. As an illustration, the benchmark in the README of the most popular of them only features a random baseline, along with a greedy baseline that does not appear to be significantly stronger.
    Downloads: 11 This Week
    Last Update:
    See Project
  • 11
    dlib

    Toolkit for making machine learning and data analysis applications

    Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high-performance computing environments. Dlib's open source licensing allows you to use it in any application, free of charge. It has good unit test coverage: the ratio of unit-test lines of code to library lines of code is about 1 to 4. The library is tested regularly on MS Windows, Linux, and Mac OS X systems. No other packages are required to use the library; only APIs provided by an out-of-the-box OS are needed. There is no installation or configure step needed before you can use the library. All operating-system-specific code is isolated inside OS abstraction layers which are kept as small as possible. A short sketch using dlib's Python bindings follows this entry.
    Downloads: 10 This Week
    Last Update:
    See Project
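Although the listing describes the C++ toolkit, dlib also ships Python bindings; a rough sketch of its face detector (the image path is a placeholder):

```python
# Detect faces with dlib's HOG-based frontal face detector via the Python bindings.
import dlib

detector = dlib.get_frontal_face_detector()
img = dlib.load_rgb_image("people.jpg")     # placeholder path to an RGB image
faces = detector(img, 1)                    # 1 = upsample the image once before detecting
for rect in faces:
    print(rect.left(), rect.top(), rect.right(), rect.bottom())
```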
  • 12
    Imagen - Pytorch

    Implementation of Imagen, Google's Text-to-Image Neural Network

    Implementation of Imagen, Google's text-to-image neural network that beats DALL-E 2, in Pytorch. It is the new SOTA for text-to-image synthesis. Architecturally, it is actually much simpler than DALL-E 2: it consists of a cascading DDPM conditioned on text embeddings from a large pre-trained T5 model (attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient unet design. It appears neither CLIP nor a prior network is needed after all, and so research continues. For simpler training, you can directly supply text strings instead of precomputing text encodings (although for scaling purposes, you will definitely want to precompute the textual embeddings and mask).
    Downloads: 8 This Week
    Last Update:
    See Project
  • 13
    SageMaker Training Toolkit

    Train machine learning models within Docker containers

    Train machine learning models within a Docker container using Amazon SageMaker. Amazon SageMaker is a fully managed service for data science and machine learning (ML) workflows. You can use Amazon SageMaker to simplify the process of building, training, and deploying ML models. To train a model, you can include your training script and dependencies in a Docker container that runs your training code. A container provides an effectively isolated environment, ensuring a consistent runtime and reliable training process. The SageMaker Training Toolkit can be easily added to any Docker container, making it compatible with SageMaker for training models. If you use a prebuilt SageMaker Docker image for training, this library may already be included. Write a training script (e.g., train.py), then define a container with a Dockerfile that includes the training script and any dependencies (a sketch of such a script follows this entry).
    Downloads: 6 This Week
    Last Update:
    See Project
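A rough sketch of what such a train.py might look like; the hyperparameter and the SM_* environment-variable defaults below follow SageMaker's usual conventions but are placeholders here:

```python
# Hypothetical train.py run inside a SageMaker training container.
import argparse
import os

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=10)
    # SageMaker injects these paths via environment variables at training time.
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAINING", "/opt/ml/input/data/training"))
    args = parser.parse_args()

    # ... load data from args.train and fit a model for args.epochs ...

    # Anything written to the model directory is uploaded to S3 when training ends.
    with open(os.path.join(args.model_dir, "model.txt"), "w") as f:
        f.write("trained model placeholder")
```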
  • 14
    Deepchecks

    Test Suites for validating ML models & data

    Deepchecks is the leading tool for testing and validating your machine learning models and data, and it enables doing so with minimal effort. Deepchecks accompanies you through various validation and testing needs such as verifying your data's integrity, inspecting its distributions, validating data splits, evaluating your model, and comparing different models. It is useful while you're in the research phase, when you want to validate your data, find potential methodological problems, and/or validate and evaluate your model. To run a specific single check, all you need to do is import it and run it with the required (check-dependent) input parameters; more details about the existing checks and the parameters they can receive can be found in our API Reference. A Suite is an ordered collection of checks that can have conditions added to them, and it displays a concluding report for all of the checks that ran. A minimal suite sketch follows this entry.
    Downloads: 5 This Week
    Last Update:
    See Project
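A minimal sketch of running a bundled suite on tabular data; the toy dataset, model, and label column here are illustrative stand-ins:

```python
# Run deepchecks' full tabular suite on a toy train/test split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

df = load_iris(as_frame=True).frame
train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")
model = RandomForestClassifier().fit(train_df.drop(columns=["target"]), train_df["target"])

result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("deepchecks_report.html")   # or result.show() in a notebook
```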
  • 15

    Face Recognition

    World's simplest facial recognition api for Python & the command line

    Face Recognition is the world's simplest face recognition library. It allows you to recognize and manipulate faces from Python or from the command line using dlib's state-of-the-art face recognition built with deep learning (dlib is a C++ toolkit containing machine learning algorithms and tools). Face Recognition is highly accurate and is able to do a number of things: it can find faces in pictures, manipulate facial features in pictures, identify faces in pictures, and do face recognition on a folder of images from the command line. It can even do real-time face recognition and blur faces in videos when used with other Python libraries. A minimal comparison sketch follows this entry.
    Downloads: 5 This Week
    Last Update:
    See Project
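A minimal comparison sketch; the two image paths are placeholders:

```python
# Compare every face found in one image against a known reference face.
import face_recognition

known_image = face_recognition.load_image_file("known.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
for encoding in face_recognition.face_encodings(unknown_image):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Match" if match else "No match")
```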
  • 16
    Axon

    Nx-powered Neural Networks

    Nx-powered Neural Networks for Elixir. Axon consists of the following components. Functional API – a low-level API of numerical definitions (defn) on which all other APIs build. Model Creation API – a high-level model creation API which manages model initialization and application. Optimization API – an API for creating and using first-order optimization techniques based on the Optax library. Training API – an API for quickly training models, inspired by PyTorch Ignite. Axon provides abstractions that enable easy integration while maintaining a level of separation between each component; you should be able to use any of the APIs without dependencies on the others. By decoupling the APIs, Axon gives you full control over each aspect of creating and training a neural network. At the lowest level, Axon consists of a number of modules with functional implementations of common methods in deep learning.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 17
    Sonnet

    TensorFlow-based neural network library

    Sonnet is a neural network library built on top of TensorFlow designed to provide simple, composable abstractions for machine learning research. Sonnet can be used to build neural networks for various purposes, including different types of learning. Sonnet's programming model revolves around a single concept: modules. These modules can hold references to parameters, other modules, and methods that apply some function to the user input. A number of predefined modules already ship with Sonnet, making it powerful yet simple, and users are also encouraged to build their own modules. Sonnet is designed to be extremely unopinionated about how you use modules. It is simple to understand, and offers clear and focused code. A small module sketch follows this entry.
    Downloads: 4 This Week
    Last Update:
    See Project
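A small sketch of composing predefined Sonnet (v2) modules; the layer sizes and data shapes are arbitrary:

```python
# Compose Sonnet modules into a tiny MLP; parameters are created on the first call.
import sonnet as snt
import tensorflow as tf

mlp = snt.Sequential([
    snt.Linear(64), tf.nn.relu,
    snt.Linear(10),
])

x = tf.random.normal((8, 32))   # batch of 8 examples with 32 features
logits = mlp(x)                 # (8, 10) output tensor
print(logits.shape)
```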
  • 18
    The Julia Programming Language

    High-level, high-performance dynamic language for technical computing

    Julia is a fast, open source, high-performance dynamic language for technical computing. It can be used for data visualization and plotting, deep learning, machine learning, scientific computing, parallel computing, and much more. With its high-level syntax, Julia is easy to use for programmers of every level and background. Julia has more than 2,800 community-registered packages, including various mathematical libraries, data manipulation tools, and packages for general purpose computing. Libraries from Python, R, C/Fortran, C++, and Java can also be used.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 19
    Django friendly finite state machine

    Django friendly finite state machine support

    Django-fsm adds simple declarative state management for Django models. If you need parallel task execution, view and background-task code reuse over different flows, check out the author's newer django-viewflow project. Instead of adding a state field to a Django model and managing its values by hand, you use FSMField and mark model methods with the transition decorator; these methods may contain side effects of the state change (a minimal sketch follows this entry). You may also take a look at the django-fsm-admin project, which contains a mixin and template tags to integrate django-fsm state transitions into the Django admin. FSM really helps to structure the code, especially when a new developer comes to the project, and it is most effective when you use it for sequential steps. Transition logging support can be achieved with the help of the django-fsm-log package.
    Downloads: 3 This Week
    Last Update:
    See Project
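A minimal sketch of the FSMField/transition pattern; the model, field, and state names are illustrative:

```python
# Declarative state management on a Django model with django-fsm.
from django.db import models
from django_fsm import FSMField, transition

class BlogPost(models.Model):
    state = FSMField(default="draft")

    @transition(field=state, source="draft", target="published")
    def publish(self):
        # side effects of the state change (e.g. sending notifications) go here
        pass

    @transition(field=state, source="published", target="hidden")
    def hide(self):
        pass
```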
  • 20
    Gen.jl

    A general-purpose probabilistic programming system

    An open-source stack for generative modeling and probabilistic inference. Gen’s inference library gives users building blocks for writing efficient probabilistic inference algorithms that are tailored to their models, while automating the tricky math and the low-level implementation details. Gen helps users write hybrid algorithms that combine neural networks, variational inference, sequential Monte Carlo samplers, and Markov chain Monte Carlo. Gen features an easy-to-use modeling language for writing down generative models, inference models, variational families, and proposal distributions using ordinary code. But it also lets users migrate parts of their model or inference algorithm to specialized modeling languages for which it can generate especially fast code. Users can also hand-code parts of their models that demand better performance. Neural network inference is fast, but can be inaccurate on out-of-distribution data, and requires expensive training.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 21
    Horovod

    Distributed training framework for TensorFlow, Keras, PyTorch, etc.

    Horovod was originally developed by Uber to make distributed deep learning fast and easy to use, bringing model training time down from days and weeks to hours and minutes. With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code (a minimal PyTorch sketch follows this entry). Horovod can be installed on-premises or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve. Start scaling your model training with just a few lines of Python code, and scale up to hundreds of GPUs with upwards of 90% scaling efficiency.
    Downloads: 3 This Week
    Last Update:
    See Project
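A minimal sketch of the PyTorch integration; the model, learning rate, and data loading are placeholders:

```python
# Set up Horovod for data-parallel PyTorch training.
import horovod.torch as hvd
import torch

hvd.init()                                          # one process per GPU/worker
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())         # pin each process to its GPU

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Start all workers from identical state, then average gradients with ring-allreduce.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
```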
  • 22
    Lucid

    A collection of infrastructure and tools for research

    Lucid is a collection of infrastructure and tools for research in neural network interpretability. Lucid is research code, not production code: we provide no guarantee it will work for your use case, and Lucid is maintained by volunteers who are unable to provide significant technical support. You can start visualizing neural networks with no setup; the project's notebooks run right from your browser thanks to Colaboratory, a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud. You can run the notebooks on your local machine too: clone the repository, find them in the notebooks subfolder, and run a local instance of the Jupyter notebook environment to execute them. Feature visualization answers questions about what a network, or parts of a network, are looking for by generating examples. A short feature-visualization sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
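A short sketch along the lines of the project's introductory example (it assumes a TensorFlow 1.x environment; the channel name is illustrative):

```python
# Visualize what a single InceptionV1 channel responds to, using Lucid's optvis renderer.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()                                   # download and load the frozen graph
_ = render.render_vis(model, "mixed4a_pre_relu:476")    # optimize an image for one channel
```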
  • 23
    Rubix ML

    A high-level machine learning and deep learning library for PHP

    Rubix ML is a free open-source machine learning (ML) library that allows you to build programs that learn from your data using the PHP language. We provide tools for the entire machine learning life cycle, from ETL to training, cross-validation, and production, with over 40 supervised and unsupervised learning algorithms. In addition, we provide tutorials and other educational content to help you get started using ML in your projects. Our intuitive interface is quick to grasp while hiding a lot of power and complexity; write less code and iterate faster, leaving the hard stuff to us. Rubix ML utilizes a versatile modular architecture that is defined by a few key abstractions and their types and interfaces. Train models in a fraction of the time by installing the optional Tensor extension powered by C; learners such as neural networks will automatically get a performance boost.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 24
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    The de facto standard open-source platform for rapidly deploying machine learning models on Kubernetes. Seldon Core, our open-source framework, makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes. Seldon Core serves models built in any open-source or commercial model building framework. You can make use of powerful Kubernetes features like custom resource definitions to manage model graphs, and then connect your continuous integration and deployment (CI/CD) tools to scale and update your deployment. Built on Kubernetes, it runs on any cloud and on-premises; it is framework agnostic, supports top ML libraries, toolkits and languages, and enables advanced deployments with experiments, ensembles, and transformers. (Relatedly, the Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable.)
    Downloads: 3 This Week
    Last Update:
    See Project
  • 25
    lightning AI

    The most intuitive, flexible way for researchers to build models

    Build in days, not months, with the most intuitive, flexible framework for building models and Lightning Apps (i.e., ML workflow templates) which "glue" together your favorite ML lifecycle tools. Build models and build/publish end-to-end ML workflows that "glue" your favorite tools together. Models are "easy"; the "glue" work is hard. Lightning Apps are community-built templates that stitch together your favorite ML lifecycle tools into cohesive ML workflows that can run on your laptop or any cluster. Find templates (Lightning Apps), modify them, and publish your own; Lightning Apps can even be full standalone ML products. Run on your laptop for free: download the code and type 'lightning run app'. Feel free to ssh into any machine and run from there as well. In research, we often have multiple separate scripts to train models, finetune them, collect results, and more.
    Downloads: 3 This Week
    Last Update:
    See Project