Tired of wasting thousands of dollars on API subscriptions to support all your automations? What if there was a free alternative?
The 100% FREE No-Code Architects Toolkit API processes different types of media. It is built in Python using Flask.
The API converts audio files, transcribes and translates content, adds captions to videos, and handles complex media processing for content creation. It can also manage files across multiple cloud services, including Google Drive, Amazon S3, Google Cloud Storage, and Dropbox.
You can deploy the toolkit in several ways: with Docker, on Google Cloud Platform, on Digital Ocean, or on any system that can host Docker containers.
Easily replace services like ChatGPT Whisper, Cloud Convert, Createomate, JSON2Video, PDF(dot)co, Placid and OCodeKit.
Want help? Join a supportive community and get dedicated tech support.
Join the ONLY community where you learn to leverage AI automation and content to grow your business (and streamline your biz).
Who's this for?
- Coaches and consultants
- AI Automation agencies
- SMMA & Content agencies
- SaaS Startup Founders
Get courses, community, support, daily calls and more.
Join the No-Code Architects Community today!
Each endpoint is supported by robust payload validation and detailed API documentation to facilitate easy integration and usage.
/v1/audio/concatenate
- Combines multiple audio files into a single audio file.
/v1/code/execute/python
- Executes Python code remotely and returns the execution results.
/v1/ffmpeg/compose
- Provides a flexible interface to FFmpeg for complex media processing operations.
/v1/image/convert/video
- Transforms a static image into a video with custom duration and zoom effects.
-
- Converts media files from one format to another with customizable codec options.
-
- Converts various media formats specifically to MP3 audio.
-
- Downloads media content from various online sources using yt-dlp.
-
- Provides a web interface for collecting and displaying feedback on media content.
-
- Transcribes or translates audio/video content from a provided media URL.
-
- Detects silence intervals in a given media file.
-
- Extracts comprehensive metadata from media files including format, codecs, resolution, and bitrates.
/v1/s3/upload
- Uploads files to Amazon S3 storage by streaming directly from a URL.
-
- Provides a simple authentication mechanism to validate API keys.
-
- Verifies that the NCA Toolkit API is properly installed and functioning.
-
- Retrieves the status of a specific job by its ID.
-
- Retrieves the status of all jobs within a specified time range.
-
- Adds customizable captions to videos with various styling options.
-
- Combines multiple videos into a single continuous video file.
-
- Extracts a thumbnail image from a specific timestamp in a video.
-
- Cuts specified segments from a video file with optional encoding settings.
-
- Splits a video into multiple segments based on specified start and end times.
-
- Trims a video by keeping only the content between specified start and end times.
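The endpoints above all follow the same JSON-over-HTTP pattern, authenticated with the x-api-key header (the same key you configure during installation). As a rough sketch, here is how a request to /v1/image/convert/video could be assembled in Python; the payload field names (image_url, length) are illustrative assumptions, not taken from the API documentation:

```python
import json

def build_request(base_url, api_key, endpoint, payload):
    """Assemble the URL, headers, and body for an NCA Toolkit API call."""
    return {
        "url": f"{base_url.rstrip('/')}{endpoint}",
        "headers": {
            "x-api-key": api_key,               # API key set during installation
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

# Hypothetical payload: turn a static image into a 10-second video.
req = build_request(
    "https://your-deployment.example.com",
    "your_api_key",
    "/v1/image/convert/video",
    {"image_url": "https://example.com/photo.jpg", "length": 10},
)
```

Sending `req["body"]` as a POST to `req["url"]` with `req["headers"]` is all any of the endpoints require; only the payload fields differ.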
docker build -t no-code-architects-toolkit .
This repository is configured with a GitHub Actions workflow that automatically builds and publishes Docker images to GitHub Container Registry when:
- A new release is published
- A new version tag (v*.*.* format) is pushed
The Docker images are published to:
- GitHub Container Registry:
ghcr.io/yourusername/no-code-architects-toolkit
No additional GitHub secrets are needed, as the workflow uses the built-in GITHUB_TOKEN secret for authentication.
You can pull the pre-built images directly from GitHub Container Registry:
docker pull ghcr.io/yourusername/no-code-architects-toolkit:latest
Replace yourusername with your actual GitHub username.
API_KEY
- Purpose: Used for API authentication.
- Requirement: Mandatory.

S3_ENDPOINT_URL
- Purpose: Endpoint URL for the S3-compatible service.
- Requirement: Mandatory if using S3-compatible storage.

S3_ACCESS_KEY
- Purpose: The access key for the S3-compatible storage service.
- Requirement: Mandatory if using S3-compatible storage.

S3_SECRET_KEY
- Purpose: The secret key for the S3-compatible storage service.
- Requirement: Mandatory if using S3-compatible storage.

S3_BUCKET_NAME
- Purpose: The bucket name for the S3-compatible storage service.
- Requirement: Mandatory if using S3-compatible storage.

S3_REGION
- Purpose: The region for the S3-compatible storage service.
- Requirement: Mandatory if using S3-compatible storage; "None" is acceptable for some S3 providers.

GCP_SA_CREDENTIALS
- Purpose: The JSON credentials for the GCP Service Account.
- Requirement: Mandatory if using GCP storage.

GCP_BUCKET_NAME
- Purpose: The name of the GCP storage bucket.
- Requirement: Mandatory if using GCP storage.

MAX_QUEUE_LENGTH
- Purpose: Limits the maximum number of concurrent tasks in the queue.
- Default: 0 (unlimited)
- Recommendation: Set to a value based on your server resources, e.g., 10-20 for smaller instances.

GUNICORN_WORKERS
- Purpose: Number of worker processes for handling requests.
- Default: Number of CPU cores + 1
- Recommendation: 2-4× number of CPU cores for CPU-bound workloads.

GUNICORN_TIMEOUT
- Purpose: Timeout (in seconds) for worker processes.
- Default: 30
- Recommendation: Increase for processing large media files (e.g., 300-600).

LOCAL_STORAGE_PATH
- Purpose: Directory for temporary file storage during processing.
- Default: /tmp
- Recommendation: Set to a path with sufficient disk space for your expected workloads.
- Ensure all required environment variables are set based on the storage provider in use (GCP or S3-compatible).
- Missing any required variables will result in errors during runtime.
- Performance variables can be tuned based on your workload and available resources.
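The startup requirements above can be captured in a small validation sketch. The variable names match the docker run example below; the grouping logic (a complete S3 group or a complete GCP group) is an assumption about how you would want to check them, and S3_REGION is left out because some providers accept "None":

```python
REQUIRED = ["API_KEY"]
S3_VARS = ["S3_ENDPOINT_URL", "S3_ACCESS_KEY", "S3_SECRET_KEY", "S3_BUCKET_NAME"]
GCP_VARS = ["GCP_SA_CREDENTIALS", "GCP_BUCKET_NAME"]

def missing_vars(env):
    """Return the environment variables that still need to be set.

    API_KEY is always mandatory; beyond that, either the S3 group or the
    GCP group must be complete, matching the storage provider in use.
    """
    missing = [v for v in REQUIRED if not env.get(v)]
    has_s3 = any(env.get(v) for v in S3_VARS)
    has_gcp = any(env.get(v) for v in GCP_VARS)
    if has_s3:
        missing += [v for v in S3_VARS if not env.get(v)]
    elif has_gcp:
        missing += [v for v in GCP_VARS if not env.get(v)]
    else:
        missing.append("S3_* or GCP_* storage configuration")
    return missing
```

Calling `missing_vars(dict(os.environ))` before starting the app surfaces configuration errors up front instead of at request time.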
docker run -d -p 8080:8080 \
# Authentication (required)
-e API_KEY=your_api_key \
# Cloud storage provider (choose one)
# s3
#
#-e S3_ENDPOINT_URL=https://nyc3.digitaloceanspaces.com \
#-e S3_ACCESS_KEY=your_access_key \
#-e S3_SECRET_KEY=your_secret_key \
#-e S3_BUCKET_NAME=your_bucket_name \
#-e S3_REGION=nyc3 \
# Or
# GCP Storage
#
#-e GCP_SA_CREDENTIALS='{"your":"service_account_json"}' \
#-e GCP_BUCKET_NAME=your_gcs_bucket_name \
# Local storage configuration (optional)
-e LOCAL_STORAGE_PATH=/tmp \
# Performance tuning (optional)
-e MAX_QUEUE_LENGTH=10 \
-e GUNICORN_WORKERS=4 \
-e GUNICORN_TIMEOUT=300 \
no-code-architects-toolkit
This API can be deployed to various cloud platforms:
The Digital Ocean App Platform is pretty easy to set up and get going, but it can cost more than other cloud providers.
You need to include a "webhook_url" (for any request that exceeds 1 minute) in your API payload to avoid timeouts caused by the Cloudflare proxy.
If you use the webhook_url, there is no limit on processing length.
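Using the webhook mechanism just means adding a webhook_url field to the JSON payload; the API then posts the result to that URL instead of holding the HTTP connection open. A minimal sketch (the media_url payload field is a hypothetical example, not taken from the API docs):

```python
import json

def make_async_payload(payload, webhook_url):
    """Attach a webhook_url so long-running jobs return immediately.

    The toolkit POSTs the result to webhook_url when processing
    finishes, which avoids proxy timeouts on long requests.
    """
    return {**payload, "webhook_url": webhook_url}

body = json.dumps(make_async_payload(
    {"media_url": "https://example.com/talk.mp4"},   # hypothetical field
    "https://hooks.example.com/nca-results",
))
```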
- Digital Ocean App Platform Installation Guide - Deploy the API on Digital Ocean App Platform
Sometimes difficult for people to install (especially on Google Business Workspaces), with lots of detailed security exceptions.
However, this is one of the cheapest options with great performance, because you're only charged while the NCA Toolkit is processing a request.
Outside of that, you are not charged.
Google Cloud Run will terminate long-running processes, which can happen when processing larger files (whether you use the webhook_url or not).
However, when your processing times are consistently under 5 minutes (e.g., you're only processing smaller files), it works great! The performance is also great, and as soon as you stop making requests you stop paying.
They also have a GPU option that might be usable for better performance (untested).
- Google Cloud RUN Platform (GCP) Installation Guide - Deploy the API on Google Cloud Run
You can use these instructions to deploy the NCA Toolkit to any Linux server (on any platform).
This gives you more control over performance and cost, but it requires more technical skill to get up and running (not much, though).
- Install the Postman Template on your computer
- Import the API example requests from the template
- Configure your environment variables in Postman:
  - base_url: Your deployed API URL
  - x-api-key: Your API key configured during installation
- Use the example requests to validate that the API is functioning correctly
- Use the NCA Toolkit API GPT to explore additional features
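If you'd rather validate outside Postman, the same two variables (base_url and x-api-key) can drive a stdlib-only Python request. This is a sketch: the endpoint is /v1/audio/concatenate from the list above, and the audio_urls payload field is an assumption:

```python
import json
from urllib.request import Request

def validation_request(base_url, api_key, endpoint, payload):
    """Build a send-ready request mirroring the Postman setup.

    Pass the result to urllib.request.urlopen(...) against a live
    deployment to confirm the API responds.
    """
    return Request(
        f"{base_url.rstrip('/')}{endpoint}",
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = validation_request(
    "https://your-deployment.example.com", "your_api_key",
    "/v1/audio/concatenate",
    {"audio_urls": ["https://example.com/a.mp3",   # hypothetical field
                    "https://example.com/b.mp3"]},
)
```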
We welcome contributions from the public! If you'd like to contribute to this project, please follow these steps:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes
- Submit a pull request to the "build" branch
- Ensure any install or build dependencies are removed before the end of the layer when doing a build.
- Update the README.md with details of changes to the interface, this includes new environment variables, exposed ports, useful file locations and container parameters.
If you want to add new API endpoints, check out our Adding Routes Guide to learn how to use the dynamic route registration system.
Thank you for your contributions!
Get courses, community, support, daily calls and more.
Join the No-Code Architects Community today!
This project is licensed under the GNU General Public License v2.0 (GPL-2.0).