Object Detection

Run inference on your object detection models hosted on Roboflow.


There are several ways to run object detection inference using the Roboflow Hosted API: use one of our SDKs, or send a REST request to the hosted endpoint.

Python

To install dependencies, run pip install inference-sdk.

# import the inference-sdk
from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="API_KEY"
)

result = CLIENT.infer("your_image.jpg", model_id="football-players-detection-3zvbc/12")
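The client returns the parsed JSON response described in the Response Format section below, so you can iterate over the predictions directly. A brief sketch, assuming the request above succeeded:

# `result` is the dictionary returned by CLIENT.infer above
for prediction in result["predictions"]:
    print(prediction["class"], prediction["confidence"],
          prediction["x"], prediction["y"],
          prediction["width"], prediction["height"])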

cURL

Linux or macOS

Retrieving JSON predictions for a local file called YOUR_IMAGE.jpg:

base64 YOUR_IMAGE.jpg | curl -d @- \
"https://detect.roboflow.com/your-model/42?api_key=YOUR_KEY"

Inferring on an image hosted elsewhere on the web via its URL (don't forget to URL encode it):

curl -X POST "https://detect.roboflow.com/your-model/42?\
api_key=YOUR_KEY&\
image=https%3A%2F%2Fi.imgur.com%2FPEEvqPN.png"

Windows

You will need to install curl for Windows and GNU's base64 tool for Windows. The easiest way to do this is to use the git for Windows installer, which also includes the curl and base64 command line tools when you select "Use Git and optional Unix tools from the Command Prompt" during installation.

Then you can use the same commands as above.

Node.js

We use axios to perform the POST request in these examples, so first run npm install axios to install the dependency.

Inferring on a Local Image

const axios = require("axios");
const fs = require("fs");

const image = fs.readFileSync("YOUR_IMAGE.jpg", {
    encoding: "base64"
});

axios({
    method: "POST",
    url: "https://detect.roboflow.com/your-model/42",
    params: {
        api_key: "YOUR_KEY"
    },
    data: image,
    headers: {
        "Content-Type": "application/x-www-form-urlencoded"
    }
})
.then(function(response) {
    console.log(response.data);
})
.catch(function(error) {
    console.log(error.message);
});

Inferring on an Image Hosted Elsewhere via URL

const axios = require("axios");

axios({
    method: "POST",
    url: "https://detect.roboflow.com/your-model/42",
    params: {
        api_key: "YOUR_KEY",
        image: "https://i.imgur.com/PEEvqPN.png"
    }
})
.then(function(response) {
    console.log(response.data);
})
.catch(function(error) {
    console.log(error.message);
});

Web

We have realtime, on-device inference available in the browser via roboflow.js; see the inferencejs documentation.

Swift

Inferring on a Local Image

import UIKit

// Load Image and Convert to Base64
let image = UIImage(named: "your-image-path") // path to image to upload ex: image.jpg
let imageData = image?.jpegData(compressionQuality: 1)
let fileContent = imageData?.base64EncodedString()
let postData = fileContent!.data(using: .utf8)

// Initialize Inference Server Request with API_KEY, Model, and Model Version
var request = URLRequest(url: URL(string: "https://detect.roboflow.com/your-model/your-model-version?api_key=YOUR_APIKEY&name=YOUR_IMAGE.jpg")!,timeoutInterval: Double.infinity)
request.addValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
request.httpMethod = "POST"
request.httpBody = postData

// Execute Post Request
URLSession.shared.dataTask(with: request, completionHandler: { data, response, error in
    
    // Parse Response to String
    guard let data = data else {
        print(String(describing: error))
        return
    }
    
    // Convert Response String to Dictionary
    do {
        let dict = try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]
    } catch {
        print(error.localizedDescription)
    }
    
    // Print String Response
    print(String(data: data, encoding: .utf8)!)
}).resume()

Objective C

Contact us to request an Objective-C snippet.

Kotlin

Inferring on a Local Image

import java.io.*
import java.net.HttpURLConnection
import java.net.URL
import java.nio.charset.StandardCharsets
import java.util.*

fun main() {
    // Get Image Path
    val filePath = System.getProperty("user.dir") + System.getProperty("file.separator") + "YOUR_IMAGE.jpg"
    val file = File(filePath)

    // Base 64 Encode
    val encodedFile: String
    val fileInputStreamReader = FileInputStream(file)
    val bytes = ByteArray(file.length().toInt())
    fileInputStreamReader.read(bytes)
    encodedFile = String(Base64.getEncoder().encode(bytes), StandardCharsets.US_ASCII)
    val API_KEY = "" // Your API Key
    val MODEL_ENDPOINT = "dataset/v" // Set model endpoint (Found in Dataset URL)

    // Construct the URL
    val uploadURL ="https://detect.roboflow.com/" + MODEL_ENDPOINT + "?api_key=" + API_KEY + "&name=YOUR_IMAGE.jpg";

    // Http Request
    var connection: HttpURLConnection? = null
    try {
        // Configure connection to URL
        val url = URL(uploadURL)
        connection = url.openConnection() as HttpURLConnection
        connection.requestMethod = "POST"
        connection.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded")
        connection.setRequestProperty("Content-Length",
                Integer.toString(encodedFile.toByteArray().size))
        connection.setRequestProperty("Content-Language", "en-US")
        connection.useCaches = false
        connection.doOutput = true

        //Send request
        val wr = DataOutputStream(
                connection.outputStream)
        wr.writeBytes(encodedFile)
        wr.close()

        // Get Response
        val stream = connection.inputStream
        val reader = BufferedReader(InputStreamReader(stream))
        var line: String?
        while (reader.readLine().also { line = it } != null) {
            println(line)
        }
        reader.close()
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        connection?.disconnect()
    }
}

Inferring on an Image Hosted Elsewhere via URL

import java.io.BufferedReader
import java.io.DataOutputStream
import java.io.InputStreamReader
import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder

fun main() {
    val imageURL = "https://i.imgur.com/PEEvqPN.png" // Replace Image URL
    val API_KEY = "" // Your API Key
    val MODEL_ENDPOINT = "dataset/v" // Set model endpoint

    // Upload URL
    val uploadURL = "https://detect.roboflow.com/" + MODEL_ENDPOINT + "?api_key=" + API_KEY + "&image=" + URLEncoder.encode(imageURL, "utf-8");

    // Http Request
    var connection: HttpURLConnection? = null
    try {
        // Configure connection to URL
        val url = URL(uploadURL)
        connection = url.openConnection() as HttpURLConnection
        connection.requestMethod = "POST"
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
        connection.setRequestProperty("Content-Length", Integer.toString(uploadURL.toByteArray().size))
        connection.setRequestProperty("Content-Language", "en-US")
        connection.useCaches = false
        connection.doOutput = true

        // Send request
        val wr = DataOutputStream(connection.outputStream)
        wr.writeBytes(uploadURL)
        wr.close()

        // Get Response
        val stream = connection.inputStream // read the response from the POST request
        val reader = BufferedReader(InputStreamReader(stream))
        var line: String?
        while (reader.readLine().also { line = it } != null) {
            println(line)
        }
        reader.close()
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        connection?.disconnect()
    }
}


Java

Inferring on a Local Image

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class InferenceLocal {
    public static void main(String[] args) throws IOException {
        // Get Image Path
        String filePath = System.getProperty("user.dir") + System.getProperty("file.separator") + "YOUR_IMAGE.jpg";
        File file = new File(filePath);

        // Base 64 Encode
        String encodedFile;
        FileInputStream fileInputStreamReader = new FileInputStream(file);
        byte[] bytes = new byte[(int) file.length()];
        fileInputStreamReader.read(bytes);
        encodedFile = new String(Base64.getEncoder().encode(bytes), StandardCharsets.US_ASCII);

        String API_KEY = ""; // Your API Key
        String MODEL_ENDPOINT = "dataset/v"; // model endpoint

        // Construct the URL
        String uploadURL = "https://detect.roboflow.com/" + MODEL_ENDPOINT + "?api_key=" + API_KEY
                + "&name=YOUR_IMAGE.jpg";

        // Http Request
        HttpURLConnection connection = null;
        try {
            // Configure connection to URL
            URL url = new URL(uploadURL);
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            connection.setRequestProperty("Content-Length", Integer.toString(encodedFile.getBytes().length));
            connection.setRequestProperty("Content-Language", "en-US");
            connection.setUseCaches(false);
            connection.setDoOutput(true);

            // Send request
            DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
            wr.writeBytes(encodedFile);
            wr.close();

            // Get Response
            InputStream stream = connection.getInputStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            reader.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }

    }

}

Inferring on an Image Hosted Elsewhere via URL

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class InferenceHosted {
    public static void main(String[] args) {
        String imageURL = "https://i.imgur.com/PEEvqPN.png"; // Replace Image URL
        String API_KEY = ""; // Your API Key
        String MODEL_ENDPOINT = "dataset/v"; // model endpoint

        // Upload URL
        String uploadURL = "https://detect.roboflow.com/" + MODEL_ENDPOINT + "?api_key=" + API_KEY + "&image="
                + URLEncoder.encode(imageURL, StandardCharsets.UTF_8);

        // Http Request
        HttpURLConnection connection = null;
        try {
            // Configure connection to URL
            URL url = new URL(uploadURL);
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            connection.setRequestProperty("Content-Length", Integer.toString(uploadURL.getBytes().length));
            connection.setRequestProperty("Content-Language", "en-US");
            connection.setUseCaches(false);
            connection.setDoOutput(true);

            // Send request
            DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
            wr.writeBytes(uploadURL);
            wr.close();

            // Get Response
            InputStream stream = connection.getInputStream(); // read the response from the POST request
            BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            reader.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
    }
}

Ruby

Gemfile
source "https://rubygems.org"

gem "httparty", "~> 0.18.1"
gem "base64", "~> 0.1.0"
gem "cgi", "~> 0.2.1"

Gemfile.lock
GEM
  remote: https://rubygems.org/
  specs:
    base64 (0.1.0)
    cgi (0.2.1)
    httparty (0.18.1)
      mime-types (~> 3.0)
      multi_xml (>= 0.5.2)
    mime-types (3.3.1)
      mime-types-data (~> 3.2015)
    mime-types-data (3.2021.0225)
    multi_xml (0.6.0)

PLATFORMS
  x64-mingw32
  x86_64-linux

DEPENDENCIES
  base64 (~> 0.1.0)
  cgi (~> 0.2.1)
  httparty (~> 0.18.1)

BUNDLED WITH
   2.2.15

Inferring on a Local Image

require 'base64'
require 'httparty'

encoded = Base64.encode64(File.open("YOUR_IMAGE.jpg", "rb").read)
model_endpoint = "dataset/v" # Set model endpoint
api_key = "" # Your API KEY Here

params = "?api_key=" + api_key
+ "&name=YOUR_IMAGE.jpg"

response = HTTParty.post(
    "https://detect.roboflow.com/" + model_endpoint + params,
    body: encoded, 
    headers: {
    'Content-Type' => 'application/x-www-form-urlencoded',
    'charset' => 'utf-8'
  })

puts response

Inferring on an Image Hosted Elsewhere via URL

require 'httparty'
require 'cgi'

model_endpoint = "dataset/v" # Set model endpoint
api_key = "" # Your API KEY Here
img_url = "https://i.imgur.com/PEEvqPN.png" # Construct the URL

img_url = CGI::escape(img_url)

params =  "?api_key=" + api_key + "&image=" + img_url

response = HTTParty.post(
    "https://detect.roboflow.com/" + model_endpoint + params,
    headers: {
    'Content-Type' => 'application/x-www-form-urlencoded',
    'charset' => 'utf-8'
  })

puts response

PHP

Inferring on a Local Image

<?php

// Base 64 Encode Image
$data = base64_encode(file_get_contents("YOUR_IMAGE.jpg"));

$api_key = ""; // Set API Key
$model_endpoint = "dataset/v"; // Set model endpoint (Found in Dataset URL)

// URL for Http Request
$url = "https://detect.roboflow.com/" . $model_endpoint
. "?api_key=" . $api_key
. "&name=YOUR_IMAGE.jpg";

// Setup + Send Http request
$options = array(
  'http' => array (
    'header' => "Content-type: application/x-www-form-urlencoded\r\n",
    'method'  => 'POST',
    'content' => $data
  ));

$context  = stream_context_create($options);
$result = file_get_contents($url, false, $context);
echo $result;
?>

Inferring on an Image Hosted Elsewhere via URL

<?php

$api_key = ""; // Set API Key
$model_endpoint = "dataset/v"; // Set model endpoint (Found in Dataset URL)
$img_url = "https://i.imgur.com/PEEvqPN.png";

// URL for Http Request
$url =  "https://detect.roboflow.com/" . $model_endpoint
. "?api_key=" . $api_key
. "&image=" . urlencode($img_url);

// Setup + Send Http request
$options = array(
  'http' => array (
    'header' => "Content-type: application/x-www-form-urlencoded\r\n",
    'method'  => 'POST'
  ));

$context  = stream_context_create($options);
$result = file_get_contents($url, false, $context);
echo $result;
?>

Go

Inferring on a Local Image

package main

import (
    "bufio"
    "encoding/base64"
    "fmt"
    "io/ioutil"
    "os"
	"net/http"
	"strings"
)

func main() {
	api_key := ""  // Your API Key
	model_endpoint := "dataset/v" // Set model endpoint

    // Open file on disk.
    f, _ := os.Open("YOUR_IMAGE.jpg")

    // Read entire JPG into byte slice.
    reader := bufio.NewReader(f)
    content, _ := ioutil.ReadAll(reader)

    // Encode as base64.
    data := base64.StdEncoding.EncodeToString(content)
	uploadURL := "https://detect.roboflow.com/" + model_endpoint + "?api_key=" + api_key + "&name=YOUR_IMAGE.jpg"

	req, _ := http.NewRequest("POST", uploadURL, strings.NewReader(data))
    req.Header.Set("Accept", "application/json")

    client := &http.Client{}
    resp, _ := client.Do(req)
    defer resp.Body.Close()

   	bytes, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(bytes))

}

Inferring on an Image Hosted Elsewhere via URL

package main

import (
    "fmt"
	"net/http"
	"net/url"
  "io/ioutil"
)

func main() {
	api_key := ""  // Your API Key
	model_endpoint := "dataset/v" // Set model endpoint
	img_url := "https://i.ibb.co/jzr27x0/YOUR-IMAGE.jpg"


	uploadURL := "https://detect.roboflow.com/" + model_endpoint + "?api_key=" + api_key + "&image=" + url.QueryEscape(img_url)

	req, _ := http.NewRequest("POST", uploadURL, nil)
    req.Header.Set("Accept", "application/json")

    client := &http.Client{}
    resp, _ := client.Do(req)
    defer resp.Body.Close()

   	bytes, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(bytes))


}

C#

Inferring on a Local Image

using System;
using System.IO;
using System.Net;
using System.Text;

namespace InferenceLocal
{
    class InferenceLocal
    {

        static void Main(string[] args)
        {
            byte[] imageArray = System.IO.File.ReadAllBytes(@"YOUR_IMAGE.jpg");
            string encoded = Convert.ToBase64String(imageArray);
            byte[] data = Encoding.ASCII.GetBytes(encoded);
            string API_KEY = ""; // Your API Key
            string MODEL_ENDPOINT = "dataset/v"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                    "https://detect.roboflow.com/" + MODEL_ENDPOINT + "?api_key=" + API_KEY
                + "&name=YOUR_IMAGE.jpg";

            // Service Request Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = data.Length;

            // Write Data
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    using (StreamReader sr99 = new StreamReader(stream))
                    {
                        responseContent = sr99.ReadToEnd();
                    }
                }
            }

            Console.WriteLine(responseContent);

        }
    }
}

Inferring on an Image Hosted Elsewhere via URL

using System;
using System.IO;
using System.Net;
using System.Web;

namespace InferenceHosted
{
    class InferenceHosted
    {
        static void Main(string[] args)
        {
            string API_KEY = ""; // Your API Key
            string imageURL = "https://i.ibb.co/jzr27x0/YOUR-IMAGE.jpg";
            string MODEL_ENDPOINT = "dataset/v"; // Set model endpoint

            // Construct the URL
            string uploadURL =
                    "https://detect.roboflow.com/" + MODEL_ENDPOINT
                    + "?api_key=" + API_KEY
                    + "&image=" + HttpUtility.UrlEncode(imageURL);

            // Service Point Config
            ServicePointManager.Expect100Continue = true;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

            // Configure Http Request
            WebRequest request = WebRequest.Create(uploadURL);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = 0;

            // Get Response
            string responseContent = null;
            using (WebResponse response = request.GetResponse())
            {
                using (Stream stream = response.GetResponseStream())
                {
                    using (StreamReader sr99 = new StreamReader(stream))
                    {
                        responseContent = sr99.ReadToEnd();
                    }
                }
            }

            Console.WriteLine(responseContent);

        }
    }
}

Try asking Lenny, our AI-powered chatbot, to create a code sample for you!

API Reference

URL

POST https://detect.roboflow.com/:projectId/:versionNumber

Path Parameters

  • projectId (string): The url-safe version of the dataset name. You can find it in the web UI by looking at the URL on the main project view, or by clicking the "Get curl command" button in the train results section of your dataset version after training your model.

  • version (number): The version number identifying the version of your dataset.

See the Workspace and Project IDs page for how to get your project ID and version number.

There are two ways you can send an image to the Hosted Inference API via a REST request (both are shown in the sketch after this list):

  • Attach a base64 encoded image to the POST request body

  • Send a URL of an image file using the image URL query

    • ex: https://detect.roboflow.com/:datasetSlug/:versionNumber?image=https://imageurl.com
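For example, here is a minimal sketch of both approaches in Python using the requests library (the model slug your-model/42, the file name, and the image URL are placeholders):

import base64
import requests

API_URL = "https://detect.roboflow.com/your-model/42"  # placeholder project/version
API_KEY = "YOUR_KEY"

# Option 1: base64 encode a local image and attach it to the POST body
with open("YOUR_IMAGE.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    API_URL,
    params={"api_key": API_KEY},
    data=encoded_image,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(response.json())

# Option 2: pass the URL of a hosted image via the image query parameter
# (requests URL-encodes query parameter values for you)
response = requests.post(
    API_URL,
    params={"api_key": API_KEY, "image": "https://i.imgur.com/PEEvqPN.png"},
)
print(response.json())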

Query Parameters

  • image (string): URL of the image to run inference on. Use this if your image is hosted elsewhere. (Required when you don't POST a base64 encoded image in the request body.) Note: don't forget to URL-encode it.

  • classes (string): Restrict the predictions to only those of certain classes, provided as a comma-separated string. Example: dog,cat. Default: not present (show all classes).

  • overlap (number): The maximum percentage (on a scale of 0-100) that bounding box predictions of the same class are allowed to overlap before being combined into a single box. Default: 30.

  • confidence (number): A threshold for the returned predictions on a scale of 0-100. A lower number returns more predictions; a higher number returns fewer, higher-certainty predictions. Default: 40.

  • stroke (number): The width (in pixels) of the bounding box displayed around predictions (only has an effect when format is image). Default: 1.

  • labels (boolean): Whether or not to display text labels on the predictions (only has an effect when format is image). Default: false.

  • format (string): json returns an array of JSON predictions (see the Response Format section); image returns an image with annotated predictions as a binary blob with a Content-Type of image/jpeg. Default: json.

  • api_key (string): Your API key (obtained via your workspace API settings page).
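As an illustration, the sketch below (Python with the requests library; the model slug and output file name are placeholders) combines several of these parameters to request an annotated JPEG instead of JSON:

import requests

response = requests.post(
    "https://detect.roboflow.com/your-model/42",  # placeholder project/version
    params={
        "api_key": "YOUR_KEY",
        "image": "https://i.imgur.com/PEEvqPN.png",
        "confidence": 50,   # only return predictions scoring 50 or higher
        "overlap": 30,      # merge same-class boxes overlapping more than 30%
        "format": "image",  # return an annotated JPEG instead of JSON
        "labels": "true",   # draw class labels on the image
        "stroke": 2,        # 2 px bounding box lines
    },
)

with open("annotated.jpg", "wb") as f:
    f.write(response.content)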

Request Body

The request body is a string containing a base64 encoded image. (Required when you don't pass an image URL in the query parameters.)

The content type should be application/x-www-form-urlencoded with a string body.

Response Format

The hosted inference API endpoint (and most of our SDKs) returns a JSON object containing an array of predictions. Each prediction has the following properties:

  • x = the horizontal center point of the detected object

  • y = the vertical center point of the detected object

  • width = the width of the bounding box

  • height = the height of the bounding box

  • class = the class label of the detected object

  • confidence = the model's confidence that the detected object has the correct label and position coordinates

Here is an example response object from the REST API:

{
    "predictions": [
        {
            "x": 189.5,
            "y": 100,
            "width": 163,
            "height": 186,
            "class": "helmet",
            "confidence": 0.544
        }
    ],
    "image": {
        "width": 2048,
        "height": 1371
    }
}

The image attribute contains the height and width of the image sent for inference. You may need to use these values for bounding box calculations.
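For example, if you display the image at a different size than the one that was sent for inference, you can scale each prediction by the ratio between your display size and the returned image dimensions. A brief sketch (the display size here is a made-up value):

# `result` is the parsed JSON response shown above
display_width, display_height = 640, 480  # hypothetical render size
scale_x = display_width / result["image"]["width"]
scale_y = display_height / result["image"]["height"]

for prediction in result["predictions"]:
    x = prediction["x"] * scale_x
    y = prediction["y"] * scale_y
    width = prediction["width"] * scale_x
    height = prediction["height"] * scale_y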

Drawing a Box from the Inference API JSON Output

Frameworks and packages for rendering bounding boxes can differ in positional formats. Given the response JSON object's properties, a bounding box can always be drawn using some combination of the following rules:

  • the center point will always be (x,y)

  • the corner points (x1, y1) and (x2, y2) can be found using:

    • x1 = x - (width/2)

    • y1 = y - (height/2)

    • x2 = x + (width/2)

    • y2 = y + (height/2)

The corner-point approach is a common pattern, used by libraries such as Pillow when building the box tuple that renders a bounding box on an image.

Don't forget to iterate through all detections found when working with predictions!

# build a Pillow-style box tuple (left, upper, right, lower) for each prediction
# `detections` is the predictions array from the JSON response
for bounding_box in detections:
    x1 = bounding_box['x'] - bounding_box['width'] / 2
    x2 = bounding_box['x'] + bounding_box['width'] / 2
    y1 = bounding_box['y'] - bounding_box['height'] / 2
    y2 = bounding_box['y'] + bounding_box['height'] / 2
    box = (x1, y1, x2, y2)
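Putting it together, here is a minimal sketch that draws every predicted box on the original image using Pillow's ImageDraw (it assumes result holds the parsed JSON response and that YOUR_IMAGE.jpg is the image that was sent for inference):

from PIL import Image, ImageDraw

image = Image.open("YOUR_IMAGE.jpg")
draw = ImageDraw.Draw(image)

for prediction in result["predictions"]:
    x1 = prediction["x"] - prediction["width"] / 2
    y1 = prediction["y"] - prediction["height"] / 2
    x2 = prediction["x"] + prediction["width"] / 2
    y2 = prediction["y"] + prediction["height"] / 2
    draw.rectangle((x1, y1, x2, y2), outline="red", width=2)

image.save("boxes.jpg")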
