Calling the REST API

Use the Modelbit REST API to integrate inferences from your deployments into products, websites, and apps.

Single inference requests

The simplest way to fetch inferences from a deployment is one at a time. The single-inference syntax suits use cases where you want a prediction for one customer or event per request.

curl -s -XPOST "https://<your-workspace-url>/v1/example/latest" \
-d '{"data": <inference-request>}'

Learn more about single inference REST requests.
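As a sketch of what a single-inference call prepares, the request body is a JSON object with one `data` key wrapping the inference request, matching the curl template above. (The feature payload below is a hypothetical placeholder; use whatever your deployment's function expects.)

```python
import json

def build_single_request(features):
    """Build the JSON body for a single-inference REST call.

    Wraps the inference request in a "data" key, mirroring
    the curl example above.
    """
    return json.dumps({"data": features})

# Hypothetical feature payload for one customer or event.
body = build_single_request([42, "premium", 3.5])
print(body)  # → {"data": [42, "premium", 3.5]}
```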

Batch inference requests

To retrieve many inferences in a single REST call, use the batch request syntax. It's similar to the single-request syntax, but takes a list of inference requests instead.

curl -s -XPOST "https://<your-workspace-url>/v1/example/latest" \
-d '{"data": [[1, <request-1>], [2, <request-2>]]}'

Learn more about batch inference REST requests.
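The numeric prefix in each batch entry is a caller-chosen ID that lets you match results back to requests. A minimal Python sketch of building the batch body and re-keying the results (the response shape here, `[id, result]` pairs under `data`, is an assumption mirroring the request format — confirm it against your deployment's actual response):

```python
import json

def build_batch_request(requests_by_id):
    """Build a batch body: a list of [id, request] pairs under "data"."""
    return json.dumps({"data": [[i, req] for i, req in requests_by_id.items()]})

def match_results(response_body):
    """Map returned [id, result] pairs back to a dict keyed by ID.

    Assumes the response mirrors the request shape; check your
    deployment's real output format.
    """
    return {i: result for i, result in json.loads(response_body)["data"]}

body = build_batch_request({1: [5.1, 3.5], 2: [6.7, 3.0]})

# Hypothetical response, for illustration only:
results = match_results('{"data": [[1, "setosa"], [2, "virginica"]]}')
print(results[1])  # → setosa
```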

Using API keys

You can restrict access to your deployed models with API keys. Create API keys in the Settings area of Modelbit, then send them with requests in the Authorization header.

Learn more about REST requests with API keys.
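As a sketch of attaching a key to a request, the snippet below builds the headers for an authenticated call. The `Bearer` scheme and the example key are assumptions following common REST conventions; confirm the exact header format in your workspace's API key documentation.

```python
def auth_headers(api_key):
    """Build request headers carrying a Modelbit API key.

    Assumes the Bearer scheme — verify against your workspace's docs.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers("mb-example-key")  # hypothetical key
print(headers["Authorization"])  # → Bearer mb-example-key
```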