Async REST responses

Instead of waiting for your deployment's response, you can have Modelbit call a webhook when the inference completes. This is useful when the service calling Modelbit's REST API cannot synchronously wait for the response due to network or architecture limitations.

To use this functionality, supply the response_webhook parameter along with your data. The REST API will respond immediately with the batchId of your inference request and, once the inference completes, Modelbit will POST the results to the webhook URL you supplied.

For example:

curl -s -XPOST "https://<your-workspace-url>/v1/my_deployment/latest" -d '{"data": ..., "response_webhook": "https://..."}'
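In Python, the same request might look like the sketch below. The live requests.post call is shown as a comment because the workspace URL is a placeholder, and the sample immediate response is an assumption (JSON carrying the batchId field described above):

```python
import json

# Payload: your inference data plus the webhook to call when the
# inference completes (both URLs below are placeholders).
payload = {
    "data": [[1.0, 2.0, 3.0]],
    "response_webhook": "https://your-server.com/modelbit-results",
}

# The live call would be:
# requests.post("https://<your-workspace-url>/v1/my_deployment/latest", json=payload)

# The API responds immediately; assuming a JSON body with a batchId field:
immediate_response = '{"batchId": "abc123"}'
batch_id = json.loads(immediate_response)["batchId"]
print(batch_id)  # store this to match the webhook delivery later
```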

The body posted to the webhook is in the same format as when the REST API is called synchronously. The header x-mb-batch-id contains the batchId of the request.

You can also add query string parameters to your webhook URL so you can identify the request later, when processing the result in your webhook handler.
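A minimal webhook receiver, sketched with Python's standard-library http.server. Only the x-mb-batch-id header name comes from the docs; the path, the request_id query parameter, and the result body shape are illustrative assumptions. The demo at the bottom simulates Modelbit's POST locally:

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # collects what the handler saw, for the demo below

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The batchId of the original inference request arrives in this header.
        received["batch_id"] = self.headers.get("x-mb-batch-id")
        # Query-string parameters you put on the webhook URL come back as-is,
        # so you can correlate the result with your own bookkeeping.
        query = urllib.parse.urlparse(self.path).query
        received["params"] = urllib.parse.parse_qs(query)
        length = int(self.headers.get("Content-Length", 0))
        received["body"] = json.loads(self.rfile.read(length)) if length else None
        self.send_response(200)  # respond quickly so Modelbit doesn't retry
        self.end_headers()

    def log_message(self, *args):  # silence default stderr logging
        pass

# Demo: serve a single request and simulate Modelbit's webhook delivery.
server = HTTPServer(("127.0.0.1", 0), WebhookHandler)
t = threading.Thread(target=server.handle_request, daemon=True)
t.start()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/hook?request_id=42",
    data=json.dumps({"data": [0.9]}).encode(),
    headers={"x-mb-batch-id": "abc123", "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    status = resp.status
t.join(timeout=5)
server.server_close()
print(status, received["batch_id"], received["params"].get("request_id"))
```

Responding with a 200 as soon as you have stored the payload keeps you well inside the 10-second window described below.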

info

Modelbit will retry sending the webhook to your server several times if your server responds with an error (e.g. an HTTP 500).

Ignoring timeouts

The server receiving the webhook should respond quickly. If the response takes longer than 10 seconds Modelbit will treat this as a timeout error and re-send the webhook request to your server.

Sometimes it's not possible for your server to respond quickly due to architectural constraints. That's ok! You can tell Modelbit to ignore timeout errors with the response_webhook_ignore_timeout boolean parameter.

In Python that would look like:

import requests

requests.post(
    "https://<your-workspace-url>/v1/example_deployment/latest",
    json={
        "data": {...},
        "response_webhook": "https://your-server.com/...",
        "response_webhook_ignore_timeout": True,
    },
)