Adds custom performance tracing to your deployment. Performance traces show how long different parts of your deployment's code take to run. They are visualized on the Logs page of each deployment.

Calls to built-in Modelbit functions like mb.get_dataset and mb.get_model are automatically traced, as are activities related to setting up the deployment (e.g. downloading pickles, importing libraries). Most of these traces will appear infrequently because the modelbit package caches some calls to improve performance. Cached calls are nearly instant, so they won't show up in traces (or affect the performance of your deployment!).

The modelbit.trace functionality lets you add your own tracing to a deployment to help you track down which parts might be slow.


Tracing is most helpful on deployments that take at least 100ms to complete. Individual traces shorter than 10ms are not recorded.


  • name: str The name of the trace. Use this name to identify the relevant code later on.


No value is returned.

If run outside of a deployment, the trace call prints how long the contained code took to run. If run within a deployment, the trace information is recorded and visualized on the deployment's Logs page.
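To illustrate the print-timing behavior outside a deployment, here is a minimal sketch of a trace-style context manager. This is not the modelbit implementation; it is a stand-in that only mimics the observable behavior described above (time the with block and print its duration):

import time
from contextlib import contextmanager

@contextmanager
def trace(name: str):
    # Sketch only: times the enclosed block and prints the elapsed
    # milliseconds, similar to running modelbit.trace outside a deployment.
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {elapsed_ms:.1f}ms")

with trace("slow step"):
    time.sleep(0.05)  # stand-in for real work

In a real deployment you would use modelbit.trace instead, and the timing would appear on the Logs page rather than in stdout.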


Tracing code that processes inputs

To find out whether processing your model's inputs takes a long time, add a trace around that step. Only the code within the with block of modelbit.trace is measured.

def my_predict_function(raw_inputs):
    with modelbit.trace("process inputs"):
        inputs = []
        for i in raw_inputs:
            inputs.append(i)  # transform each raw input as needed

    return my_model.predict(inputs)

Tracing two calls to helper functions

You can have multiple traces within a function.

def my_predict_function(...):
    with modelbit.trace("get_tiles"):
        tiles = get_tiles(...)

    with modelbit.trace("process_image"):
        img = process_image(tiles)

    return ...

Tracing initialization code

If using git, you can add traces to your import and model-loading steps. These traces will only appear during the first call to your deployment, since the results are cached on subsequent calls.
import modelbit

with modelbit.trace("import torch and tf"):
    import torch
    import tensorflow

with modelbit.trace("load model"):
    checkpoint = MyFancyModel.prepare_checkpoint(...)
    model = MyFancyModel.from_pretrained(checkpoint)

def make_prediction(...):
    return model.make_prediction(...)

Set trace name dynamically

The name of the trace can include information relevant to the call. Simply interpolate that information into the string.

def my_predict_function(raw_inputs):
    with modelbit.trace(f"process {len(raw_inputs)} inputs"):
        inputs = []
        for i in raw_inputs:
            inputs.append(i)  # transform each raw input as needed

    return my_model.predict(inputs)