Deployments using the registry

To make a deployment that uses the models in the registry, call mb.get_model in your inference function.

Storing a model in the registry

First, store a model in the registry so your inference function has a model to fetch:

from sklearn import linear_model

model = linear_model.LinearRegression()
model.fit([[1], [2], [3]], [2, 4, 6])

# store the model in the registry as "doubler_model"
mb.add_model("doubler_model", model)
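Before storing the model, it is worth sanity-checking it locally. This sketch retrains the same model and verifies the prediction; since the training points lie exactly on the line y = 2x, the fit is exact:

```python
from sklearn import linear_model

# train the same doubling model as above
model = linear_model.LinearRegression()
model.fit([[1], [2], [3]], [2, 4, 6])

# the training points lie exactly on y = 2x, so predict([[5]]) is 10
prediction = model.predict([[5]])[0]
print(round(prediction, 6))  # 10.0
```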

Fetch the model in your inference function

Now that we have a model in the registry, we can build an inference function that uses it:

def double_number_example(a: int):
    model = mb.get_model("doubler_model")
    return model.predict([[a]])[0]

At inference time the deployment will fetch the model and use it to make a prediction. Models are cached in the deployment, so subsequent calls to get_model return immediately.
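The caching behavior described above can be pictured with a simple in-memory dict. This is a hypothetical sketch of the pattern, not Modelbit's actual implementation:

```python
# hypothetical sketch of model caching; not Modelbit's actual code
_model_cache = {}

def get_model_cached(name, fetch_fn):
    # fetch from the registry only on the first call; reuse afterward
    if name not in _model_cache:
        _model_cache[name] = fetch_fn(name)
    return _model_cache[name]

fetch_count = 0

def slow_fetch(name):
    # stands in for the network round trip to the registry
    global fetch_count
    fetch_count += 1
    return f"model:{name}"

get_model_cached("doubler_model", slow_fetch)
get_model_cached("doubler_model", slow_fetch)
print(fetch_count)  # 1: the second call hit the cache
```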


Modelbit automatically detects which Python packages your deployment environment needs by inspecting your local Python environment.

Deployments that use mb.get_model should run their inference functions before deploying. These test runs let mb.deploy infer Python package dependencies based on the models that mb.get_model fetches.

For example, run double_number_example before deploying it and your Python dependencies will be auto-detected:

double_number_example(5) # make sure it works