Deploying models

Every Modelbit deployment consists of three parts:

  • A Python function. This is the code that executes at runtime and returns your prediction.
  • Dependencies of the deployed function. These include the ML model, other functions, and any other values needed by your main function.
  • API endpoints to use the deployed function. Your deployment can be called from Snowflake, Redshift, and REST.

Begin by writing a Python function that will be deployed to Modelbit. The function's arguments will be the parameters the API receives at runtime.

from sklearn import linear_model

lm = linear_model.LinearRegression()
lm.fit([[1], [2], [3]], [2, 4, 6])

def example_doubler(half: int) -> int:
    if type(half) is not int:
        return None
    return round(lm.predict([[half]])[0])


Since example_doubler references lm as a dependency, lm is pickled and sent to production along with the example_doubler function. All of your function's dependencies will be pickled and sent to production, so feel free to include feature encoders, scalers, and anything else you may need.
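To see why picklable dependencies can travel with your function, here is a minimal sketch (illustrative only, not Modelbit's internal code) that round-trips a dependency through pickle, standing in for what happens at deploy time. The `feature_scale` dict is a hypothetical stand-in for a scaler or encoder:

```python
import pickle

# Hypothetical stand-in for a feature scaler or encoder your function depends on.
feature_scale = {"mean": 2.0, "std": 1.0}

def scale(x: float) -> float:
    # References feature_scale, so it would be pickled alongside this function.
    return (x - feature_scale["mean"]) / feature_scale["std"]

# Round-trip the dependency through pickle, as a sketch of deploy-time serialization.
restored = pickle.loads(pickle.dumps(feature_scale))
assert restored == feature_scale  # the dependency survives serialization intact
```

Anything that pickles cleanly, from sklearn models to plain Python values, can be captured this way.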

The Python type hints can be used for input validation, and they set the input and output types of your warehouse's SQL functions that call this model.
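As a sketch of how type hints could drive runtime validation (illustrative only; `validate_and_call` is a hypothetical helper, not part of Modelbit's API):

```python
from typing import get_type_hints

def example_doubler(half: int) -> int:
    return half * 2

def validate_and_call(func, **kwargs):
    """Hypothetical helper: reject arguments whose runtime type
    does not match the function's annotations."""
    hints = get_type_hints(func)
    for name, value in kwargs.items():
        expected = hints.get(name)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(f"{name} must be {expected.__name__}, "
                            f"got {type(value).__name__}")
    return func(**kwargs)

result = validate_and_call(example_doubler, half=3)   # returns 6
# validate_and_call(example_doubler, half="3")        # would raise TypeError
```

The `return` annotation in `get_type_hints` is what a platform could use to declare the SQL function's output type.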

If you don't typically work in a notebook environment, you can deploy models with git.