Starting with a Python notebook
With Modelbit, you can deploy any model from a Python notebook to a REST API in minutes. To get started, create a blank notebook in the tool of your choice (Colab, Hex, Jupyter, etc.). Then install the modelbit package:
pip install --upgrade modelbit
Next we'll create a model and an inference function in the notebook. Then we'll use the modelbit package to turn that model and inference function into a REST API.
Create a simple model
Modelbit can deploy any kind of ML model. For this example we'll keep it simple and deploy a scikit-learn linear regression that doubles numbers. The same process works for complex models that use GPUs or have multi-gigabyte checkpoint files. So let's get started!
First, train a simple linear regression:
from sklearn import linear_model
lm = linear_model.LinearRegression()
lm.fit([[1], [2], [3]], [2, 4, 6])
The above code makes the world's simplest model: a linear regression called lm that happily doubles numbers. Great!
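As a quick sanity check, you can inspect the fitted parameters (standard scikit-learn attributes) to confirm the model learned to double:

print(lm.coef_)       # approximately [2.0], the learned slope
print(lm.intercept_)  # approximately 0.0, the learned intercept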
Now let's make an inference function that calls our linear regression and returns the result:
def predict_double(half: float) -> float:
    return lm.predict([[half]])[0]
# test that it works by doubling the number 5
print(predict_double(5))
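If everything is working, this prints approximately 10.0 (floating-point rounding may add a tiny error).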
With our model and inference function complete we're ready to turn them into a REST API.
Deploy your first model
We'll use the modelbit package to deploy the inference function to a REST API.
Behind the scenes, the modelbit package will capture all the relevant source code, model artifacts, and Python packages from your notebook and build a Docker image. Then it'll deploy that Docker image to a cluster of servers and create a REST API for you to call your model. That all happens automagically!
First, log into your Modelbit workspace:
import modelbit
mb = modelbit.login()
Click the displayed link and your notebook will be authenticated with your Modelbit account.
Now we'll deploy the predict_double function to a REST API using mb.deploy:
mb.deploy(predict_double)
This command will display a button that takes you to your deployment in Modelbit! There you'll see the REST API endpoints, build logs for your deployment's environment, and more.
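mb.deploy can usually detect your environment automatically, but it also accepts optional keyword arguments for pinning it explicitly. Here's a minimal sketch, assuming the python_packages and python_version options described in the Modelbit docs (the versions shown are illustrative; match them to your notebook):

mb.deploy(predict_double,
    python_packages=["scikit-learn==1.3.2"],  # illustrative pin; use your notebook's version
    python_version="3.10")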
Call your model's REST API
You've trained a simple model and deployed it to Modelbit. Now it's time to call that deployment and see it work.
You can call your deployment from Python using modelbit.get_inference (or the requests package), or from the command line using curl. The API Endpoints tab for your deployment in Modelbit will show you the exact syntax for your model. It'll look similar to this:
Calling predict_double with modelbit.get_inference:
modelbit.get_inference(
    deployment="predict_double",
    workspace="<YOUR_WORKSPACE>",
    region="<YOUR_REGION>",
    data=5)
# return value
{"data": 10 }
Calling predict_double with the requests package:
import json, requests

requests.post("https://<YOUR_WORKSPACE_URL>/v1/predict_double/latest",
    headers={"Content-Type": "application/json"},
    data=json.dumps({"data": 5})).json()
# return value
{"data": 10 }
curl -s -XPOST "https://<YOUR_WORKSPACE_URL>/v1/predict_double/latest" -d '{"data": 5}'
# return value
'{"data": 10}'
Once you've called your deployment, you'll see the results in the Logs tab within Modelbit.
Congratulations, you've deployed your first model to Modelbit!
Next steps
You created a model and an inference function that calls the model. That's all Modelbit needs to create your REST API! Modelbit can capture any function you create in a Python notebook and turn it into a REST API, complete with all of the dependent pip packages and model artifacts.
When you used mb.deploy, you created a REST API that runs your inference function. That API is callable from Python or with curl.
From here we invite you to develop and deploy more complex models, experiment with A/B testing, and organize your many model artifacts in the model registry.
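If you'd like a first taste of the model registry, here's a minimal sketch. The helper names below (mb.add_model, mb.get_model) and the registry name "doubler" are assumptions based on the modelbit package's registry helpers; check the model registry docs for the exact API:

# hypothetical example: store the trained model in the registry under a name
mb.add_model("doubler", lm)

# later, fetch it back by name, e.g. from another notebook or deployment
lm_again = mb.get_model("doubler")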