Deploying with Modelbit
Each Modelbit deployment is a collection of Python source code, model artifacts, and environment dependencies. Every deployment gets a REST API, logs showing recent calls and errors, and settings for monitoring and resources.
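As a minimal sketch, a deployment is created by passing an inference function to `modelbit.deploy` from a notebook or script. The function name and its logic below are illustrative; `modelbit.login()` opens a browser to authenticate.

```python
def predict_price(sqft: float) -> float:
    # Stand-in for a real model; in practice you'd load a trained artifact.
    return 150.0 * sqft

def deploy():
    # Imported inside the function so the sketch reads (and the predict
    # function runs) without the modelbit package installed.
    import modelbit
    mb = modelbit.login()     # authenticate to your Modelbit workspace
    mb.deploy(predict_price)  # creates the deployment and its REST API

# deploy()  # uncomment to deploy
```

Once deployed, the source code, environment, and REST endpoint all appear on the deployment's page in Modelbit.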
For more information about creating a deployment, check out:
- Custom Python environments for specifying the `pip` and `apt-get` packages to install
- Private packages for including your private `.whl` packages
- Compute environments to use GPUs
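Environment dependencies can be declared at deploy time. The sketch below assumes the `python_packages` and `system_packages` arguments to `mb.deploy`; the package names and versions are illustrative, so check the custom-environments docs for the full set of options.

```python
def double(x: float) -> float:
    # Trivial stand-in for a model that needs extra packages at runtime.
    return 2 * x

def deploy_with_environment():
    import modelbit
    mb = modelbit.login()
    mb.deploy(
        double,
        # pip packages to install in the deployment's environment
        python_packages=["scikit-learn==1.3.2"],
        # apt-get packages, e.g. shared libraries some wheels need
        system_packages=["libgomp1"],
    )
```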
After you've deployed, you'll want to call your model:
- Single inference requests shows the typical syntax for requesting a single inference from your model.
- Batch inference requests shows the typical syntax for sending and receiving many inferences at once.
- DataFrame mode is for deployments that expect to receive pandas DataFrames.
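The request styles above can be sketched with the standard library. The workspace URL, deployment name, and API key below are placeholders, and the payload shapes (positional arguments for a single request; `[row_id, arg1, ...]` rows for a batch) are assumptions to confirm against your deployment's API page.

```python
import json
import urllib.request

# Placeholder endpoint; copy the real one from your deployment's API page.
API_URL = "https://YOUR_WORKSPACE.app.modelbit.com/v1/example_deployment/latest"

def single_payload(*args):
    # Single inference: one set of positional arguments for the function.
    return {"data": list(args)}

def batch_payload(rows):
    # Batch inference: each row is [row_id, arg1, arg2, ...]; results come
    # back paired with the same row IDs.
    return {"data": rows}

def call_deployment(payload, api_key="YOUR_API_KEY"):
    # POSTs JSON to the deployment's REST API and returns the "data" field.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]
```

For example, `call_deployment(single_payload(2300))` would request one inference, while `call_deployment(batch_payload([[1, 2300], [2, 1750]]))` would request two at once.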
And you can monitor your deployments with:
- Slack alerts to get notified when there are errors
- Logs integrations to stream logs into Datadog and Snowflake