Base images with CUDA

Modelbit deployments and training jobs are containerized with Docker. By default, the Python and system packages you specify are installed into a Docker image that only contains the Python language. This works well for most models.

However, some models require additional libraries, like NVIDIA's CUDA libraries, that are challenging to install into a standard Python Docker image. In those cases you can tell Modelbit to use a specific base image that comes with the CUDA libraries pre-installed as a starting point for your deployment or training job.

Supported base images

tip

Modelbit supports a limited set of Docker base images known to work well within the Modelbit platform. If you need a different image to support your model, reach out to Modelbit Support so we can add it!

The following base images are currently supported. When specifying a base image, make sure to also specify a Python language version that is compatible with the base image.

| Base image | Supported Python versions |
| --- | --- |
| pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel | 3.11 |
| pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime | 3.11 |
| nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04 | 3.10 or 3.11 |
| nvidia/cuda:12.4.1-cudnn-runtime-ubuntu22.04 | 3.10 or 3.11 |

Specifying base images

You can specify the base image you need with the baseImage property in a deployment's metadata.yaml or a training job's metadata.yaml.

For example:

training_jobs/my_job/metadata.yaml
owner: you@company.com
runtimeInfo:
  baseImage: pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel
  mainFunction: train_model
  mainFunctionArgs: []
  pythonVersion: "3.11"
  systemPackages: null
schedules: []
schemaVersion: 1
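
For a deployment, the baseImage property goes in the same place under runtimeInfo. Below is a minimal sketch assuming a hypothetical deployment named my_deployment with a main function called predict; the other fields mirror the training job example above, and your deployment's metadata.yaml may contain additional or different fields:

deployments/my_deployment/metadata.yaml
owner: you@company.com
runtimeInfo:
  # Runtime image; pairs with a compatible pythonVersion from the table above
  baseImage: nvidia/cuda:12.4.1-cudnn-runtime-ubuntu22.04
  mainFunction: predict
  pythonVersion: "3.10"
  systemPackages: null
schemaVersion: 1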