Using Tecton features for inference

This example will show you how to connect Tecton with Modelbit and use features from your Tecton feature stores in Modelbit deployments.

Creating a Tecton API key

Modelbit connects to Tecton using a Tecton API key. If you don't already have one, you can create a Tecton API key by following the steps below:

From a terminal that's authenticated with Tecton, create a Service Account for Modelbit:

tecton service-account create --name modelbit

That'll create an API key:

Save this API Key - you will not be able to get it again.
API Key: de064...
Service Account ID: 3721a...

You also need to grant the new Service Account access to call the Tecton API in your workspace:

tecton access-control assign-role --service-account 3721a... --role consumer --workspace my-workspace

Your Tecton API key is now ready to be added to Modelbit!

Adding a Tecton API key to Modelbit

In your Modelbit workspace, open Settings and then Integrations.

Click the Tecton tile and paste in your Tecton API key. Then click Save. Your Tecton API key has been added to Modelbit!

Using your Tecton feature store

Any ML model and Tecton feature store can work with Modelbit. In this example we assume you're already familiar with Tecton, so we'll deploy the example model and use the example feature service fraud_detection_feature_service from the Tecton Quickstart Tutorial.

Begin by authenticating with Modelbit in your Python notebook:

import modelbit
mb = modelbit.login()
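
The inference code below expects the trained fraud model from the quickstart to be available in your notebook as model. If you're following along without having run the quickstart, here's a hypothetical stand-in (scikit-learn, random training data, and the feature count are all assumptions, not the quickstart's actual model) that lets the rest of the code execute:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical placeholder for the quickstart's trained model. The column
# count (3) is arbitrary; it must match the number of features your
# feature service actually returns.
model = LogisticRegression()
model.fit(np.random.rand(100, 3), np.random.randint(0, 2, 100))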

Then create a function to fetch feature values for a user_id from Tecton. In this case we're getting features from the fraud_detection_feature_service feature service.

import requests, json, os
import pandas as pd

TECTON_WORKSPACE = "my-workspace"
TECTON_ACCOUNT = "my-account"

def get_tecton_feature_data(user_id: str):
    # Look up online feature values for this user_id from Tecton's HTTP API
    online_feature_data = requests.post(
        headers={"Authorization": "Tecton-key " + mb.get_secret("TECTON_API_KEY")},
        url=f"https://{TECTON_ACCOUNT}.tecton.ai/api/v1/feature-service/get-features",
        data=json.dumps({
            "params": {
                "feature_service_name": "fraud_detection_feature_service",
                "join_key_map": {"user_id": user_id},
                "metadata_options": {"include_names": True},
                "workspace_name": TECTON_WORKSPACE,
            }
        }),
    )
    return online_feature_data.json()
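
You can sanity-check the connection by calling the function directly. The feature names and values below are illustrative only, not real output from your workspace; what matters is the shape, which the inference code relies on (metadata.features holds the names, result.features holds the values):

feature_data = get_tecton_feature_data("user_502567604689")
# Illustrative response shape (names and values will differ in your workspace):
# {
#   "metadata": {"features": [{"name": "user_transaction_counts.transactions_last_1d"}, ...]},
#   "result": {"features": [0, ...]}
# }
print(feature_data["result"]["features"])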

You'll notice that we're calling mb.get_secret("TECTON_API_KEY") to fetch our Tecton API key so we can authenticate with Tecton. This will work both in your notebook and in Modelbit deployments at inference time.
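
If the request fails with an authentication error, first confirm the secret is readable from your notebook. A quick check, assuming you saved the key under the name TECTON_API_KEY as above:

# A truthy value confirms the secret is retrievable from this environment
assert mb.get_secret("TECTON_API_KEY"), "Add the key in Settings > Integrations"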

Now, we'll use the features returned by get_tecton_feature_data, and the model trained in the quickstart, to compute inferences:

def predict_fraud(user_id: str) -> float:
    feature_data = get_tecton_feature_data(user_id)
    # Tecton returns dotted feature names like "user_transaction_counts.transactions_last_1d";
    # rename them to the "__"-separated column names the model was trained on
    columns = [f["name"].replace(".", "__") for f in feature_data["metadata"]["features"]]
    data = [feature_data["result"]["features"]]
    features = pd.DataFrame(data, columns=columns)
    return model.predict(features)[0]

We can call predict_fraud to test that everything is working as expected:

predict_fraud("user_502567604689")

Which correctly returns 0, not fraud!

Deploying to Modelbit

With our inference code ready to go, we can deploy to Modelbit and create a REST endpoint for our model:

mb.deploy(predict_fraud)
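
If the package versions in your notebook differ from what you want in production, you can pin them at deploy time. The versions below are placeholders, not requirements of this example; substitute your own:

mb.deploy(predict_fraud, python_packages=["scikit-learn==1.3.2", "pandas==2.1.4"])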

Our deployment is now ready to receive a user_id, look up feature values from Tecton, and evaluate whether or not the user is fraudulent! You can test it by calling Modelbit's REST API:

curl -s -XPOST "https://<your-workspace>.modelbit.com/v1/predict_fraud/latest" -d '{"data": "user_502567604689"}'
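
The same request from Python, as a minimal sketch (replace the placeholder workspace URL with your own; the response shape shown in the comment is an assumption to check against your deployment):

import requests

response = requests.post(
    "https://<your-workspace>.modelbit.com/v1/predict_fraud/latest",
    json={"data": "user_502567604689"},
)
print(response.json())  # the inference result, e.g. {"data": 0}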