Prompt engineering with LangChain

In this tutorial we'll use LangChain to call OpenAI from a Modelbit deployment.

Add your API key to Modelbit

Create an OpenAI API key to load into Modelbit. In Modelbit's Settings, click Integrations and then click the OpenAI tile. Add your OpenAI API key to the form.

Install LangChain

We'll develop this deployment in a Python notebook. Make sure LangChain is installed, authenticate your Python notebook with Modelbit, and set your API key for local development:

pip install langchain langchain-openai

Authenticate with Modelbit:

import modelbit
mb = modelbit.login()

And set your API key in the notebook:

import os
os.environ['OPENAI_API_KEY'] = mb.get_secret("OPENAI_API_KEY")

Create a LangChain chat prompt

We'll make a simple chatbot to assist with customer support:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

def chat_with_langchain(input: str) -> str:
    # Chat model that calls OpenAI; reads the key from OPENAI_API_KEY
    llm = ChatOpenAI()
    # Prompt template with a system message and the user's input
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a tech support agent at a financial services company."),
        ("user", "{input}")
    ])
    # Parse the model's response into a plain string
    output_parser = StrOutputParser()
    # Compose prompt, model, and parser into a single chain
    chain = prompt | llm | output_parser
    return chain.invoke({"input": input})

Let's test that our function works as expected:

chat_with_langchain("Why won't you give me a refund?")

Success! It responds with:

I'm sorry to hear that you're experiencing difficulties with our refund process. As
a tech support agent, I...

Deploy to Modelbit

We can now deploy chat_with_langchain to Modelbit:

mb.deploy(chat_with_langchain)
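
Modelbit detects the packages your function imports, but you can also pin them explicitly so the deployment's environment matches your notebook. A minimal sketch, with illustrative (unpinned) package names:

mb.deploy(chat_with_langchain,
          python_packages=["langchain", "langchain-openai"])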

This will create a REST API endpoint and a SQL function that use LangChain to call OpenAI.
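
Once deployed, you can call the REST endpoint from any HTTP client. Here's a minimal sketch using Python's requests, assuming the single-inference request format ({"data": ...}); the workspace URL below is a placeholder, so copy the exact endpoint URL from your deployment's page in Modelbit:

import requests

# Placeholder URL -- use the real endpoint shown for your deployment in Modelbit
url = "https://<your-workspace>.app.modelbit.com/v1/chat_with_langchain/latest"

response = requests.post(url, json={"data": "Why won't you give me a refund?"})
print(response.json()["data"])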