In this tutorial, we’ll walk through the steps to create a Chainlit application integrated with LangChain.
Preview of what you will build
Prerequisites
Before getting started, make sure you have the following:
- A working installation of Chainlit
- The LangChain package installed
- An OpenAI API key (see the note after this list)
- Basic understanding of Python programming
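The `langchain_openai` integration reads your key from the `OPENAI_API_KEY` environment variable. One way to set it (the key value here is a placeholder):

```bash
export OPENAI_API_KEY="sk-..."  # placeholder; use your real key
```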
Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle a new chat session and another to handle incoming messages from the UI.
With LangChain
Let’s go through a small example.
If your agent/chain does not have an async implementation, fall back to the sync implementation.
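Here is a minimal sketch of what this can look like; the system prompt and model settings are illustrative placeholders, not fixed by the tutorial:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_openai import ChatOpenAI

import chainlit as cl


@cl.on_chat_start
async def on_chat_start():
    # Build a Runnable (prompt -> model -> parser) for this chat session.
    model = ChatOpenAI(streaming=True)
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant."),  # placeholder system prompt
            ("human", "{question}"),
        ]
    )
    runnable = prompt | model | StrOutputParser()
    cl.user_session.set("runnable", runnable)


@cl.on_message
async def on_message(message: cl.Message):
    runnable: Runnable = cl.user_session.get("runnable")

    msg = cl.Message(content="")

    # Stream the answer token by token; the callback handler surfaces
    # the chain's intermediate steps in the UI.
    async for chunk in runnable.astream(
        {"question": message.content},
        config=RunnableConfig(callbacks=[cl.LangchainCallbackHandler()]),
    ):
        await msg.stream_token(chunk)

    await msg.send()
```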
This code sets up an instance of `Runnable` with a custom `ChatPromptTemplate` for each chat session. The `Runnable` is invoked every time a user sends a message to generate the response.
The callback handler is responsible for listening to the chain’s intermediate steps and sending them to the UI.
With LangGraph
```python
from typing import Literal

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.schema.runnable.config import RunnableConfig
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import MessagesState
from langgraph.prebuilt import ToolNode

import chainlit as cl


@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"
    else:
        raise AssertionError("Unknown city")


tools = [get_weather]

model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
final_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

model = model.bind_tools(tools)
# Tag the final node's model so its tokens can be identified when streaming.
final_model = final_model.with_config(tags=["final_node"])
tool_node = ToolNode(tools=tools)


def should_continue(state: MessagesState) -> Literal["tools", "final"]:
    messages = state["messages"]
    last_message = messages[-1]
    # Route to the tool node if the model requested a tool call,
    # otherwise hand off to the final node.
    if last_message.tool_calls:
        return "tools"
    return "final"


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}


def call_final_model(state: MessagesState):
    messages = state["messages"]
    last_ai_message = messages[-1]
    response = final_model.invoke(
        [
            SystemMessage("Rewrite this in the voice of Al Roker"),
            HumanMessage(last_ai_message.content),
        ]
    )
    # Reuse the last AI message's id so the rewrite replaces it in the state.
    response.id = last_ai_message.id
    return {"messages": [response]}


builder = StateGraph(MessagesState)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)
builder.add_node("final", call_final_model)

builder.add_edge(START, "agent")
builder.add_conditional_edges(
    "agent",
    should_continue,
)
builder.add_edge("tools", "agent")
builder.add_edge("final", END)

graph = builder.compile()


@cl.on_message
async def on_message(msg: cl.Message):
    config = {"configurable": {"thread_id": cl.context.session.id}}
    cb = cl.LangchainCallbackHandler()
    final_answer = cl.Message(content="")

    # Stream tokens, forwarding only those produced by the "final" node to the UI.
    for chunk, metadata in graph.stream(
        {"messages": [HumanMessage(content=msg.content)]},
        stream_mode="messages",
        config=RunnableConfig(callbacks=[cb], **config),
    ):
        if (
            chunk.content
            and not isinstance(chunk, HumanMessage)
            and metadata["langgraph_node"] == "final"
        ):
            await final_answer.stream_token(chunk.content)

    await final_answer.send()
```
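Only tokens emitted by the "final" node (identified through the `langgraph_node` key in the streamed metadata) are pushed into the Chainlit message; the intermediate agent and tool steps are rendered by the callback handler instead.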
Step 3: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
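```bash
chainlit run app.py -w
```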
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000.
When using LangChain, prompts and completions are not cached by default. To enable caching, set `cache = true` in your Chainlit config file.
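A sketch of where that setting lives, assuming the default project layout where `cache` sits under the `[project]` section of `.chainlit/config.toml`:

```toml
# .chainlit/config.toml
[project]
# Cache LangChain prompts/completions across calls
cache = true
```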