;
  // Start widget container expanded, i.e. wide
  expanded?: boolean;
  // Set language of Chainlit's UI
  // Defaults to preferred language of the user
  // via navigator.language, then to en-US
  language?: string;
  // Start with the copilot chat panel open
  opened?: boolean;
}
```
## Function Calling
The Copilot can call functions on your website. This is useful for taking actions on behalf of the user. For example, you can call a function to create a new document, or to open a modal.
First, create a `CopilotFunction` in your Chainlit server:
```py theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    if cl.context.session.client_type == "copilot":
        fn = cl.CopilotFunction(name="test", args={"msg": msg.content})
        res = await fn.acall()
        await cl.Message(content=res).send()
```
Then, in your app/website, add the following event listener:
```js theme={null}
window.addEventListener("chainlit-call-fn", (e) => {
  const { name, args, callback } = e.detail;
  if (name === "test") {
    console.log(name, args);
    callback("You sent: " + args.msg);
  }
});
```
As you can see, the event listener receives the function name, arguments, and a callback function. The callback function should be called with the result of the function call.
## Send a Message
The Copilot can also send messages directly to the Chainlit server. This is useful for sending context information or user actions to the Chainlit server (for example, that the user selected cells A1 to B1 in a table).
First, update the `@cl.on_message` decorated function in your Chainlit server:
```py theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    if cl.context.session.client_type == "copilot":
        if msg.type == "system_message":
            # do something with the message
            return

        fn = cl.CopilotFunction(name="test", args={"msg": msg.content})
        res = await fn.acall()
        await cl.Message(content=res).send()
```
Then, in your app/website, you can emit an event like this:
```js theme={null}
window.sendChainlitMessage({
  type: "system_message",
  output: "Hello World!",
});
```
## Security
### Cross Origin Resource Sharing (CORS)
Don't forget to add the origin of the host website to the [allow\_origins](/backend/config/project) config field, which is a list of allowed origins.
### Authentication
If you want to authenticate users on the Copilot, you can enable [authentication](/authentication) on the Chainlit server.
If the Chainlit app and the host website are deployed on different domains,
you will have to add `CHAINLIT_COOKIE_SAMESITE=none` to the Chainlit app env
variables.
While the standalone Chainlit application handles the authentication process, the Copilot needs to be configured with an access token. This token is used to authenticate the user with the Chainlit server.
The host app/website is responsible for generating the token and passing it to the widget as `accessToken`. Here are examples of how to generate the token in different languages:
You will need the `CHAINLIT_AUTH_SECRET` you generated when [configuring
authentication](/authentication).
```py jwt.py theme={null}
import jwt
from datetime import datetime, timedelta

CHAINLIT_AUTH_SECRET = "your-secret"


def create_jwt(identifier: str, metadata: dict) -> str:
    to_encode = {
        "identifier": identifier,
        "metadata": metadata,
        "exp": datetime.utcnow() + timedelta(minutes=60 * 24 * 15),  # 15 days
    }
    encoded_jwt = jwt.encode(to_encode, CHAINLIT_AUTH_SECRET, algorithm="HS256")
    return encoded_jwt


access_token = create_jwt("user-1", {"name": "John Doe"})
```
```ts jwt.ts theme={null}
import jwt from "jsonwebtoken";

const CHAINLIT_AUTH_SECRET = "your-secret";

interface Metadata {
  [key: string]: any;
}

function createJwt(identifier: string, metadata: Metadata): string {
  const toEncode = {
    identifier: identifier,
    metadata: metadata,
    exp: Math.floor(Date.now() / 1000) + 60 * 60 * 24 * 15, // 15 days
  };
  const encodedJwt = jwt.sign(toEncode, CHAINLIT_AUTH_SECRET, {
    algorithm: "HS256",
  });
  return encodedJwt;
}

const accessToken = createJwt("user-1", { name: "John Doe" });
```
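For reference, the token generated above is a standard HS256 JWT, which can be assembled with nothing but the Python standard library. This is an illustrative sketch (the `b64url` and `sign_hs256` helpers are not part of any Chainlit API); in practice, use a maintained JWT library as shown above:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # Base64url without padding, as required by the JWT spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs256(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"


token = sign_hs256(
    {
        "identifier": "user-1",
        "metadata": {"name": "John Doe"},
        "exp": int(time.time()) + 60 * 60 * 24 * 15,  # 15 days
    },
    "your-secret",
)
# token is the familiar three-part "header.payload.signature" string
```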
# Discord
Source: https://docs.chainlit.io/deploy/discord
To make your Chainlit app available on Discord, you will need to create a Discord app and set up the necessary environment variables.
## How it Works
The Discord bot will listen to messages mentioning it in channels and direct messages.
It will send replies to a dedicated thread or DM depending on the context.
## Supported Features
| Message | Streaming | Elements | Audio | Ask User | Chat History | Chat Profiles | Feedback |
| ------- | --------- | -------- | ----- | -------- | ------------ | ------------- | -------- |
| ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ |
## Install the Discord Library
The Discord library is not included in the Chainlit dependencies. You will have to install it manually.
```bash theme={null}
pip install discord
```
## Create a Discord App
To start, navigate to the [Discord apps dashboard](https://discord.com/developers/applications). Here, you should find a button that says New Application. When you click this button, select the option to create your app from scratch.
## Set the Environment Variables
Navigate to the Bot tab and click on `Reset Token`. This will make the token visible. Copy it and set it as an environment variable in your Chainlit app.
```bash theme={null}
DISCORD_BOT_TOKEN=your_bot_token
```
## Set Intents
Navigate to the Bot tab and enable the `MESSAGE CONTENT INTENT`, then click on Save Changes.
## Working Locally
If you are working locally, you will have to expose your local Chainlit app to the internet to receive incoming messages from Discord. You can use [ngrok](https://ngrok.com/) for this.
```bash theme={null}
ngrok http 8000
```
## Start the Chainlit App
The Discord bot cannot communicate with your Chainlit app until the app is running.
For the example, we will use this simple app:
```python my_app.py theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    # Access the original Discord message
    print(cl.user_session.get("discord_message"))
    # Access the Discord user
    print(cl.user_session.get("user"))
    # Access potential attached files
    attached_files = msg.elements

    await cl.Message(content="Hello World").send()
```
Start the Chainlit app. The `-h` flag prevents the default Chainlit UI from opening, since we are using Discord.
```bash theme={null}
chainlit run my_app.py -h
```
## Install the Discord Bot to Your Workspace
Navigate to the OAuth2 tab. In the OAuth2 URL Generator, select the `bot` scope.
Then, in the Bot Permissions section, select the following permissions.
You can check that you have selected the right permissions by looking at the
`permissions` parameter of the generated URL. It should be `377957238848`.
Copy the generated URL and paste it in your browser. You will be prompted to add the bot to a server. Select the server you want to add the bot to.
That's it! You should now be able to interact with your Chainlit app through Discord.
## Chat History
Chat history is directly available through Discord.
```python theme={null}
from chainlit.discord.app import client as discord_client
import chainlit as cl
import discord


@cl.on_message
async def on_message(msg: cl.Message):
    # The user session resets on every Discord message.
    # So we add previous chat messages manually.
    messages = cl.user_session.get("messages", [])
    channel: discord.abc.MessageableChannel = cl.user_session.get("discord_channel")
    if channel:
        discord_messages = [message async for message in channel.history(limit=10)]
        # Go through the last 10 messages and skip the current message.
        for x in discord_messages[::-1][:-1]:
            messages.append({
                "role": "assistant" if x.author.name == discord_client.user.name else "user",
                "content": x.clean_content if x.clean_content else x.channel.name  # first message is empty
            })
    # Your code here
```
# Overview
Source: https://docs.chainlit.io/deploy/overview
A Chainlit application can be consumed through multiple platforms. Write your assistant logic once, use everywhere!
## Available Platforms
* [Web App](/deploy/webapp): The native Chainlit UI, available on port 8000.
* [Copilot](/deploy/copilot): Embed your Chainlit app on any website as a Copilot.
* [React](/deploy/react/overview): Integrate your custom React frontend with the Chainlit backend.
* [Teams](/deploy/teams): Make your Chainlit app available on Teams.
* [Slack](/deploy/slack): Make your Chainlit app available on Slack.
* [Discord](/deploy/discord): Make your Chainlit app available on Discord.
## Tips & Tricks
### Start Chainlit with -h
When running a Chainlit app in production, you should always add `-h` to the
`chainlit run` command. Otherwise a browser window will be opened server side
and might break your deployment.
### Double check the host
By default, the Chainlit server host is `127.0.0.1`.
Typically, if you are running Chainlit in Docker, you want to add `--host 0.0.0.0` to your `chainlit run` command.
### Account for websockets
Chainlit is built upon websockets, which means the service you deploy your app
to has to support them. When auto scaling, make sure to enable sticky sessions (or session affinity).
Even with sticky sessions, load balancers sometimes struggle to consistently route a client to the same container.
In that case, you can set `transports = ["websocket"]` in your `.chainlit/config.toml` file.
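For example (a sketch of `.chainlit/config.toml`; this assumes the field lives in the `[project]` section alongside the other project-level options):

```toml theme={null}
[project]
# Force the pure WebSocket transport (no HTTP long-polling fallback)
transports = ["websocket"]
```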
### Deploying Chainlit on a subpath
If you need to deploy your Chainlit app under a subpath like
`https://my-app.com/chainlit`, you will need to set the `--root-path
/chainlit` flag on the `chainlit run` command (e.g. `chainlit run app.py -h --root-path /chainlit`). This ensures that
the app is served from the correct path.
### Cross origins
If your end users consume the Chainlit UI from the same origin as the server, everything will work out of the box.
However, if you embed Chainlit on a website, the connection will fail because of CORS.
In that case, you will have to update the `allow_origins` field of your `.chainlit/config.toml`.
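For example (a sketch of `.chainlit/config.toml`; `https://my-website.com` is a placeholder origin, and the field is assumed to sit in the `[project]` section):

```toml theme={null}
[project]
# Origins allowed to connect to the Chainlit server
allow_origins = ["https://my-website.com"]
```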
## Community resources
After you've successfully set up and tested your Chainlit application locally, the next step is to make it accessible to a wider audience by deploying it to a hosting service. This guide provides various options for self-hosting your Chainlit app.
* on [Ploomber Cloud](https://docs.cloud.ploomber.io/en/latest/apps/chainlit.html)
* on [AWS](https://ankushgarg.super.site/how-to-deploy-your-chatgpt-like-app-with-chainlit-and-aws-ecs)
* on [Azure Container](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/create-an-azure-openai-langchain-chromadb-and-chainlit-chat-app/ba-p/3885602)
* on [Google Cloud Run](https://pseudohvr.medium.com/deploying-chainlit-on-gcp-72231ba6b77f)
* on [Google App Engine](https://github.com/amjadraza/langchain-chainlit-docker-deployment-template)
* on [Replit](https://replit.com/@DanConstantini/Build-a-Chatbot-with-OpenAI-LangChain-and-Chainlit?v=1)
* on [Render](https://discord.com/channels/1088038867602526210/1126834266504966294/1126845898287230977)
* on [Fly.io](https://dev.to/willydouhard/how-to-deploy-your-chainlit-app-to-flyio-38ja)
* on [HuggingFace Spaces](https://github.com/Chainlit/cookbook/tree/main/chroma-qa-chat)
# Additional resources
Source: https://docs.chainlit.io/deploy/react/additional-resources
## Additional Resources
* [@chainlit/react-client npm package](https://www.npmjs.com/package/@chainlit/react-client)\
Explore the @chainlit/react-client npm package.
* [Recoil Documentation](https://recoiljs.org/docs/introduction/getting-started)\
Learn more about setting up and using Recoil for state management in React applications.
* [SWR Documentation](https://swr.vercel.app/)\
Discover how to leverage SWR for data fetching, caching, and revalidation in React applications.
* [Socket.IO Documentation](https://socket.io/docs/v4/)\
Understand how real-time communication is handled via Socket.IO, integral to the `useChatInteract` hook's operations.
* [JWT Documentation](https://jwt.io/introduction/)\
Learn about JSON Web Tokens (JWT) and how they are used for secure authentication.
# Installation and setup
Source: https://docs.chainlit.io/deploy/react/installation-and-setup
## Overview
The `@chainlit/react-client` package provides a set of React hooks as well as an API client to connect to your **Chainlit** application from any React application. The package includes hooks for managing chat sessions, messages, data, and interactions.
## Installation
To install the package, run the following command in your project directory:
```bash theme={null}
npm install @chainlit/react-client
```
This package uses **Recoil** to manage its state. This means you will have to wrap your application in a Recoil provider:
```typescript theme={null}
import React from 'react';
import ReactDOM from 'react-dom/client';
import { RecoilRoot } from 'recoil';
import { ChainlitAPI, ChainlitContext } from '@chainlit/react-client';

const CHAINLIT_SERVER_URL = 'http://localhost:8000';

const apiClient = new ChainlitAPI(CHAINLIT_SERVER_URL, 'webapp');

ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(
  <ChainlitContext.Provider value={apiClient}>
    <RecoilRoot>
      {/* Your app */}
    </RecoilRoot>
  </ChainlitContext.Provider>
);
```
# Overview
Source: https://docs.chainlit.io/deploy/react/overview
Chainlit allows you to create a custom frontend for your application, offering you the flexibility to design a unique user experience. By integrating your frontend with Chainlit's backend, you can harness the full power of Chainlit's features, including:
* Abstractions for easier development
* Monitoring and observability
* Seamless integrations with various tools
* Robust authentication mechanisms
* Support for multi-user environments
* Efficient data streaming capabilities
* [Installation and setup](/deploy/react/installation-and-setup): Learn how to install and set up the Chainlit React client.
* [Usage](/deploy/react/usage): Explore the key features provided by the React client.
* [Additional resources](/deploy/react/additional-resources): Explore additional resources for the React client.
The [@chainlit/react-client](https://www.npmjs.com/package/@chainlit/react-client) package is designed for integrating Chainlit applications with React. It offers several hooks and an API client for seamless connection and interaction.
## Supported Features
| Message | Streaming | Elements | Audio | Ask User | Chat History | Chat Profiles | Feedback |
| ------- | --------- | -------- | ----- | -------- | ------------ | ------------- | -------- |
| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
# Usage
Source: https://docs.chainlit.io/deploy/react/usage
## React Hooks
The `@chainlit/react-client` package provides several React hooks to manage various aspects of your chat application seamlessly:
* **[`useChatSession`](#usechatsession-hook)**: Manages the chat session's connection to the WebSocket server.
* **[`useChatMessages`](#usechatmessages-hook)**: Manages retrieval and rendering of chat messages.
* **[`useChatData`](#usechatdata-hook)**: Accesses chat-related data and states.
* **[`useChatInteract`](#usechatinteract-hook)**: Provides methods to interact with the chat system.
* **[`useAuth`](#useauth-hook)**: Handles authentication processes.
* **[`useApi`](#useapi-hook)**: Simplifies API interactions with built-in support for data fetching and error handling.
***
### `useChatSession` Hook
This hook is responsible for managing the chat session's connection to the WebSocket server.
#### Methods
* **`connect`**: Establishes a connection to the WebSocket server.
* **`disconnect`**: Disconnects from the WebSocket server.
* **`setChatProfile`**: Sets the chat profile state.
#### Example
```tsx theme={null}
import { useEffect } from 'react';
import { useChatSession } from '@chainlit/react-client';

const ChatComponent = () => {
  const { connect, disconnect, chatProfile, setChatProfile } = useChatSession();

  // Connect to the WebSocket server
  useEffect(() => {
    connect({
      userEnv: {
        /* user environment variables */
      },
      accessToken: 'Bearer YOUR ACCESS TOKEN', // Optional Chainlit auth token
    });
    return () => {
      disconnect();
    };
  }, []);

  // Rest of your component logic
};
```
***
### `useChatMessages` Hook
The `useChatMessages` hook provides access to the current chat messages, the first user interaction, and the active thread ID within your React application. It leverages Recoil for state management, ensuring that your components reactively update in response to state changes.
#### Returned Values
* **`threadId`** (`string | undefined`):\
The identifier of the current chat thread.
* **`messages`** (`IStep[]`):\
An array of chat messages.
* **`firstInteraction`** (`string | undefined`):\
The content of the first user-initiated interaction.
#### Example
```tsx theme={null}
import { useChatMessages } from '@chainlit/react-client';

const MessagesComponent = () => {
  const { messages, firstInteraction, threadId } = useChatMessages();

  return (
    <div>
      <p>Thread ID: {threadId}</p>
      {firstInteraction && <p>First Interaction: {firstInteraction}</p>}
      <ul>
        {messages.map((message) => (
          <li key={message.id}>{message.content}</li>
        ))}
      </ul>
    </div>
  );
};
```
***
### `useChatData` Hook
The `useChatData` hook offers comprehensive access to various chat-related states and data within your React application.
#### Returned Properties
* **`actions`** (`IAction[]`)
* **`askUser`** (`IAsk | undefined`)
* **`chatSettingsValue`** (`any`)
* **`connected`** (`boolean`)
* **`disabled`** (`boolean`)
* **`error`** (`boolean | undefined`)
* **`loading`** (`boolean`)
* **`tasklists`** (`ITasklistElement[]`)
#### Example
```tsx theme={null}
import { useChatData } from '@chainlit/react-client';

const ChatStatusComponent = () => {
  const { connected, loading, error, actions, askUser, chatSettingsValue } = useChatData();

  return (
    <div>
      <h2>Chat Status</h2>
      {loading && <p>Loading chat...</p>}
      {error && <p>There was an error with the chat session.</p>}
      <p>{connected ? 'Connected to chat.' : 'Disconnected from chat.'}</p>

      <h3>Available Actions</h3>
      <ul>
        {actions.map((action) => (
          <li key={action.name}>{action.name}</li>
        ))}
      </ul>

      {askUser && (
        <div>
          <h3>User Prompt</h3>
          <p>{askUser.message}</p>
        </div>
      )}

      <h3>Chat Settings</h3>
      <pre>{JSON.stringify(chatSettingsValue, null, 2)}</pre>
    </div>
  );
};
```
***
### `useChatInteract` Hook
The `useChatInteract` hook provides a comprehensive set of methods to interact with the chat system within your React application.
#### Methods
* **`sendMessage`**
* **`replyMessage`**
* **`clear`**
* **`uploadFile`**
* **`callAction`**
* **`startAudioStream`**
* **`sendAudioChunk`**
* **`stopTask`**
#### Example
```tsx theme={null}
import { useChatInteract } from '@chainlit/react-client';

const ChatInteraction = () => {
  const { sendMessage, replyMessage, clear } = useChatInteract();

  return (
    <div>
      <button onClick={() => sendMessage({ content: 'Hello!' })}>Send</button>
      <button onClick={() => replyMessage({ content: 'Reply!' })}>Reply</button>
      <button onClick={() => clear()}>Clear</button>
    </div>
  );
};
```
***
### `useAuth` Hook
The `useAuth` hook manages authentication within your React application, providing functionalities like user sessions and token management.
#### Properties & Methods
* **`authConfig`**
* **`user`**
* **`accessToken`**
* **`isLoading`**
* **`logout`**
#### Example
```tsx theme={null}
import { useAuth } from '@chainlit/react-client';

const UserProfile = () => {
  const { user, logout } = useAuth();

  if (!user) return <p>No user logged in.</p>;

  return (
    <div>
      <p>Username: {user.username}</p>
      <button onClick={() => logout()}>Logout</button>
    </div>
  );
};
```
***
### `useApi` Hook
The `useApi` hook simplifies data fetching and error handling using [SWR](https://swr.vercel.app/).
#### Example
```tsx theme={null}
import { useApi } from '@chainlit/react-client';

const Settings = () => {
  const { data, error, isLoading } = useApi('/project/settings');

  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Error: {error.message}</p>;

  return <pre>{JSON.stringify(data, null, 2)}</pre>;
};
```
# Slack
Source: https://docs.chainlit.io/deploy/slack
To make your Chainlit app available on Slack, you will need to create a Slack app and set up the necessary environment variables.
## How it Works
The Slack bot will listen to messages mentioning it in channels and direct messages.
It will send replies to a dedicated thread or DM depending on the context.
## Supported Features
| Message | Streaming | Elements | Audio | Ask User | Chat History | Chat Profiles | Feedback |
| ------- | --------- | -------- | ----- | -------- | ------------ | ------------- | -------- |
| ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ |
## Install the Slack Bolt Library
The Slack Bolt library is not included in the Chainlit dependencies. You will have to install it manually.
```bash theme={null}
pip install slack_bolt
```
## Create a Slack App
To start, navigate to the [Slack apps dashboard for the Slack API](https://api.slack.com/apps). Here, you should find a green button that says Create New App. When you click this button, select the option to create your app from scratch.
Create a name for your bot, such as "ChainlitDemo". Select the workspace you would like your bot to exist in.
## Connection Modes
Chainlit supports two ways to connect to Slack:
* **HTTP Mode** (default): Slack sends events to your Chainlit server via HTTP. Requires a public URL.
* **Socket Mode** (since 2.7.0): Chainlit connects to Slack via WebSocket. No public URL needed — ideal for local development or restrictive networks.
To use Socket Mode, set the `SLACK_WEBSOCKET_TOKEN` environment variable (see [Environment Variables](#set-the-environment-variables)). When set, Socket Mode takes priority over the HTTP handler.
## Working Locally
### With Socket Mode (recommended)
If you use Socket Mode, no public URL or ngrok is needed. Chainlit connects to Slack over WebSocket directly. See the [Socket Mode app manifest](#socket-mode) below.
### With HTTP Mode
If you are working locally with HTTP mode, you will have to expose your local Chainlit app to the internet to receive incoming messages from Slack. You can use [ngrok](https://ngrok.com/) for this.
```bash theme={null}
ngrok http 8000
```
This will give you a public URL that you can use to set up the app manifest. Do not forget to replace it once you deploy Chainlit to a public host.
## Set the App Manifest
Go to App Manifest and paste the following YAML.
Replace the `{placeholders}` with your own values.
```yaml theme={null}
display_information:
  name: { APP_NAME }
features:
  bot_user:
    display_name: { APP_NAME }
    always_online: false
oauth_config:
  scopes:
    user:
      - im:history
      - channels:history
    bot:
      - app_mentions:read
      - channels:read
      - chat:write
      - files:read
      - files:write
      - im:history
      - im:read
      - im:write
      - users:read
      - users:read.email
      - channels:history
      - groups:history
settings:
  event_subscriptions:
    request_url: https://{ CHAINLIT_APP_HOST }/slack/events
    bot_events:
      - app_home_opened
      - app_mention
      - message.im
  interactivity:
    is_enabled: true
    request_url: https://{ CHAINLIT_APP_HOST }/slack/events
  org_deploy_enabled: false
  socket_mode_enabled: false
  token_rotation_enabled: false
```
Click on Save Changes.
You will see a warning stating that the URL is not verified. You can ignore this for now.
### Socket Mode
If you are using Socket Mode, apply these changes to the manifest above: remove both `request_url` fields and set `socket_mode_enabled` to `true`.
```yaml theme={null}
settings:
  event_subscriptions:
    request_url: https://{ CHAINLIT_APP_HOST }/slack/events # [!code --]
    bot_events:
      - app_home_opened
      - app_mention
      - message.im
  interactivity:
    is_enabled: true
    request_url: https://{ CHAINLIT_APP_HOST }/slack/events # [!code --]
  org_deploy_enabled: false
  socket_mode_enabled: false # [!code --]
  socket_mode_enabled: true # [!code ++]
  token_rotation_enabled: false
```
## \[Optional] Allow users to send DMs to Chainlit
By default the app will only listen to mentions in channels.
If you want to allow users to send direct messages to the app, go to App Home and enable "Allow users to send Slash commands and messages from the messages tab".
## \[Optional] Emoji Reaction on Message Received
This optional feature shows an emoji reaction when a Slack message is received, giving the user immediate feedback while the bot processes the request.
This feature requires the `reactions:write` OAuth scope. If you enable this feature, you'll need to add this scope to your Slack app's OAuth configuration.
To enable this feature, add the following configuration to your `.chainlit/config.toml` file:
```toml theme={null}
[features.slack]
reaction_on_message_received = true
```
This feature is disabled by default to maintain backward compatibility.
## Install the Slack App to Your Workspace
Navigate to the Install App tab and click on Install to Workspace.
## Set the Environment Variables
Set the environment variables outside of your application code.
### Bot Token
Once the Slack application is installed, you will see the Bot User OAuth Token. Set this as an environment variable in your Chainlit app.
```bash theme={null}
SLACK_BOT_TOKEN=your_bot_token
```
### Signing Secret
Navigate to the Basic Information tab and copy the Signing Secret. Then set it as an environment variable in your Chainlit app.
```bash theme={null}
SLACK_SIGNING_SECRET=your_signing_secret
```
### App-Level Token (Socket Mode only)
If you are using Socket Mode, you also need an App-Level Token. Navigate to **Basic Information** > **App-Level Tokens**, create a token with the `connections:write` scope, and set it as an environment variable.
```bash theme={null}
SLACK_WEBSOCKET_TOKEN=your_app_level_token
```
When `SLACK_WEBSOCKET_TOKEN` is set, Chainlit uses Socket Mode and the HTTP `/slack/events` endpoint is not registered. You do not need `SLACK_SIGNING_SECRET` in Socket Mode.
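The priority rule can be summarized with a small sketch (`slack_mode` is a hypothetical helper for illustration, not part of the Chainlit API):

```python
def slack_mode(env: dict) -> str:
    # Hypothetical helper illustrating the rule described above:
    # SLACK_WEBSOCKET_TOKEN takes priority over the HTTP handler.
    return "socket" if env.get("SLACK_WEBSOCKET_TOKEN") else "http"


print(slack_mode({"SLACK_WEBSOCKET_TOKEN": "xapp-1-..."}))  # socket
print(slack_mode({"SLACK_SIGNING_SECRET": "secret"}))       # http
```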
## Start the Chainlit App
The Slack app cannot communicate with your Chainlit app until the app is running.
For the example, we will use this simple app:
```python my_app.py theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    # Access the original Slack event
    print(cl.user_session.get("slack_event"))
    # Access the Slack user
    print(cl.user_session.get("user"))
    # Access potential attached files
    attached_files = msg.elements

    await cl.Message(content="Hello World").send()
```
Reminder: Make sure the environment variables are set. If using HTTP mode,
your local Chainlit app must be exposed to the internet via ngrok.
Start the Chainlit app:
```bash theme={null}
chainlit run my_app.py -h
```
The `-h` flag prevents the default Chainlit UI from opening, since we are using Slack.
You should now be able to interact with your Chainlit app through Slack.
## Chat History
Chat history is directly available through the `fetch_slack_message_history` method.
It will fetch the last messages from the current thread or DM channel.
```python theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    fetch_slack_message_history = cl.user_session.get("fetch_slack_message_history")
    if fetch_slack_message_history:
        print(await fetch_slack_message_history(limit=10))
    # Your code here
```
# Teams
Source: https://docs.chainlit.io/deploy/teams
To make your Chainlit app available on Teams, you will need to create a Teams bot and set up the necessary environment variables.
## How it Works
The Teams bot will be available in direct messages.
## Supported Features
| Message | Streaming | Elements | Audio | Ask User | Chat History | Chat Profiles | Feedback |
| ------- | --------- | -------- | ----- | -------- | ------------ | ------------- | -------- |
| ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
## Install the Botbuilder Library
The Botbuilder library is not included in the Chainlit dependencies. You will have to install it manually.
```bash theme={null}
pip install botbuilder-core
```
## Create a Teams App
To start, navigate to the [App Management](https://dev.teams.microsoft.com/apps) page. Here, create a new app.
## Fill the App Basic Information
Navigate to Configure > Basic Information and fill in the basic information about your app.
You won't be able to publish your app until you fill in all the required fields.
## Create the Bot
Navigate to Configure > App features and add the Bot feature.
Create a new bot and give it the following permissions and save.
## Go to the Bot Framework Portal
Navigate to the [Bot Framework Portal](https://dev.botframework.com/bots/), click on the Bot you just created and go to the Settings page.
## Get the App ID
In the Bot Framework Portal, you will find the app ID. Copy it and set it as an environment variable in your Chainlit app.
```bash theme={null}
TEAMS_APP_ID=your_app_id
```
## Working Locally
If you are working locally, you will have to expose your local Chainlit app to the internet to receive incoming messages from Teams. You can use [ngrok](https://ngrok.com/) for this.
```bash theme={null}
ngrok http 8000
```
This will give you a public URL that you can use to set up the app manifest. Do not forget to replace it once you deploy Chainlit to a public host.
## Set the Message Endpoint
Under Configuration, set the messaging endpoint to your Chainlit app HTTPS URL and add the `/teams/events` suffix.
## Get the App Secret
On the same page, you will find a blue "Manage Microsoft App ID and password" button. Click on it.
Navigate to Manage > Certificates & secrets and create a new client secret. Copy it and set it as an environment variable in your Chainlit app.
```bash theme={null}
TEAMS_APP_PASSWORD=your_app_secret
```
## Support Multi Tenant Account Types
Navigate to Manage > Authentication and toggle "Accounts in any organizational directory (Any Microsoft Entra ID tenant - Multitenant)" then save.
## Start the Chainlit App
The Teams bot cannot communicate with your Chainlit app until the app is running.
For the example, we will use this simple app:
```python my_app.py theme={null}
import chainlit as cl


@cl.on_message
async def on_message(msg: cl.Message):
    # Access the Teams user
    print(cl.user_session.get("user"))
    # Access potential attached files
    attached_files = msg.elements

    await cl.Message(content="Hello World").send()
```
Reminder: Make sure the environment variables are set and that your local
Chainlit app is exposed to the internet via ngrok.
Start the Chainlit app:
```bash theme={null}
chainlit run my_app.py -h
```
The `-h` flag prevents the default Chainlit UI from opening, since we are using Teams.
## Publish the Bot
Back to the [App Management](https://dev.teams.microsoft.com/apps) page, navigate to "Publish to org" and click on "Publish".
## Authorize the Bot
The Bot will have to be authorized by the Teams admin before it can be used. To do so navigate to the [Teams admin center](https://admin.teams.microsoft.com/policies/manage-apps) and find the app.
Then authorize it.
You should now be able to interact with your Chainlit app through Teams.
# Web App
Source: https://docs.chainlit.io/deploy/webapp
The native Chainlit UI is available on port 8000. It should open in your default browser when you run `chainlit run`.
## Supported Features
| Message | Streaming | Elements | Audio | Ask User | Chat History | Chat Profiles | Feedback |
| ------- | --------- | -------- | ----- | -------- | ------------ | ------------- | -------- |
| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Window Messaging
When running the Web App inside an iframe, the server and parent window can communicate using window messages. This is useful for sending context information to the Chainlit server and updating your parent window based on the server's response.
Add a `@cl.on_window_message` decorated function to your Chainlit server to receive messages sent from the parent window.
```py theme={null}
import chainlit as cl


@cl.on_window_message
async def window_message(message: str):
    if message.startswith("Client: "):
        await cl.Message(content=f"Window message received: {message}").send()
```
Then, in your app/website, you can emit a window message like this:
```js theme={null}
const iframe = document.getElementById('the-iframe');
iframe.contentWindow.postMessage('Client: Hello from parent window', '*');
```
To send a message from the server to the parent window, use `cl.send_window_message`:
```py theme={null}
import chainlit as cl


@cl.on_message
async def message():
    await cl.send_window_message("Server: Hello from Chainlit")
```
The parent window can listen for messages like this:
```js theme={null}
window.addEventListener('message', (event) => {
  if (event.data.startsWith("Server: ")) {
    console.log('Parent window received:', event.data);
  }
});
```
### Example
Check out this example from the cookbook that uses the window messaging feature: [https://github.com/Chainlit/cookbook/tree/main/window-message](https://github.com/Chainlit/cookbook/tree/main/window-message)
# Community
Source: https://docs.chainlit.io/examples/community
## Videos
* [Build Python LLM apps in minutes Using Chainlit ⚡️](https://www.youtube.com/watch?v=tv7rn5AsxFY) from [Krish Naik](https://twitter.com/Krishnaik06)
* [Build an Arxiv QA Chat Application in Minutes!](https://www.youtube.com/watch?v=9SBUStfCtmk) from [Chris Alexiuk](https://twitter.com/c_s_ale)
* [Chainlit: Build LLM Apps in MINUTES!](https://www.youtube.com/watch?v=rcXPq3UcxIY) from [WorldOfAI](https://www.youtube.com/@intheworldofai)
* [Now Build & Share LLM Apps Super Fast with Chainlit](https://www.youtube.com/watch?v=_S3usFpVJOM) from [Sunny Bhaveen Chandra](https://www.youtube.com/c/c17hawke)
* [Chainlit CrashCourse - Build LLM ChatBot with Chainlit and Python & GPT](https://www.youtube.com/watch?v=pqriC9OT2aY) from [JCharisTech](https://www.youtube.com/@JCharisTech)
* [Chat with ... anything](https://twitter.com/waseemhnyc/status/1665923724426502148) by [Waseem H](https://twitter.com/waseemhnyc)
* [Unleash the Power of Falcon with LangChain: Step-by-Step Guide to Run Chat App using Chainlit](https://www.youtube.com/watch?v=HG0_0lqrWs4\&ab_channel=MenloParkLab) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
* [Chainlit tutorial series](https://www.youtube.com/playlist?list=PL2fGiugrNoogRNUHUWCDAnooWKmfVDnFS) (in Chinese) by [01coder](https://www.youtube.com/@01coder30)
## Articles
* [AI Agents tutorial: How to create information retrieval Chatbot](https://lablab.ai/t/agents-retrieval-chatbot) from [Jakub Misiło](https://www.linkedin.com/in/jmisilo/)
* [Create an Azure OpenAI, LangChain, ChromaDB, and Chainlit Chat App in Container Apps using Terraform](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/create-an-azure-openai-langchain-chromadb-and-chainlit-chat-app/ba-p/3885602) from [Paolo Salvatori](https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/988334#profile)
* [Create A Chatbot with Internet Connectivity Powered by Langchain and Chainlit](https://levelup.gitconnected.com/create-a-chatbot-with-internet-connectivity-powered-by-langchain-and-chainlit-cba86f57ab2e) from [Yeyu Hang](https://medium.com/@wenbohuang0307)
* [For Chatbot Development, Streamlit Is Good, But Chainlit Is Better](https://levelup.gitconnected.com/for-chatbot-development-streamlit-is-good-but-chainlit-is-better-4112f9473a69) from [Yeyu Hang](https://medium.com/@wenbohuang0307)
* [Build and Deploy a Chat App Powered by LangChain and Chainlit using Docker](https://levelup.gitconnected.com/build-deploy-a-chat-app-powered-by-langchain-chainlit-using-docker-4f687da08625) from [MA Raza, Ph.D.](https://medium.com/gitconnected/build-deploy-a-chat-app-powered-by-langchain-chainlit-using-docker-4f687da08625)
Note that some of those tutorials might use the old sync version of the package. See the [Migration Guide](/examples/openai-sql) to update those!
# Cookbook
Source: https://docs.chainlit.io/examples/cookbook
The Cookbook repository serves as a valuable resource and starting point for developers looking to explore the capabilities of Chainlit in creating LLM apps.
It provides a diverse collection of **example projects**, each residing in its own folder, showcasing the integration of various tools such as **OpenAI, Anthropic, LangChain, LlamaIndex, ChromaDB, Pinecone and more**.
Whether you are seeking basic tutorials or in-depth use cases, the Cookbook repository offers inspiration and practical insights!
# Text to SQL
Source: https://docs.chainlit.io/examples/openai-sql
Let's build a simple app that helps users create SQL queries with natural language.
## Prerequisites
This example has extra dependencies. You can install them with:
```bash theme={null}
pip install chainlit openai
```
## Imports
```python app.py theme={null}
from openai import AsyncOpenAI
import chainlit as cl
cl.instrument_openai()
client = AsyncOpenAI(api_key="YOUR_OPENAI_API_KEY")
```
## Define a prompt template and LLM settings
````python app.py theme={null}
template = """SQL tables (and columns):
* Customers(customer_id, signup_date)
* Streaming(customer_id, video_id, watch_date, watch_minutes)
A well-written SQL query that {input}:
```"""
settings = {
"model": "gpt-3.5-turbo",
"temperature": 0,
"max_tokens": 500,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"stop": ["```"],
}
````
## Add the Assistant Logic
Here, we decorate the `main` function with the [@on\_message](/api-reference/lifecycle-hooks/on-message) decorator to tell Chainlit to run the `main` function each time a user sends a message.
Thanks to `cl.instrument_openai()`, the LLM call behind our text-to-SQL logic is automatically displayed as a [Step](/concepts/step).
```python app.py theme={null}
@cl.set_starters
async def starters():
return [
cl.Starter(
label=">50 minutes watched",
message="Compute the number of customers who watched more than 50 minutes of video this month."
)
]
@cl.on_message
async def main(message: cl.Message):
stream = await client.chat.completions.create(
messages=[
{
"role": "user",
"content": template.format(input=message.content),
}
], stream=True, **settings
)
msg = await cl.Message(content="", language="sql").send()
async for part in stream:
if token := part.choices[0].delta.content or "":
await msg.stream_token(token)
await msg.update()
```
## Try it out
```bash theme={null}
chainlit run app.py -w
```
You can ask questions like `Compute the number of customers who watched more than 50 minutes of video this month`.
# Document QA
Source: https://docs.chainlit.io/examples/qa
In this example, we're going to build a chatbot QA app. We'll learn how to:
* Upload a document
* Create vector embeddings from a file
* Create a chatbot app with the ability to display sources used to generate an answer
This example is inspired by the [LangChain doc](https://python.langchain.com/en/latest/use_cases/question_answering.html).
## Prerequisites
This example has extra dependencies. You can install them with:
```bash theme={null}
pip install langchain langchain-community chromadb tiktoken openai langchain-openai
```
Then, you need to create an OpenAI API key [here](https://platform.openai.com/account/api-keys).
The state of the union file is available
[here](https://github.com/Chainlit/cookbook/blob/main/llama-index/data/state_of_the_union.txt)
## Conversational Document QA with LangChain
```python qa.py theme={null}
import os
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.chains import (
ConversationalRetrievalChain,
)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.memory import ConversationBufferMemory
import chainlit as cl
os.environ["OPENAI_API_KEY"] = "OPENAI_API_KEY"  # replace with your actual key
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
@cl.on_chat_start
async def on_chat_start():
files = None
# Wait for the user to upload a file
while files is None:
files = await cl.AskFileMessage(
content="Please upload a text file to begin!",
accept=["text/plain"],
max_size_mb=20,
timeout=180,
).send()
file = files[0]
msg = cl.Message(content=f"Processing `{file.name}`...")
await msg.send()
with open(file.path, "r", encoding="utf-8") as f:
text = f.read()
# Split the text into chunks
texts = text_splitter.split_text(text)
# Create a metadata for each chunk
metadatas = [{"source": f"{i}-pl"} for i in range(len(texts))]
# Create a Chroma vector store
embeddings = OpenAIEmbeddings()
docsearch = await cl.make_async(Chroma.from_texts)(
texts, embeddings, metadatas=metadatas
)
message_history = ChatMessageHistory()
memory = ConversationBufferMemory(
memory_key="chat_history",
output_key="answer",
chat_memory=message_history,
return_messages=True,
)
# Create a chain that uses the Chroma vector store
chain = ConversationalRetrievalChain.from_llm(
ChatOpenAI(model_name="gpt-4o-mini", temperature=0, streaming=True),
chain_type="stuff",
retriever=docsearch.as_retriever(),
memory=memory,
return_source_documents=True,
)
# Let the user know that the system is ready
msg.content = f"Processing `{file.name}` done. You can now ask questions!"
await msg.update()
cl.user_session.set("chain", chain)
@cl.on_message
async def main(message: cl.Message):
chain = cl.user_session.get("chain") # type: ConversationalRetrievalChain
cb = cl.AsyncLangchainCallbackHandler()
res = await chain.acall(message.content, callbacks=[cb])
answer = res["answer"]
source_documents = res["source_documents"] # type: List[Document]
text_elements = [] # type: List[cl.Text]
if source_documents:
for source_idx, source_doc in enumerate(source_documents):
source_name = f"source_{source_idx}"
# Create the text element referenced in the message
text_elements.append(
cl.Text(
content=source_doc.page_content, name=source_name, display="side"
)
)
source_names = [text_el.name for text_el in text_elements]
if source_names:
answer += f"\nSources: {', '.join(source_names)}"
else:
answer += "\nNo sources found"
await cl.Message(content=answer, elements=text_elements).send()
```
## Try it out
```bash theme={null}
chainlit run qa.py
```
You can then upload any `.txt` file to the UI and ask questions about it.
If you are using `state_of_the_union.txt` you can ask questions like `What did the president say about Ketanji Brown Jackson?`.
# Security - PII
Source: https://docs.chainlit.io/examples/security
When building chat applications, it's crucial to ensure the secure handling of sensitive data, especially Personal Identifiable Information (PII). PII can be directly or indirectly linked to an individual, making it essential to protect user privacy by preventing the transmission of such data to language models.
### Example of PII
Consider the text below, where PII has been highlighted:
> Hello, my name is **John** and I live in **New York**.
> My credit card number is **3782-8224-6310-005** and my phone number is **(212) 688-5500**.
And here is the anonymized version:
> Hello, my name is \<PERSON> and I live in \<LOCATION>. My credit card number is \<CREDIT_CARD> and my phone number is \<PHONE_NUMBER>.
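As a toy illustration of what anonymization does, here is a stdlib-only sketch using naive regex patterns (for demonstration only; the Presidio-based example below is what you should use in practice, since it recognizes far more entity types):

```python
import re

text = ("Hello, my name is John and I live in New York. "
        "My credit card number is 3782-8224-6310-005 "
        "and my phone number is (212) 688-5500.")

# Naive demo patterns; a real engine like Presidio detects many more PII types
masked = re.sub(r"\b\d{4}-\d{4}-\d{4}-\d{3,4}\b", "<CREDIT_CARD>", text)
masked = re.sub(r"\(\d{3}\) \d{3}-\d{4}", "<PHONE_NUMBER>", masked)
print(masked)
```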
## Analyze and anonymize data
Integrate [Microsoft Presidio](https://microsoft.github.io/presidio/) for robust data sanitization in your Chainlit application.
```python Code Example theme={null}
import chainlit as cl
@cl.on_message
async def main(message: cl.Message):
# Notice that the message is passed as is
response = await cl.Message(
content=f"Received: {message.content}",
).send()
```
Before proceeding, ensure that the Python packages required for PII analysis and anonymization are installed. Run the following commands in your terminal to install them:
```shell theme={null}
pip install presidio-analyzer presidio-anonymizer spacy
python -m spacy download en_core_web_lg
```
Create an async context manager that utilizes the Presidio Analyzer to inspect the incoming text for any PII. This context manager can be included in your main function to scrutinize messages before they are processed.
When PII is detected, you should present the user with the option to either continue or cancel the operation. Use Chainlit's messaging system to accomplish this.
```python Code Example theme={null}
from presidio_analyzer import AnalyzerEngine
from contextlib import asynccontextmanager
analyzer = AnalyzerEngine()
@asynccontextmanager
async def check_text(text: str):
pii_results = analyzer.analyze(text=text, language="en")
if pii_results:
response = await cl.AskActionMessage(
content="PII detected",
actions=[
cl.Action(name="continue", payload={"value": "continue"}, label="✅ Continue"),
                cl.Action(name="cancel", payload={"value": "cancel"}, label="❌ Cancel"),
],
).send()
if response is None or response.get("payload").get("value") == "cancel":
raise InterruptedError
yield
# ...
@cl.on_message
async def main(message: cl.Message):
async with check_text(message.content):
        # This block is only executed when the user presses "Continue"
response = await cl.Message(
content=f"Received: {message.content}",
).send()
```
If your application needs to anonymize PII, Presidio can also do that. Modify the `check_text` context manager to yield anonymized text when PII is detected.
```python Code Example theme={null}
from presidio_anonymizer import AnonymizerEngine
anonymizer = AnonymizerEngine()
@asynccontextmanager
async def check_text(text: str):
pii_results = analyzer.analyze(text=text, language="en")
if pii_results:
response = await cl.AskActionMessage(
content="PII detected",
actions=[
cl.Action(name="continue", payload={"value": "continue"}, label="✅ Continue"),
                cl.Action(name="cancel", payload={"value": "cancel"}, label="❌ Cancel"),
],
).send()
if response is None or response.get("payload").get("value") == "cancel":
raise InterruptedError
yield anonymizer.anonymize(
text=text,
analyzer_results=pii_results,
).text
else:
yield text
# ...
@cl.on_message
async def main(message: cl.Message):
async with check_text(message.content) as anonymized_message:
response = await llm_chain.arun(
            anonymized_message,
callbacks=[cl.AsyncLangchainCallbackHandler()]
)
```
# Installation
Source: https://docs.chainlit.io/get-started/installation
Chainlit requires `python>=3.9`.
You can install Chainlit via pip as follows:
```bash theme={null}
pip install chainlit
```
This will make the `chainlit` command available on your system.
Make sure everything runs smoothly:
```bash theme={null}
chainlit hello
```
This should spawn the Chainlit UI and ask for your name.
## Next steps
* Learn how to use Chainlit with any Python code.
* Integrate Chainlit with other frameworks.
# Overview
Source: https://docs.chainlit.io/get-started/overview
Chainlit is an open-source Python package to build production-ready Conversational AI.
## Key features
1. [Build fast:](/examples/openai-sql) Get started in a couple lines of Python
2. [Authentication:](/authentication/overview) Integrate with corporate identity providers and existing authentication infrastructure
3. [Data persistence:](/data-persistence/overview) Collect, monitor and analyze data from your users
4. [Visualize multi-steps reasoning:](/concepts/step) Understand the intermediary steps that produced an output at a glance
5. [Multi Platform:](/deploy/overview) Write your assistant logic once, use everywhere
## Integrations
Chainlit is compatible with all Python programs and libraries. That being said, it comes with a set of integrations with popular libraries and frameworks.
* Learn how to use any LangChain agent with Chainlit.
* Learn how to explore your OpenAI calls in Chainlit.
* Learn how to integrate your OpenAI Assistants with Chainlit.
* Learn how to use any Mistral AI calls in Chainlit.
* Learn how to integrate your Semantic Kernel code with Chainlit.
* Learn how to integrate your Llama Index code with Chainlit.
* Learn how to integrate your Autogen agents with Chainlit.
# In Pure Python
Source: https://docs.chainlit.io/get-started/pure-python
In this tutorial, we'll walk through the steps to create a minimal LLM app.
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* Basic understanding of Python programming
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 2: Write the Application Logic
In `app.py`, import the Chainlit package and define a function that will handle incoming messages from the chatbot UI. Decorate the function with the `@cl.on_message` decorator to ensure it gets called whenever a user inputs a message.
Here's the basic structure of the script:
```python app.py theme={null}
import chainlit as cl
@cl.on_message
async def main(message: cl.Message):
# Your custom logic goes here...
# Send a response back to the user
await cl.Message(
content=f"Received: {message.content}",
).send()
```
The `main` function will be called every time a user inputs a message in the chatbot UI. You can put your custom logic within the function to process the user's input, such as analyzing the text, calling an API, or computing a result.
The [Message](/api-reference/message) class is responsible for sending a reply back to the user. In this example, we simply send a message containing the user's input.
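The custom-logic placeholder can be any plain Python. As a minimal sketch, a hypothetical helper (the name `summarize_input` is ours, not part of Chainlit) that the handler could call before replying:

```python
def summarize_input(text: str) -> str:
    # Stand-in for real processing: text analysis, an API call, a computation...
    words = text.split()
    return f"{len(words)} word(s), {len(text)} character(s)"

# Inside main() you would send cl.Message(content=summarize_input(message.content))
print(summarize_input("Hello Chainlit"))
```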
## Step 3: Run the Application
To start your Chainlit app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
## Next Steps
* Learn about the core concepts of Chainlit.
* Explore the Chainlit cookbook for more examples.
# Migrate to Chainlit v2.0.0
Source: https://docs.chainlit.io/guides/migration/2.0.0
Join the discord for live updates: [https://discord.gg/AzyvDHWARx](https://discord.gg/AzyvDHWARx)
## Updating Chainlit
Begin the migration by updating Chainlit to the latest version:
```bash theme={null}
pip install --upgrade chainlit
```
## What changes?
The Chainlit UI (including the copilot) has been completely re-written with Shadcn/Tailwind. This brings several advantages:
1. The codebase is simpler and more contribution friendly.
2. It enabled the new custom element feature.
3. The theme customisation is more powerful.
Full changelog available [here](https://github.com/Chainlit/chainlit/blob/main/CHANGELOG.md#200---2025-01-06).
## How to migrate?
### 1. Regenerate the config file
The following fields have been removed from the `config.toml` file:
1. **follow\_symlink**: Chainlit no longer uses `StaticFiles` to serve files.
2. **font\_family**, **custom\_font**, **\[UI.theme]**: Theme customisation now uses a [separate file](/customisation/theme).
3. **audio**: Chainlit audio streaming has been reworked to match the [realtime APIs](/advanced-features/multi-modal).
You can either manually remove those fields or delete the `.chainlit/config.toml` file and restart your application.
### 2. Cookie Auth & Cross Origins
All of the authentication mechanisms now use cookie auth instead of directly using a JWT. This change makes Chainlit more secure.
This does not require any change in your app code. However, it means Chainlit is now stricter about cross-origin requests (for instance when embedding a copilot on a website).
If you need to consume a Chainlit app on a different origin, make sure you allow it in the `config.toml` under `allow_origins`.
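For example, assuming your Copilot is embedded on `https://myapp.example.com` (a placeholder domain), the entry in `.chainlit/config.toml` would look something like this, under the `[project]` section:

```toml
[project]
# Origins allowed to connect to this Chainlit app
allow_origins = ["https://myapp.example.com"]
```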
### 3. Actions
1. The **value** field has been replaced with `payload`, which accepts a Python dict. This makes actions more useful.
2. The **description** field has been renamed `tooltip`.
3. The field `icon` has been added. You can use any lucide icon name.
4. The **collapsed** field has been removed.
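As an illustration, code that read the old string `value` now reads from the payload dict. The class below is a hypothetical stand-in that only mirrors the new field shape (it is not the real `cl.Action`):

```python
from dataclasses import dataclass, field

@dataclass
class Action:  # stand-in mirroring cl.Action's v2 fields for illustration
    name: str
    payload: dict = field(default_factory=dict)  # replaces the old string `value`
    tooltip: str = ""                            # renamed from `description`
    icon: str = ""                               # new: any lucide icon name

action = Action(name="vote", payload={"direction": "up"}, tooltip="Vote up", icon="thumbs-up")
# v1 code read action.value; v2 code reads the payload dict:
print(action.payload["direction"])
```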
### 4. Copilot Widget Config
1. The **fontFamily** field has been removed. Check the [new custom theme documentation](/customisation/theme).
2. The `button.style` field has been replaced with `button.className`. You can use any Tailwind class to style the widget button.
# Async / Sync
Source: https://docs.chainlit.io/guides/sync-async
Asynchronous programming is a powerful way to handle multiple tasks concurrently without blocking the execution of your program. Chainlit is async by default to allow agents to execute tasks in parallel and allow multiple users on a single app.
Python introduced the `asyncio` library to make it easier to write asynchronous code using the `async/await` syntax. This onboarding guide will help you understand the basics of asynchronous programming in Python and how to use it in your Chainlit project.
### Understanding async/await
The `async` and `await` keywords are used to define and work with asynchronous code in Python. An `async` function is a coroutine, which is a special type of function that can pause its execution and resume later, allowing other tasks to run in the meantime.
To define an async function, use the `async def` syntax:
```python theme={null}
async def my_async_function():
    # Your async code goes here
    ...
```
To call an async function, you need to use the `await` keyword:
```python theme={null}
async def another_async_function():
result = await my_async_function()
```
### Working with Chainlit
Chainlit uses asynchronous programming to handle events and tasks efficiently. When creating a Chainlit agent, you'll often need to define async functions to handle events and perform actions.
For example, to create an async function that responds to messages in Chainlit:
```python theme={null}
import chainlit as cl
@cl.on_message
async def main(message: cl.Message):
# Your custom logic goes here
# Send a response back to the user
await cl.Message(
content=f"Received: {message.content}",
).send()
```
### Long running synchronous tasks
In some cases, you need to run long-running synchronous functions in your Chainlit project. To prevent blocking the event loop, you can use the `make_async` function provided by the Chainlit library to transform a synchronous function into an asynchronous one:
```python theme={null}
from chainlit import make_async
def my_sync_function():
# Your synchronous code goes here
import time
time.sleep(10)
return 0
async_function = make_async(my_sync_function)
async def main():
result = await async_function()
```
By using this approach, you can maintain the non-blocking nature of your project while still incorporating synchronous functions when necessary.
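Under the hood this is the same idea as the stdlib pattern of running blocking work in a worker thread. A rough equivalent using only `asyncio` (sleep shortened for illustration):

```python
import asyncio
import time

def my_sync_function():
    time.sleep(0.1)  # stands in for long-running blocking work
    return 0

async def main():
    # Comparable to `await make_async(my_sync_function)()`:
    # the blocking call runs in a thread, keeping the event loop free
    return await asyncio.to_thread(my_sync_function)

result = asyncio.run(main())
print(result)
```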
### Call an async function from a sync function
If you need to run an asynchronous function inside a sync function, you can use the `run_sync` function provided by the Chainlit library:
```python theme={null}
from chainlit import run_sync
async def my_async_function():
    # Your asynchronous code goes here
    ...
def main():
result = run_sync(my_async_function())
main()
```
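For comparison, outside of Chainlit (where no event loop is already running) the stdlib analogue is `asyncio.run`; `run_sync` exists precisely because inside a Chainlit app a loop is already running, and `asyncio.run` cannot be called from within it:

```python
import asyncio

async def my_async_function():
    await asyncio.sleep(0)  # stands in for real async work
    return "done"

def main():
    # Stdlib analogue of run_sync, valid only when no event loop is running
    return asyncio.run(my_async_function())

print(main())
```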
By following this guide, you should now have a basic understanding of asynchronous programming in Python and how to use it in your Chainlit project.
As you continue to work with Chainlit, you'll find that async/await and the asyncio library provide a powerful and efficient way to handle multiple agents/tasks concurrently.
# Embedchain
Source: https://docs.chainlit.io/integrations/embedchain
In this tutorial, we'll walk through the steps to create a Chainlit application integrated with [Embedchain](https://github.com/embedchain/embedchain).
## Step 1: Create a Chainlit Application
In `app.py`, import the necessary packages and define one function to handle a new chat session and another function to handle messages incoming from the UI.
### With Embedchain
```python app.py theme={null}
import chainlit as cl
from embedchain import Pipeline as App
import os
os.environ["OPENAI_API_KEY"] = "sk-xxx"
@cl.on_chat_start
async def on_chat_start():
app = App.from_config(config={
'app': {
'config': {
'name': 'chainlit-app'
}
},
'llm': {
'config': {
'stream': True,
}
}
})
# import your data here
app.add("https://www.forbes.com/profile/elon-musk/")
app.collect_metrics = False
cl.user_session.set("app", app)
@cl.on_message
async def on_message(message: cl.Message):
app = cl.user_session.get("app")
msg = cl.Message(content="")
for chunk in await cl.make_async(app.chat)(message.content):
await msg.stream_token(chunk)
await msg.send()
```
## Step 2: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
## Next Steps
Congratulations! You've just created your first LLM app with Chainlit and Embedchain.
Happy coding! 🎉
# FastAPI
Source: https://docs.chainlit.io/integrations/fastapi
Chainlit can be mounted as a FastAPI sub application.
```py my_cl_app theme={null}
import chainlit as cl
@cl.on_chat_start
async def main():
await cl.Message(content="Hello World").send()
```
```py main theme={null}
from fastapi import FastAPI
from chainlit.utils import mount_chainlit
app = FastAPI()
@app.get("/app")
def read_main():
return {"message": "Hello World from main app"}
mount_chainlit(app=app, target="my_cl_app.py", path="/chainlit")
```
In the example above, we have a FastAPI application with a single endpoint `/app`. We mount the Chainlit application `my_cl_app.py` to the `/chainlit` path.
Start the FastAPI server:
```bash theme={null}
uvicorn main:app --host 0.0.0.0 --port 80
```
When using FastAPI integration, header authentication is the preferred method
for authenticating users. This approach allows Chainlit to delegate the
authentication process to the parent FastAPI application, providing a more
seamless and secure integration.
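With header authentication, the parent FastAPI app authenticates the user and forwards its identity via a request header, which your Chainlit header auth callback then validates. A minimal stdlib sketch of that validation logic (the header name and token are illustrative assumptions; in a real `@cl.header_auth_callback` you would return a `cl.User` or `None`):

```python
def user_from_headers(headers: dict):
    # Hypothetical check: the parent app forwards an Authorization header
    token = headers.get("Authorization", "")
    if token == "Bearer secret-token":  # placeholder; verify a real token here
        return {"identifier": "admin"}
    return None

print(user_from_headers({"Authorization": "Bearer secret-token"}))
```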
# LangChain/LangGraph
Source: https://docs.chainlit.io/integrations/langchain
In this tutorial, we'll walk through the steps to create a Chainlit application integrated with [LangChain](https://github.com/hwchase17/langchain).
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* The LangChain package installed
* An OpenAI API key
* Basic understanding of Python programming
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle a new chat session and another function to handle messages incoming from the UI.
### With LangChain
Let's go through a small example.
If your agent/chain does not have an async implementation, fall back to the
sync implementation.
```python Async LCEL theme={null}
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import Runnable
from langchain.schema.runnable.config import RunnableConfig
from typing import cast
import chainlit as cl
@cl.on_chat_start
async def on_chat_start():
model = ChatOpenAI(streaming=True)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You're a very knowledgeable historian who provides accurate and eloquent answers to historical questions.",
),
("human", "{question}"),
]
)
runnable = prompt | model | StrOutputParser()
cl.user_session.set("runnable", runnable)
@cl.on_message
async def on_message(message: cl.Message):
runnable = cast(Runnable, cl.user_session.get("runnable")) # type: Runnable
msg = cl.Message(content="")
async for chunk in runnable.astream(
{"question": message.content},
config=RunnableConfig(callbacks=[cl.LangchainCallbackHandler()]),
):
await msg.stream_token(chunk)
await msg.send()
```
```python Sync LCEL theme={null}
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import Runnable
from langchain.schema.runnable.config import RunnableConfig
import chainlit as cl
@cl.on_chat_start
async def on_chat_start():
model = ChatOpenAI(streaming=True)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You're a very knowledgeable historian who provides accurate and eloquent answers to historical questions.",
),
("human", "{question}"),
]
)
runnable = prompt | model | StrOutputParser()
cl.user_session.set("runnable", runnable)
@cl.on_message
async def on_message(message: cl.Message):
runnable = cl.user_session.get("runnable") # type: Runnable
msg = cl.Message(content="")
for chunk in await cl.make_async(runnable.stream)(
{"question": message.content},
config=RunnableConfig(callbacks=[cl.LangchainCallbackHandler()]),
):
await msg.stream_token(chunk)
await msg.send()
```
This code sets up an instance of `Runnable` with a custom `ChatPromptTemplate` for each chat session. The `Runnable` is invoked every time a user sends a message to generate the response.
The callback handler is responsible for listening to the chain's intermediate steps and sending them to the UI.
### With LangGraph
```python theme={null}
from typing import Literal
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolNode
from langchain.schema.runnable.config import RunnableConfig
from langchain_core.messages import HumanMessage
import chainlit as cl
@tool
def get_weather(city: Literal["nyc", "sf"]):
"""Use this to get weather information."""
if city == "nyc":
return "It might be cloudy in nyc"
elif city == "sf":
return "It's always sunny in sf"
else:
raise AssertionError("Unknown city")
tools = [get_weather]
model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
final_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
model = model.bind_tools(tools)
# NOTE: this is where we're adding a tag that we can use later to filter the model stream events to only the model called in the final node.
# This is not necessary if you call a single LLM but might be important in case you call multiple models within the node and want to filter events
# from only one of them.
final_model = final_model.with_config(tags=["final_node"])
tool_node = ToolNode(tools=tools)
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import END, StateGraph, START
from langgraph.graph.message import MessagesState
from langchain_core.messages import BaseMessage, SystemMessage, HumanMessage
def should_continue(state: MessagesState) -> Literal["tools", "final"]:
messages = state["messages"]
last_message = messages[-1]
# If the LLM makes a tool call, then we route to the "tools" node
if last_message.tool_calls:
return "tools"
# Otherwise, we stop (reply to the user)
return "final"
def call_model(state: MessagesState):
messages = state["messages"]
response = model.invoke(messages)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
def call_final_model(state: MessagesState):
messages = state["messages"]
last_ai_message = messages[-1]
response = final_model.invoke(
[
SystemMessage("Rewrite this in the voice of Al Roker"),
HumanMessage(last_ai_message.content),
]
)
# overwrite the last AI message from the agent
response.id = last_ai_message.id
return {"messages": [response]}
builder = StateGraph(MessagesState)
builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)
# add a separate final node
builder.add_node("final", call_final_model)
builder.add_edge(START, "agent")
builder.add_conditional_edges(
"agent",
should_continue,
)
builder.add_edge("tools", "agent")
builder.add_edge("final", END)
graph = builder.compile()
@cl.on_message
async def on_message(msg: cl.Message):
config = {"configurable": {"thread_id": cl.context.session.id}}
cb = cl.LangchainCallbackHandler()
final_answer = cl.Message(content="")
    for chunk, metadata in graph.stream({"messages": [HumanMessage(content=msg.content)]}, stream_mode="messages", config=RunnableConfig(callbacks=[cb], **config)):
        if (
            chunk.content
            and not isinstance(chunk, HumanMessage)
            and metadata["langgraph_node"] == "final"
        ):
            await final_answer.stream_token(chunk.content)
await final_answer.send()
```
## Step 3: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
When using LangChain, prompts and completions are not cached by default. To
enable the cache, set `cache = true` in your Chainlit config file.
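Assuming the default config layout, the flag lives under the `[project]` section of `.chainlit/config.toml`:

```toml
[project]
# Enable caching of LangChain prompts and completions
cache = true
```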
# LiteLLM
Source: https://docs.chainlit.io/integrations/litellm
In this tutorial, we will guide you through the steps to create a Chainlit application integrated with [LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy).
The benefits of using LiteLLM Proxy with Chainlit are:
* You can [call 100+ LLMs in the OpenAI API format](https://docs.litellm.ai/docs/providers)
* Use Virtual Keys to set budget limits and track usage
* See LLM API calls as steps in the UI and explore them in the prompt playground
You shouldn't configure this integration if you're already using another
integration like LangChain or LlamaIndex. Both integrations would
record the same generation and create duplicate steps in the UI.
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* The OpenAI package installed
* [LiteLLM Proxy Running](https://docs.litellm.ai/docs/proxy/deploy)
* [A LiteLLM Proxy API Key](https://docs.litellm.ai/docs/proxy/virtual_keys)
* Basic understanding of Python programming
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle messages incoming from the UI.
```python theme={null}
from openai import AsyncOpenAI
import chainlit as cl

client = AsyncOpenAI(
    api_key="anything",             # litellm proxy virtual key
    base_url="http://0.0.0.0:4000"  # litellm proxy base_url
)

# Instrument the OpenAI client
cl.instrument_openai()

settings = {
    "model": "gpt-3.5-turbo",  # model you want litellm proxy to route to
    "temperature": 0,
    # ... more settings
}

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        messages=[
            {
                "content": "You are a helpful bot, you always reply in Spanish",
                "role": "system"
            },
            {
                "content": message.content,
                "role": "user"
            }
        ],
        **settings
    )
    await cl.Message(content=response.choices[0].message.content).send()
```
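The shared `settings` dict is simply unpacked into each request, so per-call options live in one place. A plain-Python sketch of that merge (no network call; the helper name is illustrative):

```python
settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
}

def build_request(user_content: str, **settings) -> dict:
    """Assemble the kwargs that chat.completions.create would receive."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful bot, you always reply in Spanish"},
            {"role": "user", "content": user_content},
        ],
        **settings,
    }

request = build_request("Hola", **settings)
print(request["model"])  # gpt-3.5-turbo
```

Changing the proxied model or temperature then only touches `settings`, not the handler.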
## Step 3: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
# Llama Index
Source: https://docs.chainlit.io/integrations/llama-index
In this tutorial, we will guide you through the steps to create a Chainlit application integrated with [Llama Index](https://github.com/jerryjliu/llama_index).
## Prerequisites
Before diving in, ensure that the following prerequisites are met:
* A working installation of Chainlit
* The Llama Index package installed
* An OpenAI API key
* A basic understanding of Python programming
## Step 1: Set Up Your Data Directory
Create a folder named `data` in the root of your app folder. Download the [state of the union](https://github.com/Chainlit/cookbook/blob/main/llama-index/data/state_of_the_union.txt) file (or any files of your own choice) and place it in the `data` folder.
## Step 2: Create the Python Script
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 3: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle a new chat session and another function to handle messages incoming from the UI.
In this tutorial, we are going to use `RetrieverQueryEngine`. Here's the basic structure of the script:
```python app.py theme={null}
import os
import openai
import chainlit as cl

from llama_index.core import (
    Settings,
    StorageContext,
    VectorStoreIndex,
    SimpleDirectoryReader,
    load_index_from_storage,
)
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core.query_engine.retriever_query_engine import RetrieverQueryEngine
from llama_index.core.callbacks import CallbackManager

openai.api_key = os.environ.get("OPENAI_API_KEY")

try:
    # rebuild storage context
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    # load index
    index = load_index_from_storage(storage_context)
except Exception:
    documents = SimpleDirectoryReader("./data").load_data(show_progress=True)
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist()

@cl.on_chat_start
async def start():
    Settings.llm = OpenAI(
        model="gpt-3.5-turbo", temperature=0.1, max_tokens=1024, streaming=True
    )
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
    Settings.context_window = 4096
    Settings.callback_manager = CallbackManager([cl.LlamaIndexCallbackHandler()])

    query_engine = index.as_query_engine(streaming=True, similarity_top_k=2)
    cl.user_session.set("query_engine", query_engine)

    await cl.Message(
        author="Assistant", content="Hello! I'm an AI assistant. How may I help you?"
    ).send()

@cl.on_message
async def main(message: cl.Message):
    query_engine = cl.user_session.get("query_engine")  # type: RetrieverQueryEngine
    msg = cl.Message(content="", author="Assistant")

    res = await cl.make_async(query_engine.query)(message.content)

    for token in res.response_gen:
        await msg.stream_token(token)
    await msg.send()
```
This code sets up an instance of `RetrieverQueryEngine` for each chat session. The `RetrieverQueryEngine` is invoked every time a user sends a message to generate the response.
The callback handlers are responsible for listening to the intermediate steps and sending them to the UI.
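The try/except block above is a load-or-rebuild cache: reuse the persisted index if `./storage` exists, otherwise build it from `./data` and persist it for next time. The same pattern in stdlib form, with a JSON file standing in for the index store (illustrative only, not the LlamaIndex API):

```python
import json
import tempfile
from pathlib import Path

def load_or_build(persist_path: Path, build) -> dict:
    """Return cached data if present, otherwise build and persist it."""
    if persist_path.exists():
        return json.loads(persist_path.read_text())
    data = build()
    persist_path.write_text(json.dumps(data))
    return data

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "index.json"
    first = load_or_build(path, lambda: {"docs": 1})    # builds and persists
    second = load_or_build(path, lambda: {"docs": 99})  # loads the cached copy

print(second)  # {'docs': 1}
```

Delete the persist directory (here `./storage`) whenever the source documents change, so the index is rebuilt.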
## Step 4: Launch the Application
To kick off your LLM app, open a terminal, navigate to the directory containing `app.py`, and run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag enables auto-reloading so that you don't have to restart the server each time you modify your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
# vLLM, LMStudio, HuggingFace
Source: https://docs.chainlit.io/integrations/message-based
We can leverage the OpenAI instrumentation to log calls from inference servers that expose a messages-based, OpenAI-compatible API, such as vLLM, LM Studio, or Hugging Face's TGI.
You shouldn't configure this integration if you're already using another integration like LangChain or LlamaIndex. Both integrations would record the same generation and create duplicate steps in the UI.
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
In `app.py`, import the necessary packages and define one function to handle messages incoming from the UI.
```python theme={null}
from openai import AsyncOpenAI
import chainlit as cl

client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Instrument the OpenAI client
cl.instrument_openai()

settings = {
    "model": "gpt-3.5-turbo",  # use the model name your local server expects
    "temperature": 0,
    # ... more settings
}

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        messages=[
            {
                "content": "You are a helpful bot, you always reply in Spanish",
                "role": "system"
            },
            {
                "content": message.content,
                "role": "user"
            }
        ],
        **settings
    )
    await cl.Message(content=response.choices[0].message.content).send()
```
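Switching between these servers usually means changing only `base_url` (and a placeholder `api_key`). A sketch of typical local defaults — the ports are assumptions based on each server's common setup, so check how you launched yours:

```python
# Common local OpenAI-compatible endpoints (ports are typical defaults, not guarantees)
BACKENDS = {
    "lmstudio": {"base_url": "http://localhost:1234/v1", "api_key": "lm-studio"},
    "vllm":     {"base_url": "http://localhost:8000/v1", "api_key": "not-needed"},
    "tgi":      {"base_url": "http://localhost:8080/v1", "api_key": "not-needed"},
}

def client_kwargs(backend: str) -> dict:
    """Return the kwargs to pass to AsyncOpenAI for a given backend."""
    try:
        return BACKENDS[backend]
    except KeyError:
        raise ValueError(f"Unknown backend: {backend!r}") from None

print(client_kwargs("vllm")["base_url"])  # http://localhost:8000/v1
```

Note that vLLM's common default port (8000) collides with Chainlit's default port; run one of them on a different port.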
If your inference server requires an API key, create a file named `.env` in the same folder as your `app.py` file and set the key there instead of hardcoding it in the source (local servers like LM Studio typically accept any placeholder value).
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
# Mistral AI
Source: https://docs.chainlit.io/integrations/mistralai
You shouldn't configure this integration if you're already using another
integration like LangChain or LlamaIndex. Both integrations would
record the same generation and create duplicate steps in the UI.
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* The Mistral AI python client package installed, `mistralai`
* A [Mistral AI API key](https://console.mistral.ai/api-keys/)
* Basic understanding of Python programming
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle messages incoming from the UI.
```python theme={null}
import os
import chainlit as cl
from mistralai import Mistral

# Initialize the Mistral client
client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.complete_async(
        model="mistral-small-latest",
        max_tokens=100,
        temperature=0.5,
        stream=False,
        # ... more settings
        messages=[
            {
                "role": "system",
                "content": "You are a helpful bot, you always reply in French."
            },
            {
                "role": "user",
                "content": message.content  # Content of the user message
            }
        ]
    )
    await cl.Message(content=response.choices[0].message.content).send()
```
## Step 3: Fill the environment variables
Create a file named `.env` in the same folder as your `app.py` file. Add your Mistral AI API key in the `MISTRAL_API_KEY` variable.
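`os.getenv` returns `None` silently when the variable is missing, which only surfaces later as a confusing authentication error. A small guard helper (a sketch, not part of the Mistral client) fails fast instead:

```python
import os

def require_env(name: str) -> str:
    """Return the environment variable's value or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value

# Usage (hypothetical): client = Mistral(api_key=require_env("MISTRAL_API_KEY"))
```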
## Step 4: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
# OpenAI
Source: https://docs.chainlit.io/integrations/openai
If you are using OpenAI assistants, check out the [OpenAI
Assistant](https://github.com/Chainlit/cookbook/tree/main/openai-data-analyst)
example app.
The benefit of this integration is that you can see the OpenAI API calls as steps in the UI, and you can explore them in the prompt playground.
You need to call `cl.instrument_openai()` after creating your OpenAI client.
You shouldn't configure this integration if you're already using another
integration like LangChain or LlamaIndex. Both integrations would
record the same generation and create duplicate steps in the UI.
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* The OpenAI package installed
* An OpenAI API key
* Basic understanding of Python programming
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
## Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define one function to handle messages incoming from the UI.
```python theme={null}
from openai import AsyncOpenAI
import chainlit as cl

client = AsyncOpenAI()

# Instrument the OpenAI client
cl.instrument_openai()

settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    # ... more settings
}

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        messages=[
            {
                "content": "You are a helpful bot, you always reply in Spanish",
                "role": "system"
            },
            {
                "content": message.content,
                "role": "user"
            }
        ],
        **settings
    )
    await cl.Message(content=response.choices[0].message.content).send()
```
## Step 3: Fill the environment variables
Create a file named `.env` in the same folder as your `app.py` file. Add your OpenAI API key in the `OPENAI_API_KEY` variable.
## Step 4: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000).
# Semantic Kernel
Source: https://docs.chainlit.io/integrations/semantic-kernel
In this tutorial, we'll walk through the steps to create a Chainlit application integrated with [Microsoft's Semantic Kernel](https://github.com/microsoft/semantic-kernel). The integration automatically visualizes Semantic Kernel function calls (like plugins or tools) as Steps in the Chainlit UI.
## Prerequisites
Before getting started, make sure you have the following:
* A working installation of Chainlit
* The `semantic-kernel` package installed
* An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel
* Basic understanding of Python programming and Semantic Kernel concepts (Kernel, Plugins, Functions)
## Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application using Semantic Kernel.
## Step 2: Write the Application Logic
In `app.py`, import the necessary packages, set up your Semantic Kernel `Kernel`, add the `SemanticKernelFilter` for Chainlit integration, and define functions to handle chat sessions and incoming messages.
Here's an example demonstrating how to set up the kernel and use the filter:
```python app.py theme={null}
import chainlit as cl
import semantic_kernel as sk
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import kernel_function
from semantic_kernel.contents import ChatHistory

request_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(
        filters={"excluded_plugins": ["ChatBot"]}
    )
)

# Example Native Plugin (Tool)
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Gets the weather for a city")
    def get_weather(self, city: str) -> str:
        """Retrieves the weather for a given city."""
        if "paris" in city.lower():
            return f"The weather in {city} is 20°C and sunny."
        elif "london" in city.lower():
            return f"The weather in {city} is 15°C and cloudy."
        else:
            return f"Sorry, I don't have the weather for {city}."

@cl.on_chat_start
async def on_chat_start():
    # Setup Semantic Kernel
    kernel = sk.Kernel()

    # Add your AI service (e.g., OpenAI)
    # Make sure OPENAI_API_KEY and OPENAI_ORG_ID are set in your environment
    ai_service = OpenAIChatCompletion(service_id="default", ai_model_id="gpt-4o")
    kernel.add_service(ai_service)

    # Import the WeatherPlugin
    kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")

    # Instantiate and add the Chainlit filter to the kernel
    # This will automatically capture function calls as Steps
    sk_filter = cl.SemanticKernelFilter(kernel=kernel)

    cl.user_session.set("kernel", kernel)
    cl.user_session.set("ai_service", ai_service)
    cl.user_session.set("chat_history", ChatHistory())

@cl.on_message
async def on_message(message: cl.Message):
    kernel = cl.user_session.get("kernel")  # type: sk.Kernel
    ai_service = cl.user_session.get("ai_service")  # type: OpenAIChatCompletion
    chat_history = cl.user_session.get("chat_history")  # type: ChatHistory

    # Add user message to history
    chat_history.add_user_message(message.content)

    # Create a Chainlit message for the response stream
    answer = cl.Message(content="")

    async for msg in ai_service.get_streaming_chat_message_content(
        chat_history=chat_history,
        user_input=message.content,
        settings=request_settings,
        kernel=kernel,
    ):
        if msg.content:
            await answer.stream_token(msg.content)

    # Add the full assistant response to history
    chat_history.add_assistant_message(answer.content)

    # Send the final message
    await answer.send()
```
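Because `@kernel_function` only attaches metadata, the plugin body stays plain Python and can be unit-tested without a kernel. Here is the same branching logic extracted as a free function (the Semantic Kernel imports are not needed for this check):

```python
def get_weather(city: str) -> str:
    """Same logic as WeatherPlugin.get_weather, minus the SK decorator."""
    if "paris" in city.lower():
        return f"The weather in {city} is 20°C and sunny."
    elif "london" in city.lower():
        return f"The weather in {city} is 15°C and cloudy."
    else:
        return f"Sorry, I don't have the weather for {city}."

print(get_weather("Paris"))  # The weather in Paris is 20°C and sunny.
```

Testing tool bodies this way keeps LLM calls out of your unit tests; the kernel only decides *when* to call the tool.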
## Step 3: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
```bash theme={null}
chainlit run app.py -w
```
The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at [http://localhost:8000](http://localhost:8000). Interact with the bot, and if you ask for the weather (and the LLM uses the tool), you should see a "Weather-get\_weather" step appear in the UI.