You shouldn’t configure this integration if you’re already using another
integration such as LangChain or LlamaIndex. Both integrations would
record the same generation and create duplicate steps in the UI.
Prerequisites
Before getting started, make sure you have the following:
- A working installation of Chainlit
- The Mistral AI Python client package, mistralai (an install command is shown after this list)
- A Mistral AI API key
- A basic understanding of Python programming
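If the packages aren't installed yet, a typical setup looks like this (assuming pip; adjust for your own environment or package manager):

    pip install chainlit mistralai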
Step 1: Create a Python file
Create a new Python file named app.py in your project directory. This file will contain the main logic for your LLM application.
Step 2: Write the Application Logic
In app.py, import the necessary packages and define a function that handles incoming messages from the UI.
import os

import chainlit as cl
from mistralai import Mistral

# Initialize the Mistral client
client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))


@cl.on_message
async def on_message(message: cl.Message):
    # Forward the user's message to Mistral AI and wait for the full reply
    response = await client.chat.complete_async(
        model="mistral-small-latest",
        max_tokens=100,
        temperature=0.5,
        stream=False,
        # ... more settings
        messages=[
            {
                "role": "system",
                "content": "You are a helpful bot, you always reply in French."
            },
            {
                "role": "user",
                "content": message.content  # Content of the user message
            }
        ]
    )
    # Send the model's reply back to the Chainlit UI
    await cl.Message(content=response.choices[0].message.content).send()
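If you'd rather stream tokens to the UI as they arrive instead of waiting for the full completion, a variant could look like the sketch below. It assumes your version of the mistralai client exposes chat.stream_async and that each streamed event carries the incremental text in event.data.choices[0].delta.content; check your installed SDK version before relying on those names. Use it in place of the handler above, since Chainlit registers a single on_message callback.

    @cl.on_message
    async def on_message(message: cl.Message):
        # Empty message that we fill token by token as chunks arrive
        msg = cl.Message(content="")

        stream = await client.chat.stream_async(
            model="mistral-small-latest",
            max_tokens=100,
            temperature=0.5,
            messages=[
                {"role": "system", "content": "You are a helpful bot, you always reply in French."},
                {"role": "user", "content": message.content},
            ],
        )
        async for event in stream:
            token = event.data.choices[0].delta.content
            if token:  # delta.content may be None on some events
                await msg.stream_token(token)

        await msg.send()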
Step 3: Fill the environment variables
Create a file named .env in the same folder as your app.py file and set your Mistral AI API key in the MISTRAL_API_KEY variable, as shown below.
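The file only needs this one variable (the value below is a placeholder for your actual key):

    MISTRAL_API_KEY=your-api-key-here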
Step 4: Run the Application
To start your app, open a terminal and navigate to the directory containing app.py. Then run the following command:
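    chainlit run app.py -w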
The -w flag tells Chainlit to enable auto-reloading, so you don’t need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000.