LiteLLM
In this tutorial, we will guide you through the steps to create a Chainlit application integrated with LiteLLM Proxy.
The benefits of using LiteLLM Proxy with Chainlit are:
- You can call 100+ LLMs in the OpenAI API format
- You can use Virtual Keys to set budget limits and track usage
- LLM API calls appear as steps in the UI, and you can explore them in the prompt playground
You shouldn’t configure this integration if you’re already using another integration like Haystack, LangChain or LlamaIndex. Both integrations would record the same generation and create duplicate steps in the UI.
Prerequisites
Before getting started, make sure you have the following:
- A working installation of Chainlit
- The OpenAI package installed
- LiteLLM Proxy Running
- A LiteLLM Proxy API Key
- Basic understanding of Python programming
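If you still need the Chainlit and OpenAI packages from the prerequisites above, they can be installed with pip (package names shown are the standard PyPI names):

```shell
pip install chainlit openai
```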
Step 1: Create a Python file
Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.
Step 2: Write the Application Logic
In `app.py`, import the necessary packages and define a function to handle incoming messages from the UI.
Step 3: Run the Application
To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:
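The command itself appears to be missing here; based on the `-w` flag described next, it is presumably the standard Chainlit run command:

```shell
chainlit run app.py -w
```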
The `-w` flag tells Chainlit to enable auto-reloading, so you don’t need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000.