In this tutorial, we’ll walk through the steps to create a Chainlit application integrated with Microsoft’s Semantic Kernel. The integration automatically visualizes Semantic Kernel function calls (like plugins or tools) as Steps in the Chainlit UI.

Prerequisites

Before getting started, make sure you have the following:

  • A working installation of Chainlit
  • The semantic-kernel package installed (install commands below)
  • An LLM API key (e.g., OpenAI, Azure OpenAI) configured for Semantic Kernel
  • Basic understanding of Python programming and Semantic Kernel concepts (Kernel, Plugins, Functions)
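
If you still need to install the packages, a typical setup looks like this (assuming pip and the OpenAI connector; adapt to your package manager and provider):

pip install chainlit semantic-kernel
export OPENAI_API_KEY="your-api-key"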

Step 1: Create a Python file

Create a new Python file named app.py in your project directory. This file will contain the main logic for your LLM application using Semantic Kernel.

Step 2: Write the Application Logic

In app.py, import the necessary packages, create a Semantic Kernel Kernel instance, attach the SemanticKernelFilter for Chainlit integration, and define the handlers for chat sessions and incoming messages.

Here’s an example demonstrating how to set up the kernel and use the filter:

app.py
import chainlit as cl
import semantic_kernel as sk
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import kernel_function
from semantic_kernel.contents import ChatHistory

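# Let the model decide when to call kernel functions; "excluded_plugins" keeps
# functions from a plugin named "ChatBot" (if you add one) out of the tool list.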
request_settings = OpenAIChatPromptExecutionSettings(
    function_choice_behavior=FunctionChoiceBehavior.Auto(filters={"excluded_plugins": ["ChatBot"]})
)

# Example Native Plugin (Tool)
class WeatherPlugin:
    @kernel_function(name="get_weather", description="Gets the weather for a city")
    def get_weather(self, city: str) -> str:
        """Retrieves the weather for a given city."""
        if "paris" in city.lower():
            return f"The weather in {city} is 20°C and sunny."
        elif "london" in city.lower():
            return f"The weather in {city} is 15°C and cloudy."
        else:
            return f"Sorry, I don't have the weather for {city}."

@cl.on_chat_start
async def on_chat_start():
    # Setup Semantic Kernel
    kernel = sk.Kernel()

    # Add your AI service (e.g., OpenAI)
    # Make sure OPENAI_API_KEY is set in your environment (OPENAI_ORG_ID is optional)
    ai_service = OpenAIChatCompletion(service_id="default", ai_model_id="gpt-4o")
    kernel.add_service(ai_service)

    # Import the WeatherPlugin
    kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")
    
    # Instantiate and add the Chainlit filter to the kernel
    # This will automatically capture function calls as Steps
    sk_filter = cl.SemanticKernelFilter(kernel=kernel)

    cl.user_session.set("kernel", kernel)
    cl.user_session.set("ai_service", ai_service)
    cl.user_session.set("chat_history", ChatHistory())

@cl.on_message
async def on_message(message: cl.Message):
    kernel = cl.user_session.get("kernel") # type: sk.Kernel
    ai_service = cl.user_session.get("ai_service") # type: OpenAIChatCompletion
    chat_history = cl.user_session.get("chat_history") # type: ChatHistory

    # Add user message to history
    chat_history.add_user_message(message.content)

    # Create a Chainlit message for the response stream
    answer = cl.Message(content="")

    # Stream the response; the user's message is already part of chat_history
    async for msg in ai_service.get_streaming_chat_message_content(
        chat_history=chat_history,
        settings=request_settings,
        kernel=kernel,
    ):
        if msg.content:
            await answer.stream_token(msg.content)

    # Add the full assistant response to history
    chat_history.add_assistant_message(answer.content)

    # Send the final message
    await answer.send()
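
Optionally, you can seed the conversation with a system prompt when the chat starts. A minimal sketch (the prompt text is illustrative):

# In on_chat_start, replace the empty history with one that carries a system message
chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful assistant. Use tools when relevant.")  # illustrative prompt
cl.user_session.set("chat_history", chat_history)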

Step 3: Run the Application

To start your app, open a terminal and navigate to the directory containing app.py. Then run the following command:

chainlit run app.py -w

The -w flag tells Chainlit to enable auto-reloading, so you don’t need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000. Interact with the bot, and if you ask for the weather (and the LLM uses the tool), you should see a “Weather-get_weather” step appear in the UI.
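
If port 8000 is already in use, you can point Chainlit at a different one (assuming the standard CLI flags):

chainlit run app.py -w --port 8001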