LLM-powered assistants take multiple steps to process a user’s request, forming a chain of thought. Unlike a Message, a Step has a type, an input/output, and a start/end.

In the UI, the steps of type tool are displayed in real time to give the user a sense of the assistant’s thought process. In Literal AI, the full chain of thought is logged for debugging and replayability purposes.

A Simple Tool Calling Example

Let’s take a simple example of a chain of thought that takes a user’s message, processes it, and sends a response.

import chainlit as cl


@cl.step(type="tool")
async def tool():
    # Simulate a running task
    await cl.sleep(2)

    return "Response from the tool!"


@cl.on_message
async def main(message: cl.Message):
    # Create an empty message that will hold the final answer
    final_answer = await cl.Message(content="").send()

    # Call the tool
    tool_res = await tool()

    # Update the final answer with the tool's response
    final_answer.content = tool_res
    await final_answer.update()

Output of the code above

Step API

There are two ways to create steps: either with the @cl.step decorator or with the cl.Step class.
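
The decorator form is shown in the example above. Below is a minimal sketch of the second approach, using cl.Step as an async context manager; the step name and content values are illustrative.

import chainlit as cl


@cl.on_message
async def main(message: cl.Message):
    # Create a step with the cl.Step class, used as an async context manager.
    # The step starts when the block is entered and ends when it exits.
    async with cl.Step(name="my_tool", type="tool") as step:
        step.input = message.content

        # Simulate a running task
        await cl.sleep(2)

        # Whatever is assigned to step.output is logged as the step's output
        step.output = "Response from the tool!"

    # Send the final answer once the step has completed
    await cl.Message(content=step.output).send()

The class-based form is useful when the step's input and output are only known partway through a larger function, whereas the decorator automatically maps the function's arguments and return value to the step's input and output.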