LLM-powered assistants take multiple steps to process a user’s request, forming a chain of thought.
Unlike a Message, a Step has a type, an input/output, and a start/end. Depending on the config.ui.cot setting, the chain of thought can be displayed in full, hidden entirely, or limited to tool calls only.
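For instance, a Step can also be created explicitly with the cl.Step context manager, which makes those properties visible: entering the context marks the start, exiting it marks the end, and the input and output are set on the step object. The sketch below is illustrative; the step name "my_tool" and its contents are assumptions, not part of the example that follows.

```python
import chainlit as cl


@cl.on_message
async def main(message: cl.Message):
    # Entering the context starts the step; exiting it ends the step.
    async with cl.Step(name="my_tool", type="tool") as step:
        step.input = message.content  # what the step received
        await cl.sleep(2)  # simulate some work
        step.output = "Response from the tool!"  # what the step produced

    await cl.Message(content="This is the final answer").send()
```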
Let’s take a simple example of a chain of thought that takes a user’s message, processes it, and sends a response.
```python
import chainlit as cl


@cl.step(type="tool")
async def tool():
    # Simulate a running task
    await cl.sleep(2)

    return "Response from the tool!"


@cl.on_message
async def main(message: cl.Message):
    # Call the tool
    tool_res = await tool()

    # Send the final answer.
    await cl.Message(content="This is the final answer").send()
```
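The return value of tool() is used as the step’s output in the chain of thought, while the final cl.Message is the answer shown to the user. If you want the final answer to reuse the tool’s result, a small variation of the handler above (the message wording here is just an illustration) could look like this:

```python
@cl.on_message
async def main(message: cl.Message):
    # Call the tool and keep its output.
    tool_res = await tool()

    # Reuse the tool's output in the final answer.
    await cl.Message(content=f"The tool said: {tool_res}").send()
```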