CompletionGeneration
A CompletionGeneration contains all of the data sent to a completion-based LLM (such as Llama 2), as well as the response returned by the LLM.
It is designed to be passed to a Step to enable the Prompt Playground.
Attributes
The LLM model used.
The LLM prompt.
The completion of the prompt.
The variables of the prompt, represented as a dictionary.
The provider of the LLM, such as “openai”.
The list of tools that can be used by the LLM.
The settings the LLM provider was called with.
The error returned by the LLM.
The tags you want to assign to this generation.
Total tokens sent to the LLM.
Total tokens returned by the LLM.
Total tokens used by the LLM.
Time from the request to receiving the first token.
LLM speed of token generation, in tokens per second.
Total duration of the LLM request.
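The attribute names themselves are not listed on this page, so the keyword arguments in the sketch below (provider, model, prompt, variables, completion, settings) are assumptions inferred from the descriptions above rather than a confirmed signature; it only illustrates the kind of data a CompletionGeneration carries.

```python
import chainlit as cl

# Minimal sketch: a CompletionGeneration bundling the data sent to a
# completion-based LLM and the response it returned. The keyword argument
# names are assumptions based on the attribute descriptions above.
generation = cl.CompletionGeneration(
    provider="openai",
    model="gpt-3.5-turbo-instruct",
    prompt="Translate the following text to French: {text}",
    variables={"text": "Hello, world!"},
    completion="Bonjour, le monde !",
    settings={"temperature": 0.2, "max_tokens": 64},
)
```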
Example
A completion generation displayed in the Prompt Playground
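Below is a minimal sketch of how such a generation might end up in the Prompt Playground, assuming cl.CompletionGeneration is exported by the chainlit package and that assigning it to a step's generation attribute is what surfaces it there; fake_completion_call is a hypothetical stand-in for a real completion API call.

```python
import chainlit as cl


async def fake_completion_call(prompt: str) -> str:
    # Hypothetical stand-in for a real completion-based LLM call.
    return "Bonjour, le monde !"


@cl.step(type="llm")
async def call_llm(prompt: str) -> str:
    completion = await fake_completion_call(prompt)
    # Attach the generation to the current step so the prompt, completion,
    # and settings can be inspected in the Prompt Playground.
    cl.context.current_step.generation = cl.CompletionGeneration(
        provider="openai",
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        completion=completion,
    )
    return completion


@cl.on_message
async def main(message: cl.Message):
    answer = await call_llm(message.content)
    await cl.Message(content=answer).send()
```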