LLM Providers let you configure the prompt playground to work with the LLM provider of your choice.

Provider library

We currently offer implementations for OpenAI, AzureOpenAI and Anthropic. This lets you get started quickly without having to re-implement the connection to these LLM providers.

The HuggingFace provider example is available to help you adapt it to the HuggingFace-hosted model of your choice.

OpenAI

Required environment variables:

OPENAI_API_KEY=

AzureOpenAI

Required environment variables:

AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_ENDPOINT= # something like https://mydomain.openai.azure.com
AZURE_OPENAI_DEPLOYMENT_NAME= # something like gpt-35-turbo

Optional environment variables:

AZURE_OPENAI_API_KEY="" # AZURE_OPENAI_API_KEY is mandatory so you can't set it to None, but you can set it to an empty string
AZURE_AD_TOKEN=
AZURE_DEPLOYMENT=

Anthropic

Required environment variables:

ANTHROPIC_API_KEY=

Custom provider

You can define your own provider to connect with the LLM provider of your choice. Below is an example of what it could look like.

Step 1: Define the LLM Provider

The create_completion method is required; it is where you place the logic that communicates with your LLM provider.

The format_message and message_to_string methods are optional but encouraged. Defining them allows the provider to work whether prompts are defined as a single string or as a list of messages.

The environment variables are required to auto-detect whether the provider is configured; Chainlit only shows configured LLM providers in the prompt playground.

The inputs field is a list of controls offered by this LLM provider; Chainlit displays them in the side panel of the prompt playground.

The is_chat property is a toggle that defines whether you feed the LLM provider a list of messages (like OpenAI’s gpt-4 model) or a single text string (like Anthropic’s claude-2 model).
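
As a rough illustration (the exact message format depends on your provider), the two prompt shapes look like this:

# is_chat=True: the provider is fed a list of messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# is_chat=False: the provider is fed a single prompt string
prompt = "You are a helpful assistant.\n\nHello!"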

from typing import Dict, Optional

import chainlit as cl
from fastapi.responses import StreamingResponse

from chainlit.input_widget import Select, Slider
from chainlit.playground.config import BaseProvider, add_llm_provider


class ExampleProvider(BaseProvider):

    # Format the message based on the template provided
    def format_message(self, message: cl.GenerationMessage, inputs: Optional[Dict]):
        message = super().format_message(message, inputs)
        # Additional formatting...
        return message

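    # Convert a message to a plain string (used when prompts are passed as a single string)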
    def message_to_string(self, message: cl.GenerationMessage):
        return message.content

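    # Called by the prompt playground to generate (and stream) a completion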
    async def create_completion(self, request):
        await super().create_completion(request)

        # Check the OpenAI provider for a streaming example
        # https://github.com/Chainlit/chainlit/blob/main/src/chainlit/playground/providers/openai.py#L174

        stream = ["This ", "is ", "the ", "test ", "completion"]

        async def create_event_stream():
            for token in stream:
                await cl.sleep(0.1)
                yield token

        return StreamingResponse(create_event_stream())


example_env_vars = {"api_key": "EXAMPLE_API_KEY"}

Example = ExampleProvider(
    id="example",
    name="Example",
    env_vars=example_env_vars,
    inputs=[
        Select(
            id="model",
            label="Model",
            values=["text-001", "text-002"],
            initial_value="text-002",
        ),
        Slider(
            id="temperature",
            label="Temperature",
            min=0.0,
            max=1.0,
            step=0.01,
            initial=0.9,
        )
    ],
    is_chat=False,
)
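
Note that create_completion returns a StreamingResponse, which lets the playground stream tokens back to the UI as they are generated; the cl.sleep calls above only simulate generation latency.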

Step 2: Register the LLM Provider

Once you have defined the provider, you need to tell Chainlit that it exists.

add_llm_provider(Example)
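
A common place to call this is at the top level of your Chainlit app module, so the provider is registered when the app starts. As a minimal sketch (app.py is a hypothetical file name):

# app.py (hypothetical file name, for illustration only)
from chainlit.playground.config import add_llm_provider

# ExampleProvider and Example are defined as in Step 1...

add_llm_provider(Example)  # register before Chainlit serves the playground

Remember that the provider will only show up in the playground if its required environment variables (here EXAMPLE_API_KEY) are set.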

Langchain Provider

Adding an LLM Provider from a Langchain LLM class is straightforward.

Langchain Providers won’t have editable settings in the Prompt Playground. If you need editable settings, extend the BaseProvider class and create your own provider.

Here is an example with HuggingFaceHub (it works the same for other LLMs).

import os
from chainlit.playground.config import add_llm_provider
from chainlit.playground.providers.langchain import LangchainGenericProvider
from langchain.llms import HuggingFaceHub

# Instantiate the LLM
llm = HuggingFaceHub(
    model_kwargs={"max_length": 500},
    repo_id="declare-lab/flan-alpaca-large",
    huggingfacehub_api_token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
)

# Add the LLM provider
add_llm_provider(
    LangchainGenericProvider(
        # It is important that the id of the provider matches the _llm_type
        id=llm._llm_type,
        # The name is not important. It will be displayed in the UI.
        name="HuggingFaceHub",
        # This should always be a Langchain llm instance (correctly configured)
        llm=llm,
        # If the LLM works with messages, set this to True
        is_chat=False
    )
)
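
The same pattern applies to Langchain chat models; the main difference is is_chat=True. The snippet below is an illustrative sketch using ChatOpenAI as one example of a Langchain chat model:

from langchain.chat_models import ChatOpenAI
from chainlit.playground.config import add_llm_provider
from chainlit.playground.providers.langchain import LangchainGenericProvider

# Illustrative sketch: any correctly configured Langchain chat model works
chat_llm = ChatOpenAI(model="gpt-3.5-turbo")

add_llm_provider(
    LangchainGenericProvider(
        id=chat_llm._llm_type,  # "openai-chat" for ChatOpenAI
        name="ChatOpenAI",
        llm=chat_llm,
        # Chat models consume a list of messages, so is_chat=True
        is_chat=True,
    )
)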