
Interface

LLM Interface

Base LLM is the base class for all LLMs. It provides a common interface for all LLMs to follow.

Base LLM Config

Base LLM Config is the configuration object for an LLM. It holds the LLM's settings and is passed to the LLM's constructor when the LLM is created.

director.llm.base.BaseLLMConfig

Bases: BaseSettings

Base configuration for all LLMs.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `llm_type` | `str` | Type of LLM, e.g. `"openai"`, `"anthropic"`, etc. | *required* |
| `api_key` | `str` | API key for the LLM. | *required* |
| `api_base` | `str` | Base URL for the LLM API. | *required* |
| `chat_model` | `str` | Model name for chat completions. | *required* |
| `temperature` | `float` | Sampling temperature for completions. | *required* |
| `top_p` | `float` | Top-p (nucleus) sampling for completions. | *required* |
| `max_tokens` | `int` | Maximum number of tokens to generate. | *required* |
| `timeout` | `int` | Timeout for the request. | *required* |
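To make the field layout concrete, here is a minimal sketch of constructing a config with these parameters. This is a hypothetical stand-in built with a plain dataclass so it runs standalone; the real `BaseLLMConfig` extends pydantic's `BaseSettings`, and the default values shown here are illustrative assumptions, not the library's actual defaults.

```python
from dataclasses import dataclass


# Hypothetical stand-in mirroring the documented BaseLLMConfig fields.
# The real class is a pydantic BaseSettings subclass in
# backend/director/llm/base.py; defaults below are illustrative only.
@dataclass
class LLMConfigSketch:
    llm_type: str
    api_key: str
    api_base: str
    chat_model: str
    temperature: float = 0.7
    top_p: float = 1.0
    max_tokens: int = 4096
    timeout: int = 30


config = LLMConfigSketch(
    llm_type="openai",
    api_key="sk-...",
    api_base="https://api.openai.com/v1",
    chat_model="gpt-4o",
)
print(config.llm_type, config.temperature)
```

A concrete provider config would typically subclass the base config and pin provider-specific values (model name, API base) while reading the API key from the environment.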

Base LLM

Base LLM is the base class for all LLMs. It provides a common interface for all LLMs to follow.

director.llm.base.BaseLLM

BaseLLM(config)

Bases: ABC

Interface for all LLMs. All LLMs should inherit from this class.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config` | `BaseLLMConfig` | Configuration for the LLM. | *required* |
Source code in backend/director/llm/base.py
def __init__(self, config: BaseLLMConfig):
    """
    :param config: Configuration for the LLM.
    """
    self.config = config
    self.llm_type = config.llm_type
    self.api_key = config.api_key
    self.api_base = config.api_base
    self.chat_model = config.chat_model
    self.temperature = config.temperature
    self.top_p = config.top_p
    self.max_tokens = config.max_tokens
    self.timeout = config.timeout
    self.enable_langfuse = config.enable_langfuse

chat_completions abstractmethod

chat_completions(messages, tools)

Abstract method for chat completions

Source code in backend/director/llm/base.py
@abstractmethod
def chat_completions(self, messages: List[Dict], tools: List[Dict]) -> LLMResponse:
    """Abstract method for chat completions"""
    pass
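The pattern above can be sketched end to end: a concrete LLM subclasses the base class and implements `chat_completions`. The snippet below uses simplified stand-ins (a stub config, a dict return value, and a toy `EchoLLM` provider) so it runs without the director package; the real interface returns an `LLMResponse` and the class names here are hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List


# Stub config with only the fields this sketch needs.
@dataclass
class ConfigStub:
    llm_type: str = "echo"
    api_key: str = ""
    api_base: str = ""
    chat_model: str = "echo-1"


# Simplified analogue of BaseLLM: holds config and declares the
# abstract chat_completions method all providers must implement.
class BaseLLMSketch(ABC):
    def __init__(self, config: ConfigStub):
        self.config = config
        self.chat_model = config.chat_model

    @abstractmethod
    def chat_completions(self, messages: List[Dict], tools: List[Dict]) -> Dict:
        ...


class EchoLLM(BaseLLMSketch):
    """Toy provider: echoes the last user message back as the completion."""

    def chat_completions(self, messages: List[Dict], tools: List[Dict]) -> Dict:
        last = messages[-1]["content"] if messages else ""
        return {"content": last, "model": self.chat_model}


llm = EchoLLM(ConfigStub())
resp = llm.chat_completions([{"role": "user", "content": "hello"}], tools=[])
print(resp["content"])
```

Because `chat_completions` is abstract, instantiating a subclass that fails to implement it raises a `TypeError`, which is how the interface enforces the contract on every provider.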

LLM Response

LLM Response is the response object for an LLM. It is returned by the LLM's `chat_completions` method after processing the input messages.

director.llm.base.LLMResponse pydantic-model

Bases: BaseModel

Response model for completions from LLMs.