LLM provider abstraction.
LlmAgent wraps multiple large-language-model backends behind a single
async complete() interface. Supported providers:
- Anthropic (claude-* models, Messages API)
- OpenAI (gpt-* and compatible, Chat Completions API)
- Ollama (local, OpenAI-compatible endpoint)
The active provider and model are selected via LlmConfig.
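As a sketch of how provider selection might fit together (the `LlmProvider` variants and `LlmConfig` fields below are assumptions for illustration, not the crate's actual definitions), a config could pair a provider with a model name following the claude-* / gpt-* convention described above:

```rust
// Illustrative sketch only: LlmProvider / LlmConfig names and fields are
// assumptions, not the crate's actual definitions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum LlmProvider {
    Anthropic,
    OpenAi,
    Ollama,
}

#[derive(Debug, Clone)]
struct LlmConfig {
    provider: LlmProvider,
    model: String,
}

// Hypothetical helper: guess the provider from a model-name prefix,
// mirroring the claude-* / gpt-* naming noted above; anything else is
// treated as a local Ollama model.
fn provider_for_model(model: &str) -> LlmProvider {
    if model.starts_with("claude-") {
        LlmProvider::Anthropic
    } else if model.starts_with("gpt-") {
        LlmProvider::OpenAi
    } else {
        LlmProvider::Ollama
    }
}

fn main() {
    let model = "claude-3-5-sonnet-latest";
    let config = LlmConfig {
        provider: provider_for_model(model),
        model: model.to_string(),
    };
    println!("using {:?} / {}", config.provider, config.model);
}
```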
Structs
- ChatMessage - A single turn in a conversation (role + content).
- LlmAgent - An actor that calls an LLM provider and returns completions.
- LlmConfig - Configuration for the LLM backend.
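A minimal sketch of the role + content shape of a conversation turn (the exact fields of `ChatMessage` are not shown on this page; plain strings are assumed here for illustration):

```rust
// Sketch under assumptions: ChatMessage's real fields are not documented
// on this page; role and content are modeled as plain strings.
#[derive(Debug, Clone)]
struct ChatMessage {
    role: String,    // e.g. "system", "user", "assistant"
    content: String,
}

// Hypothetical convenience constructor for a user turn.
fn user(content: &str) -> ChatMessage {
    ChatMessage {
        role: "user".to_string(),
        content: content.to_string(),
    }
}

fn main() {
    let turn = user("Summarize this document.");
    println!("{}: {}", turn.role, turn.content);
}
```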
Enums
- LlmProvider - Supported LLM provider backends.
Functions
- calc_cost_nano_usd - Calculate cost in nano-USD from token counts and model name.
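The nano-USD unit (1 nano-USD = 1e-9 USD) lets cost be tracked in integers, avoiding floating-point rounding in accounting. A hedged sketch of such a calculation follows; the signature and the per-token rates are assumptions for illustration, not the crate's function or any published pricing:

```rust
// Illustrative sketch only: the real calc_cost_nano_usd lives in the crate;
// the signature and per-token rates here are assumptions, not real prices.
// 1 nano-USD = 1e-9 USD, so e.g. $3 per million tokens = 3000 nano-USD/token.
fn calc_cost_nano_usd(input_tokens: u64, output_tokens: u64, model: &str) -> u64 {
    // (input rate, output rate) in nano-USD per token; hypothetical values.
    let (in_rate, out_rate) = if model.starts_with("claude-") {
        (3_000, 15_000)
    } else if model.starts_with("gpt-") {
        (2_500, 10_000)
    } else {
        (0, 0) // local models (Ollama) incur no per-token cost
    };
    input_tokens * in_rate + output_tokens * out_rate
}

fn main() {
    // 1000 input + 100 output tokens on a claude-* model:
    // 1000 * 3000 + 100 * 15000 = 4_500_000 nano-USD ($0.0045).
    println!("{}", calc_cost_nano_usd(1000, 100, "claude-3-5-sonnet"));
}
```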