Installation¶
Lumen works with Python 3.11+ on Linux, Windows, and macOS.
Already installed? Jump to the Quick Start to begin chatting with your data.
Bring Your Own LLM¶
Lumen works with any LLM provider. Choose the approach that fits your needs:
- ☁️ Cloud providers — OpenAI, Anthropic, Google, Mistral, Azure (easiest to get started)
- 🖥️ Locally hosted — Ollama, Llama.cpp (free, runs on your machine, no API keys)
- 🔀 Router/Multi-provider — LiteLLM (unified interface, 100+ models)
☁️ Cloud Service Providers¶
Use hosted LLM APIs from major providers. Fastest to set up, pay-per-use pricing.
- OpenAI — get your API key from platform.openai.com
- Anthropic — get your API key from console.anthropic.com
- Google — get your API key from AI Studio
- Mistral — get your API key from console.mistral.ai

For example, to install Lumen with OpenAI support:

```bash
pip install 'lumen[ai-openai]'
```
For Azure OpenAI, configure your resource in the Azure Portal, then export its key, endpoint, and API version:

```bash
export AZURE_OPENAI_API_KEY=your-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
export AZURE_OPENAI_API_VERSION=2024-02-15-preview
```
🖥️ Locally Hosted¶
Run open-source LLMs on your own machine. No API keys required, full privacy, free to use.
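For example, with Ollama the setup can be sketched as follows (the `ai-llama` extra name is an assumption; check Lumen's documented extras for the exact spelling):

```shell
# Assumes Ollama is installed from ollama.com and its server is running.
ollama pull llama3.1            # download an open-weight model locally

# The extra name below is an assumption; verify against Lumen's extras list.
pip install 'lumen[ai-llama]'
```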
🔀 Router / Multi-Provider¶
Use a unified interface to switch between multiple LLM providers. Useful for testing different models or supporting multiple backends.
Supports 100+ models across OpenAI, Anthropic, Google, Mistral, and more.
Set environment variables for your providers:
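For example (these are the standard environment variables LiteLLM reads; set only the ones for the providers you use):

```shell
export OPENAI_API_KEY=your-openai-key
export ANTHROPIC_API_KEY=your-anthropic-key
export GEMINI_API_KEY=your-gemini-key
export MISTRAL_API_KEY=your-mistral-key
```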
Then configure your model in Lumen using the provider prefix:
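LiteLLM addresses every backend through a single `"<provider>/<model>"` string, so switching providers is just a string change. A minimal sketch of the naming scheme (the model names below are illustrative examples of LiteLLM's format, not a list Lumen requires):

```python
# LiteLLM-style model identifiers take the form "<provider>/<model-name>".
# Swapping providers means swapping the string; no other code changes.
models = [
    "openai/gpt-4o-mini",
    "anthropic/claude-3-5-sonnet-20241022",
    "gemini/gemini-1.5-pro",
    "mistral/mistral-large-latest",
]

for model in models:
    provider, _, name = model.partition("/")
    print(f"{provider:>10} -> {name}")
```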
Verify Installation¶
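A quick way to confirm the install before launching is to check that the `lumen` package imports and that a provider key is exported. The helper below is a stdlib-only sketch (the function name and messages are illustrative, not part of Lumen's API):

```python
import importlib.util
import os


def check_setup(key_var: str = "OPENAI_API_KEY") -> str:
    """Report whether Lumen and a provider API key are available."""
    if importlib.util.find_spec("lumen") is None:
        return "missing: lumen (run: pip install 'lumen[ai-openai]')"
    if not os.environ.get(key_var):
        return f"missing: {key_var} environment variable"
    return "ok"


print(check_setup())
```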
Next Steps¶
Ready to start? Head to the Quick Start guide to chat with your first dataset.