Installation¶
Lumen works with Python 3.11+ on Linux, Windows, and Mac.
Already installed?
Jump to the Quick Start to start chatting with your data.
Bring Your Own LLM¶
Lumen works with any LLM provider. Choose the approach that fits your needs:
- ☁️ Cloud providers — OpenAI, Anthropic, Google, Mistral, Azure (easiest to get started)
- 🖥️ Locally hosted — Ollama, Llama.cpp (free, runs on your machine, no API keys)
- 🔀 Router/Multi-provider — LiteLLM (unified interface, 100+ models)
Cloud Service Providers¶
Use hosted LLM APIs from major providers. Fastest to set up, pay-per-use pricing.
Get your OpenAI API key:
- Visit platform.openai.com/api-keys
- Click "Create new secret key"
- Copy the key (starts with sk-)
- Set environment variable:
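A minimal example, assuming the standard OPENAI_API_KEY variable read by the OpenAI SDK (and therefore by Lumen's OpenAI provider):
# macOS/Linux
export OPENAI_API_KEY='your-key-here'
# Windows PowerShell
$env:OPENAI_API_KEY='your-key-here'
# Windows CMD
set OPENAI_API_KEY=your-key-here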
Get your Anthropic API key:
- Visit console.anthropic.com/settings/keys
- Click "Create Key"
- Copy the key (starts with sk-ant-)
- Set environment variable:
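A minimal example, assuming the standard ANTHROPIC_API_KEY variable read by the Anthropic SDK:
# macOS/Linux
export ANTHROPIC_API_KEY='your-key-here'
# Windows PowerShell
$env:ANTHROPIC_API_KEY='your-key-here'
# Windows CMD
set ANTHROPIC_API_KEY=your-key-here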
Get your Google Gemini API key:
- Visit aistudio.google.com/apikey
- Click "Create API key"
- Choose "Create API key in new project" or select existing project
- Copy the key (starts with AIza)
- Set environment variable:
# macOS/Linux
export GEMINI_API_KEY='your-key-here'
# Windows PowerShell
$env:GEMINI_API_KEY='your-key-here'
# Windows CMD
set GEMINI_API_KEY=your-key-here
Alternative: You can also use GOOGLE_API_KEY instead of GEMINI_API_KEY.
Get your Mistral API key:
- Visit console.mistral.ai/api-keys
- Click "Create new key"
- Copy the key
- Set environment variable:
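A minimal example, assuming the standard MISTRAL_API_KEY variable read by the Mistral SDK:
# macOS/Linux
export MISTRAL_API_KEY='your-key-here'
# Windows PowerShell
$env:MISTRAL_API_KEY='your-key-here'
# Windows CMD
set MISTRAL_API_KEY=your-key-here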
Azure OpenAI deployments use the OpenAI SDK, so install the same extra and point Lumen at your Azure endpoint:
pip install 'lumen[ai-openai]'
export AZUREAI_ENDPOINT_KEY=your-key
export AZUREAI_ENDPOINT_URL=https://your-resource.openai.azure.com/
Get your credentials:
- Visit portal.azure.com
- Navigate to your Azure OpenAI resource
- Go to "Keys and Endpoint"
- Copy KEY 1 or KEY 2 and your endpoint URL
- Set environment variables:
# macOS/Linux
export AZUREAI_ENDPOINT_KEY='your-key-here'
export AZUREAI_ENDPOINT_URL='https://your-resource.openai.azure.com/'
# Windows PowerShell
$env:AZUREAI_ENDPOINT_KEY='your-key-here'
$env:AZUREAI_ENDPOINT_URL='https://your-resource.openai.azure.com/'
# Windows CMD
set AZUREAI_ENDPOINT_KEY=your-key-here
set AZUREAI_ENDPOINT_URL=https://your-resource.openai.azure.com/
Azure-hosted Mistral deployments use the Mistral SDK instead:
pip install 'lumen[ai-mistralai]'
export AZUREAI_ENDPOINT_KEY=your-key
export AZUREAI_ENDPOINT_URL=https://your-resource-endpoint.com/
Get your credentials:
- Visit portal.azure.com
- Navigate to your Azure AI resource with Mistral deployment
- Go to "Keys and Endpoint"
- Copy KEY 1 or KEY 2 and your endpoint URL
- Set environment variables (same as Azure OpenAI above)
Locally Hosted¶
Run open-source LLMs on your own machine. No API keys required, full privacy, free to use.
Set up Ollama:
- Install Ollama from ollama.com
- Start the Ollama service (usually starts automatically)
- Pull a model (example below)
- (Optional) Set a custom endpoint if not using the default (see the sketch below)
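For example, to pull a model (the model name here is just an illustration):
ollama pull llama3.1
If your Ollama server is not on the default address, you can point Lumen at it when constructing the provider. A minimal sketch; the endpoint keyword is an assumption, so check the Lumen LLM provider reference:
import lumen.ai as lmai
# Assumed keyword: endpoint, targeting a non-default Ollama server
llm = lmai.llm.Ollama(endpoint='http://localhost:11434/v1')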
No additional environment variables needed! Ollama works out of the box.
No setup required! The first time you use Llama.cpp, Lumen will automatically download the model you specify. No environment variables needed.
Optional: Set a custom model URL:
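A hypothetical sketch, assuming the Llama.cpp provider accepts per-model settings through model_kwargs (the same pattern as the Bedrock example below); the class name and the model_url key are assumptions, so consult the Lumen LLM provider reference for the supported options:
import lumen.ai as lmai
# 'model_url' is an assumed key pointing at a custom GGUF download location
llm = lmai.llm.LlamaCpp(
    model_kwargs={
        "default": {"model_url": "https://example.com/path/to/model.gguf"},
    }
)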
Anaconda AI Navigator runs models locally on your machine. No API key needed!
Default endpoint: http://localhost:8080/v1
Optional: Set custom endpoint:
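A minimal sketch; the AINavigator provider name and endpoint keyword are assumptions, so verify them against the Lumen LLM provider reference:
import lumen.ai as lmai
# Assumed class and keyword: point Lumen at a non-default AI Navigator port
llm = lmai.llm.AINavigator(endpoint='http://localhost:8080/v1')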
Router / Multi-Provider¶
Use a unified interface to access multiple LLM providers and models.
Supports 100+ models across OpenAI, Anthropic, Google, Mistral, and more.
Set environment variables for providers you want to use:
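For example (these are the standard variables the underlying SDKs read; set only the ones you need):
export OPENAI_API_KEY='your-key-here'
export ANTHROPIC_API_KEY='your-key-here'
export GEMINI_API_KEY='your-key-here'
export MISTRAL_API_KEY='your-key-here'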
Then use any supported model:
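A minimal sketch, assuming Lumen's LiteLLM provider follows the same model_kwargs pattern as the Bedrock example below; the model string follows LiteLLM's model naming:
import lumen.ai as lmai
# Model strings like 'gpt-4o-mini' or 'anthropic/claude-...' use LiteLLM naming
llm = lmai.llm.LiteLLM(
    model_kwargs={
        "default": {"model": "gpt-4o-mini"},
    }
)
ui = lmai.ExplorerUI(data='data.csv', llm=llm)
ui.servable()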
AWS Bedrock is a managed gateway that provides access to foundation models from Anthropic, Meta, Mistral, Amazon, Cohere, and AI21 through a unified API.
Authentication:
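Bedrock uses standard AWS credentials. One common option is environment variables (boto3 also picks up ~/.aws/credentials and IAM roles):
export AWS_ACCESS_KEY_ID='your-access-key'
export AWS_SECRET_ACCESS_KEY='your-secret-key'
export AWS_DEFAULT_REGION='us-east-1'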
Choose your Lumen provider:
- Anthropic SDK route: optimized for Claude models using Anthropic's SDK.
- Bedrock (boto3) route: universal access to all Bedrock models, as in the example below.
import lumen.ai as lmai

llm = lmai.llm.Bedrock(
    model_kwargs={
        "default": {"model": "us.anthropic.claude-sonnet-4-5-20250929-v1:0"},
    }
)

ui = lmai.ExplorerUI(data='data.csv', llm=llm)
ui.servable()
Available models:
- Anthropic (Claude), Meta (Llama), Mistral, Amazon (Titan), Cohere, AI21
- Model IDs: us.anthropic.claude-*, meta.llama3-*, mistral.*, amazon.titan-*
- Full model list: see the AWS Bedrock documentation
IAM Permissions: your AWS credentials must allow the bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions for the models you plan to use.
Making Environment Variables Persistent¶
Set variables permanently so you don't have to export them every session:
Add to ~/.bashrc or ~/.bash_profile:
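For example:
export OPENAI_API_KEY='your-key-here'
Then reload your shell configuration:
source ~/.bashrc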
- Search for "Environment Variables" in Start Menu
- Click "Edit the system environment variables"
- Click "Environment Variables"
- Under "User variables", click "New"
- Add variable name (e.g., OPENAI_API_KEY) and value
- Click OK
- Restart your terminal
Verify Installation¶
Test your LLM connection:
# Test script: verify Lumen can reach your LLM
import lumen.ai as lmai

llm = lmai.llm.OpenAI()  # or Anthropic(), Google(), etc.

# Pass the llm explicitly so the UI uses the provider you configured
ui = lmai.ExplorerUI(data='test.csv', llm=llm)
ui.servable()
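Save the script (the filename test_llm.py below is just an example) and launch it with Panel:
panel serve test_llm.py --show
If a chat interface opens in your browser and responds to a prompt, your LLM connection works.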
Next Steps¶
Ready to start? Head to the Quick Start guide to chat with your first dataset.
Missing Your Favorite LLM?¶
Let us know by submitting a GitHub issue!