# Command-Line Interface

Launch Lumen AI from the command line with `lumen-ai serve`.
## Basic usage
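Run the command without any arguments:

```bash
lumen-ai serve
```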
Starts the server at `localhost:5006`.
## Load data
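Pass a data file as a positional argument to load it at startup; `penguins.csv` here is the illustrative dataset from the full example below:

```bash
lumen-ai serve penguins.csv
```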
## Configure LLM
### Choose provider
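Select a provider explicitly with the `--provider` flag:

```bash
lumen-ai serve --provider anthropic
```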
If `--provider` is not specified, Lumen auto-detects the provider from environment variables.
Supported providers:
- `openai` - Requires `OPENAI_API_KEY`
- `anthropic` - Requires `ANTHROPIC_API_KEY`
- `google` - Requires `GEMINI_API_KEY`
- `mistral` - Requires `MISTRAL_API_KEY`
- `azure-openai` - Requires `AZUREAI_ENDPOINT_KEY`
- `ollama` - Local models
- `llama-cpp` - Local models
- `litellm` - Multi-provider
### Set API key
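Pass the key directly with the `--api-key` flag:

```bash
lumen-ai serve --api-key sk-...
```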
Or use environment variables:
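For example, with OpenAI (the other provider variables listed above work the same way):

```bash
export OPENAI_API_KEY=sk-...
lumen-ai serve
```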
### Configure models
Use different models for different tasks:
**Multiple models**

```bash
lumen-ai serve --model-kwargs '{
  "default": {"model": "gpt-4o-mini"},
  "sql": {"model": "gpt-4o"}
}'
```
**Escape JSON properly.** The JSON string must be properly quoted: use single quotes around the entire JSON and double quotes inside it.
### Adjust temperature
Lower values (e.g. 0.1) are more deterministic; higher values (e.g. 0.7) are more creative. Range: 0.0-2.0.
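Set it with the `--temperature` flag:

```bash
lumen-ai serve --temperature 0.5
```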
### Select agents
Agent names are case-insensitive, and the "Agent" suffix is optional: `sql` = `sqlagent` = `SQLAgent`.
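For example, to enable only the SQL and chat agents:

```bash
lumen-ai serve --agents SQLAgent ChatAgent
```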
## Common flags
| Flag | Purpose | Example |
|---|---|---|
| `--provider` | LLM provider | `--provider anthropic` |
| `--api-key` | API key | `--api-key sk-...` |
| `--model-kwargs` | Model config | `--model-kwargs '{"sql": {"model": "gpt-4o"}}'` |
| `--temperature` | Randomness | `--temperature 0.5` |
| `--agents` | Active agents | `--agents SQLAgent ChatAgent` |
| `--port` | Server port | `--port 8080` |
| `--address` | Network address | `--address 0.0.0.0` |
| `--show` | Auto-open browser | `--show` |
| `--log-level` | Verbosity | `--log-level DEBUG` |
## Full example
**Complete configuration**

```bash
lumen-ai serve penguins.csv \
  --provider openai \
  --model-kwargs '{"default": {"model": "gpt-4o-mini"}, "sql": {"model": "gpt-4o"}}' \
  --temperature 0.5 \
  --agents SQLAgent ChatAgent VegaLiteAgent \
  --port 8080 \
  --show
```
## View all options
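Run the serve command with the standard `--help` flag:

```bash
lumen-ai serve --help
```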
Shows all available flags, including Panel server options.