# Command-Line Interface

Launch Lumen AI from the command line with `lumen-ai serve`.
## Basic usage
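The minimal invocation, using the command named in the introduction above:

```shell
lumen-ai serve
```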
Starts the server at localhost:5006.
## Load data
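Pass local data files straight to the serve command. The filenames below are the same placeholders used in the full example later in this guide:

```shell
lumen-ai serve penguins.csv
lumen-ai serve data/*.csv
```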
### Database connections

Connect to databases using SQLAlchemy URLs:
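For example, a PostgreSQL connection (the credentials and host below are placeholders):

```shell
lumen-ai serve postgresql://user:pass@host:port/db
```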
### Automatic database detection

For `.db` files, Lumen automatically detects whether they're SQLite or DuckDB databases. Lumen reads the file header to determine the database type:

- SQLite files start with `SQLite format 3`
- DuckDB files start with `DUCK`
You can still use explicit URLs if preferred:
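For instance (the paths are placeholders, matching the URL forms listed below):

```shell
lumen-ai serve sqlite:///path/to/file.db
lumen-ai serve duckdb:///path/to/file.db
```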
**Tables auto-discovered**
When connecting to a database, Lumen automatically discovers all available tables. No need to specify individual table names.
Supported databases:

- SQLite: `sqlite:///path/to/file.db` (or just `file.db` for auto-detection)
- DuckDB: `duckdb:///path/to/file.db` (or just `file.db` for auto-detection)
- PostgreSQL: `postgresql://user:pass@host:port/db`
- MySQL: `mysql+pymysql://user:pass@host:port/db`
- Oracle: `oracle://user:pass@host:port/db`
- SQL Server: `mssql+pyodbc://user:pass@host:port/db`
See SQLAlchemy documentation for full URL syntax.
## Configure LLM
### Choose provider
If not specified, Lumen auto-detects from environment variables.
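To pick a provider explicitly, for example:

```shell
lumen-ai serve --provider anthropic
```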
Supported providers:

- `openai` - Requires `OPENAI_API_KEY`
- `anthropic` - Requires `ANTHROPIC_API_KEY`
- `google` - Requires `GEMINI_API_KEY`
- `mistral` - Requires `MISTRAL_API_KEY`
- `azure-openai` - Requires `AZUREAI_ENDPOINT_KEY`
- `ollama` - Local models
- `llama-cpp` - Local models
- `litellm` - Multi-provider
### Set API key
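Pass the key directly on the command line (the key value is a placeholder):

```shell
lumen-ai serve --provider openai --api-key sk-...
```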
Or use environment variables:
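For example, with OpenAI (the key value is a placeholder):

```shell
export OPENAI_API_KEY=sk-...
lumen-ai serve --provider openai
```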
## Configure models
### Quick model selection

For simple cases, use `--model` to set the default model:
```shell
lumen-ai serve --provider openai --model 'gpt-4o-mini'
lumen-ai serve --provider anthropic --model 'claude-sonnet-4-5'
lumen-ai serve --provider google --model 'gemini-2.0-flash'
```
The `--model` argument automatically sets `model_kwargs['default']['model']` for you.
### Advanced model configuration

For multiple models or additional parameters, use `--model-kwargs` with JSON:
```shell
lumen-ai serve --model-kwargs '{
  "default": {"model": "gpt-4o-mini"},
  "sql": {"model": "gpt-4o"},
  "edit": {"model": "gpt-4o"}
}'
```

```shell
lumen-ai serve --provider ollama \
  --model 'qwen3:32b' \
  --model-kwargs '{"edit": {"model": "mistral-small3.2:24b"}}'
```
This sets `qwen3:32b` as the default model and `mistral-small3.2:24b` for editing tasks.
**Escape JSON properly**

The JSON string must be properly quoted: use single quotes around the entire JSON and double quotes inside.
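If you build the command programmatically, letting a JSON serializer and shell quoting do the escaping avoids mistakes. A sketch using only Python's standard library:

```python
import json
import shlex

model_kwargs = {
    "default": {"model": "gpt-4o-mini"},
    "sql": {"model": "gpt-4o"},
}

# json.dumps produces double-quoted JSON; shlex.quote wraps the whole
# string in single quotes so the shell passes it through unchanged.
arg = shlex.quote(json.dumps(model_kwargs))
command = f"lumen-ai serve --model-kwargs {arg}"
print(command)
```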
**Model types**

Common model types in `model_kwargs`:

- `default` - General queries and analysis
- `sql` - SQL query generation (some providers use specialized models)
- `edit` - Code/chart editing (may use more capable models)
- `ui` - UI responsiveness checks (lightweight models)
## Adjust temperature

Lower (0.1) = deterministic. Higher (0.7) = creative. Range: 0.0-2.0.
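For example, using the value shown in the flags table below:

```shell
lumen-ai serve --temperature 0.5
```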
## Select agents

Agent names are case-insensitive, and the "Agent" suffix is optional: `sql` = `sqlagent` = `SQLAgent`.
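For example, to enable only the SQL and chat agents:

```shell
lumen-ai serve --agents SQLAgent ChatAgent
```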
## Common flags

| Flag | Purpose | Example |
|---|---|---|
| `--code-execution` | Code execution mode | `--code-execution prompt` |
| `--provider` | LLM provider | `--provider anthropic` |
| `--api-key` | API key | `--api-key sk-...` |
| `--model` | Default model | `--model 'qwen3:32b'` |
| `--model-kwargs` | Advanced model config | `--model-kwargs '{"sql": {"model": "gpt-4o"}}'` |
| `--temperature` | Randomness | `--temperature 0.5` |
| `--agents` | Active agents | `--agents SQLAgent ChatAgent` |
| `--port` | Server port | `--port 8080` |
| `--address` | Network address | `--address 0.0.0.0` |
| `--show` | Auto-open browser | `--show` |
| `--log-level` | Verbosity | `--log-level DEBUG` |
## Full example
```shell
lumen-ai serve penguins.csv \
  --provider openai \
  --model 'gpt-4o-mini' \
  --model-kwargs '{"sql": {"model": "gpt-4o"}}' \
  --temperature 0.5 \
  --agents SQLAgent ChatAgent VegaLiteAgent \
  --port 8080 \
  --show
```

```shell
lumen-ai serve data/*.csv \
  --provider ollama \
  --model 'qwen3:32b' \
  --temperature 0.4 \
  --log-level debug \
  --show
```
## View all options

Shows all available flags including Panel server options.
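The listing is printed with the standard help flag (assumed here, following common CLI convention):

```shell
lumen-ai serve --help
```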