Lukan supports multiple model providers. All providers implement a common Provider interface returning AsyncIterable<StreamEvent>.
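The docs do not pin down the exact shape of that interface; a minimal sketch, assuming hypothetical event and method names (not Lukan's actual API), might look like:

```typescript
// Hypothetical sketch of a common Provider interface returning
// AsyncIterable<StreamEvent>. Names are illustrative assumptions.
type StreamEvent =
  | { type: "text"; text: string }
  | { type: "done" };

interface Provider {
  id: string;
  stream(prompt: string): AsyncIterable<StreamEvent>;
}

// A toy provider that yields a fixed reply, useful for testing consumers.
const echoProvider: Provider = {
  id: "echo",
  async *stream(prompt) {
    yield { type: "text", text: `echo: ${prompt}` };
    yield { type: "done" };
  },
};

// Drain a provider's stream into a single string.
async function collect(p: Provider, prompt: string): Promise<string> {
  let out = "";
  for await (const ev of p.stream(prompt)) {
    if (ev.type === "text") out += ev.text;
  }
  return out;
}
```

Because every provider exposes the same streaming shape, consumers like `collect` work unchanged regardless of which backend is configured.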

Available Providers

| Provider | ID | Default Model | Auth |
|---|---|---|---|
| Nebius | nebius | MiniMaxAI/MiniMax-M2.1 | NEBIUS_API_KEY env or nebiusApiKey |
| Anthropic | anthropic | claude-sonnet-4-5-20250929 | ANTHROPIC_API_KEY or anthropicApiKey |
| Fireworks | fireworks | accounts/fireworks/models/minimax-m2p5 | FIREWORKS_API_KEY or fireworksApiKey |
| GitHub Copilot | github-copilot | claude-sonnet-4.5 | lukan copilot-auth |
| OpenAI Codex | openai-codex | gpt-5.3-codex | lukan codex-auth |
| z.ai | zai | glm-5 | ZAI_API_KEY or zaiApiKey |
| Ollama Cloud | ollama-cloud | Fetched from API | OLLAMA_CLOUD_API_KEY or ollamaCloudApiKey |
| OpenAI Compatible | openai-compatible | Fetched from endpoint | OPENAI_COMPATIBLE_API_KEY or openaiCompatibleApiKey |
| Lukan Cloud | lukan-cloud | Inherited from Anthropic | Managed by Lukan Cloud account |

Setting Up a Provider

Interactive Setup

lukan setup
This interactive wizard helps you select a provider and enter your API key.

Manual Configuration

  1. Set the API key via environment variable:
    export NEBIUS_API_KEY="your-key-here"
    
  2. Or add it to ~/.config/lukan/credentials.json:
    {
      "nebiusApiKey": "your-key-here"
    }
    
Credentials are read from environment variables first, then from ~/.config/lukan/credentials.json.
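That lookup order can be sketched as follows. This is an illustrative sketch, not Lukan's internals; the function name is an assumption, though the env-first precedence and the credentials file path match the behavior described above.

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Sketch of the documented lookup order: environment variable first,
// then ~/.config/lukan/credentials.json. Function name is hypothetical.
function resolveApiKey(envVar: string, credentialKey: string): string | undefined {
  const fromEnv = process.env[envVar];
  if (fromEnv) return fromEnv;

  const file = path.join(os.homedir(), ".config", "lukan", "credentials.json");
  try {
    const creds = JSON.parse(fs.readFileSync(file, "utf8"));
    return creds[credentialKey];
  } catch {
    return undefined; // no credentials file or unreadable JSON
  }
}
```

For example, `resolveApiKey("NEBIUS_API_KEY", "nebiusApiKey")` returns the environment variable when set, even if the credentials file also contains a key.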

Selecting Models

# List available models
lukan models

# Select a specific model
lukan models nebius MiniMaxAI/MiniMax-M2.1
You can also specify the provider and model directly:
lukan --provider nebius --model MiniMaxAI/MiniMax-M2.1

Per-Channel Configuration

You can override the provider per-channel (WhatsApp, Email) in config.json:
{
  "whatsapp": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-5-20250929"
  },
  "email": {
    "provider": "nebius",
    "model": "MiniMaxAI/MiniMax-M2.1"
  }
}
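The override logic implied by this config can be sketched as below. The config shape follows the example above; the top-level defaults and the function name are assumptions for illustration.

```typescript
// Sketch of per-channel provider/model resolution. A channel override
// wins over top-level config, which wins over a built-in fallback.
interface ModelChoice { provider: string; model: string }

interface Config {
  provider?: string;
  model?: string;
  whatsapp?: Partial<ModelChoice>;
  email?: Partial<ModelChoice>;
}

function modelFor(
  cfg: Config,
  channel: "whatsapp" | "email",
  fallback: ModelChoice
): ModelChoice {
  const override = cfg[channel] ?? {};
  return {
    provider: override.provider ?? cfg.provider ?? fallback.provider,
    model: override.model ?? cfg.model ?? fallback.model,
  };
}
```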

OAuth Authentication

GitHub Copilot

lukan copilot-auth
This initiates an OAuth device flow to authenticate with GitHub.

OpenAI Codex

lukan codex-auth
This initiates OAuth authentication with OpenAI Codex.

Ollama Cloud

Ollama Cloud fetches models dynamically from https://ollama.com/api/tags. Vision-capable models are detected automatically (patterns: vision, -vl, multimodal, llava, llama-4).
lukan --provider ollama-cloud --model <model-name>
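The vision-capability check can be sketched with the documented patterns. The pattern list comes from the text above; the function name is illustrative.

```typescript
// Substring patterns the docs list for detecting vision-capable models.
const VISION_PATTERNS = ["vision", "-vl", "multimodal", "llava", "llama-4"];

// Case-insensitive check against the documented patterns.
function isVisionModel(name: string): boolean {
  const lower = name.toLowerCase();
  return VISION_PATTERNS.some((p) => lower.includes(p));
}
```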

OpenAI Compatible

Generic provider for any OpenAI-compatible endpoint (vLLM, Ollama local, LM Studio, etc.). Requires a base URL:
{
  "openaiCompatibleBaseURL": "http://localhost:11434/v1",
  "openaiCompatibleProviderName": "My Local LLM",
  "provider": "openai-compatible"
}
Models are fetched dynamically from the /models endpoint. The API key is optional.
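Deriving the models endpoint from the configured base URL can be sketched like this; the helper name is an assumption, but the /models path is the one documented above.

```typescript
// Build the model-listing URL from an OpenAI-compatible base URL,
// tolerating a trailing slash in the configured value.
function modelsEndpoint(baseURL: string): string {
  return baseURL.replace(/\/+$/, "") + "/models";
}
```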

Lukan Cloud

Managed proxy service that forwards requests to Anthropic. Base URL defaults to https://api.lukan.cloud and can be overridden with the LUKAN_CLOUD_URL environment variable.
lukan --provider lukan-cloud
A 402 response indicates quota exceeded. Visit your Lukan Cloud account to upgrade.
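Client code can surface the quota condition by checking for that status. A minimal sketch, assuming only the 402 behavior and the LUKAN_CLOUD_URL override described above; the error message is illustrative.

```typescript
// Base URL defaults to the managed endpoint, overridable via env var.
const baseURL = process.env.LUKAN_CLOUD_URL ?? "https://api.lukan.cloud";

// Turn the documented 402 quota response into an explicit error.
function assertNotQuotaExceeded(status: number): void {
  if (status === 402) {
    throw new Error(`Quota exceeded for ${baseURL}; upgrade your Lukan Cloud account`);
  }
}
```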