# Bring your own key
Pellucid never bakes in a provider. Plug in xAI, Anthropic, OpenAI, or OpenRouter — no code changes, no vendor lock-in.
Every model call goes through LiteLLM, which means any provider that LiteLLM speaks works in Pellucid. The model identifier and the API key are the only things that change. The agent prompts, JSON schemas, and aggregation logic are provider-agnostic.
## Two ways to set a key
Pellucid resolves credentials in this order, picking the first one it finds:
- **Per-install BYOK row.** Anything saved via `POST /api/v1/settings/provider` (or the `/settings` page in the web app) takes precedence. The plaintext key is encrypted at rest with a per-install Fernet key (gitignored at `apps/api/.pellucid_key`) and decrypted only when a model call is dispatched.
- **Environment variable.** When no BYOK row exists, Pellucid falls back to the standard provider env var (e.g. `XAI_API_KEY`). Useful for headless servers, CI, and Docker.
Either way, the model identifier is stored alongside the key, so you can switch providers without restarting the API.
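The precedence above can be sketched in a few lines of Python. This is an illustrative sketch, not Pellucid's actual internals: `resolve_provider`, `ProviderConfig`, and the shape of `byok_row` are hypothetical names, but the order (BYOK row first, env var second) matches the doc.

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderConfig:
    model: str
    api_key: str
    source: str  # "byok" or "env"

# Maps a model-identifier prefix ("anthropic/...") to its standard env var.
ENV_VARS = {
    "xai": "XAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
}

def resolve_provider(byok_row: Optional[dict], default_model: str) -> ProviderConfig:
    # 1. A saved BYOK row wins; it carries both the model and the decrypted key.
    if byok_row is not None:
        return ProviderConfig(byok_row["model"], byok_row["api_key"], "byok")
    # 2. Otherwise fall back to the env var matching the default model's prefix.
    env_var = ENV_VARS[default_model.split("/", 1)[0]]
    return ProviderConfig(default_model, os.environ.get(env_var, ""), "env")
```

Because the row stores the model alongside the key, swapping the row swaps the provider in one step, with no restart.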
## Supported providers
| Provider | Default model identifier | Env var |
|---|---|---|
| xAI | `xai/grok-4` | `XAI_API_KEY` |
| Anthropic | `anthropic/claude-sonnet-4-6` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai/gpt-5` | `OPENAI_API_KEY` |
| OpenRouter | `openrouter/x-ai/grok-4` | `OPENROUTER_API_KEY` |
Any LiteLLM-supported model works — the table above lists the four we test against. The model identifier is opaque to Pellucid; we forward it as-is.
## Option 1 — environment variable
Best for servers and Docker. Edit `apps/api/.env` (copy it from `.env.example` on first run) and set both the model and the key:
```bash
# Default model — used when no BYOK row exists
PELLUCID_LLM_MODEL=anthropic/claude-sonnet-4-6

# One of the following, matching the model above
ANTHROPIC_API_KEY=sk-ant-...
# OPENAI_API_KEY=sk-...
# XAI_API_KEY=xai-...
# OPENROUTER_API_KEY=sk-or-...
```

Restart `pnpm dev:api` to pick up the new env. The next analyze run will use the new provider — no other changes required.
## Option 2 — paste a key in `/settings`
Best for local dev and demos, especially when multiple people share a machine. Open `http://localhost:3000/settings`, pick a model, and paste the matching key. The web UI calls:
```bash
curl -X POST http://localhost:8000/api/v1/settings/provider \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4-6",
    "api_key": "sk-ant-..."
  }'
# → { "model": "anthropic/...", "has_key": true, "source": "byok" }
```

Verify the active provider with a GET. The `source` field tells you which path was resolved — `"byok"` if a row was found, `"env"` otherwise.
```bash
curl http://localhost:8000/api/v1/settings/provider
# Without BYOK and without an env var:
# → { "model": "xai/grok-4", "has_key": false, "source": "env" }
#
# With a saved BYOK row:
# → { "model": "anthropic/...", "has_key": true, "source": "byok" }
```

To clear a BYOK row and fall back to the env var:

```bash
curl -X DELETE http://localhost:8000/api/v1/settings/provider
```

## How keys are stored
Pellucid uses `cryptography.fernet` with a 32-byte URL-safe key held at `apps/api/.pellucid_key`. The file is created on first boot and gitignored. Plaintext keys never touch the database — only the Fernet ciphertext is persisted, and it is decrypted only at the moment of dispatch.
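A minimal sketch of this storage pattern, using the `cryptography` package's real `Fernet` API. The helper names are hypothetical, not Pellucid's actual internals:

```python
from pathlib import Path

from cryptography.fernet import Fernet

KEY_PATH = Path("apps/api/.pellucid_key")  # per-install key file, gitignored

def load_or_create_key(path: Path = KEY_PATH) -> bytes:
    # Created on first boot; reused on every subsequent start.
    if path.exists():
        return path.read_bytes()
    key = Fernet.generate_key()  # 32-byte URL-safe key
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(key)
    return key

def encrypt_api_key(plaintext: str, key: bytes) -> bytes:
    # Only this ciphertext is ever persisted to the database.
    return Fernet(key).encrypt(plaintext.encode())

def decrypt_api_key(ciphertext: bytes, key: bytes) -> str:
    # Called only at the moment a model call is dispatched.
    return Fernet(key).decrypt(ciphertext).decode()
```

Round-tripping a key through `encrypt_api_key` and `decrypt_api_key` recovers the original plaintext; without the file at `KEY_PATH`, the ciphertext is useless — which is exactly why the file deserves crown-jewel handling.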
Treat `.pellucid_key` as a crown-jewel secret. Losing it means every BYOK row becomes unrecoverable; leaking it means anyone can decrypt your stored API keys. For production deploys, mount it from your secrets manager and back it up independently of the database.

## Picking a model
The Pellucid agents are tuned to expect a frontier-tier reasoning model. We test against:
- xAI Grok 4 — the default. Strong domain reasoning, fast, cheapest of the four for typical SRS-sized inputs.
- Anthropic Claude Sonnet 4.6 — best at long-context documents (full contracts, multi-section policies).
- OpenAI GPT-5 — drop-in alternative; comparable quality.
- OpenRouter — useful for hitting Grok/Claude/GPT from a single key, or for routing to a regional/inference-optimised host.
Smaller or older models (e.g. `gpt-3.5`, `haiku`) tend to skip the JSON schema or hallucinate finding spans. They’ll work, but quality drops sharply. If the analyze run completes with empty agent findings, check the API logs for `JSONDecodeError` — that’s usually the cause.
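A defensive parse along these lines keeps a non-JSON reply from crashing the run and surfaces the failure in the logs. This is a hypothetical helper for illustration; the doc doesn't describe Pellucid's actual handling:

```python
import json
import logging

logger = logging.getLogger("pellucid.agents")

def parse_agent_findings(raw: str) -> list:
    # Frontier models reliably emit the JSON schema; smaller models often
    # wrap it in prose or markdown fences, which json.loads rejects.
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        logger.warning("agent returned non-JSON output; dropping findings")
        return []
    if not isinstance(payload, dict):
        return []
    return payload.get("findings", [])
```

An empty list here is what shows up downstream as "analyze completed with no agent findings" — hence the advice to grep the logs for the decode error first.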
## Multi-tenant deployments
The Session 1 build is single-tenant: there’s exactly one BYOK row. If you’re hosting Pellucid for multiple users, run one API instance per tenant (Docker Compose makes this trivial — see self-hosting) or wait for the per-user provider scope on the roadmap.
## Troubleshooting
- **401 from the provider.** The key is wrong, or the env var name doesn’t match the model prefix: `anthropic/...` looks for `ANTHROPIC_API_KEY`, etc.
- **Agents return no findings.** Check the API logs — usually the model returned non-JSON. Switch to one of the four tested models above.
- **Rate limit / 429.** Pellucid issues four parallel agent calls per analysis. If your provider tier is rate-limited, lower concurrency by running agents serially (a config flag on the roadmap) or switch to a higher tier.
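Until a serial-mode flag lands, the concurrency knob can be sketched as a semaphore around the agent calls. Everything here is hypothetical (the agent names and `run_agent` stub are placeholders for the real model calls), but it shows how `max_concurrency=1` degrades four parallel calls into a serial sequence:

```python
import asyncio

AGENTS = ["agent_a", "agent_b", "agent_c", "agent_d"]  # placeholder names

async def run_agent(name: str, text: str) -> dict:
    # Stub for the real model call; returns an empty finding set.
    await asyncio.sleep(0)
    return {"agent": name, "findings": []}

async def analyze(text: str, max_concurrency: int = 4) -> list:
    # max_concurrency=1 runs the four agents strictly one at a time,
    # which keeps a rate-limited provider tier under its request ceiling.
    sem = asyncio.Semaphore(max_concurrency)

    async def guarded(name: str) -> dict:
        async with sem:
            return await run_agent(name, text)

    return await asyncio.gather(*(guarded(n) for n in AGENTS))
```

Dropping to a higher provider tier removes the need for this entirely; the semaphore just trades latency for staying under the 429 threshold.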