AI Providers

Ignitra supports multiple AI providers. Switch between them by changing two environment variables (AI_PROVIDER and AI_MODEL); no code changes are required.

Supported Providers

OpenAI

Models: gpt-4o, gpt-4o-mini, gpt-4-turbo

Set: AI_PROVIDER=openai, AI_MODEL=gpt-4o-mini

Anthropic (Claude)

Models: claude-sonnet-4-20250514, claude-haiku-4-5-20251001

Set: AI_PROVIDER=anthropic, AI_MODEL=claude-sonnet-4-20250514

Google (Gemini)

Models: gemini-2.0-flash, gemini-1.5-pro

Set: AI_PROVIDER=google, AI_MODEL=gemini-2.0-flash
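Putting the settings above together, switching providers is a two-line edit to your environment file. A minimal sketch, assuming a standard .env file at the project root (the API-key variable names shown in the comment are the conventional ones for each SDK, not confirmed by this page):

```shell
# .env — switch providers by editing these two lines only
AI_PROVIDER=anthropic
AI_MODEL=claude-sonnet-4-20250514

# Each provider still reads its API key from its usual variable, e.g.:
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
# GOOGLE_GENERATIVE_AI_API_KEY=...
```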

How It Works

Ignitra uses the Vercel AI SDK, which provides a unified interface across providers. Provider selection happens in src/lib/ai/provider.ts.

The getModel() function there reads AI_PROVIDER and AI_MODEL from environment variables and returns the matching provider instance. Your chat API route calls this function, so switching providers requires no code changes.
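A minimal sketch of the selection logic inside getModel(). The real file would call the Vercel AI SDK factories — openai() from '@ai-sdk/openai', anthropic() from '@ai-sdk/anthropic', and google() from '@ai-sdk/google' — but stand-in factories are used here so the dispatch is self-contained; the actual contents of src/lib/ai/provider.ts may differ:

```typescript
// Sketch of src/lib/ai/provider.ts — the dispatch pattern, not the real file.
type ModelFactory = (modelId: string) => { provider: string; modelId: string };

// Stand-ins for the SDK factories; in the real file these would be the
// imported openai / anthropic / google functions from the @ai-sdk packages.
const factories: Record<string, ModelFactory> = {
  openai: (id) => ({ provider: 'openai', modelId: id }),
  anthropic: (id) => ({ provider: 'anthropic', modelId: id }),
  google: (id) => ({ provider: 'google', modelId: id }),
};

export function getModel() {
  // Both variables come from the environment, with defaults matching
  // the recommended development setup.
  const provider = process.env.AI_PROVIDER ?? 'openai';
  const modelId = process.env.AI_MODEL ?? 'gpt-4o-mini';

  const factory = factories[provider];
  if (!factory) {
    throw new Error(`Unsupported AI_PROVIDER: ${provider}`);
  }
  return factory(modelId);
}
```

Because the chat route only ever calls getModel(), the env variables are the single point of configuration: changing AI_PROVIDER and AI_MODEL redirects every request to a different SDK factory.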

Recommended Setup

Development: gpt-4o-mini (cheapest, fast)

Production: depends on your use case

  • Best quality: claude-sonnet-4-20250514 or gpt-4o
  • Best speed: gemini-2.0-flash
  • Best value: gpt-4o-mini