Providers

How to set up each LLM provider with Perspt.

OpenAI

Models: GPT-5.2, o3-mini, o1-preview, GPT-4

# Set API key
export OPENAI_API_KEY="sk-..."

# Use with Perspt
perspt chat --model gpt-5.2

Get API Key: platform.openai.com

Anthropic

Models: Claude Opus 4.5, Claude 3.5 Sonnet

export ANTHROPIC_API_KEY="sk-ant-..."
perspt chat --model claude-opus-4.5

Get API Key: console.anthropic.com

Google Gemini

Models: Gemini 3 Flash, Gemini 3 Pro

export GEMINI_API_KEY="..."
perspt chat --model gemini-3-flash

Get API Key: aistudio.google.com

Groq

Models: Llama 3.x (ultra-fast inference)

export GROQ_API_KEY="..."
perspt chat --model llama-3.3-70b

Get API Key: console.groq.com

Best for: Fast prototyping, testing

Cohere

Models: Command R, Command R+

export COHERE_API_KEY="..."
perspt chat --model command-r-plus

Get API Key: dashboard.cohere.com

xAI (Grok)

Models: Grok

export XAI_API_KEY="..."
perspt chat --model grok-2

Get API Key: console.x.ai

DeepSeek

Models: DeepSeek Coder, DeepSeek Chat

export DEEPSEEK_API_KEY="..."
perspt chat --model deepseek-coder

Get API Key: platform.deepseek.com
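
With several providers configured, it is easy to lose track of which keys are exported. The helper below is a sketch (not part of Perspt itself) that reports which of the API key variables listed above are set in the current shell:

```shell
# check_keys: report which provider API key variables are set.
# Helper sketch only -- not part of Perspt.
check_keys() {
  for var in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY \
             GROQ_API_KEY COHERE_API_KEY XAI_API_KEY DEEPSEEK_API_KEY; do
    # Indirect expansion via eval keeps this POSIX-sh compatible
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
      echo "$var: set"
    else
      echo "$var: missing"
    fi
  done
}

check_keys
```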

Ollama (Local)

Models: Llama 3.2, CodeLlama, DeepSeek Coder (local)

# No API key needed
ollama serve
ollama pull llama3.2
perspt chat --model llama3.2

Setup: See Local Models with Ollama
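
Before pointing Perspt at Ollama, it can help to confirm the local server is actually listening (Ollama serves HTTP on localhost:11434 by default). A minimal check, assuming `curl` is available:

```shell
# ollama_ready: succeed if an Ollama server answers at the given base URL
# (defaults to Ollama's standard local address).
ollama_ready() {
  curl -fsS --max-time 2 "${1:-http://localhost:11434}/" >/dev/null 2>&1
}

if ollama_ready; then
  echo "Ollama is running"
else
  echo "Ollama not reachable; start it with: ollama serve" >&2
fi
```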

Provider Comparison

Provider    Speed       Best For                   Cost
OpenAI      Medium      Reasoning, complex tasks   $$$
Anthropic   Medium      Code generation, safety    $$$
Google      Fast        Long context, multimodal   $$
Groq        Ultra-fast  Prototyping, testing       $
Ollama      Variable    Privacy, offline use       Free
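
Each hosted provider above requires its own key variable. A small illustrative helper (hypothetical, not a Perspt feature) that guesses the right variable from the model-name prefixes used in this guide:

```shell
# key_var_for_model: guess which API key variable a model name needs,
# based on the naming prefixes used in this guide. Hypothetical helper.
key_var_for_model() {
  case "$1" in
    gpt-*|o1-*|o3-*) echo "OPENAI_API_KEY" ;;
    claude-*)        echo "ANTHROPIC_API_KEY" ;;
    gemini-*)        echo "GEMINI_API_KEY" ;;
    llama-*)         echo "GROQ_API_KEY" ;;      # hosted Llama via Groq
    command-*)       echo "COHERE_API_KEY" ;;
    grok-*)          echo "XAI_API_KEY" ;;
    deepseek-*)      echo "DEEPSEEK_API_KEY" ;;
    *)               echo "" ;;                   # local/unknown: no key
  esac
}

key_var_for_model claude-opus-4.5   # -> ANTHROPIC_API_KEY
```

Note that local Ollama tags such as `llama3.2` carry no dash and fall through to the empty default, matching the "no API key needed" setup above.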

Agent Mode Recommendations

For optimal SRBN performance:

# architect: deep reasoning | actuator: strong coding
# verifier: fast analysis   | speculator: ultra-fast drafts
perspt agent \
  --architect-model gpt-5.2 \
  --actuator-model claude-opus-4.5 \
  --verifier-model gemini-3-pro \
  --speculator-model gemini-3-flash \
  "Your task"

See Also