Configuration Guide¶
Perspt offers flexible configuration options to customize your AI chat experience. This guide covers all configuration methods, from zero-config automatic provider detection to advanced JSON configurations.
Automatic Provider Detection (Zero-Config)¶
NEW! Perspt now features intelligent automatic provider detection that makes getting started as simple as setting an environment variable.
How It Works¶
When you launch Perspt without specifying a provider, it automatically scans your environment variables for supported provider API keys and selects the first one found based on this priority order:
1. OpenAI (OPENAI_API_KEY) - Default model: gpt-4o-mini
2. Anthropic (ANTHROPIC_API_KEY) - Default model: claude-3-5-sonnet-20241022
3. Google Gemini (GEMINI_API_KEY) - Default model: gemini-1.5-flash
4. Groq (GROQ_API_KEY) - Default model: llama-3.1-70b-versatile
5. Cohere (COHERE_API_KEY) - Default model: command-r-plus
6. XAI (XAI_API_KEY) - Default model: grok-beta
7. DeepSeek (DEEPSEEK_API_KEY) - Default model: deepseek-chat
8. Ollama (auto-detected if running) - Default model: llama3.2
Quick Examples¶
# Option 1: OpenAI (highest priority)
export OPENAI_API_KEY="sk-your-key"
perspt # Auto-detects OpenAI, uses gpt-4o-mini
# Option 2: Multiple providers - OpenAI takes priority
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
perspt # Uses OpenAI (higher priority)
# Option 3: Force a specific provider
perspt --provider anthropic # Uses Anthropic even if OpenAI key exists
# Option 4: Ollama (no API key needed)
# Just ensure Ollama is running: ollama serve
perspt # Auto-detects Ollama if no other keys found
What Happens When No Providers Are Found¶
If no API keys are detected, Perspt displays helpful setup instructions:
❌ No LLM provider configured!
To get started, either:
1. Set an environment variable for a supported provider:
• OPENAI_API_KEY=sk-your-key
• ANTHROPIC_API_KEY=sk-ant-your-key
• GEMINI_API_KEY=your-key
# ... (shows all supported providers)
2. Use command line arguments:
perspt --provider openai --api-key sk-your-key
Manual Configuration Methods¶
For more control or advanced setups, Perspt supports traditional configuration methods with the following priority order (highest to lowest):
1. Command-line arguments (highest priority)
2. Configuration file (config.json)
3. Environment variables
4. Automatic provider detection
5. Default values (lowest priority)
This means command-line arguments will override config file settings, which override environment variables, and so on.
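For example, assuming a config.json that sets openai as its provider_type, a command-line flag still wins (a sketch illustrating the priority order, not output from a specific Perspt version):
# config.json contains "provider_type": "openai"
export ANTHROPIC_API_KEY="sk-ant-your-key"
perspt --config config.json --provider-type anthropic
# Anthropic is used: the --provider-type flag outranks both the config file and the environment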
Environment Variables¶
Environment variables are the simplest way to configure Perspt and enable automatic provider detection.
API Keys (Auto-Detection Enabled)¶
Setting any of these environment variables enables automatic provider detection:
# OpenAI (Priority 1 - will be auto-selected first)
export OPENAI_API_KEY="sk-your-openai-api-key-here"
# Anthropic (Priority 2)
export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
# Google Gemini (Priority 3)
export GEMINI_API_KEY="your-google-api-key-here"
# Groq (Priority 4)
export GROQ_API_KEY="your-groq-api-key-here"
# Cohere (Priority 5)
export COHERE_API_KEY="your-cohere-api-key-here"
# XAI (Priority 6)
export XAI_API_KEY="your-xai-api-key-here"
# DeepSeek (Priority 7)
export DEEPSEEK_API_KEY="your-deepseek-api-key-here"
# Ollama (Priority 8 - no API key needed, auto-detected if service is running)
# Just run: ollama serve
Note
Automatic Detection: Simply set any of these environment variables and run perspt with no arguments. Perspt will automatically detect and use the highest priority provider available.
Legacy Provider Settings (Manual Override)¶
These variables override automatic detection and force manual configuration:
# Default provider
export PERSPT_PROVIDER="openai"
# Default model
export PERSPT_MODEL="gpt-4o-mini"
# Custom API base URL
export PERSPT_API_BASE="https://api.openai.com/v1"
Configuration File¶
For persistent settings, create a config.json file:
Basic Configuration¶
{
"api_key": "your-api-key-here",
"default_model": "gpt-4o-mini",
"default_provider": "openai",
"provider_type": "openai"
}
Complete Configuration¶
{
"api_key": "sk-your-openai-api-key",
"default_model": "gpt-4o-mini",
"default_provider": "openai",
"provider_type": "openai",
"providers": {
"openai": "https://api.openai.com/v1",
"anthropic": "https://api.anthropic.com",
"google": "https://generativelanguage.googleapis.com/v1beta",
"azure": "https://your-resource.openai.azure.com/"
},
"ui": {
"theme": "dark",
"show_timestamps": true,
"markdown_rendering": true,
"auto_scroll": true
},
"behavior": {
"stream_responses": true,
"input_queuing": true,
"auto_save_history": false,
"max_history_length": 1000
},
"advanced": {
"request_timeout": 30,
"retry_attempts": 3,
"retry_delay": 1.0,
"concurrent_requests": 1
}
}
Configuration File Locations¶
Perspt searches for configuration files in this order:
1. Specified path: perspt --config /path/to/config.json
2. Current directory: ./config.json
3. User config directory:
   - Linux: ~/.config/perspt/config.json
   - macOS: ~/Library/Application Support/perspt/config.json
   - Windows: %APPDATA%/perspt/config.json
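For example, to make a configuration available on every launch on Linux, you could copy it into the user config directory listed above (the macOS and Windows paths differ as shown):
# Install a config file into the default Linux location
mkdir -p ~/.config/perspt
cp config.json ~/.config/perspt/config.json
perspt   # now picks up ~/.config/perspt/config.json automatically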
Provider Configuration¶
OpenAI¶
export OPENAI_API_KEY="sk-your-key-here"
export PERSPT_PROVIDER="openai"
export PERSPT_MODEL="gpt-4o-mini"
{
"api_key": "sk-your-key-here",
"provider_type": "openai",
"default_model": "gpt-4o-mini",
"providers": {
"openai": "https://api.openai.com/v1"
}
}
perspt --provider-type openai \
--model-name gpt-4o-mini \
--api-key "sk-your-key-here"
Available Models:
- gpt-4.1 - Latest and most advanced GPT model
- gpt-4o - Latest GPT-4 Omni model
- gpt-4o-mini - Faster, cost-effective GPT-4 Omni
- o1-preview - Advanced reasoning model
- o1-mini - Efficient reasoning model
- o3-mini - Next-generation reasoning model
- gpt-4-turbo - Latest GPT-4 Turbo
- gpt-4 - Standard GPT-4
Anthropic¶
export ANTHROPIC_API_KEY="your-key-here"
export PERSPT_PROVIDER="anthropic"
export PERSPT_MODEL="claude-3-sonnet-20240229"
{
"api_key": "your-key-here",
"provider_type": "anthropic",
"default_model": "claude-3-sonnet-20240229",
"providers": {
"anthropic": "https://api.anthropic.com"
}
}
perspt --provider-type anthropic \
--model-name claude-3-sonnet-20240229 \
--api-key "your-key-here"
Available Models:
- claude-3-opus-20240229 - Most capable Claude model
- claude-3-sonnet-20240229 - Balanced performance and speed
- claude-3-haiku-20240307 - Fastest Claude model
Google (Gemini)¶
export GOOGLE_API_KEY="your-key-here"
export PERSPT_PROVIDER="google"
export PERSPT_MODEL="gemini-pro"
{
"api_key": "your-key-here",
"provider_type": "google",
"default_model": "gemini-pro",
"providers": {
"google": "https://generativelanguage.googleapis.com/v1beta"
}
}
perspt --provider-type google \
--model-name gemini-pro \
--api-key "your-key-here"
Available Models:
- gemini-pro - Google’s most capable model
- gemini-pro-vision - Multimodal capabilities
Command-Line Options¶
Perspt supports extensive command-line configuration:
Basic Options¶
perspt [OPTIONS]
Option | Description
---|---
--config | Path to configuration file
--provider-type | AI provider (openai, anthropic, google, groq, cohere, xai, deepseek, ollama)
--model-name | Specific model to use
--api-key | API key for authentication
--list-models | List available models for provider
--help | Show help information
--version | Show version information
Advanced Options¶
# Custom API endpoint
perspt --api-base "https://your-custom-endpoint.com/v1"
# Increase request timeout
perspt --timeout 60
# Disable streaming responses
perspt --no-stream
# Set maximum retries
perspt --max-retries 5
# Custom user agent
perspt --user-agent "MyApp/1.0"
Examples¶
# Use specific OpenAI model
perspt --provider-type openai --model-name gpt-4
# Use Anthropic with custom timeout
perspt --provider-type anthropic \
--model-name claude-3-sonnet-20240229 \
--timeout 45
# Use custom configuration file
perspt --config ~/.perspt/work-config.json
# List available models
perspt --provider-type openai --list-models
UI Customization¶
Interface Settings¶
Configure the terminal interface appearance:
{
"ui": {
"theme": "dark",
"show_timestamps": true,
"timestamp_format": "%H:%M",
"markdown_rendering": true,
"syntax_highlighting": true,
"auto_scroll": true,
"scroll_buffer": 1000,
"word_wrap": true,
"show_token_count": false
}
}
Color Themes¶
Customize colors for different message types:
{
"ui": {
"colors": {
"user_message": "#60a5fa",
"assistant_message": "#10b981",
"error_message": "#ef4444",
"warning_message": "#f59e0b",
"info_message": "#8b5cf6",
"timestamp": "#6b7280",
"border": "#374151",
"background": "#111827"
}
}
}
Behavior Settings¶
Streaming and Responses¶
{
"behavior": {
"stream_responses": true,
"input_queuing": true,
"auto_retry_on_error": true,
"show_thinking_indicator": true,
"preserve_context": true
}
}
History Management¶
{
"behavior": {
"auto_save_history": true,
"history_file": "~/.perspt/chat_history.json",
"max_history_length": 1000,
"history_compression": true,
"clear_history_on_exit": false
}
}
Advanced Configuration¶
Network Settings¶
{
"advanced": {
"request_timeout": 30,
"connect_timeout": 10,
"retry_attempts": 3,
"retry_delay": 1.0,
"retry_exponential_backoff": true,
"max_concurrent_requests": 1,
"user_agent": "Perspt/0.4.0",
"proxy": {
"http": "http://proxy:8080",
"https": "https://proxy:8080"
}
}
}
Security Settings¶
{
"security": {
"verify_ssl": true,
"api_key_masking": true,
"log_requests": false,
"log_responses": false,
"encrypt_history": false
}
}
Performance Tuning¶
{
"performance": {
"buffer_size": 8192,
"chunk_size": 1024,
"memory_limit": "100MB",
"cache_responses": false,
"preload_models": false
}
}
Multiple Configurations¶
Work vs Personal¶
Create separate configurations for different contexts:
work-config.json:
{
"api_key": "sk-work-key-here",
"provider_type": "openai",
"default_model": "gpt-4",
"ui": {
"theme": "professional",
"show_timestamps": true
},
"behavior": {
"auto_save_history": true,
"history_file": "~/.perspt/work_history.json"
}
}
personal-config.json:
{
"api_key": "sk-personal-key-here",
"provider_type": "anthropic",
"default_model": "claude-3-sonnet-20240229",
"ui": {
"theme": "vibrant",
"show_timestamps": false
},
"behavior": {
"auto_save_history": false
}
}
Usage:
# Work configuration
perspt --config work-config.json
# Personal configuration
perspt --config personal-config.json
# Create aliases for convenience
alias work-ai="perspt --config ~/.perspt/work-config.json"
alias personal-ai="perspt --config ~/.perspt/personal-config.json"
Configuration Validation¶
Perspt validates your configuration and provides helpful error messages:
# Validate configuration without starting
perspt --config config.json --validate
# Check configuration and list available models
perspt --config config.json --list-models
Common validation errors:
Invalid API key format: Ensure your API key follows the correct format
Missing required fields: Some providers require specific configuration
Invalid model names: Use --list-models to see available options
Network connectivity: Check internet connection and proxy settings
Configuration Templates¶
Generate template configurations for different use cases:
# Generate basic template
perspt --generate-config basic > config.json
# Generate advanced template
perspt --generate-config advanced > advanced-config.json
# Generate provider-specific template
perspt --generate-config openai > openai-config.json
Migration and Import¶
From Other Tools¶
Import configurations from similar tools:
# Import from environment variables
perspt --import-env > config.json
# Import from ChatGPT CLI config
perspt --import chatgpt-cli ~/.chatgpt-cli/config.yaml
# Import from OpenAI CLI config
perspt --import openai-cli ~/.openai/config.json
Backup and Restore¶
# Backup current configuration
cp ~/.config/perspt/config.json ~/.config/perspt/config.backup.json
# Restore from backup
cp ~/.config/perspt/config.backup.json ~/.config/perspt/config.json
# Export configuration with history
perspt --export-config --include-history > full-backup.json
Best Practices¶
Security¶
Never commit API keys to version control
Use environment variables for sensitive data (see the sketch after this list)
Rotate API keys regularly
Use separate keys for different projects
Enable API key masking in logs
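One common pattern that follows the first two guidelines (a general shell sketch, not a Perspt-specific feature) is to keep keys in a separate file that is excluded from version control and source it before launching:
# ~/.perspt/secrets.env (never committed; add it to .gitignore if it lives inside a repository)
export OPENAI_API_KEY="sk-your-key"

# Load the keys, then start Perspt with automatic provider detection
source ~/.perspt/secrets.env
perspt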
Organization¶
Use descriptive config names (work-config.json, research-config.json)
Create aliases for frequently used configurations
Document your configurations with comments (where supported)
Use version control for non-sensitive configuration parts
Regular backups of important configurations
Performance¶
Set appropriate timeouts based on your network (see the example after this list)
Configure retry settings for reliability
Use streaming for better user experience
Limit history length to prevent memory issues
Enable compression for large chat histories
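As a starting point, you can adjust these values directly in an existing config file. For example, with jq (already used in the Troubleshooting section) you could raise the request timeout, assuming your configuration lives at ~/.config/perspt/config.json:
# Bump advanced.request_timeout to 60 seconds
jq '.advanced.request_timeout = 60' ~/.config/perspt/config.json > /tmp/perspt-config.json \
  && mv /tmp/perspt-config.json ~/.config/perspt/config.json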
Troubleshooting¶
Common Issues¶
Configuration not found:
# Check current working directory
ls -la config.json
# Check user config directory
ls -la ~/.config/perspt/
# Use absolute path
perspt --config /full/path/to/config.json
Invalid JSON format:
# Validate JSON syntax
cat config.json | python -m json.tool
# Or use jq
jq . config.json
API key not working:
# Test API key directly
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
"https://api.openai.com/v1/models"
# Check environment variable
echo $OPENAI_API_KEY
Provider connection issues:
# Test network connectivity
ping api.openai.com
# Check proxy settings
echo $HTTP_PROXY $HTTPS_PROXY
# Test with verbose output
perspt --config config.json --verbose
Getting Help¶
If you need assistance with configuration:
Check the examples in this guide
Use the validation commands to check your config
Review the error messages - they often contain helpful hints
Ask the community on GitHub Discussions
File an issue if you find a bug in configuration handling
See also
Getting Started - Basic setup and first run
AI Providers - Provider-specific guides
Troubleshooting - Common issues and solutions
Advanced Features - Advanced usage patterns