Troubleshooting¶
This comprehensive troubleshooting guide helps you diagnose and resolve issues with Perspt’s genai crate integration, provider connectivity, and advanced features.
Quick Diagnostics¶
Start with these diagnostic commands to check system status:
# Check provider connectivity and model availability
perspt --provider-type openai --list-models
# Validate specific model
perspt --provider-type anthropic --model claude-3-5-sonnet-20241022 --list-models
# Test with minimal configuration
perspt --api-key your-key --provider-type openai --model gpt-3.5-turbo
Environment Variable Check
# Check if API keys are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
echo $GOOGLE_API_KEY
# Verify genai crate can access providers
export RUST_LOG=debug
perspt --provider-type openai --list-models
Common Issues¶
GenAI Crate Integration Issues¶
Provider Authentication Failures
Error: Authentication failed for provider 'openai'
Caused by: Invalid API key
Solutions:
Verify API key format:
# OpenAI keys start with 'sk-'
echo $OPENAI_API_KEY | head -c 3   # Should show 'sk-'

# Anthropic keys start with 'sk-ant-'
echo $ANTHROPIC_API_KEY | head -c 7   # Should show 'sk-ant-'
Test API key directly:
# Test OpenAI API key
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models

# Test Anthropic API key
curl -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/models
Check API key permissions and billing:

- Ensure the API key has model access permissions
- Verify the account has sufficient credits and billing set up
- Check for rate limiting or usage quotas
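If keys keep failing, a small helper makes the prefix check repeatable. This is a sketch; the prefixes reflect current OpenAI/Anthropic conventions and may change:

```shell
# Check that a key starts with the expected provider prefix.
check_key_prefix() {
  key="$1"
  prefix="$2"
  case "$key" in
    "$prefix"*) echo "prefix OK" ;;
    *)          echo "unexpected prefix"; return 1 ;;
  esac
}

# Usage:
#   check_key_prefix "$OPENAI_API_KEY" "sk-"
#   check_key_prefix "$ANTHROPIC_API_KEY" "sk-ant-"
```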
Model Validation Failures
Error: Model 'gpt-4.1' not available for provider 'openai'
Available models: gpt-3.5-turbo, gpt-4, gpt-4-turbo...
Solutions:
Check model availability:
# List all available models for the provider
perspt --provider-type openai --list-models

# Search for a specific model
perspt --provider-type openai --list-models | grep gpt-4
Use correct model names:
# Correct model names (case-sensitive)
perspt --provider-type openai --model gpt-4o-mini   # ✓ Correct
perspt --provider-type openai --model GPT-4O-Mini   # ✗ Wrong case
perspt --provider-type openai --model gpt4o-mini    # ✗ Missing hyphen
Check provider-specific model access:

- Some models require special access (e.g., GPT-4, Claude Opus)
- Verify your account tier supports the requested model
- Check if the model is in beta/preview status
Streaming Connection Issues
Error: Streaming connection interrupted
Caused by: Connection reset by peer
Solutions:
Network connectivity check:
# Test basic connectivity
ping api.openai.com
ping api.anthropic.com

# Check for proxy/firewall issues
curl -I https://api.openai.com/v1/models
Provider service status:

- OpenAI: https://status.openai.com
- Anthropic: https://status.anthropic.com
- Google AI: https://status.google.com
Adjust streaming settings:
{
  "provider_type": "openai",
  "default_model": "gpt-4o-mini",
  "stream_timeout": 30,
  "retry_attempts": 3,
  "buffer_size": 1024
}
Invalid Configuration Syntax

Common syntax errors:
{
  "provider": "openai",  // ❌ Comments not allowed in JSON
  "api_key": "sk-...",   // ❌ Trailing comma
}
Correct format:
{
  "provider": "openai",
  "api_key": "sk-..."
}
Missing Required Fields:
{
  "provider": "openai"
  // ❌ Missing api_key
}
Solution: Ensure all required fields are present:
{
  "provider": "openai",
  "api_key": "your-api-key",
  "model": "gpt-4"
}
Configuration File Not Found
Error: Configuration file not found at ~/.config/perspt/config.json
Solutions:
Create the configuration directory:
mkdir -p ~/.config/perspt
Create a basic configuration file:
cat > ~/.config/perspt/config.json << EOF
{
  "provider": "openai",
  "api_key": "your-api-key",
  "model": "gpt-4"
}
EOF
Specify a custom configuration path:
perspt --config /path/to/your/config.json
API Connection Issues¶
Invalid API Key
Error: Authentication failed - Invalid API key
Solutions:
Verify API key format:
# OpenAI keys start with 'sk-'
# Anthropic keys start with 'sk-ant-'
# Check your provider's documentation
Test API key manually:
# OpenAI
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://api.openai.com/v1/models

# Anthropic
curl -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/models
Check API key permissions:

- Ensure the key has the necessary permissions
- Check if the key is associated with the correct organization
- Verify the key hasn't expired
Network Connectivity Issues
Error: Failed to connect to API endpoint
Solutions:
Check internet connectivity:
ping google.com
curl -I https://api.openai.com
Verify firewall/proxy settings:
# Check if behind a corporate firewall
echo $HTTP_PROXY
echo $HTTPS_PROXY
Test with different endpoints:
# Try different base URLs
curl https://api.openai.com/v1/models
curl https://api.anthropic.com/v1/models
Configure proxy if needed:
{
  "provider": "openai",
  "proxy": {
    "http": "http://proxy.company.com:8080",
    "https": "https://proxy.company.com:8080"
  }
}
Rate Limiting
Error: Rate limit exceeded
Solutions:
Wait and retry:

- Most rate limits reset within minutes
- Implement exponential backoff
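The backoff advice above can be sketched as a small wrapper. The `perspt` invocation in the usage note is just a placeholder for whichever command hit the limit:

```shell
# Retry a command, doubling the wait after each failure.
retry_with_backoff() {
  max_attempts=5
  delay=1
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0            # command succeeded
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))  # exponential backoff
    attempt=$((attempt + 1))
  done
  return 1                # all attempts exhausted
}

# Usage:
#   retry_with_backoff perspt --provider-type openai --list-models
```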
Check rate limits:
# Check OpenAI rate limits
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://api.openai.com/v1/usage
Optimize requests:
{
  "rate_limiting": {
    "requests_per_minute": 50,
    "delay_between_requests": 1.2,
    "max_retries": 3
  }
}
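Note that `delay_between_requests` follows directly from the request budget: 60 seconds divided by 50 requests per minute is 1.2 seconds, which a one-liner can confirm:

```shell
# 60s / requests_per_minute = minimum delay between requests
awk -v rpm=50 'BEGIN { printf "%.1f\n", 60 / rpm }'   # -> 1.2
```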
Upgrade your API plan:

- Consider higher-tier plans for increased limits
- Contact provider support for enterprise limits
Model and Response Issues¶
Model Not Available
Error: Model 'gpt-5' not found
Solutions:
Check available models:
> /list-models
Verify model name spelling:
{
  "model": "gpt-4-turbo"   // ✓ Correct
}

{
  "model": "gpt-4-turob"   // ❌ Typo
}
Check provider model availability:

- Some models may be region-specific
- Newer models might not be available to all users
Slow Responses
Causes and solutions:
Large context windows:
{
  "max_tokens": 1000,                // ✓ Reasonable
  "conversation_history_limit": 20   // ✓ Limit history
}
Network latency:
# Test latency to the provider
ping api.openai.com
Provider server load:

- Check provider status pages
- Try different models or regions
Unexpected Responses
AI responses seem off-topic or inappropriate
Solutions:
Review system prompt:
{
  "system_prompt": "You are a helpful assistant..."   // Clear instructions
}
Adjust model parameters:
{
  "temperature": 0.3,        // Lower for more focused responses
  "top_p": 0.8,              // Reduce randomness
  "frequency_penalty": 0.2   // Reduce repetition
}
Clear conversation history:
> /clear
Local Model Issues¶
Ollama Connection Failed
Error: Failed to connect to Ollama at localhost:11434
Solutions:
Check if Ollama is running:
# Start Ollama
ollama serve

# Check if it is running
curl http://localhost:11434/api/tags
Verify model is installed:
ollama list
ollama pull llama2:7b   # Install if missing
Check port configuration:
{
  "provider": "ollama",
  "base_url": "http://localhost:11434"   // Correct port
}
Insufficient Memory/GPU
Error: Out of memory when loading model
Solutions:
Use smaller models:
# Instead of a 13B model, use a 7B one
ollama pull llama2:7b
ollama pull mistral:7b
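To judge which model fits before pulling it, a rough back-of-envelope estimate helps. The formula below (params × bits / 8, plus ~20% overhead) is a rule of thumb, not an official Ollama figure:

```shell
# Rough RAM estimate in GB: params(B) * bits / 8, plus ~20% overhead.
# Integer shell arithmetic, so results are approximate.
estimate_ram_gb() {
  params_b="$1"   # parameters in billions (e.g. 7 for a 7B model)
  bits="$2"       # quantization width (4 for q4, 8 for q8, 16 for fp16)
  echo $(( params_b * bits * 12 / 8 / 10 ))
}

estimate_ram_gb 7 4    # 7B at 4-bit  -> ~4 GB
estimate_ram_gb 13 4   # 13B at 4-bit -> ~7 GB
```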
Adjust GPU layers:
{
  "provider": "ollama",
  "options": {
    "num_gpu": 0,     // Use CPU only
    "num_thread": 4   // Limit CPU threads
  }
}
Monitor system resources:
# Check memory usage
htop
nvidia-smi   # For GPU usage
Platform-Specific Issues¶
macOS Issues¶
Gatekeeper Blocking Execution
"perspt" cannot be opened because it is from an unidentified developer
Solution:
sudo xattr -rd com.apple.quarantine /path/to/perspt
Homebrew Installation Issues
# Update Homebrew
brew update
brew upgrade
# Clear caches
brew cleanup
# Reinstall if needed
brew uninstall perspt
brew install perspt
Linux Issues¶
Missing Shared Libraries
error while loading shared libraries: libssl.so.1.1
Solutions:
# Ubuntu/Debian
sudo apt update
sudo apt install libssl1.1 libssl-dev
# Fedora/RHEL
sudo dnf install openssl-libs openssl-devel
# Check library dependencies
ldd /path/to/perspt
Permission Issues
# Make executable
chmod +x perspt
# Install system-wide
sudo cp perspt /usr/local/bin/
Windows Issues¶
PowerShell Execution Policy
# Check current policy
Get-ExecutionPolicy
# Set policy to allow local scripts
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Windows Defender False Positive
Add Perspt to Windows Defender exclusions
Download from official sources only
Verify file hashes if available
Advanced Troubleshooting¶
Debug Mode¶
Enable detailed logging:
{
"debug": {
"enabled": true,
"log_level": "trace",
"log_file": "~/.config/perspt/debug.log"
}
}
Run with verbose output:
perspt --verbose --debug
Log Analysis¶
Check log files for detailed error information:
# View recent logs
tail -f ~/.config/perspt/perspt.log
# Search for specific errors
grep -i "error" ~/.config/perspt/perspt.log
# Analyze API calls
grep -i "api" ~/.config/perspt/debug.log
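When a log is large, grouping repeated error lines makes the dominant failure obvious. This small helper is a sketch that works on any plain-text log:

```shell
# Group repeated error messages so the most frequent failure stands out.
summarize_errors() {
  grep -i "error" "$1" | sort | uniq -c | sort -rn
}

# Usage:
#   summarize_errors ~/.config/perspt/perspt.log | head
```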
Network Debugging¶
Capture network traffic:
# Using tcpdump (Linux/macOS)
sudo tcpdump -i any -n host api.openai.com
# Using netstat
netstat -an | grep :443
Test with curl:
# Test OpenAI API
curl -v -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"Hello"}]}' \
https://api.openai.com/v1/chat/completions
Configuration Debugging¶
Validate configuration:
# Check JSON syntax
python3 -c "import json; print(json.load(open('config.json')))"
# Validate with Perspt
perspt --validate-config
Test minimal configuration:
{
"provider": "openai",
"api_key": "your-key",
"model": "gpt-4o-mini"
}
Performance Debugging¶
Monitor resource usage:
# Monitor CPU and memory
top -p $(pgrep perspt)
# Monitor disk I/O
iotop -p $(pgrep perspt)
Profile network usage:
# Monitor per-process bandwidth usage (requires a tool such as nethogs)
sudo nethogs
Recovery Procedures¶
Reset Configuration¶
Backup current configuration:
cp ~/.config/perspt/config.json ~/.config/perspt/config.json.backup
Reset to defaults:
rm ~/.config/perspt/config.json
perspt --create-config
Restore from backup if needed:
cp ~/.config/perspt/config.json.backup ~/.config/perspt/config.json
Clear Cache and Data¶
# Clear conversation history
rm -rf ~/.config/perspt/history/
# Clear cache
rm -rf ~/.config/perspt/cache/
# Clear temporary files
rm -rf /tmp/perspt*
Complete Reinstallation¶
# Remove all Perspt data
rm -rf ~/.config/perspt/
rm -rf ~/.local/share/perspt/
# Uninstall and reinstall
# (method depends on installation method)
Getting Help¶
Community Support¶
GitHub Issues: Report bugs and feature requests
Discussions: Ask questions and share tips
Discord/Slack: Real-time community support
Reporting Issues¶
When reporting issues, include:
System information:
perspt --version
uname -a   # or systeminfo on Windows
Configuration (sanitized):
{
  "provider": "openai",
  "model": "gpt-4",
  "api_key": "sk-***redacted***"
}
Error messages (full text)
Steps to reproduce
Expected vs actual behavior
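A sanitized config like the example above can be produced mechanically. This sed sketch assumes the key field is literally named `api_key`:

```shell
# Mask the api_key value before sharing a config file.
redact_config() {
  sed -E 's/("api_key"[[:space:]]*:[[:space:]]*")[^"]+/\1sk-***redacted***/' "$1"
}

# Usage:
#   redact_config ~/.config/perspt/config.json
```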
Professional Support¶
For enterprise users:
Priority support tickets
Direct communication channels
Custom configuration assistance
Integration consulting
Provider-Specific Troubleshooting¶
OpenAI Provider Issues¶
Authentication and API Key Problems
Error: Invalid API key for OpenAI
Error: Rate limit exceeded for model gpt-4
Solutions:
API Key Validation:
# Verify OpenAI API key format (should start with 'sk-')
echo $OPENAI_API_KEY | head -c 3   # Should show 'sk-'

# Test the API key with curl
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
Rate Limiting Management:
# Use tier-appropriate models
perspt --provider-type openai --model gpt-3.5-turbo   # Lower tier
perspt --provider-type openai --model gpt-4o-mini     # Tier 1+
perspt --provider-type openai --model gpt-4           # Tier 3+
Quota and Billing Issues:

- Check the OpenAI dashboard for usage limits
- Verify the payment method is valid
- Monitor usage to avoid unexpected charges
Model Access Issues
Error: Model 'o1-preview' not available
Error: Insufficient quota for GPT-4
Solutions:
Model Tier Requirements:
# Tier 1 models (widely available)
perspt --provider-type openai --model gpt-3.5-turbo
perspt --provider-type openai --model gpt-4o-mini

# Tier 2+ models (higher usage requirements)
perspt --provider-type openai --model gpt-4
perspt --provider-type openai --model gpt-4-turbo

# Special access models (invitation/waitlist)
perspt --provider-type openai --model o1-preview
perspt --provider-type openai --model o1-mini
Reasoning Model Limitations:

- o1 models have special usage patterns
- Expect higher latency while the model reasons
- May have stricter rate limits
Anthropic Provider Issues¶
Claude Model Access
Error: Model 'claude-3-opus-20240229' not available
Error: Anthropic API key authentication failed
Solutions:
API Key Format:
# Anthropic keys start with 'sk-ant-'
echo $ANTHROPIC_API_KEY | head -c 7   # Should show 'sk-ant-'

# Test with curl
curl -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/models
Model Availability:
# Generally available models
perspt --provider-type anthropic --model claude-3-5-sonnet-20241022
perspt --provider-type anthropic --model claude-3-5-haiku-20241022

# Request access for Opus through the Anthropic Console
perspt --provider-type anthropic --model claude-3-opus-20240229
Rate Limiting:

- Anthropic has strict rate limits for new accounts
- Build up usage history to earn higher limits
- Use the Haiku model for testing and development
Google AI (Gemini) Provider Issues¶
API Key and Setup Problems
Error: Google AI API key not valid
Error: Gemini model access denied
Solutions:
API Key Configuration:
# Get an API key from Google AI Studio
export GOOGLE_API_KEY="your-api-key"

# Alternative environment variable
export GEMINI_API_KEY="your-api-key"

# Test API access
curl -H "Content-Type: application/json" \
  "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"
Model Selection:
# Recommended models
perspt --provider-type google --model gemini-1.5-flash     # Fast, cost-effective
perspt --provider-type google --model gemini-1.5-pro       # Balanced capability
perspt --provider-type google --model gemini-1.5-pro-exp   # Experimental features
Geographic Restrictions:

- Some Gemini models have geographic limitations
- Check Google AI availability in your region
- Use a VPN only if necessary and allowed by Google's terms
Groq Provider Issues¶
Service Availability
Error: Groq service temporarily unavailable
Error: Model inference timeout
Solutions:
Service Reliability:

- Groq prioritizes speed over availability
- Configure fallback providers for production use
- Monitor the Groq status page for outages
Model Selection:
# Fast inference models
perspt --provider-type groq --model llama-3.1-8b-instant
perspt --provider-type groq --model mixtral-8x7b-32768
perspt --provider-type groq --model gemma-7b-it
Timeout Configuration:
{
  "provider_type": "groq",
  "timeout": 30,
  "retry_attempts": 2,
  "fallback_provider": "openai"
}
Cohere Provider Issues¶
API Integration Problems
Error: Cohere API authentication failed
Error: Model 'command-r-plus' not accessible
Solutions:
API Key Setup:
export COHERE_API_KEY="your-api-key"

# Test API access
curl -H "Authorization: Bearer $COHERE_API_KEY" \
  https://api.cohere.ai/v1/models
Model Access:
# Available Cohere models
perspt --provider-type cohere --model command-r
perspt --provider-type cohere --model command-r-plus
perspt --provider-type cohere --model command-light
XAI (Grok) Provider Issues¶
Grok Model Access
Error: XAI API key invalid
Error: Grok model not available
Solutions:
API Configuration:
export XAI_API_KEY="your-api-key"

# Check available models
perspt --provider-type xai --list-models
Model Selection:
# Available Grok models
perspt --provider-type xai --model grok-beta
Ollama (Local) Provider Issues¶
Service Connection Problems
Error: Could not connect to Ollama server
Error: Model not found in Ollama
Solutions:
Ollama Service Management:
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start the Ollama service
ollama serve

# Start as a background service (macOS)
brew services start ollama
Model Management:
# List installed models
ollama list

# Install popular models
ollama pull llama3.1:8b
ollama pull mistral:7b
ollama pull codellama:7b

# Remove unused models to save space
ollama rm unused-model
Resource Optimization:
# Check system resources
htop
nvidia-smi   # For GPU users

# Use smaller models on limited hardware
ollama pull llama3.2:3b   # 3B parameters
ollama pull phi3:mini     # Microsoft Phi-3 Mini
Configuration Tuning:
{
  "provider_type": "ollama",
  "base_url": "http://localhost:11434",
  "options": {
    "num_gpu": 1,       // Number of GPU layers
    "num_thread": 8,    // CPU threads
    "num_ctx": 4096,    // Context window
    "temperature": 0.7,
    "top_p": 0.9
  }
}
Performance Optimization¶
Response Time Optimization¶
Model Selection for Speed
# Fastest models by provider
perspt --provider-type groq --model llama-3.1-8b-instant # Groq (fastest)
perspt --provider-type openai --model gpt-4o-mini # OpenAI (fast)
perspt --provider-type google --model gemini-1.5-flash # Google (fast)
perspt --provider-type anthropic --model claude-3-5-haiku-20241022 # Anthropic (fast)
Configuration Tuning
{
"performance": {
"max_tokens": 1000, // Limit response length
"stream": true, // Enable streaming
"timeout": 15, // Shorter timeout
"parallel_requests": 2, // Multiple requests
"cache_responses": true // Cache similar queries
}
}
Memory and Resource Management¶
System Resource Monitoring
# Monitor CPU and memory usage
top -p $(pgrep perspt)
# Monitor network usage
sudo iftop -f "host api.openai.com"
# Check disk usage for logs and cache
du -sh ~/.config/perspt/
Resource Optimization
{
"resource_limits": {
"max_history_size": 50, // Limit conversation history
"cache_size_mb": 100, // Limit cache size
"log_rotation_size": "10MB", // Rotate logs
"cleanup_interval": "24h" // Regular cleanup
}
}
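If your build does not perform this cleanup itself (the config keys above are illustrative), standard tools can do the same job by hand. This sketch deletes oversized log files under a directory:

```shell
# Delete log files above a size threshold under a given directory.
prune_logs() {
  dir="$1"
  limit="$2"   # find(1) size syntax, e.g. +10M
  find "$dir" -name "*.log" -size "$limit" -delete
}

# Usage:
#   prune_logs ~/.config/perspt +10M
```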
Network Performance¶
Connection Optimization
{
"network": {
"keep_alive": true, // Reuse connections
"connection_pool_size": 5, // Pool connections
"dns_cache": true, // Cache DNS lookups
"compression": true // Enable compression
}
}
Regional Configuration
{
"provider_endpoints": {
"openai": "https://api.openai.com", // US
"anthropic": "https://api.anthropic.com", // US
"google": "https://generativelanguage.googleapis.com" // Global
}
}
Advanced Recovery Procedures¶
Complete System Reset¶
Full Configuration Reset
# Backup current configuration
cp -r ~/.config/perspt ~/.config/perspt.backup.$(date +%Y%m%d)
# Remove all Perspt data
rm -rf ~/.config/perspt/
rm -rf ~/.local/share/perspt/
rm -rf ~/.cache/perspt/
# Clear temporary files
rm -rf /tmp/perspt*
# Recreate default configuration
perspt --create-default-config
Selective Reset Options
# Reset only configuration
rm ~/.config/perspt/config.json
perspt --setup
# Clear only cache
rm -rf ~/.config/perspt/cache/
# Clear only conversation history
rm -rf ~/.config/perspt/history/
# Reset only logs
rm ~/.config/perspt/*.log
Emergency Fallback Procedures¶
Provider Fallback Chain
{
"fallback_chain": [
{
"provider_type": "openai",
"model": "gpt-4o-mini",
"on_failure": "next"
},
{
"provider_type": "anthropic",
"model": "claude-3-5-haiku-20241022",
"on_failure": "next"
},
{
"provider_type": "ollama",
"model": "llama3.1:8b",
"on_failure": "fail"
}
]
}
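The same fallback idea can be scripted outside Perspt. In this sketch the `probe` wrapper is hypothetical and stands in for any health check that exits non-zero on failure:

```shell
# Try provider/model pairs in order; stop at the first working one.
try_providers() {
  probe_cmd="$1"; shift
  for spec in "$@"; do
    provider=${spec%%=*}   # text before '='
    model=${spec#*=}       # text after '='
    if "$probe_cmd" "$provider" "$model"; then
      echo "using $provider/$model"
      return 0
    fi
  done
  echo "all providers failed" >&2
  return 1
}

# Usage (probe is a hypothetical wrapper around perspt):
#   probe() { perspt --provider-type "$1" --model "$2" --list-models >/dev/null 2>&1; }
#   try_providers probe "openai=gpt-4o-mini" "anthropic=claude-3-5-haiku-20241022"
```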
Manual Override Mode
# Force specific provider regardless of config
perspt --force-provider openai --force-model gpt-3.5-turbo
# Use minimal configuration
perspt --no-config --api-key sk-... --provider-type openai
# Debug mode with maximum verbosity
perspt --debug --verbose --log-level trace
Data Recovery¶
Conversation History Recovery
# Check for backup files
ls ~/.config/perspt/history/*.backup
# Restore from backup
cp ~/.config/perspt/history/conversation.backup \
~/.config/perspt/history/conversation.json
# Export conversations before reset
perspt --export-history ~/perspt-backup.json
Configuration Recovery
# Restore from automatic backup
cp ~/.config/perspt/config.json.backup ~/.config/perspt/config.json
# Recreate from environment variables
perspt --config-from-env
# Interactive configuration rebuild
perspt --reconfigure
Version Migration Issues¶
Upgrading from allms to genai
# Backup old configuration
cp ~/.config/perspt/config.json ~/.config/perspt/config.allms.backup
# Run migration script
perspt --migrate-config
# Manual migration if needed
perspt --validate-config --fix-issues
Downgrade Procedures
# Install specific version
cargo install perspt --version 0.2.0
# Use version-specific configuration
cp ~/.config/perspt/config.v0.2.0.json ~/.config/perspt/config.json
Emergency Contact and Support¶
Critical Issue Escalation¶
For production-critical issues:
Immediate Workarounds:

- Switch to backup providers
- Use local models (Ollama) for offline capability
- Enable debug logging for detailed diagnosis
Community Support Channels:

- GitHub Issues: https://github.com/eonseed/perspt/issues
- Discord Community: [Link to Discord]
- Reddit: r/perspt
Enterprise Support:

- Priority ticket system
- Direct developer contact
- Custom configuration assistance
Issue Documentation Template¶
When reporting issues, include this information:
**Environment Information:**
- OS: [macOS 14.1 / Ubuntu 22.04 / Windows 11]
- Perspt Version: [perspt --version]
- Installation Method: [cargo / brew / binary]
**Configuration:**
- Provider: [openai / anthropic / google / etc.]
- Model: [gpt-4o-mini / claude-3-5-sonnet / etc.]
- Config file: [attach sanitized config.json]
**Error Details:**
- Full error message: [exact text]
- Error code: [if available]
- Stack trace: [if available]
**Reproduction Steps:**
1. [Step 1]
2. [Step 2]
3. [Error occurs]
**Expected vs Actual Behavior:**
- Expected: [what should happen]
- Actual: [what actually happens]
**Additional Context:**
- Network environment: [corporate / home / proxy]
- Recent changes: [configuration / system updates]
- Workarounds attempted: [list what you've tried]
Recovery Verification¶
After resolving issues, verify system health:
# Test basic functionality
perspt --provider-type openai --model gpt-3.5-turbo --test-connection
# Verify configuration
perspt --validate-config
# Test streaming
echo "Hello" | perspt --provider-type openai --model gpt-4o-mini --stream
# Check all providers
for provider in openai anthropic google groq; do
echo "Testing $provider..."
perspt --provider-type $provider --list-models
done