👁️ Perspt: Your Terminal’s Window to the AI World 🤖¶
"The keyboard hums, the screen aglow,
AI's wisdom, a steady flow.
Will robots take over, it's quite the fright,
Or just provide insights, day and night?
We ponder and chat, with code as our guide,
Is AI our helper or our human pride?"
Perspt (pronounced "perspect," short for Personal Spectrum Pertaining Thoughts) is a
high-performance command-line interface (CLI) application that gives you a peek into the minds of Large Language Models (LLMs).
Built with Rust for maximum speed and reliability, it lets you chat with the latest AI models from multiple
providers directly in your terminal through a modern, unified interface powered by the cutting-edge genai
crate.
Getting Started: Get up and running with Perspt in minutes. Install, configure, and start chatting with AI models.
User Guide: Complete guide to using Perspt effectively, from basic chat to advanced features.
Developer Guide: Deep dive into Perspt's architecture, contribute to the project, and extend functionality.
API Reference: Comprehensive API documentation generated from source code comments.
✨ Key Features¶
🤖 Zero-Config Startup: Automatic provider detection from environment variables - just set your API key and run
🎨 Interactive Chat Interface: A colorful and responsive chat interface powered by Ratatui
⚡ Streaming Responses: Real-time streaming of LLM responses for an interactive experience
🔀 Multiple Provider Support: Seamlessly switch between OpenAI, Anthropic, Google, Groq, Cohere, XAI, DeepSeek, and Ollama
🚀 Dynamic Model Discovery: Automatically discovers available models without manual updates
⚙️ Configurable: Flexible configuration via JSON files or command-line arguments
🔄 Input Queuing: Type new questions while the AI is responding - inputs are queued and processed sequentially
📜 Markdown Rendering: Beautiful markdown support directly in the terminal
🛡️ Graceful Error Handling: Robust handling of network issues, API errors, and edge cases
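As a quick illustration of zero-config startup, a first session might look like the sketch below. The environment variable shown is the conventional one for OpenAI; the `perspt` flag names in the comments are illustrative, so consult `perspt --help` and the Configuration Guide for the real options.

```shell
# Zero-config startup: Perspt detects the provider from whichever
# API key is present in the environment.
export OPENAI_API_KEY="sk-..."    # or ANTHROPIC_API_KEY, GEMINI_API_KEY, ...

# Launch the interactive chat UI with the auto-detected provider:
#   perspt
#
# Or select a provider and model explicitly (flag names are
# illustrative; run `perspt --help` for the actual options):
#   perspt --provider-type anthropic --model claude-3-5-sonnet
```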
🎯 Supported AI Providers¶
OpenAI
- GPT-4.1 - Latest and most advanced model
- GPT-4o series - GPT-4o, GPT-4o-mini for fast responses
- o1 reasoning models - o1-preview, o1-mini, o3-mini
- GPT-4 series - GPT-4-turbo, GPT-4 for complex tasks
- Latest model variants automatically supported
Anthropic
- Claude 3.5 (latest Sonnet, Haiku)
- Claude 3 (Opus, Sonnet, Haiku)
- Latest Claude models
Google
- Gemini 2.5 Pro - Latest multimodal model
- Gemini Pro, Gemini 1.5 Pro/Flash
- PaLM models
Ollama (local models)
- Llama 3.2 - Latest Meta model
- CodeLlama - Code-specialized models
- Mistral - Fast and capable models
- Qwen - Multilingual models
- All popular open-source models
Additional providers
- Groq: Ultra-fast Llama 3.x inference
- Cohere: Command R/R+ models
- XAI: Grok models
- DeepSeek: Advanced reasoning models
Note
Perspt leverages the powerful genai crate for unified LLM access, ensuring automatic support for new models and providers with cutting-edge features like reasoning model support.
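Since Perspt can be configured through a JSON file, a minimal configuration might look roughly like the sketch below. The key names here are illustrative assumptions, not the authoritative schema; the Configuration Guide documents the actual file format and supported keys.

```json
{
  "provider_type": "openai",
  "default_model": "gpt-4o-mini",
  "api_key": "sk-..."
}
```

A file like this would pin a default provider and model so you can launch Perspt without repeating command-line arguments.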
📋 Perspt¶
Getting Started
- Introduction to Perspt
- Getting Started
- Installation Guide
- Configuration Guide
- Automatic Provider Detection (Zero-Config)
- Manual Configuration Methods
- Environment Variables
- Configuration File
- Provider Configuration
- Command-Line Options
- UI Customization
- Behavior Settings
- Advanced Configuration
- Multiple Configurations
- Configuration Validation
- Configuration Templates
- Migration and Import
- Best Practices
- Troubleshooting
User Guide
- User Guide
- Basic Usage
- Advanced Features
- Configuration Profiles and Multi-Provider Setup
- Enhanced Streaming and Real-time Features
- Advanced Conversation Patterns
- Session Persistence
- Multi-Model Conversations
- Plugin System
- Automation and Scripting
- Configuration Validation
- Performance Optimization
- Custom Integrations
- Next Steps
- AI Providers
- Troubleshooting
Developer Guide
API Reference
- API Reference
- Module Overview
- Configuration Module
- LLM Provider Module
- User Interface Module
- Main Module
- Overview
- Architecture Overview
- Module Dependencies
- Key Structures and Interfaces
- Error Handling
- Configuration System
- Async Architecture
- Type Safety
- Performance Considerations
- API Stability
- Usage Examples
- Testing APIs
- Documentation Generation
- Best Practices
Appendices
🔗 Quick Links¶
Repository: GitHub
Crates.io: perspt
Issues: Bug Reports & Feature Requests
Discussions: Community Chat