§Perspt - High-Performance LLM Chat CLI
A high-performance terminal-based chat application for interacting with various Large Language Models (LLMs) through a unified interface. Built with Rust for speed and reliability.
§Overview
Perspt provides a beautiful terminal user interface for chatting with multiple LLM providers, including:
- OpenAI (GPT models)
- Anthropic (Claude models)
- Google (Gemini models)
- Groq (Fast inference models)
- Cohere (Command models)
- XAI (Grok models)
- DeepSeek (Chat and reasoning models)
- Ollama (Local models)
§Features
- Unified API: Single interface for multiple LLM providers (see the sketch after this list)
- Real-time streaming: Live response streaming for better user experience
- Robust error handling: Comprehensive panic recovery and error categorization
- Configuration management: Flexible JSON-based configuration
- Terminal UI: Beautiful, responsive terminal interface with markdown rendering
- Model discovery: Automatic model listing and validation
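The sketch below illustrates the unified-API idea with a self-contained, trait-based provider abstraction. Every name in it (ChatProvider, chat, the stub types) is invented for illustration; Perspt's real abstraction lives in the llm_provider module, and real providers make network calls rather than returning canned strings.

```rust
use std::error::Error;

// Illustrative only: one trait, many backends.
trait ChatProvider {
    fn name(&self) -> &str;
    fn send(&self, prompt: &str) -> Result<String, Box<dyn Error>>;
}

struct OpenAi;
struct Anthropic;

impl ChatProvider for OpenAi {
    fn name(&self) -> &str { "openai" }
    fn send(&self, prompt: &str) -> Result<String, Box<dyn Error>> {
        Ok(format!("[openai] reply to: {prompt}")) // stand-in for an API call
    }
}

impl ChatProvider for Anthropic {
    fn name(&self) -> &str { "anthropic" }
    fn send(&self, prompt: &str) -> Result<String, Box<dyn Error>> {
        Ok(format!("[anthropic] reply to: {prompt}")) // stand-in for an API call
    }
}

// One code path serves every provider behind the trait object.
fn chat(provider: &dyn ChatProvider, prompt: &str) -> Result<String, Box<dyn Error>> {
    provider.send(prompt)
}

fn main() -> Result<(), Box<dyn Error>> {
    for p in [&OpenAi as &dyn ChatProvider, &Anthropic] {
        println!("{}: {}", p.name(), chat(p, "hello")?);
    }
    Ok(())
}
```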
§Architecture
The application follows a modular architecture:
- main: Entry point, CLI argument parsing, and application initialization
- config: Configuration management and loading
- llm_provider: LLM provider abstraction and implementation
- ui: Terminal user interface and event handling
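A compressed, runnable sketch of that flow, with stub modules standing in for the real ones (all signatures and bodies below are illustrative assumptions, not the crate's actual API):

```rust
// Stub modules mirroring the crate's layout; bodies are placeholders.
mod config {
    pub struct AppConfig { pub provider: String, pub model: String }
    pub fn load() -> AppConfig {
        AppConfig { provider: "openai".into(), model: "gpt-4".into() }
    }
}

mod llm_provider {
    use crate::config::AppConfig;
    pub fn chat(cfg: &AppConfig, prompt: &str) -> String {
        format!("[{}:{}] echo: {prompt}", cfg.provider, cfg.model)
    }
}

mod ui {
    use crate::config::AppConfig;
    pub fn run(cfg: &AppConfig) {
        // The real UI runs an event loop; here we fake one exchange.
        println!("{}", crate::llm_provider::chat(cfg, "hello"));
    }
}

// main: parse args (omitted), load config, start the UI.
fn main() {
    let cfg = config::load();
    ui::run(&cfg);
}
```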
§Usage
```bash
# Basic usage with default OpenAI provider
perspt

# Specify a different provider
perspt --provider-type anthropic --model-name claude-3-sonnet-20240229

# Use custom configuration file
perspt --config /path/to/config.json

# List available models for current provider
perspt --list-models
```
§Configuration
The application uses JSON configuration files to manage provider settings, API keys, and UI preferences. See AppConfig for detailed configuration options.
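For orientation, here is a hypothetical config file in that spirit. The key names below are guesses loosely based on the CLI flags above; AppConfig remains the authoritative schema.

```json
{
  "provider_type": "anthropic",
  "model_name": "claude-3-sonnet-20240229",
  "api_key": "YOUR_API_KEY_HERE",
  "ui": {
    "markdown": true
  }
}
```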
§Error Handling
The application implements comprehensive error handling and panic recovery. All critical operations are wrapped in appropriate error contexts for better debugging and a better user experience.
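As a sketch of what wrapping a critical operation in an error context can look like, assuming an anyhow-style setup (the crate's actual error types are not confirmed here):

```rust
use anyhow::{Context, Result};
use std::fs;

// Attach a human-readable context to a fallible operation so failures
// report both the high-level action and the underlying cause.
fn read_config(path: &str) -> Result<String> {
    fs::read_to_string(path)
        .with_context(|| format!("failed to read config file `{path}`"))
}

fn main() {
    if let Err(e) = read_config("/nonexistent/config.json") {
        // `{:#}` prints the whole context chain: wrapper plus IO cause.
        eprintln!("error: {e:#}");
    }
}
```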
Modules§
- config 🔒 - Configuration Management Module
- llm_provider 🔒 - LLM Provider Module (llm_provider.rs)
- ui 🔒 - User Interface Module (ui.rs)
Constants§
- EOT_SIGNAL - End-of-transmission signal used to indicate completion of streaming responses. This constant is used throughout the application to signal when an LLM has finished sending its response.
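A self-contained sketch of how such a sentinel terminates a streaming loop over a channel; the signal value and the channel type below are illustrative assumptions, not the crate's actual choices:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical value; the real constant is defined at this crate's root.
const EOT_SIGNAL: &str = "<<EOT>>";

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Producer: streams chunks, then the end-of-transmission marker.
    thread::spawn(move || {
        for chunk in ["Hello", ", ", "world"] {
            tx.send(chunk.to_string()).unwrap();
        }
        tx.send(EOT_SIGNAL.to_string()).unwrap();
    });

    // Consumer: append chunks until the EOT signal arrives.
    let mut response = String::new();
    for msg in rx {
        if msg == EOT_SIGNAL {
            break;
        }
        response.push_str(&msg);
    }
    println!("{response}");
}
```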
Statics§
- TERMINAL_RAW_MODE 🔒 - Global flag to track terminal raw mode state for proper cleanup during panics. This mutex-protected boolean ensures that the terminal state can be properly restored even when the application panics, preventing terminal corruption.
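A minimal sketch of the pattern this static supports: a panic hook consults the flag and restores the terminal before delegating to the default hook. The restore step is stubbed out here; real code would call the terminal backend's disable-raw-mode API (e.g. crossterm's).

```rust
use std::sync::Mutex;

// Mutex-protected flag, mirroring the documented pattern.
static TERMINAL_RAW_MODE: Mutex<bool> = Mutex::new(false);

fn set_raw_mode_flag(enabled: bool) {
    *TERMINAL_RAW_MODE.lock().unwrap() = enabled;
}

fn setup_panic_hook() {
    let default_hook = std::panic::take_hook();
    std::panic::set_hook(Box::new(move |info| {
        // If the lock is poisoned, still read the flag's last value.
        let raw = *TERMINAL_RAW_MODE
            .lock()
            .unwrap_or_else(|poisoned| poisoned.into_inner());
        if raw {
            // Stub: real code would disable raw mode here.
            eprintln!("(restoring terminal from raw mode)");
        }
        default_hook(info);
    }));
}

fn main() {
    setup_panic_hook();
    set_raw_mode_flag(true);
    panic!("boom"); // the hook runs first, restoring the terminal
}
```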
Functions§
- cleanup_terminal 🔒 - Cleans up terminal state and restores normal operation.
- handle_events - Handles terminal events and user input in the main application loop.
- initialize_terminal 🔒 - Initializes the terminal for TUI operation.
- initiate_llm_request 🔒 - Initiates an asynchronous LLM request with proper state management.
- list_available_models 🔒 - Lists all available models for the current LLM provider.
- main 🔒 - Main application entry point.
- set_raw_mode_flag 🔒 - Updates the global terminal raw mode flag.
- setup_panic_hook 🔒 - Sets up a comprehensive panic hook that ensures proper terminal restoration.
- truncate_message 🔒 - Truncates a message to a specified maximum length for display purposes.
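For illustration, a char-boundary-safe version of such a helper; the crate's actual implementation and ellipsis handling may differ:

```rust
// Truncate on char boundaries so multi-byte text never panics,
// reserving three characters for the trailing ellipsis.
fn truncate_message(msg: &str, max_len: usize) -> String {
    if msg.chars().count() <= max_len {
        return msg.to_string();
    }
    let truncated: String = msg.chars().take(max_len.saturating_sub(3)).collect();
    format!("{truncated}...")
}

fn main() {
    assert_eq!(truncate_message("short", 10), "short");
    assert_eq!(truncate_message("a rather long message", 10), "a rathe...");
}
```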