Crate perspt

§Perspt - Performance LLM Chat CLI

A high-performance terminal-based chat application for interacting with various Large Language Models (LLMs) through a unified interface. Built with Rust for speed and reliability.

§Overview

Perspt provides a beautiful terminal user interface for chatting with multiple LLM providers, including:

  • OpenAI (GPT models)
  • Anthropic (Claude models)
  • Google (Gemini models)
  • Groq (Fast inference models)
  • Cohere (Command models)
  • XAI (Grok models)
  • DeepSeek (Chat and reasoning models)
  • Ollama (Local models)

§Features

  • Unified API: Single interface for multiple LLM providers
  • Real-time streaming: Live response streaming for a more responsive user experience
  • Robust error handling: Comprehensive panic recovery and error categorization
  • Configuration management: Flexible JSON-based configuration
  • Terminal UI: Beautiful, responsive terminal interface with markdown rendering
  • Model discovery: Automatic model listing and validation

§Architecture

The application follows a modular architecture:

  • main: Entry point, CLI argument parsing, and application initialization
  • config: Configuration management and loading
  • llm_provider: LLM provider abstraction and implementation
  • ui: Terminal user interface and event handling
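
As a rough sketch, main.rs might wire these modules together as follows. Only the module names come from the list above; the async runtime (tokio), anyhow, and the load_config/run helpers are assumptions for illustration and are not the crate's actual API:

// main.rs: module wiring sketch. The mod declarations require the
// corresponding source files; everything below the declarations is
// hypothetical glue code.
mod config;
mod llm_provider;
mod ui;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load configuration, then hand control to the UI event loop.
    let cfg = config::load_config(None).await?; // hypothetical loader
    ui::run(cfg).await                          // hypothetical entry point
}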

§Usage

# Basic usage with default OpenAI provider
perspt

# Specify a different provider
perspt --provider-type anthropic --model-name claude-3-sonnet-20240229

# Use custom configuration file
perspt --config /path/to/config.json

# List available models for current provider
perspt --list-models

§Configuration

See AppConfig for detailed configuration options. The application uses JSON configuration files to manage provider settings, API keys, and UI preferences.
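
As a minimal sketch, a serde-deserializable AppConfig could look like the following. The field names (provider_type, model_name, api_key) are assumptions for illustration, not the crate's actual schema; consult AppConfig's documentation for the real fields:

use serde::Deserialize;

// Hypothetical shape of the JSON configuration file.
#[derive(Debug, Deserialize)]
struct AppConfig {
    provider_type: String,      // e.g. "openai", "anthropic", "ollama"
    model_name: Option<String>, // falls back to a provider default
    api_key: Option<String>,    // may also come from the environment
}

fn load_config(path: &str) -> Result<AppConfig, Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string(path)?;
    Ok(serde_json::from_str(&raw)?)
}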

§Error Handling

The application implements comprehensive error handling and panic recovery. All critical operations are wrapped in appropriate error contexts for better debugging and user experience.
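
The pattern behind setup_panic_hook and the TERMINAL_RAW_MODE flag (both documented below) can be sketched as follows, assuming a crossterm-based terminal backend; the details are illustrative rather than the crate's exact implementation:

use std::sync::Mutex;

// Mirrors the TERMINAL_RAW_MODE static documented below: a
// mutex-protected flag recording whether raw mode is active.
static TERMINAL_RAW_MODE: Mutex<bool> = Mutex::new(false);

fn setup_panic_hook() {
    let default_hook = std::panic::take_hook();
    std::panic::set_hook(Box::new(move |info| {
        // Restore the terminal before printing the panic message so
        // the user's shell is not left in raw mode.
        if let Ok(raw) = TERMINAL_RAW_MODE.lock() {
            if *raw {
                let _ = crossterm::terminal::disable_raw_mode();
            }
        }
        default_hook(info);
    }));
}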

Modules§

config 🔒
Configuration Management Module
llm_provider 🔒
LLM Provider Module (llm_provider.rs)
ui 🔒
User Interface Module (ui.rs)

Constants§

EOT_SIGNAL
End-of-transmission signal used to indicate completion of streaming responses. This constant is used throughout the application to signal when an LLM has finished sending its response.
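
As an illustration, a consumer draining a channel of streamed chunks might treat this sentinel as its stop condition. The channel plumbing and the placeholder value below are assumptions; only the EOT_SIGNAL concept comes from this crate:

use std::sync::mpsc;

const EOT_SIGNAL: &str = "<EOT>"; // placeholder value; the real constant lives in this crate

fn drain_response(rx: mpsc::Receiver<String>) -> String {
    let mut response = String::new();
    for chunk in rx {
        if chunk == EOT_SIGNAL {
            break; // the LLM has finished sending its response
        }
        response.push_str(&chunk);
    }
    response
}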

Statics§

TERMINAL_RAW_MODE 🔒
Global flag to track terminal raw mode state for proper cleanup during panics. This mutex-protected boolean ensures that the terminal state can be properly restored even when the application panics, preventing terminal corruption.

Functions§

cleanup_terminal 🔒
Cleans up terminal state and restores normal operation.
handle_events
Handles terminal events and user input in the main application loop.
initialize_terminal 🔒
Initializes the terminal for TUI operation.
initiate_llm_request 🔒
Initiates an asynchronous LLM request with proper state management.
list_available_models 🔒
Lists all available models for the current LLM provider.
main 🔒
Main application entry point.
set_raw_mode_flag 🔒
Updates the global terminal raw mode flag.
setup_panic_hook 🔒
Sets up a comprehensive panic hook that ensures proper terminal restoration.
truncate_message 🔒
Truncates a message to a specified maximum length for display purposes.