Changelog
All notable changes to Perspt will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[0.5.2] - 2026-01-07
Changed
Documentation Sync: Complete synchronization of CLI documentation with actual implementation
Updated README.md, installation.rst, getting-started.rst, configuration.rst, providers.rst, troubleshooting.rst
Replaced deprecated CLI flags (--provider-type, --api-key, --list-models, --model-name) with the correct subcommand syntax
All CLI examples now use environment variables + subcommands (e.g., perspt chat --model)
Architecture Documentation: Fixed all architecture diagrams to show 7 crates and 10 subcommands
Added missing perspt-sandbox to the introduction.rst diagram
Updated subcommand count from 8 to 10 across all documentation
Added logs and simple-chat commands to the CLI reference tables
Unified Crate Versioning: All 7 crates now share version 0.5.2
[0.5.1] - 2026-01-04
Fixed
TUI Virtual Scrolling: Complete refactor of chat message rendering to fix truncation and scroll bugs
Implemented virtual scrolling with manual viewport slicing (inspired by Codex TUI architecture)
Fixed logical vs visual line mismatch that caused auto-scroll to stop before reaching bottom
Eliminated u16 overflow in scroll offset (now uses usize for unlimited line counts)
Added immediate re-render after stream finalization to eliminate "wait for next query" lag
Added textwrap and unicode-width dependencies for accurate text wrapping
Clippy Compliance: Resolved all clippy warnings in perspt-tui crate
Converted single-arm match statements to if expressions (illustrated below)
Simplified duplicate conditional branches
Applied derive(Default) where appropriate
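For illustration, the single-arm match conversion is the rewrite suggested by clippy's single_match lint; the Event variant here is a hypothetical example, not code from perspt-tui:
// Before: a single-arm match with an empty catch-all
match event {
    Event::Resize(w, h) => viewport.resize(w, h),
    _ => {}
}
// After: the equivalent, flatter if let expression
if let Event::Resize(w, h) = event {
    viewport.resize(w, h);
}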
Technical Details
The chat TUI now uses a "transcript" rendering model:
Line Collection: All messages are collected as (text, style) tuples
Manual Wrapping: Text is wrapped to viewport width using Unicode-aware width calculations
Viewport Slicing: Only visible lines are rendered (skip/take based on scroll offset)
No Paragraph::scroll(): Removes reliance on Ratatui's internal scroll, which had u16 limits (see the sketch below)
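A minimal sketch of the viewport-slicing approach, assuming ratatui's Copy-able Style type and the textwrap crate; the actual perspt-tui code differs in detail:
use ratatui::style::Style;

/// Wrap each (text, style) message to the viewport width, then return only
/// the slice of visual lines selected by the scroll offset.
fn visible_lines(
    messages: &[(String, Style)],
    width: usize,
    scroll_offset: usize, // usize, not u16: no overflow on long transcripts
    height: usize,
) -> Vec<(String, Style)> {
    let wrapped: Vec<(String, Style)> = messages
        .iter()
        .flat_map(|(text, style)| {
            // Unicode-aware wrapping (textwrap measures via unicode-width)
            textwrap::wrap(text, width)
                .into_iter()
                .map(move |line| (line.into_owned(), *style))
        })
        .collect();
    // Manual viewport slicing: skip to the offset, take one screen's worth
    wrapped.into_iter().skip(scroll_offset).take(height).collect()
}
Because the offset counts visual (wrapped) lines rather than logical messages, auto-scroll can pin to the true bottom with scroll_offset = total_lines.saturating_sub(height).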
This ensures:
Long AI responses display completely without truncation
Auto-scroll reaches the true bottom of content
Scrollbar accurately reflects position in large conversations
Terminal resize is handled gracefully
[0.5.0] - 2025-12-23
Added
SRBN Agent Mode (PSP-000004): Stabilized Recursive Barrier Network for autonomous coding
Lyapunov energy-based stability verification (see the sketch after this list)
Multi-tier model architecture (Architect, Actuator, Verifier, Speculator)
LSP integration via the ty server for real-time type checking
pytest integration for test-driven verification
Merkle ledger for change tracking and rollbacks
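The notes above do not spell out the energy function, but a Lyapunov-style verification step can be pictured as follows: define a scalar "energy" over the workspace and commit a candidate change only if the energy does not increase. Everything below (Workspace, Edit, the energy terms) is hypothetical illustration, not the perspt-agent API:
struct Workspace { failing_tests: u64, type_errors: u64 }
struct Edit;

impl Workspace {
    // Hypothetical energy: lower means "more stable"
    fn energy(&self) -> u64 { self.failing_tests + self.type_errors }
    fn apply(&mut self, _edit: &Edit) { /* apply change; re-run tests and type checks */ }
    fn rollback(&mut self, _edit: &Edit) { /* restore prior state, e.g. via the ledger */ }
}

/// Accept a candidate edit only if energy does not increase, keeping the
/// energy sequence monotonically non-increasing across agent iterations.
fn verify_step(ws: &mut Workspace, candidate: &Edit) -> bool {
    let before = ws.energy();
    ws.apply(candidate);
    if ws.energy() <= before {
        true
    } else {
        ws.rollback(candidate); // the "barrier": destabilizing edits are reverted
        false
    }
}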
7-Crate Workspace Architecture: Complete restructure for modularity
perspt-cli: CLI entry point with subcommands
perspt-core: LLM provider abstraction
perspt-tui: Terminal UI (Ratatui-based)
perspt-agent: SRBN orchestration engine
perspt-policy: Starlark-based policy engine
perspt-sandbox: Process isolation
perspt-store: DuckDB session persistence and LLM logging
Changed
BREAKING: --simple-cli flag replaced with the simple-chat subcommand
# Old (deprecated)
perspt --simple-cli
# New
perspt simple-chat
Default model: Now gemini-2.0-flash-lite with auto-detection
Documentation: Complete rewrite using the 4C/ID methodology
Removed
Legacy single-binary src/ implementation (now in crates/)
Old configuration format (use perspt.toml or env vars)
[0.4.6] - 2025-06-29
Added
Simple CLI Mode (PSP-000003): Brand new minimal command-line interface for direct Q&A without the TUI
--simple-cli flag enables a Unix-style prompt interface, perfect for scripting and automation
--log-file <FILE> option provides built-in session logging with timestamps
Real-time streaming responses in simple text format for immediate feedback
Clean exit handling with Ctrl+D, the exit command, or Ctrl+C interrupt
Works seamlessly with all existing providers, models, and authentication methods
Perfect for accessibility needs, scripting workflows, and Unix philosophy adherents
Scripting and Automation Support: Simple CLI mode enables powerful automation scenarios
Pipe questions directly: echo "Question?" | perspt --simple-cli
Batch processing with session logging for documentation and audit trails
Environment integration with aliases and shell scripts
Robust error handling that doesn't terminate sessions
Enhanced Accessibility: Simple CLI mode provides better screen reader compatibility and simpler interaction model
Technical Details
New cli.rs module implementing an async command-line loop with streaming support (sketched below)
Integration with the existing GenAIProvider for consistent behavior across interface modes
Comprehensive error handling with graceful degradation for individual request failures
Session logging format compatible with standard text processing tools
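A minimal sketch of such an async read-eval loop, with the provider call abstracted behind a closure; the ask parameter and its error type are stand-ins, not the actual cli.rs or GenAIProvider API:
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};

async fn simple_cli_loop<F, Fut>(ask: F) -> std::io::Result<()>
where
    F: Fn(String) -> Fut,
    Fut: std::future::Future<Output = Result<String, String>>,
{
    let mut lines = BufReader::new(tokio::io::stdin()).lines();
    let mut out = tokio::io::stdout();
    loop {
        out.write_all(b"> ").await?;
        out.flush().await?;
        // Ctrl+D closes stdin (None), ending the session cleanly
        let Some(line) = lines.next_line().await? else { break };
        if line.trim() == "exit" { break; }
        // A failed request is reported but never terminates the loop
        match ask(line).await {
            Ok(answer) => out.write_all(answer.as_bytes()).await?,
            Err(e) => eprintln!("request failed: {e}"),
        }
        out.write_all(b"\n").await?;
    }
    Ok(())
}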
Changed
Enhanced CLI Argument Support: Added --simple-cli and --log-file arguments with proper validation
Updated Documentation: Comprehensive updates to the README and the Perspt book covering simple CLI mode usage
Improved Help Text: Clear descriptions of new simple CLI mode options and use cases
Examples and Use Cases
# Basic usage
perspt --simple-cli
# With session logging
perspt --simple-cli --log-file research-session.txt
# Scripting integration
echo "Explain quantum computing" | perspt --simple-cli
# Environment setup
alias ai="perspt --simple-cli"
ai-log() { perspt --simple-cli --log-file "$1"; }
[0.4.2] - 2025-06-09
Added
Zero-Config Automatic Provider Detection: Perspt now automatically detects and configures available providers based on environment variables
Set any supported API key (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) and simply run perspt
No configuration files or CLI arguments needed for basic usage
Intelligent priority-based selection: OpenAI → Anthropic → Gemini → Groq → Cohere → XAI → DeepSeek → Ollama
Automatic default model selection for each detected provider
Graceful fallback with helpful error messages when no providers are found
Enhanced Error Handling: Clear, actionable error messages when no providers are configured
Comprehensive Provider Support: All major LLM providers now supported for automatic detection
Local Model Auto-Detection: Ollama automatically detected when running locally (no API key required)
Changed
Improved User Experience: Launch Perspt instantly with just an API key - no config required
Better Documentation: Updated getting-started guide and configuration documentation with zero-config examples
Streamlined Workflow: Reduced friction for new users getting started
Technical Details
Added detect_available_provider() function in config.rs for environment-based provider detection (see the sketch below)
Enhanced load_config() to use automatic detection when no explicit configuration is provided
Comprehensive test coverage for all provider detection scenarios and edge cases
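A condensed sketch of the priority-based detection: walk the documented provider order and pick the first environment variable that is set. The non-OpenAI/Anthropic variable names and the default models here are assumptions for illustration:
use std::env;

/// First provider whose API key is present, with an assumed default model.
fn detect_available_provider() -> Option<(&'static str, &'static str)> {
    let candidates = [
        ("OPENAI_API_KEY", "openai", "gpt-4o-mini"),
        ("ANTHROPIC_API_KEY", "anthropic", "claude-3-5-sonnet-latest"),
        ("GEMINI_API_KEY", "gemini", "gemini-2.0-flash"),
        ("GROQ_API_KEY", "groq", "llama-3.3-70b-versatile"),
        ("COHERE_API_KEY", "cohere", "command-r-plus"),
        ("XAI_API_KEY", "xai", "grok-2"),
        ("DEEPSEEK_API_KEY", "deepseek", "deepseek-chat"),
    ];
    candidates
        .into_iter()
        .find(|(var, _, _)| env::var(var).is_ok())
        .map(|(_, provider, model)| (provider, model))
    // Ollama needs no key; the real code falls back to probing the local server.
}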
[0.4.1] - 2025-06-03
Added
Enhanced documentation with Sphinx
Comprehensive API reference
Developer guide for contributors
Changed
Improved error messages for better user experience
Optimized memory usage for large conversations
Fixed
Fixed terminal cleanup on panic
Resolved configuration file parsing edge cases
[0.4.0] - 2025-05-29
Added
Multi-provider support: OpenAI, Anthropic, Google, Groq, Cohere, XAI, DeepSeek, and Ollama
Dynamic model discovery: Automatic detection of available models
Input queuing: Type new messages while AI is responding
Markdown rendering: Rich text formatting in terminal
Streaming responses: Real-time display of AI responses
Comprehensive configuration: JSON files and environment variables
Beautiful terminal UI: Powered by Ratatui with modern design
Graceful error handling: User-friendly error messages and recovery
Technical Highlights
Built with Rust for maximum performance and safety
Leverages genai crate for unified LLM access
Async/await architecture with Tokio
Comprehensive test suite with unit and integration tests
Memory-safe with zero-copy operations where possible
Supported Providers
OpenAI: GPT-4, GPT-4-turbo, GPT-4o series, GPT-3.5-turbo
Anthropic: Claude 3 models (via genai)
Google: Gemini models (via genai)
Groq: Ultra-fast Llama inference
Cohere: Command R/R+ models
XAI: Grok models
DeepSeek: Advanced reasoning models
Ollama: Local model hosting
Configuration Features
Multiple configuration file locations
Environment variable support
Command-line argument overrides (merge behavior sketched below)
Provider-specific settings
UI customization options
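The interaction between those sources is a layered merge; the precedence shown here (file, then environment, then command line) is an assumption for illustration, and Config/merge are hypothetical stand-ins:
#[derive(Default)]
struct Config {
    api_key: Option<String>,
    model: Option<String>,
}

impl Config {
    // Each layer only overrides the fields it actually sets
    fn merge(&mut self, other: Config) {
        if other.api_key.is_some() { self.api_key = other.api_key; }
        if other.model.is_some() { self.model = other.model; }
    }
}

fn load_effective_config(file: Config, env: Config, cli: Config) -> Config {
    let mut config = Config::default();
    config.merge(file); // lowest precedence: first config file found
    config.merge(env);  // environment variables override the file
    config.merge(cli);  // command-line arguments win
    config
}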
User Interface Features
Real-time chat interface
Syntax highlighting for code blocks
Scrollable message history
Keyboard shortcuts for productivity
Status indicators and progress feedback
Responsive design that adapts to terminal size
[0.3.0] - 2025-05-15
Added
Multi-provider foundation with genai crate
Configuration file validation
Improved error categorization
Changed
Refactored provider architecture for extensibility
Enhanced UI responsiveness
Better handling of long responses
Fixed
Terminal state cleanup on unexpected exit
Configuration merging precedence
Memory leaks in streaming responses
[0.2.0] - 2025-05-01
Added
Streaming response support
Basic configuration file support
Terminal UI with Ratatui
OpenAI provider implementation
Changed
Migrated from simple CLI to TUI interface
Improved async architecture
Better error handling patterns
Fixed
Terminal rendering issues
API request timeout handling
Configuration loading edge cases
[0.1.0] - 2025-04-15
Added
Initial release
Basic OpenAI integration
Simple command-line interface
Environment variable configuration
Basic chat functionality
Features
Support for GPT-3.5 and GPT-4 models
API key authentication
Simple text-based conversations
Basic error handling
Migration Guides
Upgrading from 0.3.x to 0.4.0
Configuration Changes:
The configuration format has been enhanced. Old configurations will continue to work, but consider updating:
// Old format (still supported)
{
"api_key": "sk-...",
"model": "gpt-4"
}
// New format (recommended)
{
"api_key": "sk-...",
"default_model": "gpt-4o-mini",
"provider_type": "openai",
"providers": {
"openai": "https://api.openai.com/v1"
}
}
Command Line Changes:
Some command-line flags have been updated:
# Old
perspt --model gpt-4
# New
perspt --model-name gpt-4
API Changes:
If you're using Perspt as a library, some function signatures have changed (a usage sketch follows the snippet below):
// Old
provider.send_request(message, model).await?;
// New
provider.send_chat_request(message, model, &config, &tx).await?;
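For the channel-based streaming interface, the caller typically drains chunks concurrently while the request runs; this sketch assumes tx/rx are a tokio mpsc pair carrying text chunks, which the changelog does not specify:
use tokio::sync::mpsc;

let (tx, mut rx) = mpsc::unbounded_channel::<String>(); // assumed chunk type

// Print streamed chunks as they arrive
let printer = tokio::spawn(async move {
    while let Some(chunk) = rx.recv().await {
        print!("{chunk}");
    }
});

provider.send_chat_request(message, model, &config, &tx).await?;
drop(tx); // close the channel so the printer task can finish
printer.await?;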
Upgrading from 0.2.x to 0.3.0
New Dependencies:
Update your Cargo.toml if building from source:
[dependencies]
tokio = { version = "1.0", features = ["full"] }
# ... other dependencies updated
Configuration Location:
Configuration files now support multiple locations. Move your config file to:
~/.config/perspt/config.json (Linux)
~/Library/Application Support/perspt/config.json (macOS)
%APPDATA%/perspt/config.json (Windows)
Breaking Changes
Version 0.4.0
Provider trait changes: LLMProvider trait now requires async fn methods
Configuration structure: Some configuration keys renamed for consistency
Error types: Custom error types replace generic error handling
Streaming interface: Response handling now uses channels instead of callbacks
Version 0.3.0
Async runtime: Switched to full async architecture
UI framework: Migrated from custom rendering to Ratatui
Configuration format: Enhanced JSON schema with validation
Version 0.2.0
Interface change: Moved from CLI to TUI
Provider abstraction: Introduced provider trait system
Async support: Added Tokio async runtime
Deprecation Notices
The following features are deprecated and will be removed in future versions:
Version 0.5.0 (Upcoming)
Legacy configuration keys: Old configuration format support will be removed
Synchronous API: All provider methods must be async
Direct model specification: Use provider + model pattern instead
Version 0.6.0 (Planned)
Environment variable precedence: Will change to match command-line precedence
Default provider: Will change from OpenAI to provider-agnostic selection
Known Issues
Current Version (0.4.0)
Windows terminal compatibility: Some Unicode characters may not display correctly on older Windows terminals
Large conversation history: Memory usage increases with very long conversations (>1000 messages)
Network interruption: Streaming responses may be interrupted during network issues
Ollama connectivity: Local models may require manual service restart after system reboot
Workarounds:
# For Windows terminal issues
# Use Windows Terminal or enable UTF-8 support
# For memory issues with large histories
perspt --max-history 500
# For network issues
perspt --timeout 60 --max-retries 5
Planned Features
Version 0.5.0 (Next Release)
Local model support: Integration with Ollama and other local LLM servers
Plugin system: Support for custom providers and UI extensions
Conversation persistence: Save and restore chat sessions
Multi-conversation support: Multiple chat tabs in single session
Enhanced markdown: Tables, math equations, and diagrams
Voice input: Speech-to-text support for hands-free operation
Version 0.6.0 (Future)
Collaborative features: Share conversations and collaborate with others
IDE integration: VS Code extension and other editor plugins
Mobile companion: Mobile app for conversation sync
Advanced AI features: Function calling, tool use, and agent capabilities
Performance analytics: Response time tracking and optimization suggestions
Version 1.0.0 (Stable Release)
API stability guarantee: Stable public API with semantic versioning
Enterprise features: SSO, audit logging, and compliance features
Advanced customization: Themes, layouts, and workflow customization
Comprehensive integrations: GitHub, Slack, Discord, and more
Professional support: Documentation, training, and enterprise support
Contributing
We welcome contributions! Please see our Contributing guide for guidelines.
Types of contributions:
Bug reports and feature requests
Code contributions and optimizations
Documentation improvements
Testing and quality assurance
Community support and advocacy
How to contribute:
Check existing issues and discussions
Fork the repository
Create a feature branch
Make your changes with tests
Submit a pull request
Support
GitHub Issues: Bug Reports
Discussions: Community Chat
Documentation: This guide and API reference
Email: support@perspt.dev (for enterprise inquiries)
License
Perspt is released under the LGPL v3 License. See License for details.
Acknowledgments
Special thanks to:
The Rust community for excellent tooling and libraries
Ratatui developers for the amazing TUI framework
genai crate maintainers for unified LLM access
All contributors and users who help improve Perspt
See also
Installation Guide - How to install or upgrade Perspt
Getting Started - Quick start guide for new users
Contributing - How to contribute to the project