Introduction to Perspt¶
██████╗ ███████╗██████╗ ███████╗██████╗ ████████╗
██╔══██╗██╔════╝██╔══██╗██╔════╝██╔══██╗╚══██╔══╝
██████╔╝█████╗  ██████╔╝███████╗██████╔╝   ██║
██╔═══╝ ██╔══╝  ██╔══██╗╚════██║██╔═══╝    ██║
██║     ███████╗██║  ██║███████║██║        ██║
╚═╝     ╚══════╝╚═╝  ╚═╝╚══════╝╚═╝        ╚═╝
Your Terminal's Window to the AI World
What is Perspt?¶
Perspt (pronounced “perspect,” short for Personal Spectrum Pertaining Thoughts) is a high-performance, terminal-based interface to Large Language Models. It serves two complementary purposes:
A simple CLI for testing LLM services — Connect to OpenAI, Anthropic, Google Gemini, Groq, Cohere, xAI, DeepSeek, or Ollama with a single command. Auto-detect your API key, chat interactively in a beautiful TUI, or pipe responses through the simple-chat mode. Perspt makes it effortless to evaluate and compare different LLM providers from your terminal.
An experimental implementation of the SRBN engine — Perspt’s agent mode is a practical implementation of the Stabilized Recursive Barrier Network (SRBN) framework described in the paper “Stability is All You Need: Lyapunov-Guided Hierarchies for Long-Horizon LLM Reliability” by Vikrant R. and Ronak R. (pre-publication). The SRBN engine decomposes coding tasks into DAGs, uses Lyapunov energy as a stability measure through multi-stage verification barriers, and commits only when energy converges — applying control-theoretic ideas to autonomous code generation. The theoretical framework is mature; the implementation is under active development and has not yet been benchmarked.
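The commit-on-convergence loop described above can be sketched in Rust. Everything here (the `NodeState` type, the energy-decay stand-in for a real correction step, the threshold constant) is illustrative, not Perspt's actual API:

```rust
// Hypothetical sketch of the SRBN commit-on-convergence loop.
// Type and function names are illustrative, not Perspt's actual API.

const EPSILON: f64 = 0.10; // convergence threshold from the docs

#[derive(Debug)]
struct NodeState {
    v_syn: f64, // LSP diagnostics
    v_log: f64, // weighted test failures
}

fn lyapunov_energy(s: &NodeState) -> f64 {
    s.v_syn + s.v_log
}

/// Retry generation until energy falls below epsilon or the revision
/// budget is exhausted; commit only on convergence.
fn run_node(mut state: NodeState, max_revisions: u32) -> Result<(), String> {
    for _ in 0..max_revisions {
        if lyapunov_energy(&state) < EPSILON {
            return Ok(()); // stable: commit to ledger
        }
        // One correction step: the real engine re-prompts the LLM and
        // re-verifies; here we just decay the energy as a stand-in.
        state.v_syn *= 0.5;
        state.v_log *= 0.5;
    }
    Err("budget exhausted before convergence".into())
}

fn main() {
    let node = NodeState { v_syn: 3.0, v_log: 2.0 };
    println!("{:?}", run_node(node, 10)); // prints Ok(())
}
```

The key design point is that the loop fails closed: a node that never converges returns an error instead of committing partial work.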
Version 0.5.9 “心砺光华” Highlights
Robust Correction Loop Contracts (PSP-7):
Structured Artifact Bundle Format — Switched correction prompt from free-form output to a strict JSON schema, ensuring the LLM explicitly declares target paths and artifacts.
Typed Parse Pipeline — Replaced Option-based extraction with a 5-layer fail-closed parse pipeline that classifies retries (Retarget, MalformedRetry, SupportFileViolation, Replan) for intelligent convergence.
Manifest Policy Enforcement — Added semantic validation to prevent implicit mutation of root manifests unless explicitly requested.
Strict Budget Exhaustion — Upgraded budget checks to respect step and revision caps alongside cost, preventing runaway loops.
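A fail-closed pipeline with these retry classes and budget caps might look like the following sketch. The enum mirrors the retry names above, but the struct shapes and field names are assumptions:

```rust
// Sketch of fail-closed retry classification and strict budget
// exhaustion. Field names and shapes are illustrative assumptions.

#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum RetryKind {
    Retarget,             // bundle declared a wrong target path
    MalformedRetry,       // bundle failed schema validation
    SupportFileViolation, // touched a file it should not have
    Replan,               // the task node itself needs re-planning
}

#[derive(Debug)]
struct Budget {
    steps_left: u32,
    revisions_left: u32,
    cost_left_usd: f64,
}

impl Budget {
    /// Strict exhaustion: any cap hitting zero stops the loop,
    /// not just the cost cap.
    fn exhausted(&self) -> bool {
        self.steps_left == 0 || self.revisions_left == 0 || self.cost_left_usd <= 0.0
    }
}

fn main() {
    let b = Budget { steps_left: 3, revisions_left: 0, cost_left_usd: 1.50 };
    println!("{}", b.exhausted()); // prints true: revision cap alone stops the loop
}
```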
LLM & Ecosystem Maintenance:
GenAI Upgrade — Bumped the `genai` provider dependency to 0.5.3 (stable bug fixes) and removed dead code leaking provider types.
Rust 1.95 Readiness — Resolved all Clippy lints, including `unnecessary-sort-by` and `collapsible-match`.
LLM CLI:
Multi-Provider Chat — 8 providers with zero-config auto-detection
Beautiful TUI — Markdown rendering, streaming responses, scroll navigation
Simple CLI Mode — Pipe-friendly `simple-chat` for scripting and logging
Architecture¶
Perspt is built as a 9-crate Rust workspace:
```dot
digraph arch {
  rankdir=TB;
  node [shape=box, style="rounded,filled", fontname="Helvetica", fontsize=10];
  subgraph cluster_cli {
    label="User Interface";
    style=dashed;
    cli [label="perspt-cli\n10 commands", fillcolor="#4ECDC4"];
    tui [label="perspt-tui\nTerminal UI", fillcolor="#96CEB4"];
  }
  subgraph cluster_core {
    label="Core Engine";
    style=dashed;
    core [label="perspt-core\nLLM Provider + Types", fillcolor="#45B7D1"];
    agent [label="perspt-agent\nSRBN Engine", fillcolor="#FFEAA7"];
    store [label="perspt-store\nDuckDB Sessions", fillcolor="#B8D4E3"];
  }
  subgraph cluster_security {
    label="Security";
    style=dashed;
    policy [label="perspt-policy\nStarlark Rules", fillcolor="#DDA0DD"];
    sandbox [label="perspt-sandbox\nIsolation", fillcolor="#F8B739"];
  }
  subgraph cluster_monitoring {
    label="Monitoring";
    style=dashed;
    dashboard [label="perspt-dashboard\nWeb Dashboard", fillcolor="#FFB6C1"];
  }
  cli -> tui;
  cli -> agent;
  cli -> dashboard;
  agent -> core;
  agent -> store;
  agent -> policy;
  agent -> sandbox;
  dashboard -> store;
  dashboard -> core;
}
```
Perspt Architecture Overview¶
Key Features¶
| Area | Feature | Description |
|---|---|---|
| SRBN | Agent Mode | Experimental autonomous multi-file coding. Plans a task DAG, generates artifact bundles per node, verifies with LSP + tests, retries until energy converges, and commits to the ledger. |
| LLM | Multi-Provider | OpenAI, Anthropic, Google Gemini, Groq, Cohere, xAI, DeepSeek, and Ollama (local). Zero-config auto-detection from environment variables. |
| LSP | Sensor Architecture | Plugin-driven LSP selection. |
| Test | Test Runner | pytest integration with weighted failure scoring. Critical tests carry weight 10, high-priority 3, low-priority 1. Produces V_log. |
| Ledger | Merkle Ledger | DuckDB-backed cryptographic change tracking. Supports rollback, session resume, energy history, and escalation reports. |
| Policy | Security | Starlark policy engine validates commands before execution. Workspace-bound enforcement prevents escaping the project directory. |
| Budget | Token Budget | Per-session cost tracking with step, revision, and cost caps. |
| TUI | Terminal UI | Ratatui-based with markdown rendering, diff viewer, task tree, dashboard, review modal, and logs viewer. |
| Web | Dashboard | Browser-based real-time monitoring of agent execution, energy, LLM telemetry, sandbox branches, and decision traces. |
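The weighted failure scoring that produces V_log can be sketched as follows. The 10/3/1 weights come from the feature table above; the types and function names are illustrative, not the actual test-runner API:

```rust
// Sketch of weighted test-failure scoring for V_log.
// Priority weights (10/3/1) come from the feature table; the types
// are illustrative assumptions.

#[derive(Clone, Copy)]
enum Priority {
    Critical,
    High,
    Low,
}

struct TestResult {
    passed: bool,
    priority: Priority,
}

fn weight(p: Priority) -> f64 {
    match p {
        Priority::Critical => 10.0,
        Priority::High => 3.0,
        Priority::Low => 1.0,
    }
}

/// V_log: sum of weights over failing tests only, so one critical
/// failure outweighs several low-priority ones.
fn v_log(results: &[TestResult]) -> f64 {
    results
        .iter()
        .filter(|r| !r.passed)
        .map(|r| weight(r.priority))
        .sum()
}

fn main() {
    let results = [
        TestResult { passed: false, priority: Priority::Critical },
        TestResult { passed: true, priority: Priority::High },
        TestResult { passed: false, priority: Priority::Low },
    ];
    println!("{}", v_log(&results)); // prints 11
}
```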
SRBN: Stabilized Recursive Barrier Network¶
The SRBN engine in Perspt is based on the paper “Stability is All You Need: Lyapunov-Guided Hierarchies for Long-Horizon LLM Reliability” by Vikrant R. and Ronak R. (pre-publication). The paper introduces a topological framework that reformulates LLM agency as a sheaf-theoretic control problem, replacing probabilistic search with Lyapunov stability analysis. Key theoretical contributions include:
Input-to-State Stability (ISS) proof showing bounded reasoning errors result in bounded system deviation (paper result)
Flow Matching Barriers that project diverging agent trajectories back onto the safe manifold (paper result)
Adaptive Flow Speculation for latency reduction via branch prediction
Theoretical reliability scaling from exponential decay to logarithmic: \(O(\log N)\) (paper prediction)
Perspt implements this theory as an experimental coding agent, governed by PSP-7. The mathematical framework is mature; empirical benchmarks on this implementation have not yet been published.
```dot
digraph srbn {
  rankdir=LR;
  node [shape=box, style="rounded,filled", fontname="Helvetica", fontsize=10];
  detect [label="Detect\nPlugins", fillcolor="#E0F7FA"];
  plan [label="Plan\n(Architect)", fillcolor="#E8F5E9"];
  gen [label="Generate\n(Actuator)", fillcolor="#FFF3E0"];
  verify [label="Verify\n(LSP+Tests)", fillcolor="#F3E5F5"];
  check [label="V(x) < e?", shape=diamond, fillcolor="#FFECB3"];
  sheaf [label="Sheaf\nValidation", fillcolor="#E8EAF6"];
  commit [label="Commit\n(Ledger)", fillcolor="#C8E6C9"];
  detect -> plan;
  plan -> gen;
  gen -> verify;
  verify -> check;
  check -> gen [label="retry", style=dashed, color="#E53935"];
  check -> sheaf [label="stable"];
  sheaf -> commit;
}
```
SRBN Control Flow (PSP-5)¶
Lyapunov Energy:
V_syn — LSP diagnostic count (errors + warnings)
V_str — Structural contract violations
V_log — Weighted test failures (pytest)
V_boot — Bootstrap command exit codes (build, init)
V_sheaf — Cross-node consistency failures
Default weights: alpha = 1.0, beta = 0.5, gamma = 2.0. Convergence threshold: epsilon = 0.10.
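As a sketch, the total energy might combine the components like this. Note that the pairing of alpha/beta/gamma with particular terms is an assumption, since the docs list five components but only three weights:

```rust
// Sketch of total Lyapunov energy. The mapping of alpha/beta/gamma to
// particular components is an assumption; the docs list five
// components but only three weights.

const ALPHA: f64 = 1.0;
const BETA: f64 = 0.5;
const GAMMA: f64 = 2.0;
const EPSILON: f64 = 0.10;

struct Energy {
    v_syn: f64, // LSP diagnostic count
    v_str: f64, // structural contract violations
    v_log: f64, // weighted test failures
}

fn total(e: &Energy) -> f64 {
    ALPHA * e.v_syn + BETA * e.v_str + GAMMA * e.v_log
}

fn converged(e: &Energy) -> bool {
    total(e) < EPSILON
}

fn main() {
    let e = Energy { v_syn: 0.0, v_str: 0.1, v_log: 0.0 };
    // 1.0*0.0 + 0.5*0.1 + 2.0*0.0 = 0.05 < 0.10
    println!("V = {}, converged = {}", total(&e), converged(&e));
}
```

Logic failures dominate under these defaults (gamma = 2.0), which matches the intent that failing tests should block a commit more strongly than residual warnings.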
CLI Commands¶
| Command | Description | Example |
|---|---|---|
| | Interactive TUI chat (default) | |
| | Autonomous multi-file coding (experimental) | |
| | Initialize project config | |
| | View or edit configuration | |
| | Query Merkle change history | |
| | Session lifecycle and energy | |
| | Cancel running session | |
| | Resume interrupted session | |
| | View LLM request logs | |
| | Real-time web monitoring | |
| | CLI chat without TUI | |
Supported Providers¶
| Provider | Environment Variable | Notes |
|---|---|---|
| OpenAI | | GPT-4o, o3-mini, o1-preview, GPT-4.1 |
| Anthropic | | Claude Sonnet 4, Claude Opus 4 |
| Google | | Gemini 2.5 Pro/Flash, Gemini 3.1 Pro/Flash |
| Groq | | Llama-based models (ultra-fast inference) |
| Cohere | | Command R+ |
| xAI | | Grok |
| DeepSeek | | DeepSeek Coder |
| Ollama | (none) | Local models — no API key required |
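Zero-config auto-detection can be sketched as a first-match scan over well-known environment variables. The variable names below are the vendors' conventional ones and are assumptions here, as is the function shape; Perspt's actual lookup order may differ:

```rust
// Sketch of zero-config provider auto-detection. The environment
// variable names are the vendors' conventional ones and are
// assumptions; Perspt's actual lookup may differ.

const CANDIDATES: &[(&str, &str)] = &[
    ("OpenAI", "OPENAI_API_KEY"),
    ("Anthropic", "ANTHROPIC_API_KEY"),
    ("Groq", "GROQ_API_KEY"),
    ("DeepSeek", "DEEPSEEK_API_KEY"),
];

/// First provider whose key the lookup function can resolve.
/// Taking the lookup as a closure keeps the logic testable.
fn detect_provider(get: impl Fn(&str) -> Option<String>) -> Option<&'static str> {
    CANDIDATES
        .iter()
        .find(|(_, var)| get(var).is_some())
        .map(|(name, _)| *name)
}

fn main() {
    match detect_provider(|var| std::env::var(var).ok()) {
        Some(p) => println!("auto-detected provider: {p}"),
        None => println!("no API key found; a local Ollama model needs none"),
    }
}
```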
Philosophy¶
The keyboard hums, the screen aglow,
AI's wisdom, a steady flow.
Through SRBN's loop, stability we find,
Code that works, refined and aligned.

—The Perspt Manifesto
Perspt embodies the belief that AI tools should be:
Accessible — A simple `perspt` command connects you to any LLM provider
Fast — Rust-native performance with async streaming
Stable — Lyapunov energy guides convergence before commit (SRBN agent, based on paper theory)
Secure — Policy-controlled execution with workspace bounds
Extensible — Modular 9-crate architecture
Experimental — A testbed for control-theoretic approaches to LLM reliability
Next Steps¶
- Get running in 5 minutes.
- Autonomous multi-file coding.
- Understand the 9-crate design.