Getting Started
Welcome to Perspt! This guide will get you up and running with your first AI conversation in just a few minutes.
Prerequisites
Before installing Perspt, ensure you have the following:
System Requirements

| Component | Requirement |
|---|---|
| Operating System | Linux, macOS, or Windows |
| Rust Toolchain | Rust 1.82.0 or later |
| Terminal | Any modern terminal emulator |
| Internet Connection | Required for AI provider API calls |
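You can confirm the toolchain requirement from your shell before building. This is a generic sketch using the standard `rustc` binary installed by rustup:

```bash
# Check whether the Rust toolchain is installed and report its version
if command -v rustc > /dev/null 2>&1; then
  rustc --version
else
  echo "rustc not found - install it from https://rustup.rs"
fi
```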
API Keys
You'll need an API key from at least one AI provider:
OpenAI:
1. Visit OpenAI Platform
2. Sign up or log in to your account
3. Navigate to the API Keys section
4. Create a new API key
5. Copy and save it securely
export OPENAI_API_KEY="sk-your-openai-api-key-here"
Anthropic:
1. Visit Anthropic Console
2. Sign up or log in
3. Go to API Keys
4. Generate a new key
5. Save it securely
export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
Google:
1. Visit Google AI Studio
2. Create or select a project
3. Generate an API key
4. Configure authentication
export GOOGLE_API_KEY="your-google-api-key-here"
Ollama (local, no API key required):
1. Install Ollama from ollama.ai
2. Pull a model
3. Start the Ollama service
ollama pull llama3.2
# Ollama service starts automatically
Quick Installation
Method 1: From Source (Recommended)
# Clone the repository
git clone https://github.com/eonseed/perspt.git
cd perspt
# Build the project
cargo build --release
# Install to your PATH (optional)
cargo install --path .
# Or run directly
./target/release/perspt
Method 2: Using Cargo
# Install from crates.io (when published)
cargo install perspt
# Run Perspt
perspt
Method 3: Download Binary
# Download the latest release (replace with actual URL)
curl -L https://github.com/eonseed/perspt/releases/latest/download/perspt-linux-x86_64.tar.gz | tar xz
# Make executable and move to PATH
chmod +x perspt
sudo mv perspt /usr/local/bin/
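After moving the binary into place, a quick check confirms it is actually visible on your PATH:

```bash
# Confirm the binary resolves on PATH before relying on it
if command -v perspt > /dev/null 2>&1; then
  echo "perspt found at: $(command -v perspt)"
else
  echo "perspt not on PATH - check that /usr/local/bin is in \$PATH"
fi
```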
Your First Conversation
Let's start your first AI conversation with Perspt! You can choose between two interface modes:
Interactive TUI Mode - Rich terminal interface with markdown rendering (default)
Simple CLI Mode - Minimal command-line interface for scripting and accessibility (NEW!)
Zero-Config Quick Start
NEW! Perspt now features intelligent automatic provider detection. Simply set an environment variable for any supported provider, and Perspt will automatically detect and use it - no additional configuration needed!
Note
Automatic Provider Detection Priority:
1. OpenAI (OPENAI_API_KEY)
2. Anthropic (ANTHROPIC_API_KEY)
3. Google Gemini (GEMINI_API_KEY)
4. Groq (GROQ_API_KEY)
5. Cohere (COHERE_API_KEY)
6. XAI (XAI_API_KEY)
7. DeepSeek (DEEPSEEK_API_KEY)
8. Ollama (no API key needed - auto-detected if running)
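The priority above can be sketched as a small shell function. This is an illustrative approximation of the detection order, not Perspt's actual implementation:

```bash
# Hypothetical sketch of the provider detection order (first match wins)
detect_provider() {
  for pair in "OpenAI:OPENAI_API_KEY" "Anthropic:ANTHROPIC_API_KEY" \
              "Gemini:GEMINI_API_KEY" "Groq:GROQ_API_KEY" \
              "Cohere:COHERE_API_KEY" "XAI:XAI_API_KEY" \
              "DeepSeek:DEEPSEEK_API_KEY"; do
    name="${pair%%:*}"
    var="${pair#*:}"
    # Indirect expansion: read the value of the variable named in $var
    if [ -n "$(eval "printf '%s' \"\${$var}\"")" ]; then
      printf '%s\n' "$name"
      return 0
    fi
  done
  # No API key set: fall back to a local Ollama instance if one is running
  printf 'Ollama\n'
}
```

With no keys set the function reports Ollama; exporting `OPENAI_API_KEY` makes OpenAI win over any lower-priority key.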
Rich terminal interface with markdown rendering and scrollable history:
# Set your API key
export OPENAI_API_KEY="sk-your-actual-api-key-here"
# Launch Perspt in TUI mode (default)
perspt
# Automatically uses OpenAI with gpt-4o-mini
Minimal command-line interface perfect for scripting and accessibility:
# Set your API key
export OPENAI_API_KEY="sk-your-actual-api-key-here"
# Launch Perspt in simple CLI mode
perspt simple-chat
# Unix-style prompt with streaming responses
# Set your API key
export ANTHROPIC_API_KEY="sk-ant-your-key"
# TUI mode (default)
perspt
# Simple CLI mode
perspt simple-chat
# Automatically uses Anthropic with claude-3-5-sonnet-20241022
# Set your API key
export GEMINI_API_KEY="your-gemini-key"
# TUI mode (default)
perspt
# Simple CLI mode with logging
perspt simple-chat --log-file gemini-session.txt
# Automatically uses Gemini with gemini-1.5-flash
# Just make sure Ollama is running
ollama serve
# TUI mode (default)
perspt
# Simple CLI mode for scripting
perspt simple-chat
# Auto-detects Ollama if no other providers found
Step 1: Set Your API Key (Manual Configuration)
If you prefer manual configuration or want to override automatic detection:
# For OpenAI (most common)
export OPENAI_API_KEY="sk-your-actual-api-key-here"
# Verify it's set
echo $OPENAI_API_KEY
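A quick format check can catch copy-paste mistakes early. OpenAI keys conventionally start with `sk-`; the warning messages here are illustrative:

```bash
# Warn if the key does not look like an OpenAI-style key
case "$OPENAI_API_KEY" in
  sk-*) echo "Key format looks OK" ;;
  "")   echo "Warning: OPENAI_API_KEY is empty" ;;
  *)    echo "Warning: key does not start with sk-" ;;
esac
```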
Step 2: Launch Perspt
Choose between TUI mode (rich interface) or Simple CLI mode (minimal interface):
TUI Mode (Default)
# Start with automatic detection (recommended)
perspt
# Or specify model explicitly
perspt chat --model gpt-4o-mini
You should see a welcome screen like this:
┌──────────────────────────────────────────────────────────┐
│                    Welcome to Perspt!                    │
│               Your Terminal's Window to AI               │
├──────────────────────────────────────────────────────────┤
│                                                          │
│  Provider: OpenAI                                        │
│  Model: gpt-4o-mini                                      │
│  Status: Ready                                           │
│                                                          │
│  Type your message and press Enter to start chatting!    │
│  Press Ctrl+C to exit                                    │
│                                                          │
└──────────────────────────────────────────────────────────┘

You:
Simple CLI Mode (NEW!)
# Start simple CLI mode
perspt simple-chat
# With session logging
perspt simple-chat --log-file my-session.txt
You should see a minimal interface like this:
Perspt Simple CLI Mode
Model: gpt-4o-mini
Type 'exit' or press Ctrl+D to quit.
>
The simple CLI mode is perfect for:
Scripting and automation
Accessibility (screen readers)
Unix-style workflows
Session logging
Lightweight usage
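For scripting, a thin wrapper function keeps prompts composable with other Unix tools. This sketch assumes `simple-chat` reads the prompt from stdin; verify the actual contract with `perspt simple-chat --help`:

```bash
# Hypothetical wrapper: send one prompt to simple-chat and capture the reply
ask() {
  printf '%s\n' "$1" | perspt simple-chat
}

# Illustrative usage: reply=$(ask "Summarize this error message")
```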
Step 3: Start Chatting
Type your first message and press Enter:
You: Hello! Can you explain what Rust is in simple terms?
Assistant: Hello! Rust is a modern programming language that's designed to be both
fast and safe. Here are the key things that make Rust special:
**Speed**: Rust programs run as fast as C and C++ programs because it compiles
directly to machine code.
**Safety**: Unlike C/C++, Rust prevents common programming errors like accessing
invalid memory or data races in concurrent programs.
**No Garbage Collector**: Rust manages memory automatically without needing a
garbage collector, which keeps programs fast and predictable.
**Growing Ecosystem**: It's increasingly used for web backends, system programming,
blockchain, and even WebAssembly applications.
Think of Rust as giving you the performance of low-level languages like C, but
with the safety and ergonomics of higher-level languages like Python or Java.
You:
Congratulations! You've successfully started your first conversation with Perspt.
Basic Commands
While chatting in TUI mode, you can use these keyboard shortcuts:
| Shortcut | Action |
|---|---|
| Enter | Send your message |
| Ctrl+J or Ctrl+Enter | Insert a newline in the input |
| PageUp / PageDown | Scroll through chat history (10 lines) |
| Ctrl+Up / Ctrl+Down | Scroll through chat history (1 line) |
| Mouse wheel | Scroll through chat history (3 lines) |
| Ctrl+C or Ctrl+Q | Exit Perspt |
TUI Scrolling Behavior
The chat interface features intelligent scrolling:
Auto-scroll during streaming: New AI response content is automatically scrolled into view
Manual scroll disables auto-scroll: Scrolling up lets you read previous content without interruption
Return to bottom: Scrolling to the end of the conversation re-enables auto-scroll
The TUI uses virtual scrolling to efficiently handle very long conversations, rendering only the visible portion of the chat history.
Switching Models
You can easily switch between different AI models and providers:
OpenAI Models
# Use GPT-4
perspt chat --model gpt-4
# Use GPT-4 Turbo
perspt chat --model gpt-4-turbo-preview
# Use GPT-4o Mini (recommended for most use cases)
perspt chat --model gpt-4o-mini
# Use latest GPT-4.1
perspt chat --model gpt-4.1
Other Providers
# Use Anthropic Claude (auto-detected from ANTHROPIC_API_KEY)
perspt chat --model claude-3-sonnet-20240229
# Use Google Gemini (auto-detected from GEMINI_API_KEY)
perspt chat --model gemini-pro
# Use Ollama (Local - auto-detected)
perspt chat --model llama3.2
Note
The provider is auto-detected based on the model name and available API keys.
You don't need to specify --provider in most cases.
Basic Configuration
For frequent use, create a configuration file to set your preferences:
Create Config File
# Create a config.json file
touch config.json
Add your configuration:
{
"api_key": "your-api-key-here",
"default_model": "gpt-4o-mini",
"default_provider": "openai",
"provider_type": "openai"
}
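Since a malformed config file is a common stumbling block, you can validate it before launching. This uses Python's standard-library `json.tool` module, assumed to be available as `python3`:

```bash
# Verify config.json parses as JSON before pointing Perspt at it
if python3 -m json.tool config.json > /dev/null 2>&1; then
  echo "config.json is valid JSON"
else
  echo "config.json is missing or malformed"
fi
```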
Use Config File
# Use your configuration file
perspt --config config.json
# Or place config.json in the same directory as perspt
perspt
Common First-Time Issues
Issue: "API key not found"
Solution: Make sure your API key is properly set:
# Check if the key is set
echo $OPENAI_API_KEY
# If empty, set it again
export OPENAI_API_KEY="sk-your-key-here"
Issue: "Model not available"
Solution: Check available models for your provider:
# List available models
perspt --list-models
# Use a specific model that's available
perspt --model-name gpt-4o-mini
Issue: "Network connection failed"
Solution: Check your internet connection and API key permissions:
# Test with a simple curl command
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
"https://api.openai.com/v1/models"
Issue: Terminal display problems
Solution: Ensure your terminal supports modern features:
# Try a different terminal emulator if needed
# Update your terminal if it's very old
# Check terminal size
echo $COLUMNS x $LINES
Next Steps
Now that you have Perspt running, explore these topics:
Learn about advanced features, keyboard shortcuts, and productivity tips.
Customize Perspt to fit your workflow perfectly.
Set up and switch between different AI providers.
Discover powerful features like input queuing and markdown rendering.
Example Conversations
Here are some example conversations to inspire your Perspt usage:
Code Review
You: Can you review this Rust function?
```rust
fn calculate_fibonacci(n: u32) -> u32 {
if n <= 1 {
return n;
}
calculate_fibonacci(n - 1) + calculate_fibonacci(n - 2)
}
```
Assistant: I see you've implemented a recursive Fibonacci function. Here's my review:
**Issues:**
1. **Performance**: This has exponential time complexity O(2^n) due to repeated calculations
2. **Stack overflow**: Large values of `n` will cause stack overflow
3. **Type limitations**: `u32` limits the range of Fibonacci numbers you can calculate
**Improved version:**
```rust
fn fibonacci_iterative(n: u32) -> u64 {
if n <= 1 {
return n as u64;
}
let (mut a, mut b) = (0u64, 1u64);
for _ in 2..=n {
let temp = a + b;
a = b;
b = temp;
}
b
}
```
This iterative version is O(n) time and O(1) space, much more efficient!
Technical Explanation
You: Explain how async/await works in Rust
Assistant: Async/await in Rust is a powerful feature for handling concurrent operations.
Here's how it works:
## The Basics
**Async Functions**: Return a `Future` instead of executing immediately
```rust
async fn fetch_data() -> String {
// This doesn't run until awaited
"data".to_string()
}
```
**Await**: Actually executes the future
```rust
let result = fetch_data().await;
```
## Under the Hood
1. **State Machine**: The compiler transforms async functions into state machines
2. **Zero-cost**: No heap allocations for simple futures
3. **Cooperative**: Tasks yield control at await points
## Key Concepts
- **Futures are lazy**: They don't do anything until polled
- **Executors**: Runtime systems (like Tokio) that drive futures to completion
- **Non-blocking**: Awaiting doesn't block the thread
This makes Rust excellent for high-performance concurrent applications!
Troubleshooting
No Provider Detected
If you see an error message like this when launching Perspt:
❌ No LLM provider configured!

To get started, either:

1. Set an environment variable for a supported provider:
   • OPENAI_API_KEY=sk-your-key
   • ANTHROPIC_API_KEY=sk-ant-your-key
   • GEMINI_API_KEY=your-key
   # ... (shows all supported providers)

2. Use command line arguments:
   perspt --provider openai --api-key sk-your-key
Solution: Set at least one API key environment variable:
# Quick fix - set any supported provider
export OPENAI_API_KEY="sk-your-actual-key"
perspt # Should now auto-detect and start
Provider Priority
If you have multiple API keys set and want to use a specific provider:
# Override automatic detection
perspt --provider anthropic # Forces Anthropic even if OpenAI key is set
# Or unset other providers temporarily
unset OPENAI_API_KEY
export ANTHROPIC_API_KEY="your-key"
perspt # Now auto-detects Anthropic
Connection Issues
If Perspt detects your provider but can't connect:
Check your API key: Ensure it's valid and has sufficient credits
Test your connection: Try a simple curl request to the provider's API
Check firewall: Ensure your network allows HTTPS connections
Try Ollama: For offline usage, install Ollama for local models
# Test OpenAI connection
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
https://api.openai.com/v1/models
Tips for Success
Start Simple: Begin with basic conversations before exploring advanced features
Experiment: Try different models and providers to find what works best for your use case
Use Configuration: Set up a config file for your most common settings
Join the Community: Connect with other Perspt users for tips and support
Stay Updated: Check for updates regularly to get new features and improvements
See also
Installation Guide - Detailed installation instructions
Configuration Guide - Complete configuration guide
Basic Usage - Everyday usage patterns
Troubleshooting - Common issues and solutions