Contributing¶
Welcome to the Perspt project! This guide will help you get started with contributing to Perspt, whether you’re fixing bugs, adding features, or improving documentation.
Getting Started¶
Prerequisites¶
Before contributing, ensure you have:
Rust (latest stable version)
Git for version control
A GitHub account for pull requests
Code editor with Rust support (VS Code with rust-analyzer recommended)
Development Environment Setup¶
Fork and Clone:
# Fork the repository on GitHub, then:
git clone https://github.com/YOUR_USERNAME/perspt.git
cd perspt
Set up the development environment:
# Install Rust if not already installed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install additional components
rustup component add clippy rustfmt
# Install development dependencies (optional but recommended)
cargo install cargo-watch cargo-nextest
Set up API keys for testing:
# Copy the example config
cp config.json.example config.json
# Edit config.json with your API keys (optional for basic development)
# Or set environment variables:
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
Verify the setup:
# Build the project
cargo build
# Run tests (some may be skipped without API keys)
cargo test
# Check formatting and linting
cargo fmt --check
cargo clippy -- -D warnings
# Test the application
cargo run -- "Hello, can you help me?"
Development Workflow¶
Branch Strategy¶
We follow a simplified Git flow:
main: Stable, production-ready code
develop: Integration branch for new features
feature/: Feature development branches
fix/: Bug fix branches
docs/: Documentation improvement branches
Creating a Feature Branch¶
# Ensure you're on the latest develop branch
git checkout develop
git pull origin develop
# Create a new feature branch
git checkout -b feature/your-feature-name
# Make your changes
# ...
# Commit your changes
git add .
git commit -m "feat: add your feature description"
# Push to your fork
git push origin feature/your-feature-name
Code Style and Standards¶
Rust Style Guide¶
We follow the official Rust style guide with these additions:
Formatting:
# Auto-format your code
cargo fmt
Linting:
# Check for common issues
cargo clippy -- -D warnings
Documentation:
/// Brief description of the function.
///
/// More detailed explanation if needed.
///
/// # Arguments
///
/// * `param1` - Description of parameter
/// * `param2` - Description of parameter
///
/// # Returns
///
/// Description of return value
///
/// # Errors
///
/// Description of possible errors
///
/// # Examples
///
/// ```
/// let result = function_name(arg1, arg2);
/// assert_eq!(result, expected);
/// ```
pub fn function_name(param1: Type1, param2: Type2) -> Result<ReturnType, Error> {
// Implementation
}
Naming Conventions¶
Functions and variables: snake_case
Types and traits: PascalCase
Constants: SCREAMING_SNAKE_CASE
Modules: snake_case
// Good
pub struct LlmProvider;
pub trait ConfigManager;
pub fn process_message() -> Result<String, Error>;
pub const DEFAULT_TIMEOUT: Duration = Duration::from_secs(30);
// Avoid
pub struct llmProvider;
pub trait configManager;
pub fn ProcessMessage() -> Result<String, Error>;
Error Handling¶
Use the thiserror crate for error definitions:
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ConfigError {
#[error("Configuration file not found: {path}")]
FileNotFound { path: String },
#[error("Invalid configuration: {reason}")]
Invalid { reason: String },
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
}
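Because of the #[from] attribute, std::io::Error values convert into ConfigError automatically through the ? operator. A minimal usage sketch (the load_raw_config helper is hypothetical, shown only to illustrate the conversion):
use std::fs;

// Hypothetical helper: reads a config file, mapping IO failures into ConfigError::Io
pub fn load_raw_config(path: &str) -> Result<String, ConfigError> {
    // `?` converts std::io::Error into ConfigError::Io via #[from]
    let contents = fs::read_to_string(path)?;
    if contents.trim().is_empty() {
        return Err(ConfigError::Invalid {
            reason: "configuration file is empty".to_string(),
        });
    }
    Ok(contents)
}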
Testing Guidelines¶
Test Structure¶
Organize tests in the same file as the code they test:
pub struct MessageProcessor {
config: Config,
}
impl MessageProcessor {
pub fn new(config: Config) -> Self {
Self { config }
}
    pub fn validate_message(&self, input: &str) -> Result<(), ProcessError> {
        // Reject empty prompts before they reach the provider
        if input.trim().is_empty() {
            return Err(ProcessError::EmptyMessage);
        }
        Ok(())
    }
    pub async fn process(
        &self,
        input: &str,
        tx: tokio::sync::mpsc::UnboundedSender<String>,
    ) -> Result<(), ProcessError> {
        // Validate the prompt, then stream the response through `tx` using the GenAI crate
        self.validate_message(input)?;
        send_message(&self.config, input, tx).await?;
        Ok(())
    }
}
#[cfg(test)]
mod tests {
use super::*;
use tokio::sync::mpsc;
#[test]
fn test_message_validation() {
let processor = MessageProcessor::new(Config::default());
assert!(processor.validate_message("valid message").is_ok());
assert!(processor.validate_message("").is_err());
}
#[tokio::test]
async fn test_async_processing() {
// Skip if no API key available
if std::env::var("OPENAI_API_KEY").is_err() {
return;
}
let config = Config {
provider: "openai".to_string(),
api_key: std::env::var("OPENAI_API_KEY").ok(),
model: Some("gpt-3.5-turbo".to_string()),
..Default::default()
};
let (tx, _rx) = mpsc::unbounded_channel(); // keep the receiver alive; it is otherwise unused here
let result = send_message(&config, "test", tx).await;
assert!(result.is_ok());
}
}
Integration Tests¶
Place integration tests in the tests/ directory:
// tests/integration_test.rs
use perspt::config::Config;
use perspt::llm_provider::send_message;
use std::env;
use tokio::sync::mpsc;
#[tokio::test]
async fn test_full_conversation_flow() {
// Skip if no API keys available
if env::var("OPENAI_API_KEY").is_err() {
return;
}
let config = Config {
provider: "openai".to_string(),
api_key: env::var("OPENAI_API_KEY").ok(),
model: Some("gpt-3.5-turbo".to_string()),
temperature: Some(0.7),
max_tokens: Some(100),
timeout_seconds: Some(30),
};
let (tx, mut rx) = mpsc::unbounded_channel();
// Test streaming response
let result = send_message(&config, "Hello, how are you?", tx).await;
assert!(result.is_ok());
// Verify we receive streaming content
let mut received_content = String::new();
while let Ok(content) = rx.try_recv() {
received_content.push_str(&content);
}
assert!(!received_content.is_empty());
}
#[test]
fn test_config_loading_hierarchy() {
// Test config loading from different sources
let config = Config::load();
assert!(config.is_ok());
}
Test Categories¶
We have several categories of tests:
Unit Tests: Test individual functions and methods
# Run only unit tests
cargo test --lib
Integration Tests: Test module interactions
# Run integration tests
cargo test --test '*'
API Tests: Test against real APIs (require API keys)
# Run with API keys set
OPENAI_API_KEY=xxx ANTHROPIC_API_KEY=yyy cargo test
UI Tests: Test terminal UI components
# Run UI tests (may require a TTY)
cargo test ui::tests
Test Utilities¶
Use these utilities for consistent testing:
// Test configuration helper
impl Config {
pub fn test_config() -> Self {
Config {
provider: "test".to_string(),
api_key: Some("test-key".to_string()),
model: Some("test-model".to_string()),
temperature: Some(0.7),
max_tokens: Some(100),
timeout_seconds: Some(30),
}
}
}
// Mock message sender for testing
pub async fn mock_send_message(
_config: &Config,
message: &str,
tx: tokio::sync::mpsc::UnboundedSender<String>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
tx.send(format!("Mock response to: {}", message))?;
Ok(())
}
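For example, a unit test can exercise the streaming path with the mock sender instead of a real provider (a sketch that assumes the helpers above are in scope):
#[cfg(test)]
mod mock_tests {
    use super::*;
    use tokio::sync::mpsc;

    #[tokio::test]
    async fn test_mock_streaming() {
        let config = Config::test_config();
        let (tx, mut rx) = mpsc::unbounded_channel();

        // The mock echoes the prompt back without making any network calls
        mock_send_message(&config, "ping", tx).await.unwrap();

        let response = rx.recv().await.unwrap();
        assert_eq!(response, "Mock response to: ping");
    }
}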
Running Tests¶
# Run all tests
cargo test
# Run tests with output
cargo test -- --nocapture
# Run specific test
cargo test test_name
# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --out Html
Pull Request Process¶
Before Submitting¶
Ensure tests pass:
cargo test
cargo clippy -- -D warnings
cargo fmt --check
Update documentation if needed
Add tests for new functionality
Update changelog if applicable
PR Description Template¶
When creating a pull request, use this template:
## Description
Brief description of changes made.
## Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing performed
## Checklist
- [ ] Code follows the project's style guidelines
- [ ] Self-review completed
- [ ] Comments added to hard-to-understand areas
- [ ] Documentation updated
- [ ] No new warnings introduced
Review Process¶
Automated checks must pass (CI/CD pipeline)
Code review by at least one maintainer
Testing in development environment
Final approval and merge
Areas for Contribution¶
Good First Issues¶
Look for issues labeled good first issue:
Documentation improvements and typo fixes
Configuration validation enhancements
Error message improvements
Test coverage improvements
Code formatting and cleanup
Example configurations for new providers
Feature Development¶
Major areas where contributions are welcome:
New AI Provider Support:
// Add support for new providers in llm_provider.rs
pub async fn send_message_custom_provider(
config: &Config,
message: &str,
tx: UnboundedSender<String>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Use the GenAI crate to add new provider support
let client = genai::Client::builder()
.with_api_key(config.api_key.clone().unwrap_or_default())
.build()?;
let chat_req = genai::chat::ChatRequest::new(vec![
genai::chat::ChatMessage::user(message)
]);
let stream = client.exec_stream(&config.model.clone().unwrap_or_default(), chat_req).await?;
// Handle streaming response
// Implementation details...
Ok(())
}
UI Component Enhancements:
// Add new Ratatui components in ui.rs
pub struct CustomWidget {
content: String,
scroll_offset: u16,
}
impl CustomWidget {
pub fn render(&self, area: Rect, buf: &mut Buffer) {
let block = Block::default()
.borders(Borders::ALL)
.title("Custom Feature");
let inner = block.inner(area);
block.render(area, buf);
// Custom rendering logic using Ratatui
self.render_content(inner, buf);
}
}
Configuration System Extensions:
// Extend Config struct in config.rs
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct ExtendedConfig {
#[serde(flatten)]
pub base: Config,
// New configuration options
pub custom_endpoints: Option<HashMap<String, String>>,
pub retry_config: Option<RetryConfig>,
pub logging_config: Option<LoggingConfig>,
}
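The RetryConfig and LoggingConfig types referenced above do not exist yet; a contribution in this area would need to define them. One possible shape (illustrative only):
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct RetryConfig {
    /// Maximum number of attempts before giving up
    pub max_retries: u32,
    /// Base delay between attempts, in milliseconds
    pub backoff_ms: u64,
}

#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct LoggingConfig {
    /// Log level, e.g. "debug", "info", "warn"
    pub level: String,
    /// Optional path to a log file
    pub file: Option<String>,
}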
Performance and Reliability:
Streaming response optimizations
Better error handling and recovery
Configuration validation improvements
Memory usage optimizations for large conversations
Connection pooling and retry logic
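As a starting point for the retry item above, a generic exponential-backoff wrapper around an async operation might look like this (a sketch only; a real implementation would need to integrate with the existing error types and distinguish retryable from fatal errors):
use std::future::Future;
use std::time::Duration;

/// Retry an async operation up to `max_retries` times with exponential backoff.
/// The operation is passed as a closure so any provider call can be wrapped.
pub async fn with_retries<T, E, F, Fut>(max_retries: u32, mut op: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let mut attempt = 0;
    loop {
        match op().await {
            Ok(value) => return Ok(value),
            Err(_) if attempt < max_retries => {
                attempt += 1;
                // Back off: 400ms, 800ms, 1600ms, ...
                tokio::time::sleep(Duration::from_millis(200 * (1u64 << attempt))).await;
            }
            Err(err) => return Err(err),
        }
    }
}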
Developer Experience:
Better debugging tools and logging
Enhanced error messages with suggestions (see the sketch after this list)
Configuration validation with helpful feedback
Developer-friendly CLI options
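For the error-message item above, one direction is to attach an actionable hint to each user-facing error (illustrative only; the real error types in the codebase may differ):
use thiserror::Error;

#[derive(Error, Debug)]
pub enum UserFacingError {
    #[error("Provider '{provider}' is not recognized.\nHint: check the \"provider\" field in config.json")]
    UnknownProvider { provider: String },

    #[error("No API key found for {provider}.\nHint: set the {env_var} environment variable or add \"api_key\" to config.json")]
    MissingApiKey { provider: String, env_var: String },
}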
Bug Reports and Issues¶
Filing Bug Reports¶
When filing a bug report, include:
Clear description of the issue
Steps to reproduce the problem
Expected behavior vs actual behavior
Environment information:
- OS: [e.g., macOS 12.0, Ubuntu 20.04]
- Perspt version: [e.g., 1.0.0]
- Rust version: [e.g., 1.70.0]
- Provider: [e.g., OpenAI GPT-4]
Configuration (sanitized):
{ "provider": "openai", "model": "gpt-4", "api_key": "[REDACTED]" }
Error messages (full text)
Log files if available
Feature Requests¶
For feature requests, provide:
Clear description of the desired feature
Use case and motivation
Proposed implementation (if you have ideas)
Alternatives considered
Additional context or examples
Documentation Contributions¶
Types of Documentation¶
API documentation: Rust doc comments in source code
Developer guides: Sphinx documentation in docs/perspt_book/
README: Project overview and quick start
Examples: Sample configurations and use cases
Changelog: Version history and migration guides
Documentation Standards¶
Use clear, concise language
Include working code examples that match current implementation
Keep examples up-to-date with current API and dependencies
Cross-reference related sections using Sphinx references
Follow reStructuredText formatting for Sphinx docs
Building Documentation¶
Rust API Documentation:
# Generate and open Rust documentation
cargo doc --open --no-deps --all-features
Sphinx Documentation:
# Build HTML documentation
cd docs/perspt_book
uv run make html
# Build PDF documentation
uv run make latexpdf
# Clean and rebuild everything
uv run make clean && uv run make html && uv run make latexpdf
Watch Mode for Development:
# Auto-rebuild on changes
cd docs/perspt_book
uv run sphinx-autobuild source build/html
Available VS Code Tasks:
You can also use the VS Code tasks for documentation:
“Build Sphinx HTML Documentation”
“Build Sphinx PDF Documentation”
“Watch and Auto-build HTML Documentation”
“Open Sphinx HTML Documentation”
“Validate Documentation Links”
Writing Documentation¶
Code Examples: Ensure all code examples compile and work:
// Good: Complete, working example
use perspt::config::Config;
use tokio::sync::mpsc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load()?;
let (tx, mut rx) = mpsc::unbounded_channel();
perspt::llm_provider::send_message(&config, "Hello", tx).await?;
while let Some(response) = rx.recv().await {
println!("{}", response);
}
Ok(())
}
Configuration Examples: Use realistic, sanitized configs:
{
"provider": "openai",
"api_key": "${OPENAI_API_KEY}",
"model": "gpt-4",
"temperature": 0.7,
"max_tokens": 2000,
"timeout_seconds": 30
}
Community Guidelines¶
Code of Conduct¶
We follow the Rust Code of Conduct. In summary:
Be friendly and patient
Be welcoming
Be considerate
Be respectful
Be careful in word choice
When we disagree, try to understand why
Communication Channels¶
GitHub Issues: Bug reports and feature requests
GitHub Discussions: General questions and ideas
Discord/Slack: Real-time community chat
Email: Direct contact with maintainers
Recognition¶
Contributors are recognized in:
CONTRIBUTORS.md: List of all contributors
Release notes: Major contributions highlighted
Documentation: Author attribution where appropriate
Community highlights: Regular contributor spotlights
Release Process¶
Version Numbering¶
We follow Semantic Versioning (SemVer):
MAJOR: Breaking changes
MINOR: New features (backward compatible)
PATCH: Bug fixes (backward compatible)
Release Cycle¶
Major releases: Every 6-12 months
Minor releases: Every 1-3 months
Patch releases: As needed for critical fixes
Next Steps¶
See the following documentation for more detailed information:
Architecture - Understanding Perspt’s internal architecture
Extending Perspt - How to extend Perspt with new features
Testing - Testing strategies and best practices
User Guide - User guide for understanding the application
API Reference - API reference documentation
Development Workflow Tips¶
Using VS Code Tasks¶
The project includes several VS Code tasks for common development activities:
# Available tasks (use Ctrl+Shift+P -> "Tasks: Run Task"):
- "Generate Documentation" (cargo doc)
- "Build Sphinx HTML Documentation"
- "Build Sphinx PDF Documentation"
- "Watch and Auto-build HTML Documentation"
- "Clean and Build All Documentation"
- "Validate Documentation Links"
Hot Reloading During Development¶
For faster development cycles:
# Watch for changes and rebuild
cargo install cargo-watch
cargo watch -x 'build'
# Watch and run tests
cargo watch -x 'test'
# Watch and run with sample input
cargo watch -x 'run -- "test message"'
Debugging¶
Enable Debug Logging:
# Set environment variable for detailed logs
export RUST_LOG=debug
cargo run -- "your message"
Debug Streaming Issues:
The project includes debug scripts:
# Debug long responses and streaming
./debug-long-response.sh
Use Rust Debugger:
// Add debug prints in your code
eprintln!("Debug: config = {:?}", config);
// Use dbg! macro for quick debugging
let result = dbg!(some_function());
Project Structure Understanding¶
Key files and their purposes:
src/main.rs: CLI entry point, panic handling, terminal setup
src/config.rs: Configuration loading and validation
src/llm_provider.rs: GenAI integration and streaming
src/ui.rs: Ratatui terminal UI components
Cargo.toml: Dependencies and project metadata
config.json.example: Sample configuration file
docs/perspt_book/: Sphinx documentation source
tests/: Integration tests
validate-docs.sh: Documentation validation script
Common Development Patterns¶
Error Handling Pattern:
use anyhow::{Context, Result};
use thiserror::Error;
#[derive(Error, Debug)]
pub enum MyError {
#[error("Configuration error: {0}")]
Config(String),
#[error("Network error")]
Network(#[from] reqwest::Error),
}
pub fn example_function() -> Result<String> {
let config = load_config()
.context("Failed to load configuration")?;
process_config(&config)
.context("Failed to process configuration")
}
Async/Await Pattern:
use anyhow::{Context, Result};
use tokio::sync::mpsc::{UnboundedSender, UnboundedReceiver};
pub async fn stream_handler(
mut rx: UnboundedReceiver<String>,
tx: UnboundedSender<String>,
) -> Result<()> {
while let Some(message) = rx.recv().await {
let processed = process_message(&message).await?;
tx.send(processed).context("Failed to send processed message")?;
}
Ok(())
}
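A handler like this is typically spawned so it runs concurrently with the UI loop. A usage sketch (channel names and the println! sink are illustrative):
use anyhow::Result;
use tokio::sync::mpsc;

async fn run_pipeline() -> Result<()> {
    // Raw chunks flow from the provider into the handler...
    let (provider_tx, provider_rx) = mpsc::unbounded_channel::<String>();
    // ...and processed messages flow from the handler to the UI.
    let (ui_tx, mut ui_rx) = mpsc::unbounded_channel::<String>();

    // Run the handler concurrently with the rest of the application
    tokio::spawn(stream_handler(provider_rx, ui_tx));

    // Simulate a provider chunk arriving
    provider_tx.send("hello".to_string())?;
    drop(provider_tx); // closing the sender lets stream_handler exit

    // Drain processed messages as they arrive
    while let Some(processed) = ui_rx.recv().await {
        println!("{}", processed);
    }
    Ok(())
}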
Configuration Pattern:
use serde::{Deserialize, Serialize};
#[derive(Debug, Deserialize, Serialize, Clone)]
pub struct ModuleConfig {
pub enabled: bool,
pub timeout: Option<u64>,
#[serde(default)]
pub advanced_options: AdvancedOptions,
}
impl Default for ModuleConfig {
fn default() -> Self {
Self {
enabled: true,
timeout: Some(30),
advanced_options: AdvancedOptions::default(),
}
}
}
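The AdvancedOptions type is a placeholder here; in a real module it would be defined alongside the config, for example (illustrative):
#[derive(Debug, Deserialize, Serialize, Clone, Default)]
pub struct AdvancedOptions {
    /// Retry transient failures before surfacing an error
    pub retry_on_error: bool,
    /// Optional cap on concurrent requests
    pub max_concurrent_requests: Option<u32>,
}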
Dependency Management¶
Adding New Dependencies:
# Add a new dependency
cargo add serde --features derive
# Add a development dependency
cargo add --dev mockall
# Add an optional dependency
cargo add optional-dep --optional
Dependency Guidelines:
Minimize dependencies: Only add what’s necessary
Use well-maintained crates: Check recent updates and issues
Consider security: Use cargo audit to check for vulnerabilities
Version pinning: Be specific about versions in Cargo.toml
# Good: Specific versions
serde = { version = "1.0.196", features = ["derive"] }
tokio = { version = "1.36.0", features = ["full"] }
# Avoid: Wildcard versions
serde = "*"
Security Auditing:
# Install cargo-audit
cargo install cargo-audit
# Run security audit
cargo audit
# The advisory database is fetched and updated automatically each time cargo audit runs
Release Process¶
Version Bumping:
# Update version in Cargo.toml
# Update CHANGELOG.md with changes
# Create release notes
# Tag the release
git tag -a v1.2.0 -m "Release version 1.2.0"
git push origin v1.2.0
Pre-release Checklist:
All tests pass: cargo test
Documentation builds: cargo doc
No clippy warnings: cargo clippy -- -D warnings
Code formatted: cargo fmt --check
CHANGELOG.md updated
Version bumped in Cargo.toml
Security audit clean: cargo audit
Release Notes Template:
## Version X.Y.Z - YYYY-MM-DD
### Added
- New features and enhancements
### Changed
- Breaking changes and modifications
### Fixed
- Bug fixes and issue resolutions
### Security
- Security-related changes
### Dependencies
- Updated dependencies
Performance Profiling¶
CPU Profiling:
# Install profiling tools
cargo install cargo-flamegraph
# Profile your application
cargo flamegraph --bin perspt -- "test message"
Memory Profiling:
# Use valgrind (Linux/macOS)
cargo build
valgrind --tool=massif target/debug/perspt "test message"
Benchmarking:
// Add to benches/benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};
fn benchmark_message_processing(c: &mut Criterion) {
c.bench_function("process_message", |b| {
b.iter(|| {
let result = process_message(black_box("test input"));
result
})
});
}
criterion_group!(benches, benchmark_message_processing);
criterion_main!(benches);
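Criterion benchmarks also need a dev-dependency and a bench target declared in Cargo.toml (the version shown is indicative; use the current release):
# Cargo.toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "benchmark"
harness = false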
Troubleshooting Common Issues¶
Build Failures:
# Clean build artifacts
cargo clean
# Update toolchain
rustup update
# Rebuild dependencies
cargo build
Test Failures:
# Run tests with output
cargo test -- --nocapture
# Run a specific test
cargo test test_name -- --exact
# Run ignored tests
cargo test -- --ignored
API Key Issues:
# Check environment variables
env | grep -i api
# Verify config file
cat ~/.config/perspt/config.json
# Test with explicit config
echo '{"provider":"openai","api_key":"test"}' | cargo run
Documentation Build Issues:
# Check Python/uv installation
uv --version
# Reinstall dependencies
cd docs/perspt_book
uv sync
# Clean and rebuild
uv run make clean && uv run make html
Getting Help¶
If you encounter issues or need guidance:
Check existing issues on GitHub
Search the documentation for similar problems
Ask in discussions for general questions
Create a detailed issue for bugs or feature requests
Join the community chat for real-time help
When asking for help, include:
Your operating system and version
Rust version (rustc --version)
Perspt version or commit hash
Full error messages
Steps to reproduce the issue
Your configuration (sanitized)
Final Notes¶
Code Quality:
Write self-documenting code with clear variable names
Add comments for complex logic
Keep functions small and focused
Use meaningful error messages
Follow Rust idioms and best practices
Testing Philosophy:
Test behavior, not implementation
Write tests before fixing bugs (TDD when possible)
Cover edge cases and error conditions
Use descriptive test names
Keep tests fast and reliable
Documentation Philosophy:
Document the “why”, not just the “what”
Keep examples current and working
Use real-world scenarios in examples
Cross-reference related concepts
Update docs with code changes
Ready to contribute? Here are your next steps:
Fork the repository and set up your environment
Find an issue to work on or propose a new feature
Read the codebase to understand the current patterns
Start small with documentation or simple fixes
Ask questions early and often
Submit your PR with tests and documentation
Welcome to the Perspt development community! 🎉