Optimization Benchmarking with Rust
A comprehensive Rust-based benchmarking framework for evaluating optimization algorithms. Developed as part of the QQN optimizer research, it enables reproducible comparison of optimizers across 62 benchmark problems, with automated statistical analysis and multi-format reporting.
Key Features
Comprehensive Benchmark Suite
62 carefully curated optimization problems spanning convex, non-convex, multimodal, and machine learning landscapes.
- Convex functions: Sphere, Matyas, Zakharov across multiple dimensions
- Non-convex unimodal: Rosenbrock, Beale, Himmelblau with ill-conditioning
- Highly multimodal: Rastrigin, Ackley, Schwefel testing global optimization
- Machine learning: Neural networks, SVM, regression problems
- Scalability testing: 2D, 5D, and 10D variants for dimension analysis
- Problem-specific convergence thresholds based on calibration runs
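For orientation, the sketch below shows roughly how a benchmark problem can be expressed in Rust. The `OptimizationProblem` trait and method names here are illustrative assumptions rather than the crate's actual API; see the source for the real problem definitions.

```rust
// Hypothetical sketch: the actual trait and method names in qqn-optimizer may differ.

/// Minimal interface a benchmark problem might expose.
pub trait OptimizationProblem {
    /// Problem dimensionality (e.g. 2, 5, or 10 for the scalability variants).
    fn dimension(&self) -> usize;
    /// Objective value at a point.
    fn evaluate(&self, x: &[f64]) -> f64;
    /// Convergence threshold calibrated for this problem.
    fn convergence_threshold(&self) -> f64;
}

/// The Sphere function: f(x) = sum(x_i^2), with its minimum of 0 at the origin.
pub struct Sphere {
    pub dim: usize,
}

impl OptimizationProblem for Sphere {
    fn dimension(&self) -> usize {
        self.dim
    }
    fn evaluate(&self, x: &[f64]) -> f64 {
        x.iter().map(|xi| xi * xi).sum()
    }
    fn convergence_threshold(&self) -> f64 {
        1e-6 // illustrative value; the real thresholds come from calibration runs
    }
}
```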
Statistical Analysis Engine
Rigorous statistical testing framework ensuring meaningful algorithm comparisons with automated hypothesis testing.
- Welch's t-test, applied automatically for samples with unequal variances
- Cohen's d effect size calculation for practical significance
- Bonferroni correction for multiple comparison validity
- Win/loss/tie matrices with statistical significance thresholds
- Friedman test for overall performance ranking validation
- Automated generation of publication-ready statistical tables
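To make the statistics concrete, here is a self-contained Rust sketch of the Welch's t statistic (with Welch-Satterthwaite degrees of freedom) and Cohen's d with a pooled standard deviation, the quantities used for pairwise comparisons. This is not the framework's internal implementation, and the samples in `main` are made-up illustrative numbers.

```rust
// Standalone sketch of the pairwise statistics; not the framework's internal code.

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Sample variance (n - 1 denominator).
fn variance(xs: &[f64]) -> f64 {
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
}

/// Welch's t statistic and Welch-Satterthwaite degrees of freedom.
fn welch_t(a: &[f64], b: &[f64]) -> (f64, f64) {
    let (na, nb) = (a.len() as f64, b.len() as f64);
    let (va, vb) = (variance(a), variance(b));
    let se2 = va / na + vb / nb;
    let t = (mean(a) - mean(b)) / se2.sqrt();
    let df = se2.powi(2) / ((va / na).powi(2) / (na - 1.0) + (vb / nb).powi(2) / (nb - 1.0));
    (t, df)
}

/// Cohen's d effect size with a pooled standard deviation.
fn cohens_d(a: &[f64], b: &[f64]) -> f64 {
    let (na, nb) = (a.len() as f64, b.len() as f64);
    let pooled = (((na - 1.0) * variance(a) + (nb - 1.0) * variance(b)) / (na + nb - 2.0)).sqrt();
    (mean(a) - mean(b)) / pooled
}

fn main() {
    // Illustrative final objective values for two optimizers on one problem.
    let opt_a = [0.12, 0.08, 0.10, 0.09, 0.11];
    let opt_b = [0.20, 0.18, 0.22, 0.19, 0.21];
    let (t, df) = welch_t(&opt_a, &opt_b);
    println!("Welch t = {t:.3}, df = {df:.1}, Cohen's d = {:.2}", cohens_d(&opt_a, &opt_b));
}
```

For the Bonferroni correction, the per-test significance level is simply the overall level divided by the number of pairwise comparisons performed.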
Multi-Format Reporting
Automated generation of comprehensive reports in multiple formats for different audiences and use cases.
- Markdown reports with embedded visualizations for web viewing
- LaTeX documents ready for academic publication
- CSV exports for custom statistical analysis
- Detailed per-run logs for debugging and deep analysis
- Convergence plots and performance profiles
- Problem-family vs optimizer-family comparison matrices
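As a rough illustration of the tabular output involved, the following sketch writes a per-problem summary to CSV using only the standard library. The column layout and struct name are assumptions for illustration; the framework's actual CSV schema may differ.

```rust
// Illustrative only: the real framework's CSV schema and types may differ.
use std::fs::File;
use std::io::{BufWriter, Write};

/// One aggregated row: an optimizer's results on a single benchmark problem.
struct SummaryRow<'a> {
    problem: &'a str,
    optimizer: &'a str,
    mean_final_value: f64,
    std_dev: f64,
    success_rate: f64,
}

fn write_summary_csv(rows: &[SummaryRow], path: &str) -> std::io::Result<()> {
    let mut w = BufWriter::new(File::create(path)?);
    writeln!(w, "problem,optimizer,mean_final_value,std_dev,success_rate")?;
    for r in rows {
        writeln!(
            w,
            "{},{},{:.6e},{:.6e},{:.3}",
            r.problem, r.optimizer, r.mean_final_value, r.std_dev, r.success_rate
        )?;
    }
    Ok(())
}
```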
Reproducible Research Infrastructure
Built-in reproducibility features ensuring all results can be independently verified and extended.
- Fixed random seeds for deterministic results
- Version-controlled benchmark definitions
- Automated result archiving and comparison
- Docker support for environment consistency
- Extensible architecture for adding new optimizers
- Standardized optimizer interface for fair comparison
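The sketch below illustrates what a standardized optimizer interface can look like, with plain gradient descent as the simplest possible implementation. The `Optimizer` trait shown here is a hypothetical stand-in, not necessarily the trait defined in qqn-optimizer.

```rust
// Hypothetical sketch of a standardized optimizer interface; the actual trait
// in qqn-optimizer may have a different name and signature.
pub trait Optimizer {
    /// Human-readable name used in reports and comparison matrices.
    fn name(&self) -> &str;

    /// Perform one iteration given the current point and its gradient,
    /// returning the updated point.
    fn step(&mut self, x: &[f64], gradient: &[f64]) -> Vec<f64>;
}

/// Plain gradient descent as a minimal reference implementation.
pub struct GradientDescent {
    pub learning_rate: f64,
}

impl Optimizer for GradientDescent {
    fn name(&self) -> &str {
        "GradientDescent"
    }
    fn step(&mut self, x: &[f64], gradient: &[f64]) -> Vec<f64> {
        x.iter()
            .zip(gradient)
            .map(|(xi, gi)| xi - self.learning_rate * gi)
            .collect()
    }
}
```

Keeping every optimizer behind the same interface is what makes the win/loss/tie comparisons fair: each implementation sees identical problems, seeds, and termination criteria.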
Getting Started
```bash
# Clone the repository
git clone https://github.com/SimiaCryptus/qqn-optimizer.git
cd qqn-optimizer

# Build the benchmarking framework
cargo build --release
```
Docker Installation
```bash
# Build Docker image
docker build -t qqn-benchmark .

# Run benchmarks in container
docker run -v $(pwd)/results:/results qqn-benchmark
```
Quick Example
```bash
# Build the benchmarking framework
cargo build --release

# Run a quick benchmark (subset of problems)
cargo run --release -- --quick

# Run full benchmark suite
cargo run --release -- --full

# Generate reports in specific format
cargo run --release -- --format latex
cargo run --release -- --format markdown
```
Technical Details
Requirements
- Rust 1.70 or higher
- Cargo package manager
- Optional: LaTeX for PDF report generation
- 4GB RAM minimum (8GB recommended for full suite)
Contribute to the Project
This is an open-source project. Contributions, bug reports, and feature requests are welcome from the community.