Implement spend_cpu_cycles in assembly

The compiler was unrolling it previously, so we're doing assembly now for consistency.
2025-09-05 15:16:49 -04:00
parent ffe7ab0a3e
commit 0357a41dd8
10 changed files with 84 additions and 28 deletions
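A plain counted loop is easy for the optimizer to unroll or strength-reduce, which makes the cost of one "iteration" vary between builds. The sketch below shows the general shape of a spin loop written with GCC/Clang extended inline assembly on x86-64; it illustrates the technique and is not necessarily the exact code in this commit.

```cpp
#include <cstdint>

// Burn a fixed number of iterations that the compiler cannot unroll,
// vectorize, or delete, so every call costs a predictable amount of CPU
// time. `volatile` keeps the optimizer from removing the asm block and
// "cc" declares that the flags register is clobbered.
inline void spend_cpu_cycles(std::uint64_t iterations) {
  if (iterations == 0) return;  // dec/jnz below executes at least once
  asm volatile(
      "1:\n\t"
      "dec %0\n\t"
      "jnz 1b\n\t"
      : "+r"(iterations)
      :
      : "cc");
}
```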


@@ -2,7 +2,7 @@
## Summary
-WeaselDB's /ok health check endpoint achieves 1M requests/second with 650ns of configurable CPU work per request through the 4-stage commit pipeline, while maintaining 0% CPU usage when idle. The configurable CPU work serves both as a health check (validating the full pipeline) and as a benchmarking tool for measuring per-request processing capacity.
+WeaselDB's /ok health check endpoint achieves 1M requests/second with 740ns of configurable CPU work per request through the 4-stage commit pipeline, while maintaining 0% CPU usage when idle. The configurable CPU work serves both as a health check (validating the full pipeline) and as a benchmarking tool for measuring per-request processing capacity.
## Performance Metrics
@@ -22,9 +22,9 @@ WeaselDB's /ok health check endpoint achieves 1M requests/second with 650ns of c
**Health Check Pipeline (/ok endpoint)**:
- **Throughput**: 1.0M requests/second
-- **Configurable CPU work**: 650ns (7000 iterations, validated with nanobench)
+- **Configurable CPU work**: 740ns (4000 iterations, validated with nanobench)
- **Theoretical maximum CPU time**: 1000ns (1,000,000,000ns ÷ 1,000,000 req/s)
-- **CPU work efficiency**: 65% (650ns ÷ 1000ns)
+- **CPU work efficiency**: 74% (740ns ÷ 1000ns)
- **Pipeline stages**: Sequence (noop) → Resolve (CPU work) → Persist (response) → Release (cleanup)
- **CPU usage when idle**: 0%
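The "validated with nanobench" figures above come from timing the loop in isolation. Below is a sketch of such a microbenchmark, assuming nanobench's `Bench().run()` API and that `bench_cpu_work` takes the iteration count on the command line; the wrapper's actual source is not shown in this commit.

```cpp
#include <cstdint>
#include <cstdlib>

#include <nanobench.h>

void spend_cpu_cycles(std::uint64_t iterations);  // loop under test

int main(int argc, char** argv) {
  // Usage: ./bench_cpu_work 4000  -> ~740ns per call on the test machine
  const std::uint64_t iters =
      (argc > 1) ? std::strtoull(argv[1], nullptr, 10) : 4000;

  ankerl::nanobench::Bench()
      .minEpochIterations(100000)  // plenty of samples for a sub-microsecond body
      .run("spend_cpu_cycles", [&] { spend_cpu_cycles(iters); });
}
```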
@@ -41,12 +41,9 @@ WeaselDB's /ok health check endpoint achieves 1M requests/second with 650ns of c
- **Maintained**: 100,000 spin iterations necessary to prevent thread descheduling
- **Result**: Same throughput with more efficient spinning
-**Stage-0 Batch Size Optimization**:
-- **Changed**: Stage-0 max batch size from unlimited to 1
+**Resolve Batch Size Optimization**:
+- **Changed**: Resolve max batch size from unlimited to 1
- **Mechanism**: Single-item processing checks for work more frequently, keeping the thread in fast coordination paths instead of expensive spin/wait cycles
- **Profile evidence**: Coordination overhead reduced from ~11% to ~5.6% CPU time
- **Result**: Additional 12.7% increase in serial CPU budget (488ns → 550ns)
- **Overall improvement**: 38.9% increase from baseline (396ns → 550ns)
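The mechanism is easier to see as code. Here is a generic sketch of a stage worker's drain loop (assumed structure, not WeaselDB's actual pipeline code): capping the batch at 1 sends the thread back to the cheap "more work?" check after every item instead of draining a large batch and then falling into the spin/wait path.

```cpp
#include <cstddef>
#include <deque>

// Generic illustration of the batch-size knob. With max_batch == 1 the
// worker re-checks the incoming queue after every item, staying on the
// fast coordination path; a large batch drains everything and then pays
// for an expensive spin/wait before new work is noticed.
template <typename Item, typename Fn>
void drain_stage(std::deque<Item>& incoming, Fn&& process,
                 std::size_t max_batch) {
  std::size_t done = 0;
  while (done < max_batch && !incoming.empty()) {
    process(incoming.front());
    incoming.pop_front();
    ++done;
  }
  // Caller decides what happens next: check for new work, spin, or wait.
}
```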
### Request Flow
@@ -56,8 +53,8 @@ I/O Threads (8) → HttpHandler::on_batch_complete() → Commit Pipeline
↑ ↓
| Stage 0: Sequence (noop)
| ↓
-| Stage 1: Resolve (650ns CPU work)
-| (spend_cpu_cycles(7000))
+| Stage 1: Resolve (740ns CPU work)
+| (spend_cpu_cycles(4000))
| ↓
| Stage 2: Persist (generate response)
| (send "OK" response)
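To connect the diagram to code, here is a simplified, hypothetical sketch of the Resolve and Persist callbacks for the /ok path. The type and function names (Config, OkRequest, send_response) are assumptions for illustration, not WeaselDB's actual API; only `spend_cpu_cycles` and `ok_resolve_iterations` appear in the document itself.

```cpp
#include <cstdint>
#include <string_view>

void spend_cpu_cycles(std::uint64_t iterations);    // from this commit
void send_response(int fd, std::string_view body);  // assumed helper

// Assumed shapes for illustration only.
struct Config {
  std::uint64_t ok_resolve_iterations = 4000;  // ~740ns on the test machine
};
struct OkRequest {
  int client_fd = -1;
};

// Stage 1: Resolve -- burn the configured CPU budget for each request.
void resolve_ok(const Config& config, OkRequest&) {
  spend_cpu_cycles(config.ok_resolve_iterations);
}

// Stage 2: Persist -- nothing to persist for /ok; just send the reply.
void persist_ok(OkRequest& req) {
  send_response(req.client_fd, "OK");
}
```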
@@ -71,8 +68,8 @@ I/O Threads (8) → HttpHandler::on_batch_complete() → Commit Pipeline
## Test Configuration
- Server: test_benchmark_config.toml with 8 io_threads, 8 epoll_instances
-- Configuration: `ok_resolve_iterations = 7000` (650ns CPU work)
+- Configuration: `ok_resolve_iterations = 4000` (740ns CPU work)
- Load tester: targeting /ok endpoint
-- Benchmark validation: ./bench_cpu_work 7000
+- Benchmark validation: ./bench_cpu_work 4000
- Build: ninja
- Command: ./weaseldb --config test_benchmark_config.toml