# WeaselDB Threading Performance Analysis Report
## Summary
WeaselDB achieved 1.3M requests/second throughput using a two-stage ThreadPipeline with an optimized futex wake strategy, delivering 550ns of serial CPU time per request while using 0% CPU when idle. A higher serial CPU time per request means more CPU budget is available for serial processing.
## Performance Metrics
### Throughput
- **1.3M requests/second** over a Unix socket
- 8 I/O threads with 8 epoll instances
- Load tester used 12 network threads
- Max latency: 4ms across 90M requests
- **0% CPU usage when idle** (optimized futex wake implementation)
### Threading Architecture
- Two-stage pipeline: Stage-0 (noop) → Stage-1 (connection return)
- Lock-free coordination using an atomic ring buffer
- **Optimized futex wake**: Only wake on the final pipeline stage
- Each request is "processed" serially on a single thread (coordination sketched below)
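
The coordination model can be pictured with a minimal sketch. This is not WeaselDB's actual `ThreadPipeline`; the type, member, and function names below are illustrative assumptions, the ring is simplified to a single producer, and per-item work is elided.
```cpp
// Hypothetical sketch of a two-stage pipeline coordinated through a fixed-size
// ring buffer with atomic cursors. Each stage only writes its own cursor and
// reads the cursor of the stage ahead of it, so no locks are needed.
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Slot {
    void* request = nullptr;  // work item handed in by an I/O thread
};

template <std::size_t N>
class RingPipeline {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    // Producer (I/O thread): claim the next slot, fill it, publish it to stage 0.
    // Simplified to a single producer; the real server has 8 I/O threads.
    bool try_push(void* request) {
        uint64_t head = head_.load(std::memory_order_relaxed);
        if (head - stage1_cursor_.load(std::memory_order_acquire) >= N)
            return false;  // ring full: producer spins or futex-waits
        slots_[head & (N - 1)].request = request;
        head_.store(head + 1, std::memory_order_release);
        return true;
    }

    // Stage 0 ("noop" stage): consume items published by the producer.
    void run_stage0() {
        uint64_t avail = head_.load(std::memory_order_acquire);
        uint64_t cur = stage0_cursor_.load(std::memory_order_relaxed);
        for (; cur < avail; ++cur) {
            /* per-request serial work would happen here */
        }
        stage0_cursor_.store(cur, std::memory_order_release);
    }

    // Stage 1 (connection return): runs strictly behind stage 0.
    void run_stage1() {
        uint64_t avail = stage0_cursor_.load(std::memory_order_acquire);
        uint64_t cur = stage1_cursor_.load(std::memory_order_relaxed);
        for (; cur < avail; ++cur) {
            /* return slots_[cur & (N - 1)].request's connection to the server */
        }
        stage1_cursor_.store(cur, std::memory_order_release);
        // Only this final stage issues a futex wake (see "Key Optimizations").
    }

private:
    std::array<Slot, N> slots_{};
    std::atomic<uint64_t> head_{0};           // written by the producer
    std::atomic<uint64_t> stage0_cursor_{0};  // written by stage 0
    std::atomic<uint64_t> stage1_cursor_{0};  // written by stage 1
};
```
Because each cursor has a single writer, the stages coordinate purely through acquire/release loads and stores on those counters.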
### Performance Characteristics
**Optimized Pipeline Mode**:
- **Throughput**: 1.3M requests/second
- **Serial CPU time per request**: 550ns (validated with nanobench)
- **Theoretical maximum serial CPU time**: 769ns (1,000,000,000ns ÷ 1,300,000 req/s)
- **Serial efficiency**: 71.5% (550ns ÷ 769ns; see the check after this list)
- **CPU usage when idle**: 0%
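
As a quick cross-check of the figures above, using only numbers already quoted in this report:
```cpp
// Serial CPU budget at the measured throughput: at 1.3M req/s, a single
// thread has at most 1e9 / 1.3e6 ≈ 769ns of serial time per request before
// it becomes the bottleneck; 550ns of that budget is actually consumed.
#include <cstdio>

int main() {
    constexpr double requests_per_second = 1'300'000.0;
    constexpr double measured_serial_ns  = 550.0;
    constexpr double budget_ns  = 1e9 / requests_per_second;       // ≈ 769ns
    constexpr double efficiency = measured_serial_ns / budget_ns;  // ≈ 0.715
    std::printf("budget: %.0fns, efficiency: %.1f%%\n", budget_ns, efficiency * 100.0);
}
```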
### Key Optimizations
**Futex Wake Reduction**:
- **Previous approach**: Futex wake at every pipeline stage (10% CPU overhead)
- **Optimized approach**: Futex wake only at the final stage, to wake producers; stages now futex-wait on the head of the pipeline instead of on the previous stage (sketched after this list)
- **Result**: 23% increase in serial CPU budget (396ns → 488ns)
- **Benefits**: Higher throughput per CPU cycle + idle efficiency
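
A hedged sketch of that wake strategy: only the final stage calls futex wake, and a blocked producer waits on a counter that only the final stage advances. The wrapper functions and the `final_stage_epoch` counter are illustrative assumptions rather than WeaselDB's actual code, and error handling plus the wait path for intermediate stages are omitted.
```cpp
// Minimal futex wait/wake wrappers plus a "wake only at the final stage" sketch.
#include <atomic>
#include <climits>
#include <cstdint>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex_wait(std::atomic<uint32_t>* addr, uint32_t expected) {
    // Sleeps only if *addr still equals `expected`; the kernel re-checks atomically.
    return syscall(SYS_futex, addr, FUTEX_WAIT_PRIVATE, expected, nullptr, nullptr, 0);
}

static long futex_wake_all(std::atomic<uint32_t>* addr) {
    return syscall(SYS_futex, addr, FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0);
}

// Progress counter advanced once per batch by the final pipeline stage.
std::atomic<uint32_t> final_stage_epoch{0};

// Producer side: when the ring is full, wait for the final stage to make
// progress instead of requiring every intermediate stage to issue wakes.
void producer_wait_for_space(uint32_t observed_epoch) {
    futex_wait(&final_stage_epoch, observed_epoch);
}

// Final stage: after returning connections, publish progress and wake producers.
void final_stage_publish_and_wake() {
    final_stage_epoch.fetch_add(1, std::memory_order_release);
    futex_wake_all(&final_stage_epoch);
}
```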
**CPU-Friendly Spin Loop**:
- **Added**: `_mm_pause()` intrinsics in polling loop to reduce power consumption and improve hyperthreading efficiency
- **Maintained**: the 100,000 spin iterations needed to keep the thread from being descheduled (see the sketch after this list)
- **Result**: Same throughput with more efficient spinning
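
A minimal sketch of such a polling loop, assuming a hypothetical `published` counter bumped by the upstream side when new work arrives (only the 100,000-iteration bound comes from this report):
```cpp
#include <atomic>
#include <cstdint>
#include <immintrin.h>  // _mm_pause

constexpr int kSpinIterations = 100'000;

// Spin briefly for new work; return false if the caller should fall back to
// the (more expensive) futex wait path instead of burning more CPU.
bool spin_for_work(const std::atomic<uint64_t>& published, uint64_t seen) {
    for (int i = 0; i < kSpinIterations; ++i) {
        if (published.load(std::memory_order_acquire) != seen)
            return true;
        _mm_pause();  // CPU hint: lowers power and frees the sibling hyperthread
    }
    return false;
}
```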
**Stage-0 Batch Size Optimization**:
- **Changed**: Stage-0 max batch size from unlimited to 1
- **Mechanism**: Single-item processing checks for work more frequently, keeping the thread in fast coordination paths instead of expensive spin/wait cycles (see the sketch after this list)
- **Profile evidence**: Coordination overhead reduced from ~11% to ~5.6% CPU time
- **Result**: Additional 12.7% increase in serial CPU budget (488ns → 550ns)
- **Overall improvement**: 38.9% increase from baseline (396ns → 550ns)
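
A sketch of what the batch cap means mechanically, with hypothetical names (only the max-batch-of-1 setting for stage 0 comes from this report):
```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

struct StageConfig {
    std::size_t max_batch;  // stage 0: 1; later stages: larger or unlimited (0)
};

// One pass of a stage's consume loop over the items in [cursor, available).
// Returns how many items were processed before control goes back to the
// coordination check (the spin / futex-wait logic lives in the caller).
std::size_t consume_once(const StageConfig& cfg,
                         std::atomic<uint64_t>& cursor,
                         const std::atomic<uint64_t>& available) {
    const uint64_t begin = cursor.load(std::memory_order_relaxed);
    const uint64_t avail = available.load(std::memory_order_acquire);
    uint64_t end = avail;
    if (cfg.max_batch != 0 && end - begin > cfg.max_batch)
        end = begin + cfg.max_batch;  // cap the batch (1 for stage 0)
    for (uint64_t i = begin; i < end; ++i) {
        /* process item i */
    }
    cursor.store(end, std::memory_order_release);
    return static_cast<std::size_t>(end - begin);
}
```
With `max_batch` set to 1, the stage returns to its coordination check after every item instead of draining everything that has accumulated.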
### Request Flow
```
I/O Threads (8) → HttpHandler::on_batch_complete() → ThreadPipeline
      ↑                                                    ↓
      |                                           Stage 0: Noop thread
      |                                           (550ns serial CPU per request)
      |                                           (batch size: 1)
      |                                                    ↓
      |                                           Stage 1: Connection return
      |                                           (optimized futex wake)
      |                                                    ↓
      └─────────────────────────────────── Server::release_back_to_server()
```
## Test Configuration
- Server: `test_config.toml` with 8 `io_threads`, 8 `epoll_instances`
- Load tester: `./load_tester --network-threads 12`
- Build: `ninja`
- Command: `./weaseldb --config test_config.toml`