Update threading_performance_report with new futex wake behavior
## Summary
WeaselDB achieved 1.3M requests/second throughput using a two-stage ThreadPipeline with futex wake optimization, delivering 488ns of serial CPU time per request while maintaining 0% CPU usage when idle. A higher serial CPU time per request at the same throughput means more CPU budget is available for serial processing.
## Performance Metrics
### Throughput
- **1.3M requests/second** over unix socket
- 8 I/O threads with 8 epoll instances
- Load tester used 12 network threads
- Max latency: 4ms across 90M requests
- **0% CPU usage when idle** (optimized futex wake implementation)
### Threading Architecture
- Two-stage pipeline: Stage-0 (noop) → Stage-1 (connection return)
- Lock-free coordination using atomic ring buffer
- **Optimized futex wake**: Only wake on final pipeline stage
- Each request is "processed" serially on a single thread (Stage 0's work is a noop in this benchmark)
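The lock-free coordination above can be sketched with one shared ring and one monotonically increasing atomic cursor per stage. This is an illustrative model, not WeaselDB's actual code; the struct, function names, and ring size are assumptions:

```cpp
#include <atomic>
#include <cstdint>

// One shared ring buffer, one cursor per stage. Stage i may only consume
// slots the previous cursor has already moved past, so no locks are needed:
// acquire/release ordering on the cursors is sufficient.
constexpr uint64_t kRingSize = 1024; // power of two (assumed)

struct Cursors {
    std::atomic<uint64_t> produced{0}; // slots filled by the I/O threads
    std::atomic<uint64_t> stage0{0};   // slots completed by the noop stage
    std::atomic<uint64_t> stage1{0};   // slots completed by connection return
};

// Next slot stage 0 may process, or -1 if it has caught up with producers.
long long stage0_next(const Cursors& c) {
    uint64_t done  = c.stage0.load(std::memory_order_relaxed);
    uint64_t avail = c.produced.load(std::memory_order_acquire);
    if (done == avail) return -1;      // nothing published yet
    return static_cast<long long>(done % kRingSize);
}

// Producers may claim a slot only if the final stage has freed it.
bool ring_has_space(const Cursors& c) {
    uint64_t head = c.produced.load(std::memory_order_relaxed);
    uint64_t tail = c.stage1.load(std::memory_order_acquire);
    return head - tail < kRingSize;
}
```

Because each cursor only ever moves forward and is written by one stage, the reads above never need compare-and-swap on the hot path.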
### Performance Characteristics

**Non-blocking acquisition (`mayBlock=false`)**:
- Throughput: 1.3M requests/second (maintained with up to 1200 loop iterations)
- Stage-0 CPU: 100% (10% futex wake, 90% other)
- Serial CPU time per request: 396ns (1200 iterations, validated with nanobench)
- Theoretical maximum serial CPU time: 769ns (1,000,000,000ns ÷ 1,300,000 req/s)
- Serial efficiency: 51.5% (396ns ÷ 769ns)
- 100% CPU usage when idle
**Optimized Pipeline Mode**:
- **Throughput**: 1.3M requests/second
- **Serial CPU time per request**: 488ns (validated with nanobench)
- **Theoretical maximum serial CPU time**: 769ns (1,000,000,000ns ÷ 1,300,000 req/s)
- **Serial efficiency**: 63.4% (488ns ÷ 769ns)
- **CPU usage when idle**: 0%
**Blocking acquisition (`mayBlock=true`)**:
- Throughput: 1.1M requests/second (800 loop iterations)
- Stage-0 CPU: 100% total (18% sched_yield, 8% futex wait, 7% futex wake, 67% other)
- Serial CPU time per request: 266ns (800 iterations, validated with nanobench)
- Theoretical maximum serial CPU time: 909ns (1,000,000,000ns ÷ 1,100,000 req/s)
- Serial efficiency: 29.3% (266ns ÷ 909ns)
- 0% CPU usage when idle
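A blocking acquisition path of the kind profiled above typically spins briefly, then yields, which matches the `sched_yield`/futex shares in the Stage-0 breakdown. The following is an illustrative sketch, not WeaselDB's exact code; the function name and spin budget are assumptions:

```cpp
#include <atomic>
#include <cstdint>
#include <sched.h>

// Wait for the previous stage to publish at least `needed` slots.
// Spin a bounded number of times, then give the CPU away instead of
// burning it; callers retry (or fall through to a futex wait) on false.
bool acquire_slot(std::atomic<uint64_t>& available, uint64_t needed,
                  int spin_limit = 800) {
    for (int i = 0; i < spin_limit; ++i) {
        if (available.load(std::memory_order_acquire) >= needed)
            return true;               // slot published by previous stage
    }
    sched_yield();                     // out of spin budget: yield the core
    return available.load(std::memory_order_acquire) >= needed;
}
```

The spin-then-yield pattern trades some latency for lower CPU burn, which is why the blocking mode shows 0% idle CPU but lower serial efficiency.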
### Key Optimization: Futex Wake Reduction
- **Previous approach**: Futex wake at every pipeline stage (10% CPU overhead)
- **Optimized approach**: Futex wake only at the final stage, to wake producers. Intermediate stages now futex-wait on the head of the pipeline instead of on the previous stage.
- **Result**: 23% increase in serial CPU budget (396ns → 488ns)
- **Benefits**: Higher throughput per CPU cycle + idle efficiency
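The wake policy above can be sketched with the Linux `futex(2)` syscall: sleepers park on a single word at the pipeline head, and only the final stage issues `FUTEX_WAKE`, so intermediate hand-offs stay syscall-free. This is an illustrative, Linux-only sketch under assumed names, not WeaselDB's actual implementation:

```cpp
#include <atomic>
#include <climits>
#include <cstdint>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex_wait(std::atomic<uint32_t>* addr, uint32_t expected) {
    // Sleeps only if *addr still equals `expected` (no lost-wakeup race).
    return syscall(SYS_futex, reinterpret_cast<uint32_t*>(addr),
                   FUTEX_WAIT_PRIVATE, expected, nullptr, nullptr, 0);
}

static long futex_wake_all(std::atomic<uint32_t>* addr) {
    return syscall(SYS_futex, reinterpret_cast<uint32_t*>(addr),
                   FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0);
}

// Single word everyone waits on: the head of the pipeline.
std::atomic<uint32_t> pipeline_head_seq{0};

void on_stage_complete(int stage, int final_stage) {
    if (stage != final_stage)
        return;                          // intermediate stages: no syscall
    pipeline_head_seq.fetch_add(1, std::memory_order_release);
    futex_wake_all(&pipeline_head_seq);  // single wake point at the end
}
```

Because only one stage ever issues the wake, the 10% per-stage wake overhead of the previous approach disappears from the hot path.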
### Request Flow
```
I/O Threads (8) → HttpHandler::on_batch_complete() → ThreadPipeline
       ↑                                                  ↓
       |                                       Stage 0: Noop thread
       |                               (488ns serial CPU per request)
       |                                                  ↓
       |                                  Stage 1: Connection return
       |                                    (optimized futex wake)
       |                                                  ↓
       └─────────────────────── Server::release_back_to_server()
```
### Pipeline Configuration
- Stage 0: 1 noop thread
- Stage 1: 2 worker threads for connection return
- Atomic counters with shared ring buffer
### Memory Management
- Transfer ownership of the connection along the pipeline
## Test Configuration
- Server: `test_config.toml` with 8 `io_threads`, 8 `epoll_instances`