Update threading_performance_report with new futex wake behavior

2025-08-26 16:21:05 -04:00
parent a734760b60
commit 431befe9bd


## Summary
WeaselDB achieved 1.3M requests/second throughput using a two-stage ThreadPipeline with futex wake optimization, delivering 488ns serial CPU time per request while maintaining 0% CPU usage when idle. More serial CPU time per request at the same throughput means more CPU budget available for serial processing.
## Performance Metrics
### Throughput
- **1.3M requests/second** over unix socket
- 8 I/O threads with 8 epoll instances (sketched below)
- Load tester used 12 network threads
- Max latency: 4ms out of 90M requests
- **0% CPU usage when idle** (optimized futex wake implementation)
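As an illustration of the "one epoll instance per I/O thread" layout implied above, here is a minimal hedged sketch; the thread function, constants, and handler plumbing are hypothetical, not WeaselDB's actual API.

```cpp
// Hypothetical sketch: 8 I/O threads, each blocking on its own epoll instance,
// so there is no shared epoll to contend on and idle threads burn no CPU.
#include <sys/epoll.h>
#include <thread>
#include <vector>

constexpr int kIoThreads = 8;   // matches the 8 io_threads / 8 epoll_instances above

void io_thread_main(int epfd) {
    epoll_event events[64];
    for (;;) {
        // Block until one of this thread's connections becomes ready.
        int n = epoll_wait(epfd, events, 64, /*timeout=*/-1);
        for (int i = 0; i < n; ++i) {
            // read the request batch from events[i].data.fd, then hand the
            // connection to the ThreadPipeline (see Request Flow below) ...
        }
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < kIoThreads; ++i)
        threads.emplace_back(io_thread_main, epoll_create1(0));  // one instance per thread
    for (auto& t : threads) t.join();
}
```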
### Threading Architecture
- Two-stage pipeline: Stage-0 (noop) → Stage-1 (connection return)
- Lock-free coordination using atomic ring buffer (see the sketch after this list)
- **Optimized futex wake**: Only wake on final pipeline stage
- Each request "processed" serially on a single thread
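For a concrete picture of that coordination state, here is a sketch under assumptions (names and the single-producer simplification are mine, not WeaselDB source): one shared slot array plus a monotonic atomic cursor per role.

```cpp
// Illustrative sketch of the shared pipeline state: a ring of slots plus one
// monotonically increasing atomic cursor per role.
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

struct Connection;                                 // connection being handed along

struct PipelineRing {
    static constexpr std::size_t kSlots = 1024;    // power of two → cheap wrap-around
    std::array<Connection*, kSlots> slots{};

    std::atomic<uint64_t> published{0};    // advanced when an I/O thread publishes a slot
    std::atomic<uint64_t> stage0_done{0};  // advanced by the noop stage
    std::atomic<uint64_t> stage1_done{0};  // advanced by the connection-return stage

    // Invariants: stage1_done <= stage0_done <= published, and a producer may
    // reuse slot (i % kSlots) only once stage1_done > i - kSlots.
};

int main() { PipelineRing ring; (void)ring; }
```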
### Performance Characteristics
**Optimized Pipeline Mode**:
- **Throughput**: 1.3M requests/second
- **Serial CPU time per request**: 488ns (validated with nanobench)
- **Theoretical maximum serial CPU time**: 769ns (1,000,000,000ns ÷ 1,300,000 req/s)
- **Serial efficiency**: 63.4% (488ns ÷ 769ns, checked below)
- **CPU usage when idle**: 0%
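A quick arithmetic check of the two derived numbers above, assuming a single serial stage-0 thread (i.e. 10^9 ns of serial budget per second):

```cpp
// Recomputes the 769ns budget and 63.4% efficiency figures from the report.
#include <cstdio>

int main() {
    const double requests_per_second = 1'300'000.0;
    const double serial_ns_per_req   = 488.0;                      // measured (nanobench)
    const double budget_ns_per_req   = 1e9 / requests_per_second;  // ≈ 769 ns
    const double serial_efficiency   = serial_ns_per_req / budget_ns_per_req;  // ≈ 0.634
    std::printf("budget: %.0f ns/request, efficiency: %.1f%%\n",
                budget_ns_per_req, serial_efficiency * 100.0);
}
```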
### Key Optimization: Futex Wake Reduction
- **Previous approach**: Futex wake at every pipeline stage (10% CPU overhead)
- **Optimized approach**: Futex wake only at the final stage, to wake producers; stages now futex-wait on the beginning of the pipeline instead of on the previous stage (sketched below)
- **Result**: 23% increase in serial CPU budget (396ns → 488ns)
- **Benefits**: Higher throughput per CPU cycle and 0% CPU usage when idle
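The sketch below shows one plausible reading of this scheme, building on the ring cursors sketched earlier: every stage futex-waits on a 32-bit publish counter at the head of the pipeline, and the only wake a stage thread issues is the final stage's wake toward producers blocked on a full ring. All names are illustrative assumptions; this is not WeaselDB source.

```cpp
// Hypothetical two-stage pipeline with the futex placement described above:
// stages wait on the pipeline head (`published`), only the final stage wakes
// (`completed`), and producers sleep only when the ring is full.
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <atomic>
#include <climits>
#include <cstdint>
#include <cstdio>
#include <thread>

static void fwait(std::atomic<uint32_t>& w, uint32_t expected) {
    // Sleeps only while the word still equals `expected`; otherwise returns.
    syscall(SYS_futex, reinterpret_cast<uint32_t*>(&w),
            FUTEX_WAIT_PRIVATE, expected, nullptr, nullptr, 0);
}
static void fwake(std::atomic<uint32_t>& w) {
    syscall(SYS_futex, reinterpret_cast<uint32_t*>(&w),
            FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0);
}

constexpr uint32_t kRing  = 256;       // ring capacity (power of two)
constexpr uint32_t kTotal = 100000;    // bounded demo run
std::atomic<uint32_t> published{0};    // pipeline head: every stage waits here
std::atomic<uint32_t> stage0_done{0};  // no futex traffic on intermediate stages
std::atomic<uint32_t> completed{0};    // final stage wakes producers via this

void producer() {                      // stands in for an I/O thread
    for (uint32_t i = 0; i < kTotal; ++i) {
        uint32_t c = completed.load(std::memory_order_acquire);
        while (i - c >= kRing) {       // ring full: sleep until the final stage wakes us
            fwait(completed, c);
            c = completed.load(std::memory_order_acquire);
        }
        published.fetch_add(1, std::memory_order_release);
        fwake(published);              // head-of-pipeline wake (a real implementation
                                       // would skip the syscall when no stage sleeps)
    }
}

void stage0() {                        // noop stage: serial per-request work
    for (uint32_t i = 0; i < kTotal; ++i) {
        while (published.load(std::memory_order_acquire) <= i)
            fwait(published, i);       // wait on the pipeline head, not on stage state
        stage0_done.store(i + 1, std::memory_order_release);  // note: no wake here
    }
}

void stage1() {                        // final stage: connection return
    for (uint32_t i = 0; i < kTotal; ++i) {
        while (published.load(std::memory_order_acquire) <= i)
            fwait(published, i);       // also waits on the pipeline head
        while (stage0_done.load(std::memory_order_acquire) <= i) {}  // brief spin on stage 0
        completed.store(i + 1, std::memory_order_release);
        fwake(completed);              // the only wake issued by a pipeline stage
    }
}

int main() {
    std::thread p(producer), s0(stage0), s1(stage1);
    p.join(); s0.join(); s1.join();
    std::printf("completed %u requests\n", completed.load());
}
```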
### Request Flow
```
I/O Threads (8) → HttpHandler::on_batch_complete() → ThreadPipeline
        ↑                                                 ↓
        |                                  Stage 0: Noop thread
        |                    (488ns serial CPU per request)
        |                                                 ↓
        |                          Stage 1: Connection return
        |                            (optimized futex wake)
        |                                                 ↓
        └─────────────────────── Server::release_back_to_server()
```
### Pipeline Configuration
- Stage 0: 1 noop thread
- Stage 1: 2 worker threads for connection return
- Atomic counters with shared ring buffer
### Memory Management
- Transfer ownership of the connection along the pipeline
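To tie the Request Flow diagram to the ownership bullet above, here is a small hedged sketch of a connection moving through the two stages and being released back to the server; only the `Server::release_back_to_server` name comes from the report, and the `unique_ptr` plumbing is an assumption rather than WeaselDB's actual types.

```cpp
// Illustrative ownership hand-off: the slot owns the connection while it is in
// the pipeline, and the final stage returns it to the server.
#include <memory>
#include <utility>

struct Connection { int fd = -1; };

struct Server {
    // Final stage hands the connection back so the I/O threads can keep
    // reading from it (Server::release_back_to_server in the diagram).
    void release_back_to_server(std::unique_ptr<Connection> conn) {
        // ... re-arm conn->fd on its epoll instance ...
        (void)conn;
    }
};

struct Slot { std::unique_ptr<Connection> conn; };   // one ring-buffer slot

// Stage 0 ("noop"): the serial portion of request processing; the connection
// stays in the slot afterwards so stage 1 can pick it up.
void stage0_process(Slot& slot) {
    // ... ~488ns of serial work using *slot.conn ...
    (void)slot;
}

// Stage 1 (connection return): takes ownership out of the slot, freeing the
// slot for reuse, and gives the connection back to the server.
void stage1_process(Slot& slot, Server& server) {
    server.release_back_to_server(std::move(slot.conn));
}

int main() {
    Server server;
    Slot slot{std::make_unique<Connection>()};
    stage0_process(slot);
    stage1_process(slot, server);
}
```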
## Test Configuration
- Server: test_config.toml with 8 io_threads, 8 epoll_instances
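For reference, a hypothetical excerpt of test_config.toml; the two key names come from the line above, but the exact layout of the file is an assumption.

```toml
# Hypothetical excerpt of test_config.toml — layout assumed, values from the report.
io_threads      = 8
epoll_instances = 8
```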