WeaselDB Threading Performance Analysis Report

Summary

WeaselDB's /ok health check endpoint sustains 1M requests/second while performing 740ns of configurable CPU work per request in the four-stage commit pipeline, and it uses 0% CPU when idle. The configurable CPU work serves both as a health check (it exercises the full pipeline) and as a benchmarking tool for measuring per-request processing capacity.

Performance Metrics

Throughput

  • 1.0M requests/second on the /ok health check endpoint (four-stage commit pipeline)
  • 8 I/O threads with 8 epoll instances
  • Load tester: 12 network threads
  • 0% CPU usage when idle (optimized futex wake implementation)

Threading Architecture

  • Four-stage commit pipeline: Sequence → Resolve → Persist → Release
  • Lock-free coordination using an atomic ring buffer (see the sketch after this list)
  • Optimized futex wake: Only wake on the final pipeline stage
  • Configurable CPU work performed serially in the resolve stage
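
The struct below is a minimal sketch of this coordination pattern, not WeaselDB's actual types: each stage owns one monotonically increasing cursor into a shared ring, a slot belongs to stage N exactly while the previous stage's cursor has passed it but stage N's has not, and each cursor is advanced only by its own stage's thread, so acquire/release atomics are the only synchronization needed.

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical layout (names are not from the WeaselDB source).
    // Readers use acquire loads on the previous stage's cursor; writers
    // use release stores on their own cursor. No locks anywhere.
    struct CommitRing {
        static constexpr size_t kSlots = 4096;
        std::array<uint64_t, kSlots> slots{};   // per-request state (placeholder)

        // One cursor per pipeline stage, in order.
        std::atomic<uint64_t> sequenced{0};     // stage 0: Sequence
        std::atomic<uint64_t> resolved{0};      // stage 1: Resolve (CPU work)
        std::atomic<uint64_t> persisted{0};     // stage 2: Persist (response)
        std::atomic<uint64_t> released{0};      // stage 3: Release (cleanup)
    };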

Performance Characteristics

Health Check Pipeline (/ok endpoint):

  • Throughput: 1.0M requests/second
  • Configurable CPU work: 740ns (4000 iterations, validated with nanobench; see the measurement sketch after this list)
  • Theoretical maximum CPU time: 1000ns (1,000,000,000ns ÷ 1,000,000 req/s)
  • CPU work efficiency: 74% (740ns ÷ 1000ns)
  • Pipeline stages: Sequence (noop) → Resolve (CPU work) → Persist (response) → Release (cleanup)
  • CPU usage when idle: 0%
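
The 740ns figure is validated with nanobench (via the ./bench_cpu_work 4000 tool listed under Test Configuration). A standalone measurement in the same spirit could look like the sketch below; the body of spend_cpu_cycles here is a placeholder, not WeaselDB's real implementation, and with the real function the reported ns/op should land near 740ns at 4000 iterations.

    #define ANKERL_NANOBENCH_IMPLEMENT
    #include <nanobench.h>
    #include <cstdint>

    // Placeholder for WeaselDB's spend_cpu_cycles(); the loop-carried
    // dependency keeps the compiler from collapsing the work.
    static uint64_t spend_cpu_cycles(uint64_t iterations) {
        uint64_t acc = 1;
        for (uint64_t i = 0; i < iterations; ++i)
            acc = acc * 6364136223846793005ull + 1442695040888963407ull;
        return acc;
    }

    int main() {
        // Prints ns/op for one call with 4000 iterations.
        ankerl::nanobench::Bench().run("spend_cpu_cycles(4000)", [] {
            ankerl::nanobench::doNotOptimizeAway(spend_cpu_cycles(4000));
        });
    }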

Key Optimizations

Futex Wake Reduction:

  • Previous approach: Futex wake at every pipeline stage (10% CPU overhead)
  • Optimized approach: Futex wake only at the final stage, which wakes producers; stages now perform their futex wait on the head of the pipeline rather than on the previous stage (see the sketch after this list)
  • Result: 23% increase in serial CPU budget (396ns → 488ns)
  • Benefits: Higher throughput per CPU cycle + idle efficiency
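
A hedged sketch of the wake pattern using the raw futex syscall (the data layout and function names are assumptions, not WeaselDB's code): intermediate stages publish progress with plain release stores, and only the Release stage bumps a shared epoch word and issues FUTEX_WAKE, so a sleeping waiter costs one syscall per pipeline pass rather than one per stage.

    #include <atomic>
    #include <climits>
    #include <cstdint>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    // Single futex word at the head of the pipeline; all waiters sleep here.
    static std::atomic<uint32_t> pipeline_epoch{0};

    static void futex_wait(std::atomic<uint32_t>& word, uint32_t expected) {
        syscall(SYS_futex, reinterpret_cast<uint32_t*>(&word),
                FUTEX_WAIT_PRIVATE, expected, nullptr, nullptr, 0);
    }

    static void futex_wake_all(std::atomic<uint32_t>& word) {
        syscall(SYS_futex, reinterpret_cast<uint32_t*>(&word),
                FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0);
    }

    // Sequence/Resolve/Persist: publish progress, never wake anyone.
    void publish_stage_progress(std::atomic<uint64_t>& cursor, uint64_t next) {
        cursor.store(next, std::memory_order_release);
    }

    // Release (final stage): the only place that pays for a wake syscall.
    void release_stage_done() {
        pipeline_epoch.fetch_add(1, std::memory_order_release);
        futex_wake_all(pipeline_epoch);
    }

    // Waiters sleep against the last epoch they observed.
    void wait_for_progress(uint32_t last_seen_epoch) {
        while (pipeline_epoch.load(std::memory_order_acquire) == last_seen_epoch)
            futex_wait(pipeline_epoch, last_seen_epoch);
    }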

CPU-Friendly Spin Loop:

  • Added: _mm_pause() intrinsics in the polling loop to reduce power consumption and improve hyperthreading efficiency (see the sketch after this list)
  • Maintained: 100,000 spin iterations, which remain necessary to prevent the thread from being descheduled
  • Result: Same throughput with more efficient spinning
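
A minimal sketch of the spin-then-wait pattern described above; the iteration budget comes from the report, everything else is an assumption.

    #include <atomic>
    #include <cstdint>
    #include <immintrin.h>   // _mm_pause (x86)

    constexpr int kSpinIterations = 100000;   // keeps the thread scheduled across short gaps

    // Spin politely for a bounded number of iterations; return true if new
    // work showed up, false if the caller should fall back to a futex wait.
    bool spin_for_work(const std::atomic<uint64_t>& available, uint64_t already_consumed) {
        for (int i = 0; i < kSpinIterations; ++i) {
            if (available.load(std::memory_order_acquire) > already_consumed)
                return true;
            _mm_pause();   // lowers power draw and frees the sibling hyperthread
        }
        return false;
    }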

Resolve Batch Size Optimization:

  • Changed: Resolve max batch size from unlimited to 1
  • Mechanism: Single-item processing checks for new work more frequently, keeping the thread in the fast coordination path instead of expensive spin/wait cycles (see the sketch after this list)
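
A sketch of the mechanism, using the same hypothetical ring-cursor model as earlier: capping the batch at one item means the resolve cursor is republished and the producer side re-checked after every request, so the thread keeps finding work on the fast path instead of draining a long batch and then dropping into the spin/wait cycle.

    #include <algorithm>
    #include <atomic>
    #include <cstdint>

    constexpr uint64_t kResolveMaxBatch = 1;   // previously effectively unlimited

    // One step of the resolve loop: claim at most kResolveMaxBatch items.
    void resolve_step(std::atomic<uint64_t>& sequenced, std::atomic<uint64_t>& resolved) {
        uint64_t head  = resolved.load(std::memory_order_relaxed);
        uint64_t avail = sequenced.load(std::memory_order_acquire);
        uint64_t end   = head + std::min(avail - head, kResolveMaxBatch);
        for (; head < end; ++head) {
            // spend_cpu_cycles(ok_resolve_iterations) runs here per request
        }
        resolved.store(head, std::memory_order_release);   // re-publish after each item
    }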

Request Flow

Health Check Pipeline (/ok endpoint):

I/O Threads (8) → HttpHandler::on_batch_complete() → Commit Pipeline
    ↑                                                        ↓
    |                                                 Stage 0: Sequence (noop)
    |                                                        ↓
    |                                                 Stage 1: Resolve (740ns CPU work)
    |                                                 (spend_cpu_cycles(4000))
    |                                                        ↓
    |                                                 Stage 2: Persist (generate response)
    |                                                 (send "OK" response)
    |                                                        ↓
    |                                                 Stage 3: Release (connection return)
    |                                                 (optimized futex wake)
    |                                                        ↓
    └─────────────────────── Server::release_back_to_server()
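
The same flow expressed as glue code: only HttpHandler::on_batch_complete and Server::release_back_to_server come from the diagram above; their signatures and the submit/on_released helpers are assumptions for illustration.

    #include <cstddef>

    struct Connection;        // opaque per-connection state

    struct CommitPipeline {
        // Stage 0 entry point: assign a sequence number and publish the request.
        void submit(Connection*) { /* ... */ }
    };

    struct Server {
        // Hands the connection back to its I/O thread's epoll loop.
        void release_back_to_server(Connection*) { /* ... */ }
    };

    struct HttpHandler {
        Server* server;
        CommitPipeline* pipeline;

        // Runs on an I/O thread after a batch of /ok requests has been parsed.
        void on_batch_complete(Connection** conns, size_t n) {
            for (size_t i = 0; i < n; ++i)
                pipeline->submit(conns[i]);    // Sequence -> Resolve -> Persist -> Release
        }

        // Invoked from the Release stage once the "OK" response is queued.
        void on_released(Connection* c) {
            server->release_back_to_server(c);
        }
    };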

Test Configuration

  • Server: test_benchmark_config.toml with 8 io_threads, 8 epoll_instances
  • Configuration: ok_resolve_iterations = 4000 (740ns CPU work)
  • Load tester: targeting /ok endpoint
  • Benchmark validation: ./bench_cpu_work 4000
  • Build: ninja
  • Command: ./weaseldb --config test_benchmark_config.toml