From dabead7d6b7e462790b592233d52bdf71ca2dd6d Mon Sep 17 00:00:00 2001
From: Andrew Noyes
Date: Tue, 26 Aug 2025 17:08:20 -0400
Subject: [PATCH] Explain hypothesis for batch size 1 helping

---
 threading_performance_report.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/threading_performance_report.md b/threading_performance_report.md
index 1d45663..407f2be 100644
--- a/threading_performance_report.md
+++ b/threading_performance_report.md
@@ -43,6 +43,8 @@ WeaselDB achieved 1.3M requests/second throughput using a two-stage ThreadPipeli
 
 **Stage-0 Batch Size Optimization**:
 - **Changed**: Stage-0 max batch size from unlimited to 1
+- **Mechanism**: Single-item processing checks for work more frequently, keeping the thread in fast coordination paths instead of expensive spin/wait cycles
+- **Profile evidence**: Coordination overhead reduced from ~11% to ~5.6% CPU time
 - **Result**: Additional 12.7% increase in serial CPU budget (488ns → 550ns)
 - **Overall improvement**: 38.9% increase from baseline (396ns → 550ns)
 
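To illustrate the hypothesized mechanism, below is a minimal sketch of a pipeline-stage worker loop with a configurable max batch size. This is not WeaselDB's actual ThreadPipeline code; all names here (`Stage`, `take_batch`, `max_batch_size`) are hypothetical. The point it shows: with a batch size of 1 the worker re-checks the queue after every item, so under steady load it usually finds work waiting and stays on the cheap dequeue path rather than falling into the spin/wait path.

```cpp
// Minimal sketch of a stage worker with a max batch size.
// Hypothetical names; not WeaselDB's actual ThreadPipeline implementation.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

struct Item {};

class Stage {
public:
    explicit Stage(std::size_t max_batch_size) : max_batch_size_(max_batch_size) {}

    // Producer side: enqueue one item and wake the worker if it is waiting.
    void push(Item item) {
        {
            std::lock_guard<std::mutex> lk(mu_);
            queue_.push_back(item);
        }
        cv_.notify_one();
    }

    // Worker loop for this stage.
    void run() {
        for (;;) {
            std::vector<Item> batch = take_batch();
            for (Item& it : batch) process(it);
            // With max_batch_size_ == 1 we return here after every item, so
            // under steady load the queue is usually non-empty on the next
            // check and take_batch() skips the wait below (the hypothesized
            // "fast coordination path"). An unlimited batch drains the queue,
            // making an empty-queue spin/wait more likely on the next pass.
        }
    }

private:
    std::vector<Item> take_batch() {
        std::unique_lock<std::mutex> lk(mu_);
        // Slow path: block only when the queue is truly empty.
        cv_.wait(lk, [&] { return !queue_.empty(); });
        std::vector<Item> batch;
        while (!queue_.empty() && batch.size() < max_batch_size_) {
            batch.push_back(queue_.front());
            queue_.pop_front();
        }
        return batch;
    }

    void process(Item&) { /* stage-0 work goes here */ }

    std::size_t max_batch_size_;
    std::mutex mu_;
    std::condition_variable cv_;
    std::deque<Item> queue_;
};
```

This sketch uses a mutex and condition variable for brevity; a lock-free or spin-then-wait queue would show the same trade-off between re-checking for work frequently and blocking on an empty queue.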