HTTP Server Design
Overview
A high-performance HTTP server for WeaselDB built on an epoll-based event loop, with a configurable threading model and lock-free communication between components.
Architecture
Threading Model
Accept Threads
- Configurable number of dedicated accept threads
- Each thread calls accept() in a loop on the listening socket
- accept() returns a file descriptor used to construct a connection object
- Connection posted to epoll with EPOLLIN | EPOLLONESHOT interest
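A minimal sketch of this loop, assuming a hypothetical Connection wrapper and a shared epoll descriptor served by the network threads:

```cpp
#include <sys/epoll.h>
#include <sys/socket.h>

struct Connection { int fd; explicit Connection(int f) : fd(f) {} };

void accept_loop(int listen_fd, int epoll_fd) {
    for (;;) {
        // Blocking accept; each accept thread competes on the same listening socket.
        int client_fd = accept4(listen_fd, nullptr, nullptr, SOCK_NONBLOCK);
        if (client_fd < 0) continue;  // real code would inspect errno (EINTR, EMFILE, ...)

        auto* conn = new Connection(client_fd);

        // Hand off to the network threads: EPOLLONESHOT guarantees exactly one
        // epoll_wait() caller receives the event, which establishes ownership.
        epoll_event ev{};
        ev.events = EPOLLIN | EPOLLONESHOT;
        ev.data.ptr = conn;
        epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client_fd, &ev);
    }
}
```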
Network Threads
- Configurable number of network I/O threads
- Each thread calls epoll_wait() to receive connection ownership
- EPOLLONESHOT ensures clean ownership transfer without thundering herd
- Handle connection state transitions and HTTP protocol processing
- Either respond directly or post the connection to a service
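A sketch of a network thread's loop; handle_readable() and handle_writable() are stand-ins for the actual state-transition code:

```cpp
#include <sys/epoll.h>

struct Connection { int fd; };

void handle_readable(Connection&) { /* read bytes, pump llhttp, respond or enqueue (omitted) */ }
void handle_writable(Connection&) { /* continue writing a pending response (omitted) */ }

void network_loop(int epoll_fd) {
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epoll_fd, events, 64, /*timeout=*/-1);
        for (int i = 0; i < n; ++i) {
            auto* conn = static_cast<Connection*>(events[i].data.ptr);
            // Because the fd was armed with EPOLLONESHOT, no other thread can
            // receive events for this connection until it is re-armed.
            if (events[i].events & (EPOLLHUP | EPOLLERR)) { /* tear down conn */ continue; }
            if (events[i].events & EPOLLIN)  handle_readable(*conn);
            if (events[i].events & EPOLLOUT) handle_writable(*conn);
        }
    }
}
```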
Service Threads
- Organized into service pipelines (e.g., commit and status can share a pipeline)
- Dequeue connections from a shared LMAX Disruptor-inspired ring buffer (src/ThreadPipeline.h)
- Process business logic and generate responses
- Post completed connections back to epoll for response writing
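A sketch of a service-thread loop under an assumed queue interface; the real pipeline lives in src/ThreadPipeline.h and its API may differ:

```cpp
#include <sys/epoll.h>

struct Connection { int fd; };

// Placeholder for the LMAX Disruptor-inspired ring buffer; the real pop() blocks
// (or spins) until a connection is available.
struct Pipeline { Connection* pop() { return nullptr; } };

void process_request(Connection&) { /* business logic: build the response (omitted) */ }

void service_loop(Pipeline& pipeline, int epoll_fd) {
    for (;;) {
        Connection* conn = pipeline.pop();  // dequeue a connection owned by this pipeline
        if (!conn) break;                   // placeholder queue is empty in this sketch

        process_request(*conn);             // generate the response into the connection

        // Hand the connection back to the network threads for writing.
        epoll_event ev{};
        ev.events = EPOLLOUT | EPOLLONESHOT;
        ev.data.ptr = conn;
        epoll_ctl(epoll_fd, EPOLL_CTL_MOD, conn->fd, &ev);
    }
}
```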
Connection Lifecycle
- Accept: Accept thread creates connection, posts to epoll with read interest
- Read: Network thread reads data, pumps llhttp streaming parser
- Route: Complete request path triggers handler association
- Process: Handler either:
- Processes in network thread (fast path)
- Posts to service ring buffer (slow path)
- Respond: Connection posted to epoll with write interest
- Write: Network thread writes response data
- Keep-alive: Connection posted back with read interest for next request
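One way to model this lifecycle is an explicit per-connection state machine; the enum below is illustrative, not the actual WeaselDB type:

```cpp
enum class ConnState {
    Accepted,    // created by an accept thread, armed with EPOLLIN | EPOLLONESHOT
    Reading,     // network thread reads bytes and pumps the llhttp parser
    Routing,     // request path complete, handler selected
    Processing,  // fast path in the network thread or slow path in a service pipeline
    Writing,     // response being written, possibly across several EPOLLOUT wakeups
    KeepAlive,   // response sent, arena reset, re-armed with EPOLLIN for the next request
    Closed
};
```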
HTTP Protocol Support
Request Processing
- llhttp streaming parser for incremental request parsing
- Content-Type based routing to appropriate service handlers
- HTTP/1.1 keep-alive support for connection reuse
- Malformed request handling with proper HTTP error responses
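Incremental parsing with llhttp might be wired up roughly as follows; RequestState is illustrative, while the llhttp calls and callbacks shown are the library's standard API:

```cpp
#include <llhttp.h>
#include <string>

struct RequestState {
    std::string url;    // in WeaselDB these could be arena-backed string views instead
    std::string body;
    bool complete = false;
};

bool feed_parser(llhttp_t& parser, RequestState& state, const char* data, size_t len) {
    parser.data = &state;
    llhttp_errno_t err = llhttp_execute(&parser, data, len);
    return err == HPE_OK;  // anything else -> respond 400 Bad Request and close
}

int main() {
    llhttp_settings_t settings;
    llhttp_settings_init(&settings);
    settings.on_url = [](llhttp_t* p, const char* at, size_t n) {
        static_cast<RequestState*>(p->data)->url.append(at, n);
        return 0;
    };
    settings.on_body = [](llhttp_t* p, const char* at, size_t n) {
        static_cast<RequestState*>(p->data)->body.append(at, n);
        return 0;
    };
    settings.on_message_complete = [](llhttp_t* p) {
        static_cast<RequestState*>(p->data)->complete = true;
        return 0;
    };

    llhttp_t parser;
    llhttp_init(&parser, HTTP_REQUEST, &settings);

    // Bytes may arrive in arbitrary chunks; llhttp keeps state between calls.
    RequestState state;
    const char part1[] = "GET /status HTTP/1.1\r\nHo";
    const char part2[] = "st: weaseldb\r\n\r\n";
    feed_parser(parser, state, part1, sizeof(part1) - 1);
    feed_parser(parser, state, part2, sizeof(part2) - 1);
    return state.complete ? 0 : 1;
}
```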
Response Handling
- Direct response writing by network threads
- Partial write support with epoll re-registration
- Connection reuse after complete response transmission
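A sketch of the partial-write path, assuming an illustrative Connection layout that buffers the serialized response:

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <cerrno>
#include <string>

struct Connection {
    int fd = -1;
    std::string out;     // serialized response
    size_t written = 0;  // bytes already transmitted
};

// Returns true once the full response has been sent; false means the socket
// filled up and the connection was re-armed for EPOLLOUT to finish later.
bool write_response(Connection& c, int epoll_fd) {
    while (c.written < c.out.size()) {
        ssize_t n = ::write(c.fd, c.out.data() + c.written, c.out.size() - c.written);
        if (n > 0) {
            c.written += static_cast<size_t>(n);
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            epoll_event ev{};
            ev.events = EPOLLOUT | EPOLLONESHOT;  // resume when writable again
            ev.data.ptr = &c;
            epoll_ctl(epoll_fd, EPOLL_CTL_MOD, c.fd, &ev);
            return false;
        } else {
            return false;  // real code would treat other errors as connection teardown
        }
    }
    return true;  // complete: caller can reset the arena and re-arm EPOLLIN for keep-alive
}
```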
Memory Management
Arena Allocation
- Per-connection arena allocator for request-scoped memory
- Arena reset after each response to prepare for next request
- Zero-copy string views pointing to arena-allocated memory
- Efficient bulk deallocation when connection closes
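A minimal bump-pointer arena showing the reset-per-request pattern; the actual allocator's layout and API may differ:

```cpp
#include <cstddef>
#include <vector>

class Arena {
public:
    explicit Arena(std::size_t capacity) : buf_(capacity) {}

    // Bump-pointer allocation: O(1), no per-allocation bookkeeping.
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);
        if (p + n > buf_.size()) return nullptr;  // real code might grow or chain blocks
        used_ = p + n;
        return buf_.data() + p;
    }

    // Called after each response: everything request-scoped is freed at once,
    // and the buffer is ready for the next request on this connection.
    void reset() { used_ = 0; }

private:
    std::vector<std::byte> buf_;
    std::size_t used_ = 0;
};
```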
Ring Buffer Communication
- Ring buffer for high-performance inter-thread communication
- Backpressure handling via ring buffer capacity limits
Error Handling
Backpressure Response
- Ring buffer full: Network thread writes immediate error response
- Partial write: Connection re-posted to epoll with write interest
- Connection limit reached: New connections are rejected; the server enforces a configurable concurrent-connection limit on a best-effort basis
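The ring-buffer-full case might look like the following sketch, where try_push() on a bounded queue is an assumed interface rather than the real ThreadPipeline API:

```cpp
#include <string_view>

struct Connection { int fd = -1; };

// Placeholder bounded queue; always reports "full" here so the 503 path is shown.
struct Pipeline { bool try_push(Connection*) { return false; } };

void queue_write(Connection&, std::string_view) { /* buffer bytes, arm EPOLLOUT (omitted) */ }

void dispatch_slow_path(Connection& c, Pipeline& pipeline) {
    if (!pipeline.try_push(&c)) {
        // Bounded queue is full: shed load immediately instead of blocking the
        // network thread or growing an unbounded backlog.
        queue_write(c,
            "HTTP/1.1 503 Service Unavailable\r\n"
            "Retry-After: 1\r\n"
            "Content-Length: 0\r\n"
            "\r\n");
    }
}
```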
Protocol Errors
- Malformed HTTP: Proper HTTP error responses (400 Bad Request, etc.)
- Partial I/O: Re-registration with epoll for completion
- Client disconnection: Clean up connection state
Integration Points
Parser Selection
- Content-Type header inspection for parser routing
- CommitRequestParser extensibility for future format support (protobuf, msgpack)
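Parser selection could be a small factory keyed on Content-Type; CommitRequestParser appears in the design, but the subclass and factory shape below are assumptions about how future formats would slot in:

```cpp
#include <memory>
#include <string_view>

// Interface the handlers consume.
struct CommitRequestParser {
    virtual ~CommitRequestParser() = default;
    virtual bool parse(std::string_view body) = 0;
};

struct JsonCommitParser final : CommitRequestParser {
    bool parse(std::string_view) override { return true; }  // placeholder
};

std::unique_ptr<CommitRequestParser> select_parser(std::string_view content_type) {
    if (content_type == "application/json")
        return std::make_unique<JsonCommitParser>();
    // Future formats slot in here: "application/x-protobuf", "application/msgpack", ...
    return nullptr;  // unknown type -> 415 Unsupported Media Type
}
```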
Configuration System
- TOML configuration using existing config system
- Configurable parameters:
- Accept thread count
- Network thread count
- Ring buffer sizes
- Connection timeouts
- Keep-alive settings
Request Routing
- Path-based routing to service handlers
- Handler registration for extensible endpoint support
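Path-based routing can be as simple as a map from path to handler; the Router and Handler shapes below are illustrative:

```cpp
#include <functional>
#include <string>
#include <unordered_map>

struct Connection { int fd = -1; };
using Handler = std::function<void(Connection&)>;

class Router {
public:
    void add(std::string path, Handler h) { routes_[std::move(path)] = std::move(h); }

    // Returns false when no handler matches so the caller can send 404 Not Found.
    bool dispatch(const std::string& path, Connection& c) const {
        auto it = routes_.find(path);
        if (it == routes_.end()) return false;
        it->second(c);
        return true;
    }

private:
    std::unordered_map<std::string, Handler> routes_;
};

// Registration at startup might look like:
//   router.add("/commit", commit_handler);
//   router.add("/status", status_handler);
```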
Performance Characteristics
Scalability
- Configurable parallelism for accept and network operations
- Lock-free communication eliminates contention bottlenecks
- EPOLLONESHOT semantics prevent thundering herd effects and manage connection ownership
Efficiency
- Zero-copy I/O where possible using string views
- Arena allocation eliminates per-request malloc overhead
- Connection reuse reduces accept/close syscall overhead
- Streaming parser handles large requests incrementally
Resource Management
- Bounded memory usage via arena reset and ring buffer limits
- Graceful backpressure prevents resource exhaustion
Implementation Phases
Phase 1: Core Infrastructure
- Basic epoll event loop with accept/network threads
- Connection state management and lifecycle
- llhttp integration for request parsing
Phase 2: Service Integration
- Request routing and handler registration
- Arena-based memory management
Phase 3: Protocol Features
- HTTP/1.1 keep-alive support
- Error handling and backpressure responses
- Configuration system integration
Phase 4: Content Handling
- Content-Type based parser selection
- CommitRequest processing integration
- Response serialization and transmission
Configuration Schema
```toml
[http]
accept_threads = 2
network_threads = 8
bind_address = "0.0.0.0"
port = 8080
keepalive_timeout_sec = 30
request_timeout_sec = 10

[http.pipelines.commit_status]
ring_buffer_size = 1024
threads = 4
```
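For reference, a plain struct this schema could deserialize into, as a sketch; field names and defaults mirror the example values, and the binding to the existing TOML config system is not shown:

```cpp
#include <cstdint>
#include <string>

struct HttpConfig {
    std::uint32_t accept_threads = 2;
    std::uint32_t network_threads = 8;
    std::string bind_address = "0.0.0.0";
    std::uint16_t port = 8080;
    std::uint32_t keepalive_timeout_sec = 30;
    std::uint32_t request_timeout_sec = 10;

    struct Pipeline {
        std::uint32_t ring_buffer_size = 1024;  // bounded: this is what provides backpressure
        std::uint32_t threads = 4;
    } commit_status;                            // [http.pipelines.commit_status]
};
```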