Policy Enforcement Pipeline
Every packet traverses a multi-stage pipeline in which each stage makes a forwarding or policy decision. The entire pipeline must complete in under 10 microseconds to maintain wire speed.
📊 Packet Processing Pipeline
Ingress → Classify → Policy → QoS → Egress
Total pipeline latency: 5-10 µs (hardware) vs. 50-500 µs (software)
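The five stages can be modeled in software as a chain of functions applied in order; this is an illustrative sketch (stage names come from the diagram, but the function bodies and the `Packet` fields are assumptions for demonstration):

```python
# Illustrative model of the five-stage pipeline. Each stage is a function
# that annotates the packet and passes it on; real DPU stages run in
# fixed-function hardware, not Python.
from dataclasses import dataclass, field

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    notes: list = field(default_factory=list)

def ingress(pkt):  pkt.notes.append("ingress");  return pkt
def classify(pkt): pkt.notes.append("classify"); return pkt
def policy(pkt):   pkt.notes.append("policy");   return pkt
def qos(pkt):      pkt.notes.append("qos");      return pkt
def egress(pkt):   pkt.notes.append("egress");   return pkt

PIPELINE = [ingress, classify, policy, qos, egress]

def process(pkt):
    for stage in PIPELINE:
        pkt = stage(pkt)
    return pkt

p = process(Packet("10.0.0.1", "10.0.0.2"))
print(p.notes)  # stages applied in pipeline order
```

The key property mirrored here is that every packet visits every stage in a fixed order, which is what makes hardware latency deterministic.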
Hardware vs Software Policy
Software-based policy enforcement cannot scale to AI workloads. Only dedicated hardware can make 595 million decisions per second with deterministic latency.
| Metric | 💻 Software (CPU) | 🔧 Hardware (DPU) |
|---|---|---|
| Latency | 50-500 µs | 5-10 µs |
| Throughput | 1-10 Mpps | 595 Mpps |
| Jitter | High | Minimal |
| CPU usage | 100% | 0% |
| Bypass risk | Possible | None |
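The throughput figures above translate directly into a per-packet time budget. A quick back-of-the-envelope check (derived arithmetic, not a vendor measurement) shows why a general-purpose CPU cannot keep pace:

```python
# Per-packet time budget implied by a given packet rate.
def budget_ns(mpps: float) -> float:
    """Nanoseconds available per packet at a rate given in Mpps."""
    return 1e9 / (mpps * 1e6)

print(f"DPU at 595 Mpps: {budget_ns(595):.2f} ns per packet")
print(f"CPU at  10 Mpps: {budget_ns(10):.0f} ns per packet")
print(f"CPU at   1 Mpps: {budget_ns(1):.0f} ns per packet")
```

At 595 Mpps the budget is under 2 ns per decision, less than a single DRAM access, which is why the lookup must happen in on-chip match tables rather than in software.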
Flow Classification
The DPU extracts multiple fields from each packet and matches against classification rules to determine tenant identity, priority, and applicable policies.
🔍 Packet Field Extraction
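As a software analogue of the hardware parser, the common classification fields (the IPv4/TCP 5-tuple plus DSCP) can be pulled from a raw frame at fixed offsets. This minimal sketch assumes an untagged Ethernet frame carrying IPv4 with no options and TCP; field offsets follow the standard header layouts:

```python
import struct

def extract_fields(frame: bytes) -> dict:
    """Parse the 5-tuple and DSCP from an untagged Ethernet/IPv4/TCP frame.
    Assumes no VLAN tag and a 20-byte IPv4 header (no options)."""
    eth_type = struct.unpack_from("!H", frame, 12)[0]
    assert eth_type == 0x0800, "IPv4 only in this sketch"
    tos = frame[15]                       # IPv4 ToS byte
    proto = frame[23]                     # IPv4 protocol field (6 = TCP)
    src_ip, dst_ip = struct.unpack_from("!4s4s", frame, 26)
    src_port, dst_port = struct.unpack_from("!HH", frame, 34)
    return {
        "src_ip": ".".join(map(str, src_ip)),
        "dst_ip": ".".join(map(str, dst_ip)),
        "proto": proto,
        "src_port": src_port,
        "dst_port": dst_port,
        "dscp": tos >> 2,                 # top 6 bits of ToS
    }

# Demo: a minimal hand-built frame (zeros where fields don't matter here).
demo = (b"\x00" * 12 + b"\x08\x00"                     # Ethernet, EtherType IPv4
        + bytes([0x45, 0xb8]) + b"\x00" * 6            # ver/IHL, ToS (DSCP 46), len/id/flags
        + bytes([64, 6]) + b"\x00\x00"                 # TTL, proto=TCP, checksum
        + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])  # src/dst IP
        + struct.pack("!HH", 12345, 443))              # TCP src/dst ports
print(extract_fields(demo))
```

The hardware parser does the same offset arithmetic, but for every packet in parallel match-action stages rather than sequentially.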
Traffic Priority Classes
DPUs assign traffic to priority queues based on classification results. Higher-priority traffic receives guaranteed bandwidth and lower latency.
🎯 Priority Queue Hierarchy
Critical → High → Normal → Background (highest to lowest)
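One common way to realize such a hierarchy is strict-priority dequeuing: the scheduler always drains the highest non-empty class first. This is a simplified software model (real DPUs typically combine strict priority with per-class bandwidth guarantees, and the packet names are invented examples):

```python
from collections import deque

# Strict-priority scheduler over the four classes above (simplified model).
CLASSES = ["critical", "high", "normal", "background"]

class PriorityScheduler:
    def __init__(self):
        self.queues = {c: deque() for c in CLASSES}

    def enqueue(self, pkt, cls):
        self.queues[cls].append(pkt)

    def dequeue(self):
        # Always serve the highest-priority non-empty queue.
        for cls in CLASSES:
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("bulk-backup", "background")
sched.enqueue("gradient-sync", "critical")
first = sched.dequeue()
print(first)  # the critical packet leaves first despite arriving later
```

Pure strict priority can starve the background class under sustained critical load, which is one reason the lower classes also need a minimum bandwidth guarantee.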
Policy Decision Flow
Each packet goes through a decision tree that determines its fate. Actions include allow, deny, rate-limit, or redirect, all executed in hardware.
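A software model of that decision flow might look like the following first-match rule table with the four actions from the text; the match fields, rule entries, and IP addresses (drawn from the documentation ranges) are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# First-match rule table mirroring the allow/deny/rate-limit/redirect actions.
@dataclass
class Rule:
    src_ip: Optional[str]      # None means wildcard
    dst_port: Optional[int]    # None means wildcard
    action: str

RULES = [
    Rule(src_ip="203.0.113.9", dst_port=None, action="deny"),   # blocked host
    Rule(src_ip=None, dst_port=443,  action="allow"),           # HTTPS traffic
    Rule(src_ip=None, dst_port=9999, action="redirect"),        # e.g. to a scrubber
]

def decide(src_ip: str, dst_port: int) -> str:
    """Return the action for the first matching rule."""
    for r in RULES:
        if r.src_ip in (None, src_ip) and r.dst_port in (None, dst_port):
            return r.action
    return "rate_limit"  # default action for unclassified traffic

print(decide("203.0.113.9", 443))  # deny wins: first match ends the walk
```

In hardware the same first-match semantics are implemented with TCAM or hash-based match-action tables, so every packet gets a verdict in constant time.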