03 Performance Engineering

The Wire Speed Challenge

Why processing packets at 100-400 Gbps without dropping a single one is one of the hardest problems in computer engineering.

What Does Wire Speed Mean?

"Wire speed" means processing every single packet at the full line rate of the network interface—no drops, no queuing delays, no bottlenecks. At modern speeds, the numbers are staggering.

Network Speed Evolution: 25G → 100G → 200G → 400G

The headline numbers at 400 Gbps: 595M packets/second, 1.7 nanoseconds per packet, ~5 CPU cycles per packet.
⚡ The Impossible Math
At 400 Gbps with minimum-sized packets (64 bytes, or 84 bytes on the wire once the preamble and inter-frame gap are counted), you must process nearly 600 million packets per second. That's 1.7 nanoseconds per packet, about 5 CPU clock cycles on a 3 GHz processor. No CPU can do meaningful work in 5 cycles.
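The numbers above follow directly from the line rate and Ethernet framing overhead. A quick back-of-the-envelope check, assuming a 3 GHz core and standard minimum-frame framing (preamble/SFD plus inter-frame gap):

```python
# Sanity-check the wire-speed arithmetic for 400 Gbps with 64-byte frames.

LINE_RATE_BPS = 400e9          # 400 Gbps line rate
FRAME_BYTES = 64               # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12        # preamble/SFD + inter-frame gap
CPU_HZ = 3e9                   # assumed 3 GHz CPU clock

bits_on_wire = (FRAME_BYTES + OVERHEAD_BYTES) * 8    # 672 bits per packet
pps = LINE_RATE_BPS / bits_on_wire                   # ~595 million packets/s
ns_per_packet = 1e9 / pps                            # ~1.68 ns
cycles_per_packet = CPU_HZ / pps                     # ~5 cycles

print(f"{pps / 1e6:.0f} Mpps, {ns_per_packet:.2f} ns/packet, "
      f"{cycles_per_packet:.1f} cycles/packet")
# → 595 Mpps, 1.68 ns/packet, 5.0 cycles/packet
```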

The Latency Breakdown

Every operation adds latency. At wire speed, even nanoseconds matter. Here's what happens in the time budget of a single packet.

What Happens in Microseconds
Wire-to-MAC (< 1 μs): Physical signal reception, clock recovery, and MAC processing
Packet Parsing (1-2 μs): Header extraction, flow identification, checksum verification
Policy Lookup (2-5 μs): Flow table match, QoS classification, security rules
Action & Forward (5-10 μs): Packet modification, scheduling, transmission
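Summing the stage ranges above gives the end-to-end latency budget for one packet. Note that latency and throughput are separate constraints: even a best-case 8 μs transit is thousands of times longer than the 1.7 ns inter-arrival time at 400 Gbps, so stages must overlap rather than run one packet at a time. A minimal sketch of that sum:

```python
# Per-stage latency ranges from the breakdown above, in microseconds.
stages_us = {
    "wire_to_mac":    (0.0, 1.0),   # < 1 μs
    "packet_parsing": (1.0, 2.0),
    "policy_lookup":  (2.0, 5.0),
    "action_forward": (5.0, 10.0),
}

best = sum(lo for lo, _ in stages_us.values())    # 8 μs best case
worst = sum(hi for _, hi in stages_us.values())   # 18 μs worst case

print(f"end-to-end latency: {best:.0f}-{worst:.0f} μs")
# → end-to-end latency: 8-18 μs
```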

What Must Happen Per Packet

For proper tenant isolation, every packet must go through multiple processing stages. Each stage must complete in sub-microsecond time at wire speed.

Packet Processing Pipeline
📥 Receive (<100 ns) → 🔍 Classify (~200 ns) → 📋 Policy (~500 ns) → 📊 Queue (~200 ns) → 📤 Transmit (<100 ns)
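In software terms, the classify and policy stages amount to deriving a flow key from the packet headers and matching it against a rule table. The sketch below is illustrative only, assuming a hypothetical three-field flow key and an in-memory table; a real DPU does this match in hardware:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    proto: str
    payload: bytes
    queue: int = 0
    action: str = "forward"

# Hypothetical policy table keyed on a (src, dst, proto) flow key.
FLOW_TABLE = {
    ("10.0.0.1", "10.0.0.2", "tcp"): {"action": "forward", "queue": 1},
    ("10.0.0.3", "10.0.0.2", "udp"): {"action": "drop",    "queue": 0},
}

def classify(pkt):
    # Classify stage: derive the flow key from the parsed headers.
    return (pkt.src, pkt.dst, pkt.proto)

def policy(pkt, key):
    # Policy stage: flow-table match; unknown flows are dropped here.
    rule = FLOW_TABLE.get(key, {"action": "drop", "queue": 0})
    pkt.action, pkt.queue = rule["action"], rule["queue"]
    return pkt

def process(pkt):
    # Receive → classify → policy → queue → transmit, collapsed into one call.
    return policy(pkt, classify(pkt))

p = process(Packet("10.0.0.1", "10.0.0.2", "tcp", b"hello"))
print(p.action, p.queue)   # → forward 1
```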
💡 The Hardware Solution
CPUs can't process packets this fast because they're designed for complex, variable workloads. DPUs use dedicated hardware pipelines that process packets in parallel, achieving wire speed by design rather than brute force.
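The reason pipelining escapes the impossible math: once a hardware pipeline is full, it accepts a new packet every clock cycle, so sustained throughput depends on the clock rate, not on the total work each packet needs. A rough illustration, with the pipeline clock and depth as assumed figures rather than specs of any particular DPU:

```python
# Assumed figures for illustration; real DPU pipelines vary.
PIPE_CLOCK_HZ = 1.0e9      # assumed 1 GHz packet-processing pipeline
STAGES = 20                # assumed pipeline depth

# One packet enters (and one exits) per cycle once the pipeline is full.
throughput_pps = PIPE_CLOCK_HZ                 # 1 billion packets/s sustained
latency_ns = STAGES * 1e9 / PIPE_CLOCK_HZ      # 20 ns transit through the pipe

# 1,000 Mpps comfortably exceeds the ~595 Mpps needed for 400 Gbps.
print(f"{throughput_pps / 1e6:.0f} Mpps sustained, {latency_ns:.0f} ns latency")
# → 1000 Mpps sustained, 20 ns latency
```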