Part 2 of this series presented packetization delay, the delay incurred as systems along the message path create and reshape packets. Part 3 of this series on measuring latency in messaging systems focuses on serialization delay, the delay in moving a packet from the Network Interface Controller's (NIC) transmit buffer onto the wire.
Minimizing Serialization Delay
Moving to a higher-bandwidth technology plays a much greater role in reducing serialization delay than changing the packet size does. This is because serialization delay is a function of packet size and transmission rate, expressed as:
Serialization Delay = Size of Packet (bits) / Transmission Rate (bps)
A 1500 byte packet (12,000 bits) transmitted over a T1 link (1,544,000 bps) incurs a serialization delay of about 8 milliseconds. The same 1500 byte packet over a 56K modem (57,344 bps) takes roughly 209 milliseconds, whereas Gigabit Ethernet (1,000,000,000 bps) reduces the packet's serialization delay to just 12 microseconds.
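The figures above can be reproduced with a short sketch that applies the formula directly; the function name and the rate values chosen here are illustrative, not from any particular library.

```python
def serialization_delay(packet_bytes: int, rate_bps: int) -> float:
    """Serialization delay in seconds: packet size in bits / transmission rate in bps."""
    return (packet_bytes * 8) / rate_bps

if __name__ == "__main__":
    packet = 1500  # bytes, a common Ethernet MTU-sized packet
    links = [
        ("56K modem", 57_344),
        ("T1", 1_544_000),
        ("Gigabit Ethernet", 1_000_000_000),
    ]
    for name, rate in links:
        delay = serialization_delay(packet, rate)
        # Print in microseconds for easy comparison across link speeds
        print(f"{name:18s}: {delay * 1e6:,.1f} microseconds")
```

Running this shows the three orders of magnitude spanned by the examples: roughly 209 ms for the modem, about 7.8 ms for T1, and 12 µs for Gigabit Ethernet.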
In part 4 of this series, I’ll cover the third of the three latency delays, namely propagation delay.