Network latency has become increasingly important to the performance of web servers and cloud computing platforms. Identifying network-related tail latencies and reasoning about their potential causes is especially important for gauging application run time in online data-intensive applications, where the 99th-percentile latency of individual operations can significantly affect the overall latency of requests. This paper deconstructs the "tail at scale" effect across the TCP/IP, UDP/IP, and RDMA network protocols. Prior work has analyzed tail latencies caused by extrinsic network factors such as congestion and flow fairness. In contrast to the existing literature, we identify surprising rare tails in TCP/IP round-trip measurements that are as large as 110x the median latency. Our experimental design eliminates network congestion as a tail-inducing factor. Moreover, we observe similarly extreme tails in UDP/IP packet exchanges, ruling out TCP-specific protocol operations as the root cause of the tails. However, we are unable to reproduce comparable tail latencies in RDMA packet exchanges, which leads us to conclude that the TCP/UDP protocol stack within the operating-system kernel is the likely primary source of extreme latency tails.
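
For concreteness, the following is a minimal sketch of the kind of round-trip measurement described above: a TCP ping-pong client that timestamps each message exchange and reports the median, 99th-percentile, and maximum latency. This is an illustration under stated assumptions, not the paper's actual harness; the server address (192.0.2.1), port, message size, and sample count are placeholders, and the server is assumed to echo each message back unchanged.

```c
/* Illustrative TCP round-trip latency probe (sketch, not the paper's
 * harness). Sends fixed-size messages to an assumed echo server and
 * records per-exchange round-trip times. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define N_SAMPLES 100000   /* placeholder sample count */
#define MSG_SIZE  64       /* placeholder message size */

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    static double rtt_us[N_SAMPLES];
    char buf[MSG_SIZE] = {0};
    struct sockaddr_in srv = {0};

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Disable Nagle's algorithm so small messages leave immediately. */
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* placeholder server */
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect"); return 1;
    }

    for (int i = 0; i < N_SAMPLES; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (write(fd, buf, MSG_SIZE) != MSG_SIZE) { perror("write"); return 1; }
        /* Read until the full echoed message has arrived. */
        ssize_t got = 0;
        while (got < MSG_SIZE) {
            ssize_t r = read(fd, buf + got, MSG_SIZE - got);
            if (r <= 0) { perror("read"); return 1; }
            got += r;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        rtt_us[i] = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }
    close(fd);

    /* Sort the samples to extract order statistics. */
    qsort(rtt_us, N_SAMPLES, sizeof(double), cmp_double);
    printf("median: %.1f us  p99: %.1f us  max: %.1f us\n",
           rtt_us[N_SAMPLES / 2],
           rtt_us[(int)(N_SAMPLES * 0.99)],
           rtt_us[N_SAMPLES - 1]);
    return 0;
}
```

A tail such as the 110x one reported above would appear here as a maximum (or p99.99) value two orders of magnitude larger than the median; running the same loop over a UDP socket or an RDMA verbs queue pair isolates which layer of the stack contributes it.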