lxgr
The fact that such a simple endpoint-only control loop (mostly) just works is one of the most amazing parts of the Internet to me.

Still, it has its limits – the most obvious one to me is mentioned in the article: The optimal buffer size for each hop that TCP (or similarly flow-controlled traffic) traverses depends on the average delay and bandwidth, but these are per-flow quantities which an intermediate router can't properly estimate in principle.
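For concreteness, that's just the bandwidth-delay product rule of thumb. A rough sketch with made-up numbers (a 1 Gbit/s bottleneck and an 80 ms average RTT, neither taken from the article):

    # Illustrative only: the classic bandwidth-delay product (BDP) rule of thumb
    # for sizing a bottleneck buffer. Link rate and RTT below are assumptions
    # for the sake of the arithmetic, not numbers from the article.
    import math

    link_rate_bps = 1_000_000_000   # assume a 1 Gbit/s bottleneck link
    avg_rtt_s = 0.080               # assume an 80 ms average round-trip time

    bdp_bytes = link_rate_bps / 8 * avg_rtt_s
    print(f"BDP buffer: {bdp_bytes / 1e6:.0f} MB")   # 10 MB for these numbers

    # With N long-lived flows sharing the link, the "Sizing Router Buffers" result
    # suggests roughly BDP / sqrt(N) may be enough -- but the router still has to
    # guess N and the RTT mix, which is exactly the per-flow information it lacks.
    n_flows = 100
    print(f"BDP/sqrt(N): {bdp_bytes / math.sqrt(n_flows) / 1e6:.1f} MB")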

I've played around a bit with Linux's excellent fq_codel in the past, with promising results, but ultimately that's not a solution in the spirit of the Internet (dumb routers, smart endpoints), since it depends on looking at individual flows, something routers aren't really supposed to have to do.
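Roughly, the per-flow piece looks like this (a simplified sketch, not the kernel's actual code; only fq_codel's default of 1024 flow queues and the 5-tuple hashing are taken as given):

    # Simplified sketch of fq_codel's per-flow classification: packets are hashed
    # on their 5-tuple into one of a fixed number of queues, each of which gets
    # its own CoDel state. Plain hash() stands in for the kernel's salted jhash.
    from dataclasses import dataclass

    NUM_QUEUES = 1024  # fq_codel's default number of flow queues

    @dataclass(frozen=True)
    class FiveTuple:
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        proto: int

    def flow_queue(pkt: FiveTuple) -> int:
        # Per-flow state is the part that breaks the "dumb routers" ideal.
        return hash(pkt) % NUM_QUEUES

    print(flow_queue(FiveTuple("10.0.0.1", "10.0.0.2", 51512, 443, 6)))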

I really hope that something like Google's BBR will become the de-facto TCP congestion control algorithm in the end.

suprjami
I looked into congestion control earlier in the year.

It's hilarious to see the things that were hard-coded in the first BSD implementation, like a 4 KiB socket buffer size.

Also, as far as I can see, Bill Joy DID actually have linear backoff, but it was hidden behind an always-false condition, so his smaller, hand-written floating-point backoffs were used instead. This seems like a fairly tragic coding error to me.

mannyv
The ATM part was amusing.

It's still used in the telco space; xDSL runs over ATM, generally speaking.

It wasn't obvious that TCP/IP would win. I wonder how much of its victory was because it was basically free: the specs were public, and pretty much everyone built an implementation.

Compare that to the other protocols, which tended to be tied to a vendor (DECnet, AppleTalk, IPX, Token Ring). That made interoperability difficult, since companies generally didn't want to license their tech to anyone else, because it was a competitive advantage.

You can still find the free TCP/IP implementations out there (like tinytcp, which is still floating around).

0xDEF
The author of the original RFC 896 from 1984, and the namesake of Nagle's algorithm, is John Nagle, who is an active user here on Hacker News:

https://news.ycombinator.com/user?id=animats

https://datatracker.ietf.org/doc/html/rfc896

1letterunixname
When faster connections brought latency down but packet loss remained, a new problem emerged: TCP incast collapse. There was plenty of available bandwidth, but TCP transfers would slow down, sometimes to a crawl, with no QoS, packet shaping, or bandwidth limits involved.

Mitigations include high-resolution retransmit timers (fine-grained RTO) and FQCN.

Didn't help: SACK, TCP Reno, and New Reno.
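A back-of-the-envelope sketch of why it collapses; all numbers are assumptions picked to show the mechanism (a shallow shared switch buffer, synchronized responses, and Linux's default 200 ms minimum RTO against sub-millisecond datacenter RTTs), not measurements from the papers below:

    # Rough arithmetic sketch of TCP incast collapse. All numbers are assumptions
    # chosen to illustrate the mechanism, not figures from the linked papers.

    switch_buffer_bytes = 128 * 1024   # assume a shallow shared output buffer
    senders = 50                       # servers answering one request in sync
    response_bytes = 32 * 1024         # assume 32 KiB per server response

    burst = senders * response_bytes   # arrives at the same output port at once
    print(f"burst {burst // 1024} KiB vs buffer {switch_buffer_bytes // 1024} KiB")
    # -> 1600 KiB vs 128 KiB: most senders lose packets in the same RTT.

    rtt_s = 0.0002                     # ~200 microsecond datacenter RTT
    rto_min_s = 0.200                  # Linux's default minimum RTO (200 ms)
    # A lost tail packet stalls that flow for rto_min, ~1000x the actual RTT,
    # so the link sits idle while everyone waits out the timeout -- hence the
    # appeal of high-resolution (microsecond-granularity) retransmit timers.
    print(f"stall/RTT ratio: {rto_min_s / rtt_s:.0f}x")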

https://dl.acm.org/doi/10.1145/1592681.1592693

https://dl.acm.org/doi/pdf/10.1145/1592681.1592693

throwawaaarrgh
The greatest sin of the engineer is to believe that superior technology is all that's needed to succeed.
zdw
This is syndicated from the upstream blog, which gets Bruce and Larry's content earlier:

https://systemsapproach.substack.com/p/how-congestion-contro...

dtaht
The article kind of missed that Van was the author of RED, that he and Kathie Nichols later did CoDel, and that he was later on the team that did BBR.

What would the internet have looked like without Van?
