It's hilarious to see the things that were hard-coded in the first BSD implementation, like a 4 KiB socket buffer size.
Also, as far as I can see, Bill Joy DID actually implement linear backoff, but it was hidden behind an always-false condition, so his hand-written table of smaller floating-point backoff factors was used instead. This seems like a fairly tragic coding error to me.
It's still used in the telco space; xDSL runs over ATM, generally speaking.
It's not obvious that TCP/IP would win. I wonder how much of its victory came from being basically free: the specs were public, and pretty much everyone built an implementation.
Compare that to the other protocols, which tended to be tied to a single vendor (DECnet, AppleTalk, IPX, Token Ring). That made interoperability difficult, since companies generally didn't want to license their tech to someone else; it was a competitive advantage.
You can still find some of those free TCP/IP implementations out there (like tinytcp, which is still floating around).
Mitigations include high-resolution retransmission timeout (RTO) timers and FQCN.
Didn't help: SACK, TCP Reno, and New Reno.
https://systemsapproach.substack.com/p/how-congestion-contro...
What would the internet have looked like without Van?
Still, it has its limits. The most obvious one to me is mentioned in the article: the optimal buffer size for each hop that TCP (or similarly flow-controlled traffic) traverses depends on the average delay and bandwidth, but those are per-flow quantities that an intermediate router can't properly estimate, even in principle.
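To make that concrete: the classic rule of thumb sizes a hop's buffer at one bandwidth-delay product (BDP), and the delay term is exactly the thing the router can't see. A minimal back-of-the-envelope sketch, with made-up numbers:

```python
# Bandwidth-delay-product (BDP) buffer sizing; all numbers are illustrative.
link_rate_bps = 10e9   # 10 Gbit/s bottleneck link
avg_rtt_s = 0.08       # 80 ms average round-trip time
n_flows = 1000         # concurrent long-lived flows sharing the link

# Classic rule of thumb: buffer one BDP's worth of bytes.
bdp_bytes = link_rate_bps * avg_rtt_s / 8
print(f"one BDP: {bdp_bytes / 1e6:.0f} MB")  # -> 100 MB

# Appenzeller et al. (SIGCOMM 2004) argue BDP/sqrt(n) suffices when many
# flows desynchronize -- but the router still has to guess the RTT.
print(f"BDP/sqrt(n): {bdp_bytes / n_flows ** 0.5 / 1e6:.1f} MB")  # -> ~3.2 MB
```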
I've played around a bit with Linux's excellent fq_codel in the past, with promising results, but ultimately it's not a solution in the spirit of the Internet (dumb routers, smart endpoints), since it depends on looking at individual flows, something routers aren't really supposed to have to do.
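For the curious, that flow-awareness boils down to hashing each packet's 5-tuple into one of a set of per-flow queues (fq_codel defaults to 1024 of them). A toy sketch of just that classification step; the function and field names are mine, not the kernel's:

```python
import hashlib
from collections import deque

# fq_codel-style flow separation: same 5-tuple -> same queue.
NUM_QUEUES = 1024  # fq_codel's default number of flow queues
queues = [deque() for _ in range(NUM_QUEUES)]

def flow_queue(src_ip: str, dst_ip: str, proto: str,
               src_port: int, dst_port: int) -> int:
    # Hash the 5-tuple so every packet of a flow lands in one queue.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_QUEUES

queues[flow_queue("10.0.0.1", "10.0.0.2", "tcp", 51234, 443)].append("packet")
```

Which is exactly the problem: the router has to parse transport-layer headers just to do its queueing.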
I really hope that something like Google's BBR will become the de facto TCP congestion control algorithm in the end.
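The appeal is that BBR moves the smarts back to the endpoint: instead of treating loss as the congestion signal, it builds a model of the path from two measurements (bottleneck bandwidth and minimum RTT) and paces at roughly their product. A heavily simplified sketch of that model; real BBR uses windowed max/min filters and cycles its gains, and the class and names below are mine:

```python
# Heavily simplified BBR-style model: derive sending rate and cwnd from
# measured delivery rate and RTT rather than backing off on packet loss.
class BBRModelSketch:
    def __init__(self):
        self.max_bw_bps = 0.0          # bottleneck bandwidth estimate
        self.min_rtt_s = float("inf")  # propagation delay estimate

    def on_ack(self, delivery_rate_bps: float, rtt_s: float) -> None:
        # Real BBR expires these samples; here we just track max/min.
        self.max_bw_bps = max(self.max_bw_bps, delivery_rate_bps)
        self.min_rtt_s = min(self.min_rtt_s, rtt_s)

    def pacing_rate_bps(self, gain: float = 1.0) -> float:
        return gain * self.max_bw_bps

    def cwnd_bytes(self, gain: float = 2.0) -> float:
        return gain * self.max_bw_bps * self.min_rtt_s / 8  # gain * BDP

m = BBRModelSketch()
m.on_ack(delivery_rate_bps=50e6, rtt_s=0.030)
print(m.pacing_rate_bps(), m.cwnd_bytes())  # 50 Mbit/s, ~375 kB cwnd
```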