I'm trying to understand the implications of this thing. Please forgive my ignorance. So, a DPU (data processing unit) is basically a daughter board that has multiple CPUs, big network, modest RAM, but also NVMe capabilities? There's a bit of architectural stuff I didn't understand in the article.

Is this the world going full circle and reintroducing the mini-PCs of the '80s and '90s era?

I understand, I think, the usefulness of having these in a data centre, each customer having their own DPU, which would present to them as a bare-metal device.

I understand, I think, the crypto guys loving this for compute power, easily expandable.

I also understand this is not for average consumers... but this is HN... what other uses can we put this to? What advantages does this physical architecture give us?

If anyone could elucidate...? It looks exciting, but I'm not sure of the scope.

What is old is new again. IBM mainframes had a concept called a "channel controller". Everything connected to the mainframe was basically a computer itself that offloaded work from the main system. Every DASD (disk) and communication link was its own system.

Random: the Commodore 1541 drive (all of their drives, really) had its own 65xx CPU and some RAM (2K, IIRC, for the 1541). There was copy/backup software where you could hook up two drives, load the program, and then disconnect the drives from the computer. You could put the master in the first drive and a blank disk in the second, and every time you swapped in a new blank it would make another copy.

Does anyone know the price of these cards? Trying this out in a home setup would be extremely awesome, but most likely prohibitively expensive.
Are there any other old farts in here who are thinking "S-100 bus"?

Everything was run from the backplane. Want a new CPU? Plug one in!

I've found ZFS performance on NVMe to be somewhat disappointing; it's a shame there are no benchmarks.
DPUs are like cheap blade servers you can mount in a tower? Is that a reasonable analogy?
Because "DPU" is a thing...