Our first was just an existing card with a small daughter card with PALs and SRAM on it. It was so easy that we got our own logos put on many of the chips to put the competition off the scent. We designed that one in days and got it to market in weeks.
We immediately started on two more designs. The next was all FPGA; it was as big a NuBus card as one could build, it pulled too much power, and it tilted out of the bus socket under its own weight (Macs didn't use screws to hold cards in place; that happened when you closed the chassis). We got it out the door right about when the competition beat the first board's performance.
The final card was built with custom silicon, designed backwards from "how fast can we possibly make the VRAMs go if we use all the tricks?" In this case we essentially bet the company on whether a new ~200-pin plastic packaging technology was viable. That design really soaked the competition.
In those days big monitors didn't work on just any card, so if you owned the high-end graphics card biz you owned the high-end monitor biz too ... The three-card play above was worth more than $120m.
Yes, the cable will add roughly 0.3 ns of latency due to the added 10 cm of distance.
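For intuition, here's a back-of-envelope sketch of where a figure like that comes from. The ~0.3 ns corresponds to propagation at roughly the vacuum speed of light; the velocity factor below is an assumed value, and a typical cable dielectric pushes the delay closer to ~0.5 ns:

    # Rough propagation delay over the extra 10 cm of cable.
    # velocity_factor is an assumed value; real cables sit around 0.6-0.8.
    C = 3.0e8               # speed of light in vacuum, m/s
    velocity_factor = 0.7   # assumed fraction of c in the dielectric
    extra_length_m = 0.10   # the added 10 cm

    delay_ns = extra_length_m / (C * velocity_factor) * 1e9
    print(f"one-way delay: {delay_ns:.2f} ns")  # ~0.48 ns; ~0.33 ns at full c

Either way, it's negligible next to the overall latency of a PCIe transaction.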
This is what it looks like:
I chuckled a little at this because I used to wonder the same thing until I had to actually bring up a GDDR6 interface. Basically, the reason GDDR6 is able to run so much faster is that everything is assumed to be soldered down, not socketed/slotted.
Back when I worked for a GPU company, I occasionally had conversations with co-workers about how ridiculous it was that we put a giant heavy heatsink on the CPU and a low-profile cooler on the GPU, which in today's day and age produces way more heat! I'm of the opinion that we should make mini-ATX-shaped graphics cards so that you can bolt them behind your motherboard (though you would need a different case with standoffs in both directions).
By the way, what actually dissatisfies me is that the majority of mainboards have too few PCIe slots. Whenever I buy a PC I want a great, extensible, future-proof mainboard plus very basic everything else, incl. a cheap graphics card, so I can upgrade different parts the moment I feel like it. Unfortunately such many-slot mainboards all seem to target the luxury gamer/miner segment and cost many times more than ordinary ones. I don't understand why some extra slots have to multiply the cost by 10.
Bring back proper desktop cases!
2. Are there any GPUs that have actually caused physical damage to a motherboard slot?
3. GPUs are already 2-wide by default, and some are 3-wide. 4-wide GPUs would have more support from the chassis. This seems like the simpler solution, especially since most people rarely have a 2nd add-in card in their computers at all these days.
4. Perhaps the real issue is that PCIe extenders need to become a thing again, so GPUs can be placed at an anchored point elsewhere on the chassis. However, extending up to 4-wide GPUs seems more likely, because PCIe needs to get faster and faster. GPU-to-CPU communication is growing more and more important, so PCIe 5 and PCIe 6 lanes are going to be harder and harder to extend out (per-lane rates sketched below).
For now, it's probably just an absurd look, but I'm not 100% convinced we have a real problem yet. For years, GPUs have drawn more power than the CPU/motherboard combined, because GPUs perform most of the work in video games (i.e. matrix multiplication to move the list of vertices to the right location, and pixel shaders to calculate the angle of light/shadows).
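For reference on point 4, a quick sketch of the raw per-lane signaling rates by generation (the rates are the published PCIe figures; the code itself is just illustrative):

    # Raw per-lane signaling rates by PCIe generation (GT/s).
    # The rate roughly doubles each generation, which tightens
    # signal-integrity margins and makes long extenders harder
    # to keep in spec.
    rates_gt_s = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0,
                  "4.0": 16.0, "5.0": 32.0, "6.0": 64.0}
    for gen, rate in rates_gt_s.items():
        print(f"PCIe {gen}: {rate:>4} GT/s per lane")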
This is the ASUS RTX 4090 ROG STRIX. Air cooled, no waterblock. That is a mini-ITX form factor motherboard, which is why the card looks so comically large by comparison.
This is one of the physically smallest 4090s at launch. Its confirmed weight is 2325 g, or 5 ⅛ lbs. Just the card, not the card in its packaging.
And if we are to reform our computer chassis anyway, we could move the PSU to straddle the motherboard and the video card, and even have the VRM inside. High-amperage "comb" connectors exist, and VRM daughtercard motherboards have existed: https://c1.neweggimages.com/NeweggImage/productimage/13-131-... Change the form factor so two 120mm fans fit, one in front, one in the back.
So you would have three 120mm front-to-back tunnels: one for the video card, one for the PSU, one for the CPU.
I'm using a pretty heavy modern GPU (ASRock OC Formula 6900XT) in a Cooler Master HAF XB with that layout, and sagging and torquing are not much of a concern. The worst part is just fitting it in, since there's like 2mm between the front plate and the board; you have to remove the fans so you can angle the card enough to fit.
I also suspect that if we went back to the '80s-style design of "a full-length card is XXX millimetres long, and we'll provide little rails at the far end of the case to capture the far end of a card that length", it would help too, but that would be hard to ensure with today's exotic heatsink designs and power plug clearances.
The GPU is the main part of the machine by cost, weight, complexity, and power consumption. And it's not even close.
New NVIDIA cards will draw 450W, and even if you lower that in settings, the whole package still needs to be manufactured to support those 450W at every level.
I seriously wonder what games are doing that requires that extra power. I, personally, would much rather slightly lower settings (or expect devs to take at least some basic steps to optimize their games) than have a 450W behemoth living inside my computer.
Meaning, the 40xx series will be an obvious pass for me. My 1080 Ti is actually still great in almost all respects.
The 4090 Ti looks fantastic too. Totally worth the risk of fire.
No, that was just a rumour that was floating around. The 4080 16GB model is 340W TGP, the 12GB is 285W TGP out of the box. The 3080 (10GB) was 320W TGP, as a comparison point.
But yeah, I have to say peak power consumption has to be regulated so companies compete on efficiency, not raw power.
Basically we need to move away from slots!
But seriously, 450 watts in this day and age of increasing energy prices? Crazy.
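For a rough sense of scale, a sketch of the monthly running cost; both the daily hours and the tariff below are assumptions, not data:

    # Hypothetical monthly cost of a 450 W card under load.
    # hours_per_day and eur_per_kwh are assumed values.
    watts = 450
    hours_per_day = 4       # assumed daily gaming time
    eur_per_kwh = 0.40      # assumed tariff; varies widely across Europe

    kwh_month = watts / 1000 * hours_per_day * 30
    print(f"{kwh_month:.0f} kWh/month -> ~{kwh_month * eur_per_kwh:.2f} EUR/month")

That works out to about 54 kWh and roughly 22 EUR a month under those assumptions.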
It's a PC case orthodoxy issue, really. People want plugs at the back of the box, which dictates how the GPU must sit in the case, and disagreement on GPU sizing means no brackets. Solve these two issues and life gets a lot better.
Or, solve it like the SFF case guys solved this problem: use a PCIe extender cable to allow the GPU to be mounted wherever you like.
It does feel like GPUs are getting rather ridiculous and pushing the limits. The PCI-SIG seems to keep having to issue new editions of the PCIe add-in-card electromechanical spec authorising higher power draws, and sometimes it seems like the limits imposed by the standard are just ignored.
Indeed, the PCI standards were for adding interfaces to personal desktop computers, after all. They seem ill-suited to hosting 450W cooperative sub-computers.
A more common approach to heavy expansion cards is a VME-style chassis design. Off the top of my head, the NeXTcube, NEC C-Bus, and Eurocard use this arrangement in the consumer space, and many blade server enclosures, carrier-grade routers, and military digital equipment employ similar designs as well.
They're simply getting too big, power-hungry, and hot to keep colocated in the case.
Sounds reasonable; we used to have separate CPU and FPU sockets in the distant past.
However, isn't it nice that every expansion card, incl. GPU cards, uses the same unified connector standard and can be replaced in place by something very different? Wouldn't switching to an MXM form factor, introducing an extra kind of slot, be a step back? Didn't we already ditch a dedicated GPU card slot (AGP) in favour of unification once?
I have long thought the bitcoin miners were onto something, with PCIe risers galore. In my head I know PCB is cheap and connectors/cables aren't, but it always seemed so tempting anyway: very small boards with CPU, memory (or on-chip memory), and VRM, and then just pipes to peripherals and the network (and with specs like CXL 3.0, we could kind of be getting both at once).
Nobody agrees on anything anymore. We need standards like those created 30 years ago. But everyone wants to do their own thing without realizing that the reason for the intercompatibility is that people got over themselves and worked together.
What we could do is have AIO cooling like CPUs do, more affordable than the current solutions or the "water block" designs from the brands.
Or have more sandwich designs like Sliger's, which place a mini-ITX board and a PCIe card in parallel, connected via a ribbon cable. I don't think there is any claimed performance loss due to the cable.
I'd guess that if excessive stress on the PCIe slot were a problem, it'd be solved by combining a good 2-3 slot mount on the back side with enough aluminium and plastic to hold the rest.
Seriously though, I imagine it's only a matter of time before these engineering decisions are themselves handed off to machines.
2. The real problem, in my opinion, is out-of-control power consumption. Get these cards back down to 200 W. That's already twice the power delivery of a mainstream CPU socket.
I was also thinking of a case that can handle the cooling of a deshrouded GPU. Perhaps we should delegate the cooling options to the user without them having to void the warranty.
Hopefully the energy prices in Europe will force chip makers to work on that. I mean, only if they want to sell something over here.
Or provide a dedicated GPU slot with a riser-like mount that allows the GPU to be mounted separately from the actual board (something like what laptop owners do with external GPUs)?
This way the GPU could be any size and might have cooling on either side, or an external solution.
Or maybe FPGAs onboard for customization per use case. I hope that's why AMD merged with Xilinx.