There are only a few use cases where the new kind of SSR (with hydration) is worth it. An example is e-commerce sites, where you want the customer to see all your great products as soon as possible and then, some seconds later, to be able to interact with your site fluently to buy something. These kinds of scenarios, paired with low-end devices, are the only proper use case. And I deliberately say "only proper" because SSR comes with downsides as well:

- SSR is always slower than static sites

- SSR is often slower than CSR - especially when using a small and fast framework like Solid, Svelte or Vue3

- When rendering on the edge and using a central API (e.g. for the DB), SSR is always slower than CSR when interacting with the site, because of the extra hop from your central API to the edge to the browser, instead of from the central API directly to the browser

- SSR is always more complex and therefore more fragile; however, this complexity is handled by the frameworks and the cloud providers

- SSR is always more expensive - especially at scale.

- Traditional SSR with HTML templates will scale your site much better, simply because traditional languages like Go, Java, or C# scale much better than Node.js rendering the site via JS

We owe the technology of the "new" SSR, and genius ideas like islands, to many very smart and passionate people.

Overall, this article is not balanced at all. It points out only some potential benefits. It is simply a marketing post for their product.

I much prefer the htmx way of server-side rendering parts of the page dynamically. It's also totally server-side agnostic, so we can use whatever we prefer. Clojure in our case.


If you don't do server-side rendering, don't you (almost) automatically get a set of nice REST endpoints that return JSON/XML/etc.? I get that the abstraction might be nice for security, but at least for corporate intranet applications, a nicely structured, secured (e.g. OData) web API that you query for client-side rendering has the added benefit that it can be invoked programmatically via REST by other authorized parties. Obviously you want the standard DDoS and security protections, but this fact alone has turned me off server-side rendering. Isn't it also nice from a computation-cost standpoint to let the client do the rendering? I suppose UX could suffer, and for external-facing apps this is likely of the utmost importance. Happy to be educated if I'm unaware of something else.
I may be misunderstanding this, but isomorphic SSR sounds an awful lot like the JavaServer Faces concept of a server-side DOM that is streamed to the client. JSF was largely dropped by Java developers because it ended up scaling poorly, which makes sense, since it violates one of the main constraints that Roy Fielding proposed for the web's RESTful architecture: statelessness.

An alternative approach is to retain the statelessness of the first option they outline (I don't understand why it isn't "true" SSR): use normal, server-rendered HTML, but improve the experience by using htmx (which I made) so that you don't need to do a full page refresh.

This keeps things simple and stateless, so there's no server-side replication of UI state, but it improves the user experience. Compared to what I understand of the isomorphic solution, this approach appears much simpler. And, since your server side doesn't need to be isomorphic, you can use whatever language you'd like to produce the HTML.
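For readers unfamiliar with htmx, a minimal sketch of the pattern (the endpoint path and function names here are hypothetical, not part of htmx; the `hx-*` attributes are real htmx attributes): the server renders plain HTML fragments, and the attributes declare which endpoint to call and which element to swap.

```javascript
// Hypothetical fragment renderer: the server returns only the piece of
// HTML that changed, and htmx swaps it into the live page.
function renderCartCount(count) {
  return `<span id="cart-count">${count} items in cart</span>`;
}

// The full page is ordinary server-rendered HTML. The hx-* attributes
// make the button POST to /cart/add and replace the #cart-count element
// with the returned fragment: no full page refresh, and no UI state
// replicated on the server.
function renderPage(count) {
  return `<html><body>
  ${renderCartCount(count)}
  <button hx-post="/cart/add" hx-target="#cart-count" hx-swap="outerHTML">
    Add to cart
  </button>
  <script src="https://unpkg.com/htmx.org"></script>
</body></html>`;
}
```

The server stays free to produce those fragments in any language; the browser side never needs a JSON API or a client-side store.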

> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.

If you can reasonably cache the response, SSR wins on first page load, no question. On a first-page dynamic render, "it depends"; it can go either way between SPA and SSR. On second-page render, a well-built SPA just wins.

"It depends..." Server CPU cores are slower than consumer cores of similar eras. They run in energy-efficient bands because of data center power concerns. They are long-lived and therefore often old. They are often segmented and on shared infrastructure. And if the server is experiencing load (seldom an issue on the client's system), you have that to deal with as well. Your latency for generating said page can easily be multi-second, as I've experienced on many a dynamic site.

Using the client's system as a rendering system can reduce your overall cloud compute requirements allowing you to scale more easily and cheaply. The user's system can be made more responsive by not shipping any additional full page markup for a navigation and minimizing latency by avoiding network calls where reasonable.

On dynamic pages, do you compress on the fly? That can increase latency for the response. If not, page weight suffers compared to statically compressed assets, such as a JS bundle that can be highly compressed well ahead of time at brotli -11. I never use brotli -11 for in-flight compression; brotli -0 and gzip -1 instead.

This is for well built systems. Crap SPAs will be crap, just as crap SSR will similarly be crap. I think crap SPAs smell worse to most - so there's that.

> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.

If you use features the end client doesn't support, regardless of where you generate the markup, then it won't work. Both servers and clients can be very feature aware. caniuse is your friend. This is not a rule you can generalize.
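Illustrating the point: wherever the markup was generated, the safe pattern is to probe the browser for a capability before relying on it. A sketch under stated assumptions (the function names are mine; `IntersectionObserver` is a real browser API):

```javascript
// Probe for a capability instead of assuming it; this works the same no
// matter where the surrounding HTML was generated.
function supportsIntersectionObserver(globalObj = globalThis) {
  return 'IntersectionObserver' in globalObj;
}

// Use the feature when present, degrade gracefully when it is not.
function observeOrFallback(el, onVisible, globalObj = globalThis) {
  if (supportsIntersectionObserver(globalObj)) {
    const io = new globalObj.IntersectionObserver((entries) => {
      entries.forEach((entry) => entry.isIntersecting && onVisible(entry.target));
    });
    io.observe(el);
  } else {
    // No observer available: treat the element as visible right away.
    onVisible(el);
  }
}
```

Server-side, the equivalent is sniffing `User-Agent` or `Accept` headers before emitting a feature-dependent page, which is the "servers can be feature aware" half of the comment.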

> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.

Meh. Debatable. What's hard is mixing the two. Where is your state and how do you manage it?

If you're primarily a backend engineer the backend will feel more natural. If you're primarily a front end engineer the SPA will feel more natural.

What on earth does this tangled mess "simplify"? I've been at it with web dev for over 20 years, and I was left scratching my head. The trouble with going down the JS rabbit hole is that you lose perspective on simplicity. **d help us if this becomes the new hotness. Oh wait, it already is. Oh well, until next month ...
Shoutout to Sveltekit which does SSR and client-side navigation by default! https://kit.svelte.dev/
Most SPAs are totally unnecessary and a big waste of time.

People are worrying about the speed of SSR when they should be worrying about the developer time on the client which is several orders of magnitude more.

I think people have fallen in love so much with complex Javascript frameworks that they’ve forgotten how easy it is to get to an MVP with SSR.

Speed is important.

Speed of development is even more important for businesses in this era who have to get to revenue faster.

And that’s why things like Phoenix LiveView and its counterparts in other languages are catching on so quickly.

People are getting fatigued with the latest flavor of the month JS framework.

But what do I know… I’m just a lowly “developer” working for crumbs. Never even finished a CS degree. Sigh.

Mindshare will go towards rendering javascript components on the server since that's another complex problem that's fun to solve. That's good! We shouldn't have to give up the productivity gains of tools like React to improve time-to-interactive and other performance stats.

That said... I'm not going to pretend it's an urgent need and will wait for these tools to mature.

Idk, I'm not much into web development, but isn't SSR much more expensive? I just move all the processing/calculation to my server side instead of the client's. This means that for a business with many clients, I have to pay for the work that the clients themselves could have done instead...
I've been using svelte for years and love it.

I've been using sveltekit for years and still struggle with it.

With sveltekit, I'm never really sure when to use prerender. I'm never sure how and where my code will run if I switch to another adapter.

With pure svelte, my most ergonomic way of working is using a database like pocketbase or hasura 100% client side with my JavaScript, so the server is a static web server. It's got real time subscriptions, graphql so my client code resembles the shape of my server side data, and a great authentication story that isn't confusing middleware.

I'm sure SSR is better for performance, but it always seems to require the use of tricky code that never works like I expect it to.

Am I missing something?

Does anyone know the stats about what's being served?

For things like blogs, server-side HTML with a sprinkle of client-side Javascript (or WASM) makes a lot of sense.

But for applications, where you're doing, you know, work and stuff, in-browser HTML makes a lot more sense.

The thing is, as a developer, most of the work is in applications. (It's not like we need to keep writing new blog engines all the time.) Thus, even though most actual usage of a browser might be server-side HTML, most of our development time will be spent in in-browser HTML.

I love Deno, I hope it succeeds, but I'm disappointed to see them so confidently publishing a broad assertion like this that's very weakly argued, and heavily biased towards promoting their own position in the stack

> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.

Excuse my bluntness, but this is complete nonsense. Browser incompatibility in 2023 is mostly limited, in my experience, to 1) dark corners of CSS behavior, and 2) newer, high-power features like WebRTC. #1 is going to be the same regardless of where your HTML is rendered, and if you're using #2, server-side rendering probably isn't an option for what you're trying to do anyway. I can confidently say browser compatibility has roughly zero effect on core app logic or HTML generation today.

> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.

This, again, is totally hand-wavy and mostly nonsensical. It's entirely dependent on what kind of app, what kind of features/logic it has, etc. Server-rendering certain apps can definitely be simpler than client-rendering them! And the opposite can just as easily be true.

> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.

This is only partly true, and it's really the only partly-valid point. Modern statically-rendered front-ends will show you the initial content very quickly, and then will update quickly as you navigate, but there is a JS loading + hydration delay between seeing the landing page content and being able to interact with it at the beginning. You certainly don't need "a desktop...with a wired internet connection" for that part of the experience to be good, but I'm sure it's less than ideal for people with limited bandwidth. It's something that can be optimized and minimized in various ways (splitting code to make the landing page bundle smaller, reducing the number of nested components that need to be hydrated, etc), but it's a recurring challenge for sure.

The tech being demonstrated here is interesting, but I wish they'd let it stand on its own instead of trying to make sweeping statements about the next "tock" of the web trend. As the senior dev trope goes, the answer to nearly everything is "it depends". It shows immaturity or bias to proclaim that the future is a single thing.

The problem is more fundamental and it’s this; web apps are broken and have been from the beginning. They were created to solve the problems related to software distribution and updates but these problems were solved in the early 2000s when broadband became prevalent and it was no longer painful to download large software packages.

The early straw man was that downloading apps was too daunting a task for users, and yet somehow they managed to download and update email clients, word processors, iTunes, and, ironically, browsers themselves.

Since I began my career in 1995 I’ve seen application architecture pundits proclaim the correct way to develop applications go from thick client native to thin client native to thin client web to thick client web back to thick client native (iOS & Android) and now, according to the article back to thin client web. I’ll submit the best model is thick client native using the “web” as a communication backbone for networked features.

And the cycle starts anew...
The article seems to contradict itself:

The first example shows the server rendering a Handlebars template and then sending that as a response to the client; it's then stated that this "isn't true SSR".

Then the same thing is done without a template language, using strings instead, and this is supposedly some different kind of SSR altogether: the "true SSR".

Which also seems to insinuate that only JS/TS are capable of SSR?

  Server-side rendering! Well, kinda. While it is rendered on the server, this is non-interactive.

  This client.js file is available to both the server and the client — it is the isomorphic JavaScript we need for true SSR. We’re using the render function within the server to render the HTML initially, but then we're also using render within the client to render updates.
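A minimal sketch of what the article seems to mean (the names here are illustrative, not the article's actual code): one pure render function, imported by both the server entry point and the client bundle.

```javascript
// The isomorphic piece: pure state -> HTML string, so the same function
// can run in Node on the server and in the browser after hydration.
function render(state) {
  return `<button id="counter">Clicked ${state.count} times</button>`;
}

// Server sketch: ship the pre-rendered HTML plus the client bundle.
function handleRequest() {
  const body = render({ count: 0 });
  return `<html><body>${body}<script type="module" src="/client.js"></script></body></html>`;
}

// Client sketch: the very same render() produces each update on click.
function onClick(state, el) {
  state.count += 1;
  el.outerHTML = render(state);
}
```

The "isomorphic" claim reduces to this: because `render` depends only on its input, neither side needs its own templating path, which is also why the article treats the Handlebars version (server-only template) as a different category.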
My first contact with HTTP and HTML forms was an immediate throwback to my mainframe experience. The browser was like a supermodern 3270 terminal, getting screens from the server, sending data back, getting another screen and so on.

There were a number of products that allowed a web app to maintain a 3270 connection to the mainframe and render the terminal screens as an HTML form. Fascinating stuff.

It’s also totally fine to not SSR if the benefits aren’t important to you.
This is why I'm really excited about htmx [1]. No need to write isomorphic javascript at all. You can still use server side templates but have interactive web pages.

[1] https://htmx.org/

There are two orthogonal things in the current trend. This SSR buzz is not actually selling server-side rendering; they are selling "one language to rule them all" (which they give the dumb name "isomorphic").

Therefore, they are not solving all the problems of client-server + best-UX constraints. Basically, the problems we have had all this time come from:

  1) There's a long physical distance between client and server.
  2) Resources and their authorization have to be on the server.
  3) There's a need for fast interaction, so some copy of the data and some optimistic logic need to be on the client.
The "isomorphic" reusable code doesn't solve the [latency + chatty + consistent data] vs. [fast interaction + bloated client + inconsistent data] trade-off. At this point I don't know why they think that is innovation.
IME the big gains nearly always come from how data is surfaced and cached from the storage layer.

You may get some nominal gains from sending less JS or having the server render the html, but IME the vast majority of apps have much bigger wins to be had further down the stack.

> JavaScript got good

"A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete." - would like to have a word with you...

If you’re interested in server side rendered multiplayer 3D worlds, my project webspaces[1] lets you render HTML and get a 3D World.

[1] https://webspaces.space

The issue I have with SSR is that it offloads processing power onto the server. That means I have to pay more as the host instead of relying on user's browser to handle the compute "for free".
"A fully server side rendered version with isomorphic JS and a shared data model"

Seriously, how did we get here? Having dealt with JSP, JSF (MyFaces, Trinidad, ADF...), ASP.NET, ASP MVC, Angular, plain HTML/CSS/JS, how is it possible for front-end web dev to be such a mess? So much complexity, for what? How many have to deal with millions of visits per day? Or even per month? It seems to me history is quickly forgotten and new generations know very little about the past.

Keep it simple, please.

You can do HTML templating directly in JS using tagged template literals[1], and a library to deal with problems like XSS attacks[2].

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[2]: https://workers.tools/html/
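For the curious, a toy version of the idea (the linked library is far more thorough; this sketch only escapes interpolated strings):

```javascript
// Escape the characters that matter for HTML injection.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Tagged template: literal chunks pass through untouched, while every
// interpolated value is escaped, which blocks the classic XSS vector.
function html(strings, ...values) {
  return strings.reduce(
    (out, chunk, i) =>
      out + chunk + (i < values.length ? escapeHtml(values[i]) : ''),
    ''
  );
}

const userInput = '<script>alert(1)</script>';
console.log(html`<p>Hello, ${userInput}</p>`);
// -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The trick is that a tag function sees the literal parts and the interpolated values separately, so it can treat the former as trusted markup and the latter as untrusted data.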

WASM-side rendering. :)
I feel behind here; my company doesn't do any SSR, and there's basically no way we could port it over. Definitely missing out on some key concepts.

I could build some SSR apps on my own but like, the real hard stuff about development comes >1 year in when you start running into those deep complexity issues. Can't really simulate that in a tiny pet project.

I think the biggest issue with page size is not due to client-side rendering, but rather due to bundling and the idea that you need to download the same minified Lodash on each and every page. Why we can't just use public CDNs is beyond my understanding.

I really like client side apps. They are so much more responsive. The only problem is with bundle sizes.

Note: Remix is not built on React, as the article states.

Of all the new ways of thinking, Remix is the leader in not promoting a specific paid delivery platform. So in that sense I can see why people might want to mitigate its advantages by trying to tie it to React.

(having said that, Shopify might tie it down more, but I see no evidence so far)

The future of the web is most web developers losing their jobs for failing to be good stewards of their platform.
If you are looking for server-side rendering that enables rich, react-like user experiences, check out LiveViewJS (http://liveviewjs.com). Supports both Deno and Node runtimes.

(Full disclosure: am author)

Is SSR still much better for SEO?
Anyone remember There? It's still around[0], but I don't know how active it is.

I'm pretty sure it relied on in-browser XSLT to do a lot of its magic.

[0] https://there.com

If web sites have to be so dynamic, I much prefer that the computation involved is done on their machine rather than on mine. I simply don't trust random web sites enough to let them run code on my machines.
The future is not to stick to a single religion but to apply one's brains when architecting a solution, as it all depends on multiple factors and there are no silver bullets in this universe.

    But all that code is necessary to make our sites work the way we want.
It's necessary to make shitty websites that are impossible to load on slow/flaky connections.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.

I don't see why we should assume the server is faster at processing the input data into HTML than the client is. It could very easily be that the client device does this faster. SSR additionally prevents progressive rendering, since you must generate all the HTML ahead of time, which can make pages feel slower. Also, HTML+JS data size can be larger than data+JS size (and you /may/ need the data anyway for the SSR version to do hydration). Of course all this varies, which is why it's silly to claim a general principle.

The claim that server-side rendering is faster than client-side rendering is interesting. How come one machine (the server) is better than 1000 machines (the clients)?
It is tragicomic that the slogan "We should do things on the server instead of the browser" is becoming popular again as browser technologies evolve.
It depends on what you're building. Choose the best tool for the job. Every time. Don't just default to your favorite.
I use Remix for this. Remix is 4 things:

A compiler

A server-side HTTP handler

A server framework

A browser framework

You can actually use Remix as just a server-side framework without using any browser JavaScript at all.

Haha. Lolnope.jpg
"But all that code is necessary to make our sites work the way we want."

Yes, but, on the other hand, is it?

“Server side rendering” is such a terrible term. The server isn’t doing rendering, the browser is. The server is sending a complete well-formed DOM for the client to render. Well done, modern devs! A plain .html file does that.

I really hope some of the heavy front-end frameworks die a death, some common sense prevails, and we get a lighter, faster loading, more responsive web. I can dream.

And the future beyond that will be client-side rendering. In the beginning everything was rendered on the mainframe; then CICS allowed partial screen updates and even dynamic green screen design. Then the early web where everything was server which made the job of web indexing much easier. Then we moved back to rich client apps -- applets, flash, eventually SPAs -- with no way for search engines to easily index things. A best of all worlds scenario is a rich UI that only needs to make API calls to update the display, keeping performance fast and content flicker-free (and the server-side API could have an agreed upon standard for being indexed -- or submitting updates for indexing -- to search engines).

There is no truly perfect scheme, only ways in which we think we can improve on the status quo by swinging the pendulum back and forth.

In theory, the "modern" frontend frameworks could be useful for a subset of applications. In practice, they are wildly overused, largely (IMHO) because front-end developers have forgotten how to build without them.

If I gave this as an example, people would say I'm being unfair to the front-end folks. But since Deno posted it, I think it's fair to say that it's overkill to use a front-end framework like React (mentioned as a comparator in TFA) to implement add-to-cart functionality on an e-commerce site. And that for users with slow browsers, slow/spotty Internet, etc., an architecture that uses a heavy front-end framework produces a worse overall experience than what most e-commerce sites were able to do in 1999.

Edit: IMHO all of this is an artifact of mobile taking a front seat to the Web. So we end up with less-than-optimal Web experiences due to overuse of front-end JS everywhere; otherwise shops would have to build separate backends for mobile and Web. This, because an optimal Web backend tends to produce display-ready HTML instead of JSON for a browser-based client application to prepare for display. Directly reusing a mobile backend for Web browsers is suboptimal for most sites.

"The future of the Web is what suits our business model" /s

But in all seriousness, the web has websites, it has apps, it has games. Pick a tool that's appropriate for the job and forget about what is the past/present/future.

The thing that went wrong with front-end frameworks, IMHO, was that instead of delivering what was promised (updating UI elements with NO NEED to contact the server at all, only posting back when something needed persisting), they became an excuse for every action on the front end to call an API or three. So we've ended up with over-complicated apps that, instead of not relying on the backend, rely on it more than ever.

Any little glitch, slowdown, or unavailability affects you not only once on page load but potentially with every single interaction. To make it worse, a lot of backend interactions are not made interactively or synchronously, where the user might expect to wait a little while; they are made in the background, causing all manner of edge cases that make apps anywhere from very slow to virtually unusable.

I guess it's that old adage that people will make use of whatever you offer them, even if they go too far.

I'm always amused to hear web types speak of grinding HTML, CSS, and JavaScript down to somewhat simpler HTML, CSS, and JavaScript as "rendering". Rendering, to graphics people, is when you make pixels.
It's obviously nonsense. The lowest-latency cache and state storage is client-side. You can piss around with multi-region deployments and SSR to minimize latency, but that's just placing a lot of regional caches near your users. The nearest place is in their actual browser -> offline-first is the future
I think it's a bit ridiculous to call it "server-side rendering". It's called HTTP.
Can someone explain this to me: Deno is becoming such a confusing project. It was initially a Node.js alternative, but now it seems to be trying to compete with Next.js?
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.

But the page is loaded later, because you have to wait for the server to perform this work. There is no reduction in total work; there is probably an absolute increase, because some logic is duplicated. If there is a speed improvement, it is because the server has more clock cycles available than the client, but that is not always true.

> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.

Huh? It takes less code to build a string in a datacenter than it does in a browser?

The "modern" state of the web. I miss old-school HTML with little to no JavaScript. It is all Java in the browser all over again. Or Flash. Same old, same old. Very few websites need any of this stuff. It is just a bunch of junior devs wishing they worked for FB, I guess, ergo them guzzling React like there is no tomorrow.
I really don’t like server side rendering.

I like my react apps to be static files served from a plain HTML server.

Client-side rendering needs to rebrand as local-first. Then the cycle will start anew.
Next step: server rendered PNGs. Browser not required.
npm install common-sense
No. Because HTML is not the future of the Web.
>ctrl+f “State”

>ctrl+f “Effect”

>0 results

I only skimmed through the post, but seems like it’s ignoring the main reasons why CSR is needed?

I'm fairly certain the move back to the server has more to do with the development of heavy AI data modelling that can't be offloaded to the client.

I don't believe the tech itself is anything but a sign of where the utilities are moving.

"Server-side rendering" is destined to rule the future purely because of control. In the future, consumer devices will be simplified, much more streamlined, and completely locked down. They will be used for the single purpose of displaying streamed, pre-packaged, pre-laid-out content from servers.