An alternative approach is to retain the statelessness of the first option they outline (I don't understand why it isn't "true" SSR): use normal, server-rendered HTML, but improve the experience by using htmx (which I made) so that you don't need to do a full page refresh.
This keeps things simple and stateless (no server-side replication of UI state) while still improving the user experience. Compared with the isomorphic solution, it appears much simpler. And since your server side doesn't need to be isomorphic, you can use whatever language you'd like to produce the HTML.
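A minimal sketch of that pattern, with a hypothetical cart endpoint: the server returns a plain HTML fragment, and htmx (via its real `hx-get`/`hx-trigger`/`hx-target` attributes) swaps it into the page without a full refresh.

```javascript
// Server side: a template function that renders an HTML *fragment*,
// not JSON. Any language could produce this string.
function renderRows(items) {
  return items
    .map((i) => `<tr><td>${i.name}</td><td>${i.qty}</td></tr>`)
    .join("");
}

// The page itself carries the behavior declaratively; no custom JS:
//   <tbody hx-get="/cart/rows" hx-trigger="load" hx-target="this">
// htmx issues the GET and swaps the returned fragment into the tbody.

console.log(renderRows([{ name: "Widget", qty: 2 }]));
```

The endpoint path and field names are made up for illustration; the point is that the server stays the single owner of UI state and only ships fragments.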
If you can reasonably cache the response, SSR wins on first page load, no question. For a dynamic first-page render, "it depends"; either the SPA or SSR can win. On the second page render, a well-built SPA just wins.
"it depends....." Server CPU cores are slower than consumer cores of similar eras. They run in energy-efficient bands because of data center power concerns. They are long-lived and therefore often old. They are often segmented and on shared infrastructure. And if the server is experiencing load (seldom an issue on the client's system), you have that to deal with as well. Your latency for generating said page can easily be multi-second, as I've experienced on many a dynamic site.
Using the client's system as a rendering system can reduce your overall cloud compute requirements allowing you to scale more easily and cheaply. The user's system can be made more responsive by not shipping any additional full page markup for a navigation and minimizing latency by avoiding network calls where reasonable.
On dynamic pages, do you compress on the fly? That can increase response latency. If not, page weight suffers compared to static compressed assets, such as a JS bundle that can be highly compressed well ahead of time at brotli -11. I never use brotli -11 for in-flight compression; brotli -0 or gzip -1 at most.
This is for well built systems. Crap SPAs will be crap, just as crap SSR will similarly be crap. I think crap SPAs smell worse to most - so there's that.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
If you use features the end client doesn't support, regardless of where you generate the markup, then it won't work. Both servers and clients can be very feature aware. caniuse is your friend. This is not a rule you can generalize.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Meh. Debatable. What's hard is mixing the two. Where is your state and how do you manage it?
If you're primarily a backend engineer the backend will feel more natural. If you're primarily a front end engineer the SPA will feel more natural.
People are worrying about the speed of SSR when they should be worrying about developer time on the client, which is several orders of magnitude more costly.
I think people have fallen in love so much with complex Javascript frameworks that they’ve forgotten how easy it is to get to an MVP with SSR.
Speed is important.
Speed of development is even more important for businesses in this era who have to get to revenue faster.
And that’s why things like Phoenix LiveView and its counterparts in other languages are catching on so quickly.
People are getting fatigued with the latest flavor of the month JS framework.
But what do I know… I’m just a lowly “developer” working for crumbs. Never even finished a CS degree. Sigh.
That said... I'm not going to pretend it's an urgent need and will wait for these tools to mature.
I've been using sveltekit for years and still struggle with it.
With sveltekit, I'm never really sure when to use prerender. I'm never sure how and where my code will run if I switch to another adapter.
With pure svelte, my most ergonomic way of working is using a database like pocketbase or hasura 100% client side with my JavaScript, so the server is a static web server. It's got real time subscriptions, graphql so my client code resembles the shape of my server side data, and a great authentication story that isn't confusing middleware.
I'm sure SSR is better for performance, but it always seems to require the use of tricky code that never works like I expect it to.
Am I missing something?
For things like blogs, server-side HTML with a sprinkle of client-side Javascript (or WASM) makes a lot of sense.
But for applications, where you're doing, you know, work and stuff, in-browser HTML makes a lot more sense.
The thing is, as a developer, most of the work is in applications. (It's not like we need to keep writing new blog engines all the time.) Thus, even though most actual usage of a browser might be server-side HTML, most of our development time will be spent in in-browser HTML.
> Compatibility is higher with server-side rendering because, again, the HTML is generated on the server, so it is not dependent on the end browser.
Excuse my bluntness, but this is complete nonsense. Browser incompatibility in 2023 is mostly limited, in my experience, to 1) dark corners of CSS behavior, and 2) newer, high-power features like WebRTC. #1 is going to be the same regardless of where your HTML is rendered, and if you're using #2, server-side rendering probably isn't an option for what you're trying to do anyway. I can confidently say browser compatibility has roughly zero effect on core app logic or HTML generation today.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
This, again, is totally hand-wavy and mostly nonsensical. It's entirely dependent on what kind of app, what kind of features/logic it has, etc. Server-rendering certain apps can definitely be simpler than client-rendering them! And the opposite can just as easily be true.
> Performance is higher with the server because the HTML is already generated and ready to be displayed when the page is loaded.
This is only partly true, and it's really the only partly-valid point. Modern statically-rendered front-ends will show you the initial content very quickly, and then will update quickly as you navigate, but there is a JS loading + hydration delay between seeing the landing page content and being able to interact with it at the beginning. You certainly don't need "a desktop...with a wired internet connection" for that part of the experience to be good, but I'm sure it's less than ideal for people with limited bandwidth. It's something that can be optimized and minimized in various ways (splitting code to make the landing page bundle smaller, reducing the number of nested components that need to be hydrated, etc), but it's a recurring challenge for sure.
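One of the mitigations mentioned above, splitting code so the landing-page bundle stays small, boils down to a dynamic `import()`; the module name here is a placeholder for this sketch.

```javascript
// The landing page ships only this small handler; the heavy editor
// module is fetched over the network the first time it's needed,
// so it never delays the initial load or hydration.
async function openEditor() {
  // "./editor.js" is a hypothetical module name for illustration.
  const { mountEditor } = await import("./editor.js");
  mountEditor(document.getElementById("app"));
}
```

Bundlers split on the `import()` boundary automatically, which is why this is the usual first lever for shrinking the JS that stands between first paint and interactivity.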
The tech being demonstrated here is interesting, but I wish they'd let it stand on its own instead of trying to make sweeping statements about the next "tock" of the web trend. As the senior dev trope goes, the answer to nearly everything is "it depends". It shows immaturity or bias to proclaim that the future is a single thing.
The early straw man was that downloading apps was too daunting a task for users, and yet somehow they managed to download and update email clients, word processors, iTunes, and, ironically, browsers themselves.
Since I began my career in 1995 I’ve seen application architecture pundits proclaim the correct way to develop applications go from thick client native to thin client native to thin client web to thick client web back to thick client native (iOS & Android) and now, according to the article back to thin client web. I’ll submit the best model is thick client native using the “web” as a communication backbone for networked features.
The first example shows the server rendering a handlebars template and then sending that as a response to the client, and it's then stated that this "isn't true SSR".
Then the same thing is done without a template language, using strings instead, and this is somehow a different kind of SSR altogether, the "true SSR".
Which also seems to insinuate that only JS/TS are capable of SSR?
Server-side rendering! Well, kinda. While it is rendered on the server, this is non-interactive.
This client.js file is available to both the server and the client — it is the isomorphic JavaScript we need for true SSR. We’re using the render function within the server to render the HTML initially, but then we're also using render within the client to render updates.
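The shape being described can be sketched as a pure state-to-HTML function with no DOM or server dependencies, so either side can call it. The names here are illustrative, not the article's actual client.js.

```javascript
// Pure function of state -> HTML string; it imports nothing from the
// DOM or from Node, so the same module works on server and client.
function render(state) {
  return `<button id="counter">Clicked ${state.count} times</button>`;
}

// Server side: wrap the initial render in a full page response.
function page(state) {
  return `<!doctype html><html><body><div id="app">${render(state)}</div></body></html>`;
}

// Client side (sketch): reuse render() for updates after an event, e.g.
//   document.getElementById("app").innerHTML = render({ count: count + 1 });

console.log(page({ count: 0 }));
```

Keeping `render` free of environment-specific APIs is the whole trick: the "isomorphic" part is just that one function being importable from both entry points.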
There were a number of products that allowed a web app to maintain a 3270 connection to the mainframe and render the terminal screens as an HTML form. Fascinating stuff.
Therefore, they are not solving all the problems of the client-server + best-UX constraints. Basically, the problems we have had all this time come from:
1) There's a long physical distance between client and server
2) Resources and their authorization have to be on the server
3) There's the need for fast interaction, so some copy of the data and optimistic logic needs to be on the client
The "isomorphic" reusable code doesn't solve [latency + chatty + consistent data] VS [fast interaction + bloat client + inconsistent data] trade-off. At this point I don't know why they think that is innovation.
You may get some nominal gains from sending less JS or having the server render the html, but IME the vast majority of apps have much bigger wins to be had further down the stack.
"A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete." - would like to have a word with you...
Seriously, how did we get there? Having dealt with jsp, jsf (myfaces, trinidad, adf...), asp.net, asp mvc, angular, plain html/css/js, how is it possible for FE web dev to be such a mess? So much complexity, for what? How many have to deal with millions of visits per day? Or even per month? It seems to me history is quickly forgotten and new generations know very little about the past.
Keep it simple, please.
I could build some SSR apps on my own but like, the real hard stuff about development comes >1 year in when you start running into those deep complexity issues. Can't really simulate that in a tiny pet project.
I really like client side apps. They are so much more responsive. The only problem is with bundle sizes.
Of all the new ways of thinking, Remix is the leader in not promoting a specific paid delivery platform. So in that sense I can see why people might want to diminish its advantages by trying to tie it to React.
(having said that, Shopify might tie it down more, but I see no evidence so far)
(Full disclosure: am author)
I'm pretty sure it relied on in-browser XSLT to do a lot of its magic.
But all that code is necessary to make our sites work the way we want.
It's necessary to make shitty websites that are impossible to load on slow/flaky connections.
I don't see why we should assume the server is faster at processing the input data into HTML than the client is. It could very easily be that the client device does this faster. SSR additionally prevents progressive rendering, since you must generate all the HTML ahead of time, which can make pages feel slower. Also, HTML+JS size can be larger than data+JS size (and you /may/ need the data anyway for the SSR version to do hydration). Of course all this varies, which is why it's silly to claim a general principle.
A compiler
A server-side HTTP handler
A server framework
A browser framework
You can actually use Remix as just a server-side framework without using any browser JavaScript at all.
Yes, but, on the other hand, is it?
I really hope some of the heavy front-end frameworks die a death, some common sense prevails, and we get a lighter, faster loading, more responsive web. I can dream.
There is no truly perfect scheme, only ways in which we think we can improve on the status quo by swinging the pendulum back and forth.
If I gave this as an example, people would say I'm being unfair to the front-end folks. But since Deno posted it, I think it's fair to say that it's overkill to use a front-end framework like React (mentioned as a comparator in TFA) to implement add-to-cart functionality on an e-commerce site. And that for users with slow browsers, slow/spotty Internet, etc., an architecture that uses a heavy front-end framework produces a worse overall experience than what most e-commerce sites were able to do in 1999.
Edit: IMHO all of this is an artifact of mobile taking a front seat to the Web. So we end up with less-than-optimal Web experiences due to overuse of front-end JS everywhere; otherwise shops would have to build separate backends for mobile and Web. This, because an optimal Web backend tends to produce display-ready HTML instead of JSON for a browser-based client application to prepare for display. Directly reusing a mobile backend for Web browsers is suboptimal for most sites.
But in all seriousness, the web has websites, it has apps, it has games. Pick a tool that's appropriate for the job and forget about what is the past/present/future.
Any little glitch, slowdown, or unavailability affects you not only once on page load but potentially with every single interaction. To make it worse, a lot of backend interactions are not made interactively or synchronously, where the user might expect to wait a little while; they are made in the background, causing all manner of edge cases that make apps anywhere from very slow to virtually unusable.
I guess it's that old adage that people will make use of whatever you offer them, even if they go too far.
but the page is loaded later because you have to wait for the server to perform this work. There is no reduction in total work, probably an absolute increase because some logic is duplicated. If there is a speed improvement it is because the server has more clock cycles available than the client, but this is not always true.
> Complexity is lower because the server does most of the work of generating the HTML so can often be implemented with a simpler and smaller codebase.
Huh? It takes less code to build a string in a datacenter than it does in a browser?
I like my react apps to be static files served from a plain HTML server.
>ctrl+f “Effect”
>0 results
I only skimmed through the post, but seems like it’s ignoring the main reasons why CSR is needed?
I don't believe the tech itself is anything but a sign of where the utilities are moving.
- SSR is always slower than static sites
- SSR is often slower than CSR - especially when using a small and fast framework like Solid, Svelte or Vue3
- When rendering on the edge and using a central API (e.g. for the DB), SSR is always slower when interacting with the site than CSR, because of the extra hop from your central API to the edge to the browser, instead of from the central API directly to the browser
- SSR is always more complex and therefore more fragile, however this complexity is handled by the frameworks and the cloud providers
- SSR is always more expensive - especially at scale.
- Traditional SSR with HTML templates will scale your site much better, simply because traditional languages like Go, Java, or C# scale much better than NodeJS rendering the site via JS
We owe the technology of the "new" SSR, and genius stuff like islands, to many very smart and passionate people.
Overall, this article is not balanced at all. It points out only some potential benefits. It is simply a marketing post for their product.