Try working in banking for 20 years, stuck behind at least one layer of Citrix, living in Citrix inception. There's latency on every keystroke; eventually your brain starts adding latency that isn't there, compensating for a life lived wearing Citrix latency goggles.
I would have agreed until I started working at Google. Also, you should completely avoid having Remote Desktop and instead use ssh + an editor that works with remote files.

At Google we have a custom fork of VSCode running in the browser, and builds can either be distributed or run on my Linux VM to take advantage of the build cache.

I liked it so much I started doing a similar setup for small side projects. Just boot up the Cloud Console on GCP and start coding.

Advantages are:

- Accessible from anywhere (I use my pc, my laptop, etc. The env is always the same)

- More compute (I can attach more CPU + more RAM flexibly)

- Less friction to start (minimum environment setup, most tools are preinstalled)

- Datacenter Network speeds + Artifacts cached (installing dependencies is fast)


Disadvantages are:

- Network dependence

There are some adjustments that need to be made to your workflow. And for some applications you are dependent on having the correct tooling. However, my personal prediction is most companies will move to this type of development workflow as the tooling improves.

Nothing beats working directly on a fast but quiet workstation sitting next to my table.

At least for me, the productivity gains associated with quicker builds, IDE resyncs (CLion, looking at you) or just being able to have email, chat, calendar and an active video conference running without making the system crawl to a halt or long latency spikes are huge. 3-4k for a machine that will likely last 2-3 years is nothing in comparison.

Yep. Having worked in these environments, this solution is almost always sold to companies that are working around shitty, hard-to-reproduce software stacks, staff trust issues, scale-up difficulties and checkbox security cargo cults. The usual outcome is increased staff turnover, increased cost and decreased productivity, most of which they still have trouble rationalising or acknowledging.

You don't want to work for those companies.

It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable. The reality is usually some horrible AWS, Azure or Citrix portalised solution however.

I don't get the use case. Why would you even consider using a cloud desktop?

Even a very low-spec laptop is going to run a simple graphical desktop environment like Xfce just fine. Watching a youtube video, browsing the web and even video conferencing can be handled with any new-ish laptop.

And in reality, you still want a reliable laptop with decent keyboard, long battery life, good display and so on. So you won't end up on a low spec machine to begin with.

For computation heavy dev stuff a simple SSH access is good enough. It can be a very smooth experience with a locally running VS Code or something.

With cloud gaming you can stream 4K games at 60FPS, with clarity and quality for fast moving objects.

Why does remote desktop still shit itself when I drag an MS Word window with a few pictures around?

I know a tier 1 financial company that offers its $100k/year developers a slow VM, from which you have to log into another VM. The VMs are dual-core with 8GB. I watch in horror as each keypress takes more than a second. The amount of lost productivity is in the millions.

Shadow offers remote desktop environment with GPU acceleration where you can run games and it feels responsive and decent.

The cost of a new fully spec'd workstation plus a high-performance laptop is tiny compared to the salary of a good software developer. Management has a warped sense of how to save money and as a result grossly hurts morale and productivity where it matters the most.
Another one is an unholy confluence of corporate compliance bullshit.

Connecting to the remote machine needs to go through corporate SSO (in a browser) that then starts the native remote client. Policy requires MFA, strong, frequently changed passwords and Windows Hello on the laptop. Policy also requires screen lock after 5 minutes. For some reason policy also requires disabling copy-paste to remote machines.

The end result is that the remote session gets locked out every 5 minutes whenever you do something in the laptop's browser instead. To log back in, one either has to enter a long, complicated password (you can't paste it from the password manager!) or use an MFA code. Hardware tokens don't work either, due to unreliable USB forwarding.

Having to jump through those hoops once or twice a day would be tolerable, dozens of times is grating.

I assume the policies are written for all the worst-case scenarios where people remote in from private, shared devices or use a laptop in a public place. But they add a lot of unnecessary friction when a laptop is used from a lockable home office.

Have generally been skeptical of 'cloud desktop' but... I had a friend who got into sales for a cloud desktop provider about 6-7 years ago. There was only one really strong use case, and she sold to that niche. A specific CAD/modeling/rendering vertical had software they used, and it was a CPU bear. Running that 'remotely' in the cloud was much faster than anything they could have locally. Managing all the licensing and security/perms there was an added benefit, but she was also mostly selling to smaller firms that didn't have full-time staff to handle that.

For the market she was in, at the time, there was a moderately clear win. I watched a pitch, and the speed diff was real. The productivity gains in many folks saving an hour or two in rendering time was easily worth the... I can't remember - $200/month/seat maybe? Outside of those types of use cases, the benefits were harder to justify. And... in 2020+... unsure if local desktop CPU caught up enough that the benefits were lower.

I'm glad he brought up accessibility. My company has been working on a remote desktop product [1] that addresses this issue, particularly for blind users. The connection carries audio output from the remote machine, and the keyboard input handling code on both sides is designed to work with the quirks of screen readers, so running a screen reader on the remote machine works well. Beyond that, if the remote machine isn't running a screen reader, there's a way to get speech output on the controlling machine using the open-source NVDA screen reader for Windows, without requiring audio output on the remote machine. We still need to work on Braille output and screen magnification, and we've only started thinking about alternate input methods, so this doesn't cover everything, but the problems are solvable.

[1]: I don't normally self-promote commercial products like this, but this is relevant to the article, and I thought people might find it interesting.

It is good that these "water is wet" statements get written down so we can point humidity-skeptical people to them from time to time.

The deeper problem is the sad state of affairs of distributed computing for the end user:

* Application instances expect to be the only ones modifying the files that underlie the document being edited. Most of them simply bail out when the files get modified by another application.

* The default is "one device = one (local) filesystem" which is the exact opposite to what everyone needs: "one person = one (distributed) filesystem."

* The case for local-only filesystems only addresses corner cases, or deficient distributed file systems that fail to uphold basic security constraints (such as "my data is only in my devices" or "no SPOF" for my data).

* Whatever gets pushed to the cloud becomes strongly dependent on devices and vendors. Users end up handcuffed to a specific hardware (iCloud) or software (Android) if they want to have any chance of interacting with their own documents from their own devices.

* What we need is not cloud desktops, or cloud storage. We need local desktops with a decent distributed filesystem, and vendor agnostic access to that filesystem from all our devices.

Using a third-party cloud only ensures that all work scenarios enjoy the same lowest-common denominator.

My preference is to select one of the work contexts (e.g., the office) as primary and to put a workstation there, then remote to that workstation from secondary contexts (e.g., at home). This configuration gives me first-class computing where I need it most, in the primary context, and a decent second-class option when I need to work in other contexts.

I happily worked with this configuration for more than a decade and found it served all of my local and remote needs.

This also requires access to a stable fast network at all times. Local internet goes down, AWS/AZURE/GCP goes down and I'm stuck. With my laptop setup, I can work anywhere anytime, as long as I have power. I'll need network access at some point to commit code or pick up changed libraries, but that can be managed.
I've done this for a long while, and I always come back to the only two viable competitors in the space (that don't require enterprise licensing).

Nomachine and ThinLinc.

Everything else is fine for the occasional remote desktop administration, but they all have a combination of bad video quality, no audio, no keyboard shortcut capture or bad scaling options.

It's about two things: 1) latency, 2) cost.

For the latency: 100ms is where the threshold is. Above 100ms you really start to notice the latency, and it becomes annoying to the point that you even start making mistakes while typing. Let's take an example: the average latency from my home laptop to a server in the AWS cloud is 20ms. If I add a GUI remoting solution (such as Xpra, which is pretty good latency-wise), the latency increases to 60-80ms (and this is just for remoting a single GUI app like VSCode, not the whole desktop). Now add the latency of the app itself, which for VSCode is about 50ms. The total becomes 110-130ms. So latency-wise, the experience of working with a cloud desktop is noticeably worse than my local developer laptop.
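The budget above adds up quickly; a quick sketch using this commenter's own estimates (the remoting overhead of 40-60ms is inferred from the 60-80ms total minus the 20ms network round trip):

```python
# Latency budget for a remoted editor session, in milliseconds.
# All figures are the rough estimates from the comment above.
network_rtt = 20                  # home laptop -> AWS server
remoting_overhead = (40, 60)      # added by an Xpra-style GUI remoting layer
app_latency = 50                  # VSCode's own keystroke-to-render time

total = tuple(network_rtt + o + app_latency for o in remoting_overhead)
print(total)  # (110, 130) -- past the ~100 ms threshold where typing feels laggy
```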

For the cost: my developer laptop costs about $1500. 16 cores, 32 GB of RAM, 1TB SSD. The equivalent cloud desktop setup would probably be around $400 a month. So in just 4 months the cost of the cloud desktop will exceed the cost of the laptop.

In my opinion, cloud desktops only make sense when you're not sure how much capacity you need. Is 4 or 8 cores enough for your work? 16 or 64GB of RAM? The cloud desktop setup is flexible: if you need more, you allocate more. But once the capacity is known, you should switch to your own hardware to significantly reduce the cost and actually improve the experience.

> Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud.

I'd mention SSH port forwarding in this section. For webdev you'll want to run your server on the remote host and use the local web browser. SSH port forwarding works great for this. I recently used this setup to get some extra RAM for a short project that could only be run as a collection of memory hungry microservices. This way I could get the whole thing running on one box; I spun down the server once the project was done.
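For anyone who hasn't used it, the forwarding setup is a one-liner (the hostname here is a placeholder; the port is whatever your dev server listens on):

```shell
# Dev server listens on port 3000 on the remote box; forward it locally.
# "devbox" stands in for your remote host (or an ~/.ssh/config alias).
ssh -L 3000:localhost:3000 me@devbox

# Now browse to http://localhost:3000 on the local machine; traffic is
# tunneled over SSH to the server running on the remote host.
```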

Cloud desktops were fine a few years ago in Windows 7 times when the desktop was 2D and was remoted as GDI instructions.

Windows 10 made everything 3D, so now not having a GPU assigned to a virtual machine means everything is first rendered into a bitmap and then sent over wire as a movie. This causes additional delay, JPEG-like artifacts and instability.

I work at a company that gives you a Chromebook and a cloud desktop. Works great for me.

VS Code for development, SSH instead of a remote desktop; most operations can be done via VS Code anyway. Chrome Remote Desktop is slow, I agree, but I never use it anyway.

Give me a powerful workstation at work. Keep the laptop, I'm not doing extra work from home. Of course I have an even more tricked out system at home, but that's for playing games and working on my side-hustle.
All these dev-machines-in-a-cloud sound wonderful from a security, compliance, and onboarding perspective. What is often forgotten is that this is now a service you’re operating and a massive SPOF. If it goes down (and it will), productivity drops to exactly zero. It’s like sending your devs home until it’s fixed.
I don’t see Xpra mentioned in comments yet.

Works pretty well for us since remote windows and local are seamlessly integrated and managed by local WM. Solves the multi monitor issues. Definitely lower latency than vnc or rdp or nomachine from our testing. Windows, Mac and Linux clients all work well.

The later versions of RDP are miles ahead of any other remote desktop protocol in my experience. I used to use it for gaming years ago (from a Windows machine to a Mac); it really isn't that bad if latency and bandwidth are acceptable.

As others say, it is very hard sometimes to detect what is local and what isn't with RDP. Everything seems to just work, even using the Mac client.

Compare this with everything else I've used and it's a real janky JPEG compression mess.

As a software dev, I like using whatever as a local env but SSH'ing into something more powerful to perform any heavy lifting. There's also tools like VSCode Remote that make it almost like developing locally. That said, the most taxing tools that I use regularly are things like video conferencing and "collaboration" tools like Miro. These things are hell.
Using something like Coder to provision workspaces and VS Code, SSH, and Wireguard/Tailscale are an absolute dream.

I hope for many in these comments to experience this (especially Coder V2 which is far more flexible in provisioning workspaces) instead of the RDP non-sense that others need to suffer through.

While Parsec is good (and NVIDIA GameStream + Moonlight seems better in my experience), it really isn't good enough to use instead.

Also, honestly, with the advent of things like Tailscale I think it'll become more and more common to have a desktop + a nice, but weaker/cheaper device (Chromebook, MBA, etc.) that you can securely access at your desk or remotely if you want. It's what I personally do with my Desktop and M1 MBA right now.

Also want to add that dedicated servers aren't that expensive by comparison and you can get a lot of value paying like $100/month and using that remotely.

Funny that I should be reading this on such a "cloud" desktop.

I have a Raspberry Pi[1] running Remmina and accessing a number of different machines via RDP - A personal Fedora 36 desktop running in an LXC container[2], a Windows VM on Azure, and various other similar environments. I am typing this on that Pi, through that Fedora session, pushed to a 2560x1080 display. Typing and typical browsing is almost indistinguishable from "being there". Coding too. It is only noticeable (on the Pi) when large parts of the screen update and the little thing has to chug along, but I'd rather have this completely silent setup than an Intel NUC.

For work, I do have a spanking-new work-issued laptop, but even so the fans spin up whenever I launch anything of consequence, so I am still logging in to a virtual desktop environment for everything up to (and including) audio calls (RDP has pretty decent audio support these days). Video and display sharing I still do locally, mostly because it's usual to switch environments during a call, but I have full multiple-display support, and the connection can handle my 5K2K and 4K displays just fine.

I've been doing this for a decade or so, ever since I could use Citrix over GPRS. The user experience is fantastic - even at that time I could literally close my session in the evening, take a morning flight to Milan, pop open my laptop and continue where I had left off, over a piddling sub-64Kbps link.

With the right setup (and experience), latency issues mostly vanish. These days you can push a full 3D rendered desktop over DSL with either optimized RDP or game streaming, so the real constraints typically come from IT restrictions and people wanting to micromanage their environments.

That said, I also use VS Code Remote, and it works great for me as well over SSH. But it's just easier to spin up a VM/container and do that from my iPad :)

[1] - [2] -

Edit: Remembered I shot this video of it running over Wi-Fi, unoptimized:

Disclaimer: I'm part of the IsardVDI project (

With RDP, in our experience, the latency issue is nonexistent. We've even successfully run workstations editing 4K video with zero issues. Yes, for those extreme cases you need a GPU (and the only option is NVIDIA GRID, which means really expensive cards with licensing on top of that), but for the most part, if the hypervisor has a good CPU, it's more than enough; we have clients that even use RDP through the browser.

You don't even need to have a really good internet connection. Also, SPICE is really good too, with really good desktop integration

VSCode remote SSH is a decent compromise: the interface is "local", the stuff is remote. Hardly noticeable that it isn't truly local. The second the entire thing is piped through a VNC/RDP-type setup, it becomes shyte to use.
I have never got this set up, but I think a hybrid approach could be quite good; something like pytest-xdist remote SSH builders perhaps. (Maybe you can rsync the diffs in the background though, instead of when you hit the "test" button, to speed things up?)

Running a local-first setup is nice for things like iteratively step-debugging your latest changes on a single test case, but being able to push the diffs to a fast remote build server (elastic cluster?) to speed up the "run all the tests" action would be nice.
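A minimal version of that push-the-diffs idea, assuming a host alias `buildbox` in `~/.ssh/config` and pytest-xdist installed on the remote end:

```shell
# Push only the changed files to the build server (fast after the first run).
rsync -az --delete --exclude .git ./ buildbox:~/project/

# Run the full suite remotely, fanned out across all cores via pytest-xdist.
ssh buildbox 'cd ~/project && pytest -n auto'
```

Running the rsync from a file watcher in the background would get you the "sync before I even hit the test button" behavior the comment suggests.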

I think you can do this with Clang remote builders too. I hear Bazel has this.

Is this something that anyone has experience with? It seems like it could be the best of both worlds, from a compute performance standpoint.

(As others have noted, the other big benefit of a cloud desktop is that you don't have to spend time setting up your dev environment, which is constant toil for new developers; GitHub mentioned this as a big contributor of friction in

nobody thinks that. not in their sane mind. except for the "you vill ovn nothing, und we vill be happy" technocrats, but their reasons for that are their own.
Cloud desktops are not a cure-all, but they do have their uses. I have "cloud" desktops hosted in my home (on a NUC server) and also on cheap VPS instances, depending on my particular needs, whether that be an isolated environment for dangerous work, or just having a portable desktop that I can connect to from whatever device I have at hand, continuing from where I last left off.

GPU-intensive desktops are pretty much a no-go, but Mate desktop works beautifully and does what a desktop should: Manage my environment and get the hell out of my way.

Browsing on the remote desktop is anything but smooth, but it's good enough for development. I'm not going to stream video on it, though.

There's a tiny bit of keystroke latency, but not enough to matter IMO. I'm using Chrome remote desktop so YMMV. Running Steam on a cloud desktop is possible, but it's an exercise in madness.

I do it all using LXD to keep things relatively distro agnostic. I've posted the Python script I use here:
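The script itself isn't reproduced here, but the core LXD workflow it presumably wraps is only a few commands (container name and image are examples):

```shell
# Create a fresh, distro-agnostic container for the desktop environment.
lxc launch ubuntu:22.04 dev

# Get an interactive shell inside it as the default user.
lxc exec dev -- sudo -u ubuntu -i

# Snapshot before risky changes, and roll back if something breaks.
lxc snapshot dev before-upgrade
lxc restore dev before-upgrade
```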

I've been using this in one form or another for a very long time. Starting when we had Windows desktops with bad Linux support, and back then you couldn't run both OS's on the same machine at the same time.

So I stuck a workstation with Linux on it in a closet. Fired up VNC and I could hit it from home, my cubicle, the road, wherever. It's evolved over the years as things became faster and more secure. It became a co-located server, then a VPS, and now it's a shared setup on a beefy server.

It maintains its state no matter where I go. I can open up two or more sessions for two or more monitors. But it's more useful to just surf the web or open PDFs or whatever on the local machine. Copy and paste is pretty seamless these days. And wherever it's located, it has a much better network connection than I do.

You still have a latency problem with large files (CD .iso in the old days, a 10GB package these days). I don't play games so I don't really know how that goes. But for development it works great, as well as just a general workstation.

I am studying whether cloud IDEs are a better option than giving laptops to the team (from a cost and management PoV). Any experiences with cloud IDEs for teams?
A combination of AWS Workspaces and Appstream solutions worked fine for us, with a few hundred developers and data scientists, spread all over the world, as FTEs (for the first category of products) and contractors on various continents (for the latter), including some M&As we conducted and continue to undergo, which require(d) very short time to bringing new teams up to speed.
It’d be nice if the author mentioned what cloud desktops and how good people think they are. Don’t tell me it’s worse than I think without specifying what you think I think!

I know numerous gaming companies that swear by Parsec. Except the author doesn’t appear to be talking about Parsec tier cloud desktops. But then again it’s not clear what the author is talking about

I think I found a good remote dev environment recently - basically "just use Mutagen to sync files to a server close to you". That keeps the source of truth (code files) local but outsources the compute.

I started working at ~4mo ago and quickly realized I could setup a nice remote dev environment since there are regions close to me (super low latency).

I set up a VM to run SSH to sync/forward ports. It turns off when I'm not using it (after a configured timeout, it sniffs for SSH connections and exits if there are none, which stops the VM), and uses Mutagen to sync files. The source of truth is my local files, so my local IDEs work great (they're working against the local file system).

I wrapped it up in a little tool I'm calling Vessel, which talks to Fly's "Machines API"
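The Mutagen half of that setup looks roughly like this (host and paths are placeholders; Vessel automates the VM lifecycle on top of it):

```shell
# Continuous two-way sync: local files stay the source of truth,
# the remote VM gets a live mirror to build and run against.
mutagen sync create --name=dev ./ me@devvm:~/project

# Forward the app's port back so the local browser can hit the remote server.
ssh -L 8080:localhost:8080 me@devvm

# Check sync status, and tear the session down when done.
mutagen sync list
mutagen sync terminate dev
```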

I know this sounds gross, but I wonder if Chromebooks could benefit from being able to trigger local browser actions in the remote browser. So you can click "open in new tab" in your remote and it opens in your local browser. Bonus points if the remote session is also in a tab, so it just switches you away and you can come back easily.
> Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud.

JetBrains has Gateway[1] and VSCode has remote Dev tools[2]

Gateway's performance is very dependent on the network connectivity. If you have bad ping, you're going to curse the world seeing the input delay.

VSCode seems to be caching the files locally and updating them separately. With bad internet, you still get the native input lag.



VSCode Remote SSH’d into a cloud desktop is superior to local development. With the hardware being remote you can afford to get something that’s ultra fast and can run 24/7, and it still feels just as snappy and responsive as running it locally. I think this will be the standard for development within 3 years.
Remote code development tools like Gitpod and Codespaces may be a good answer to the issues from the post. They sit in the middle between purely using SSH and full remote desktop experience, so feel like using your local machine while giving you access to computing power from the cloud.
Privacy is also terrible in cloud desktops (and cloud apps) but many (most?) developers do not see it as a concern. It's too late to push for privacy in cloud software - especially when it's developers who are the strongest advocates for user-tracking in cloud desktops.
My company did this to us. I'm thinking of quitting.
I don’t use a graphical cloud setup, but I do use a vps for all of my development.

It is nice having a single cloud based machine that is accessible via ssh on any of my physical devices.

I have a dev environment closer to production, ssl and publicly accessible urls for testing services and sharing to compare designs and UI changes, etc.

Fantastic setup for anyone that likes a vim+tmux workflow. Only a single environment to keep up to date and configured. Daily snapshots and backups.

Keeps the cost of other hardware down as well... I can work effectively on cheap hardware, which certainly offsets the server costs. I did a cost rundown before, and it came to something like 15 years of my VPS plus cheap hardware to equal the cost of a single entry-level MacBook Pro.

Been working remotely for 6 years. My vm is in New York and I live in rural France. I connect over a 4g connection. I have a 24" monitor.

The stuff is seamless. I mean it. I hate lagging, I hate stuff that doesn't work. But this does work. Really well.

Multiple times I've caught coworkers starting a gotomeeting/zoom meeting in their VDI and they can't figure out why they can't use their LOCAL USB conference microphone/speaker array.

Yet another use case where VDI falls down.

I've had this in the last few jobs. One reason for it is that to move desks in NYC requires a union employee to move the computer which ends up costing a few thousand dollars. With terminals and a cloud PC you avoid this.
I've had good results with offloading work to a cloud based server where I ran my docker containers during development. Just CLI though, I left the graphical part on the client side. As mentioned by others, VSCode with remote SSH was a blessing for such a scenario.
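Modern Docker makes this offload pattern nearly transparent via contexts (the hostname below is a placeholder):

```shell
# Point the local docker CLI at the cloud server over SSH.
docker context create cloudbox --docker "host=ssh://me@cloudbox.example.com"
docker context use cloudbox

# Builds and containers now run on the remote box; the CLI feels local.
docker compose up -d
```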

There are benefits; I could scale up my workstation even for an hour or so, with more memory or a fancier cpu. And it was easier to share my work with other (remote) colleagues; because they were at another timezone I could leave the server up for them when needed, while I shut my laptop down for the day and see their feedback the next day.

For the past 5 years I've used a macbook to ssh in to a Linux VM as my development environment. It was great for the work that I was doing (distributed web systems).

Now that I've changed jobs and I'm developing a desktop app again, I'm back on a physical Linux box under my desk, and I really miss the old experience. It was great to never care about a mac change tanking your productivity (i.e. I was totally unperturbed by the m1 switch), and it was also great not to have to run a Linux desktop environment, which it turns out is still a big pain.

I thought about buying a desktop again after almost 30 years, bury it in some room at my home and use my laptop as remote desktop. I work in different rooms as seasons go by (so no air conditioning), sometimes even morning vs afternoon. Not a common use case I guess but that's exactly the point of the article.

I think I'll keep using my laptop as primary and only machine because many of the scenarios in the article also apply to me and what if I have to visit a customer? It never happened again since the pandemic but it could.

Looks like I'm in the minority here. I use a VMWare Horizon VM as my primary desktop environment, and I love it! Working from home, VMWare Horizon has much better performance than using a VPN for many things including X11 forwarding to Linux computers on-site, RDP to Windows computers on-site, and accessing CIFS/SMB file shares. And when I do go on-site, I can connect to it from any computer, either using a kiosk, or a colleague's computer if we are collaborating, or any conference room computer.
It's the latency that really gets me. Having all my text be delayed, and having moving windows around be painfully slow, is annoying. Windows does a lot of tricks to hide desktop latency, and that's with the CPUs right there and GPUs accelerating it. I have also run into a lack of dedicated IO that resulted in bad performance, combined with subpar CPU performance, because cloud CPUs tend to be low-frequency and a lot of desktop software still depends on single-thread performance.
This is the terrible norm in the semiconductor industry. VNCs everywhere. Almost all EDA GUIs are only designed for Red Hat or CentOS so IT makes everyone connect to a datacenter and start an X server. Having interviewed at/worked at these companies, I know that Intel, NVIDIA, and Apple all do this.

It sucks. Your productivity plummets because each keystroke lags and it makes you lose your train of thought. When there's an outage, no one can do any work at all.

Perhaps others have had better experience, but whenever I've used a remote GUI VM (e.g. through the Guacamole web browser interface), the latency is painfully noticeable when typing. So much so that I've usually dropped down to a local terminal connecting to the vm by ssh and doing everything in Emacs (or Vim, if that's how you're bent), or having JetBrains edit stuff remotely. But then you're back to needing a decent workstation.
Not a bad article, but the intro deeply confuses the issue. If you want fast builds for your org, use a build server farm that is much faster than the fastest workstation, and do small incremental builds locally or on your cloud desktop. A local UX machine (a 15" Mac not built in the Dark Ages of 2016-2020, plus a desk monitor, good for 5+ years) with FUSE-mounted remote storage and a build farm is a great combo.
Not sure who thinks these are good. Perhaps for basic programming or in slow moving orgs but outside these two use cases cloud desktops are horrible.
Reading his description of the issues, it sure sounds like they're as good as I'd think. I'm more optimistic about the gitpod-style remote development environments, honestly. But that post describes exactly what I expected from a "cloud desktop." I wouldn't expect someone to get much done if I inflicted that kind of work environment on them.
Not a fan of cloud desktops or any "virtualized" desktop. Experience is typically subpar and the worst part is that it requires a stable internet connection. What's the point of that when many of us are working remote and mobile?
We underestimate how latency matters for user experience.

Like the famous "100ms = 1% of sales" at Amazon

I’ve been developing for the last few years on a cloud VM and love it. Latency has never been a serious issue for me.

It lets me use Linux as my daily driver, I have a highly capable machine with large L2/L3 cache, a lot of RAM, many CPUs — and it’s totally portable.

Not to mention that the internet speeds on the cloud VM are incredible — easily 1gbps+ wherever I am in the world. This is a selling point folks forget.

The combination of speed (hardware and network) and always being on (can leave compilation tasks etc. running) is very nice.

I’ve used Citrix and the modern Chrome Remote Desktop experience is generally an order of magnitude better.

Working on a bus with wifi, typically fine. Even working from Asia with the VM in California, great.

The only issue I have with cloud is that for personal it’s expensive. Google compute VMs are a lot more than equivalent workstations per year for similar hardware afaict.

That’s the question I’m curious how folks work around.

> But even a fast laptop is slower than a decent workstation, and if your developers want a local build environment they're probably going to want a decent workstation. They'll want a fast (and expensive) laptop as well, though, because they're not going to carry their workstation home with them and obviously you expect them to be able to work from home.

What kind of builds require more than one of the new MacBook Pros?

And what about using cloud development environments instead of a fully remote desktop? I haven’t properly tested GitHub Codespaces, but it seems to me that a lightweight laptop (ie cheapest MB Air, if Apple) with MDM plus codespaces can work really well.

Sure, not everyone is using these tools, but to state that devs in general need both a beefy workstation and a laptop sounds a bit outlandish to me.

It’s hard to take this seriously when it doesn’t explore the why. Skimping on MacBooks is a pretty niche use case for cloud development.

Imo security drives this decision, and being able to work remotely is the benefit.

> aren’t as good as you think

Who thought they were any good in the first place?

Is there a cloud desktop product where I can select a development environment and instantly RDP into it pre-configured and ready to compile code with libraries installed etc?
IMHO remote desktops (cloud or DaaS) are a terrible idea with even more terrible executions. I never tried one that could measure up to a local environment.
> aren't as good as you'd think

I always find titles like this clickbaity, because the author has no idea what anyone in the audience would think.

I wouldn't think cloud desktops would be good at all, so if they're not as good as I'd think... they must be pretty darn bad.
This is where I wish Plan 9 had caught on. It lets you run remote graphical apps more seamlessly than any remote desktop I know of today.
Citrix HDX 3D Pro on a GPU-enabled VM works pretty well in my experience. Even driving multiple screens on the end-user system.
I thought they'd be pretty fuckin bad, you're telling me they're worse than that?
Whoa, I thought they are absolutely pointless. Are they even worse?
I feel like not much has improved when working with TeamViewer or AnyDesk over using X11 forwarding (plain or with compression) or VNC derivatives.

Reminds me of an old document by Stuart Cheshire.
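The "plain and with compression" variants above map to two OpenSSH options. A minimal sketch of an `~/.ssh/config` entry, with a hypothetical host name and address:

```
# ~/.ssh/config: hypothetical host, sketch of X11 forwarding with compression
Host devvm
    HostName devvm.example.com  # illustrative address
    ForwardX11 yes              # same effect as ssh -X
    Compression yes             # same effect as ssh -C
```

With that in place, `ssh devvm xterm` draws the remote xterm on the local display; `ForwardX11Trusted yes` (the equivalent of `ssh -Y`) is needed for applications that require trusted X11 access.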

I find it hard to imagine they're less good than I think.
Makes me miss SunRay... it just worked
Different workloads require different tools.

I work with Rust and TypeScript projects - MBP M1 Pro with 32 GB RAM is 110% enough.

Few personal notes:

- The workstation model means working at a good physical desktop setup: a large main monitor, possibly additional monitors, a good keyboard, perhaps a thumb trackball instead of a mouse, etc. Sure, the same can potentially happen with a docked laptop, but...

- ...the laptop model means being able to move. If we WFH there aren't many reasons to move, except when moving means relocating elsewhere. In practice, MOST laptop users don't use their computers to be operational on the go but as desktop replacements in suboptimal, improvised setups, while those who actually need a good laptop can hardly find one.

The real issue is a widespread lack of knowledge about how remote work should be done. For more than a decade we've seen big PR campaigns about nomadic workers on unstable, limited mobile networks (PCMCIA/3G modem cards, then USB HSPA sticks, portable hotspots, mobile tethering) working in a bar (potentially hostile, distracting surroundings) or on a beach (a climate hostile to mobile devices). That model, which pushed from big notebooks to netbooks to ultrabooks and so on, obviously fails miserably because it can't really work: we can operate in such a setup for a limited time on limited tasks, but nothing more.

Now many are starting to admit that the solution is going back to the classic desk. BUT that means every home needs a room with a proper setup, which is an effort for both the worker and the company, and a thing most reject.

Substantially: it's about time to say things clearly. The modern web is CRAP, made to sell services instead of empowering users through IT, and it comes with an admission that the classic commercial desktop model is also CRAP. We badly need real desktops with document-based UIs, working locally and syncing only the data that needs to be synced, just as humans do, for anyone doing a job with a significant degree of independence within a single company.

To do that remotely we need a proper room per worker, well equipped, rented to the employer at a fair rate, with clear contracts establishing that work paradigm.

Trying to keep up the crappy surveillance-capitalism business, which translates to "rent someone else's services, own nothing" in the vein of the famous WEF 2030 video, is a very expensive absurdity. Trying to keep up hybrid crap to avoid real capex is another.

Those who are eligible to work from home and want that paradigm should offer a proper room for it, and companies should be clear: "you are hired for remote work AND REMOTE ONLY; travel to meet in person must not happen more than once in a while," with the timeframe varying by company and by the workers' geographical distance.

Let's do that and we all benefit, companies and workers together, in a win-win move where the only losers will be GAFAM and friends (from Citrix to Bomgar). Avoid it and we'll keep an inconsistent, liquid situation that can be trivially described by Sgt. Hartman's famous Full Metal Jacket definition of the most common amphibious thing...

Not sure if this is about the Cloud rather than thin clients, but at a school, licensing killed any thin-client attempts.

It was just too hard.

I never got to the stage they hint at: if a tiny number of things won't work, does that mean the whole idea fails?

If you only have Word/Excel/internet etc. in one lab, inevitably someone will ask for X, Y, Z. Is the money saved on computers and maintenance, plus the benefits of instant installs/upgrades, worth more or less than the facility costs and the teacher/student time lost to that lab running at 90% usability?

But licensing stopped the experiment.

Who uses a physical workstation anymore? (Besides artists.)

OP is suggesting a complete remote desktop for office applications, like video conferencing. Ironically, for all the crap X takes, it could actually pull this off; more so for Wayland. I'm surprised there isn't a graphics client/server model out there as good as X after ~40 years. But I think the problem is too much layering: putting a VM in the cloud as an office desktop requires far too much bandwidth and adds far too much latency when everything goes through a remote desktop instead of a client/server graphics mode. The tools are there; they just aren't being used because they're missing a security layer.

I haven't used a physical workstation at my desk since 1999, and I was a designer/architect at Intel for decades. Everything was done via VNC. Back then it was called "distributed computing" with AFS, so it was a "proto-cloud". And before that I used a Sun workstation to telnet into beefier computers. This was AIX/SunOS/Linux based.

Granted, I was not videoconferencing, but there's no reason why the desktop needs to be rendered in the VM (including the video stream!!!), then encoded, then decoded, then rendered again. It's just dumb.