At Google we have a custom fork of VS Code running in the browser, and builds can either be distributed or run on my Linux VM to take advantage of the build cache.
I liked it so much I started doing a similar setup for small side projects. Just boot up the Cloud Console on GCP and start coding.
- Accessible from anywhere (I use my PC, my laptop, etc.; the environment is always the same)
- More compute (I can attach more CPU + more RAM flexibly)
- Less friction to start (minimum environment setup, most tools are preinstalled)
- Datacenter network speeds + cached artifacts (installing dependencies is fast)
- Network dependence (the main downside)
There are some adjustments you need to make to your workflow, and for some applications you depend on having the right tooling. Still, my personal prediction is that most companies will move to this type of development workflow as the tooling improves.
At least for me, the productivity gains from quicker builds, faster IDE resyncs (CLion, looking at you), or just being able to keep email, chat, calendar and an active video conference running without the system crawling to a halt or hitting long latency spikes are huge. $3-4k for a machine that will likely last 2-3 years is nothing in comparison.
You don't want to work for those companies.
It's notably different if you have a cloud VM running Linux and you're connecting to it with VS Code or something over SSH. That's borderline acceptable. The reality, however, is usually some horrible AWS, Azure or Citrix portalised solution.
Even a very low-spec laptop is going to run a simple graphical desktop environment like Xfce just fine. Watching a YouTube video, browsing the web and even video conferencing can be handled by any newish laptop.
And in reality, you still want a reliable laptop with a decent keyboard, long battery life, a good display and so on. So you won't end up on a low-spec machine to begin with.
For computation-heavy dev work, simple SSH access is good enough. It can be a very smooth experience with a locally running VS Code or something similar.
Why does remote desktop still shit itself when I drag an MS Word window with a few pictures in it around the screen?
I know a tier-1 financial company that offers its $100k/year developers a slow VM, from which you have to log into yet another VM. The VMs are dual-core with 8GB of RAM. I watch in horror as each keypress takes more than a second. The amount of lost productivity runs into the millions.
Shadow offers a remote desktop environment with GPU acceleration where you can run games, and it feels responsive and decent.
Connecting to the remote machine needs to go through corporate SSO (in a browser) that then starts the native remote client. Policy requires MFA, strong, frequently changed passwords and Windows Hello on the laptop. Policy also requires screen lock after 5 minutes. For some reason policy also requires disabling copy-paste to remote machines.
The end result is that the remote session gets locked every 5 minutes whenever you do something in the laptop's browser instead. To log back in, one either has to enter a long, complicated password (you can't paste it from the password manager!) or use an MFA code. Hardware tokens don't work either, due to unreliable USB forwarding.
Having to jump through those hoops once or twice a day would be tolerable, dozens of times is grating.
I assume the policies are written for all the worst-case scenarios where people remote in from private, shared devices or use a laptop in a public place. But they add a lot of unnecessary friction when a laptop is used from a lockable home office.
For the market she was in, at the time, there was a moderately clear win. I watched a pitch, and the speed difference was real. The productivity gain of many folks saving an hour or two of rendering time was easily worth the... I can't remember - $200/month/seat maybe? Outside of those types of use cases, the benefits were harder to justify. And... in 2020+... I'm unsure whether local desktop CPUs caught up enough that the benefits are smaller.
https://getrim.app/ - I don't normally self-promote commercial products like this, but this is relevant to the article, and I thought people might find it interesting.
The deeper problem is the sad state of distributed computing for the end user:
* Application instances expect to be the only ones modifying the files that underlie the document being edited. Most of them simply bail out when the files get modified by another application.
* The default is "one device = one (local) filesystem," which is the exact opposite of what everyone needs: "one person = one (distributed) filesystem."
* The case for local-only filesystems rests on corner cases, or on deficient distributed filesystems that fail to uphold basic security constraints (such as "my data lives only on my devices" or "no SPOF for my data").
* Whatever gets pushed to the cloud becomes strongly dependent on devices and vendors. Users end up handcuffed to specific hardware (iCloud) or software (Android) if they want any chance of interacting with their own documents from their own devices.
* What we need is not cloud desktops or cloud storage. We need local desktops with a decent distributed filesystem, and vendor-agnostic access to that filesystem from all our devices.
My preference is to select one of the work contexts (e.g., the office) as primary and to put a workstation there, then remote to that workstation from secondary contexts (e.g., at home). This configuration gives me first-class computing where I need it most, in the primary context, and a decent second-class option when I need to work in other contexts.
I happily worked with this configuration for more than a decade and found it served all of my local and remote needs.
NoMachine and ThinLinc.
Everything else is fine for occasional remote desktop administration, but they all suffer some combination of bad video quality, no audio, no keyboard-shortcut capture or bad scaling options.
For the latency: the threshold is around 100ms. Above 100ms you really start to notice the latency, and it becomes annoying to the point that you even start making mistakes while typing. An example: the average latency from my home laptop to a server in the AWS cloud is 20ms. If I add a GUI remoting solution (such as Xpra, which is pretty good wrt latency), the latency increases to 60-80ms (and that's just for remoting a single GUI app like VSCode, not the whole desktop). Now add the latency of the app itself, which for VSCode is about 50ms. The total becomes 110-130ms. So latency-wise, the experience of working with a cloud desktop is noticeably worse than my local developer laptop.
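The same budget as a trivially adjustable calculation (the component numbers are just the ones from my measurements above; swap in your own):

```python
# Back-of-the-envelope latency budget for a remoted editor session.
network_rtt_ms = 20              # laptop -> AWS region
remoting_overhead_ms = (40, 60)  # added by the GUI remoting layer (Xpra)
app_latency_ms = 50              # VSCode's own keypress-to-paint time

low = network_rtt_ms + remoting_overhead_ms[0] + app_latency_ms
high = network_rtt_ms + remoting_overhead_ms[1] + app_latency_ms
print(f"total: {low}-{high} ms (annoyance threshold: ~100 ms)")
# -> total: 110-130 ms
```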
For the cost: my developer laptop costs about $1500 for 16 cores, 32 GB of RAM and a 1TB SSD. The equivalent cloud desktop setup would probably be around $400 a month. So in just 4 months the cost of the cloud desktop exceeds the cost of the laptop.
In my opinion, cloud desktops only make sense when you're not sure how much capacity you need. Is 4 or 8 cores enough for your work? 16 or 64GB of RAM? The cloud desktop setup is flexible: need more, allocate more. But once the capacity is known, you should switch to your own hardware to significantly reduce the cost and actually improve the experience.
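The break-even arithmetic, spelled out (the $1500 and $400/month figures are the ones above; adjust for your own setup):

```python
import math

laptop_cost = 1500   # one-time: 16 cores, 32 GB RAM, 1 TB SSD
cloud_monthly = 400  # comparable always-on cloud desktop

months_to_breakeven = math.ceil(laptop_cost / cloud_monthly)
print(f"cloud overtakes the laptop after {months_to_breakeven} months")  # 4
print(f"3-year cloud total: ${cloud_monthly * 36:,}")                    # $14,400
```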
I'd mention SSH port forwarding in this section. For webdev you'll want to run your server on the remote host and use the local web browser, and SSH port forwarding works great for this. I recently used this setup to get some extra RAM for a short project that could only be run as a collection of memory-hungry microservices; this way I could get the whole thing running on one box, and I spun down the server once the project was done.
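A minimal sketch of that forwarding, just shelling out to the OpenSSH client from Python (the host "me@devbox" and port 3000 are placeholders, and it assumes ssh is on the PATH):

```python
# Forward the remote dev server's port to localhost so the local browser
# can reach it. Equivalent to: ssh -N -L 3000:localhost:3000 me@devbox
import subprocess

LOCAL_PORT = 3000   # placeholder
REMOTE_PORT = 3000  # placeholder
HOST = "me@devbox"  # placeholder

subprocess.run(
    ["ssh", "-N",  # -N: no remote command, forwarding only
     "-L", f"{LOCAL_PORT}:localhost:{REMOTE_PORT}", HOST],
    check=True,
)
# While this runs, http://localhost:3000 reaches the server on the remote box.
```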
Windows 10 made everything 3D, so a virtual machine without an assigned GPU means everything is first rendered into a bitmap and then sent over the wire as a video stream. This causes additional delay, JPEG-like artifacts and instability.
VS Code for development, SSH instead of a remote desktop; most operations can be done via VS Code anyway. Chrome Remote Desktop is slow, I agree, but I never use it anyway.
Works pretty well for us, since remote windows and local ones are seamlessly integrated and managed by the local WM, which solves the multi-monitor issues. Definitely lower latency than VNC, RDP or NoMachine in our testing. The Windows, Mac and Linux clients all work well.
As others say, with RDP it is sometimes very hard to tell what is local and what isn't. Everything seems to just work, even using the Mac client.
Compare this with everything else I've used and it's a real janky JPEG compression mess.
I hope many in these comments get to experience this (especially Coder v2, which is far more flexible in provisioning workspaces) instead of the RDP nonsense that others have to suffer through.
While Parsec is good (and NVIDIA GameStream + Moonlight seems better, in my experience), it really isn't good enough to use instead.
Also, honestly, with the advent of things like Tailscale, I think it'll become more and more common to have a desktop plus a nice but weaker/cheaper device (Chromebook, MBA, etc.) from which you can securely access it at your desk or remotely if you want. It's what I personally do with my desktop and M1 MBA right now.
Also want to add that dedicated servers aren't that expensive by comparison and you can get a lot of value paying like $100/month and using that remotely.
I have a Raspberry Pi running Remmina and accessing a number of different machines via RDP: a personal Fedora 36 desktop running in an LXC container, a Windows VM on Azure, and various other similar environments. I am typing this on that Pi, through that Fedora session, pushed to a 2560x1080 display. Typing and typical browsing are almost indistinguishable from "being there". Coding too. It only becomes noticeable (on the Pi) when large parts of the screen update and the little thing has to chug along, but I'd rather have this completely silent setup than an Intel NUC.
For work I do have a spanking-new work-issued laptop, but the fans spin up whenever I launch anything of consequence, so I still log in to a virtual desktop environment for everything up to (and including) audio calls (RDP has pretty decent audio support these days). Video and display sharing I still do locally, mostly because it's usual to switch environments during a call, but I have full multi-display support, and the connection handles my 5K2K and 4K displays just fine.
I've been doing this for a decade or so, ever since I could use Citrix over GPRS. The user experience is fantastic - even at that time I could literally close my session in the evening, take a morning flight to Milan, pop open my laptop and continue where I had left off, over a piddling sub-64Kbps link.
With the right setup (and experience), latency issues mostly vanish. These days you can push a full 3D rendered desktop over DSL with either optimized RDP or game streaming, so the real constraints typically come from IT restrictions and people wanting to micromanage their environments.
That said, I also use VS Code Remote, and it works great for me as well over SSH. But it's just easier to spin up a VM/container and do that from my iPad :)
Edit: Remembered I shot this video of it running over Wi-Fi, unoptimized: https://twitter.com/rcarmo/status/1561397639215665153?s=20&t...
With RDP, in our experience, the latency issue is nonexistent. We've even successfully run workstations editing 4K video with zero issues. Yes, for those extreme cases you need a GPU (and the only option is NVIDIA GRID: really expensive cards with licensing on top), but for the most part, if the hypervisor has a good CPU, it's more than enough; we have clients who even use RDP through the browser.
You don't even need a really good internet connection. SPICE is really good too, with excellent desktop integration.
Running a local-first setup is nice for things like iteratively step-debugging your latest changes on a single test case, but being able to push the diffs to a fast remote build server (an elastic cluster?) to speed up the "run all the tests" action would be a big win.
I think you can do this with Clang remote builders too. I hear Bazel has this.
Is this something that anyone has experience with? It seems like it could be the best of both worlds, from a compute performance standpoint.
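In the meantime, a low-tech version of that loop is easy to sketch: rsync the working tree to a beefy host and run the suite there over SSH. Assumes rsync and ssh on the PATH; "me@buildbox", the remote path and "make test" are all placeholders:

```python
# Sync the local working tree to a fast remote host and run the tests there.
import subprocess

HOST = "me@buildbox"          # placeholder
REMOTE_DIR = "~/src/project"  # placeholder

def push_and_test(test_cmd: str = "make test") -> int:
    # Only changed files go over the wire; skip VCS metadata and build output.
    subprocess.run(
        ["rsync", "-az", "--delete",
         "--exclude", ".git", "--exclude", "build/",
         "./", f"{HOST}:{REMOTE_DIR}/"],
        check=True,
    )
    # Run the full suite where the cores are; output streams back locally.
    return subprocess.run(["ssh", HOST, f"cd {REMOTE_DIR} && {test_cmd}"]).returncode

if __name__ == "__main__":
    raise SystemExit(push_and_test())
```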
(As others have noted, the other big benefit of a cloud desktop is that you don't have to spend time setting up your dev environment, which is constant toil for new developers; Github mentioned this as a big contributor of friction in https://github.blog/2021-08-11-githubs-engineering-team-move....)
GPU-intensive desktops are pretty much a no-go, but the MATE desktop works beautifully and does what a desktop should: manage my environment and get the hell out of my way.
Browsing on the remote desktop is anything but smooth, but it's good enough for development. I'm not going to stream video on it, though.
There's a tiny bit of keystroke latency, but not enough to matter IMO. I'm using Chrome Remote Desktop, so YMMV. Running Steam on a cloud desktop is possible, but it's an exercise in madness.
I do it all using LXD to keep things relatively distro agnostic. I've posted the Python script I use here: https://github.com/kstenerud/lxc-launch
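For a flavor of the approach (not the linked script, just a minimal sketch of driving LXD from Python via the lxc CLI; the image alias and container name here are placeholders):

```python
# Launch a container and drop into a shell in it via the lxc CLI.
import subprocess

NAME = "dev"  # placeholder container name

subprocess.run(["lxc", "launch", "ubuntu:22.04", NAME], check=True)
subprocess.run(["lxc", "exec", NAME, "--", "bash"])
```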
So I stuck a workstation with Linux on it in a closet. Fired up VNC and I could hit it from home, my cubicle, the road, wherever. It's evolved over the years as things became faster and more secure. It became a co-located server, then a VPS, and now it's a shared setup on a beefy server.
It maintains its state no matter where I go. I can open up two or more sessions for two or more monitors. But it's more useful to just surf the web or open PDFs or whatever on the local machine. Copy and paste is pretty seamless these days. And wherever it's located, it has a much better network connection than I do.
You still have a transfer-speed problem with large files (a CD .iso in the old days, a 10GB package these days). I don't play games, so I don't really know how that goes. But for development it works great, as well as for general workstation use.
I know numerous gaming companies that swear by Parsec. But the author doesn't appear to be talking about Parsec-tier cloud desktops. Then again, it's not clear what the author is talking about.
I started working at Fly.io ~4 months ago and quickly realized I could set up a nice remote dev environment, since there are regions close to me (super low latency).
I set up a VM running SSH to sync files and forward ports. It turns off when I'm not using it (after a configured timeout it sniffs for SSH connections and exits if there are none, which stops the VM), and it uses Mutagen to sync files. The source of truth is my local files, so my local IDEs work great (they operate against the local filesystem).
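For the curious, a guess at what such an idle watchdog could look like (this is not the tool's actual code; it assumes sshd listens on port 22 and that ss is available on the VM):

```python
# Poll for established SSH connections; exit after enough idle polls,
# which stops the machine.
import subprocess
import sys
import time

POLL_SECONDS = 60
IDLE_LIMIT = 10  # ~10 minutes with no SSH connection

idle_polls = 0
while idle_polls < IDLE_LIMIT:
    time.sleep(POLL_SECONDS)
    # List established TCP connections and look for sshd's port.
    out = subprocess.run(
        ["ss", "-tn", "state", "established"],
        capture_output=True, text=True,
    ).stdout
    idle_polls = 0 if ":22 " in out else idle_polls + 1

sys.exit(0)  # process exit -> the VM is configured to stop
```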
I wrapped it up in a little tool I'm calling Vessel https://github.com/Vessel-App/vessel-cli, which talks to Fly's "Machines API"
JetBrains has Gateway and VS Code has its remote dev tools.
Gateway's performance is very dependent on network connectivity. If you have bad ping, you're going to curse the world at the input delay.
VS Code seems to cache the files locally and update them separately, so even with a bad connection you still get native input latency.
It is nice having a single cloud-based machine that is accessible via SSH from any of my physical devices.
I have a dev environment closer to production, SSL, and publicly accessible URLs for testing services and sharing work to compare designs and UI changes, etc.
Fantastic setup for anyone who likes a vim+tmux workflow. Only a single environment to keep up to date and configured. Daily snapshots and backups.
It keeps the cost of other hardware down as well: I can work effectively on cheap hardware, which certainly offsets the server costs. I did a cost rundown before, and roughly 15 years of my VPS plus cheap hardware equaled the price of a single entry-level MacBook Pro.
The stuff is seamless. I mean it. I hate lagging, I hate stuff that doesn't work. But this does work. Really well.
Yet another use case where VDI falls down.
There are benefits: I could scale up my workstation even for just an hour or so, with more memory or a fancier CPU. And it was easier to share my work with (remote) colleagues; because they were in another timezone, I could leave the server up for them when needed, shut my laptop down for the day, and see their feedback the next day.
Now that I've changed jobs and I'm developing a desktop app again, I'm back on a physical Linux box under my desk, and I really miss the old experience. It was great never having to care about a Mac change tanking your productivity (i.e., I was totally unperturbed by the M1 switch), and it was also great not having to run a Linux desktop environment, which it turns out is still a big pain.
I think I'll keep using my laptop as my primary and only machine, because many of the scenarios in the article also apply to me. And what if I have to visit a customer? It hasn't happened since the pandemic, but it could.
It sucks. Your productivity plummets because each keystroke lags and it makes you lose your train of thought. When there's an outage, no one can do any work at all.
Like the famous "100ms = 1% of sales" at Amazon https://news.ycombinator.com/item?id=273900
It lets me use Linux as my daily driver, I have a highly capable machine with large L2/L3 cache, a lot of RAM, many CPUs — and it’s totally portable.
Not to mention that the internet speeds on the cloud VM are incredible: easily 1 Gbps+ wherever I am in the world. This is a selling point folks forget.
The combination of speed (hardware and network) and always being on (can leave compilation tasks etc. running) is very nice.
I’ve used Citrix and the modern Chrome Remote Desktop experience is generally an order of magnitude better.
Working on a bus with wifi, typically fine. Even working from Asia with the VM in California, great.
The only issue I have with cloud is that it's expensive for personal use. Google Compute VMs cost a lot more per year than equivalent workstations with similar hardware, AFAICT.
That’s the question I’m curious how folks work around.
What kind of builds require more than one of the new MacBook Pros?
And what about using cloud development environments instead of a fully remote desktop? I haven't properly tested GitHub Codespaces, but it seems to me that a lightweight laptop (i.e., the cheapest MacBook Air, if Apple) with MDM plus Codespaces could work really well.
Sure, not everyone is using these tools, but to state that devs in general need both a beefy workstation and a laptop sounds a bit outlandish to me.
IMO security drives this decision, and being able to work remotely is the benefit.
Who thought they were any good in the first place?
I always find titles like this clickbaity, because the author has no idea what anyone in the audience would think.
Reminds me of an old document by Stuart Cheshire.
I work with Rust and TypeScript projects; an MBP M1 Pro with 32 GB of RAM is 110% enough.
- The workstation model means working at a good physical desktop setup: a large main monitor, possibly other monitors, a good keyboard, perhaps a thumb trackball instead of a mouse, etc. Sure, the same can potentially happen with a docked laptop, but...
- ...the laptop model means being able to move. If we WFH there aren't many reasons to move, except when moving means relocating elsewhere. In practice, MOST laptop users do not use their computers to be operational on the go but as desktop replacements in suboptimal, improvised setups, while those who need a good laptop can hardly find one.
The real issue turned out to be a widespread lack of knowledge about how remote work should be done. For more than a decade we have seen big PR campaigns about nomadic workers working over unstable, limited mobile networks with PCMCIA/3G modem cards, then USB/HSPA sticks, portable hotspots, mobile tethering and so on, in a bar (with potentially hostile and distracting surroundings) or on a beach (a potentially hostile climate/environment for mobile devices). This model, which pushed from "big notebooks" to netbooks to ultrabooks and so on, obviously fails miserably, since it can't really work. We can work in such a setup for a limited period of time on limited tasks, but nothing more.
Now many are starting to admit that the solution is going back to the classic desk, BUT this means every home needs a room with a proper setup, which is an effort for both the worker and the company. A thing most reject.
Substantially: it's about time to state things clearly. The modern web is CRAP, built to sell services instead of empowering users through IT, and we must also admit that the classic commercial desktop model is CRAP. We damn well need real desktops with document-based UIs, working locally and syncing only the data that needs to be synced, just as we do as humans: anyone doing a certain job with a significant degree of independence within a single company.
To do this remotely we need a damn room per worker, well equipped, rented to the employer at a fair rate, with clear contracts establishing that work paradigm.
Trying to keep up the crappy surveillance-capitalism business, which can be summed up as "rent someone else's services, own nothing" in the vein of the famous WEF/2030 video (https://youtu.be/Hx3DhoLFO4s), is a very expensive absurdity. Trying to keep up hybrid crap to avoid real capex is another absurdity.
Those who are eligible to work from home and want that paradigm should provide a proper room for it, and companies should be clear: "you are hired for remote work AND REMOTE ONLY; any travel to meet in person will happen no more than once in a while," where the timeframe varies depending on the company and the workers' geographical distance.
Let's do that and we all benefit, companies and workers together, in a win-win move whose only losers will be GAFAM and friends (from Citrix to Bomgar). Avoid doing so and we will keep an inconsistent, liquid situation that can trivially be described by the famous Full Metal Jacket Sgt. Hartman definition of the most common amphibious thing so called...
It was just too hard.
I never got to the stage they hint at. If a tiny number of things won't work, does that mean the whole idea fails?
If you only have Word/Excel/internet etc. in one lab, inevitably someone will ask for X, Y, Z. Is the money saved on computers and maintenance, plus the benefit of instant installs/upgrades, worth more or less than the property and teacher/student time costs of that lab running at 90% usability?
But licensing stopped the experiment.
OP is suggesting a complete remote desktop for office applications, like video conferencing. Ironically, for all the crap X takes, it could actually pull this off; even more so Wayland. I'm surprised there isn't a graphics client/server model out there as good as X after ~40 years. But I think the problem is too much layering: putting a VM in the cloud as an office desktop requires way too much bandwidth and incurs too much latency through a remote desktop without a client/server graphics mode. The tools are there; they just aren't being used because they're missing a security layer.
I haven't used a physical workstation at my desk since 1999, and I was a designer/architect at Intel for decades. Everything was done via VNC. Back then it was called "distributed computing", with AFS, so it was a "proto-cloud". And before that I used a Sun workstation to telnet into beefier computers. This was AIX/SunOS/Linux based.
Granted, I was not videoconferencing, but there's no reason why the desktop needs to be rendered in the VM (including the video stream!!!), then encoded, then decoded, then rendered again. It's just dumb.