> Own your own PAAS. Infrastructure at a fraction of the cost. Powered by Docker, you can install Dokku on any hardware. Use it on inexpensive cloud providers. Use the extra cash to buy a pony or feed kittens. You'll save tens of dollars a year on your dog photo sharing website.
I'm interested in self-hosted, AWS-API-compatible solutions.
Parse is also interesting. In fact, I discovered that there is a whole new world outside AWS and DO. It no longer makes sense for me to run weekend projects on AWS outright; that's really for large enterprise applications.
What I would love is to be able to deploy my Docker images on my "own" cloud running on a dozen droplets/VPSes around the world and bill my users at the same rate that AWS is charging. I figure I would be able to undercut AWS significantly with this method.
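The minimal version of what I have in mind is roughly this per droplet (a sketch only; the registry, image name, and port are placeholders):

```
# On each droplet: install Docker (the convenience script covers mainstream distros)
curl -fsSL https://get.docker.com | sh

# Pull and run the image; a reverse proxy (nginx, Caddy, Traefik) in front of
# port 8080 would handle TLS and host-based routing.
docker pull registry.example.com/myapp:latest
docker run -d --restart unless-stopped --name myapp -p 8080:8080 \
  registry.example.com/myapp:latest
```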
Storing 2TB on AWS was a price shock for me; it makes no sense when there are dedicated servers with a ton of storage and bandwidth that I could just use instead of being "serverless".
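The back-of-the-envelope math that shocked me (rates are approximate and change, so treat the numbers as illustrative):

```
# 2 TB in S3 standard storage at roughly $0.023 per GB-month (first-tier list price)
echo "2048 * 0.023" | bc    # ~ $47/month, before request and egress charges
# A dedicated server or storage VPS with 2+ TB of disk often rents for a similar
# flat monthly price, with bandwidth bundled instead of metered per GB of egress.
```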
However, I find it to be a hall of mirrors, and I wonder if anybody on HN has been in my shoes and how they are handling it. I guess my biggest concerns are:
- uptime/availability (can I promise something close to AWS?)
- security (how do I harden my box/selfcloud and have peace of mind? a rough baseline sketch follows this list)
- IAM-style roles/permissions (how do I emulate something like IAM for my team?)
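On the security point, the baseline I've gathered so far for a single box looks something like this (assumes Ubuntu/Debian; I'd love to hear what's missing):

```
# Minimal single-box hardening baseline (Ubuntu/Debian; package names may differ elsewhere)
apt-get update && apt-get install -y ufw fail2ban unattended-upgrades

# SSH: keys only, no root login
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl reload ssh

# Firewall: deny everything inbound except SSH and HTTP(S)
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable

# fail2ban's default jail covers sshd; unattended-upgrades keeps security patches current
systemctl enable --now fail2ban
```

For the IAM question I haven't found anything better than one SSH key per person (on Dokku that would be something like "dokku ssh-keys:add alice /path/to/alice.pub") plus careful sudoers rules; nothing as fine-grained as real IAM.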
The billing from AWS is getting to a point where I feel like we could use a bunch of droplets behind some "selfcloud" that manages all these things for me. I'm sure solutions exist; I just often have trouble recalling their names. Dokku is easy to remember because it sort of rhymes with Heroku.
https://github.com/piku/webapp-tutorial/blob/master/README.m...
It is a way to get the same sort of developer-experience benefit as Dokku, but without Docker and containers, using plain UNIXy tools on a single Linux node. The link above explains how it works.
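To make it concrete, a piku deploy looks roughly like this (host and app names are placeholders; the tutorial above is the authoritative reference):

```
# On your machine: point a git remote at the piku user on your server
git remote add piku piku@server.example.com:myapp

# piku reads a Procfile (and an optional ENV file) from the repo root, e.g.
#   web: python3 app.py
# and runs it with plain uwsgi + nginx on that one node.
git push piku main
```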
For those worried about a single point of failure: you can't have your cake and eat it too. Docker and Dokku are a single point of failure when you build directly on them, I guess. Kubernetes mitigates that.
We're still here, keeping the lights on. I haven't personally tested K8s 1.24 with Workflow yet, but we are actively seeking maintainers for all of the cloud vendors.
If you find that something is broken and want to see it fixed, talk to me on Slack. I'm aware that at least Azure has broken its bucket storage API since Workflow was originally published by Deis, and that hasn't quite been fixed yet.
Documented support for Google Cloud and AWS is also extant, although I've heard one person question whether Google Cloud Storage will work out of the box or whether it still needs upgrades. We'd like to add DigitalOcean support as well; I already know it basically works, we just need to add documentation and probably give it a quick spit-shine and polish.
Dokku, on the other hand, is less heavyweight and does not require as much maintenance work, since it does not have to be decentralized or protect against any SPOF. It's a wonder that more people do not want to use it. I've heard of at least one more Deis Workflow fork in the wild, possibly in a better or worse state of maintenance than Team Hephy's, but if you are looking to get off of Heroku you could certainly do much worse.
We do need volunteers, as we do not have time to maintain everything by ourselves (come talk to us on the Slack if you're interested in joining the fun!).
The single point of failure, though, is something people worry about too much. If you are serious about what you host, you do need to have another host ready to go.
The majority of workloads can be down for a few hours without customers worrying anyway.
We actually use it for hosting quite large workloads, but we do have a few bare metal hosts that we manually provision different services across. At a moment's notice we can push the app to a different server, and we would be down for a few tens of minutes at most, usually just minutes.
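Concretely, the "push to a different server" move is about this much work, assuming the standby host already has the app created and the data replicated (host names are placeholders):

```
# Two Dokku hosts, two git remotes
git remote add production dokku@host-a.example.com:myapp
git remote add standby    dokku@host-b.example.com:myapp

# Normal deploys go to production; when host-a dies, push the same commit
# to the standby and point DNS (with a short TTL) at host-b.
git push standby main
```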
I just don't get paying what Heroku costs because your (in reality) non-essential workload can't be down for a few seconds.
From a dev perspective, it’s been awesome.
I have zero monitoring on it, so I have no idea if users love it. But analytics look good.
Either way. Impressed. And appreciate all the efforts behind the project.
Edit: Rails, pg, sidekiq, etc.
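For anyone curious, the rough shape of that stack on Dokku is something like this (assumes the dokku-postgres and dokku-redis plugins; check the docs for your Dokku version):

```
# On the Dokku host
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
sudo dokku plugin:install https://github.com/dokku/dokku-redis.git
dokku apps:create myapp
dokku postgres:create myapp-db && dokku postgres:link myapp-db myapp
dokku redis:create myapp-redis && dokku redis:link myapp-redis myapp

# Procfile in the Rails repo:
#   web: bundle exec puma -C config/puma.rb
#   worker: bundle exec sidekiq
dokku ps:scale myapp web=1 worker=1

# From your machine
git remote add dokku dokku@server.example.com:myapp
git push dokku main
```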
Hetzner offers 48 dedicated vCores / 192 GB RAM in a single machine for less than €500/month. I can totally see many businesses working for years within these boundaries. Also, not every company needs Google-grade high availability.
I was very disappointed that it did not have TCP/UDP proxy support. Fortunately, the plugin system allowed us to add this functionality in a day.
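For context, a Dokku plugin is mostly a directory of executable scripts named after trigger points (post-deploy, post-delete, and so on) plus a plugin.toml with metadata, which is why this was a one-day job. The shape, very roughly (trigger contents elided, paths illustrative):

```
mkdir -p my-stream-proxy
cat > my-stream-proxy/post-deploy <<'EOF'
#!/usr/bin/env bash
APP="$1"
# write an nginx "stream { ... proxy_pass ... }" block for $APP's container
# port here, then reload nginx to pick it up
nginx -s reload
EOF
chmod +x my-stream-proxy/post-deploy
# published as a git repo and installed on the host with:
#   sudo dokku plugin:install <git url>
```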
Dokku is very nice for this use case. No need for a complicated Kubernetes setup.
And I bet the same goes for performance; mine can share data between cores atomically.
[0] https://caprover.com/