imiric
I like the focus on simplicity in uMon, and agree with the author's criticism of behemoths like Grafana.

But looking at the installation instructions[1], I can't help but think that their reluctance to use Docker is contrarian for no reason (and the quip about it being "out of fashion" seems completely misguided). This whole procedure could be automated in a Dockerfile, and actually running uMon would be vastly simplified. Docker itself is not much more than a wrapper around Linux primitives, and if they dislike it specifically for e.g. having to run a server and containers as root, there are plenty of lighter-weight container alternatives.

There's an argument to be made that the "Simple" Network Management Protocol they're a fan of is far from simple either[2]. Configuring the security features of v3 is not a simple task, and entire books have been written about SNMP as well. They conveniently sidestep this by using v2c and making access public, which might not be acceptable in real-world deployments.

I'm all for choosing "simple" tools and stacks over "complex" ones, for whatever definition of those terms one chooses to use, and I strive to do that in my own projects whenever possible, but simplicity is not an inherent property of old and battle-tested technologies. We should be careful not to be biased toward technology we happen to be familiar with, and instead be pragmatic about picking the right tool for the job, one that fits our requirements regardless of its age or familiarity.

[1]: https://tomscii.sig7.se/umon/#Installation%20and%20getting%2...

[2]: I have a pet peeve about tools or protocols with "simple" or "trivial" in their name. They almost always end up being the opposite as they mature, and the name becomes an alluring mirage tricking you into its abyss of hidden complexity. I'm looking at you, SMTP, TFTP...

ahofmann
I like the concept of simple monitoring. Simple means it is simple to install, simple to maintain and simple to use. For me, this is netdata. Netdata could be much more, but I just install it on whatever machine and never think about it again. And when something is strange on that machine, I go to http://localhost:19999 and look around.
nine_k
I like the idea of simplicity and doing exactly what you need. And the single executable.

The words "stupid simple" and "C++" together make me scratch my head though. C++ itself is not simple, and you have to recompile if you need to change something (and sometimes you inevitably do), which is slow. I'd likely go with a relatively simple C program that embeds the FFI for RRDtool and other stuff, and embeds Lua, or, better yet, Janet. Then most of the thing could be written in these languages, and would be easy to tweak when need be. That would still allow for a single executable + a single config file, on top of the logic already embedded. (But the author went and built the thing, and I did not, so their solution so far outperforms mine.)

dobin
I was also overwhelmed by Grafana and co. In the time required to install it, I coded a simple monitoring alternative, DMSR ("Does My Shit Run"), in Python. Each agent has plugins which basically just send a data structure to the monitoring server, which displays it as YAML. No persistence, history, graphs or anything similar. uMon looks like a behemoth in comparison.

Github: https://github.com/dobin/dmsr

Live: https://mon.yookiterm.ch
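
The agent side is essentially this (a simplified Python sketch of the shape of it, not the actual DMSR code; the server endpoint and the plugin set are made up for illustration):

    # A plugin is a function returning a dict; the agent merges the
    # results and POSTs them to the monitoring server.
    import json
    import os
    import socket
    import urllib.request

    SERVER = "https://mon.example.com/ingest"  # hypothetical endpoint

    def plugin_load():
        return {"load1": os.getloadavg()[0]}

    def plugin_disk():
        st = os.statvfs("/")
        return {"root_free_gb": st.f_bavail * st.f_frsize / 1e9}

    data = {"host": socket.gethostname()}
    for plugin in (plugin_load, plugin_disk):
        data.update(plugin())

    req = urllib.request.Request(
        SERVER, data=json.dumps(data).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)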

kburman
I really like the idea behind μMon. It reminds me of when software was simpler. I remember using a program called "Everything" by voidtools. It was small but could search a huge number of files quickly. Nowadays, some projects use big tools like Elasticsearch just to search a few things. Some even use PostgreSQL, a big database, for small tasks. I wish more software would keep things simple.
roger_
I was recently looking for an ultra minimal monitoring solution for OpenWrt and other lightweight systems (Pi’s, etc.) and was disappointed not to find one that met my needs (negligible CPU, disk space and RAM).

I ended up hacking together a shell script to send data to Home Assistant (via MQTT) which runs on pretty much any system that has at least netcat: https://github.com/roger-/hass-sysmon
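
Roughly this idea in Python, as a sketch (the real project is a dependency-free shell script over netcat; here paho-mqtt stands in, and the broker address and topic names are placeholders following Home Assistant's MQTT discovery convention):

    import json
    import os
    import paho.mqtt.publish as publish

    BROKER = "homeassistant.local"  # assumed broker address
    STATE_TOPIC = "hass-sysmon/demo/load1"

    # One-time retained discovery message so Home Assistant
    # auto-creates the sensor.
    config = {
        "name": "Demo load average",
        "state_topic": STATE_TOPIC,
    }
    publish.single("homeassistant/sensor/demo_load1/config",
                   json.dumps(config), retain=True, hostname=BROKER)

    # Periodic state update (a single reading here).
    publish.single(STATE_TOPIC, str(os.getloadavg()[0]), hostname=BROKER)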

posix86
Why's everyone hating on Grafana? I find it fairly easy to use, and it has a good balance between power & simplicity. And with Docker you can make it run in seconds.
pphysch
This doesn't seem much simpler than Prometheus+Grafana, if at all.

Some pushback:

- SNMP sucks. It's very limited, difficult to secure, etc. I've spent a lot of time with it, and it's more complex than Prometheus' simple HTTP metrics model (sketched after these points). I use it where I have to (non-server devices), but I prefer dealing with Prometheus.

- Grafana is not necessarily complex. It's powerful, and you can waste a lot of time overinvesting in dashboard design, but that's not required. It can be used quite elegantly.
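
To make "simple HTTP metrics model" concrete: an exporter is nothing more than an HTTP endpoint serving plain-text metric lines. A stdlib-only Python sketch (the metric name and port are made up for illustration):

    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    START = time.time()

    class MetricsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/metrics":
                self.send_error(404)
                return
            # Prometheus text exposition format: HELP, TYPE, then samples.
            body = (
                "# HELP myapp_uptime_seconds Seconds since process start.\n"
                "# TYPE myapp_uptime_seconds gauge\n"
                f"myapp_uptime_seconds {time.time() - START:.1f}\n"
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 9100), MetricsHandler).serve_forever()

Point Prometheus at that endpoint and you're done; there is nothing else to the protocol.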

μMon does seem like "old school for the sake of old school". SNMP and RRDTool were designed when memory & bandwidth were much more limited. I will happily trade the overheads of HTTP and static Go binaries for the much superior UX they offer.

louwrentius
I do run Grafana + InfluxDB at home and I agree it's not trivial to set up. Grafana in particular makes creating informative, easy-to-read graphs/dashboards a PhD-worthy endeavor.

Yet, I run some hobby projects that collect data, and this setup is absolutely perfect for them. I even challenged myself to use SSL for the InfluxDB server (running a small CA).

Also, I use Slack-based alerting through Grafana, for example when a disk is about to fill up or something is down.

So it’s really about what your needs are.

And often, basic system metrics like CPU usage, load or network traffic don't tell you anything useful or actionable.

Borg3
Not bad. I like KISS concepts. I personally run an old Cacti instance for monitoring here. Not as simple as uMon, but not very complicated either. I even wrote cacti_con, a CLI graph viewer, to see specific ports of those fat 100+ port campus switches I had at work :)
ilyt
https://collectd.org/ does the gathering (and writing to an RRDtool database, if you so desire) part very well. Many plugins, and it's easy to add more (a plugin just returns one line of text).
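
For example, an exec-plugin script is literally a loop printing one PUTVAL line per reading (a Python sketch; the identifier "exec-demo/gauge-load1" is made up, and collectd's exec plugin sets the two environment variables used below):

    #!/usr/bin/env python3
    import os
    import socket
    import time

    host = os.environ.get("COLLECTD_HOSTNAME", socket.gethostname())
    interval = float(os.environ.get("COLLECTD_INTERVAL", 10))

    while True:
        load1 = os.getloadavg()[0]  # one gauge reading, as an example
        print(f'PUTVAL "{host}/exec-demo/gauge-load1" '
              f'interval={interval:.0f} N:{load1}', flush=True)
        time.sleep(interval)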

You still need an RRD viewer, but that's not a huge stack.

And it scales all the way to hundreds of hosts: on top of sending/receiving stats over the network, it supports a few other write formats besides plain RRD files.

vsviridov
Looks more or less like Munin...
rcarmo
RRDtool is still my go-to for a lot of things. The only functionality I'm missing is an SVG version of the charts that would allow panning and zooming into particular points in the past.
ComputerGuru
This really speaks to me - rrdtool is criminally underutilized. Great work!

I did something different but in a similar vein for one server network. We had Seq already deployed for log monitoring, so instead of setting up a separate network/node/app health-monitoring interface, I configured everything to regularly ping Seq with a structured log message containing the needed data, which could be extracted and graphed with Seq's limited out-of-the-box charting abilities in the dashboard. Not perfect, but simpler.
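
The shape of it, as a rough Python sketch (not the actual setup; the Seq address and property names are placeholders, using Seq's CLEF raw-ingestion endpoint):

    import json
    import os
    import time
    import urllib.request

    SEQ_URL = "http://seq.internal:5341/api/events/raw?clef"  # assumed address

    # A structured "health ping": message template plus named properties.
    event = {
        "@t": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "@mt": "Health ping from {Host}: load {Load1}",
        "Host": os.uname().nodename,
        "Load1": os.getloadavg()[0],
    }
    req = urllib.request.Request(
        SEQ_URL, data=json.dumps(event).encode(),
        headers={"Content-Type": "application/vnd.serilog.clef"})
    urllib.request.urlopen(req)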

golem14
Looks nice. I would like to use something like this to remotely monitor machines. Currently I use Prometheus (but without Grafana), since the alerting and built-in graphing are sufficient.

But I agree with OP that Prometheus feels more complex than it needs to be for simple use cases. But so does sendmail ;)

evilc00kie
Nice work!

I often think about the "reinventing the wheel" argument. Isn't open source about diversity? There are so many forks, clones, and "Yet another..."s (yacc, yaml, ...).

So many times I've looked for suitable Go libraries that solve a certain problem. There might be a few out there, but every lib has its own pros and cons. Having the possibility to choose is great. Nothing sucks more than depending on an unmaintained C lib nobody cares about, with no alternatives.

The only counter-example that comes to mind is crypto. You don't want to roll your own crypto.

mayli
FYI, here is a very similar project.

https://github.com/pommi/CGP

amar0c
Monitoring should be a "central" location with a GUI/graphs, plus agents across a bunch of servers. Let me choose from a dropdown what I want to see.

If I have to deploy this on each machine, then it makes no sense. I know SNMP can be used like this, but can μMon?

new_user_final
Is there any simple monitoring system for Kubernetes that will monitor memory and CPU usage for each deployment and node? Prometheus and Grafana are good, but too much configuration. I also like HAProxy's stats page. Something like that, but per service?
wtcactus
Nice, nifty project; he had me until the "no alerting" part.

Anyway, I might still deploy this in a Proxmox homelab where I don't want to fight with the complexity of a Grafana dashboard.

random3
It's 2023. Tracing has been around for 20 years (DTrace, X-Trace: https://cs.brown.edu/~rfonseca/pubs/xtr-nsdi07.pdf).

Very simple logging, if not structured, is not completely useless, but it's not very useful either, except maybe for showing some nice charts.

Any serious monitoring tool is useful when it can explain things, and only tracing gives you causal information.

basemi
Not to be confused with http://umonfw.com/
gjulianm
I honestly don't get the criticisms of the Prometheus + Grafana stack.

> A full-blown time-series database (with gigabytes of rolling on-disk data).

Prometheus has a setting that allows you to limit the space used by the database. I'm not sure however how one can do monitoring without a time-series database.
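
(If I remember correctly, the relevant flags are --storage.tsdb.retention.time and --storage.tsdb.retention.size.)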

> Several Go binaries dozens of megabytes each, also consuming runtime resources.

Compared to most monitoring tools I've tested, the Prometheus exporters are usually fairly lightweight relative to the number of metrics they generate. Also, "several dozens of megabytes" doesn't seem like too much when we're usually talking about disk space in the gigabytes...

> Lengthy configuration files and lengthy argument lists to said binaries.

Configuration files: yes, if you want to change all the defaults. Argument lists: not really. In reality, a Docker deployment of Grafana + Prometheus is 20 lines in a docker-compose.yml file. Configuration files come with defaults if you install it on the system.

By the way, I'm not sure that configuring a FastCGI server will be easier than configuring a Docker compose file...

> Systems continuously talking to each other over the network (even when nobody is looking at any dashboard), periodically pulling metrics from nodes into Prometheus, which in turn runs all sorts of consolidation routines on that data. A constant source of noise in otherwise idling systems.

Not necessarily. Systems talk to each other over the network only if you configure them to do so. You can always install Prometheus + Grafana on every node if you don't want central monitoring, and you'll have no network noise.

> A mind-boggingly complex web front-end (Grafana) with its own database, tons of JavaScript running in my browser, and role-based access control over multiple users.

Grafana, complex? I think dragging and dropping panels, with query builders that don't even require you to know the query language, is far better than defining graphs in shell scripts.

> A bespoke query language to pull metrics into dashboards, and lots of specialized knowledge in how to build useful dashboards. It is all meant to be intuitive, but man, is it complicated!

Again, this is not a problem of the stack. Building useful dashboards is complicated no matter what tool you use.

> maintenance: ongoing upgrades & migrations

Not really. Both Prometheus and Grafana are usually very stable, and you don't need to upgrade if you don't want to. I have a monitoring stack built with them in my homelab that I haven't updated in two years, and it still works. Of course I don't have the shiny new features, but it works.

To me, it seems that the author is conflating the complexity of the tool with the complexity of monitoring itself. Yes, monitoring is hard. Knowing which metrics to show, which to pull, and how to retain them is hard. Knowing how to present those metrics to users is also hard. But this tool doesn't solve that. In the end, I don't know how useful it is to build a custom tool that collects very limited metrics based on other ancient, limited, buggy tools (SNMP, RRD, FastCGI...) and is missing even basic UX features like being able to zoom or pan on charts.

toxik
What about netdata?