I'm not sure there's anything true on this list that is, in 2023, interesting; maybe you could argue they were in 2005.
The irony is, Ranum went on to work at Tenable, which is itself a firm that violates most of these tenets.
We had a user click an email and get phished.
We tried training the users with tools like KnowBe4, banners above the emails that say things like THIS IS AN OUTSIDE EMAIL BE VERY CAREFUL WHEN CLICKING LINKS. Didn't help.
The email was a very generic looking "Kindly view the attached invoice"
The attached invoice was a PDF file
The link went to some suspicious looking domain
The page the link brought up was a shoddy impersonation of a OneDrive login
In just minutes the user's machine was infected, and it emailed itself to all of their Outlook contacts...
So this means nothing in this list detected a goddamn thing:
Next-generation firewall
AI-powered security
'MACHINE LEARNING'
'Prevent lateral spread'
enterprise defense suite with threat protection and threat detection capabilities designed to identify and stop attacks
AV software that was advertised to 'Flag malicious phishing emails and scam websites'
'Defend against ransomware and other online dangers'
'Block dangerous websites that can steal personal data'
the cloud-based filtering service that protects your organization against spam, malware, and other email threats
And the company that we pay a huge sum of money to, which 'delivers threat detection, incident response, and compliance management in one unified platform', didn't make a peep. But, we are up to the standards of quite a few acronyms.
It's all a useless shitshow. And plenty of productivity-hurting false positives happen all the time.
"Penetrate and Patch" is supposedly dumb. But what do we practically do with that? We've seen in the last decade or so a lot of long-lived software everyone thought was secure get caught with massive security bugs. Well, once some software you depend on has infact been found to have a bug, what's there to do but patch it? If some software has never had a bug found in it, does that actually mean that it's secure, or just that no skilled hackers have ever really looked hard at it?
Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.
"Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though. It's not very clear how you'd do that, and ensure none of them were ever tampered with, or be able to do development involving frequently producing new binaries, or figure out how to make sure no malicious code ever took advantage of whatever mechanism you want to use to make app development not terrible. And everyone already gripes about how locked-down iOS devices are, wouldn't this mean making everything at least that locked down or more?
As an old I strongly object to the corruption of the terms "hacking" and "hacker" in the diatribe following this heading. I'm a fan of hacker culture, in the old sense, and encourage our developers to adopt a hacker mindset when approaching the problems they're trying to solve. Hacking is cool.
That’s like saying “Why don’t they just design locks that are unpickable?”
They’ve been working on that, for a while. But you need to know what you’re protecting against. Anyone who watches The Lock Picking Lawyer knows about the swaths of new locks vulnerable to comb attacks - a simple attack that had been solved for almost a hundred years but somehow major lock manufacturers forgot about.
You can’t build something safe without considering potential vulnerabilities, that’s just a frustratingly naive thing to say.
"Default Deny" was, for a while, called "App Store". However, the app store vendors have done much better at keeping out things for competitive reasons than at keeping out things for security reasons.
Sure, but how does one get the knowledge on how to secure systems? Half the job of a security engineer is thinking like an attacker and trying to poke holes in it. Key mitigations like ASLR and stack canaries are so effective because they specifically block off key resources and techniques that attackers use. It would be downright impossible to invent these mitigations (or even meaningfully understand them) if you did not already have a firm grasp on memory corruption and ROP. I'm not sure it's an argument I actually care to defend, but I do honestly believe that you can't be a strong security engineer if you don't have a grasp on the techniques your adversaries use.
#1: Default permit: people don't like to spend energy, especially not upfront. Integrating "permit by default" systems is much faster than setting them up with proper authentication, authorization, and access rights. Default permit just works, starts quickly, and runs fast.
#2: Enumerating badness: you mean, like how we name every single strain of viruses? So now we enumerate computer badness too.
#3: Penetrate and patch: very similar to how our laws work, I think. There are people who create injustices, and later the legal code is updated to handle that. Again, reactive, like #1.
#4: Hacking is cool - well, other criminals are cool too, like pirates and mafiosi, and so on. People are drawn to power.
#5: Educating users: someone has to, don't they, if they haven't learnt the thing by themselves? You can't just turn away everyone who is careless if you need them.
#6: Action is Better Than Inaction: This one, I think, imitates business. There's a lot of ways to make money in business, and being there early is one of them.
That said, I really enjoyed the article. Permit by default is especially dumb; it was really funny when Mongo installed itself with no password, listening on a public IP on the default port. And how long it took them to fix that. And how it didn't burn their public goodwill! So maybe these things are not really dumb after all?
Cool, so a pen test?
> One of the best ways to discourage hacking on the Internet is to ... pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.
...
Most of these are well thought out and still relevant 17 years later. #4 -- particularly the "don't learn offensive security skills as a defender" idea -- was dumb in 2005, and it's dumb now. It's also, unsurprisingly, not advice the author has himself followed.
https://www.nemid.nu/dk-da/om-nemid/historien_om_nemid
https://www.borger.dk/internet-og-sikkerhed/mitid
full disclosure - I worked on the JavaScript implementation of NemID. My problems with it are not the implementation, but the whole concept.
As a sysadmin, I took this approach as well. On the local machine, the server(s) would log normally. But when I set up centralized logging, I set up a list of log entries that wouldn't normally interest me day-to-day. The server would only send to the central logging server things that weren't on this list. What was left were usually problems that I needed to pay attention to, and they got fixed faster.
The rest of the uninteresting log entries would just be audited from time to time.
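Roughly, the idea looks like the sketch below. The "boring" patterns and the central log host are made up for illustration; the real list grows out of whatever your servers actually spew day-to-day.

```python
import re
import socket
import sys

# Hypothetical patterns for log lines that are routine and not worth
# forwarding; everything NOT matching gets shipped to the central host.
BORING_PATTERNS = [
    re.compile(r"session opened for user \w+ by"),
    re.compile(r"CRON\[\d+\]: \(root\) CMD"),
    re.compile(r"dhclient.*DHCPACK"),
]

CENTRAL_LOG_HOST = ("loghost.example.internal", 514)  # placeholder address

def forward(line: str) -> None:
    """Ship one interesting line to the central syslog host over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("utf-8", errors="replace"), CENTRAL_LOG_HOST)

def filter_logs(lines) -> None:
    for line in lines:
        if any(p.search(line) for p in BORING_PATTERNS):
            continue      # known-boring: stays in the local log only
        forward(line)     # everything else deserves a human's attention

if __name__ == "__main__":
    filter_logs(sys.stdin)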
On the matter of security, every user that logs in on a daily basis gets logged with their IP address. Anytime a user logged in from a different IP, it would get logged to the central log server and I would be notified. Most of the time it was harmless, but there were enough times I would find a compromised account in a sea of normal day-to-day login activity.
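The new-IP check is just as small. A rough sketch, with the state file and the notify hook as placeholders:

```python
import json

STATE_FILE = "known_ips.json"   # hypothetical local state: user -> seen IPs

def load_state() -> dict:
    try:
        with open(STATE_FILE) as f:
            return {user: set(ips) for user, ips in json.load(f).items()}
    except FileNotFoundError:
        return {}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump({user: sorted(ips) for user, ips in state.items()}, f)

def check_login(user: str, ip: str, state: dict, notify) -> None:
    """Flag the first time a user shows up from an address we haven't seen."""
    seen = state.setdefault(user, set())
    if ip not in seen:
        notify(f"{user} logged in from new address {ip}")
        seen.add(ip)

if __name__ == "__main__":
    state = load_state()
    # Example: pretend these (user, ip) pairs were parsed from auth logs;
    # notify could be mail, a pager, whatever gets your attention.
    for user, ip in [("alice", "10.0.0.5"), ("alice", "203.0.113.7")]:
        check_login(user, ip, state, notify=print)
    save_state(state)
```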
When your logs are full of normal things, it's easy to miss the important details.
e.g.
> Let me put it to you in different terms: if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years?
I agree with the point that "Penetrate and Patch" shouldn't be the primary strategy, but the author seems to write it off entirely with a viewpoint like "you should just write software and build systems that don't have security bugs". Well yes, of course that would be nice, but that's not feasible. And some software is much more difficult to get right than other kinds.
"Penetrate and Patch" is a useful piece of security in that (a) it can catch what slips through the cracks, (b) it provides a sort of incentive mechanism to get it right in the first place, and (c) it simply isn't possible to build bug-free systems.
The author cites "Penetrate and Patch" finding bugs every month as evidence that it's bad, but isn't it the opposite? You cannot be bug free, so any incremental progress and fixes are in fact good.
All that said, I do agree that all of this starts with secure by design. "Penetrate and Patch" isn't a good primary strategy and cannot replace Doing It Right. But I think it complements it well.
I would be interested to hear the author's thoughts on what has changed in the 18+ years since it was written.
That didn't age well. In an era of growing corruption in government and business alike, hacking becomes an important way for people to actually learn anything about their overlords' shady deals.
/enterprise
Great point, but the emphasis on system administration instead of the broken nature of operating systems causes the point to be missed.
This has aged poorly; nowadays, the most notable attacks are conducted by state actors (e.g., Russia and China) or for-profit criminal groups (e.g., ransomware) rather than lone hackers doing it for fun.
That's, like, entirely unrelated. Black hats are motivated by monetary gains, not scout badges. The proliferation of the internet made "for fun" hackers a minority and an irrelevant factor (or a benefit, since they might actually report a bug instead of sowing mayhem) when it comes to security.
But of course, the dumbest idea in computer security is that it always comes last on the budget list.
An odd suggestion in an otherwise relatively uncontroversial article. It implicitly trains your users in a bunch of unpleasant things:
* clicking on some URL in an email, typing your password into whatever webpage pops up, downloading the blob it serves you and opening it (after clicking through the browser's "this was downloaded from the internet, are you sure?" warning) is a perfectly normal and legitimate part of the working day
* one needs to find ways to obfuscate documents of types that aren't on the IT whitelist so one can send them to one's colleagues so they can do their jobs (and no, the corporate whitelists never capture everything people urgently need to share in order to do their jobs)
* since everyone now does that habitually, receiving an automangled email with a link to an attachment which has its actual payload contained in several layers of archive obfuscation wrapper is perfectly normal because that's just what you have to do to share stuff with your colleagues now
These could, of course, be mitigated by suitably educating users, but since the practice is advocated in a section about user education never working, that is unlikely to happen.
How can you engineer a good lock without investigating all the ways it can be bypassed by the Lock Picking Lawyer?
The first two points are alright, then it just veers off the rails
It's the same as any other project in life: you track mistakes and address them.
1) Offer solutions not process/procedure:
Devs want to make secure systems, but they have VERY LIMITED TIME. Security is always #1 in the bullet points of a priorities presentation, and always a distant priority for the boots on the ground doing features and keeping shit running.
What I've noticed is that the security team doesn't want to be responsible for cleanup or doing lots of work or engineering. They want to make presentations for upper management, pick some enterprise partners to impose on the orgs, and kick back in their offices. Most know little about cryptography or major incidents. If a great security practice like "sync ssh keys" requires a bit of legwork, they don't want to do it.
They'd rather load down the devs. They'd rather come in and review the architecture than provide drop-in solutions. If something needs customization to interface with SSO or to get credentials, they drop the integration in the devs' laps. Who's supposed to be the experts here? The security team should own whatever craptastic enterprisey shit they select, and ALSO be responsible for making it useful to the dev org.
The biggest example of this is the desire for "minimum permission". Take AWS for example with its explosive number of permissions, old and new permissions models, and very complicated webs of "do I need this permission" and "what permission does this error message mean I'm missing". And ye gods, the dumb magic numbers in the JSON, but anyway. If the security team wants AWS roles with "minimum viable permission" THEY need to be experts in the permission model and craft these VERY COMPLICATED permission sets FOR THE DEVS. And if the Devs need more, they need to very quickly provide (say < 1/2 day) new permissions in case some new S3 bucket is needed or some new AWS service is needed. But security teams don't want to do such gruntwork.
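For a sense of scale, here is roughly the shape of a scoped-down policy being attached to a role. Every name below is invented, it assumes boto3 with credentials already configured, and a real policy for a real service runs much longer than this:

```python
import json
import boto3  # assumes AWS credentials are already configured

# Illustrative least-privilege policy: read/write to one specific bucket,
# nothing else. The bucket and role names here are made up.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-app-bucket",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="example-app-role",
    PolicyName="example-app-s3-access",
    PolicyDocument=json.dumps(policy),
)
```

Multiply that by every service a team touches, keep it current as new buckets and services appear, and that's the gruntwork I'm saying the security team should own.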
2) recognize that automated infrastructure is the rule, not the exception, aka the devs are not the enemy
It took sooo long for ssh keys to become prevalent enough in development that people weren't ssh'ing in with passwords. Like, decades. This practice represented a big leap in administration productivity and probably was more secure.
And you could automate on top of it in shell scripts, not leave passwords in .history, lots of good things.
And the security industry wants you to undo it. It wants TOTP codes from your phone hand-typed, wants a web page to pop up to grant temporary credentials, pretends you know how long your process will run so those temporary credentials won't expire, and if they do, what, you're supposed to manually re-authenticate?
Security at my last job wanted an ssh replacement to be used (the enterprise security industry is waging war on ssh/sshd) that if I used it from the command line IT POPPED UP A BROWSER PAGE. And no way to automate this for any task.
In general, security teams seem obsessed with making devs' lives as hard as possible. Are most leaks via dev channels? In my experience the BIG leaks are "County Password Inspector", phishing, and disgruntled/angry employees selling access. Well, and credentials checked into GitHub. Most places I've worked at have involved this steady slide into less and less usability for the devs, at GREAT cost to productivity, for questionable payoff in actual platform security.
Meanwhile, no joke, the SSL configuration on internal password reset sites was using such poor algorithms that Chrome was refusing to display them. GitHub repos were open to the public that shouldn't have been. 8-character password limits with proscribed character usage.
Nuts.
> 6) Action is Better Than Inaction
I’m a fan of the
> don’t just do something, stand there!
This person has a fundamentally mistaken idea of how airliners and, therefore, security systems as a whole work. Yes, airliners have the occasional problem. That's why they have:
* checklists and inspections, to catch them beforehand
* communications, to catch them while they're evolving
* redundancies, to turn ramified problems nobody caught into annoyances instead of disasters
No matter how some people whine and moan, "Just Be Perfect" fails to be an actionable plan.
Also: Hackers will be cool as long as DRM and planned obsolescence/designed-in insecurities exist.
EDIT: I deleted most of my post because I found it was repeated up and down the comments which I am so relieved to see. I kept my post because I want newcomers to hear as many voices in objection to OP's outdated essay as possible.
You can trust large organizations to secure your device.
(especially for orgs that give themselves, advertisers or apps more access to the device than you have)
And yet, just yesterday I've seen a TV ad explaining how to not get phished out of your money through your banking app.
I think it's a running theme in this document that the author displays a severe lack of understanding of how hard security becomes as soon as you let anyone do anything online.
I guess the author of this post is no longer with us, because they must have had a heart attack when npm and the like rose to prominence.
The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=28068725 - Aug 2021 (21 comments)
The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=14369342 - May 2017 (6 comments)
The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=12483067 - Sept 2016 (11 comments)
The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=522900 - March 2009 (20 comments)
The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=167850 - April 2008 (1 comment)
The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=35811 - July 2007 (2 comments)