The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=28068725 - Aug 2021 (21 comments)

The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=14369342 - May 2017 (6 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=12483067 - Sept 2016 (11 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=522900 - March 2009 (20 comments)

The Six Dumbest Ideas in Computer Security - https://news.ycombinator.com/item?id=167850 - April 2008 (1 comment)

The Six Dumbest Ideas in Computer Security (2005) - https://news.ycombinator.com/item?id=35811 - July 2007 (2 comments)

A reminder that a big part of the subtext of this piece is a reactionary movement against vulnerability research that Ranum was at the vanguard of. Along with Schneier, Ranum spent a lot of energy railing against people who found and exploited vulnerabilities (as you can see from items #2, #3, and #4). It hasn't aged well.

I'm not sure there's anything true on this list that is, in 2023, interesting; maybe you could argue they were in 2005.

The irony is, Ranum went on to work at Tenable, which is itself a firm that violates most of these tenets.

This describes the security industry as a whole.

We had a user click an email and get phished.

We tried training the users with tools like KnowBe4, and banners above the emails saying things like THIS IS AN OUTSIDE EMAIL, BE VERY CAREFUL WHEN CLICKING LINKS. It didn't help.

The email was a very generic looking "Kindly view the attached invoice"

The attached invoice was a PDF file

The link went to some suspicious looking domain

The page the link brought up was a shoddy impersonation of a OneDrive login

In just minutes, the user's machine was infected and it had emailed itself to all of their Outlook contacts...

So this means nothing in this list detected a goddamn thing:

    Next-generation firewall
    AI-powered security
    'Prevent lateral spread'
    enterprise defense suite with threat protection and threat detection capabilities designed to identify and stop attacks
    AV software that was advertised to 'Flag malicious phishing emails and scam websites'
    'Defend against ransomware and other online dangers'
    'Block dangerous websites that can steal personal data'
    the cloud-based filtering service that protects your organization against spam, malware, and other email threats
And the company that we pay a huge sum of money to 'delivers threat detection, incident response, and compliance management in one unified platform' didn't make a peep.

But, we are up to the standards of quite a few acronyms.

It's all a useless shitshow. And plenty of productivity-hurting false positives happen all the time.

I wonder how well we all think this article has aged?

"Penetrate and Patch" is supposedly dumb. But what do we practically do with that? We've seen in the last decade or so a lot of long-lived software everyone thought was secure get caught with massive security bugs. Well, once some software you depend on has in fact been found to have a bug, what's there to do but patch it? If some software has never had a bug found in it, does that actually mean it's secure, or just that no skilled hackers have ever really looked hard at it?

Also web browsers face a constant stream of security issues. But so what? What are we supposed to do instead? Any simpler version doesn't have the features we demand, so you're stuck in a boring corner of the world.

"Default Permit" - nice idea in most cases. I've never heard of a computer that's actually capable of only letting your most commonly used apps run though. It's not very clear how you'd do that, and ensure none of them were ever tampered with, or be able to do development involving frequently producing new binaries, or figure out how to make sure no malicious code ever took advantage of whatever mechanism you want to use to make app development not terrible. And everyone already gripes about how locked-down iOS devices are, wouldn't this mean making everything at least that locked down or more?
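For what it's worth, the usual way application allowlisting ("default deny" for executables) is implemented is hash-based: only binaries whose cryptographic hashes appear on an approved list may run. A minimal sketch in Python; the allowlist contents here are hypothetical (the single entry is the SHA-256 of an empty file, purely for illustration):

```python
# Minimal sketch of a hash-based application allowlist ("default deny").
# The allowlist entry below is hypothetical: it is the SHA-256 of an
# empty file, used only so the example is self-contained.
import hashlib

ALLOWED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_allowed(path: str) -> bool:
    """Return True only if the file's SHA-256 digest is on the allowlist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large binaries don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in ALLOWED_SHA256
```

This also illustrates the maintenance burden the comment describes: every rebuilt development binary changes its hash and has to be re-approved, which is exactly why default deny is painful for people who compile code all day.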

> #4) Hacking is Cool

As an old I strongly object to the corruption of the terms "hacking" and "hacker" in the diatribe following this heading. I'm a fan of hacker culture, in the old sense, and encourage our developers to adopt a hacker mindset when approaching the problems they're trying to solve. Hacking is cool.

> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

That’s like saying “Why don’t they just design locks that are unpickable?”

They’ve been working on that, for a while. But you need to know what you’re protecting against. Anyone who watches The Lock Picking Lawyer knows about the swaths of new locks vulnerable to comb attacks - a simple attack that had been solved for almost a hundred years but somehow major lock manufacturers forgot about.

You can’t build something safe without considering potential vulnerabilities, that’s just a frustratingly naive thing to say.

Back in 2005 the idea that you shouldn't run every bit of executable code sent to you was drilled into people. Nowadays you can't use a commercial/institutional website without doing the modern equivalent of opening random email attachments.

"Default Deny" was, for a while, called "App Store". However, the app store vendors have done much better at keeping out things for competitive reasons than at keeping out things for security reasons.

Looking back on that era, the hate towards hackers feels really misplaced. Yeah, at the time it was more local and more dominated by people doing it for the lolz, but we kinda owe them a debt of gratitude. If they hadn't gotten everyone to stop being lazy about security, we'd be in a very different place now, surrounded by rogue states and agencies launching hyper-sophisticated attacks on infrastructure and data. That was also the era that trained the current generation of cybersecurity experts.

> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

Sure, but how does one get the knowledge on how to secure systems? Half the job of a security engineer is thinking like an attacker and trying to poke holes in it. Key mitigations like ASLR and stack canaries are so effective because they specifically block off key resources and techniques that attackers use. It would be downright impossible to invent these mitigations (or even meaningfully understand them) if you did not already have a firm grasp on memory corruption and ROP. I'm not sure it's an argument I actually care to defend, but I do honestly believe that you can't be a strong security engineer if you don't have a grasp on the techniques your adversaries use.

Not convinced these are the dumbest (none of them is quite as dumb as requiring special characters in passwords, for example, and I'm not sure the fourth is dumb at all), or that they're six ideas. The first two are the same, and the third one is a special case of the same thing.

It's super interesting to read this list as someone young enough that the first time I was ever prompted to consider computer security was in a college course almost a decade after this was written. Although different terminology was used, some of the ideas, like "Default Permit" and "Enumerating Badness", were so heavily discouraged when I first started studying that it's almost hard to imagine them being considered good practice so recently before (although even today they're common enough that it's still worth calling out, so maybe this wasn't uncommon knowledge at the time either).

On the other hand, the next two ideas, "penetrate and patch" along with "hacking is cool", certainly don't seem to be as reviled as the author would like, and I don't think the latter was a dead idea within a decade like they suggested. Trying to interpret them charitably, I could believe the intention here was to decry the lack of proper threat modeling done in advance at the time (which is still a real issue today). Reading it at face value, though, it sounds like the idea that if you think enough in advance and just "don't write bugs", your product will be 100% secure and never need any patching, which I don't think is a good take.

I'd counter that it's essentially the same as the fallacy they mention later, "We don't need host security, we have a good firewall"; proper design up front is a good "firewall" to stop bugs from coming in, but it's not a substitute for having proper mitigations for when they do inevitably occur.

What I think happened is that with computing, humanity began to build a new world, a Different World that's not like the other, old world outside. But since humans were building it, it became just like that. It has the same buildup, the same issues, the same dumbness as the original, real world.

#1: Default permit: people don't like to spend energy, especially not upfront. Integrating "permit by default" systems is much faster than setting them up with proper authentication, authorization, and access rights. Default permit just works, starts quickly, and runs fast.

#2: Enumerating badness: you mean, like how we name every single strain of viruses? So now we enumerate computer badness too.

#3: Penetrate and patch: very similar to how our laws work, I think. There are people who create injustices, and later the legal code is upgraded to handle that. Again, reactive, like in #1.

#4: Hacking is cool - well, other criminals are cool too, like pirates and mafiosi, and so on. People are drawn to power.

#5: Educating users: someone has to, don't they, if they haven't learnt the thing by themselves? You can't make everyone who's dumb go away if you need them.

#6: Action is Better Than Inaction: This one, I think, imitates business. There's a lot of ways to make money in business, and being there early is one of them.

That said, I really enjoyed the article. Permit by default is especially dumb; it was really funny when mongo installed itself with no password, listening on a public IP on the default port. And how long it took them to patch that. And how that hasn't burned their public goodwill! So maybe these things are not really dumb after all?

> A few years ago I worked on analyzing a website's security posture as part of an E-banking security project.

Cool, so a pen test?

> One of the best ways to discourage hacking on the Internet is to ... pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.


Most of these are well thought out and still relevant 17 years later. #4 -- particularly the "don't learn offensive security skills as a defender" idea -- was dumb in 2005, and it's dumb now. It's also, unsurprisingly, not advice the author has himself followed.

I feel let down as a Dane that neither NemID nor MitID deserves a mention.



full disclosure - I worked on the JavaScript implementation of NemID. My problems with it are not the implementation, but the whole concept.

> but the second version used what I termed "Artificial Ignorance" - a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting. This approach worked amazingly well, and detected a number of very interesting operational conditions and errors that it simply never would have occurred to me to look for.

As a sysadmin, I took this approach as well. On the local machine, the server(s) would log normally. But when I set up centralized logging, I set up a list of log entries that wouldn't normally interest me day-to-day. The server would only send to the central logging server things that weren't on this list. What was left were usually problems that I would need to pay attention to, and they got fixed faster.

The rest of the uninteresting log entries would just be audited from time to time.

On the matter of security, every user that logs in on a daily basis gets logged with their IP address. Anytime a user logged in from a different IP, it would get logged to the central log server and I would be notified. Most of the time it was harmless, but there were enough times I would find a compromised account in a sea of normal day-to-day login activity.

When your logs are full of normal things in it, it's easy to miss important details.
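The "artificial ignorance" approach described above fits in a few lines of code: discard entries matching patterns you already know are uninteresting, and whatever is left deserves a look. A sketch in Python; the patterns and sample log lines are hypothetical, and a real known-boring list would be built up gradually over time:

```python
# Sketch of "artificial ignorance" log review: throw away entries that
# match known-uninteresting patterns; whatever survives is worth reading.
# The patterns and sample log lines below are hypothetical.
import re

KNOWN_BORING = [
    re.compile(r"session opened for user \w+"),
    re.compile(r"CRON\[\d+\]"),
    re.compile(r"Connection closed by [\d.]+"),
]

def interesting(lines):
    """Yield only the lines that match none of the known-boring patterns."""
    for line in lines:
        if not any(p.search(line) for p in KNOWN_BORING):
            yield line

sample = [
    "sshd[210]: session opened for user alice",
    "CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)",
    "kernel: I/O error on /dev/sda, sector 12345",
]
print(list(interesting(sample)))  # only the disk error survives
```

The same filter structure works for the login example above: treat "login from a previously seen IP for this user" as the boring pattern, and alert on everything else.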

I agreed with much of the article and points made. Maybe I'm missing something (if so, would love to learn!) but I felt that the "Penetrate and Patch" section was a little naive.


> Let me put it to you in different terms: if "Penetrate and Patch" was effective, we would have run out of security bugs in Internet Explorer by now. What has it been? 2 or 3 a month for 10 years?

I agree with the point that "Penetrate and Patch" shouldn't be the primary strategy, but the author seems to write it off entirely with a viewpoint like "you should just write software and build systems that don't have security bugs". Well yes, of course that would be nice, but that's not feasible. And some software is much more difficult to get right than other kinds.

"Penetrate and Patch" is a useful piece of security in that (a) it can catch what slips through the cracks, (b) it provides a sort of incentive mechanism to get it right in the first place, and (c) since it simply isn't possible to build bug-free systems, you need some way to deal with the bugs that ship anyway.

The author cites "Penetrate and Patch" finding bugs every month as evidence that it's bad, but isn't it the opposite? You cannot be bug-free, so any incremental progress and fixes are in fact good.

All that said, I do agree that all of this starts with secure by design. "Penetrate and Patch" isn't a good primary strategy and cannot replace Doing It Right. But I think it complements it well.

According to Slashdot this article was online since at least September 2005.

I would be interested to hear the author's thoughts on what has changed in the 18+ years since it was written.

> My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.

That didn't age well. In an era of growing corruption in government and business alike, hacking becomes an important way for people to actually learn anything about their overlords' shady deals.

How about "our users can't tell the difference between a DOS attack and us having screwed something up" plus "the people that want to sue us for sucking are at war with the people that want us to look successful to get a promotion for hiring good vendors" etc.


> The real question to ask is not "can we educate our users to be better at security?" it is "why do we need to educate our users at all?"

Great point, but the emphasis on system administration instead of the broken nature of operating systems causes the point to be missed.

> #4 ... "Hacking is Cool" is a really dumb idea.

This has aged poorly; nowadays, the most notable attacks are conducted by state actors (e.g., Russia and China) or for-profit criminal groups (e.g., ransomware) rather than lone hackers doing it for fun.

I guess this man's internet heaven is filled by lobotomized users who can only exchange emails with a list of approved correspondents and browse only whitelisted websites. He, of course, gets to approve the lists.

> One of the best ways to get rid of cockroaches in your kitchen is to scatter bread-crumbs under the stove, right? Wrong! That's a dumb idea. One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.

That's, like, entirely unrelated. Black hats are motivated by monetary gain, not scout badges. The proliferation of the internet made "for fun" hackers a minority and an irrelevant factor (or a benefit, as they might actually report a bug instead of sowing mayhem) when it comes to security.

My favorite dumbest idea: autorun.

But of course, the dumbest idea in computer security is that it always comes last on the budget list.

Penetration testing is probably the dumbest. You will not be sure if it is a honeypot or a real security vulnerability.

> A better idea might be to simply quarantine all attachments as they come into the enterprise, delete all the executables outright, and store the few file types you decide are acceptable on a staging server where users can log in with an SSL-enabled browser

An odd suggestion in an otherwise relatively uncontroversial article. It implicitly trains your users in a bunch of unpleasant things:

* clicking on some URL in an email, typing your password into whatever webpage pops up, downloading the blob it serves you and opening it (after clicking through the browser's "this was downloaded from the internet, are you sure?" warning) is a perfectly normal and legitimate part of the working day

* one needs to find ways to obfuscate documents of types that aren't on the IT whitelist so one can send them to one's colleagues so they can do their jobs (and no, the corporate whitelists never capture everything people urgently need to share in order to do their jobs)

* since everyone now does that habitually, receiving an automangled email with a link to an attachment which has its actual payload contained in several layers of archive obfuscation wrapper is perfectly normal because that's just what you have to do to share stuff with your colleagues now

These could, of course, be mitigated by suitably educating users, but since the practice is advocated in a section about user education never working, that is unlikely to happen.

> Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

How can you engineer a good lock without investigating all the ways it can be bypassed by the Lock Picking Lawyer?

That hasn't aged well, at all, lol.

The first two points are alright, then it just veers off the rails

I like some of this, but "enumerating badness is a bad idea" is just wrong. Quantifying errors is an important part of tracking progress in software work.

It's the same as any other project in life: you track mistakes and address them.

User education is not dumb. Services that send test phishing emails and check that people mark them as such are a good idea. It gets people used to receiving suspect emails and dealing with them.
Sorry guy who wrote this article in 2005, hacking is definitely cool.

The clown who wrote this thinks the word hacking means cracking. That automatically negates everything he says. I would not take anything in this article seriously.

I think this misses two or three big points:

1) Offer solutions not process/procedure:

Devs want to make secure systems, but they have VERY LIMITED TIME. Security is always #1 in the bullet points of a presentation of priorities, and always a distant priority for the boots on the ground dealing with features and keeping shit running.

What I've noticed is that the security team doesn't want to be responsible for cleanup or for doing lots of work or engineering. They want to make presentations for upper management, pick some enterprise partners to impose on the orgs, and kick back in offices. Most know little about cryptography or major incidents. If a great security practice like "sync ssh keys" requires a bit of legwork, they don't want to do it.

They'd rather load down the devs. They'd rather come in and review the architecture than provide drop-in solutions. If something needs customization to interface with SSO or to get credentials, they drop the integration in the devs' laps. Who's supposed to be the expert here? The security team should own whatever craptastic enterprisey shit they select, and ALSO be responsible for making it useful to the dev org.

The biggest example of this is the desire for "minimum permission". Take AWS for example with its explosive number of permissions, old and new permissions models, and very complicated webs of "do I need this permission" and "what permission does this error message mean I'm missing". And ye gods, the dumb magic numbers in the JSON, but anyway. If the security team wants AWS roles with "minimum viable permission" THEY need to be experts in the permission model and craft these VERY COMPLICATED permission sets FOR THE DEVS. And if the Devs need more, they need to very quickly provide (say < 1/2 day) new permissions in case some new S3 bucket is needed or some new AWS service is needed. But security teams don't want to do such gruntwork.
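For concreteness, here is roughly what a least-privilege ("minimum viable permission") policy looks like for a service that only needs to read one S3 bucket. The bucket name is hypothetical, and a real policy has to enumerate exactly the actions the application calls, which is precisely the legwork being described:

```python
# A sketch of a least-privilege IAM policy for a service that only reads
# one S3 bucket. The bucket name "example-app-bucket" is hypothetical;
# real policies must list exactly the actions the application calls.
import json

read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Listing applies to the bucket itself...
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-app-bucket"],
        },
        {
            # ...while object reads apply to the objects inside it.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-app-bucket/*"],
        },
    ],
}

# Policies are attached as JSON documents, e.g. via IaC tooling or the CLI.
print(json.dumps(read_only_s3_policy, indent=2))
```

Even this toy example shows the trap: `ListBucket` and `GetObject` take different resource ARNs, and getting that split wrong produces exactly the opaque "what permission does this error mean I'm missing" loop the comment complains about.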

2) recognize that automated infrastructure is the rule, not the exception, aka the devs are not the enemy

It took sooo long (like, decades) for ssh keys to become prevalent enough in development that people weren't ssh'ing in with passwords. This practice represented a big leap in administration productivity and probably was more secure.

And you could automate on top of it in shell scripts, not leave passwords in .history, lots of good things.

And the security industry wants you to undo it. It wants TOTP codes from your phone hand-typed, wants a web page to pop up to grant temporary credentials, pretends you know how long your process will run so those temporary credentials won't expire, and if they do expire, what, you're supposed to manually re-authenticate?

Security at my last job wanted an ssh replacement to be used (the enterprise security industry is waging war on ssh/sshd) that, if I used it from the command line, POPPED UP A BROWSER PAGE. And there was no way to automate this for any task.

In general, security teams seem obsessed with making devs' lives as hard as possible. Are most leaks via dev channels? In my experience the BIG leaks are "County Password Inspector", phishing, and disgruntled/angry employees selling access. Well, and credentials checked into GitHub. Most places I've worked at have seen this steady slide into less and less usability for the devs, at GREAT cost to productivity, for questionable payoff in actual platform security.

Meanwhile, no joke, SSL protocols on internal password-reset sites were using such poor algorithms that Chrome refused to display the sites. GitHub repos were open to the public that shouldn't have been. 8-character-limit passwords with prescribed character usage.


Isn't the author Enumerating Badness in that article?

> 6) Action is Better Than Inaction

I’m a fan of the saying:

> don’t just do something, stand there!

> "We can't stop the occasional problem" - yes, you can. Would you travel on commercial airliners if you thought that the aviation industry took this approach with your life? I didn't think so.

This person has a fundamentally mistaken idea of how airliners and, therefore, security systems as a whole work. Yes, airliners have the occasional problem. That's why they have:

* checklists and inspections, to catch them beforehand

* communications, to catch them while they're evolving

* redundancies, to turn ramified problems nobody caught into annoyances instead of disasters

No matter how some people whine and moan, "Just Be Perfect" fails to be an actionable plan.

Also: Hackers will be cool as long as DRM and planned obsolescence/designed-in insecurities exist.

No billion-dollar company is anywhere near as glib as the author makes them out to be. Maybe my experience varies, but the European companies I've worked with have strict cybersecurity liability, and they take every aspect of security seriously and do not just pat themselves on the back smugly, as OP portrays. Maybe this was the case in the 90s, but it sure is not the case today.

EDIT: I deleted most of my post because I found it was repeated up and down the comments which I am so relieved to see. I kept my post because I want newcomers to hear as many voices in objection to OP's outdated essay as possible.

If I could come up with one dumb idea it would be something like:

You can trust a large organization to secure your device.

(especially for orgs that give themselves, advertisers or apps more access to the device than you have)

> My prediction is that in 10 years users that need education will be out of the high-tech workforce entirely, or will be self-training at home in order to stay competitive in the job market. My guess is that this will extend to knowing not to open weird attachments from strangers.

And yet, just yesterday I've seen a TV ad explaining how to not get phished out of your money through your banking app.

I think it's a running theme in this document that the author displays a severe lack of understanding of how hard security becomes as soon as you let anyone do anything online.

> #1) Default Permit

I guess the author of this post is no longer with us, because they got a heart attack when npm and similar rose to prominence.