Dumb Ideas in Computer Security (2005) (ranum.com)
114 points by corv on Jan 2, 2015 | 56 comments


  The most recognizable form in which the "Default Permit"
  dumb idea manifests itself is in firewall rules. [...] 
  The opposite of "Default Permit" is "Default Deny" and 
  it is a really good idea.
And now you know why everything from version control systems to video conferencing software tunnels things over HTTP!


I think there's a significant difference here between inbound and outbound firewalls.

Edit: I think about it like getting into a stadium: only a few entrances have ticket takers, but there are lots of doors that are exit-only.


There is an extent to which that's true, but it's still subject to the same consequences. If you block all "incoming" traffic then developers react to it. Apps that need incoming data just maintain a persistent connection to a third party server which passes the incoming data over the open connection.

And the third party server doesn't have any magic logic that couldn't be built into the endpoints, it's effectively just a router to work around the restrictive firewall. But now you have a third party in a position to spy on you or impose censorship.
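To make that concrete, here is a minimal sketch of the workaround in Python; the relay host, registration message and handler are all hypothetical, but the shape is what matters: the app keeps one long-lived outbound connection open (which an inbound-deny firewall never objects to) and the third-party server pushes the logically "incoming" data back down it.

  import socket
  import ssl

  RELAY = ("relay.example.com", 443)   # hypothetical third-party relay

  def handle(data: bytes) -> None:
      # Placeholder for whatever the app would do with pushed data.
      print("received:", data)

  ctx = ssl.create_default_context()
  with socket.create_connection(RELAY) as raw:
      with ctx.wrap_socket(raw, server_hostname=RELAY[0]) as conn:
          # One outbound connection, which a "block all incoming" firewall allows...
          conn.sendall(b"REGISTER device-1234\n")
          while True:
              # ...over which the relay pushes what is really inbound traffic.
              pushed = conn.recv(4096)
              if not pushed:
                  break
              handle(pushed)

Everything the relay forwards it can also read, log or drop, which is exactly the spying/censorship position described above.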


I agree with your observation, and I think the author was suggesting that whitelisting is a more viable solution than blacklisting when it's necessary to open a port.


Not the dumbest by any means.

What this idea seems to miss is that most of the value created in computing - not just among professional software developers but also among ordinary users - comes when people do things that the creators hadn't thought of. You could run every program in its own isolated space where it can only touch its own stuff, and this would eliminate a lot of vulnerabilities (we see this today on mobile). But as soon as you want to script your image resizes based on a spreadsheet or whatever, that model breaks down (or else you end up with everything tunneled over the few approved vectors; iPhone users Dropbox files just so that they can open them on the same phone in a different program). And worse, users end up utterly beholden to the few entities with "real" access.

It's very easy to create a secure computer, by turning it off. But that doesn't help anyone. In the long run, the cure of deny-by-default is worse than the disease.


I totally agree. For a long time I had this feeling that improving security is basically killing all fun and joy. When I was learning to code, there was no such thing as Data Execution Prevention on Windows, and you could even access and write to the memory of another process. That made for interesting and fun things: for instance, you could write programs that totally tweaked software that was poorly written or that you just didn't like. You could make computers do your bidding instead of being locked into the ways of thinking of the original software authors. Every single improvement in the field of computer security was, and is, slowly eroding this fun.

Back then I thought about it mostly as killing fun; nowadays I consider it killing flexibility. Computers are increasingly being locked down and dumbed down to prevent users from doing anything the authors don't want them to do. It's especially visible in mobile. I can understand the rationale behind it, that an average Joe needs protection from the heavily complicated magic box in front of him. But for those who understand and know how to use that magic, security means less joy, less flexibility, less power.


Indeed, every time I hear advocacy of "safer" languages, formal verification, heavily-constrained runtime environments, etc., it's a rather odd feeling; on one hand I think there are definite advantages (e.g. like anyone else advocating strong security, I don't want government backdoors), but on the other hand, I can't help but think that the strong security of these systems can often be used against the user. One of the most common examples of this is jailbreaking - which basically wouldn't exist at all if systems were built with strong proofs of security. There are also those who think the problem is at the policy level and strongly secure systems should be strongly advocated anyway, but since there has been very little in the way of laws protecting users' freedom to control the hardware and software they own (and instead the opposite has been happening), often the most practical way to freedom is insecurity. As that famous saying goes, "those who give up freedom for security deserve neither."

> I can understand the rationale behind it, that an average Joe needs protection from the heavily complicated magic box in front of him. But for those who understand and know how to use that magic, security means less joy, less flexibility, less power.

This might be an issue of societal attitudes - there seems to be much emphasis on systems being "easy to use" without any knowledge, which ultimately leads to systems in which the user has absolutely no control; after all, the easiest system to "use" is one that makes all the decisions for the user. And as users start making fewer decisions because of this automation, the "advanced features" that they would otherwise use get used less, leading to their eventual removal. The decreasing motivation of users to learn about and control their systems causes systems to allow less control (and consequently become easier to "use"), feeding back to decreased motivation of the users in a vicious cycle. The biggest concern is that the developers of the future are the users of today, so the increased "automation"/decreased flexibility feeds them too.

It's hard to find a solution to this, but I think it's one that will need to involve a huge shift in societal attitudes - from "I don't know how to do X but Y will do it for me so I don't care about learning more" to "I don't know X so I'll learn how, then find a way to use Y to do it better".


The problem is that learning how to do things is really expensive for many people. I think the fundamental problem isn't user-centered design but lopsided bifurcation in who the users are.

For a product like a recipe/grocery app, I genuinely don't care about the abstraction layers it uses below its UI. I just want my wife and me to plan our meals and do our grocery shopping in a time-efficient way. I don't want to run into situations where "it doesn't work" and I have to figure out why.

For a product like gulp.js, vim, or Google Chrome's layout engine, I really do care about those abstractions because I am trying to build things with them and I expect that I'll run into situations where "it doesn't work" and I have to figure out why. So give me the conceptual tools to do so. If that means source-diving without clear documentation about how to do that spelunking, fine. I'll procrastinate on it and be unhappy, but I'll do it. So starting from that perspective, any improvement on the fixing-things experience is a blessing rather than an irrelevance.

But some things necessarily straddle both. It would be very difficult to fund a hardware manufacturing supply chain for a phone without mass-market appeal. Yet the app ecosystem that produces much of the value of a smartphone requires some ability to do things the phone manufacturer did not anticipate, even if it is within a playpen.


Another issue is that modern personal computers make it massively easy to hide what they are doing.

The phone before me seems to be idle as best I can tell, but if I fire up a process viewer I see 20+ ongoing tasks.

This is in contrast to the mechanical devices of old, which only did the one task they were built to do and were very "loud" (not just audibly) about when they were doing it.

Heck, recently I found myself wondering about hooking up some kind of audio system to Wireshark and playing around with having various packet traffic produce various sounds. This was after reading about a guy who set up his phone to play certain noises to his hearing aid based on the characteristics of the wifi networks encountered when walking around town.
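For anyone who wants to play with that, here is a rough sketch of the idea (assuming scapy is installed and the script has capture privileges); it only maps destination ports to notional frequencies and prints them, but the same callback could feed a real audio library.

  from scapy.all import sniff, TCP, UDP  # requires scapy and capture privileges

  def sonify(pkt):
      # Derive a notional tone from the destination port; a real version would
      # hand the frequency to an audio library instead of printing it.
      if pkt.haslayer(TCP):
          proto, port = "tcp", pkt[TCP].dport
      elif pkt.haslayer(UDP):
          proto, port = "udp", pkt[UDP].dport
      else:
          return
      freq = 220 + (port % 1000)  # arbitrary port -> Hz mapping
      print(f"{proto} packet to port {port}: play {freq} Hz")

  # Sniff 100 packets and "play" each one.
  sniff(prn=sonify, count=100)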

Consider also that we suspect something is up with a car or similar because the steering wheel develops an odd rattle, or some unfamiliar noise is heard when doing certain things.

Never mind the old analog modem handshake where we could tell by experience when we got a bad connection, compared to the modern variant where we have to check some "dashboard" to tell if we are connected at all.


> And as users start making fewer decisions because of this automation, the "advanced features" that they would otherwise use get used less, leading to their eventual removal.

That much is probably inevitable. When they build a house they provide you a light switch that will cause the room to be illuminated and then put up drywall everywhere which makes it less easy for you to add new switches and outlets. What they don't do is fill the walls with cement and epoxy everything with a screw thread and then, when a light bulb burns out which is permanently affixed to the frame of the building, tell you to buy House 2.0.

It's one thing to not spend time adding a feature most people don't use, but which the owner can add later with some work. It's quite another thing to actively spend effort to prevent the owners from exercising control over their own devices.


Something like how it becomes harder and harder to jailbreak Android phones, while at the same time Chromebooks ship with a "developer switch"?

I find myself thinking of Doctorow's arguments about "(civil) war on computing"...


#1 I've never understood this as a concept. Why on earth would you first run an FTP server and then block the port it uses?

Sure, it helps if someone sneakily installs stuff, but that's the reason why so many things are nowadays tunneled via HTTP: because everything is always blocked.

The point about load balancer whitelisting was a good one.

#2 works only if you choose an iOS-style environment where a single entity, Apple in that case, decides what to run and what not to run.

Otherwise it falls into the "Cute rabbit" category. E.g. if users get a mail saying "Click here to see a cute rabbit!" they will click everything and bypass all security dialogs just so they can see the cute rabbit. And/or they will grow desensitised and click "Yes" on all dialogs. The old UAC dialog in Windows Vista was an excellent example of this: everyone just automatically clicked Yes because it popped up all the time.

#3 is just "Don't write buggy software". Yeah. We wouldn't if we were smart enough.


I would say you should probably never run an FTP server at all these days (perhaps in a chroot jail or a container, but seriously, why do you need it?). I've worked on website migrations where I've only opened up ports 22, 80 and 443, only to be told that the client has just signed a new contract with a 3rd party who requires FTP and MySQL to be open to their (usually disparate) range of IPs.

I usually try to educate the 3rd party on using something like SSH tunneling, and have, on occasion, sent them screenshotted docs on how to do so. This works much more effectively than preaching security at them. Make it easy and they usually follow.
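For the MySQL half, the doc usually boils down to a single port forward over the SSH port that is already open. A rough sketch (the hostname and account are made up, and key-based auth is assumed), wrapped in Python only to show the moving parts:

  import subprocess

  # Forward local port 3306 to MySQL on the server's loopback, over SSH.
  # Nothing new is exposed on the firewall; only the already-open port 22 is used.
  tunnel = subprocess.Popen([
      "ssh", "-N",                   # no remote command, forwarding only
      "-L", "3306:127.0.0.1:3306",   # local 3306 -> remote loopback 3306
      "deploy@db.example.com",       # hypothetical account and host
  ])

  # While the tunnel runs, the third party points their MySQL client at
  # 127.0.0.1:3306 as if the database were local.
  tunnel.terminate()

As for FTP, it is better replaced outright with SFTP, which already runs over that same SSH port.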

Regarding the "Cute rabbit" theory, I've heard this discussed as both a security design issue, and a UI design issue. I tend to think the solution to this needs to come from the UI side, but it's difficult as apparently we're driven by "punishment and reward" and a computer can't easily admonish you for doing the wrong thing. This feedback is probably required to prevent people clicking on everything remotely lagomorphic.

I suggest a mouse that gives you an electric shock every time you click on an identifiably spammy / malwary link.

You could even customise it to give you a shock when you break your HN noprocrast settings!


Yeah, blocking ports is lame. The worst type of IT guy you can meet is the "i-wont-open-port-53627-for-your-app-because-those-ports-are-dangerous" asshole.


I really don't understand how "3) Penetrate and Patch" is a bad idea. The argument is:

> Your software and systems should be secure by design and should have been designed with flaw-handling in mind

I wouldn't argue that systems that have been hacked were intentionally insecure by design; rather, the developers thought they were secure by design and were just wrong. It seems totally unrealistic to offer "just make it secure instead" as the solution, especially when machines are connected to the outside world.


I read it as "fixing the symptom" vs. "fixing the underlying problem". Suppose a remote attacker can trigger a buffer overflow by sending a malformed packet to your software. Your response might be to simply patch in a regex that checks for malformed packets, which fixes the immediate problem, but ignores possible deeper issues with memory management, remote access, etc.
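A contrived sketch of the difference (Python can't actually overflow a buffer, so the "blind trust" line stands in for the C bug): the symptom patch blacklists one known-bad payload, while the real fix validates the attacker-controlled length field that made the overflow possible in the first place.

  import re

  KNOWN_EXPLOIT = re.compile(rb"\x90{16,}")  # hypothetical signature of one observed payload

  def parse_symptom_patched(data: bytes) -> bytes:
      # "Penetrate and patch": reject the payload we have already seen...
      if KNOWN_EXPLOIT.search(data):
          raise ValueError("blocked known exploit")
      length = data[0]            # ...while still trusting the attacker-supplied length
      return data[1:1 + length]   # (in C, this blind trust is the buffer overflow)

  def parse_root_cause_fixed(data: bytes) -> bytes:
      # Fix the cause: the declared length must match what actually arrived.
      if not data:
          raise ValueError("empty packet")
      length = data[0]
      if length > len(data) - 1:
          raise ValueError("declared length exceeds payload size")
      return data[1:1 + length]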


It's more than that, it's a statement that adding software to fix software just means there's more software to fix.

The correct solution is to remove software - there's no point in fixing it if you have no legitimate need for it anyway. Then you can concentrate on the software that you do need.

For example, do you remember Blaster? It was a worm that exploited a service in Windows 2000 and XP, a service that very few people actually need and that probably shouldn't have been included in home versions of Windows at all. If it hadn't been included or enabled, Blaster, Welchia and other worms would never have had the reach that they did.


Agreed. It would seem that unless your system is totally free of all forms of dependencies (including the human kind), arguing that patching is a bad idea simply because it shouldn't have to be patched in the first place is just poor advice.


It seems the author originally published this in 2005; reaching the bottom: "Morrisdale, PA Sept 1, 2005." On the linking page, http://www.ranum.com/security/computer_security/index.html, it states, "(originally written for certifiedsecuritypro.com)."

Might the submitter have meant to make a point about the predictions made by the author, ten years ago? For example, "My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years."


My intention was to initiate a debate about the main points given that they are still not widely followed a decade later.

Security is finally beginning to get the attention it deserves and it might be valuable to reevaluate the ideas presented in the link.


It is called cargo cult computing.

People understand that security is important, they hear the words, and instead of applying them they dress up as monkeys and repeat the words ad nauseam while throwing bananas in people's faces, expecting security to happen.

It is also happening with operating systems, sysadmins, science...

You are not the only one to experience this.

Stay light-hearted :) Anyway, you are closer to retirement than they are, thus you will have less mess to clean up.


Hacking is no longer cool. It's gone pro. It's now done by organized crime and Government intelligence agencies. These are much tougher opponents than the script kiddies of a decade ago.


Perhaps another 10 years from now, rogue AI will be the primary opponent, making the pro hackers of today look like the script kiddies.


We're already seeing some automatic exploit discovery. Fuzzing tools plus machine learning turn out to be able to find exploits. That can only get more powerful.
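The core loop is almost embarrassingly simple; here is a toy sketch (the "parser" and its crash condition are invented), minus the coverage feedback that tools like AFL add and that does the actual heavy lifting:

  import random

  def toy_parser(data: bytes) -> None:
      # Stand-in for real parsing code with a hidden crash condition.
      if len(data) > 3 and data[0] == 0xFF and data[1] == data[2]:
          raise RuntimeError("unhandled header combination")

  random.seed(0)
  for _ in range(1_000_000):
      sample = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
      try:
          toy_parser(sample)
      except RuntimeError as exc:
          print(f"crash found with input {sample!r}: {exc}")
          break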


Yeah, I recall someone posting about an adaptive fuzzer here that had produced JPEG images and whatnot when given some rather generic starting parameters.


Makes me think of the second novel of Gibson's Sprawl trilogy, where the opening chapters were about a teen being lent an "ICE breaker" supposedly produced by a Soviet military AI.


Thanks. We added the year to the title.


It depends on how you define 'hacking' so there is no easy answer. There's not even a single definition of it in the mainstream now.


Pretty good ideas, but the transition from the current state to the author's ideas will bring a lot of pain.

#1 - Default Permit - if aliens arrived and hacked all of our systems to enforce "default deny" - we'd all die within a week. Nothing would work. Game over, civilization. Ask the author if he'd sanction a "default deny" switchover on the systems that control his mom's respirator.

If that sounds like an indictment of the status quo, you're right. But how do you handle a skid? Not by turning the wheel to full lock in the opposite direction, certainly.

Hard to argue with #2 or #3 but of course alternatives depend greatly on the availability of talent, of which there is precious little. Just getting things to work requires most of our bandwidth.

Now, if I ruled the IT world, my devs would be ex-chemical engineers (a profession that seems adept at understanding dynamic processes) who have world-class comprehension of every layer of my tech stack AND who were provided with unlimited budgets and time, with proper prototyping and other great engineering practices.

#4 - Hacking is cool if in service of a noble goal, just as murder is when fighting an aggressive enemy or burglary is when it uncovers something nefarious. But otherwise, it's just plain B&E.

#5 - Educating users. I think we have another transition issue on our hands - most of today's users are clueless, sure. But tomorrow's will only be on guard against what works today. Maybe the author thinks everyone over a certain age is obsolete, or that only security people know what's best for users. In either case, let's hope the author doesn't attempt to fit their template onto other aspects of society.

#6 - Assuming the author is right, if everyone adopts a wait-and-see approach, who will act as the early adopters whose brains one can pick? Certainly IT has a lot of gratuitous product / idea churn, and cluelessness seems to prevail in a lot of decisions, but I doubt fastidious conservatism will fare any better.


In the early 2000s there were (black hat) hackers getting rich by selling books, and books that taught how to hack were quite popular.


I still prefer to go by the Jargon File's definition of "hacker" - "A person who enjoys exploring the details of programmable systems and stretching their capabilities, as opposed to most users, who prefer to learn only the minimum necessary." - or as defined by RFC 1392, the Internet Users' Glossary - "A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular."

In which case, hacking IS cool.


I read this article a few years ago and thoroughly enjoyed it, but I've had trouble solving Default Permit in practice. Does anyone know of a free software operating system that whitelists its software (say, by having the whitelist tied to the package manager) without a lot of tinkering/configuring? I know you can do it with certain Linux add-ons that provide mandatory access control, like SELinux and grsecurity, but it always seems like they introduce problems that you then have to troubleshoot for hours before your system is usable again. Bonus points if the distro uses ASLR, W^X, and other countermeasures to avoid violating the security policy through code injection.


I don't know of any viable desktop solutions for this issue, but it seems to be how iOS works. Unless you jailbreak your phone you cannot download or install anything not on the App Store (where everything goes through a review board, I believe).

And I haven't heard much about iOS viruses, really. Maybe for the jailbroken ones?

A quick Google search turns up this malware [0], which only works through USB. So the system does seem to be working pretty well, all in all. Reducing the attack vectors from everything to direct hardware connections only is a pretty big accomplishment, I'd think.

[0] - http://www.computerworld.com/article/2843764/security0/wirel...


Yes, you are correct. All iOS apps must be code signed by Apple, which is pretty stringent in its reviews. This is why you won't find any apps in the App Store attempting to spawn /bin/bash or mimic its functionality.

I believe Apple is now requiring all OS X apps on the App Store to be code signed as well.


Not free, but Windows can do this quite well. You whitelist the system and Program Files dirs and audit the ACLs. Works fine. It's called Software Restriction Policy. There's also AppLocker, but MS limits that to certain SKUs because they forgot it's not the 90s anymore. Actually, Hyper-V Server has SRP, and it's free as in beer.

Seems like a simple path-based system would work on Linux, as long as there aren't debuggers and such available to users.


It seems to me that mounting everything user-writable as noexec should be very similar to what you want. That said, it will not prevent a user from executing arbitrary code, but it will make it much more complicated than "download this binary and click it". (It's still possible to use a debugger to load something into an existing process; IIRC you can LD_PRELOAD a library from a noexec location, etc.)
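If you go that route, a quick sanity check is easy to script; the mount points listed here are just common defaults and would need adjusting per system:

  # Flag user-writable mount points that are missing the noexec option.
  USER_WRITABLE = {"/home", "/tmp", "/var/tmp", "/dev/shm"}

  with open("/proc/mounts") as mounts:
      for line in mounts:
          _dev, mountpoint, _fstype, options, *_ = line.split()
          if mountpoint in USER_WRITABLE and "noexec" not in options.split(","):
              print(f"{mountpoint} is user-writable but still executable (no noexec)")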


What the author does not seem to grasp is that security is fundamentally about managing risk. Consider his statement that "IT executives seem to break down into two categories: the 'early adopters' and the 'pause and thinkers.' Over the course of my career, I've noticed that dramatically fewer of the 'early adopters' build successful, secure, mission-critical systems." His measure of success is whether or not the software is secure, not whether or not it fills a valid need.

Consider two divisions within a large company: one is research and development, the other is finance. I want the R&D division to be the early adopters and I want them to take risks. On the other hand, I want the finance department to be locked down.

Consider the same logic as applied to the typical startup. A startup that is focusing most of its time and energy on security is probably playing the wrong side of the risk equation. It has more to lose by delaying its product than it stands to gain by making sure the product is locked down. On the flip side, an established company like a bank or insurance company has a lot more to lose and should focus more on security.


I think Marcus J. Ranum [1] probably has an exceptional understanding of risk.

In fact all of his points - the 6 dumb ideas - are places where he's pointing out that people are widely misinterpreting the risks and putting their time and effort in the wrong place as a result. Most of the closing paragraphs for each point are suggestions on how to better prioritise your time and effort so you can get on with whatever makes you money.

And wrt your second paragraph: in most companies there is a 'default permit' on the network between the workstations in the finance department and the R&D department, unless they're in physically separate locations, and often even then. The workstations in the finance dept will almost certainly run any software at all, unless it's caught by the anti-virus (default permit, and enumerating badness). You're right that they should be locked down, but are they?

[1] http://en.wikipedia.org/wiki/Marcus_J._Ranum


That is a recurring trait in infosec circles. A kind of black and white thinking that can easily slip over into paranoia. Something they seem to share with military planners etc.


A sure way to identify someone trying to give you bad security advice is when they only talk about security implementation and say nothing about what threat you are trying to defend against.

Without a threat model you can't balance usability with security, and you get rhetoric like "default block" or "it's all about educating the user". However, no one actually follows the rhetoric strictly, so what you get is a bunch of inconsistency, intrusiveness, and bad security. It also generates a lot of flip-flopping in security policy, security rules and user manuals, all to the despair of the people who actually use the system.


Also: adding features that expand the range of potential hacking attempts.

I'm looking at you, wireless credit cards and cars with cellular phone connections.

Airgaps are not the be-all and end-all, but they certainly help.

(I have a family member with a car that not only has a cellular phone connection, but has enough things adjustable through the "entertainment console" (instrument panel brightness and the length of time before the lights shut off, among other things) that I'm pretty sure it's not airgapped. (It may be communicating via a simple protocol, which is better than no security, but it's not the same thing as an airgap.) Quite frankly, it scares me. Not so much for me or my family personally, but for the potential consequences for society in general. I can far too easily see someone being assassinated by, say, having the ABS lock the brakes on both left tires and release the brakes on both right tires the next time they are driving down a (non-divided) highway.)


I have an aesthetic fondness for these as anti-patterns, but then there is reality:

My 85-year-old mother allowed some random caller to talk her into giving them administrative access to her Windows XP system. I have convinced her to accept replacement of her system with a Chromebook, but now I want to retrieve the photos she had on the always-attached backup media. Zero tolerance for vulnerabilities would mean destroying all media that touched her compromised system, but I'm going to accept the imperfect assurance of malware scanning and then copy the photo files somewhere she can click through to them.

"Less and less is done, until non-action is achieved; when nothing is done, nothing remains to be done." (Tao Te Ching 48; translated by Gia-fu Feng and Jane English).

I ask: is that a state of perfection or of death?


Unless there is something madly wrong going on, images etc are "sterile".


"Madly going wrong" is historically the case: https://technet.microsoft.com/en-us/library/security/ms04-02...


Makes one wonder if one should just unplug anything electronic and stick to books...


This is generally pretty awful.

The first point, about default allow, is moot today since nearly any protocol can encapsulate any other. HTTP, for instance, can do anything via WebSockets.

In addition, most malware is now "pulled" in via email or web rather than "pushed" via remote attacks. Firewalls are almost obsolescent.


Port and protocol are not necessarily the only way to implement whitelisting.

A solution like Little Snitch is a step in the right direction concerning networking in my opinion. Which application is opening the connection is just as important as the destination.

An enterprise version of such a program could allow network admins to set rules for different users and help spot unfamiliar applications that are acting suspiciously.

Firewalls aren't dead, they will just become better at inspecting traffic and classifying applications.

A higher degree of security is achievable, the only question is if customers are actually going to demand well engineered products. Until then companies are going to cut corners in order to out-compete each other.


I don't really agree with most of the ideas presented here. Setting your corporate firewall to "default deny" is just security theater, not real security. If a black hat installs malware somewhere, it can just as easily phone home via HTTP as on some random port. Arguably, the security theater of forcing everything on to HTTP actually makes it harder to spot anomalous traffic patterns. Earlier someone might have seen a lot of traffic on port 1234 and said "hmm, that's funny..." but with default deny it's impossible to see anything weird except by doing deep packet inspection.

As another poster here pointed out, hacking has gone pro. Clearly governments and organized crime have gotten into the business. The idea that people are going into hacking because "the media lionizes hackers" (like this blog post suggests) just seems kind of silly now. I think if anything, the media tends to exaggerate how scary most hackers are in order to sell more product.

I agree with the author that we are on a treadmill of patching vulnerabilities while creating new vulnerabilities. We'll never really get anywhere as long as we are on the treadmill. But this post doesn't point out any of the things that would actually help. For example, I think better sandboxing techniques in operating systems would help reduce the number of vulnerabilities. Unmanaged programming languages such as C/C++ are a perennial source of vulnerabilities, as everyone seems to know by now. Although most people don't seem to comment on this, languages with eval() such as Python, Perl, Lisp, SQL and Javascript have their own set of vulnerabilities that come from this construct. If we wanted to, we could get rid of these constructs.

Dijkstra famously argued that computer science was not about computers. You could make a pretty good case that computer security is not (primarily) about computers, either. A lot of great hackers like Kevin Mitnick were able to penetrate systems just by calling up a technician and pretending to be someone they were not. Computers have given more power to individuals, but the problem of finding trustworthy individuals for your organization is no different than it was before computers came on the scene. A lot of hacks are really just cases where too much information was shared with too many people who didn't need to have it, or systems were run with inadequate oversight... neither of which is a technical problem.


Many in IT, then and now, think security is the domain of the people who manage routers and proxies (also responsible for preventing non-approved use, like too much downloading or reading bad things) and of IT minions who install an anti-virus. From the network perspective, they block "malicious" javascript (impossible!) and bad websites (outsourced, of course), and they block all the ports and websites they can in a constant struggle against actual business needs.

These people are doing what they can, but it mostly involves buying snake oil because they don't understand software.

The idea of running someone else's code (javascript, flash, macros) on the same box as the confidential data, i.e. the end user box, is completely mad when sandboxing technology is still in its infancy. Letting unverified parsers touch untrusted data is mad too, but confining everything (SELinux style) is too hard for most. But why do we let these parsers talk to the network? Why can't I prevent my android/iOS apps from talking to any site on the internet?

We need a sea change in expectations from users (corporate and individual): users who expect that their data won't leave their box without them knowing about it and that it will be authenticated and encrypted in transit, who demand operating system support for enforcing this, and who demand that their applications be written to expect only limited and mediated access to the network, file system and kernel.


An example of what he calls "Enumerating Badness" and "Default Permit" is seen today in web application firewalls (WAFs) that try to block XSS payloads. It conceals the real problem, which is a vulnerability in the web app itself, and it's a tall order to expect the WAF to capture 100% of XSS payloads.
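A toy illustration of the two approaches (the blacklist pattern is invented and deliberately incomplete, which is the point): the WAF enumerates badness it happens to know about, whereas fixing the app means encoding output so that any payload is rendered inert.

  import html
  import re

  # "Enumerating badness": a WAF-style blacklist of payloads someone thought of.
  XSS_BLACKLIST = re.compile(r"<script|javascript:|onerror=", re.IGNORECASE)

  def waf_filter(value: str) -> str:
      # Misses <svg onload=...>, encoded variants, and whatever comes next.
      if XSS_BLACKLIST.search(value):
          raise ValueError("request blocked by WAF")
      return value

  def render_comment(value: str) -> str:
      # Fixing the web app itself: context-aware output encoding neutralises
      # the payload without anyone having to enumerate it.
      return "<p>" + html.escape(value) + "</p>"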


Traditional host anti-virus software is another good example of "enumerating badness". It's certainly hard to keep up when you take this approach.


>Wouldn't it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?

Part of the way you design hack-proof security systems is by identifying the dumb ways of doing things and then not doing them.


It's actually quite easy to make a business that is absolutely secure. You simply turn off all of your computers, fire all of your employees, sell off all your assets and empty your bank accounts. There, now you have a business that's absolutely unhackable.

What this hopefully demonstrates is that security is never the final goal, it's always a proximate goal and will always exist in tension with other goals.

Default Permit: We've all heard stories about dumb corporations where programmers couldn't even install a compiler without filling in a form and waiting a month. Default Deny kills productivity in a workplace because of the 80/20 rule. It's very easy to whitelist the 20% of apps that account for 80% of the usage but nigh impossible to whitelist the 80% of apps that are specific and niche. There was a post a few days ago about some guy working for immigration who figured out how to automate his job with UI scripting. That would have never happened under a Default Deny workplace.

Enumerating Badness: Like he said, this is a specialized version of Default Permit and all the same criticisms apply.

Penetrate and Patch: Building software that is architecturally secure is hard because it often imposes global constraints against your code. Global constraints get harder to implement as you start distributing your team in space and time and as you need to adapt code to changing requirements. Penetrate and Patch works because it allows you to deliver code quicker which is overwhelmingly more important than delivering secure code.

Hacking is cool: That he brings up spammers instantly undermines this argument, since there's nothing less cool than spammers, yet spam has grown as fast as, if not faster than, hacking over time. The threat vectors of most concern to companies nowadays are nation states, organized crime and economic extortionists, who couldn't care less how "cool" their job is.

Educating users: I remember when GMail started stripping out .exes from emails so we would start sending exes inside of zips. Then GMail started inspecting .zip files so we would change the file extension to .zip2. Then Google started detecting ZIP signatures so we started sending encrypted zip files just so we could email exes to each other. Why? Because emailing exes turned out to be a really, really useful thing to do. Any kind of paternalistic security policy inevitably ends up damaging more productive work than it does protecting against threats.

Action is better than inaction: Tell that to all of the industries that have been disrupted because they didn't stay sufficiently on top of trends. There are pros and cons to being an early adopter vs a late adopter but one is not universally better than the other.

The main point through all of this is that when bad security practices are widespread, it's usually because it's in conflict with some other business goal and there are rational reasons why security lost out. There aren't many silver bullet fixes because if there were, they would have been deployed already.


> I remember when GMail started stripping out .exes from emails so we would start sending exes inside of zips. Then GMail started inspecting .zip files so we would change the file extension to .zip2. Then Google started detecting ZIP signatures so we started sending encrypted zip files just so we could email exes to each other.

I recently emailed a zip file containing an .exe through Gmail. Gmail refused to send it when the file extension was .zip, but saw nothing wrong with the same file as a .dat. Has there been a policy change?


Not sure where I read it, but I find myself reminded of a sarcastic claim that a truly secure computer is one that has been turned off, unplugged from all sockets, put inside a safe, encased in concrete, and sunk to the bottom of the Mariana Trench.


Mostly good points.

This point, "Unless your system was supposed to be hackable then it shouldn't be hackable." wasn't one of them.



