Pledge() – a new mitigation mechanism in OpenBSD (openbsd.org)
173 points by acchan on Nov 10, 2015 | 109 comments



The problem with granular privileges is that programs want too many of them. See any Android flashlight app. Theo is getting good results on tightening up the classic UNIX command line tools. Has he tried EMACS yet?

Also, a bigger problem than system calls is what parts of the file system the program can access. The concept that a program has all the privileges of its owner is the biggest single problem with permissions.

What might work is having a few general classes of programs, with appropriate restrictions. Consider, for example, permission set "game, single player":

- Can read anything in its install package.

- Can read/write only to a working directory associated with the product/user combination.

- Can go full screen, use audio output (not the microphone), access mouse/touch, etc.

That seems reasonable. Angry Birds could run under those restrictions. For some games, the DRM won't work, the anti-cheating won't work, the ads won't work, the in-game purchasing won't work, the updater won't work, and the social leader board won't work. Still, it would be reasonable to require in an app store that games still work locked down to that level, even if some features are disabled.

One way an app store might make this work is that programs which require very limited permissions are easy to get into the store. Programs which require extensive permissions go into the "adults only" section of the store, or have to go through a source code audit at the developer's expense.


Hence why I support Apple and Microsoft continuing to push their sandboxing approaches, even with developers screaming along the way.

Eventually we will get it right, because Theo is right: normal people will just disable security, because it is a hassle in their eyes.

One just needs to search the online forums for people asking how to run as root on Mac OS X or Windows.


Exactly why I hate the argument that goes along the lines of "if people disable it, they must not want security!" - or if people buy cheap phones, they must not want it either - and so on.

The people are not at fault, and they don't know about these things. When the security is bad, it's a design problem, and therefore the system/platform/app developer's fault (although app developers are much less at fault than the platform developers, since they can only control what's given to them by the platform vendor).


I think there's a slide to that effect in this presentation: that the development process involves upgrading the apps to the point at which the security can be made mandatory.


One of the most irritating things about Theo is that he keeps being right.


> Theo is getting good results on tightening up the classic UNIX command line tools. Has he tried EMACS yet?

You may want to take a closer look at the slides about what has been pledged so far. httpd, smtpd, ntpd, relayd, slowcgi, xterm... They're not quite emacs, but they're also not just command line tools.


That's probably OpenBSD httpd, not Apache httpd - a vastly simpler creature.


"game, single player"

I don't think this is a permission set so much as it should be a security domain. At the moment we have the (user,group) tuple. The lesson of mobile OSs is that this needs to be (application,user,group) or possibly (vendor,user,group) - because the vendor/application developer is a potentially hostile actor.

Each app having its own "home" directory eliminates so many problems. It gives you a new problem, which is that apps are no longer composable. The solution to that is probably to put the work of choosing which applications are allowed to open which files back into the Finder/Explorer part of the system (which would be able to see everywhere) and let it do the opening.


The problem on Android is that you have to agree to the list of privileges again whenever an update requests new ones, so developers usually just grab all the privileges they might need in the future.

Also, on Android the threat model is malicious apps, while on OpenBSD it's trusted executables getting bugs exploited. Malicious apps can play tricks with the privileges; exploits have to fight with the privileges that are already set.


Do OpenBSD roll their own version of Emacs? Just securing the base OS with pledge() would be a great achievement imho. And they have covered quite a few applications so far. This includes hairballs like openssl and things like radiusd, sshd. I'm impressed.

http://www.openbsd.org/papers/hackfest2015-pledge/mgp00032.h...



You are right in that programs may (or, more likely, do) want too many privileges. But this is not a problem with granular privileges; it is a problem with enforcement. Mobile OSes implement granular privileges as a notification, not as enforcement. The privileges should be outside of app control (though a notification mechanism would still be convenient for loosening restrictions per app) and enforced.

I see a problem: if we allow an application to query its capabilities and bail out if, say, a flashlight cannot use the microphone, it kind of defeats the purpose of privileges. The way to go is to fail calls silently and have an API that makes dealing with that easy. Take the same microphone example: the flashlight app could simply get silence from the mic stream (which makes failing legitimate uses harder to debug) or simply fail on attempts to open the mic stream.


Somebody has a patch for Android which does that. You can deny a program access to your address book; the program still thinks it can access the address book, but sees an empty one.


Sounds like this could be implemented on Linux as a library on top of seccomp.

I'm not impressed by De Raadt's objection to seccomp. BPF programs may technically be Turing-complete, but most of the things pledge() does can be implemented by a pretty simple seccomp filter that's just a flat list of conditionals implementing a whitelist or blacklist.
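To give a concrete sense of scale, here's a minimal sketch of the kind of "flat list of conditionals" filter I mean (it assumes x86-64 Linux and whitelists only read/write/exit_group; a real filter would of course allow more):

  #include <stddef.h>
  #include <linux/audit.h>
  #include <linux/filter.h>
  #include <linux/seccomp.h>
  #include <sys/prctl.h>
  #include <sys/syscall.h>

  #define ALLOW(nr) \
      BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW)

  static struct sock_filter filter[] = {
      /* refuse to run on an arch we didn't compile for */
      BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
               offsetof(struct seccomp_data, arch)),
      BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
      /* load the syscall number and walk the whitelist */
      BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
               offsetof(struct seccomp_data, nr)),
      ALLOW(__NR_read),
      ALLOW(__NR_write),
      ALLOW(__NR_exit_group),
      /* everything else kills the process */
      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
  };

  static int install_filter(void)
  {
      struct sock_fprog prog = {
          .len = sizeof(filter) / sizeof(filter[0]),
          .filter = filter,
      };
      if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
          return -1;
      return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
  }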

Meanwhile De Raadt points out, correctly, that voluntary security mechanisms will be ignored by most developers... but pledge() appears to be voluntary.

Seccomp-bpf is often used by sandboxes like Chrome or Sandstorm.io (of which I am lead developer), where it is not voluntary for the code that ends up being run inside the sandbox. But sandbox developers are likely to want seccomp's customizability over pledge's ease-of-use.

So while it's nice that pledge() is so easy to use, it strikes me that it's targeting the wrong audience with that design.


You're missing the point of pledge. It's for the developer to protect their code from the outside world. A malicious developer won't use this, and will try to obfuscate their intent and code. This is for the honest developer, to mitigate the risk of a programming blunder becoming a major exploit. Yes, it's voluntary, but the developer has a self-interest in using it.


No, I completely understand that that's the point of pledge.

But De Raadt's own slide 5 makes a convincing case that "optional security is irrelevant", and he dismisses SE Linux on that basis. I don't see why the same doesn't apply to pledge.

Don't get me wrong, I think it's great for this to be available and I would like to see a similar, easy-to-use seccomp wrapper available on Linux. But, sadly, app developers aren't likely to use it.


I'm extrapolating a bit here, since it's not that clear from the slides whether this is precisely what he meant, but I interpreted him to be criticizing optional security on the user side, which SELinux is. Since it's a set of user-configurable local system policies, users can, and often do, just use the most permissive policy possible to avoid having to debug SELinux problems. pledge() is optional from the perspective of the developer, but not the user, so once some piece of software implements it, it gets the protections without users having to set up an optional security policy, or being able to disable the one-and-only-one security policy (short of editing the pledge() calls out of the software and recompiling). And I gather that the OpenBSD developers will be patching software in the ports tree themselves, even if upstream doesn't, so that the OpenBSD version of as much software as possible uses it.


In practice SE Linux policies are typically maintained by distros, which seems similar to OpenBSD adding pledge() to ports. I'll grant that pledge() will likely have an easier time expressing meaningful policies for many apps, though, since it is applied after initialization and with knowledge of command-line parameters, etc. OTOH, the set of policies expressible with pledge() is much more limited.

But I really don't believe that user optional vs. developer optional makes a difference. The fact is that most app developers do not care to constrain themselves with mitigation layers. Most probably have no idea that this is even a thing they should consider, and of those that do most have other things they'd rather think about. Mitigation layers don't add new features and only fix hypothetical bugs which, sadly, most developers just don't care about.


The problem with SELinux is that it often gets turned off by admins at the first hint of resistance. E.g. Apache won't read from the non-standard directory you've decided to put your app in? Off goes SELinux.

Building lowest-common-denominator checks into the application itself, checks the app developer knows won't get in the way, makes it less likely the checks will get disabled.

E.g. your web server could disable filesystem access to paths it doesn't need after having read its config files and determined where it's going to log and where it's going to serve files from, so that things keep working as expected for users, possibly making exceptions for really stupid things (like exposing /etc). That would reduce the chance that users start looking for ways to just turn it off.
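pledge's promises can't express per-path rules on their own, but the init-then-restrict shape of that might look like this hedged sketch (read_config(), open_logs(), and serve() are made-up names; the promise string is illustrative):

  #include <err.h>
  #include <unistd.h>

  int main(void)
  {
      struct config *cfg = read_config("/etc/httpd.conf"); /* needs broad fs access */
      open_logs(cfg);                                      /* ditto */

      /* steady state: sockets plus read-only file serving */
      if (pledge("stdio rpath inet", NULL) == -1)
          err(1, "pledge");

      return serve(cfg);
  }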

That makes the two approaches complementary.

I agree with you that most app developers do not care, but that's beside the point for OpenBSD: They care, and they control most of their own userland.

And you don't really need "most" apps to do it anyway. We'd get far just by having most of the highest profile internet facing server applications support it.


I haven't really read the thing, but based on the comments here, see if the following makes sense.

Let's say I'm the author of the Apache HTTP server and I want to make sure that malicious attacks on the HTTP server don't escape into remote code execution on the machine, so I use pledge (or something similar) to sandbox my own code. Taken from another angle: when I know that my program deals with untrusted input, can I sandbox my program to ensure that if the untrusted input escapes the security implemented in my program logic, it is still sandboxed by the OS, based on the policy I have set, not the user?


Yes, that's what it does.

But what's the right policy to set for Apache? Your PHP or whatever code running under Apache could need to do any arbitrary thing. So the pledge would need to be configurable. Probably many PHP developers and sysadmins wouldn't know how to configure it and/or wouldn't care so they'd just turn it off, just like with SE Linux.

Moreover, your Apache server running your PHP web app probably legitimately needs access to that web app's entire database, so you can't sandbox that access away no matter what you do. If someone hits you with a remote code execution, then your root filesystem may be fine but your database has now been compromised, and that's probably worse.

OTOH if you're running Apache as a simple static content server, then yeah, pledge() could provide some nice hardening.


Pledge is not intended for every possible use case; that's why it's part of the program code itself.

If the programmers themselves can make an intelligent decision about if and when to invoke pledge, rather than relying on some predefined policy, you don't have to worry about every single use case in existence and thus suffer the massively overwrought interfaces that requires. All a programmer has to do in the least-effort case is delay pledge calls until after the problematic functionality, or perhaps not use pledge at all.

This is all while obtaining roughly equivalent benefits to something like SELinux in a huge majority of cases.

The primary goal of pledge is to make using it as simple as it can possibly be, so it actually gets used.


For Apache:

- Disallow writes to /etc

- Disallow reads of .htpasswd (this would require Apache to rely on a helper to do authentication out-of-process, so not that simple).

- Disallow writes of .htpasswd and .htaccess.

- Disallow writes to all config files evaluated by Apache (if they happen to be outside of /etc)

- Disallow writes to /bin, /sbin, /usr

- By default disallow network connections to most ports (and yes, you can't prevent connections to the user's databases, but preventing an attacker from port-scanning your internal network and connecting to other services that may not be sufficiently secured would be helpful).

- Prevent inbound connections (e.g. let's say someone gets local code execution as the Apache user, and the server isn't sufficiently firewalled; congratulations, you're now running a remote shell that the attacker can use to do more in-depth probes for other holes).

- By default disallow writes outside of document roots and a reasonably permissive set of scratch directories (this would probably need a switch to disable, but most won't need it).

Apache could also easily read its config before applying the rules, so adapting the rules to the config files is possible.

(Some of the above will hit things like panel applications etc., but the vast majority of users won't run into them and so won't have a reason to disable them; unlike SELinux, the app developers also have the option of not providing a way to blanket-disable the security, but instead providing config directives to whitelist directories, which fits cleanly into the existing config system.)

But for a very specific example of where pledge could have helped and where it'd have been much easier, here's a report of a remote root exploit in exim [1].

This root exploit consisted of using a buffer overflow to overwrite a config file that would then be evaluated, including macro definitions, as root.

If exim had been able to deny the inbound SMTP part the ability to write anywhere but the mail spool, this buffer overflow wouldn't have been exploitable. (And yes, this case is simple enough that it would have been achievable with chroot() too, so it's more a generic example of where voluntary restrictions are useful, not just pledge in particular.)

[1] http://blog.iweb.com/en/2010/12/security-exploit-identified-...
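To make the chroot() variant concrete, a hedged sketch of confining the process that parses untrusted SMTP input (the spool path and the SPOOL_UID/SPOOL_GID constants are made up):

  #include <err.h>
  #include <grp.h>
  #include <unistd.h>

  /* in the child that will speak SMTP, before reading any input */
  if (chroot("/var/spool/smtp-in") == -1 || chdir("/") == -1)
      err(1, "chroot");
  if (setgroups(0, NULL) == -1 ||
      setgid(SPOOL_GID) == -1 || setuid(SPOOL_UID) == -1)
      err(1, "drop privileges");
  /* a buffer overflow can now scribble only inside the spool */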


You are right: if your webapp requires a database connection, the OS has to allow it. That is where application-level security comes in. The OS cannot be responsible for everything; the application has to be written securely as well.

This is complementary (and a bit orthogonal) to pledge().


I don't see how pledge is different from Capsicum, which he criticises on this basis; once it is compiled in you can't disable it.

(Incidentally Capsicum does seem to be coming to Linux, albeit slowly, and as a self sandboxing technique it is nice to use).


> I don't see how pledge is different from Capsicum, which he criticises on this basis; once it is compiled in you can't disable it.

This is addressed in the slides. Capsicum is 5 years old and used in 12 programs because it is difficult to implement. Pledge is 6 months old and used in over 400 programs already.


He is not criticising Capsicum for being optional, but rather for requiring too much work, so nobody bothers.


That's not the type of optional he means. If I put it in my program, it's not optional when my program runs. As a developer, I cannot depend on SELinux being enabled to protect my code from an exploit. I'm sure some developers won't use it, but that's just another indication of trust.


I think the simplicity is a key factor. Some security everyone can understand is better than good security few can understand.


> he dismisses SE Linux on that basis. I don't see why the same doesn't apply to pledge.

If you use a Linux distro that enables SE Linux, the second it gets in the way you can turn it off. If you install OpenBSD-current right now, all those utilities use pledge and you can't just turn it off.


You pretty much described seccomp. They do the same thing, just with a different interface.


> pledge() appears to be voluntary.

As they've used it so far it's not that voluntary for the user. When De Raadt says voluntary mitigations don't work, he's talking about mitigations that a sysadmin can easily disable via settings.

Unless developers build options to control it at runtime then in practice pledge() is a lot less voluntary than SE Linux which has a knob to enable or disable system-wide.


How does one whitelist open("/dev/null") with seccomp-bpf?


You can open /dev/null at init time and then lock down open(). Sure - it's not always possible, but it's a solution.
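A minimal sketch of that pattern on Linux: seccomp strict mode leaves a process only read, write, _exit, and sigreturn, which is exactly "lock down open()":

  #include <err.h>
  #include <fcntl.h>
  #include <linux/seccomp.h>
  #include <sys/prctl.h>

  int devnull = open("/dev/null", O_RDWR);  /* open while we still can */
  if (devnull == -1)
      err(1, "open");
  if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) == -1)
      err(1, "prctl");
  /* open() now kills the process; the descriptor we kept still works */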


That's definitely a way better strategy than what I said -- when your code is well-designed-enough for it to work.


That is the Capsicum model: disable open when you go into secure mode. Note that you can allow passing of file descriptors, so an external program can open new files for you if you allow this. Many single-task programs do not need to open files after initialisation.


I knew someone would ask that.

So, the way I'd recommend doing the filename whitelist is by setting up a mount namespace. Create a tmpfs, create the necessary directory tree inside it, bind-mount each whitelisted path in the tmpfs to the real file, then pivot_root into the tmpfs. This sounds complicated but is actually not very much code, and again a library could make it easier.

But I think you could also do it with pure seccomp. The trick is to copy the filename list into memory pages that you subsequently mark read-only. Then, have your seccomp filter whitelist specifically pointers to those strings, and prohibit making the pages writable again.

(Disclaimer: I just came up with this on a whim, it probably needs more thought.)
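For the namespace variant, the sequence is roughly this hedged sketch (Linux-only, needs root or a user namespace; error handling omitted and paths made up):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <sys/mount.h>
  #include <sys/stat.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  unshare(CLONE_NEWNS);                       /* private mount namespace */
  mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL);
  mkdir("/tmp/jail", 0755);
  mount("tmpfs", "/tmp/jail", "tmpfs", 0, NULL);
  mkdir("/tmp/jail/dev", 0755);
  close(open("/tmp/jail/dev/null", O_CREAT | O_WRONLY, 0644));
  mount("/dev/null", "/tmp/jail/dev/null", NULL, MS_BIND, NULL);
  mkdir("/tmp/jail/old", 0755);
  syscall(SYS_pivot_root, "/tmp/jail", "/tmp/jail/old");
  chdir("/");
  umount2("/old", MNT_DETACH);                /* drop the old tree */
  /* the whitelisted /dev/null is now the only file in sight */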


You don't even have to copy the filenames around and mess with changing page permissions - simply doing:

  static const char * const dev_null = "/dev/null";
and then whitelisting the pointer dev_null is sufficient, because string literals are stored in the text section which is mapped read-only.


That only works if you've locked down mprotect, mmap, and munmap.


Yes, the parent covered that.


> Create a tmpfs, create the necessary directory tree inside it, bind-mount each whitelisted path in the tmpfs to the real file, then pivot_root into the tmpfs

You've made an excellent case for pledge("rpath", ["/dev/null"]);


I am saying we can and should have a library that offers that interface, yes, but that having the lower-level building blocks available is also important.


AFAIK one can filter on the syscall's arguments.

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

I think it's quite readable and one could modify it easily to whitelist open on "/dev/null".


You are completely missing the entire point, you should figure out what you are missing here.


Well, I have always thought that the best way to solve such problems is selective privileges. Android/iOS have privileges, though those are either all or nothing. Desktop basically has root/non-root.

I think this problem should be solved not by the developer ("I pledge to only read files") but by the user/administrator ("you are only allowed to read files"); attempts to exceed that could either fail hard on opening the file in rw mode, or silently pipe data to /dev/null on writes.

As an added benefit this would teach programmers to actually expect calls to fail, and to test that easily by applying restrictions. In such an environment a program could test its env during initialization and either refuse to start or try to work in a limited fashion. Blowing up during runtime is better than nothing, though still only borderline acceptable. Yes, this helps catch misbehaving programs and, more importantly, greatly helps mitigate exploits (the exploit automagically runs in an isolated jail).

Rachel by the Bay has awesome writing [1] on exactly this topic. Makes one think: how many of us have encountered failed fork()/malloc()/fopen()/execve() and are guilty of releasing buggy code, because "f it, highly unlikely, will fix later"?

[1]: https://rachelbythebay.com/w/2014/08/19/fork/


I think the point that the OpenBSD developers are trying to get across is that if privileges are left for the administrator to deal with, it doesn't get done in most cases. So that leaves the developers to deal with it.

Also the developers know if a program needs to write to files or not. Why even leave it as an administrator task to lock down the program, if we already knew that it will never, ever need to write to a file, unless it's being exploited?


Administrators (in the FOSS world) can choose where software is installed, what flags it is compiled with, etc., yet the vast majority of us just apt-get/yum install and forget, trusting the developers (not necessarily upstream) to choose sane defaults for us. The more paranoid use Gentoo/Slackware and fine-tune things themselves. But we are left with that option. The current semantics of pledge() do not leave us this option. There is nothing wrong with shipping a default privilege config file along with the app, but an option to say "f this shit, vim on my systems does not get access to sockets" without rebuilding from source would actually lead to better security.


>There is nothing wrong with shipping default privilege config file along with app

In principle no.

In the real world, though, I think something else will happen. Someone tries to run a broken program. The solution suggested online will be: just add/remove "this" in the configuration. Sure it fixes the immediate issue, but the fact is that the program remains broken.

What "pledge" does is it requires the/a developer to fix the actual bug. The bug might be that the pledge call is wrong. Perhaps the program should have had more capabilities to start with. You just wouldn't know unless you read the code.


I understand this line of reasoning, though you can also find "solutions" like "disable SELinux". If we believe the bell curve then it should not be a surprise :) When it comes to security we basically have two options:

  * Delegate security configuration to developers, allowing them to open unpluggable holes
  * Delegate security configuration to users/admins, allowing them to shoot themselves in the foot
Developer can "fix" bugs by `pledge(EVERYTHING)` without actually finding the root cause, user can `privileges: ALL`, neither option protects us from foolishness. The core question is which option do we chose.

The most sane middle ground would be to allow users only to restrict privileges, not loosen up.


It would only lead to better security if anyone bothered, which, from decades of experience, we know to be essentially never.


Optional things in development are optional, so they won't always be used. I guess rather than saying "this is a unix program" you could say "this is an OpenBSD program", which would mean that it has undergone a set of development processes where all the optional things are not turned off. For example: code review, static analysis, fuzz/quickcheck-style testing, using pledge() appropriately, etc. But you'd need to define the set of processes that need to be followed before it could be called an 'OpenBSD program'. Replace 'OpenBSD program' with whatever you want to call this.

Who should set up the permissions? If you trust the developers, you trust them to set up the permissions for you. However, what happens if you don't trust them? You need administrators and users to be able to tweak what the programs are allowed to do.

What is really great about doing it at the developer level is that the administrator does not need to think about that stuff.

For large programs like emacs, or python, it is unfortunate that you can't disable the privs just for a function call, for example. I'd like to see a discussion on why this wouldn't work. I guess there is a good reason - but they don't say.

If you "allow only these privs until Done" where Done would be defined by the program jumping back to a point set by pledge() call. If the process does not allow writing on executable memory this would work (at least for non jit processes).

I wonder if there are ways this can be used by python. Perhaps using a fork this could be done. The program forks to run just the privileged bit of code. It can use pledge() and drop all the stuff it doesn't need, do its business, then die.
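In C, that fork-and-drop pattern is short; a hedged sketch (parse_untrusted() and read_result() are hypothetical):

  #include <err.h>
  #include <sys/wait.h>
  #include <unistd.h>

  void handle_risky_input(void)
  {
      int fds[2];

      if (pipe(fds) == -1)
          err(1, "pipe");
      switch (fork()) {
      case -1:
          err(1, "fork");
      case 0:                               /* child: the risky bit */
          close(fds[0]);
          if (pledge("stdio", NULL) == -1)  /* compute and write, nothing else */
              err(1, "pledge");
          parse_untrusted(fds[1]);          /* hypothetical parser */
          _exit(0);
      default:                              /* parent keeps its privileges */
          close(fds[1]);
          read_result(fds[0]);              /* hypothetical */
          wait(NULL);
      }
  }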


The "traditional" way of achieving something like this on Unix-like systems is indeed to fork and use the various available mechanisms (different uid's, chroot etc.) to reduce the attack surface but the problem is that it's a lot of work and the classic API's leave you with relatively limited opportunities to relinquish privileges.

Qmail is a good example of this philosophy: Many small binaries that isolate different functionality and are run as different users and mostly communicate via command line and pipes. It makes the attack surface small. But pledge() could have made it even smaller.


You don't need executable+writable memory to run arbitrary code (which would let you reset a mutable pledge):

https://en.wikipedia.org/wiki/Return-oriented_programming


> Desktop basically has root/non-root.

UNIX in general, yes; Mac OS X and Windows, no.

The problem as Theo points out, is getting developers to use it.

For example, Windows has fine grained security for all object handles in the system, but as many security exploits show, almost no one makes proper use of them.


For fun I made a perl web app use this. Much simpler than systrace or seccomp.

I use the path argument as a simple form of chroot(2). Previously I had to create a vnd (think loopback device if you are coming from Linux) to chroot nicely. On code updates, some process had to rsync static assets into the chroot (I preload all of the needed perl, then chroot()). On Linux, the same app uses containers/namespaces, leveraging read-only bind mounts for static assets, seccomp, and various prctl fiddling. All that ends up being a few hundred lines of code. With pledge it's really just a few lines to call the syscall. Much easier to reason about.

Even if you end up having to allow most syscalls, the path argument alone IMHO makes it worth it.


One of the first things that sysadmins at my last place of work would do is turn off SELinux on new installs of RHEL.

There were too many times that SELinux would cause issues for them because they didn't understand the built-in policies and where to place stuff. As a security-conscious person, it is a HUGE pain in the behind and I've spent many hours debugging SELinux and its policies.


That is unfortunately a bad practice, but there are also people out there who leave it in enforcing mode and actually make an effort to use it.

I think SELinux use has increased lately, at least from my perspective.

Where I work I am constantly forcing it on people and volunteer to solve any problems they might have to ease their transition.

Just like pledge requires passionate developers who actually care about implementing pledge at the application level, SELinux requires passionate sysadmins who actually care about using it, and about their co-workers using it.


On the other hand, SELinux protected me against the "venom" VM escape vulnerability earlier this year. That was nice :)



That's a very similar presentation, but this one includes some new details. (Notably that it was renamed from "tame" to "pledge")


Not to claim that they are remotely the same, but this reminds me of Microsoft Drawbridge.

Drawbridge classifies syscalls into groups, and the syscalls an application is allowed to use are registered. When the application is executed, a runtime gateway verifies that the application only uses the syscalls that were registered. Drawbridge does more things (generates a library that maps the 800+ syscalls to the group-equivalent ones etc.), but there are similar ideas.

I thought Drawbridge was neat, but seems not to have moved much beyond MSR.

http://research.microsoft.com/en-us/projects/drawbridge/


Interesting... Drawbridge sounds like rump kernels (which can be used in userland processes as well as in VMs), where everything the application does is turned into a small number of hypercalls (12ish IIRC). It seems like there are RISC and CISC forms of higher security system call interfaces (e.g. pledge needing sendsyslog(2) and SOCK_DNS). It is good to see both approaches getting more use :). I hope pledge is adopted widely as it seems like a good approach to easily get significant improvement (particularly when exec is not needed, since restrictions are not inherited).

A link to the pledge man page since I haven't seen it mentioned yet: http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man2/...


Some of the research went into Windows containers in Windows 10, where you can run containers directly on top of Hyper-V instances.

Eventually it even fed into how the sandboxing works in Windows 8/10 for store applications.


De Raadt sounds like a pleasant, wise human in his slides (let's not talk about the mailing list).

I'm really looking forward to 5.9 if it includes pledge as well as vmm (native hypervisor)


Really? Tame/pledge seems like a good interface but the dig at Linus seemed unnecessary, the masturbating monkey was totally uncalled for, and the coil of poop juvenile.

Seriously lowered my view of the presentation.


Linus was the one calling the OpenBSD developers "m...... monkeys" (apparently that word triggers the HN spamfilter, I vouched your comment), so I think the OpenBSD guys are allowed to be "a bit miffed".


Ah. I was not aware of that.

I don't blame them for having issues with the way Linux does things (after all they run under different philosophies). But they didn't have to stoop to the same level.

It really wouldn't surprise me if something similar ended up in Linux one day. It just seems so clean and simple for the most common cases and small utilities.


Well, this is a good example of the dangers of dropping to the lower level of the opposition; taken out of context, the images in the slides have a counterproductive effect on the presentation. You just always have to hold yourself to higher standards, or get dragged into the mud.


This only happens when we (and I do mean all of us, not only the general IT crowd, but people, society as a whole) are too serious about ourselves, up to the level of being butt-tight about it. I can't get insulted by the shiny pile; I can't get insulted by the monkey. In the best case I'll think it's mildly funny and adds some... shine to the presentation; at worst I'll think it's simply a lack of good taste and move on.


I'm not insulted, but I find them tasteless and distracting. Other humor would probably fit better.


Unfortunately just the new API is not enough. Developers still need to actually use it. When I researched seccomp and got really excited about it, I submitted a patch to memcached to enable a restrictive policy. The patch/PR is still there, months later. If the project doesn't care, no amazing tool is going to help us :-(


Theo understands that, and that's why he's pushing to get the core OpenBSD userspace tools to use pledge. And hopefully, like many other security innovations, patches will spread from OpenBSD to other operating systems.


In relation to mitigations, what are the "Loudmouth Linus" and "recent article in Washington Post" references about?


http://www.washingtonpost.com/sf/business/2015/11/05/net-of-...

Linus, in one of his less bright moments, called the OpenBSD team a bunch of masturbating monkeys [1]. Unfortunately, the Linux kernel's conspicuous lack of attack mitigation measures (compared to Win/Mac/OpenBSD/etc) does make one wonder who has been masturbating over the past few years.

[1] http://article.gmane.org/gmane.linux.kernel/706950 (from the article)

(to be clear: I like and use Linux a lot... but Linus's disregard for security is becoming a liability)


http://www.washingtonpost.com/sf/business/2015/11/05/net-of-...

Article could have been good but is a bit too sensationalist, e.g. pointing out that Ashley Madison runs Linux only to admit that it had nothing to do with their security breach -- OK, so why did you mention it, then?

(In truth, kernel security rarely matters for servers, because the application is usually the first line of defense. I say this as someone who runs one of the rare services where kernel security does matter, so yeah, I wish Linux did more hardening, but the article is misleading.)


Torvalds called OpenBSD devs "masturbating monkeys" for their focus on security. WaPo had an article on Linux security the other day.


I would love to see all the other operating systems adopt pledge(), but as is often the case with OpenBSD's security mitigations, it will be years before we see it happen (if at all).


This is kind-of a different case than most mitigations, though. It's right now only possible because OpenBSD takes a whole-system approach to development; how the different syscall groups work can change, and right now only follows "this seems like it aligns with how we usually do things in OpenBSD." But yes, I would like to see a similar mechanism appear in other operating systems.


OpenBSD is not unique here; Solaris has a "whole-system approach" to development as well, and has had the ability for programs and admins to easily do privilege drop or provide privilege separation for many years now.

Not only that, Solaris allows you to wrap programs without any source modifications easily using ppriv to drop or add privileges as required.

Almost every slide in the presentation that talks about how you would use the proposed pledge() interface applies to Solaris' privileges model as well.

Some relevant examples:

http://www.kernelthread.com/publications/security/solaris.ht... http://docs.oracle.com/cd/E23823_01/html/816-4863/ch3priv-25...

Solaris' role-based access control and advanced privileges model even lets you implement things like only allowing someone to become 'root' if both them and another person logs in at the same time. Think of the "two-keys required to unlock this door" sort of approach to security:

https://blogs.oracle.com/gbrunett/entry/enforcing_a_two_man_...


My point about OpenBSD having a "whole-system approach" was that the interface isn't necessarily general (yet); it just needs to meet the needs of the OpenBSD team as they exist today. When they realize it has a limitation, they can change it, no fuss, because they can commit to the whole system.

That said, Solaris' facilities seem useful, but from the documentation you linked, they seem much more complicated than pledge(). They look similar conceptually, but Solaris' seem much more complicated to actually use.


The examples provided are some of the more complex cases that let you do advanced things.

You can shrink the amount of code required if you limit it to more simple cases as those shown in the slides.

For example, as derived from the OpenBSD presentation:

  if (pledge("stdio fattr", NULL) == -1)
     err(1, "pledge");
A similar (not completely equivalent, since OpenBSD chose some "interesting" definitions for their privileges, and is admittedly untested) example for Solaris might be:

  priv_set_t *tmp = priv_str_to_set(
     "file_read file_write file_chown_self",
     " ", NULL);
  if (tmp == NULL)
     err(1, "priv_str_to_set");

  /* Assert required privileges. */
  if (setppriv(PRIV_ON, PRIV_PERMITTED, tmp) == -1)
    err(1, "setppriv permitted");
  if (setppriv(PRIV_ON, PRIV_EFFECTIVE, tmp) == -1)
    err(1, "setppriv effective");

  priv_inverse(tmp);

  /* Drop all privileges not required. */
  (void) setppriv(PRIV_OFF, PRIV_PERMITTED, tmp);
The big difference, I think, between the Solaris interfaces and the OpenBSD ones is that Solaris allows the process to temporarily drop privileges and then add them back, or permanently drop them. From the proposed OpenBSD interfaces, it looks like they only allow the permanent-drop model.

There are a few convenience wrappers that might simplify the above further, but the real point is not to compare efficiency of interfaces, but capability.

Also, Solaris offers the ability to restrict privileges of programs without source code modification (imagine a program you don't quite trust and don't have the source code to). I didn't see that in the OpenBSD presentation.

In their defense, they're also clearly still working on these interfaces, so there can't yet be a fair comparison. Solaris has had privilege interfaces for over a decade, so the model presented is a bit more mature obviously.

The only thing I'd mention is that Solaris tries to provide a default set of privileges that represent things closer to administrative boundaries, rather than implementation-specific ones, as the implementation can change but the basic high-level operations do not.

For example, Solaris has a file read/write privilege, but doesn't bother letting you restrict the ability to set file timestamps separately because that doesn't seem like a useful thing to do. It does however, provide separate privilege(s) for manipulating ownership of files, since that's clearly a different category of operations. OpenBSD currently seems to be focused on the implementation instead of the administrative-level operations being performed.


Thanks for the reply! I've got several responses, mostly unrelated to each other.

1. "interesting definitions": As you note, "OpenBSD chose some 'interesting' definitions for their privileges". This was the core of my original thesis: because of the whole-system approach, they were free to choose "interesting" definitions that closely matched their current-implementation-specific usage patterns, making the whole thing more convenient, but less general. That's all my original post was trying to say.

2. code size: I see that conceptually these 2 code samples work similarly, but one has 5x more LOC.

3. dropping and picking up isn't really dropping: If I drop privileges, but can pick them back up, then if I decide to misbehave, picking them back up is just something I do first. Dropping them with the possibility of picking them up is security theater, not actual security.

4. sandboxing is addressed in other presentations: Earlier presentations explain why having tools to restrict the privileges of programs without source modification is inadequate; a common pattern for programs is that they require some privileges during start up, but then don't need them for the rest of execution; a source-unaware mechanism would have to allow the start-up privileges for the entire execution. Otherwise, it's not too different than existing priv-sep tools.

5. pledge() isn't about administrative privileges, it's about code vulnerability mitigation. Security tools around administrative boundaries are important, sure. But that is fundamentally not the problem that pledge() is trying to solve; pledge is /mitigating/ against vulnerabilities in the /implementation/ of the program. It's saying "for the remainder of my execution, I should only do these types of operations; if I try to do anything else, I have been cracked & compromised."


1. "interesting definitions"...

Understood.

> 2. code size: I see that conceptually these 2 code samples work similarly, but one has 5x more LOC.

I think that's nit-picking a bit, especially since, as I said, there are some other convenience functions that could be used depending on the situation. But regardless, it's hardly onerous. We're talking about 7 lines of actual code vs. 2 lines. That's not even worth arguing about, especially as most applications do this once if they're permanently dropping privileges.

> 3. dropping and picking up isn't really dropping: If I drop privileges, but can pick them back up, then if I decide to misbehave, picking them back up is just something I do first. Dropping them with the possibility of picking them up is security theater, not actual security.

No, it's actually not security theatre at all. I think you misunderstand the threat model being addressed. The kernel is enforcing the restrictions.

Note that I also specifically said Solaris allows both -- the developer can temporarily drop privileges or can permanently drop them or use a combination thereof. Each style of interface is appropriate for a different situation.

For example, consider a case where your program allows the execution of a user script to retrieve a password required to unlock an SSL Private Key file (Apache does this). For the duration of that operation, you can set your effective privileges to be very limited, and thus any programs you fork/exec can also inherit those very limited privileges and you have additional mitigations against certain attacks.

Or consider any other scenario, where during that particular period of execution, you may have to accept untrusted user input. By limiting your effective privileges during that operation, you can significantly mitigate potential attacks against your application.

Privilege bracketing (as this is called) allows you to carefully control sensitive information to ensure that it is not compromised. It's important because sometimes programs do need a higher level of privilege, but only for short durations of program execution. Privilege bracketing is not always the correct answer for an application, but sometimes it is best and only practical choice.

The key benefit of privilege bracketing is to narrow the window of a program's vulnerability so that it is as small as possible, reducing the damage any exploit can do. For example, in order for a process to be able to write to a file, it is only necessary for that file to be opened for write access. In other words, we would assert privileges only for the open() call (relinquishing them immediately after); they should not be asserted for the write() call.
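To make the bracketing concrete, a hedged, untested sketch against the Solaris privilege API (the path is illustrative; assumes file_dac_read is in the process's permitted set):

  #include <err.h>
  #include <fcntl.h>
  #include <priv.h>

  priv_set_t *ps = priv_allocset();
  if (ps == NULL)
      err(1, "priv_allocset");
  priv_emptyset(ps);
  priv_addset(ps, PRIV_FILE_DAC_READ);

  setppriv(PRIV_ON, PRIV_EFFECTIVE, ps);    /* assert just for open() */
  int fd = open("/etc/protected-data", O_RDONLY);
  setppriv(PRIV_OFF, PRIV_EFFECTIVE, ps);   /* relinquish immediately */
  priv_freeset(ps);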

> 4. sandboxing is addressed in other presentations: Earlier presentations explain why having tools to restrict the privileges of programs without source modification is inadequate; a common pattern for programs is that they require some privileges during start up, but then don't need them for the rest of execution; a source-unaware mechanism would have to allow the start-up privileges for the entire execution. Otherwise, it's not too different than existing priv-sep tools.

Yes, sandboxing is not a complete answer, but if you don't have the source code to an application (which happens often in an enterprise environment), it is one of the best ways to ensure that an application behaves as expected.

As for handling applications that only need certain privileges for startup, Solaris provides a mitigation for that via SMF (Service Management Facility). In particular, it's possible to have SMF allow an extended set of privileges to start the service, and after the service is online, then modify the effective set for the program. That may not work for all programs, but it's still important.

As a historical example, you could modify Apache to start with a greater set of privileges than needed after startup:

http://www.c0t0d0s0.org/archives/4075-Less-known-Solaris-fea...

> 5. pledge() isn't about administrative privileges, it's about code vulnerability mitigation. Security tools around administrative boundaries are important, sure. But that is fundamentally not the problem that pledge() is trying to solve; pledge is /mitigating/ against vulnerabilities in the /implementation/ of the program. It's saying "for the remainder of my execution, I should only do these types of operations; if I try to do anything else, I have been cracked & compromised."

Fundamentally, the operations a program can perform are an administrative consideration, not just an implementation consideration. For example, consider a libc function that historically used a specific syscall, but is later optimised to no longer require it (as the OpenBSD folks themselves point out as examples of things they are changing). If you represent the privilege in terms of the syscall, then when the implementation changes, either the program breaks or the program can now perform an operation you didn't previously want to allow.

Not only that, a privilege capability model needs to be represented in terms not only appropriate for a developer, but for an administrator as well. Especially since administrators will often require the ability to restrict the capabilities of programs for which they have no source access.

Solaris attempts to strike a balance between the two as you can see in the list of privileges here:

https://docs.oracle.com/cd/E53394_01/html/E54776/privileges-...

Also, this is precisely why both the ability to permanently and temporarily drop privileges is important. As an example, a program at startup could drop everything but the ability to read and write files permanently. Later, it could then further use privilege bracketing to temporarily drop the ability to read or write files (as appropriate) during certain parts of program execution.

In short, I think it's important to not limit perspective of privilege capability to a model where you treat the entire program as hostile -- privileges can be used not only by the administrator to ensure a program behaves as expected, but by the application itself to insulate itself from other bad actors.


Why do you feel this way when even OpenBSD hasn't fully worked out all the details (see the slides) or proven it will work well in practice?


Because I think whitelisting the syscalls at init and only being able to drop syscalls is a great idea.


I'm wondering how this works in a world of plugins (for example the Windows/OS X world, where programs like Autodesk 3ds Max/Maya, Adobe Photoshop, etc. use plugins)?

Or applications that embed many utils into one?


The simplest case, which is still pretty useful, would be to just have each application require the superset of the syscall functionality any of its components (including any plugins) might require. That would result in fairly broad permissions needed for some apps, but in most cases probably still less than "everything". Plus you get a lot of low-hanging fruit closed off in the rest of the applications, which I think is the main target here: not to harden Photoshop, but to harden the many things in the base system that look more like file(1).

There have been actual exploits (multiple ones!) in file(1), where a bug in parsing can result in arbitrary code execution. That's really a failing of the permissions model: file(1) is a program that does nothing but read a file and print a result, so buggy parsing code should have a failure mode no worse than either it crashing, or printing the wrong result. But as-is, since it has the full permissions of the user who ran it, it can do things like email someone your SSH keys, or delete your home directory, which is functionality the binary clearly doesn't need access to for legitimate operation.
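The fix pledge enables for a file(1)-shaped program is almost mechanical; a hedged sketch (OpenBSD's actual file(1) goes further, with privilege separation):

  #include <err.h>
  #include <fcntl.h>
  #include <unistd.h>

  int fd = open(path, O_RDONLY);
  if (fd == -1)
      err(1, "open: %s", path);
  /* from here on we only parse fd and print the result */
  if (pledge("stdio", NULL) == -1)
      err(1, "pledge");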


Applications that embed many utils into one almost always have a dispatch tree based on what they were called as and/or what their first argument is. In such an app, the app could figure out what job it's doing at the moment, and then call pledge() appropriately. Pledge() essentially says, "from this point forth, I pledge I will only need syscalls x, y, and z, so don't let me do anything else."

For plugins, that gets complicated. That's a case where there would need to be some reworking of how plugins are called, perhaps breaking the program into a series of plugins connected by pipes, where each individual program has its small set of abilities. But that's just an off the top of my head guess, there could very well be a better way of doing it.
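As a sketch of the dispatch idea (md5_main() and tar_main() are hypothetical applets; the promise strings are illustrative):

  #include <err.h>
  #include <libgen.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      const char *self = basename(argv[0]);

      if (strcmp(self, "md5") == 0) {
          if (pledge("stdio rpath", NULL) == -1)  /* read files, print hashes */
              err(1, "pledge");
          return md5_main(argc, argv);
      }
      if (strcmp(self, "tar") == 0) {
          if (pledge("stdio rpath wpath cpath", NULL) == -1)
              err(1, "pledge");
          return tar_main(argc, argv);
      }
      errx(1, "unknown applet: %s", self);
  }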


It's not much different from seccomp/systrace/apparmor/grsec RBAC/selinux in that regard. It's per process. So sure, if the plugin forks it could pledge(), much the same way the plugin could seccomp once forked. Otherwise the plugin's rules would be applied to the whole application.

All the same, even if the app used it with most syscalls enabled, it would reduce the attack surface.


Actually seccomp is per-thread. Small difference, but it does make some lighter use possible in case of plugins.


I toyed with a similar idea for Linux a while back. Posted about it but got busy and never followed up: https://lkml.org/lkml/2009/6/24/8 I'd forgotten about it entirely...


Is this a rename of the tame() function that was posted earlier?



Oh my mistake, I skipped the "About OpenBSD" slide because I already knew what OpenBSD was :).


Yes. From the second page:

>- formerly known as tame()


Why is pledge() int rather than void? All the examples exit() in one way or another, so why doesn't pledge() do that for them?


The examples are just examples; the uses in the tree are different. For example, ksh doesn't exit if pledge fails, it just prints an error about why it failed and keeps running (imagine how fun it would be if the shell just kept terminating). Other programs use their own logging to report the error; for example, httpd logs the error the same way it logs all other fatal errors before it exits.
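Something like this hedged sketch (not the actual ksh code; the promise string is illustrative):

  /* an interactive shell would rather limp along than die */
  if (pledge("stdio rpath wpath cpath fattr proc exec tty", NULL) == -1)
      warn("pledge");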



I wonder if there is any reason to use pledge() to move between program stages rather than exec(). It would require arranging your executables a little differently, but should be tractable and probably backwards compatible, and the increased resolution would be exposed to existing MAC frameworks.


Dismissing SELinux as "optional security is irrelevant" seems pretty silly, since just as one can choose not to use SELinux, they can equally choose not to use OpenBSD. Either way it comes down to the choice of the administrator.

OK, people clearly disagree - but I'm still not seeing it, so can someone please explain what's more optional about SELinux than OpenBSD? I mean, I'm not trying to make some kind of a gratuitous dig here, I'm trying to make a serious contribution to the discussion.

They're different kinds of mitigation mechanisms anyway, and could easily work together. pledge() (and seccomp-bpf) are mitigations intended to be applied by the application author, of the "I know my IRC client should never call ptrace()" sort. SELinux is a mitigation intended to be applied by the system administrator, of the "I know my ETL loader job should only need to read files labelled with the loader-input label, write to the directory labelled with the loader-temp label, and connect to the syslog and database sockets" sort.


OpenBSD developers have no control over other operating systems, but they can ensure that running OpenBSD means mandatory pledge. It is a given that statements about OpenBSD tech apply only when running OpenBSD.

You are invoking some pretty ridiculous semantics to dispute "optional".


SELinux is on by default now with Android. So that's good. I don't know when it will be on by default for the majority of desktop and server users though.


Probably worth noting that seccomp BPF programs are pretty far from Turing-complete.

Though they are a bit complex.


OMG! Comic Sans again.


That's not quite Comic Sans. They've actually found a more irritating font, well done OpenBSD!

Now, the misspellings "priviledge" and "seperation" ...


Pretty sure that's not Comic Sans.


These security hipsters think it makes them cool.


No, they just enjoy upsetting people who don't like the font :)


Linus is gonna need some ice for that burn on page 3! Jeez!



