I'm extrapolating a bit here, since it's not that clear from the slides whether this is precisely what he meant, but I interpreted him to be criticizing optional security on the user side, which SELinux is. Since it's a set of user-configurable local system policies, users can, and often do, just use the most permissive policy possible to avoid having to debug SELinux problems. pledge() is optional from the perspective of the developer, but not the user, so once some piece of software implements it, it gets the protections without users having to set up an optional security policy, or being able to disable the one-and-only-one security policy (short of editing the pledge() calls out of the software and recompiling). And I gather that the OpenBSD developers will be patching software in the ports tree themselves, even if upstream doesn't, so that the OpenBSD version of as much software as possible uses it.
In practice SELinux policies are typically maintained by distros, which seems similar to OpenBSD adding pledge() to ports. I'll grant that pledge() will likely have an easier time expressing meaningful policies for many apps, though, since it is applied after initialization and with knowledge of command-line parameters, etc. OTOH, the set of policies expressible with pledge() is much more limited.
But I really don't believe that user optional vs. developer optional makes a difference. The fact is that most app developers do not care to constrain themselves with mitigation layers. Most probably have no idea that this is even a thing they should consider, and of those that do most have other things they'd rather think about. Mitigation layers don't add new features and only fix hypothetical bugs which, sadly, most developers just don't care about.
The problem with SELinux is that it often gets turned off by admins at the first hint of resistance. E.g. Apache won't read from the non-standard directory you've decided to put your app in? Off goes SELinux.
Building lowest-common-denominator checks into the application itself, checks the app developer knows won't get in the way, makes it less likely that they'll get disabled.
E.g. your web server could disable filesystem access to paths it doesn't need after having read its config files and determined where it's going to log and where it's going to serve files from, so that things keep working as expected for users, possibly making exceptions for really stupid things (like exposing /etc). That would reduce the chance that users start looking for ways to just turn it off.
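A minimal sketch of that init-then-restrict pattern with pledge() on OpenBSD (pledge() restricts whole classes of syscalls rather than individual paths, and the config path and promise set here are invented for illustration):

    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Initialization phase: read config, open logs, bind sockets --
         * anything that needs broad filesystem and network access. */
        FILE *cfg = fopen("/etc/myhttpd.conf", "r"); /* hypothetical path */
        if (cfg == NULL)
            err(1, "fopen");
        /* ... parse config, open document root, set up listeners ... */
        fclose(cfg);

        /* From here on, promise to use only stdio, read-only file
         * access, and network sockets; everything else is off the table. */
        if (pledge("stdio rpath inet", NULL) == -1)
            err(1, "pledge");

        /* ... request-serving loop runs under the reduced set ... */
        return 0;
    }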
That makes the two approaches complementary.
I agree with you that most app developers do not care, but that's beside the point for OpenBSD: they care, and they control most of their own userland.
And you don't really need "most" apps to do it anyway. We'd get far just by having most of the highest profile internet facing server applications support it.
I haven't really read the thing, but based on the comments here, see if the following makes sense.
Let's say I'm the author of the Apache HTTP server and I want to make sure that malicious attacks on the server don't escalate into remote code execution on the machine, so I use pledge (or something similar) to sandbox my own code. Taken from another angle: when I know that my program deals with untrusted input, can I sandbox it to ensure that if the untrusted input escapes the security implemented in my program logic, it is still sandboxed by the OS, based on a policy set by me and not the user?
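To make the enforcement model concrete, here's a minimal sketch (assuming an OpenBSD box): once a promise set is in effect, a violating syscall doesn't just fail, the kernel kills the process with an uncatchable SIGABRT.

    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Promise only basic stdio; no general filesystem access. */
        if (pledge("stdio", NULL) == -1)
            err(1, "pledge");

        /* This open() breaks the promise: instead of returning an
         * error, the kernel aborts the process and dumps core. */
        open("/etc/passwd", O_RDONLY);

        return 0; /* never reached */
    }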
But what's the right policy to set for Apache? Your PHP or whatever code running under Apache could need to do any arbitrary thing. So the pledge would need to be configurable. Probably many PHP developers and sysadmins wouldn't know how to configure it and/or wouldn't care, so they'd just turn it off, just like with SELinux.
Moreover, your Apache server running your PHP web app probably legitimately needs access to that web app's entire database, so you can't sandbox that access away no matter what you do. If someone hits you with a remote code execution, then your root filesystem may be fine but your database has now been compromised, and that's probably worse.
OTOH if you're running Apache as a simple static content server, then yeah, pledge() could provide some nice hardening.
Pledge is not intended for every possible use case; that's why it's part of the program code itself.
If the programmer themselves can make an intelligent decision about if and when to invoke pledge, rather than relying on some predefined policy, you don't have to worry about every single use case in existence, and thus don't suffer the massively overwrought interfaces that coverage requires. In the least-effort case, all a programmer has to do is delay pledge calls until after the problematic functionality, or perhaps not use pledge at all.
This is all while obtaining roughly equivalent benefits to something like SELinux in the huge majority of cases.
The primary goal of pledge is to make using it as simple as it can possibly be, so it actually gets used.
There's still plenty you could meaningfully restrict even for a general Apache setup:
- Disallow reads of .htpasswd (this would require Apache to rely on a helper to do authentication out-of-process, so not that simple).
- Disallow writes to .htpasswd and .htaccess.
- Disallow writes to all config files evaluated by Apache (if they happen to be outside of /etc).
- Disallow writes to /bin, /sbin, /usr.
- By default, disallow network connections to most ports (and yes, you can't prevent connections to the user's databases, but preventing an attacker from port-scanning your internal network and connecting to other services that may not be sufficiently secured would still be helpful).
- Prevent inbound connections (e.g. let's say someone gets local code execution as the Apache user and the server isn't sufficiently firewalled; congratulations, you're now running a remote shell the attacker can use to do more in-depth probes for other holes).
- By default, disallow writes outside of document roots and a reasonably permissive set of scratch directories (this would probably need a switch to disable it, but most won't need it).
Apache could also easily read its config before applying the rules, so adapting the rules to the config files is possible.
(some of the above will hit things like panel applications etc., but the vast majority of users won't run into them and so won't have a reason to disable them; unlike with SELinux, the app developers also have the option of not providing a way to blanket-disable the security, instead providing config directives to whitelist directories, which fits cleanly into the existing config system).
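pledge() alone can't express per-path rules like these; OpenBSD later added unveil() for exactly that. As a hedged sketch (the helper and paths are invented, not Apache's real layout) of how a few of the rules above might be applied once the config has been read:

    #include <unistd.h>

    /* Hypothetical helper: called after the config has been parsed. */
    int
    apply_fs_rules(const char *docroot, const char *logdir)
    {
        if (unveil(docroot, "r") == -1)   /* serve files read-only */
            return -1;
        if (unveil(logdir, "rwc") == -1)  /* logs stay writable */
            return -1;
        if (unveil("/tmp", "rwc") == -1)  /* scratch space */
            return -1;
        /* Lock the list: everything else (/etc, /bin, config files,
         * other users' data) becomes invisible to this process. */
        return unveil(NULL, NULL);
    }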
But for a very specific example of where pledge could have helped and where it'd have been much easier, here's a report of a remote root exploit in exim [1].
This root exploit consisted of using a buffer overflow to overwrite a config file that would then be evaluated, including macro definitions, as root.
If exim had been able to deny the inbound SMTP part the ability to write anywhere but the mail spool, this buffer overflow wouldn't have been exploitable. (And yes, this case is simple enough that it would have been achievable with chroot() too, so it's more of a generic example of where voluntary restrictions are useful, not just pledge in particular.)
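As a generic sketch of that kind of voluntary restriction (not exim's actual code; the paths and promises are illustrative): confine the network-facing process's filesystem view to the spool with chroot(), then drop everything it doesn't need with pledge(). Overwriting a config file outside the spool is then impossible even with code execution.

    #include <err.h>
    #include <unistd.h>

    /* Run before handling any untrusted SMTP input. */
    static void
    confine_smtp_receiver(void)
    {
        /* Filesystem view shrinks to the mail spool... */
        if (chroot("/var/spool/mail") == -1 || chdir("/") == -1)
            err(1, "chroot");
        /* ...and only stdio, sockets, and file I/O within that
         * restricted view remain available afterwards. */
        if (pledge("stdio inet rpath wpath cpath", NULL) == -1)
            err(1, "pledge");
    }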
You are right: if your web app requires a database connection, the OS has to allow it. That is where application-level security comes in. The OS cannot be responsible for everything; the application has to be written securely as well.
This is complementary (and a bit orthogonal) to pledge().
> I don't see how pledge is different from Capsicum, which he criticises on this basis; once it is compiled in you can't disable it.
This is addressed in the slides. Capsicum is 5 years old and used in 12 programs because it is difficult to implement. Pledge is 6 months old and used in over 400 programs already.