There are over 100 comments accumulated on my last Leopard post. As usual, they’re better than the post itself. Since you’re probably in a hurry, I’ll spare you the effort of poring over them, and instead present our findings to date.
OS X Runtime Stack Security
A commenter asked if Leopard’s compiler included ProPolice. ProPolice (and/or SSP) is a C compiler extension that guards the call stack of a program, injecting tripwires onto the stack that will be set off by buffer overflows.
Leopard gcc ships with stack protection. There’s probably a simple answer about what OS X programs are compiled with it, but the best I can tell you is that some OS X programs appear to use it; you can see for yourself by loading a program in “gdb”, and disassembling some functions. SSP’d functions have an idiosyncratic prologue and call a “check stack” function in their epilogue.
Do we care? Meh. Stack protectors defend against the oldest, easiest-to-find memory corruption errors. You still find stack overflows in obscure enterprise code, or on embedded platforms that are hard to test. Also on AIX. But you’d be a little shocked to find one in privileged OS X code.
OS X Memory Randomization
A commenter asked if the OS X heap and stack were randomized. Stack memory holds the call stack, the chain of functions and subroutines in use at any given time, along with the local variables whose sizes are fixed when the program is compiled. Heap memory holds dynamic allocations, which depend on the program's inputs rather than on the code itself.
I could now waste your time with a discussion of how valuable stack and heap randomization is, but it’s a moot point: the OS X stack and heap don’t appear to be randomized.
Do we care? A little. Heap overflows are relatively common, because dynamic memory usage is always a bit more complicated than stack memory usage.
An interesting point was made that the Mach-O ABI was inherently hard to randomize. We had noted that even Leopard’s Library Randomization was imperfect, as it kept the dynamic linker (and, as Ralf pointed out, the program text) at an exposed fixed address. Until those problems are fixed, you might as well not randomize. The commenter basically predicts that it will be a while before this is resolved.
Do we care? Yes, in that Library Randomization is a major advertised security feature of Leopard. If you don’t randomize program text, it is straightforward to exploit memory corruption vulnerabilities.
W^X and Heap Security
Someone posited that the OS X memory model was now W^X. “Write XOR Execute” is an OpenBSD design idiom; it says that if something in memory is writeable, and therefore exposed to memory corruption, it should not at the same time be executable.
The OS X stack has been non-executable for quite a while. The OS X heap remains executable, a fact you can verify with a trivial piece of C code. Someone involved with PaX, the Linux runtime memory security extension, gave test results verifying that, and also showing that both the stack and the heap could be made executable by returning through the BSD “mprotect” system call.
Do we care? Yeah. This is an area where Leopard is noticeably lagging behind Vista. Read Marinescu’s talk at Black Hat; the Vista heap has an intricate protection scheme; Leopard seems to lack anything comparable.
OS X Sandboxes
Sandboxes, my favorite Leopard feature (one I’ll have more to say about later), allow users to write policies that firewall individual programs off from the rest of the operating system. You can use a Sandbox to prevent iChat from running any other programs, or from touching any sensitive files.
Sandboxes are apparently enforced by a kernel extension called “seatbelt”. Seatbelt is a cooler name than Sandbox. Seatbelt calls a program called “sandbox-compilerd” from the kernel when a sandboxed program runs. You’d want OS X to be careful with “sandbox-compilerd”, since it consumes complex input (whole Scheme programs) and runs out of the kernel. In GA Leopard, “sandbox-compilerd” is itself sandboxed (wooo a paradox) and runs under your own credentials.
Do we care? We do, but you shouldn’t; this is just trivia.
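For flavor, a Sandbox policy is a little Scheme-ish program, which is what "sandbox-compilerd" is chewing on. The sketch below is from memory, so the operation names and syntax may not match Leopard's exact vocabulary; it's just the shape of a policy that would pin an app down the way described above:

```
(version 1)
(allow default)
; hypothetical rules: keep the app from launching other programs
; and from reading keychain files
(deny process-exec)
(deny file-read* (regex "Library/Keychains"))
```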
Sandboxes and Watson’s Vulnerabilities
Sandboxes were inspired by RBAC features in other operating systems, most notably Niels Provos’ OpenBSD Systrace. Systrace has a well-known vulnerability, first documented by Robert Watson and published formally this year at Usenix WOOT. The problem is a classic TOCTTOU (time-of-check-to-time-of-use) race condition. As an example, Systrace goes to look up a file in the filesystem to see if you can touch it. Between the time Systrace OK’s the operation and the kernel actually performs it, you can swap the safe file with a sensitive one. The kernel’s second lookup will return a different result, which Systrace cannot verify.
Nobody (that we know of) has audited OS X Sandboxes for race conditions. It’s hard to know whether they are present. It wouldn’t surprise us either way.
Do we care? No, not really. First, it’s just speculation. Second, I don’t have any evidence that TOCTTOU races in kernel wrappers are ever actually exploited. Right now, someone actively beating OS X Sandboxing is not writing commodity virus programs; you did something to piss them off.
A Brief Interlude
It is taking me longer to write this up than I expected. Sorry!
The Leopard Firewall
The consensus opinion is that it’s a step backwards. Most notably, it doesn’t filter outbound connections. Multiple commenters note that you can get outbound filtering from programs like Little Snitch.
Do we care? We don’t, but our Moms do. Outbound filtering is more valuable than inbound filtering; it catches “phone-home” malware. It’s not that hard to implement, and I’m surprised Leopard doesn’t do it.
Code Signing
Apple says, “Leopard can use digital signatures to verify that an application hasn’t been changed since it was created.” You can create these signatures with the “codesign” tool, verify them on the command line with “codesign -v”, and display them with “codesign -dvv”. To create a key to sign them under, go to Keychain Access, select “Certificate Assistant” from the app menu, and generate a new “Code Signing” certificate.
Code signatures appear to be enforced from the TMSafetyNet kernel extension; I was just wrong about this (thanks, Ralf).
Awesome. Two problems, though.
First: I haven’t yet found a place that checks these signatures. I tried Parental Controls and I tried Saved Passwords in Safari, both times testing by corrupting the binary in the same fashion as a virus. Evidently, the only thing “protected” by signatures is the Keychain, and the “protection” means that instead of accessing the Keychain transparently, you get a confirmation dialog that looks substantially similar to a Keychain dialog you probably click through several times a week.
Second: Even if they were validated, you can still inject unsigned libraries into applications when they launch; this is a core feature of the dynamic linker, which you enable with the “DYLD_INSERT_LIBRARIES” environment variable.
Do we care? It is very, very, very hard to build systems that gain security from code signing. There are like 10 posts, each longer than this one, that could go into explaining why that is. So, our take is, “no”. There was no way this was going to be a straightforward security win for Leopard. You care to the extent that you are irritated with Apple for marketing hyperbole.
Parental Controls
Here’s an interesting one. You can lock down which executables an account can use. “Parental Controls” undersells this feature. Enterprises pay tens of dollars per desktop for aftermarket software that locks desktops down to trusted applications.
I assumed with a name like “Parental Controls” that the threat model was my 8 year old son. It’s not. Parental Controls are enforced in the kernel, which you can demonstrate by allowing an account Terminal.app and nothing else. Parental Controls will keep you from executing arbitrary programs; it’s enforced at execve()!
Here’s where it gets weird.
Terminal.app is not very useful without the several hundred Unix command line tools you invoke Terminal to get access to. And you can run these programs. They aren’t individually allowed or denied; that would be a nightmare to configure.
You can even execute the compiler, and build new programs. But you can’t execute them!
I originally thought, “Eureka! A place to actually witness Code Signing in action!” No such luck. Copy /bin/ls to /tmp (its signature remains intact), and you can’t run it. Copy “hello world”, with no signature, into “/bin” (as root, of course) and you can. This appears to be “trusted path execution”: programs in certain directories run; others must be individually allowed.
Unfortunately, the feature is broken in the same way Code Signing is. Want to run an arbitrary program under Parental Controls lockdown? Change its “main()” to a GCC “constructor” function, and compile it as a dylib. Then “DYLD_INSERT_LIBRARIES” it into any allowed program. Your code runs, and has full access to the system.
Do we care? Kind of a lot, yeah. Not that we’re disappointed. Actually, even though we seem to be able to walk past this feature, it works way better than we expected it to; it is one silly flaw away from parity with expensive aftermarket Windows tools. This feature should be fixed, exposed somewhere besides “Parental Controls”, and relabeled “Secure Desktop”.