What We’ve Since Learned About Leopard Security Features

Over 100 comments have accumulated on my last Leopard post. As usual, they’re better than the post itself. Since you’re probably in a hurry, I’ll spare you the effort of poring over them and instead present our findings to date.

OS X Runtime Stack Security

A commenter asked if Leopard’s compiler included ProPolice. ProPolice (and/or SSP, the “stack-smashing protector”) is a C compiler extension that guards the call stack of a program, injecting tripwires onto the stack that are set off by buffer overflows.

Leopard gcc ships with stack protection. There’s probably a simple answer to which OS X programs are compiled with it, but the best I can tell you is that some OS X programs appear to use it; you can see for yourself by loading a program in “gdb” and disassembling some functions. SSP’d functions have an idiosyncratic prologue and call a stack-checking function in their epilogue.
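
If you want to check a binary yourself, compile a toy program with the protector turned on and compare disassemblies. Here’s a minimal sketch (the file and function names are mine); build it with “gcc -fstack-protector”, load it in “gdb”, and “disassemble overflow_me”:

    /* canary.c -- a function SSP will instrument.
       A protected build loads a guard value in the prologue and calls
       __stack_chk_fail from the epilogue if the guard was clobbered;
       an unprotected build does neither. */
    #include <string.h>

    void overflow_me(const char *input) {
        char buf[64];           /* the stack buffer the canary sits behind */
        strcpy(buf, input);     /* classic unchecked copy */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            overflow_me(argv[1]);
        return 0;
    }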

Do we care? Meh. Stack protectors defend against the oldest, easiest-to-find memory corruption errors. You still find stack overflows in obscure enterprise code, or on embedded platforms that are hard to test. Also on AIX. But you’d be a little shocked to find one in privileged OS X code.

OS X Memory Randomization

A commenter asked if the OS X heap and stack were randomized. Stack memory stores the call stack, which in turn tracks the sequence of functions and subroutines in use at any given time; it also holds most of the variables a program knows about when it is compiled. Heap memory stores dynamic variables, which depend on the program’s inputs rather than on the code itself.

I could now waste your time with a discussion of how valuable stack and heap randomization is, but it’s a moot point: the OS X stack and heap don’t appear to be randomized.
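
Verifying this takes a few lines of C; here’s the sort of test we mean, and there’s nothing Leopard-specific about it. Run it several times: identical addresses on every run means nothing is being randomized.

    /* addrs.c -- print a stack, heap, and text address, then compare runs. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int on_stack;
        void *on_heap = malloc(16);
        printf("stack: %p  heap: %p  text: %p\n",
               (void *)&on_stack, on_heap, (void *)main);
        free(on_heap);
        return 0;
    }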

Do we care? A little. Heap overflows are relatively common, because dynamic memory usage is always a bit more complicated than stack memory usage.

Library Randomization

An interesting point was made that the Mach-O ABI is inherently hard to randomize. We had noted that even Leopard’s Library Randomization was imperfect, as it kept the dynamic linker (and, as Ralf pointed out, the program text) at an exposed fixed address. Until those problems are fixed, you might as well not randomize. The commenter basically predicts that it will be a while before this is resolved.

Do we care? Yes, in that Library Randomization is a major advertised security feature of Leopard. If you don’t randomize program text, it is straightforward to exploit memory corruption vulnerabilities.

W^X and Heap Security

Someone posited that the OS X memory model was now W^X. “Write XOR Execute” is an OpenBSD design idiom; it says that if something in memory is writeable, and therefore exposed to memory corruption, it should not at the same time be executable.

The OS X stack has been non-executable for quite a while. The OS X heap remains executable, a fact you can verify with a trivial piece of C code. Someone involved with PaX, the Linux runtime memory security extension, gave test results verifying that, and also showing that both the stack and the heap could be made executable by returning through the BSD “mprotect” system call.
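
Here’s roughly what that trivial piece of C looks like, assuming a 32-bit x86 build (0xc3 is the x86 “ret” opcode; per a comment below, 64-bit builds behave differently):

    /* heaptest.c -- jump into a malloc'd buffer to see if the heap executes.
       If the call returns, the heap page was executable; on a
       non-executable heap this dies with a bus or segmentation fault. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        unsigned char *buf = malloc(1);
        if (buf == NULL)
            return 1;
        buf[0] = 0xc3;                /* "ret" */
        ((void (*)(void))buf)();      /* call into the heap */
        printf("returned cleanly: the heap is executable\n");
        free(buf);
        return 0;
    }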

Do we care? Yeah. This is an area where Leopard is noticeably lagging behind Vista. Read Marinescu’s talk at Black Hat; the Vista heap has an intricate protection scheme; Leopard seems to lack anything comparable.

Sandboxes

OS X Sandboxes, my favorite Leopard feature and one I’ll have more to say about later, allow users to write policies that firewall the operating system off from individual programs. It is possible to use a Sandbox to prevent iChat from running any other programs, or touching any sensitive files.
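
For flavor, the policies are little Scheme programs, like the ones Leopard ships in /usr/share/sandbox, and you can try one out with “sandbox-exec”. The profile below is my own illustration, not one Apple ships, so check the operation names against the shipped profiles:

    ;; lockdown.sb -- illustrative only; run a program under it with:
    ;;   sandbox-exec -f lockdown.sb /path/to/program
    (version 1)
    (deny default)          ; start with nothing allowed
    (allow file-read*)      ; reading the filesystem is fine
    (allow network*)        ; a chat client needs the network
    (deny process-exec)     ; but it never launches another program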

Sandboxes are apparently enforced by a kernel extension called “seatbelt”. Seatbelt is a cooler name than Sandbox. Seatbelt calls a program called “sandbox-compilerd” from the kernel when a sandboxed program runs. You’d want OS X to be careful with “sandbox-compilerd”, since it consumes complex input (whole Scheme programs) and is invoked from the kernel. In GA Leopard, “sandbox-compilerd” is itself sandboxed (wooo, a paradox) and runs under your own credentials.

Do we care? We do, but you shouldn’t; this is just trivia.

Sandboxes and Watson’s Vulnerabilities

Sandboxes were inspired by RBAC features in other operating systems, most notably Niels Provos’ OpenBSD Systrace. Systrace has a well-known vulnerability, first documented by Robert Watson and published formally this year at Usenix WOOT. The problem is a classic TOCTTOU (time-of-check-to-time-of-use) race condition. As an example, Systrace looks up a file in the filesystem to see if you’re allowed to touch it. Between the time Systrace OKs the operation and the time the kernel actually performs it, you can swap the safe file for a sensitive one. The kernel’s second lookup returns a different result, which Systrace cannot verify.
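
To make the shape of the attack concrete, here’s a schematic in C. It’s illustrative, not a working exploit against anything, and the paths are made up: one thread keeps flipping a symlink while the other hammers the wrapped system call, and if a flip lands between the wrapper’s check and the kernel’s lookup, the “safe” open lands on the sensitive file.

    /* toctou.c -- schematic of a check-versus-use race against a wrapper. */
    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>

    static void *flipper(void *arg) {
        (void)arg;
        for (;;) {
            /* what the wrapper sees when it checks the path... */
            unlink("/tmp/victim");
            symlink("/tmp/harmless", "/tmp/victim");
            /* ...versus what the kernel sees when it acts on it */
            unlink("/tmp/victim");
            symlink("/private/etc/sensitive", "/tmp/victim");
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, flipper, NULL);
        for (;;) {
            int fd = open("/tmp/victim", O_RDONLY);  /* the monitored call */
            if (fd >= 0)
                close(fd);  /* win the race and fd names the sensitive file */
        }
    }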

Nobody (that we know of) has audited OS X Sandboxes for race conditions. It’s hard to know whether they are present. It wouldn’t surprise us either way.

Do we care? No, not really. First, it’s just speculation. Second, I don’t have any evidence that TOCTTOU races in kernel wrappers are ever actually exploited. Right now, someone actively beating OS X Sandboxing is not writing commodity virus programs; you did something to piss them off.

A Brief Interlude

It is taking me longer to write this up than I expected. Sorry!

The Leopard Firewall

The consensus opinion is that it’s a step backwards. Most notably, it doesn’t filter outbound connections. Multiple commenters note that you can get outbound filtering from programs like Little Snitch.

Do we care? We don’t, but our Moms do. Outbound filtering is more valuable than inbound filtering; it catches “phone-home” malware. It’s not that hard to implement, and I’m surprised Leopard doesn’t do it.

Code Signing

Apple says, “Leopard can use digital signatures to verify that an application hasn’t been changed since it was created.” You can create these signatures with the “codesign” tool, verify them on the command line with “codesign -v”, and display them with “codesign -dvv”. To create a key to sign them under, go to Keychain Access, select “Certificate Assistant” from the app menu, and generate a new “Code Signing” certificate.
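
The whole round trip looks something like this; the certificate name is whatever you picked in Certificate Assistant, and the exact failure text may differ:

    $ codesign -s "My Code Signing Cert" Hello.app   # sign it
    $ codesign -v Hello.app                          # verify: silence means valid
    $ codesign -dvv Hello.app                        # display identifier and signer
    $ echo x >> Hello.app/Contents/MacOS/Hello       # corrupt the binary, virus-style
    $ codesign -v Hello.app                          # now reports a modified binary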

Correction: I originally wrote that code signatures appear to be enforced from the TMSafetyNet kernel extension. I was just wrong about this, thanks Ralf; that kext belongs to Time Machine, and signatures are actually verified in the xnu kernel itself.

Awesome. Two problems, though.

First: I haven’t yet found a place that checks these signatures. I tried Parental Controls and I tried Saved Passwords in Safari, both times testing by corrupting the binary in the same fashion as a virus. Evidently, the only thing “protected” by signatures is the Keychain, and the “protection” means that instead of accessing the Keychain transparently, you get a confirmation dialog that looks substantially similar to a Keychain dialog you probably click through several times a week.

Second: Even if they were validated, you can still inject unsigned libraries into applications when they launch; this is a core feature of the dynamic linker, which you enable with the “DYLD_INSERT_LIBRARIES” environment variable.

Do we care? It is very, very, very hard to build systems that gain security from code signing. There are like 10 posts, each longer than this one, that could go into explaining why that is. So, our take is, “no”. There was no way this was going to be a straightforward security win for Leopard. You care to the extent that you are irritated with Apple for marketing hyperbole.

Parental Controls

Here’s an interesting one. You can lock down what executables an account can use. “Parental Controls” undersells this feature. Enterprises pay tens of dollars per desktop for aftermarket software to lock down desktops to trusted applications.

I assumed, with a name like “Parental Controls”, that the threat model was my 8-year-old son. It’s not. Parental Controls are enforced in the kernel, which you can demonstrate by allowing an account Terminal.app and nothing else. Parental Controls will keep you from executing arbitrary programs; it’s enforced at execve()!

Here’s where it gets weird.

Terminal.app is not very useful without the several hundred Unix command line tools you invoke Terminal to get access to. And you can run these programs. They aren’t individually allowed or denied; that would be a nightmare to configure.

You can even execute the compiler, and build new programs. But you can’t execute them!

I originally thought, “Eureka! A place to actually witness Code Signing in action!” No such luck. Copy /bin/ls to /tmp (its signature remains intact), and you can’t run it. Copy “hello world”, with no signature, into /bin (as root, of course) and you can. This appears to be “trusted path execution”: programs in certain directories run; others must be individually allowed.

Unfortunately, the feature is broken in the same way Code Signing is. Want to run an arbitrary program under Parental Controls lockdown? Change its “main()” to a GCC “constructor” function, and compile it as a dylib. Then “DYLD_INSERT_LIBRARIES” it into any allowed program. Your code runs, and has full access to the system.
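
In case that sounds abstract, the whole bypass fits in a dozen lines. A sketch, with file names of my own invention:

    /* inject.c -- build: gcc -dynamiclib -o inject.dylib inject.c
       run inside any allowed program, e.g.:
         DYLD_INSERT_LIBRARIES=./inject.dylib /bin/ls
       The constructor fires before the host's main(), so this code runs
       with whatever access the allowed program has. */
    #include <stdio.h>

    __attribute__((constructor))
    static void injected(void) {
        printf("arbitrary code, running under lockdown\n");
    }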

Do we care? Kind of a lot, yeah. Not that we’re disappointed. Actually, even though we seem to be able to walk past this feature, it works way better than we expected it to; it is one silly flaw away from parity with expensive aftermarket Windows tools. This feature should be fixed, exposed somewhere besides “Parental Controls”, and relabeled “Secure Desktop”.

23 Comments so far

  1. Theo de Raadt November 1st, 2007 7:15 pm

    It is strange how people keep saying that Robert Watson first disclosed the systrace problem. Quoting you:

    “Systrace has a well-known vulnerability, first documented by Robert Watson and published formally this year at Usenix WOOT.”

    First documented? Let me quote the systrace(1) manual page itself:

    BUGS
    Applications that use clone()-like system calls to share the complete address space between processes may be able to replace system call arguments after they have been evaluated by systrace and escape policy enforcement.

    It says the same thing as Robert Watson’s paper. That comment was added to the manual page on July 21, 2002, by Niels Provos, the original author.

    It is a hard problem, which is why it has not been fixed. But come on, the problem was documented since nearly day 1.
    We’d welcome fixes for it, but try to keep the diff under 12,000 lines, ok? That’s how hard the problem is..

  2. Ralf November 1st, 2007 7:18 pm

    Thomas, a quick correction: the code signatures are not enforced by the TMSafetyNet kext; that kext belongs to Time Machine. Code signatures are verified by the xnu kernel (not by a kext, but by the actual /mach_kernel that you boot, the core).
    Check cs_invalid_page() in bsd/kern/kern_proc.c (xnu-1228), cs_validate_page() in bsd/kern/ubc_subr.c and load_code_signature() in bsd/kern/mach_loader.c. Well, the kernel actually only calculates SHA-1 message digests on the (code) pages and compares them to known ones, I do not know yet which userspace program does the certificate validation.

    Also, reading your comment “you’d be a little shocked to find one in privileged OS X code,” I wasn’t quite sure whether I missed the irony again. But no, I’m not shocked, not at all. Come to think of it, I totally expected today’s Secunia advisory on CUPS (it was a 1-byte stack overwrite, a little bit harder to exploit but potentially possible on OSX as well). Actually, I expect more of the same for CUPS… it’s certainly one piece of software at the top of my to-sandbox list.

  3. Dave G. November 1st, 2007 7:21 pm

    I actually think stack protection is still relevant, particularly for the client-side attack surface of large applications like Safari, iChat, and Mail. To say nothing of all the third-party developers writing for Mac OS X who are not security conscious.

    I also generally like improvements where compilers and operating systems protect me from potentially unsafe code.

  4. Thomas Ptacek November 1st, 2007 7:25 pm

    I agree with Theo.

  5. Ralf November 1st, 2007 7:40 pm

    Dave: Actually, I think Apple screwed up big time with stack protection in Leopard. My testing so far is preliminary, so please take the following with one or possibly more grain(s) of salt.

    I’ve looked at Xcode today. Long. I haven’t been able to find a single option to enable stack protection, neither for a project nor globally. Nothing in the help either. What’s Apple’s famed dev environment, the Visual Studio for OSX? Xcode, I thought.

    Also, compiling my own C code with -fstack-protector (using the system gcc) yields binaries that have undefined references to __stack_chk_fail and __stack_chk_guard. I don’t see these undefined references in Safari, iChat, Mail, or CUPS.

    @Thomas: mDNSResponder doesn’t have these either, which makes me doubt your earlier statement. What did you see that made you claim, in the first Leopard post, that it was protected?

    Actually, I have concerns that even with stack protection enabled, due to the way dyld works, things may work suboptimally. But that needs further research still.

  6. Theo de Raadt November 1st, 2007 7:41 pm

    On its own, stack protection is barely useful.

    But it is still very relevant when used with other things:

    1) Best-effort ASLR, especially top-of-stack and (as you mention) the shlib linker.
    2) W^X on the _entire_ address space (the mprotect component of this is often overstated, though, as it only gets involved in the 2nd or 3rd step of an attack), and the combination of all these things makes lib-return very hard.
    3) A sprinkling of resistance in libc, such as malloc/free paranoia, dtor/atexit carefulness, etc.
    4) PIE, just because it affects the known values on your stack SO MUCH..

    If you mix all 4 of these things together with compiler-assisted stack protection, it is a very difficult (hostile) programming environment for an attacker.

    Those are the most effective schemes. Other crazy schemes have gone through our heads over the years, such as malloc padding things to unmapped pages exactly, or non-readable code segments (on platforms that support it, but you need to fix the switch-table code in compilers…), but most fancy things start causing problems. The amount of non-security third-party bugs you start helping people fix gets unmanageable…

  7. Thomas Ptacek November 1st, 2007 7:41 pm

    … though do note, Theo, that Watson “documented” this in a mailing list post like 7 years ago.

  8. Thomas Ptacek November 1st, 2007 7:42 pm

    Ralf: I am guessing that you are right and I’m wrong. I think I may have been misinterpreting the offset call at the beginning of subroutines (a PIE artifact).

  9. Theo de Raadt November 1st, 2007 7:47 pm

    OK, you beat me. 7 years, by the way, is older than systrace(1) itself… it being only 5 years old.

    Unless you mean the generic TOCTOU problem. Well, that was known about way more than 7 years ago and documented in piles of other academic papers. It might be interesting to see how far back that goes.

    It is access(2) or stat(2) all over again.

  10. Thomas Ptacek November 1st, 2007 7:53 pm

    I agree that the underlying class of vulnerabilities is ancient. I’m not standing by the number “7”; I’m just noting that when we vetted the paper for WOOT, it was noted that there was an old mailing list post behind it, making it a bit weird as a new piece of research.

  11. jf November 2nd, 2007 4:41 am

    Hrm the SSP/ProPolice is confirmed? From what I saw they shipped with GCC 4.0.1, and SSP went into mainline in GCC 4.1?

    I’m guessing from what someone else commented here they attempted to put the protection in themselves, and screwed it up?

  12. Ralf (a.k.a. someone else) November 2nd, 2007 12:55 pm

    jf: yes, SSP support in the OSX 10.5 system gcc is confirmed. Have a look at the gcc-5465 source to see for yourself, especially gcc/ChangeLog.apple. It is not present on 10.4.x gcc (for which the src is available as well) nor is it present in the stock gcc 4.0.1.

    It does not look like Apple compiled anything with -fstack-protector. Reasons are unknown. For an actual implementation of the stack protector you have to look into libgcc-8.1 (which, funnily enough, is derived from gcc-4.2.0).

  13. Hal B November 2nd, 2007 1:49 pm

    The heap is NX when you compile 64-bit.

    Try compiling your test program with
    gcc -arch x86_64 test.c

    This would require a 64-bit processor like the Core 2 Duo or whatever comes with the Mac Pro.

    Also, the main executable (but not dyld) is randomized if you compile with -Wl,-pie.

  14. Hal B November 2nd, 2007 1:53 pm

    “I could now waste your time with a discussion of how valuable stack and heap randomization is, but it’s a moot point: the OS X stack and heap don’t appear to be randomized.”

    I’d be interested in a discussion of that value (or lack thereof).

  15. Thomas Ptacek November 2nd, 2007 2:03 pm

    Start with these points:

    - All sorts of subtle little bugs that people don’t really audit for will disclose either (a) actual offsets into the heap or (b) the secrets used to generate those offsets.

    - Inputs to programs can drastically influence the predictability of allocations even when they are randomized.

    - In a significant number of cases, you can leverage control of a CPU register and any known large span of opcodes to land code where you need to.

    - When you start randomizing the heap, you wind up in a losing battle against performance (the biggest problem in allocator design is fragmentation).

    - Retries.

    I know the conventional wisdom is that, done right, ASLR is very effective. And I do less shellcoding than almost anyone else involved in this argument. But I’m not sold on ASLR, and I am sold on things like Sandboxes and Systrace.

  16. Niels Provos November 3rd, 2007 11:46 am

    I am glad that Systrace is providing opportunity for such stimulating discussions. As Theo stated, the problems that Robert described in his WOOT paper were all well known. Robert’s contribution was a concise summary of the various problems.

    However, the TOCTOU issues in Systrace and their solution were even mentioned in the Systrace paper. The original implementation used a look-aside buffer for copyin. As a result, the system call arguments were stored in kernel address space and could not be changed by any sandboxed process.

    Other issues like flipping symlinks were resolved in a similar way. Once Systrace has canonicalized the path, the kernel is told that the path may not contain symlinks.

    If people are really interested in this topic, I recommend reading Tal Garfinkel’s paper on the Ostia system (2003) which presents a similar list of problems as those presented by Robert.

    Thomas, I am still waiting on your “Virtual Honeypots” book review :-)

  17. Thomas Ptacek November 3rd, 2007 1:25 pm

    Niels: it’s coming! The good news is, like 4 different Matasano people have now had a hand in it. =)

  18. Theo de Raadt November 3rd, 2007 2:45 pm

    A copyin buffer for the system call arguments is not nearly enough to solve this problem. It requires nearly an entire rewrite of how the kernel handles system calls.

    ioctl(), fcntl(), bind(), connect() and a host of other system calls have arguments which point to a buffer which contains the important stuff; those buffers are still very much attackable.

    ioctl() actually is probably the hardest. In most BSD-derivatives the size is encoded in the 2nd argument, but the interpretation of the 3rd argument is not done until you hit deep driver code.

    A simple approach might be to “lock” the userland address space in such situations to provide atomicity, but that is very SMP hostile if not done perfectly.

    If the problem was simple it would have been solved a long time ago.

  19. Thomas Ptacek November 3rd, 2007 3:20 pm

    It doesn’t seem like the answer is “canonicalizing” the ioctl payload between userland and time-of-use. It seems to me like you can solve 95% of this problem by not letting untrusted code open sensitive devices or issue sensitive ioctl calls.

    There is certainly some trivial case of an ioctl that every program needs to issue that has an ambiguous interpretation, and I am missing it. What is it?

    Great stuff, though. Thanks!

  20. Robert Watson November 7th, 2007 8:54 pm

    > First documented? Let me quote the systrace(1) manual page itself:
    >
    > BUGS
    > Applications that use clone()-like system calls to share the complete address
    > space between processes may be able to replace system call arguments after
    > they have been evaluated by systrace and escape policy enforcement.
    >
    > It says the same thing as Robert Watson’s paper. That comment was added to
    > the manual page on July 21, 2002, by Niels Provos, the original author.

    Just to clarify history here slightly: the man page comment was added by Niels shortly after my private e-mail to him on July 19th, 2002 raising these issues and expressing my concern over the potential vulnerability of Systrace policies to such attacks.

    The point of the recent paper was to more thoroughly explore this class of vulnerabilities and previously undocumented exploit techniques (this being a workshop on exploiting vulnerabilities), not just in Systrace but in the general class of software wrappers used to enforce security policies. It turns out many anti-virus engines suffer from exactly the same class of problem, and many end users depend heavily on the reliability of anti-virus engines. It is very important that weaknesses in the approach, and especially the ease of exploiting the vulnerability, be understood by the more general software authoring community. Unlike OpenBSD users, these end-users will be significantly less sophisticated when it comes to avoiding downloading and running binaries from unreliable sources, and they don’t have a firm OS foundation to protect them, hence running with anti-virus software in the first place.

    Robert Watson

  21. chris holland November 14th, 2007 3:46 am

    The Parental Controls feature, or at least the portion of it that allows an admin user to create a Safe account under which only specific apps can be executed, has been in place since, I believe, Panther, but was just never really touted.

    I’d briefly mentioned it here:

    http://theappleblog.com/2005/02/15/best-from-apple-protecting-computer/

  22. bobdole November 17th, 2007 12:05 am

    I’m just curious about the firewall comment… what OS comes with an outbound firewall enabled by default? It seems more like something that was on your wishlist, which you’re only mentioning because you’re disappointed… It seems like there are plenty of other things you could put on that same wishlist and call out in this article :S

  23. anonymous December 3rd, 2007 5:28 pm

    Concerning the filtering of outgoing connections: there is an open source project at sourceforge.net that aims at that problem (http://sourceforge.net/projects/ppfilter).
