At GUADEC 2013 in Brno, Czech
Republic, Stef Walter presented his
recent work to improve the security features of GNOME by removing
problematic—and frequently ignored—"security features."
The gist of Walter's approach is that interrupting users to force them
to make a security decision produces the wrong result most of the
time; it is far better to try to determine the user's intent for the
task at hand, and to design the application to work correctly without
intervention. This is a fairly abstract notion, but Walter presented
three concrete examples of it in action.
The users and the humans
He started off the session by tweaking the standard security
developer's notion of "the user." A "user," he said, is someone who
frequently annoys security people: users click on the wrong things,
fall for phishing attacks, and make plenty of other mistakes. It is
better to think of users as "human beings,"
because "human beings" are active, creative, and use their computers
to do things—although they also get overwhelmed when faced with
too much information at once.
This is where security design enters into the picture. Humans' brains
filter out extraneous information, on a constant basis, as part of
making sense of the world. So developers should not be surprised when
those humans tune out or dismiss dialog boxes, for example. This means that
"if you force the user to be part of the security system" (primarily
by forcing the user to make security decisions), "you're gonna have a
really bad time." He likened the problem to a doctor who gives the
patient all of the possible treatment options: the patient will get
frustrated and ask "what would
you do?" Software developers need to be prepared to make a strong
recommendation, rather than presenting all of the choices to the user.
Walter then had a few bits of wisdom to share from this approach to
security design. First, he said, the full extent of a human's
involvement in security should be to identify themselves. You can ask them
for a password to prove who they are, but after that they should not
be interrupted with questions about security policy. Next, it is
important to remember that "professional users" are not different in
this regard. By "professionals" he seemed to mean developers, system
administrators, and others with knowledge of security systems. But
just because they have this knowledge does not mean they should be
interrupted.
That is because the worst possible time to ask the user to make a
risky decision is when they are in the middle of trying to do
something else, he said. "You're going to get results that are worse
than random chance."
Application to applications
For developers, Walter offered two design maxims. First:
Prompts are dubious, he said. If you are refactoring your
code and you see a user prompt, regard it with suspicion, asking if
you really need to prompt the user for a response. The end goal, he
said, should be to get rid of Yes/No prompts.
The second maxim follows from the first: Security prompts are
wrong. Or at least they are wrong 99% of the time or more, he
said. Sure, you ask for a password, but that is an identification
prompt, and passwords are an unfortunate fact of life. But prompts
that ask questions about security, like "Do you want to continue?" or
"Do you want to ignore this bad certificate?" are wrong. Furthermore,
he added, if you then make the user's choice permanent, you add insult
to injury.
He gave several examples of this bad design pattern, including the
all-too-familiar untrusted-certificate prompt from the web browser,
the "this software is signed by an untrusted provider" prompt from a
package manager, and the "a new update is available that fixes your
problem, please run the following command" prompt from Fedora's automatic bug reporting
tool.
The correct approach, he said, is instead to stop interrupting the user, let
the user take some action that expresses their intent, and then make a
decision based on that intent. In other words, figure out what the
user is trying to do, and design the software so that they can express
that intent while working.
A positive example in this regard is Android's Intents system,
which he called ripe with potential for getting things wrong, but
which actually gets them right. For example, the "file open" Intent
could prompt the user with a bad dialog of the form "Application X has
requested read/write access to file /foo/bar/baz. Continue? Disallow?"
Instead, it simply opens the file chooser and lets the user
select the desired file. Thus the user is asked to take a clear
action rather than to answer a security-policy question.
A second, theoretical example would be the potentially private
information in the Exif tags of a photo. If the user starts to upload
a photo, the wrong approach would be to interrupt with a dialog asking
if the user is aware that there is private information in the Exif
tags. The better approach is simply to show the information (e.g.,
geographic location and a detailed timestamp) with the photo and make it
easy to clear out the information with a button click.
The fix is in
Walter then showed off three new pieces of work he is developing to
address just such security-interruption problems. The first is the
removal of untrusted-certificate prompts. This garnered a round of
applause from the audience, although they were a bit more skeptical of
Walter's solution, which is to simply drop the connection.
Dropping the connection is usually the correct behavior on the
browser's part, he said, since the certificate problem is either an attack or a
server-side misconfiguration. But there is one major class of
exception, he added: enterprise certificate authorities (CAs). In
these situations, an enterprise deploys an "anchor" certificate for
its network which is not known to browsers out of the box. By adding
support for managing enterprise CAs, GNOME can handle these situations
without bringing back the untrusted certificate prompt.
Walter's solution is p11-kit-trust,
which implements a shared "Trust Store" where any crypto library can
store certificates, blacklists, credentials, or other information, and
they will automatically be accessible to all applications. So far,
NSS and GnuTLS support the Trust Store, with a temporary
workaround in place for OpenSSL and Java. Packages are already
available for Debian and Fedora. There are command-line tools for
administrators to add new certificates to the store, but there are not
yet GUI tools or documentation. The same tools, he said, should be
used for installing test certificates, personal or self-signed
certificates, and other use-cases encountered by "professional" users.
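The command-line tools in question are p11-kit's trust(1) utility; a rough sketch of the administrator workflow might look like the following (the certificate paths are hypothetical examples, and the exact options depend on the installed p11-kit version and require appropriate privileges):

```shell
# List the anchors and blacklist entries currently in the Trust Store
trust list

# Store an enterprise CA certificate as a new trust anchor
# (the path is a hypothetical example)
trust anchor --store /etc/pki/enterprise-ca.pem

# Remove the anchor again when it is no longer needed
trust anchor --remove /etc/pki/enterprise-ca.pem
```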
The second new project is a change to how applications store
passwords. Right now, gnome-keyring stores all passwords for
all applications, but Walter noted that this is really surprising to
users, particularly when they learn that any application can request
any other application's stored passwords. The user's expectation, he
said, is that passwords are "account data" and would be stored with
other account information for the application. That is true, he
observed, but it has not been done in practice because there is not a
reliable way to encrypt all of this per-application storage.
The solution is libsecret, which
applications can use to encrypt and store passwords with their other
account information. Libsecret uses the Linux kernel keyring to hold
a session key that the applications request to use for encrypting
their saved passwords. Normally this session key is derived at the
start of the session from the user's login password, but other values
can also be returned to applications for policy reasons. Returning a
blank key, Walter said, means "store your data in the clear," while
not returning any value means the application is not permitted to save
data.
The third new feature Walter is working on is a solution to a
GNOME annoyance in which the user is prompted at login time for the
password, even if they have logged in via another method (such as
fingerprint, PIN, or auto-login). The cause of this re-authentication
is that GNOME needs the user password to decrypt secret data; the same
double-step occurs when a user is prompted once for their password
when unlocking an encrypted hard disk, and again when logging in to
the session.
Walter's solution is a pluggable authentication module (PAM) called
pam_unsuck that, again, relies on the kernel keyring. The
kernel keyring will hold the user's password after login so it can be
reused. If an account does not use any password to log in, a password
will be created for it and saved in hardware-protected storage (where
possible). He noted that the decision to use auto-login,
fingerprints, or PINs already constitutes the user's conscious choice
to use an authentication method less secure than a password. This
scheme still allows them to make that decision; it simply removes the
nuisance of being prompted for a password anyway.
Walter ended the session by imploring developers to "go forth and
kill ... prompts." There are many more places where changing the
user-interruption paradigm can help GNOME craft a more secure system
overall, he said, by putting fewer security decisions in front of the
user.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]