People who think about computer security for a living sometimes cringe when they read about the subject in the popular press. Security is a complex and nuanced topic, and it’s easy to make assertions that don’t hold up to careful scrutiny.

One basic-but-unintuitive principle is that security is not a binary property: in the absence of other context, it’s hard to definitively say that a particular system or piece of software is “secure” or “insecure”. We can only say that a system is secure against a particular threat, or – more usefully – against a collection of threats, known as a “threat model”.

[Image: Justitia, Tehran Courthouse. CC BY-SA 3.0, Abolhassan Khan Sadighi.]
For example, some people might say that using a VPN while browsing the web from a coffee shop is “secure”, because it prevents the jerk across the street with a cantenna from listening in and seeing what websites you go to. But if your threat model includes listeners with devices housed at internet service providers (or a government that operates VPNs), you might instead refer only to an option like Tor as “secure”.
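To make that relativity concrete, here is a minimal sketch in Python. The tools, threat names, and mitigation mappings are hypothetical stand-ins for illustration, not a real security assessment: the point is only that “secure” is a question you can ask of a tool and a threat model together, never of a tool alone.

```python
# Hypothetical sketch: "secure" is only meaningful relative to a threat model.
# The tools, threats, and mitigations below are illustrative, not a real
# assessment of any product.

THREATS_MITIGATED = {
    "plain browsing": set(),
    "VPN": {"local eavesdropper"},  # e.g., the cantenna across the street
    "Tor": {"local eavesdropper", "ISP-level listener"},
}

def is_secure_against(tool: str, threat_model: set[str]) -> bool:
    """A tool counts as 'secure' only if it mitigates every threat in the model."""
    return threat_model <= THREATS_MITIGATED.get(tool, set())

# Against the coffee-shop threat model, a VPN looks "secure"...
print(is_secure_against("VPN", {"local eavesdropper"}))  # True
# ...but widen the model to include ISP-level listeners, and it no longer does.
print(is_secure_against("VPN", {"local eavesdropper", "ISP-level listener"}))  # False
print(is_secure_against("Tor", {"local eavesdropper", "ISP-level listener"}))  # True
```

The answer to “is it secure?” changes the moment the threat model does, which is exactly why the question is underspecified on its own.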

As someone who has spent a lot of time thinking about security, I find it tempting to dismiss things as “insecure” when they don’t protect against the threats that I’m personally concerned about. Go too far down that path, though, and we find ourselves in a world where only the products that protect against the most extreme threats are considered acceptable. As with transportation safety and public health, we have to recognize that getting people to adopt a “good enough” solution – at least as a first step – is usually better than having them not change their behavior at all. In other words: it’s important to not let the perfect be the enemy of the good!

Just as security is not a binary property, it’s also important to not think of usability as an all-or-nothing game. Design thinking encourages us not just to ask whether humans in general find a piece of software usable, but to explore 1) the circumstances in which different groups of users might be motivated to use the software and 2) the needs that a piece of software must meet in order to sustain that motivation.

I think that this distinction is particularly important for software developers to bear in mind. It’s easy to get discouraged when someone tells you that the code you’ve slaved over “isn’t usable”. (Or to get defensive – after all, there are plenty of people who seem to find it usable enough, or there wouldn’t be anyone to file all those feature requests.) I challenge you instead to dig deeper, and try to understand exactly what the user found frustrating about their experience, and what expectations they brought to the software that may be mismatched with the assumptions you made in designing it.

Just as we can only say that software is “secure” against certain threats, so too must we define “usability” as a function of particular users with particular needs, backgrounds, and expectations. Working to understand those users will ultimately help our community build better software.