On Monday I had the pleasure of speaking at a Workshop on Cryptographic Agility and Interoperability held at the National Academies by the Forum on Cyber Resilience.
The assembled group of academics, policy-makers, and practitioners touched on a variety of problems around the practical application of cryptography in production software. The main focus was on the challenges and benefits associated with cryptosystems that can be updated or swapped out over time (and thus exhibit “agility”). The organizers asked us to consider questions such as the following:
- Why is cryptographic agility useful and what are its potential risks and impacts?
- What approaches have been attempted for improving cryptographic agility, and how successful have they been?
- How might privacy and human rights be affected by cryptographic agility?
- What are the consequences of cryptographic agility for the interoperability and usability of communications systems?
- What are the key opportunities for standards bodies, governments, researchers, systems developers, and other stakeholders with regard to cryptographic agility?
The Forum will issue an official report of what was said in due time; for now, here are some of the thoughts I shared with the group.
Who are the users?
Whenever I encounter a group of security experts talking about designing user-facing systems, I like to remind them that their users are almost certainly less experienced with security than they themselves are. This doesn’t mean that their users are stupid or ill-informed, and nine times out of ten it doesn’t mean that the experts should go about trying to educate their users to achieve a shared worldview, either. But it does mean that the experts need to put effort into building empathy with their users, and into setting them up for success.
Developers are users, too – of APIs, standards, and libraries. Image CC-BY 2.0 WOCInTechChat.
In the case of cryptographic agility, “users” aren’t just the consumers buying and using mass-market software. They are also the software developers, architects, and decision-makers who are trying to decide whether and how to integrate cryptography into their systems. These developers are the ones who must benefit first from policies, standards, and practices if we are to use cryptographic agility to achieve resilience against software vulnerabilities.
How to help end users
Developers want to do the right thing for their users. Users want to do the right thing to protect their data, too, but are often even less experienced with security than developers. One big risk of cryptographically-agile systems is that developers force decisions onto users who are ill-equipped to make them. (“Hmm, we support three different encryption schemes because we’re agile. Which one should we use? Let’s ask the user!”) What can developers do to help users?
- Good defaults: Developers should choose default settings for users that are secure and strike a sensible balance with performance. This goes against a custom of security-expert culture: without knowing the user’s threat model, it may feel wiser to set no default and let the user choose. However, many users find such decisions daunting; asking them to choose among unfamiliar options can leave them frustrated enough to give up on the program, or to guess at an answer. At the other extreme, some developers may be tempted to set the default to the most conservative, cryptographically-strong setting, which can be problematic when it carries a significant performance cost. (A sketch of a defaults-first API appears just after this list.)
- Choices come with recommendations: In cases where the user must make a choice – or may be inclined to alter the default setting – the developer should offer guidance to help them. Sometimes this may simply mean stack-ranking the options (“most secure” through “least secure”). Where there is no clear ordering, another approach is a scenario-based menu that highlights the relative pros and cons of each option (“Strong data protection, with a 10% slowdown on uploads”).
- Transparency: Developers should provide a mechanism by which curious users can identify exactly which cryptographic library a program is using. This will help ease users’ minds when a vulnerability is discovered – “Ah, this is running OpenSSL X.X, so I’m safe!” – and can help the community more easily hold developers accountable for updates. It can also help increase the visibility of closed-source, country-mandated cryptographic suites, which many security experts worry may contain backdoors. (A version-reporting sketch also follows this list.)
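To make the “good defaults” point concrete, here is a minimal sketch of a defaults-first API, assuming a Python codebase and the widely used `cryptography` package; the function names are illustrative, not an established interface. Callers get a vetted authenticated-encryption recipe (Fernet) without ever seeing an algorithm menu.

```python
# A hedged sketch: Fernet is the `cryptography` package's high-level
# "recipe" (AES-128-CBC plus HMAC-SHA256 under the hood), used here so
# that callers get authenticated encryption without choosing anything.
from cryptography.fernet import Fernet

def new_key() -> bytes:
    """Generate a fresh symmetric key for the default scheme."""
    return Fernet.generate_key()

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt under the project's default scheme; no options exposed."""
    return Fernet(key).encrypt(plaintext)

def decrypt(key: bytes, token: bytes) -> bytes:
    """Decrypt a token produced by encrypt()."""
    return Fernet(key).decrypt(token)
```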
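For the transparency point, here is an equally small sketch of a version-reporting hook, assuming a Python program whose cryptography comes from the interpreter’s bundled OpenSSL; the `--crypto-version` flag is a hypothetical convention, not an established standard.

```python
# A minimal sketch: surface the exact crypto library version so users
# can check it against vulnerability advisories.
import argparse
import ssl

def crypto_version() -> str:
    # The OpenSSL build that Python's ssl module was compiled against,
    # e.g. "OpenSSL 3.0.13  30 Jan 2024".
    return ssl.OPENSSL_VERSION

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--crypto-version", action="store_true",
                        help="print the linked cryptographic library and exit")
    if parser.parse_args().crypto_version:
        print(crypto_version())
```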
Developers, we <3 you
There’s more that the security-expert community can do to help developers. Here are some broader-reaching ideas.
- Algorithm guidance: It’s not enough to simply say “these are the algorithms available” or “these are the algorithms approved for use”. Authoritative sources – government agencies like NIST, standards bodies like ISO, textbooks and other educational materials – should try whenever possible to offer unambiguous guidance on the relative benefits and drawbacks of algorithms. There is broad consensus in the security community about which algorithms are reaching their end-of-life and which are still fresh, but average developers don’t have easy access to this information.
- Programming education: It is a time-honored tradition for practitioners to complain that academic institutions aren’t preparing students well for “the real world”. Many critical areas of programming practice, such as writing automated tests, receive no attention in many undergraduate programs. For what it’s worth, I would like to add the cryptography lifecycle to this list. In addition to offering guidance on the pros and cons of different algorithms, security courses should require students to think about how a program’s architecture affects its resilience in the face of cryptographic vulnerabilities over time. It’s not enough to design a system that uses a cryptographic library well; students must also learn to plan for a library’s obsolescence. (A sketch of one such architecture follows this list.)
- Study developers: In the user-experience community, we understand that studying our users is an essential part of building systems that work for them. If we are to understand the current practice of cryptographic agility – what’s really working for developers, what challenges they face, and why they make the decisions they do – we can’t just convene experts to talk about the problem. We must use qualitative social-science research methods to talk to developers in the context of their work, probe their practices, and uncover their lived experiences.
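To give the obsolescence-planning point a concrete shape, here is a minimal sketch of one agile architecture, again assuming Python and the `cryptography` package. The one-byte scheme registry is a hypothetical format of my own, not a standard; the idea is simply that every ciphertext names the scheme that produced it, so the default can change without orphaning old data.

```python
# A sketch, not a standard format: prefix each ciphertext with a scheme
# identifier so the default algorithm can be retired later while data
# encrypted under the old one stays readable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

SCHEMES = {              # hypothetical registry; both take 32-byte keys
    1: AESGCM,           # the original choice, kept so old data decrypts
    2: ChaCha20Poly1305, # a later addition, now the default
}
DEFAULT_SCHEME = 2
NONCE_SIZE = 12          # both AEADs accept 96-bit nonces

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(NONCE_SIZE)
    aead = SCHEMES[DEFAULT_SCHEME](key)
    return bytes([DEFAULT_SCHEME]) + nonce + aead.encrypt(nonce, plaintext, None)

def decrypt(key: bytes, blob: bytes) -> bytes:
    scheme, nonce, ct = blob[0], blob[1:1 + NONCE_SIZE], blob[1 + NONCE_SIZE:]
    aead = SCHEMES[scheme](key)  # old ciphertexts remain readable
    return aead.decrypt(nonce, ct, None)
```

The natural classroom exercise that falls out of this design: write the migration job that re-encrypts scheme-1 data under scheme 2, and decide when scheme 1 can safely be dropped from the registry.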
Bridging the people-tech gap
Simply Secure has multiple stakeholders, and in technical circles we often try to be the voice of the user when users aren’t in the room. We bring that same passion to advocating for the needs of cryptographers, software engineers, and fledgling computer scientists. We work on tools for people who just want to communicate with their friends – who treat computers as black boxes – and for people who are passionate about writing good, secure, usable code. Let us know how we can do more.