Building great software requires understanding what users want and need. If you’re building privacy-preserving software, this includes understanding the privacy threats that your users face.
[Photo: one of the participants in Ame's NYC study.]
When Ame set out to talk to people in the New York City neighborhoods of Brownsville and Harlem about their experiences with mobile messaging, she wanted to amplify voices that are frequently underrepresented in the software community. (Many thanks again to Blue Ridge Labs for helping her connect with study participants.)
Big tech companies usually end up focusing their user research on the affluent Silicon Valley dwellers who resemble their employees. Many funders of internet-freedom software are interested in the needs of activists and journalists. As a result, the software requirements of other people – folks who aren't activists or journalists, and who have modest financial means – go unheard by the developers, product managers, and other decision-makers who shape the features and presentation of software today.
Important nuances
Ame began sharing findings of the study with researchers, developers, and members of our Slack community to seek feedback. We were excited and gratified to see it referenced in a recent blog post by a Google security engineer who contributed to Allo, the new instant-messaging app that the company announced during I/O. (Note: the engineer wrote the post as a personal opinion, not in his official capacity as a Google employee. He subsequently made several changes to the wording of the post, we assume at his employer’s request.)
The blog post highlights that study participants expressed concern about physical threats like shoulder surfing, and that they see disappearing messaging (where messages are automatically deleted after a set amount of time) as a key protective feature.
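For readers unfamiliar with the mechanism, here is a minimal sketch of how a disappearing-message feature might work on the client. The `Message` class, the TTL default, and the purge logic are our own illustrative assumptions, not a description of Allo's (or any shipping app's) implementation.

```python
import time
from dataclasses import dataclass, field

# Illustrative sketch only -- not any real app's implementation.
# Each message carries a time-to-live (TTL) and is purged once it expires.

@dataclass
class Message:
    sender: str
    body: str
    sent_at: float = field(default_factory=time.time)
    ttl_seconds: float = 3600  # hypothetical default: delete after one hour

    def is_expired(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        return now - self.sent_at >= self.ttl_seconds

class Conversation:
    def __init__(self) -> None:
        self.messages: list[Message] = []

    def add(self, message: Message) -> None:
        self.messages.append(message)

    def purge_expired(self) -> None:
        # Drop expired messages; a real client would also have to worry
        # about securely erasing the underlying storage.
        self.messages = [m for m in self.messages if not m.is_expired()]
```

Note what this mechanism does and doesn't do: deleting messages from a device protects against someone who later gets hold of that device or glances at its screen, but it does nothing against an adversary who intercepts the message in transit or compels a server to hand over copies. That distinction matters for the argument that follows.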
We were pleased, because the post signified that this research was already reaching software decision-makers. It validated our belief that this kind of study, which amplifies the voices of underrepresented users, holds real potential to influence the features and priorities of software teams.
However, we were less pleased by this part of the post: “Most people focus on end-to-end encryption, but I think the best privacy feature of Allo is disappearing messaging. This is what users actually need when it comes to privacy.”
It’s true that our writeups of the study thus far have talked about the mismatch between privacy enthusiasts’ priorities (e.g. end-to-end encryption) and participants’ requested security features (e.g. disappearing messaging). However, we have never argued that disappearing messages should come at the expense of end-to-end encryption.
Participants in the study saw disappearing messaging as an important feature because it combats a set of threats that they feel they have some control over. That doesn’t mean that those are the only threats that they care about. Indeed, participants also expressed concern about government surveillance, while simultaneously conveying a sense of inevitability. If you believe the government will always have the power to spy on you, why would you waste time trying to find software that prevents that spying?
False dichotomies
The Allo team has faced significant criticism from members of the security community because it plans to make end-to-end encryption opt-in rather than on by default. The team argues that this lets users upgrade their security if they want to, while otherwise having immediate access to chatbot-style, AI-powered features. Until we can actually use the product, it's hard to know whether this dichotomy – privacy vs. chatbot goodness – is really a necessary one. Is it truly impossible to both provide end-to-end encryption for interpersonal channels and offer an advanced bot interface on separate ones? One possible separation is sketched below.
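To make the question concrete, here is a hedged sketch of a per-channel design in which person-to-person chats are end-to-end encrypted by default while opt-in bot channels are processed server-side. Every name and function here is an assumption of ours for illustration; this is not Allo's actual architecture, and the "encryption" is a labeled placeholder.

```python
from enum import Enum, auto

# Hypothetical per-channel design: interpersonal chats are E2E encrypted,
# bot channels are readable by the service. All names are illustrative.

class ChannelKind(Enum):
    PERSON = auto()  # interpersonal chat: end-to-end encrypted by default
    BOT = auto()     # assistant channel: readable by the service, opt-in

def encrypt_for_recipient(plaintext: str) -> bytes:
    # Placeholder for a real on-device E2E step (e.g. a Signal-style
    # double ratchet); NOT actual cryptography.
    return plaintext.encode("utf-8")

def relay_ciphertext(ciphertext: bytes) -> None:
    print(f"server relays {len(ciphertext)} opaque bytes")

def send_to_service(plaintext: str) -> None:
    print(f"service reads plaintext for bot features: {plaintext!r}")

def send(kind: ChannelKind, plaintext: str) -> None:
    if kind is ChannelKind.PERSON:
        # Person-to-person messages never leave the device unencrypted;
        # the server only relays ciphertext it cannot read.
        relay_ciphertext(encrypt_for_recipient(plaintext))
    else:
        # Bot channels need server-side processing, so the trade-off is
        # made per channel rather than for the whole app.
        send_to_service(plaintext)

send(ChannelKind.PERSON, "see you at 6")
send(ChannelKind.BOT, "what's the weather tomorrow?")
```

Whether such a split is feasible for Allo in practice is exactly what we can't judge from the outside; the point is only that the trade-off could in principle be made per channel rather than for the whole app.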
If users really do have to choose between chatbot functionality and a feature that works to preserve their privacy, let's all be honest about the decision that's being made. Don't imply that disappearing messaging is sufficient because it's what users are already asking for. Meeting user demands is a necessary part of building software, just as protecting against the threats users are familiar with is a necessary part of ensuring their privacy. But neither is sufficient on its own. Software teams need to use their expert knowledge to offer users features they demonstrably need, even when they don't know to ask for them. Software that truly meets users' privacy needs will protect them against the spectrum of threats they genuinely face, not just the ones they know to talk about.
Connect & share your thoughts
Seeing this study interpreted by one software engineer has already taught us a lot. We now know that the way we've been presenting these findings hasn't provided enough context for how they should be interpreted, and that is something we will work to improve.
In the meantime, we are eager to hear your thoughts on this work as well. Please take a look at the draft technical report describing it – available here and in our GitHub repo – and let us know what you think.
Finally, if you live in NYC and you’re interested in connecting with some great people doing outreach around security and cryptography, we encourage you to check out @cryptoharlem. It is one of many groups around the world working with their local communities to improve access to privacy-preserving tools. If you are part of such a group, we’d love to hear about your experiences, and talk about the possibility of working with you to amplify the needs of people in your community.