Superbloom

Introduction

(Download the PDF version of this resource here).

This document represents Superbloom’s approach to scoping risks and improving usability for software projects. It is a collection of tools, resources, practices, and lines of inquiry to help us navigate risks and mitigate potential harms, both present and emergent, in software design and development. By evaluating the accessibility, risk, and security of software tools and projects through the lens of user experience and usability, we aim to foster a deeper level of trust among all stakeholders involved in the user experience being designed within and around a given tool or project.

This toolbox is intended to be an adaptable mechanism that encourages research and design activities to prioritize the needs of the software’s intended users while addressing the vulnerabilities associated with its development. By sharing our approach and inviting collaboration, feedback, and community input, we also aim to provide software teams in the Digital Rights, Human Rights, and Internet Freedom spaces – including researchers, coaches, creators, designers, product managers, and engineers – with a set of resources that can be useful and adaptable to their unique contexts.

Using this toolbox will help our team members and other software teams to:

  • Evaluate the potential risks and harms associated with the use and maintenance of software.
  • Work towards a balance between user convenience and robust security without compromising trust or engagement.
  • Address accessibility as a core principle in itself and as part of identifying and mitigating potential security vulnerabilities, reinforcing the software’s reliability and protection of human rights.
  • Implement accessibility best practices to create user-friendly experiences for everyone, thus broadening the software’s reach and impact.

Who we are

Founded in 2014 as Simply Secure, Superbloom leverages design as a transformative practice to shift power in the tech ecosystem. We apply a holistic approach and view design as an intervention opportunity to center people and their needs. Our vision is a world where everyone has the knowledge, network, and digital tools needed to enrich their lives. Together, we’re committed to bringing about a more just world by exposing the dangers and inequalities of emerging technologies, co-building alternative models, and contributing to the global cultivation of a transformative movement in tech.

We work to overcome the alarming lack of available resources in the public interest technology space by openly sharing our knowledge and partnering with organizations to identify unmet needs, design for access, and extend the impact of newly built tools and interventions. Changing who technology serves requires challenging not just the technology, but the processes that formed it. Our approach is to intervene in the tech strategy, design and development process across multiple contexts and scales. Our focus has been on bridging areas of expertise – such as usability, accessibility, and security – to develop new methods that shed light on the interplay between people and technology, especially among at-risk populations dealing with threats to their privacy, safety, or security.

What is this toolbox?

In today’s landscape of increasingly risky digital experiences, successful software development requires more than just functionality and aesthetics. The multifaceted challenges faced by software teams require prioritizing user experiences while carefully addressing potential harms and respecting human rights. We designed this document to move beyond a set of heuristics that attempt to bridge usability, security, and accessibility toward a more holistic and modular approach to understanding and mitigating risk within human-centered tool design and development.

While the toolbox provides a holistic, structured approach, it has limitations, requires thoughtful application, and is not a cure-all solution.

This framework does not detail a full or typical design-thinking process, nor does it offer a full set of research and design activities that ought to take place throughout a product design and development cycle.

The methods detailed here cannot replace a technical security audit, security design review, or an expert accessibility audit. Nor can they eliminate all risks associated with software development. Instead, they provide a set of human-centered tools and strategies that can be used to mitigate these risks to a manageable level. It is up to the software team and the organization to use these tools effectively, and to continually monitor and adjust their approach as needed.

  • Expert reviews to consider in parallel with the methods listed here:

    • Security audit: our approach here should not replace the systematic evaluation that happens during a security audit of your software’s code and infrastructure. Our approach does not assess the effectiveness of any security policies or controls, nor does it provide recommendations for improvement on that front. The Open Technology Fund’s Red Team Lab offers security audits for Open Source projects.

    • Security design review: our approach here is not a security design review – this approach is not focused on security architecture to harden a tool against technical vulnerabilities and risks in a software stack. Our approach does not emphasize infosec principles like CIA (confidentiality, integrity, availability). Instead, we’ve focused this document on a holistic review of human factors: mitigating biases that emerge in the process, examining security from a user-centered perspective, and providing a balanced set of recommendations to enhance the user experience while meeting your software’s security/compliance requirements. The Open Source Security Foundation offers courses and maintains a repository of security design reviews of open source software.

    • Expert accessibility audit: While self-audits can help teams get ahead of common accessibility “fails”, expert accessibility audits provide a complete, rigorous breakdown of all WCAG 2.1[1] requirements for your tool, including testing your tool on assistive technology and with disabled users. Expert audits help teams comply with accessibility standards and regulations, but they can also enhance user engagement and improve SEO. We recommend contacting our friends at A11y Lab if you’re looking for an expert accessibility audit.

Heuristics, tools, and guidelines can never replace user input

While we believe that the principles, guidelines, and tools here can help you design more usable, accessible, and security- and privacy-minded software products and tools, they are not guarantees. A tool or website can be 100% WCAG 2.1 compliant and still have accessibility challenges that go unforeseen. The only way to get ahead of these surprise issues is to test your tools with real and diverse sets of users, early and often.

Testing with diverse sets of users from multiple perspectives is important to cover the bases of usability, accessibility, and security:

  • Users representing your different personas will help surface usability issues
  • Users who employ different assistive technologies will help surface accessibility issues
  • Users who have a range of threat models will help surface security and privacy issues.

A living resource

Our approach will shift over time, in response to emergent technological considerations and best practices in human-centered design, amongst other factors. We intend for this resource to be updated annually at the minimum, and will continue to add resources via the Superbloom learning hub and our blog.

We also enthusiastically encourage our community to contribute ideas, resources, and feedback to this framework. To do so, you may join our community on Slack or send us an email at [email protected].

Our responsibilities as human-centered User Experience practitioners

What is the degree of responsibility for designers/developers to reduce harm in technology and software?

We at Superbloom believe that one of our responsibilities is to acknowledge and mitigate bias in our work, which means acknowledging and mitigating the harm that comes from a variety of potential biases. Amid the general lack of, and ongoing struggle for, human rights in software development, the effects of personal bias are often amplified at scale, causing harm and perpetuating the marginalization of at-risk communities. For a further explanation of what bias looks like in this work and for methods and resources to address it, see Part 1 of this document.

We also hold ourselves responsible for approaching technology not as a static product, but as a process that includes, affects, and is affected by many people. To us, this responsibility also intersects with and supports the effectiveness of technology: to have a truly effective software product, as well as a responsible one, it must consider and prioritize Accessibility, Usability, and Security. Failing in any of these aspects will not only significantly impact the software’s overall success and user satisfaction, but can introduce avoidable risk and harm to its users. Importantly, as software itself is an evolving process, so must our attention to these three key aspects be ongoing; one-off audits and reviews will not be sufficient for attending to the ongoing usability, accessibility, and security of a software product.

Key Definitions

Working towards rights-respecting software development requires a shared understanding of the key values and ideas we are upholding. Here we define the most central concepts to this framework; for more definitions, please see the Glossary.

Accessibility: The degree to which a product, service, or digital interface can be used effectively and comfortably by everyone, regardless of their abilities or the assistive technologies they use. It ensures that all users can access and interact with the system without discrimination or barriers.

Digital Security: the practice of protecting digital information from unauthorized access, use, disclosure, disruption, modification, or destruction. We often define security relative to a particular threat model, i.e., “is a program secure against threats A, B, and C?”

Risk Mitigation: The process of identifying and reducing potential risks or uncertainties associated with a project or product development. In the context of software development, it involves planning and implementing strategies to avoid or minimize potential negative impacts on the project’s success.

Threat Modeling: A way of thinking about the sorts of protection you want for your data so you can decide which potential threats you are going to take seriously. Coming up with a set of possible threats you plan to protect against is called threat modeling, or risk assessment. While it’s impossible to protect against every kind of trick or adversary, threat modeling enables users to envision possible scenarios on a scale of likely to less likely, and plan accordingly.[2]

Usability: The measure of how easy and efficient it is for users to achieve their goals when interacting with a product, system, or interface. Usability focuses on aspects such as learnability, efficiency, memorability, errors, and user satisfaction.

What’s at stake?

While the potential threats, risks, and harms related to a particular software product will depend on its own context, users, and threat model, we advocate for this general approach to work against the larger-scale ways that technology fails to be human- and user-centered. In the worst-case scenarios, a lack of usable security means that users find ways around usability barriers, which can increase their risk (e.g., by choosing the wrong settings and sharing data unsafely, or revealing more of their identity than intended because they cannot safely navigate the options). Accessibility practices and audits often happen in addition to, not in concert with, usable security reviews; sometimes they don’t happen at all, or are treated as afterthoughts or “nice to haves,” excluding huge swaths of people from their right to digital tools and information, not to mention basic dignity and equity.

As UX practitioners, we’ve learned that there is interplay and overlap between each of these aspects, along with synergistic benefits:

  • The security and accessibility of your product are critical aspects of usability: if your software is not accessible, it is not truly usable; if your software is not secure, it is not truly usable; if your software is not usable, it is not truly effective.

  • Usability 🤝 Security 🤝 Accessibility: We want to warn against and work to break down the sometimes false dichotomies that arise when attempting to make software that is usable, secure, and accessible. For example, teams are often warned or worried that accessibility comes at the cost of security or usability. These concepts are mutually beneficial, dependent on one another, and importantly, noncompeting.

  • Benefits greatly outweigh the perceived and real costs: Making software that is usable, accessible, and secure increases usership, user trust, and legitimacy for your product. Considering these aspects from as early as possible in a development cycle saves teams countless hours of work refactoring when problems arise later. Accessibility compliance can save a company or organization from both discriminating against users and potential noncompliance litigation. All three aspects are green flags to funders looking to support sustainable software.

Using this toolbox

When and how to use the methods and practices outlined in this document will depend on many factors in a given tool team or project. Sometimes Superbloom staff are brought into a project at its inception, and other times we enter years into its existence. For external dev teams and partners, the question often arises: how do I balance the need for timely releases with thorough reviews (of any type)? Instead of prescribing a set of steps or an order of operations, we instead encourage teams and practitioners to consider the following principles when using this set of resources:

  • Early and meaningful community input: practicing human-centered work means putting the people using and being affected by your software at the center of your design and development process. Engaging community members not only builds trust, but it can provide invaluable insight for any question you may have.

  • Open communication: transparency benefits users and products; making the design process and discussions visible increases trust amongst team members and between teams and their users.

  • Experimentation: as mentioned, this framework is not a one-size-fits-all prescription of activities. Experiment with which methods and inquiries suit the project at hand.

  • Openness to pivoting and adjustment: user-centric work means challenging our own assumptions and beliefs, and taking seriously when those assumptions are proven wrong.

  • Sensitivity to local contexts: not all users and user groups operate within the same personal contexts – which personas, bias mitigation, etc., can help us think through – but users also reside within and among vastly different geopolitical contexts that affect: 1) their access to information and their susceptibility to censorship and information oppression, and 2) their risk tolerance and threat models. Depending on the local context, concepts like security, risk, usability, and accessibility may shift and take on different stakes.

  • Holistic mindset required: rather than thinking of accessibility, usability, and security attributes separately, they must be considered holistically in order to identify synergies and reconcile conflicts. The optimum balance is not static, and as contexts evolve, designs should be re-evaluated to ensure the right “tradeoffs” are being made. Improving one attribute may inadvertently worsen another. For example, adding more security controls can harm usability. Enhancing accessibility can introduce vulnerabilities if the adjustments aren’t accounted for in the security design. Overlaps, when accounted for together, are typically synergistic – i.e. improving one attribute enhances the others – but the overlap is not symmetrical at all times between any aspect, so remain diligent.

Understanding risks in software development

We see UX design playing a critical role within risk mitigation. UX methods not only help to clarify, understand, and empathize with the potential pains, challenges, and frustrations in a user’s experience, but can play a key role in anticipating, planning for, and alleviating potential risk.

We understand risk as a broad concept that encompasses the potential vulnerabilities inherent to a software product’s users and stakeholders, especially those that impact the key aspects of security, accessibility, and usability.

To capture the breadth of what risk can entail, we also acknowledge and ground this framework in key Contributing Factors, each of which increases risk in a project in ways that can be mitigated:

  • Sensitive User Data: Handling sensitive user data requires careful design to ensure privacy and security. Typically, the most sensitive user data is called Personally Identifiable Information (PII). Mismanagement and poor security design can lead to data breaches or unintentional user error, which can leave users vulnerable to malicious activity involving their PII. This can lead not only to identity theft and financial fraud, but also to significant psychological and emotional distress that erodes users’ trust in a company or a software team, resulting in reputational damage or financial loss.

  • Security Risks: Security risks can compromise the integrity of a product and the privacy of its users. UX design can help mitigate these risks by designing for security from the ground up.

  • Vulnerable Populations: Products designed without considering the needs of vulnerable populations can lead to exclusion and discrimination. UX design can help ensure inclusivity and accessibility.

  • Legal Considerations: Non-compliance with laws and regulations can result in legal penalties. UX design can help ensure compliance by incorporating legal requirements, including accessibility requirements, into the design process.

  • Potentially Harmful Stimuli: UX design can help prevent harmful stimuli by considering the potential impact of design decisions on users’ mental and physical health. Trauma-informed UX design can help mitigate this kind of harm.

  • Experimental Interventions: Experimental interventions can introduce unexpected risks. UX design can help manage these risks by testing interventions in a controlled environment before implementation.

  • Intrusive Research Methods: Intrusive research methods can replicate aspects of surveillance capitalism, invade users’ privacy, and cause harm. UX design can help mitigate this risk by using non-intrusive research methods.

  • Human Rights Considerations: Rights-respecting considerations are essential in UX design to avoid harm. Human rights provide specific principles enshrined in international declarations that should frame any design practice. Observing these principles ensures regulatory compliance and prevents alienating users, which can lead to negative repercussions.

  • Local contexts and the limits of Open Source tooling: As mentioned above, what is considered high-risk is dependent on local context — geopolitical, cultural, linguistic, infrastructural, and other differences affect what high risk means for different users around the world. Predominant Open Source tools and security infrastructure are often premised on the local contexts that those teams know well, meaning they are biased toward more Western and/or better understood risk contexts, and thus can be inadequate for high-risk scenarios in other global contexts that are less well understood (particularly those that are highly censored and thus harder to get visibility into).

Part 2 goes into the practices and methods you can employ when scoping threats.

Foregrounding accessibility

Special thanks to Accessibility Lab for advising on and providing resources for this document.

A11y Lab is a company that seeks to ensure the inclusion of people with disabilities through accessibility in the digital world. They work with public, private, and civil society organizations to promote and defend digital rights.

A11y Lab provides many indispensable services to teams, companies, and organizations looking to take accessibility seriously. See their offerings here.

This section acts as a quick primer on accessibility in software design. We go more into detail about accessibility audits in Part 3 of this document and the accompanying heuristic review.

It’s easy to understand why creating accessible software is important from an equity perspective, yet the most common factor limiting the accessibility of software is its treatment as an afterthought. The benefits of creating accessible user experiences are far-reaching:

  • Social responsibility: we are responsible as creators of tools and products not to discriminate against potential users due to their physical or mental abilities.
  • Competitive advantage: the more people who can use your software, the more users you can have, and the more likely people are to choose your software over a competitor with fewer accessibility features.
  • Improves SEO positioning and compatibility: the way that accessibility requirements are often coded into software inadvertently (but helpfully) improves the discoverability of your software on search engines.
  • Complies with legislation: most countries in which we work have legislation requiring compliance with accessibility standards.
  • Security: accessible online services offer efficient and secure options for everyday tasks.
  • Autonomy: web accessibility improves everyone’s quality of life.
  • Efficiency: good, accessible design factored into a development process early saves developers time and businesses money.

Contrary to common belief, accessibility does not negatively affect design or security. Most requirements, such as alternative text, tags on data entries, header tags, and language options, are not visible to users who are not using assistive technology.
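To make this concrete, below is a minimal self-check sketch (in TypeScript, assuming a browser or other DOM environment) that flags a few of these invisible markers when they are missing. The checks and messages are illustrative only; they are not a substitute for an expert audit or for testing with assistive technology users.

```typescript
// Minimal sketch of an accessibility self-check for a few "invisible" markers.
// Assumes a DOM environment (browser or jsdom); selectors and messages are
// illustrative only, not a complete or authoritative audit.

function checkInvisibleAccessibilityMarkers(doc: Document): string[] {
  const findings: string[] = [];

  // Alternative text: images without an alt attribute are opaque to screen readers.
  doc.querySelectorAll("img:not([alt])").forEach((img) => {
    findings.push(`Image missing alt text: ${img.outerHTML.slice(0, 80)}`);
  });

  // Language option: assistive technology uses the lang attribute to pick pronunciation.
  if (!doc.documentElement.getAttribute("lang")) {
    findings.push("Document is missing a lang attribute on <html>.");
  }

  // Header tags: a page with no headings gives screen-reader users no structure to navigate.
  if (doc.querySelectorAll("h1, h2, h3, h4, h5, h6").length === 0) {
    findings.push("No heading elements found; add semantic headings.");
  }

  // Labels on data entries: inputs without an associated label are hard to use with assistive technology.
  doc.querySelectorAll("input:not([type=hidden])").forEach((input) => {
    const id = input.getAttribute("id");
    const hasLabel =
      (id && doc.querySelector(`label[for="${id}"]`)) ||
      input.getAttribute("aria-label") ||
      input.getAttribute("aria-labelledby");
    if (!hasLabel) {
      findings.push(`Form input without a label: ${input.outerHTML.slice(0, 80)}`);
    }
  });

  return findings;
}
```

None of these additions changes what a sighted mouse-and-keyboard user sees, which is the point: they benefit assistive technology users without affecting the visual design.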

The Web Content Accessibility Guidelines (WCAG 2.1) are the standard of compliance for accessibility online. They cover a wide range of recommendations for making Web content more accessible, and software creators who implement them to their full extent can benefit from the following acknowledgments:

  • Accessibility Statement: A powerful declaration of commitment, normally available via a link in the footer, that includes information about the platform’s accessibility.
  • The W3C conformance logo: Indicates a claim of conformance to a specified conformance level of the Web Content Accessibility Guidelines 2.1 (Level A, AA, or AAA).

Consciousness raising

Foregrounding accessibility in software practices may necessitate some consciousness raising amongst teams, which can help people empathize with and better understand the urgency of accessibility work. Understanding accessibility often means understanding new user personas. Of course, collaborating and testing with users with disabilities is the ideal way to work on the accessibility of your project.

Key concepts and tools:

Part 1: Knowing your users and yourself

Positionality, bias, and cultural context

Driving question: “How can we proactively identify our own biases and gaps in understanding stemming from team culture and positionality?”

We acknowledge that our team and other teams employing the concepts in this framework may need additional support to highlight their gaps stemming from their own cultural viewpoint. Understanding that viewpoint means practicing self-awareness and reflecting on and understanding one’s own positionality related to the contexts, tools, and topics involved in the software at hand, and to the user groups it represents. Taking a meaningful step towards interrogating our own positionality, potential biases, and potential gaps helps to prevent harms that may arise from ignorance and/or unconscious discrimination.

Below are some concrete exercises to start identifying bias and articulating positionality. These are focused on the questions “who am I?” and “who are we?” and how those answers reflect or don’t reflect the communities and users we’re intending to serve. Once articulated, how can we open ourselves to the perspectives and people we may be missing?

It’s important to note that bias itself isn’t the enemy. We will never be free from bias. But, we believe the purpose of identifying bias is to move towards underlying “design justice [that] seeks more than ‘freedom from bias.’” Because bias leads to real-world harm, interrupting bias helps us to “unpack the ways that intersecting forms of oppression, including patriarchy, white supremacy, ableism, and capitalism, are constantly hard-coded into designed objects, platforms, and systems.”[3]

One way to navigate the gaps we identify within ourselves is through cultural brokers. Cultural brokers can provide insider perspectives to enhance cultural relevance and prevent ethical pitfalls. Their mediation and interpretation is invaluable for creating a user experience that is responsible, inclusive, and respectful of the user’s rights and dignity. They can also help to employ a design process that seeks to care for the software team itself. So, as this framework notes risk factors and mitigations associated with the use of a particular method, it identifies how a cultural broker might intervene or aid in a particular mitigation.

Resources:

  • Positionality worksheet: This worksheet helps to explain what positionality is, why it’s important for participatory research and design projects, and how it informs potential bias. It guides the reader to write their own positionality statement.

  • How to begin designing for diversity: A guide from Jahan + Boyuan of Project Inkblot with helpful resources for positionality and diversity in design ideation.

  • Assumption Dump: As a team, an assumption dump can help surface biases that may be unspoken within yourself or your project peers.

  • Ability Prompt Cards: These can be used for more than just accessibility awareness, but also to understand where our assumptions and ideas may be leaving people behind.

  • Guidelines for Being a Cultural Broker: Cultural brokering is the act of bridging, linking, or mediating between groups or persons of different cultural backgrounds for the purpose of reducing conflict or producing change. This document outlines Superbloom’s approach to the practice.

  • Freedom of the Net reports by Freedom House: Freedom of the Net reports are annual assessments of internet freedom around the world. They examine the state of digital rights, censorship, surveillance, and restrictions on online activities. These reports provide insights into the extent to which governments and other entities are limiting internet access and expression in various countries.

  • Global Information Society Watch (GISWatch): a collaborative platform that provides critical analysis and monitoring of information and communication technologies (ICTs) and their impact on society. It focuses on issues related to internet governance, digital rights, and access to information, offering insights and reports from various contributors around the world. GISWatch aims to foster an inclusive, equitable, and open global information society by addressing key challenges in the digital age.

  • #KeepItOn reports by Access Now: these reports document and raise awareness of internet shutdowns and disruptions around the world, providing information on the extent and impact of government-imposed internet restrictions, with a focus on human rights violations, censorship, and the suppression of free expression.

Identifying and familiarizing yourself with users

Driving question: How can we rigorously understand the perspective of all potential segments of our users, not just the “intended” users?

It is essential that software and evaluation teams develop a clear picture of who the target users are for the tool. Otherwise, there’s no standpoint from which to evaluate risk. Users include both those the team would like to encourage to use their tool and those who are likely to use it (or who are currently using it), even if they are outside of the group that the team would like to focus on.

While the desired user population should be the highest priority in evaluating usability, entirely ignoring probable or actual user populations will result in frustration for the development team when they receive a high volume of complaints from this “extra” population. Ignoring likely but unexpected users of tools with a security or privacy focus can also cause these users to be endangered when the threats they face are not the threats the tool protects against, if the tool does not help them understand this.

Resources to identify users

  • Persona exercises enable tool teams to think through their potential user groups and create sample identities (called a “persona”) that can stand in for the needs of larger groups of users. Some useful templates include:

  • Ecosystem or stakeholder map: helps you to understand the broader context within which your tool is being created, which includes users but also other stakeholders and people/ecosystems affected by your work.

  • Engaging with community boards and channels: Users tend to congregate online, especially when they face usability challenges (Twitter, StackOverflow, Reddit, Slack and Discord groups, etc. can all be places where users communicate). Open Source projects often have the great advantage of engaged contributors and users who congregate on GitHub and other OSS channels. Use these channels to your advantage by treating them as ad-hoc user research repositories and direct lines of communication to user groups.

  • Simulation Exercises: In the absence of direct access to users’ lives, simulation exercises can offer insight into the realities of certain situations that we play out in theory in our minds. Physically playing out a scenario helps us understand users in relation to time, space and environment.

  • Shadowing, Touchstone tours: If a user invites you into their experiences and environment, you can shadow their behavior or take the opportunity for a touchstone tour, where you ask the user to walk and talk you through their interactions with both software tools and their lived environment.

Part 2: Assessing risks & threats

Driving question: What vulnerabilities are there that may increase risks for either users or stakeholders?

Once grounded in our own and our users’ perspectives and positionality, we can take our comprehension of risks and threats conceptually and move toward assessment and action. This section focuses on lines of inquiry and resources for doing threat modeling and risk assessment in projects and with project teams.

Resources to utilize in this phase include:

  • User Journey mapping: A structured activity for charting users’ possible journeys to and through a tool, as a way to identify potential pain points and vulnerabilities.

  • Tarot Cards of Tech: A set of provocations to help think through different scenarios and your tool’s effect on society and users.

  • Design Under Pressure: a practical resource to help you and your team proactively create products and services that hold up under stress cases.

  • Threat modeling: a structured approach to identifying and prioritizing potential threats to a system, and determining the value that potential mitigations would have in reducing or neutralizing those threats (see the worksheet sketch after this list).

  • Personas non grata: ten people you probably don’t want in your product or service, but who are likely to find their way in anyway. A way to plan for potential malicious actors.

  • Anxiety games: an activity for playing out different threat scenarios to help build resilience and proactive risk mitigation into your tool.

    • Blog post about the method from Andrew Lovett-Barron
  • Red Team, Blue Team: exercise that helps scope security vulnerabilities and possible actions and interventions to prevent them.

  • Document reviews: can help teams know if their licenses, contracts, agreements, policies, and procedures are compliant, effective, and up-to-date.

Once again, what we offer here is not a comprehensive introduction to security, privacy, and data handling best practices that ought to be in use in software projects. As a jumping-off point, digital security training guidelines about day-to-day digital hygiene can be applied to developing and designing software. We like the guides from Freedom of the Press Foundation and the Electronic Frontier Foundation.

Use the table below to walk through possible lines of inquiry and methods that may be useful to your exploration in this phase.


Assessing Risks and Threats
Context: Your team is cautious. Your team needs to identify risks and vulnerabilities. You want to prevent harm to users and stakeholders early in development when mitigations are cheaper and easier to implement.
Driving question: What vulnerabilities are there that may increase risks for either users or stakeholders?
Lines of inquiry → Methods

  • What motivated attackers or abuses should we plan for? Where are we vulnerable? What assets do we need to protect? → Threat modeling
  • What potential risks or threats could impact the target user? Where are we most susceptible to ethical pitfalls or unintended societal consequences? → Anxiety games
  • What risks could lead to failure down the line? How might our solution be misused? → Threat modeling, anxiety games, Design Under Pressure
  • What are the key touchpoints and interactions that users have with the product? How can we visualize and analyze moments to intervene with mitigations in the user’s overall experience? → Journey mapping
  • How does data move through the software itself? Where might user data be exposed or vulnerable to malicious actors? How can we use mitigations to better enable data flows and enhance our users’ goals? → Data flow mapping
  • How can we create realistic representations of different potential threat actors or adversaries to understand their motivations, capabilities, and potential actions? → Personas non grata
  • What are the needs, concerns, interests, and expectations of stakeholders related to the software’s development? How do stakeholders want to be involved in the software’s development? → Stakeholder interviews
  • Are all of our licenses, contracts, agreements, policies, and procedures compliant? How effective are our internal controls? → Document review

Part 3: Usability & accessibility audit

An accessibility audit on its own is beneficial because it can help organizations comply with accessibility standards and regulations, enhance user engagement, and improve SEO. We’ve found that an accessibility audit is a complementary process to a UX audit; they share a goal to improve user experience, and each focuses on overlapping aspects of the software.

Here we have combined the best practices of a usability heuristic evaluation with an accessibility self-audit, while keeping privacy and security top-of-mind. We believe that using this method, software teams can develop complementary findings, identifying a wider range of issues and opportunities for improvement which can help ensure their product or service is both usable and accessible to a broader range of users.

Please see this document: Accessibility and Usability Heuristic Review

Outputs

The output of a heuristic evaluation is a list of pain points. Each pain point should include information on the relative priority of the problem (e.g., “must fix before launch”, “fix soon”, “fix someday”), details on why it is painful (“users will expect X and be frustrated when they get Y”), and suggestions for improvement. In some cases it will not be possible for evaluators to directly suggest solutions, in which case they should aim to characterize what an improvement would look like (e.g., “this dialogue box should help the user understand A in addition to B”).
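For teams that track findings in a shared backlog or in code, here is a minimal sketch of what one such pain point might look like as a record. The field names and priority labels simply mirror the description above; they are not a required schema.

```typescript
// Minimal sketch of a heuristic-evaluation finding, mirroring the description above.
// Field names and priority labels are illustrative, not a required schema.

type Priority = "must fix before launch" | "fix soon" | "fix someday";

interface PainPoint {
  summary: string;      // What is the problem?
  priority: Priority;   // Relative priority of the problem
  rationale: string;    // Why it is painful for users
  suggestion: string;   // A proposed fix, or what an improvement would look like
}

const examplePainPoint: PainPoint = {
  summary: "Export dialog does not explain where files are saved",
  priority: "fix soon",
  rationale: "Users expect a visible save location and are frustrated when they cannot find the file",
  suggestion: "The dialog should help the user understand the destination folder in addition to the file format",
};
```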

Glossary

Accessibility: The degree to which a product, service, or digital interface can be used effectively and comfortably by everyone, regardless of their abilities or the assistive technologies used. It ensures that all users can access and interact with the system without discrimination or barriers.

Digital Security: the practice of protecting digital information from unauthorized access, use, disclosure, disruption, modification, or destruction. We often define security relative to a particular threat model, i.e., “is a program secure against threats A, B, and C?”

Human Rights Centered Design: insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all.[4]

Personally Identifiable Information (PII): any data that could potentially identify a specific individual, either directly or indirectly. Examples include legal name, address, phone number, credit card and bank account numbers, passport and driver’s license numbers, social security numbers, etc.

Risk Mitigation: The process of identifying and reducing potential risks or uncertainties associated with a project or product development. In the context of software development, it involves planning and implementing strategies to avoid or minimize potential negative impacts on the project’s success.

Threat Modeling: A way of thinking about the sorts of protection you want for your data so you can decide which potential threats you are going to take seriously. Coming up with a set of possible threats you plan to protect against is called threat modeling, or assessing your risks. While it’s impossible to protect against every kind of trick or adversary, threat modeling enables users to envision possible scenarios on a scale of likely to less likely, and plan accordingly.[5]

Trauma: A complex, disruptive, and painful phenomenon that people experience individually and collectively from abuse, deprivation, neglect, violence, or other violations of their basic needs and human rights.

Trigger experience: Not all people who experience trauma experience triggers. For those who do, a trigger experience results in greatly reduced capacity for the duration of the person’s response. This experience is frequently, but not always, terrifying for the person involved. It can be extremely difficult for people around the person who is triggered, too.

Triggered/trigger response: A person’s extreme, involuntary, rapid, physiological and mental response to an experience that the amygdala, or ‘reptilian’ part of their brain, has associated with an earlier trauma. Being triggered renders important parts of a person’s physiological and mental capacity temporarily non-functional.

Usability: The measure of how easy and efficient it is for users to achieve their goals when interacting with a product, system, or interface. Usability focuses on aspects such as learnability, efficiency, memorability, errors, and user satisfaction.

Types of Bias[6]

Confirmation Bias: The tendency to favor information that confirms one’s preexisting beliefs or hypotheses while ignoring or downplaying contradictory evidence. It can lead to skewed research results and erroneous conclusions.

Cultural Bias: The tendency to interpret information and experiences based on one’s own cultural background, leading to potential misinterpretations or misunderstandings when designing products for users from different cultural backgrounds.

Negativity Bias: The psychological tendency for people to pay more attention to and be influenced by negative information or experiences than positive ones. In UX research, this bias can impact how users perceive and evaluate a product.

Observational Bias: A bias that occurs when a researcher’s preconceived notions or expectations influence their observations and interpretations during a study. It can unintentionally affect the objectivity and validity of the research findings.

Recency Bias: The cognitive bias that gives more weight to recent events or experiences when making decisions or judgments, often overshadowing older or more distant information. In UX research, it can affect how users recall and evaluate their experiences with a product.

Segmentation Bias: A bias that arises when researchers make generalizations about a diverse group of users based on the behaviors or preferences of a specific subgroup within that larger group.


Appendix A - How to Prioritize Expert Audit Outputs

Upon receiving the report

Review the audit report, then identify the specific accessibility violations that need to be addressed.

  1. Prioritize the violations based on their severity and impact on users. For example, violations that affect the most critical functions of the product or service should be addressed first (a minimal sorting sketch follows this list).
  2. For each violation, create or edit the user story that describes the problem and its impact on users.
    1. Identify or create a (proto)persona to be the subject of the user story
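As noted in step 1, below is a minimal sorting sketch for ordering violations by severity and impact on critical functions. The severity scale, weighting, and example fields are our own illustrative assumptions and will not match any particular audit report format.

```typescript
// Minimal sketch for ordering audit violations by severity and user impact.
// The severity scale, weighting, and fields are illustrative assumptions,
// not part of any particular audit report format.

type Severity = "critical" | "serious" | "moderate" | "minor";

interface Violation {
  description: string;          // e.g. "Images on the checkout page lack alt text"
  severity: Severity;           // How badly it blocks users, per the audit report
  affectsCriticalPath: boolean; // Does it touch a core function of the product?
}

const severityRank: Record<Severity, number> = {
  critical: 4,
  serious: 3,
  moderate: 2,
  minor: 1,
};

// Violations on critical user paths are weighted ahead of others at the same severity.
function prioritizeViolations(violations: Violation[]): Violation[] {
  return [...violations].sort((a, b) => {
    const scoreA = severityRank[a.severity] * 10 + (a.affectsCriticalPath ? 1 : 0);
    const scoreB = severityRank[b.severity] * 10 + (b.affectsCriticalPath ? 1 : 0);
    return scoreB - scoreA;
  });
}
```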

Implementation of solutions

  1. Develop design solutions that address the violations and improve the accessibility of the product or service. This may involve working with developers to implement changes to the user interface, such as adding alt text to images or providing alternative text for non-text content.
  2. Work with the development team to ensure that the design solutions are implemented and tested so that they meet accessibility standards.

Testing of solutions

  1. Conduct user testing with people with disabilities to ensure that the design solutions are effective and meet their needs.
  2. Continuously monitor and evaluate the accessibility of the product or service and make adjustments as needed.


Footnotes


  1. The Web Content Accessibility Guidelines (WCAG 2.1) are the standard of compliance for accessibility online. They cover a wide range of recommendations for making Web content more accessible. For more information about incorporating accessibility standards, see here.

  2. Adapted from the EFF’s definition of threat model.

  3. https://designjustice.mitpress.mit.edu/pub/3h2zq86d/release/1?from=40310&to=40670

  4. Definition from https://www.humanrightscentered.design/glossary

  5. Adapted from EFF: https://ssd.eff.org/glossary/threat-model

  6. For more on cognitive bias in design, we recommend David Dylan Thomas’s Design for Cognitive Bias.