My previous posts (part one and part two) explored what phishing attacks are and ways that designers can help prevent their products from becoming a target. In this post, I’d like to examine some more technical countermeasures. If you’re a designer interested in fighting phishing, this can be useful background information, and it can help prepare you for discussions with your more technical teammates. I also hope this post will highlight that current technical solutions alone are not enough to help users fight phishing.
How browser companies fight phishing
Web-browser companies work hard to fight phishing. Services such as the Safe Browsing initiative provide a continually updated catalog of probable phishing sites and help users of Chrome, Firefox, and Safari avoid them. These browsers pop up a warning message when users navigate to a site in the catalog. Some anti-virus companies provide software that performs a similar function. These services work best against phishing sites that have been around for a few hours or days but are less effective for ones that just launched or that target a limited number of high-value users (such as the spear phishing attacks I described in my first post).
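At its core, this kind of protection is a lookup: before loading a page, the browser checks the destination against a catalog of known-bad sites. Here is a minimal sketch of that idea using a hypothetical local blocklist — the real Safe Browsing service works with hashed URL prefixes and a remote API, not a plain set of hostnames:

```python
# Minimal sketch of a Safe Browsing-style check (hypothetical local blocklist,
# NOT Google's actual Safe Browsing protocol): the browser extracts the URL's
# host and looks it up in a continually updated set of known phishing hosts.
from urllib.parse import urlsplit

# Hypothetical blocklist entries; real services distribute hashed prefixes.
PHISHING_HOSTS = {"secure-amaz0n-login.example", "paypa1-verify.example"}

def is_flagged(url: str) -> bool:
    """Return True if the URL's host appears on the local blocklist."""
    host = (urlsplit(url).hostname or "").lower()
    return host in PHISHING_HOSTS

# A browser would show its warning page before loading a flagged site.
```

This also illustrates the weakness described above: a freshly launched phishing site simply isn’t in the set yet, so the lookup comes back clean.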
An example of Firefox’s phishing warning for a site that was listed in the Safe Browsing blacklist. Adapted from this image by Paul Jacobson, which was released under a CC BY-NC-SA 2.0 license.
In considering the browser’s efforts to protect users, one common misconception is that the lock icon in the URL bar communicates the authenticity of a website. For example, some people might think that a lock next to a URL containing the word “Amazon” means that you’re viewing a page legitimately owned by Amazon.com. In fact, the lock symbol is meant to convey whether the connection between your computer and the web server is encrypted. It’s entirely possible for the creator of a phishing site to set up encryption on a bogus site, so relying on the presence of a lock icon alone can’t keep you from falling for an attack.
While reassuring, a lock icon in a browser’s URL bar does not necessarily mean that the site in question is legitimate.
Although the lock itself isn’t necessarily meaningful in the fight against phishing, the information you get when you click on it can be — if you know what to look for. In most modern web browsers, clicking on the lock will show you security details about the site, including information about its SSL certificate. This certificate includes the organization’s name, its location, and which website(s) are affiliated with it. In theory, these certificates are only issued to an organization after a certification authority such as Symantec or Entrust verifies these aspects of its identity. When the identity-verification process works well, it means that someone pretending to be Amazon.com Inc. and located in Seattle, WA will be prevented from getting an SSL certificate tying their website to that company’s name and location.
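To make the certificate’s contents concrete, here is a sketch of pulling out the identity fields a browser displays when you click the lock. The dictionary mirrors the shape Python’s `ssl.SSLSocket.getpeercert()` returns for a live connection, but the values below are illustrative, not a real Amazon certificate:

```python
# Sketch: extracting the identity fields a browser shows behind the lock icon.
# The dict mirrors the structure returned by ssl.SSLSocket.getpeercert();
# the values here are illustrative stand-ins, not a real certificate.
def subject_fields(cert: dict) -> dict:
    """Flatten the certificate's subject into a name -> value mapping."""
    return {name: value
            for rdn in cert.get("subject", ())
            for name, value in rdn}

sample_cert = {
    "subject": (
        (("countryName", "US"),),
        (("stateOrProvinceName", "Washington"),),
        (("organizationName", "Amazon.com, Inc."),),  # verified by the CA
        (("commonName", "www.amazon.com"),),          # site the cert covers
    ),
}

fields = subject_fields(sample_cert)
# fields["organizationName"] -> "Amazon.com, Inc."
```

These organization and location fields are exactly the claims a certification authority is supposed to verify before issuing the certificate.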
The SSL certificate I received when viewing Bank of America’s website. It is issued by the Symantec Corporation’s “certification authority” and has a specific assurance level.
In practice, the process can be very messy and subject to corruption or subversion. This was the case in 2011 when the webmail of up to 300,000 Iranians was compromised after a certification authority was hacked. By issuing fraudulent SSL certificates, the attackers were able to more accurately impersonate domains such as gmail.com, compromise a number of Iranian users’ credentials, and spy on them. Even sophisticated users were fooled.
When the classic certificate-based system fails, there are newer lines of defense such as key pinning and certificate transparency. Key pinning is a browser feature to verify that the SSL certificates for a company's sites are actually issued by the certification authorities that the company uses.
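The pin check itself is simple in concept: hash the server’s public key and require the hash to match one of the pins known for that site. The sketch below is in the spirit of HTTP Public Key Pinning and Chrome’s built-in pin list; the key bytes and pins are dummy values, not real cryptographic material:

```python
# Sketch of a key-pinning check (in the spirit of HPKP / Chrome's built-in
# pin list): hash the server's public key and require a match against the
# pins known for that site. Keys below are dummy stand-ins for DER bytes.
import base64
import hashlib

def spki_pin(public_key_der: bytes) -> str:
    """Base64-encoded SHA-256 of the key, the format HPKP used for pins."""
    return base64.b64encode(hashlib.sha256(public_key_der).digest()).decode()

def pin_matches(public_key_der: bytes, expected_pins: set) -> bool:
    return spki_pin(public_key_der) in expected_pins

legit_key = b"dummy-der-encoded-public-key"   # stand-in for real DER bytes
pins = {spki_pin(legit_key)}                  # pins the browser ships

# A key from a fraudulently issued certificate fails the check, even though
# the certificate itself chains to a trusted certification authority.
rogue_key = b"attacker-controlled-public-key"
```

The crucial point is that a fraudulent certificate can pass the ordinary chain-of-trust check and still fail the pin check, because the attacker doesn’t control the site’s real key.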
In the case of the 2011 webmail compromise, there is evidence that Google was able to detect the attack because it monitored error messages generated by Chrome's key pinning feature. Certificate transparency is another approach to protecting against malicious or compromised certification authorities. It creates a public, auditable record of the certificates that are issued. Since it’s an independent service that doesn't rely on any web browsers, it makes it possible for anyone with the technical know-how to monitor the certificate infrastructure and detect when something fishy is going on.
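Conceptually, the monitoring that certificate transparency enables looks something like the sketch below: scan the public record of issued certificates for your domains and flag any issued by a certification authority you don’t use. The log entries and CA names here are illustrative, not real CT log data:

```python
# Sketch of CT-style monitoring: a domain owner scans a public log of
# issued certificates and flags issuances by an unexpected CA.
# Entries and issuer names are illustrative, not real log records.
EXPECTED_ISSUERS = {"gmail.com": {"Google Trust Services"}}

def unexpected_certs(log_entries):
    """Yield (domain, issuer) pairs the domain owner did not expect."""
    for entry in log_entries:
        domain, issuer = entry["domain"], entry["issuer"]
        expected = EXPECTED_ISSUERS.get(domain)
        if expected is not None and issuer not in expected:
            yield domain, issuer

log = [
    {"domain": "gmail.com", "issuer": "Google Trust Services"},   # fine
    {"domain": "gmail.com", "issuer": "Compromised CA Example"},  # suspicious
]
# list(unexpected_certs(log)) -> [("gmail.com", "Compromised CA Example")]
```

Because the log is public and append-only, a fraudulent certificate like the ones issued in the 2011 attack would be visible to anyone watching, not just to the browser vendor.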
Although the certificate system was designed to help users verify the authenticity of a website, it is not very accessible to the average person. As the above figure of a certificate shows, it’s difficult to communicate a digital certificate’s contents in a way that is meaningful to non-experts.
Instead of relying on users to manually check digital certificates, many web browser teams are now trying to surface certificate information more proactively. One recent trend is putting certificate information directly in the URL bar. While this can confirm that a site is legitimate, its absence does little to alert users when a site is fake, in part because users don’t understand what the information is trying to convey in the first place.
Screenshots of the URL bar in Firefox, Chrome, and Safari (in descending order). Each browser has a slightly different way of signifying the presence of a high-assurance SSL certificate.
Phishing is not an issue of “stupid users”
Although browsers work hard to help users protect themselves from phishing attacks, many of the mechanisms in place are not useful for non-expert users. As my first post discussed, phishing attacks are growing ever-more sophisticated. They target victims with carefully crafted messages that reference specific cultural touchpoints to put them at ease. Thus, it’s not surprising when even savvy people fall prey to a well-designed attack, especially if it takes advantage of a particularly stressful moment or situation.
That’s why I find it so frustrating when so much anti-phishing advice is focused entirely on the behavior of the would-be victims. For example, given how today’s dynamic email content works, tidbits like these are often not practical:
“Never use links in an email to connect to a Web site. Instead, open a new browser window and type the URL directly into the address bar.” – Advice from Norton
I recently received a message from NPR.org and wanted to give them feedback on their NPR One app via the link they provided. If I followed the above advice literally, I would have to type the following URL into my browser before I could contact NPR!
This example shows that the “type the URL into the web browser” method is outdated and impractical, especially as our society moves toward mobile form factors. Unfortunately, most people attempting to follow this one piece of advice are likely to get frustrated, feel overwhelmed, and end up ignoring information about fighting phishing altogether. It also points to a hard problem: it’s difficult to give non-expert users advice that is both accessible and useful.
The UX-research community needs to do more to understand what kind of anti-phishing advice is actually helpful to end users and what mechanisms for conveying it are the most effective. This, combined with continued work on the technical and design side, will help keep the threat at bay.