False positives do occur with automated testing tools. A false positive is an error that a tool reports that is not in fact an accessibility barrier.
Example False Positive
For example, Compliance Sheriff can incorrectly flag a spacer image as an error.
<div id="UMDgoldline"> <img src="/base/extras/spacer.gif" height="1" width="1" alt=""> </div>
Spacer images are old technology. They are no longer necessary, and it is good practice for developers to phase them out. But they are not accessibility errors. If spacer images are present, they should be treated as "purely decorative." That is, they should have null alt text (alt="") so they are hidden from screen reader users. For more information, consult How to Write Short Text Alternatives.
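To make the distinction concrete, here is a minimal sketch (not part of any real tool) of how a checker could separate genuine errors from decorative images. It uses Python's standard html.parser: an img with no alt attribute at all is an accessibility error, while an img with alt="" is correctly hidden from screen readers and should not be flagged.

```python
from html.parser import HTMLParser

class AltAuditor(HTMLParser):
    """Collects <img> tags, separating genuine errors (missing alt)
    from decorative images correctly hidden with alt=""."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []  # no alt attribute at all: a real error
        self.decorative = []   # alt="": hidden from screen readers, fine

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "")
        if "alt" not in attrs:
            self.missing_alt.append(src)
        elif attrs["alt"] == "":
            self.decorative.append(src)

auditor = AltAuditor()
auditor.feed('<div id="UMDgoldline">'
             '<img src="/base/extras/spacer.gif" height="1" width="1" alt="">'
             '</div>'
             '<img src="/photos/chart.png">')
print(auditor.decorative)   # ['/base/extras/spacer.gif']
print(auditor.missing_alt)  # ['/photos/chart.png']
```

Note that the spacer from the example above lands in the decorative list: a tool that reports it as an error is producing a false positive.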
How to Check for False Positives
False positives can be minimized by using additional tools and by developing your accessibility knowledge.
Using a Second Tool
The WAVE tool can help check for Compliance Sheriff false positives.
For example, if our example false positive is checked with WAVE, it will identify that same spacer.gif as a feature, not an error.
The WAVE sidebar's documentation will explain spacer images:
Null or empty alternative text on spacer
What It Means
Alternative text is null or empty (alt="") on a spacer image.
Why It Matters
Spacer images are used to control layout or positioning. Because they do not convey content, they should be given empty/null alternative text (alt="") to ensure that the content is not presented to screen reader users and is hidden when images are disabled or unavailable.
How to Fix It
Ensure that the image is a spacer image and that it does not convey content. Consider using CSS instead of spacer images for better control of positioning and layout.
The Algorithm... in English
An image with width and/or height of 3 pixels or less or file name of spacer.*, space.*, or blank.* has empty/null alt attribute value (alt="").
Standards and Guidelines
- Section 508 (a)
- 1.1.1 Non-text Content (Level A)
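The plain-English algorithm quoted above is simple enough to sketch in code. The following Python function is an illustrative approximation of that rule, not WAVE's actual implementation: an image is treated as a null-alt spacer when its alt attribute is empty and either a dimension is 3 pixels or less or its file name matches spacer.*, space.*, or blank.*.

```python
import re

def looks_like_spacer(src, width=None, height=None, alt=None):
    """Approximation of the WAVE rule quoted above: flag an image as a
    spacer with null alt when alt is empty AND either a dimension is
    3 pixels or less or the file name matches spacer.*, space.*, or
    blank.*."""
    if alt != "":  # the rule only applies to images with alt=""
        return False
    tiny = any(d is not None and d <= 3 for d in (width, height))
    name = src.rsplit("/", 1)[-1].lower()
    named = re.match(r"(spacer|space|blank)\.", name) is not None
    return tiny or named

print(looks_like_spacer("/base/extras/spacer.gif", 1, 1, ""))       # True
print(looks_like_spacer("/photos/chart.png", 400, 300, "Sales"))    # False
```

Seeing the rule written out this way makes the tool's limits obvious: it matches dimensions and file-name patterns, not meaning, which is exactly why a human still has to confirm the image really is decorative.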
To be effective, accessibility tools must form part of a wider process that includes an informed person acting on their recommendations. Checking for accessibility is a bit like spell-checking a document. The spell-checker can make best guesses of misspelled words based on patterns that it knows. However, it takes a human to make the final decision. The real trick is to understand what the specifications mean and why a tool says what it says.
Due to the interpretive nature of WCAG 2.0, automated evaluation tools can lack the discerning nature of a human evaluator, who can look subjectively at WCAG 2.0 and decide if a particular guideline has been satisfied. Typically, automated software tools mostly indicate a negative or positive result against a guideline with no contextualized interpretation of the guideline and its severity impact on the user. That is why many success criteria require human evaluation and therefore fall outside the scope of fully automated evaluation.
Where accessibility is concerned, human judgment is key. Knowledge is the best tool for ensuring accessibility. The tools are only useful if you know what they're looking for, and if you know how to interpret their suggestions (including allowing for special cases in your code that generate false positives in the evaluation). They will also only verify a fraction of the things you need to do; the rest of the checkpoints require human judgment, i.e., knowledge.