Accessibility improvements
I was testing my website with a person who is completely blind and uses the VoiceOver (VO) function on an iPad. I was observing her as she browsed the entire webpage (through a tapping function in VO that mimics tabbing between elements; see the Notes section below) and came across the Leaflet map. I don’t remember if she came across the map by tabbing through the page or by randomly tapping on it. What comes next is the unexpected behavior.
Problems while using VoiceOver
VO spoke the filenames of the images in the map’s base layer, saying “6078 dot p n g, link”, “6079 dot p n g, link”, and so forth. When the user tabbed and landed on the zoom controls, VO read “plus sign, link” and “hyphen, link”. I can’t remember whether it spoke the default title attributes on these buttons.
The ESRI search button had no attribute to read, but VO said that it was a search input that could be opened. This is its own issue.
When she landed on the layer control (where layers are switched on and off) the outcome was as I expected, and useful. VO spoke the name of the layer, indicated it was a checkbox or radio button and said whether it was selected. By double tapping she could control the layer’s visibility.
I feel at this point that it must be explained that this user likes maps. She can use maps that are designed for the blind. She highlighted the “Blindsquare” app as extremely usable, as it speaks the names of businesses she’s walking past or that are nearby in a different direction.
I think that Leaflet can be designed to be usable by the blind because it’s still possible to describe what features are being overlaid on the map. Continuing to tab through the page, VO came across the one marker on my map whose popup opens when the map loads. The information in the popup window could be read, and any links within it to other pages could be clicked/tapped.
What she (or VO) couldn’t do was tab to the next few hundred markers on the map, which were inside clusters created by the MarkerCluster plugin.
Expectations
My ideas apply only to using VO on an iPad. I think that when tabbing through elements on the screen, VO should come across the <div id='map'> tag and read its title attribute, which should tell the user that this is a map, what the map shows, and that it has been designed for their screen-reading environment.
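Nothing in Leaflet prevents doing part of this from the application side already. A minimal sketch, assuming plain Leaflet and an invented label text for my permits map:

    var map = L.map('map');
    var container = map.getContainer();

    // Describe the map itself so a screen reader announces what it shows
    // when the container receives focus.
    container.setAttribute('title', 'Map of building permits');
    container.setAttribute('aria-label',
      'Map of building permits. Use the tab key to reach the map controls and markers.');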
The next tab should land on the zoom controls, layer controls, and any other controls, so the user can understand what functions are available to control the map. These should all have title attributes.
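The stock zoom control already accepts zoomInTitle and zoomOutTitle options, so some of this is possible today; labelling the layer control by hand is my own addition and only a sketch:

    // Assumes the map was created with zoomControl: false so we can add our own.
    L.control.zoom({
      zoomInTitle: 'Zoom in on the map',
      zoomOutTitle: 'Zoom out of the map'
    }).addTo(map);

    // Label the layer control's container directly (baseLayers/overlays are
    // whatever layer objects the page already defines).
    var layersControl = L.control.layers(baseLayers, overlays).addTo(map);
    layersControl.getContainer().setAttribute('aria-label', 'Choose which map layers are shown');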
The next tab should land on any marker that has a popup, and the contents of the popup should be read. The next tabs should land on and activate other markers, depending on how the map designer has set them up (does a click on the marker open a popup, or does it open a link to another page?).
The difference in marker icons should be made apparent to the screen reader through the title and alt options that Leaflet exposes. For example, I have markers that indicate the kind of building permit each one represents, and I use a wrench icon for “renovation” permits. I have gone and set these attributes for the website, along with any other missing attributes, but I’m not sure whether they will be noticed by VO.
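For reference, a sketch of the options involved (title, alt and keyboard are documented L.Marker options; the coordinates, icon file and popup text here are made up):

    var wrenchIcon = L.icon({ iconUrl: 'wrench.png', iconSize: [25, 25] });

    L.marker([45.50, -73.57], {
      icon: wrenchIcon,
      title: 'Renovation permit',               // tooltip, read by some screen readers
      alt: 'Renovation permit at 123 Main St',  // alt text on the marker's <img>
      keyboard: true                            // the default: marker is reachable with tab
    })
      .bindPopup('Renovation permit issued for 123 Main St.')
      .addTo(map);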
Conclusion
The biggest obstacle I observed was that each individual base map tile image was selectable, and the user had no idea what these were or what would happen if she tapped on them (that they were links at all was unexpected).
I think these map tiles should be invisible even to the screen reader. Anyone with an iPad can turn on VoiceOver by asking Siri (or in the Settings app) and experience some of these problems.
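One blunt way to get there, assuming Leaflet 1.x panes, is to hide the whole tile pane from assistive technology (a sketch, not a tested fix):

    // Every tile <img> lives inside the tile pane, so hiding the pane keeps
    // individual tiles out of the accessibility tree entirely.
    map.getPane('tilePane').setAttribute('aria-hidden', 'true');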
Notes
This user primarily uses JAWS software on a Windows computer to navigate the web and all other software. That experience is very different than using VO because of the hardware keyboard and the key commands that JAWS offers (for example, press “H” to jump between heading tags on a website).
It’s nothing short of amazing to watch how VoiceOver works. To move across the page, to tab forward, you swipe from left to right and VO reads the element. For example, text that is tagged in HTML with <h1> to <h6> will be read by VO as “[tagged text], heading level 1”. To select something, like a link, you would double tap.
Top GitHub Comments
After reading through this issue (in response to an internal Esri discussion around accessibility) I think a few things could be done:
- role=presentation and alt="" on map tiles so screen readers don’t read their URLs.
- role=button and aria-label="Zoom In/Out" on the zoom buttons.
- aria-label might be read by more screen readers than title.

Definitely not. In my own testing most screen readers will read the image’s URL if an alt attribute is missing. You can see https://www.w3.org/TR/WCAG20-TECHS/H67 for more info and https://github.com/Esri/esri-leaflet/blob/10fced2121c5cbc506c8b6e7ee95cc891076eabe/src/Layers/TiledMapLayer.js#L81-L85 for how this is implemented in Esri Leaflet.

I actually really like this idea. If a user focuses a popup we should send focus to the popup container. I might look into putting this in Esri Leaflet first and seeing if I can push anything that might be useful upstream to Leaflet. This would also need to make SVGs accessible, but I think there is something here.
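A rough sketch of the same tile handling in plain Leaflet, mirroring the Esri Leaflet code linked above (the AccessibleTileLayer name is made up, and this assumes Leaflet 1.x, where TileLayer exposes createTile):

    var AccessibleTileLayer = L.TileLayer.extend({
      createTile: function (coords, done) {
        var tile = L.TileLayer.prototype.createTile.call(this, coords, done);
        // Mark the tile as purely presentational so screen readers skip it
        // instead of reading its URL.
        tile.setAttribute('role', 'presentation');
        tile.setAttribute('alt', '');
        return tile;
      }
    });

    new AccessibleTileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);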
Hi, in my own testing of the NVDA screen reader, I see that popups are not easy to open. In “navigation” mode of NVDA, when a marker is focused with the TAB key, the screen reader will say the alt text of the img tag if there is one. But hitting ENTER or SPACE doesn’t open the popup.
Adding only role="button" to the img tag of markers makes it work.

    _initIcon: function () {
        ...
        if (icon !== this._icon) {
            ...
            icon.setAttribute('role', 'button');
        }
        ...
    }

works great. Is there any reason not to do so?

After that: maybe focus the popup content automatically at popupOpen -> the voice can read it -> give focus back to the marker when the popup is closed -> the user can tab to the next marker.
This would be very useful for cases where popups contain complex information, links or buttons and can’t be summarized in the alt text of the icons alone.
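A sketch of that focus hand-off, assuming Leaflet 1.x (popup.getElement() and marker.getElement() exist there; using the popup’s _source to find the originating marker relies on a private property):

    map.on('popupopen', function (e) {
      var el = e.popup.getElement();
      el.setAttribute('tabindex', '-1'); // make the popup container focusable
      el.focus();                        // the screen reader starts reading the popup
    });

    map.on('popupclose', function (e) {
      var marker = e.popup._source;      // the layer the popup was bound to (private)
      if (marker && marker.getElement && marker.getElement()) {
        marker.getElement().focus();     // hand focus back so the user can tab onward
      }
    });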
EDITED: This 2015 demo is very enlightening: http://melmo.github.io/accessibility/berlin.js/code/dist/leaflet.html
The role='application' basically prevents assistive technology (JAWS / NVDA) from taking keyboard handling away from the JS. In my testing (Chrome and Firefox, Leaflet in fullscreen) it stops the screen reader from reading ALL the alt texts of the icons when the page loads. And after that, the user can TAB from one marker to another and open the markers with the ENTER key. Just like the usual Leaflet keyboard navigation without a screen reader.
The role='application' may be dangerous as it turns off screen reader keyboard shortcuts a user could be used to, but it seems to fit perfectly with Leaflet. The article says it breaks things in Firefox. It is OK for me though.
NOTE: adding role='button' on the img tags is still a good idea. When tabbing between markers, the user will hear “this is my alt text - BUTTON” and not “this is my alt text - GRAPHIC”, so they know they can “click” it with the ENTER key. Great!
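Putting the two pieces together, a sketch of what this comment describes, assuming Leaflet 1.x (the label text is invented, and whether role='application' is the right trade-off depends on the audience, as noted above):

    var container = map.getContainer();
    container.setAttribute('role', 'application'); // let Leaflet's own key handling win over the screen reader's
    container.setAttribute('aria-label', 'Interactive map. Use tab and enter to browse the markers.');

    // Apply role="button" to marker icons as they are added, without patching _initIcon.
    map.on('layeradd', function (e) {
      if (e.layer instanceof L.Marker && e.layer.getElement()) {
        e.layer.getElement().setAttribute('role', 'button');
      }
    });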