Add alt text to scikit-learn documentation
Describe the issue linked to the documentation
Adding alt text to images permits visually impaired users to have greater access.
Suggest a potential alternative/fix
About Alt Text
Alt text (alternative text), also known as an "alt attribute," an "alt description," or (technically incorrectly) an "alt tag," is used within HTML code to describe the appearance and function of an image on a page.
Alt text uses:
- Adding alternative text to images is first and foremost a principle of web accessibility: screen readers read the alt attribute aloud, so visually impaired users can better understand an on-page image.
- Alt text is displayed in place of an image if the image file cannot be loaded.
- Alt text provides better image context/descriptions to search engine crawlers, helping them index an image properly.
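As a minimal sketch of what this looks like in HTML (the file name and wording here are illustrative, not taken from the scikit-learn docs):

```html
<!-- The alt attribute is read aloud by screen readers and shown
     in place of the image if the file fails to load. -->
<img src="plot_confusion_matrix.png"
     alt="Heatmap of a confusion matrix with true labels on the
          y-axis and predicted labels on the x-axis; most weight
          lies on the diagonal.">
```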
References
- https://github.com/isabela-pf/scikit-learn/pull/1
- Chartability
- Meetup event PyLadies SWFL & Miami, Python SWFL
Questions
- For scikit-learn, what is the maximum line length for writing alt text descriptions for images?
- Is there a way to do a `grep` of the library and see how many images exist in the documentation?
- Can you confirm that the images are static? They are produced from code within the documentation. Are the images that are produced always the same?
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 5
- Comments: 16 (15 by maintainers)
I’m adding some resources from a presentation I attended by Wandke Consulting.
I’m here to respond to some of the thoughts listed above. I’m always happy to see an active discussion around these topics as we have a lot less resources to draw from when it comes to writing alt text for scientific diagrams specifically.
Responding to @adrinjalali, this is good to know! We had a few contributors to the alt text mini sprint mention the same thing when we were talking.
A key part of writing helpful alt text is understanding the role and information an image provides in its surrounding context. This means that the same image used in different places might benefit from different alt text depending what it’s meant to illustrate in that instance. Or, in the case that you are talking about, plots with differing content may be perfectly fine with the same alt text if they still serve the same role in the documentation.
I’m not personally a user of scikit-learn (or anything similar), but from reading the docs my understanding is that many of the images are examples of a process described in the preceding paragraph. The images aren’t usually adding new information (like a step-by-step guide on how to use the process) or relying on the reader to understand each point on the plot. So unless the type of plot or its axes are changing, I think this might be a less critical problem for this project.
(I had a similar discussion with people about variable content in the numpy-tutorials repo when we worked on alt text there.)
Responding to @thomasjpfan, I agree on making it easy for contributors. Some description is better than none (none usually means the screen reader just reads the name of the image file), so if I were the one reviewing PRs, I’d be looking for a non-empty alt attribute as the baseline.
As for “what makes good alt text,” I have some resources that might help. The main resource I’ve found for plots (with the help of @marsbarlee) is the Diagram Center’s checklist, which asks for the type of graph, the axes, and the points. Personally, I’ve found that we are usually dealing with far too many points to reasonably list out in the alt attribute, so I take this as a chance to describe the trend of the plot or any other defining features relevant to the text that surrounds it.
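A sketch of that checklist (type of graph, axes, trend) applied to a Sphinx `image` directive — the file name and description are hypothetical:

```rst
.. image:: images/plot_iris_sepal.png
   :alt: Scatter plot of sepal length versus sepal width for the iris
         dataset; setosa points cluster in the upper left, while
         versicolor and virginica overlap toward the lower right.
```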
I also put together a guide collecting resources and guidelines for different types of images that we’ve used to help people new to alt text (ignoring the few Jupyter-specific bits).
And this is the checklist we used during the alt text mini-sprint:
I hope that helps! Let me know if you have any other questions or there’s other ways I can help.