Can't log into Scratch with some user agents
Expected Behavior
A user should be able to log into Scratch with any user agent (see https://scratch.mit.edu/robots.txt).
Actual Behavior
With a certain user agent (shown below), a user is not able to log into Scratch; instead, a message along the lines of "an error occurred" is shown.
Steps to Reproduce
- Set your user agent to: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ninetails/2.2.0 Chrome/96.0.4664.45 Electron/16.0.0 Safari/537.36
- Log in to Scratch (or try to, at least).
One interesting thing I found is that if you change the x86_64 part of the UA, it will work.
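To make this easier to test, here is a minimal reproduction sketch in Python. The login endpoint (https://scratch.mit.edu/accounts/login/), the placeholder CSRF token, and the extra headers are assumptions borrowed from how community clients commonly talk to the site API, not an official interface; the dummy credentials are only there to see whether the request itself is rejected for the flagged UA.

```python
# Sketch: compare Scratch's response to a login attempt under the two user agents.
# The endpoint, headers, and CSRF handling below are assumptions, not an official API.
import requests

LOGIN_URL = "https://scratch.mit.edu/accounts/login/"  # assumed site-api login endpoint

BLOCKED_UA = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "
              "Ninetails/2.2.0 Chrome/96.0.4664.45 Electron/16.0.0 Safari/537.36")
# Same UA with the "x86_64" token altered, which reportedly makes login work again.
MODIFIED_UA = BLOCKED_UA.replace("x86_64", "x86-64")

def try_login(user_agent: str) -> int:
    """Send a login request with dummy credentials and return the HTTP status code."""
    headers = {
        "User-Agent": user_agent,
        "X-Requested-With": "XMLHttpRequest",
        "X-CSRFToken": "a",                               # placeholder token (assumption)
        "Cookie": "scratchcsrftoken=a;scratchlanguage=en;",
        "Referer": "https://scratch.mit.edu/",
    }
    payload = {"username": "test", "password": "test", "useMessages": True}
    resp = requests.post(LOGIN_URL, json=payload, headers=headers, timeout=10)
    return resp.status_code

if __name__ == "__main__":
    print("blocked UA  ->", try_login(BLOCKED_UA))
    print("modified UA ->", try_login(MODIFIED_UA))
```

If the flagged UA gets a generic error response while the modified UA reaches the normal "incorrect username or password" path, that would point at UA-based filtering rather than an account problem.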
Issue Analytics
- State:
- Created: 2 years ago
- Reactions: 7
- Comments: 11 (9 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Actually, robots.txt is a guideline for search engines and web crawlers. Additionally, any entity that is not a web browser should respect robots.txt. Therefore, as /site-api is disallowed in robots.txt, there is plenty of justification for unrecognized UAs to be blocked, such as cURL, which could potentially be used (in the form of libcurl) for a bot, and other abnormal UAs. At least, that's my opinion on the subject. [citation needed]
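For illustration, here is a small sketch of how a well-behaved non-browser client could check Scratch's robots.txt before touching /site-api, using Python's standard-library parser. The bot user-agent name is hypothetical; the /site-api disallow rule is the one referenced in this issue.

```python
# Sketch: honour Scratch's robots.txt before requesting a path like /site-api.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://scratch.mit.edu/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for url in ("https://scratch.mit.edu/site-api/", "https://scratch.mit.edu/projects/"):
    allowed = rp.can_fetch("MyScratchBot/1.0", url)  # hypothetical bot UA
    print(f"{url} -> {'allowed' if allowed else 'disallowed'} for MyScratchBot")
```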