User timeout caused connection failure
See original GitHub issue

Hello, when I send many requests at the same time (with dont_filter=True, because some of them will fail, and with 'DOWNLOAD_TIMEOUT': 60), some requests log the following info:

Retrying <GET http://www.xxx> (failed 1 times): User timeout caused connection failure: Getting http://www.xxx took longer than 1800 seconds..

No other error appears, only the usual log lines reporting that XX pages and XX items have been crawled.
I know this log message comes from https://github.com/scrapy/scrapy/blob/master/scrapy/core/downloader/webclient.py, but I don't understand why the requests time out after 1800 seconds when DOWNLOAD_TIMEOUT is set to 60.
Can someone help me?
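For context, here is a minimal sketch of the kind of setup described above; the spider name, URL, and callback are placeholders rather than details taken from the issue:

import scrapy


class ExampleSpider(scrapy.Spider):
    # Hypothetical spider: fires several requests at once with
    # dont_filter=True and a 60-second download timeout, as in the issue.
    name = "example"
    custom_settings = {"DOWNLOAD_TIMEOUT": 60}

    def start_requests(self):
        # Placeholder URL; dont_filter=True disables the duplicate filter so
        # the same URL can be requested again after an error.
        for _ in range(10):
            yield scrapy.Request(
                "http://www.xxx",
                callback=self.parse,
                dont_filter=True,
            )

    def parse(self, response):
        self.logger.info("Got %s (%d bytes)", response.url, len(response.body))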
Issue Analytics
- State:
- Created 7 years ago
- Comments: 7 (4 by maintainers)
Top Results From Across the Web

Scrapy error: User timeout caused connection failure
It retries 5 times and then fails completely. I can access the URL in Chrome, but it is not working in Scrapy.
Read more >

How to Solve Scrapy's User Timeout Failure - Tech Monger
A request timeout can happen for a host of reasons, but to solve a timeout issue you should try different request values while making...
Read more >

User timeout caused connection failure · Issue #3128 - GitHub
I have noticed that, after prolonged inactivity, people get stuck in a JavaScript refresh loop. It is sporadic, I do not know if it...
Read more >

How do I fix a "User timeout caused connection failure" error?
The quick bootstrapping is to be expected. In most environments the bootstrap will actually spin up a machine; when you're working with a ...
Read more >

Packet NEs cannot be enrolled with 'Connection timeout' error
In this case, the User timeout error is caused by the device not being accessible. Step 1: Check the NE connection profile in MCP...
Read more >
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
The error message
Retrying <GET http://www.xxx> (failed 1 times): User timeout caused connection failure: Getting http://www.xxx took longer than 1800 seconds..
says: "I timed out trying to fetch http://www.xxx in 1800 seconds, sorry."

Yes, if you are using Crawlera with scrapy-crawlera, then there is a CRAWLERA_DOWNLOAD_TIMEOUT setting that defaults to 1800 seconds. It seems to override the standard DOWNLOAD_TIMEOUT setting when enabled.

But what is your problem here? Do you want to change the timeout? Do you think there is a bug with the timeout? Or something else?

You can change the timeout by setting CRAWLERA_DOWNLOAD_TIMEOUT = 60 in your settings (or CRAWLERA_DOWNLOAD_TIMEOUT = DOWNLOAD_TIMEOUT). I think setting request.meta['download_timeout'] = 60 on a specific request should also work.
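Putting the two suggestions together, here is a hedged sketch, assuming scrapy-crawlera is enabled in the project; the spider name and URL are placeholders:

# settings.py: project-wide fix. scrapy-crawlera's CRAWLERA_DOWNLOAD_TIMEOUT
# defaults to 1800 seconds and takes precedence over DOWNLOAD_TIMEOUT while the
# middleware is enabled, so align the two explicitly.
DOWNLOAD_TIMEOUT = 60
CRAWLERA_DOWNLOAD_TIMEOUT = 60  # or: CRAWLERA_DOWNLOAD_TIMEOUT = DOWNLOAD_TIMEOUT

Or, to override the timeout for a single request through request.meta:

import scrapy


class TimeoutSpider(scrapy.Spider):
    # Hypothetical spider: overrides the download timeout per request.
    name = "timeout_example"

    def start_requests(self):
        # download_timeout in meta should apply to this request only.
        yield scrapy.Request(
            "http://www.xxx",
            callback=self.parse,
            meta={"download_timeout": 60},
        )

    def parse(self, response):
        self.logger.info("Fetched %s", response.url)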