User timeout caused connection failure

See original GitHub issue

Hello, when I send many requests at the same time (with dont_filter=True, because some of them will fail, and with 'DOWNLOAD_TIMEOUT': 60), some requests log the following message: Retrying <GET http://www.xxx> (failed 1 times): User timeout caused connection failure: Getting http://www.xxx took longer than 1800 seconds. No other error appears, only the usual log lines saying that XX pages and XX items have been crawled. I know the message comes from https://github.com/scrapy/scrapy/blob/master/scrapy/core/downloader/webclient.py, but I don't understand why this happens. Can someone help me?
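
The report does not include the spider code, so the following is only a rough sketch of the setup being described: DOWNLOAD_TIMEOUT set to 60 seconds and requests issued with dont_filter=True. The spider name and URLs are placeholders, not taken from the issue.

    import scrapy


    class ExampleSpider(scrapy.Spider):
        # Hypothetical spider reconstructing the setup described in the issue.
        name = "example"

        # The reporter says DOWNLOAD_TIMEOUT is set to 60 seconds.
        custom_settings = {
            "DOWNLOAD_TIMEOUT": 60,
        }

        def start_requests(self):
            # Placeholder URLs; the real targets are not given in the issue.
            urls = ["http://www.xxx/page/%d" % i for i in range(1, 50)]
            for url in urls:
                # dont_filter=True re-issues duplicate URLs instead of letting
                # the dupefilter drop them, as described in the report.
                yield scrapy.Request(url, callback=self.parse, dont_filter=True)

        def parse(self, response):
            self.logger.info("Got %s (%d bytes)", response.url, len(response.body))

With this configuration the downloader should give up after 60 seconds, which is why the 1800-second figure in the error message is surprising; the answers below explain where it comes from.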

Issue Analytics

  • State: closed
  • Created: 7 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

2 reactions
nyov commented, May 11, 2016

The error message Retrying <GET http://www.xxx> (failed 1 times): User timeout caused connection failure: Getting http://www.xxx took longer than 1800 seconds. says:

  1. Downloader: "I timed out trying to fetch http://www.xxx in 1800 seconds, sorry."
  2. RetryMiddleware: "I am retrying the Request that failed with the error 'I timed out trying to fetch http://www.xxx in 1800 seconds, sorry'."
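
Neither of those log lines surfaces inside the spider itself. One way to observe the failure directly in your own code, rather than only in the retry log, is the errback pattern from the Scrapy documentation; the sketch below uses a hypothetical spider and URL, only the pattern itself is the point.

    from twisted.internet.error import TCPTimedOutError, TimeoutError

    import scrapy


    class TimeoutAwareSpider(scrapy.Spider):
        # Hypothetical spider; not part of the original issue or answer.
        name = "timeout_aware"

        def start_requests(self):
            yield scrapy.Request(
                "http://www.xxx",
                callback=self.parse,
                errback=self.on_error,
                dont_filter=True,
            )

        def parse(self, response):
            self.logger.info("Fetched %s", response.url)

        def on_error(self, failure):
            # Called once RetryMiddleware has exhausted its retries
            # (or immediately, if retries are disabled).
            if failure.check(TimeoutError, TCPTimedOutError):
                self.logger.error("Timed out: %s", failure.request.url)
            else:
                self.logger.error("Other download error: %r", failure)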

1 reaction
nyov commented, May 11, 2016

Yes, if you are using Crawlera with scrapy-crawlera, then there is a CRAWLERA_DOWNLOAD_TIMEOUT that defaults to 1800 seconds. It seems to override the standard DOWNLOAD_TIMEOUT setting when enabled.

But what exactly is the problem here? Do you want to change the timeout? Do you think there is a bug in the timeout handling? Or something else?

You can change the timeout by setting CRAWLERA_DOWNLOAD_TIMEOUT = 60 in your settings (or CRAWLERA_DOWNLOAD_TIMEOUT = DOWNLOAD_TIMEOUT). I think setting request.meta['download_timeout'] = 60 on a specific request should also work.
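
A minimal sketch of the project-wide option, assuming the Crawlera middleware from scrapy-crawlera is enabled in the project settings:

    # settings.py (sketch)
    DOWNLOAD_TIMEOUT = 60

    # scrapy-crawlera's own timeout otherwise defaults to 1800 seconds and
    # takes precedence over DOWNLOAD_TIMEOUT while the middleware is enabled.
    CRAWLERA_DOWNLOAD_TIMEOUT = 60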

Top Results From Across the Web

  • Scrapy error: User timeout caused connection failure
    It retries 5 times and then fails completely. I can access the URL in Chrome, but it does not work in Scrapy.
  • How to Solve Scrapy's User Timeout Failure (Tech Monger)
    A request timeout can happen for a host of reasons, but to solve a timeout issue you should try different request values while making...
  • User timeout caused connection failure · Issue #3128 (GitHub)
    I have noticed that, after prolonged inactivity, people get stuck in a javascript refresh loop. It is sporadic, do not know if it...
  • How do I fix a "User timeout caused connection failure" error?
    The bootstrapping quickly is to be expected. In most environments the bootstrap will actually spin up a machine, when you're working with a...
  • Packet NEs cannot be enrolled with 'Connection timeout' error
    In this case, the User timeout error is caused by the device not being accessible. Step 1: Check the NE connection profile in MCP...
