Connection timeout (not Read, Wall or Total) is consistently taking twice as long
I’m aware that several issues related to timeout were opened (and closed) before, so I’m trying to narrow this report down to a very specific scope: connection timeout is behaving in a consistent, wrong way: it times out at precisely twice the requested time.
Results below are so consistent we must acknowledge there is something going on here! I beg you guys not to dismiss this report before taking a look at it!
What this report is not about:

- Total/Wall timeout: that would be a nice feature, but I’m fully aware it is currently outside the scope of Requests. I’m focusing on connection timeout only.
- Read timeout: all my tests use http://google.com:81, which fails to even connect. There’s no read involved: the server exists but never responds, not even to refuse the connection. No data is ever transmitted, and no HTTP connection is ever established. This is not about `ReadTimeoutError`, this is about `ConnectTimeoutError`.
- Accurate timings / network fluctuations: I’m not asking for millisecond precision, and I don’t even care about whole-second imprecision. But, surprisingly, `requests` is being incredibly accurate… to twice the time.
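To illustrate the distinction (a minimal local sketch of my own, not part of the tests below): a server that accepts the TCP connection but never sends a byte is what triggers a read timeout, whereas google.com:81 never completes the handshake at all, so only the connect timeout can be in play:

```python
import socket

# A local server that *accepts* the connection but never sends a byte:
# the connect succeeds instantly, so only the read timeout can fire.
# (google.com:81 is the opposite case: the handshake never completes.)
srv = socket.socket()
srv.bind(('127.0.0.1', 0))   # any free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.create_connection(('127.0.0.1', port), timeout=1.0)  # connects at once
cli.settimeout(0.2)          # read timeout
try:
    cli.recv(1)              # no data ever arrives
    timed_out = False
except socket.timeout:
    timed_out = True         # this is the ReadTimeout path, not what I'm reporting

cli.close()
srv.close()
```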
Expected Result
`requests.get('http://google.com:81', timeout=(4, 1))` should take approximately 4 seconds to time out.
Actual Result
It consistently takes about 8.0 seconds to raise `requests.ConnectTimeout`. It always takes twice the requested time, for timeouts ranging from 1 to 100. The exception message clearly says at the end: `Connection to google.com timed out. (connect timeout=4)`, a very distinct message from read timeouts.
Reproduction Steps
```python
import requests, time

# Using a known URL to test connection timeout
def test_timeout(timeout, url='http://google.com:81'):
    start = time.time()
    try:
        requests.get(url, timeout=timeout)
        print("OK!")  # will never reach this...
    except requests.ConnectTimeout:  # any other exception will bubble out
        print('{}: {:.1f}'.format(timeout, time.time() - start))

print("\n1 to 10, simple numeric timeout")
for i in range(1, 11):
    test_timeout(i)

print("\n1 to 10, (x, 1) timeout tuple")
for i in range(1, 11):
    test_timeout((i, 1))

print("\n1 to 10, (x, 10) timeout tuple")
for i in range(1, 11):
    test_timeout((i, 10))

print("\nLarge timeouts")
for i in (20, 30, 50, 100):
    test_timeout((i, 1))
```
Results:
Linux desktop 5.4.0-66-generic #74~18.04.2-Ubuntu SMP Fri Feb 5 11:17:31 UTC 2021 x86_64
Python 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
Requests: 2.25.1
Urllib3: 1.26.3
1 to 10, simple numeric timeout
1: 2.0
2: 4.0
3: 6.0
4: 8.0
5: 10.0
6: 12.0
7: 14.0
8: 16.0
9: 18.0
10: 20.0
1 to 10, (x, 1) timeout tuple
(1, 1): 2.0
(2, 1): 4.0
(3, 1): 6.0
(4, 1): 8.0
(5, 1): 10.0
(6, 1): 12.0
(7, 1): 14.0
(8, 1): 16.0
(9, 1): 18.0
(10, 1): 20.0
1 to 10, (x, 10) timeout tuple
(1, 10): 2.0
(2, 10): 4.0
(3, 10): 6.0
(4, 10): 8.0
(5, 10): 10.0
(6, 10): 12.0
(7, 10): 14.0
(8, 10): 16.0
(9, 10): 18.0
(10, 10): 20.0
Large timeouts
(20, 1): 40.0
(30, 1): 60.0
(50, 1): 100.1
(100, 1): 200.2
System Information
{
"chardet": {
"version": "3.0.4"
},
"cryptography": {
"version": "3.2.1"
},
"idna": {
"version": "2.8"
},
"implementation": {
"name": "CPython",
"version": "3.6.9"
},
"platform": {
"release": "5.4.0-66-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "1010108f",
"version": "17.5.0"
},
"requests": {
"version": "2.25.1"
},
"system_ssl": {
"version": "1010100f"
},
"urllib3": {
"version": "1.26.3"
},
"using_pyopenssl": true
}
It seems there is a single, “hidden” connection retry, performed by either `requests` or `urllib3`, somewhere along the line. It has been reported by other users on other platforms too.
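The doubling is consistent with how the standard connect loop applies the timeout. The sketch below is a toy model (not the actual `urllib3` code, and the addresses are made up): it iterates over every address that name resolution returned (one IPv4 plus one IPv6 here) and gives each attempt the full connect timeout, so two address families double the observed wall time.

```python
import time

def attempt_connect(addr, timeout):
    # Stand-in for a TCP connect that never gets an answer: it burns the
    # whole timeout, then fails.
    time.sleep(timeout)
    raise TimeoutError('timed out connecting to {}'.format(addr))

def create_connection(addresses, timeout):
    # Toy model of the socket.create_connection / urllib3 pattern: try each
    # resolved address in turn, applying the *full* timeout to each attempt.
    last_err = None
    for addr in addresses:
        try:
            return attempt_connect(addr, timeout)
        except TimeoutError as err:
            last_err = err  # fall through to the next address family
    raise last_err

start = time.time()
try:
    # Made-up IPv4 and IPv6 addresses: two families, two full-length attempts.
    create_connection(['142.250.0.1', '2607:f8b0::1'], timeout=0.2)
except TimeoutError:
    pass
elapsed = time.time() - start  # roughly 0.4 s: twice the requested timeout
```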
After more tests, the issue is really the dual IPv4/IPv6 connection attempts. Using the workaround proposed in a Stack Overflow answer to force either IPv4 or IPv6 only, the timeout behaves as expected.
I think I’ve found the culprit: IPv6! It seems requests/urllib3 automatically tries to connect using both IPv4 and IPv6, and that accounts for the doubled time. I’ll do some more tests to properly isolate the problem, as it seems requests tries IPv6 even when it’s not available, raising a `ConnectionError` with `Failed to establish a new connection: [Errno 101] Network is unreachable`, which is undesirable.
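One commonly cited form of the Stack Overflow workaround mentioned above replaces the `allowed_gai_family` hook that urllib3 (1.x) consults before calling `getaddrinfo`, so only one address family is ever tried. This relies on urllib3 internals and may break in other versions:

```python
import socket
import urllib3.util.connection as urllib3_cn

def allowed_gai_family():
    # Force IPv4 only, so only one connect attempt is made per host.
    # Return socket.AF_INET6 instead to force IPv6 only.
    return socket.AF_INET

# Monkey-patch the hook urllib3 uses when resolving addresses.
urllib3_cn.allowed_gai_family = allowed_gai_family
```

With this patch in place, requests made through urllib3 resolve only A records, and the connect timeout fires after the requested time rather than twice it.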