Dead connections never die
Hi,
I'm using plugin 5.2.0 and running some tests. On a fresh install with two servers, say 127.0.0.1:9200 (which works) and 1.2.3.4:9200 (which does not exist), I always get a timeout exception and my site shows latency roughly equal to the configured timeout. Somehow 1.2.3.4 is never marked as dead and the client keeps trying to connect to it. I tried with three IPs (two of them non-existent) and the latency doubles, and so on.
Here is the stack trace I get on 60-70% of my requests when I refresh 10 times in ~30 seconds. For this test I set the timeout to 1 second because I didn't want to wait.
WARNING:elasticsearch:GET http://1.2.3.4:9200/_nodes/_all/http [status:N/A request:1.002s]
Traceback (most recent call last):
  File "/home/src/lib/elasticsearch/connection/http_urllib3.py", line 114, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
  File "/home/src/lib/urllib3/connectionpool.py", line 640, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/home/src/lib/urllib3/util/retry.py", line 238, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/src/lib/urllib3/connectionpool.py", line 595, in urlopen
    chunked=chunked)
  File "/home/src/lib/urllib3/connectionpool.py", line 363, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python2.7/httplib.py", line 1053, in request
    self._send_request(method, url, body, headers)
  File "/usr/local/lib/python2.7/httplib.py", line 1093, in _send_request
    self.endheaders(body)
  File "/usr/local/lib/python2.7/httplib.py", line 1049, in endheaders
    self._send_output(message_body)
  File "/usr/local/lib/python2.7/httplib.py", line 893, in _send_output
    self.send(msg)
  File "/usr/local/lib/python2.7/httplib.py", line 855, in send
    self.connect()
  File "/home/src/lib/urllib3/connection.py", line 167, in connect
    conn = self._new_conn()
  File "/home/src/lib/urllib3/connection.py", line 147, in _new_conn
    (self.host, self.timeout))
ConnectTimeoutError: (<urllib3.connection.HTTPConnection object at 0x7fc74dbb4fd0>, u'Connection to 1.2.3.4 timed out. (connect timeout=1)')
Here is the full code I use, with all the parameters:
elasticsearch_server = ('127.0.0.1:9200', '1.2.3.4:9200')
flask.g.es = elasticsearch.Elasticsearch(
    elasticsearch_server,
    timeout=1,
    retry_on_timeout=True,
    sniff_on_start=True,
    sniff_on_connection_fail=True,
    serializer=JSONSerializerPython2(),
    sniffer_timeout=60,
    sniff_timeout=10,
)
The timeout=1 is only for this test; in production it is 10.
If I remove the non-working IP (i.e. 1.2.3.4:9200) from the list, I get no errors and no latency!
With 2 or 3 servers, if one dies I always get this latency!
Thanks for any help!
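For context, here is a simplified model of how a sniffing client is expected to handle a failing node: the host is marked dead after a connection failure and skipped until a resurrection timeout expires. This is an illustrative sketch, not the library's actual code; `ConnectionPool`, `mark_dead`, and `dead_timeout` are assumed names chosen for the example.

```python
import time

class ConnectionPool:
    """Simplified model of dead-node handling (an assumption about the
    intended behavior, not elasticsearch-py's actual implementation)."""

    def __init__(self, hosts, dead_timeout=60):
        self.alive = list(hosts)
        self.dead = {}  # host -> earliest time it may be retried
        self.dead_timeout = dead_timeout

    def mark_dead(self, host, now=None):
        # Remove the host from rotation and schedule its resurrection.
        now = time.time() if now is None else now
        if host in self.alive:
            self.alive.remove(host)
        self.dead[host] = now + self.dead_timeout

    def get_connection(self, now=None):
        now = time.time() if now is None else now
        # Resurrect hosts whose dead timeout has expired.
        for host, retry_at in list(self.dead.items()):
            if retry_at <= now:
                del self.dead[host]
                self.alive.append(host)
        if not self.alive:
            raise RuntimeError("no live connections")
        # Round-robin over live hosts only: dead ones add no latency.
        host = self.alive.pop(0)
        self.alive.append(host)
        return host
```

With this behavior, once 1.2.3.4:9200 is marked dead, requests go only to the live node until the timeout expires, which is what the reporter expected and did not observe.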
Issue Analytics
- Created 6 years ago
- Comments: 10 (5 by maintainers)
Top GitHub Comments
Use a single instance of the client; the library cannot keep any persistence outside of it. You can use a global client even in multi-threaded environments.
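A minimal sketch of this suggestion: construct the client once at module level and reuse it, instead of creating a new instance per request (a fresh instance starts with an empty dead-connection list, so the bad node is retried on every request). `FakeClient` is a hypothetical stand-in for `elasticsearch.Elasticsearch` so the example runs without a cluster; `get_client` is an assumed helper name.

```python
class FakeClient:
    """Hypothetical stand-in for elasticsearch.Elasticsearch, used only
    so this sketch is runnable without a cluster."""
    instances = 0

    def __init__(self, hosts):
        FakeClient.instances += 1
        self.hosts = hosts

_client = None  # module-level singleton, shared across requests/threads

def get_client():
    """Return the one shared client, creating it on first use."""
    global _client
    if _client is None:
        _client = FakeClient(['127.0.0.1:9200', '1.2.3.4:9200'])
    return _client
```

In the Flask code above, `flask.g.es` would then be assigned `get_client()` (or the module-level client used directly), so the client's internal dead-node state survives between requests.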
So the problem is not in the lib, only in the way I create the client. I'll investigate that.
Thanks a lot for the help.