docker: EOF occurred in violation of protocol
Describe the bug
When running zap2docker-weekly (probably other images are/will be affected as well), we experience the following error:
2022-02-21 08:03:56,810 I/O error: HTTPSConnectionPool(host='example.org', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')))
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 594, in urlopen
    self._prepare_proxy(conn)
  File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 805, in _prepare_proxy
    conn.connect()
  File "/usr/local/lib/python3.8/dist-packages/urllib3/connection.py", line 337, in connect
    self.sock = ssl_wrap_socket(
  File "/usr/local/lib/python3.8/dist-packages/urllib3/util/ssl_.py", line 345, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.8/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)
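The traceback passes through urllib3's `_prepare_proxy`, so the request is being tunnelled through an HTTPS proxy and the remote end is closing the connection mid-handshake. One way to separate a ZAP problem from an environment problem is to attempt a bare TLS handshake from the same machine (ideally both on the host and inside the container). A minimal sketch, assuming nothing about ZAP itself; the `check_tls` helper is hypothetical:

```python
import socket
import ssl


def check_tls(host, port=443, timeout=5.0):
    """Attempt a plain TLS handshake with host:port.

    Returns the negotiated protocol version (e.g. "TLSv1.3") on success,
    or a "handshake failed: ..." message on any connection/TLS error.
    """
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() or "unknown"
    except OSError as exc:  # covers SSLEOFError, DNS failures, timeouts
        return f"handshake failed: {exc}"
```

If this reports the same SSLEOFError, the failure lies in the network path (proxy, TLS interception) rather than in the image; if it succeeds, the image's TLS stack or its proxy configuration is the more likely suspect.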
Steps to reproduce the behavior
Minimal example that triggers this:
docker run --pull always owasp/zap2docker-weekly zap-full-scan.py -t https://example.org
I am able to reproduce this both from a CI environment and from my own desktop. Although our actual CI scans a different URL, the error also triggers on example.org.
Expected behavior
The scan completes without this SSL error.
Software versions
owasp/zap2docker-weekly at the time of writing this is Digest: sha256:c5895a86cf4752e4a030fe854ff09836969776daf00024bc6b71646c19c620ca
Screenshots
No response
Errors from the zap.log file
The zap log from our CI environment doesn’t actually contain the error, so I’ve copy-pasted the entire output when running it manually from a console: console.log
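Because the error can be buried in a long console dump rather than in zap.log itself, a small helper to pull out the relevant lines may be useful when triaging. A sketch under the assumption that the output has been saved to a file; `find_ssl_errors` is a hypothetical helper, not part of ZAP's tooling:

```python
import re
from pathlib import Path

# Markers of the handshake failure seen in this report.
SSL_ERROR_RE = re.compile(r"SSLEOFError|SSLError|EOF occurred in violation of protocol")


def find_ssl_errors(log_path):
    """Return every line of the given log file that mentions the SSL failure."""
    text = Path(log_path).read_text(errors="replace")
    return [line for line in text.splitlines() if SSL_ERROR_RE.search(line)]
```

Running this over the saved console output (or zap.log) quickly shows whether the SSLEOFError appears once at startup or repeatedly during the scan.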
Additional context
Looking at our CI logs, this job last succeeded on Feb 14 (around 3:00 GMT) and has failed since Feb 15 (around 3:00 GMT). It is entirely possible that this will ‘fix’ itself with tomorrow’s run; if that is the case, I’ll report back here and close this ticket.
Would you like to help fix this issue?
- Yes
Issue Analytics
- Created: 2 years ago
- Reactions: 1
- Comments: 13 (11 by maintainers)
Top GitHub Comments
And I can confirm that the weekly image is also good 😃
The live Docker image now works as expected. We had a hiccup regenerating the weekly image, but we think we’ve fixed that and have kicked off the generation again.