TLS CI failures on Linux Python 3.6 and 3.7
TLS is reliably failing on Python 3.6 and 3.7 on Linux. I have a CI testing PR up here: https://github.com/dask/distributed/pull/3587
Failing Travis CI run: https://travis-ci.org/github/dask/distributed/jobs/664007369?utm_medium=notification&utm_source=github_status
cc @mariusvniekerk @jcrist @jacobtomlinson if any of you have a moment
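(Triage note: the comments at the bottom of this issue point at a specific OpenSSL build, so a useful first check is which OpenSSL the failing interpreter is actually linked against. A minimal diagnostic sketch, standard library only; the example output in the comments below is the expected pattern, not something captured from this CI run:)

# Minimal diagnostic: report the OpenSSL build this interpreter links against.
import ssl

print(ssl.OPENSSL_VERSION)       # reads "OpenSSL 1.1.1e ..." if the broken build discussed below is installed
print(ssl.OPENSSL_VERSION_INFO)  # numeric version tuple, handy for programmatic checks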
Tracebacks (long)
_________________________ test_tls_reject_certificate __________________________
addr = 'tls://10.20.0.241:40676', timeout = 2, deserialize = True
connection_args = {'ssl_context': <ssl.SSLContext object at 0x7ff0bfc34ca8>}
async def connect(addr, timeout=None, deserialize=True, connection_args=None):
"""
Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)
and yield a ``Comm`` object. If the connection attempt fails, it is
retried until the *timeout* is expired.
"""
if timeout is None:
timeout = dask.config.get("distributed.comm.timeouts.connect")
timeout = parse_timedelta(timeout, default="seconds")
scheme, loc = parse_address(addr)
backend = registry.get_backend(scheme)
connector = backend.get_connector()
comm = None
start = time()
deadline = start + timeout
error = None
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
raise IOError(msg)
# This starts a thread
while True:
try:
while deadline - time() > 0:
future = connector.connect(
loc, deserialize=deserialize, **(connection_args or {})
)
with ignoring(TimeoutError):
comm = await asyncio.wait_for(
future, timeout=min(deadline - time(), 1)
)
break
if not comm:
> _raise(error)
distributed/comm/core.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = 'in <distributed.comm.tcp.TLSConnector object at 0x7ff0bfd75b00>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)'
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://10.20.0.241:40676' after 2 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bfd75b00>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
During handling of the above exception, another exception occurred:
@pytest.mark.asyncio
async def test_tls_reject_certificate():
cli_ctx = get_client_ssl_context()
serv_ctx = get_server_ssl_context()
# These certs are not signed by our test CA
bad_cert_key = ("tls-self-signed-cert.pem", "tls-self-signed-key.pem")
bad_cli_ctx = get_client_ssl_context(*bad_cert_key)
bad_serv_ctx = get_server_ssl_context(*bad_cert_key)
async def handle_comm(comm):
scheme, loc = parse_address(comm.peer_address)
assert scheme == "tls"
await comm.close()
# Listener refuses a connector not signed by the CA
listener = listen("tls://", handle_comm, connection_args={"ssl_context": serv_ctx})
await listener.start()
with pytest.raises(EnvironmentError) as excinfo:
comm = await connect(
listener.contact_address,
timeout=0.5,
connection_args={"ssl_context": bad_cli_ctx},
)
await comm.write({"x": "foo"}) # TODO: why is this necessary in Tornado 6 ?
# The wrong error is reported on Python 2, see https://github.com/tornadoweb/tornado/pull/2028
if sys.version_info >= (3,) and os.name != "nt":
try:
# See https://serverfault.com/questions/793260/what-does-tlsv1-alert-unknown-ca-mean
assert "unknown ca" in str(excinfo.value)
except AssertionError:
if os.name == "nt":
assert "An existing connection was forcibly closed" in str(
excinfo.value
)
else:
raise
# Sanity check
comm = await connect(
> listener.contact_address, timeout=2, connection_args={"ssl_context": cli_ctx}
)
distributed/comm/tests/test_comms.py:676:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/comm/core.py:227: in connect
_raise(error)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = "Timed out trying to connect to 'tls://10.20.0.241:40676' after 2 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bfd75b00>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)"
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://10.20.0.241:40676' after 2 s: Timed out trying to connect to 'tls://10.20.0.241:40676' after 2 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bfd75b00>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
----------------------------- Captured stderr call -----------------------------
distributed.comm.tcp - WARNING - Listener on 'tls://0.0.0.0:40676': TLS handshake failed with remote 'tls://10.20.0.241:54292': [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
________________________ test_tls_comm_closed_implicit _________________________
addr = 'tls://127.0.0.1:36491', timeout = 5, deserialize = True
connection_args = {'ssl_context': <ssl.SSLContext object at 0x7ff0cc3dcf48>}
async def connect(addr, timeout=None, deserialize=True, connection_args=None):
"""
Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)
and yield a ``Comm`` object. If the connection attempt fails, it is
retried until the *timeout* is expired.
"""
if timeout is None:
timeout = dask.config.get("distributed.comm.timeouts.connect")
timeout = parse_timedelta(timeout, default="seconds")
scheme, loc = parse_address(addr)
backend = registry.get_backend(scheme)
connector = backend.get_connector()
comm = None
start = time()
deadline = start + timeout
error = None
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
raise IOError(msg)
# This starts a thread
while True:
try:
while deadline - time() > 0:
future = connector.connect(
loc, deserialize=deserialize, **(connection_args or {})
)
with ignoring(TimeoutError):
comm = await asyncio.wait_for(
future, timeout=min(deadline - time(), 1)
)
break
if not comm:
> _raise(error)
distributed/comm/core.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = 'in <distributed.comm.tcp.TLSConnector object at 0x7ff0bc2978d0>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)'
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://127.0.0.1:36491' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bc2978d0>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
During handling of the above exception, another exception occurred:
@pytest.mark.asyncio
async def test_tls_comm_closed_implicit():
> await check_comm_closed_implicit("tls://127.0.0.1", **tls_kwargs)
distributed/comm/tests/test_comms.py:728:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/comm/tests/test_comms.py:712: in check_comm_closed_implicit
comm = await connect(contact_addr, connection_args=connect_args)
distributed/comm/core.py:227: in connect
_raise(error)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = "Timed out trying to connect to 'tls://127.0.0.1:36491' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bc2978d0>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)"
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://127.0.0.1:36491' after 5 s: Timed out trying to connect to 'tls://127.0.0.1:36491' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff0bc2978d0>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
________________________ test_tls_comm_closed_explicit _________________________
@pytest.mark.asyncio
async def test_tls_comm_closed_explicit():
> await check_comm_closed_explicit("tls://127.0.0.1", **tls_kwargs)
distributed/comm/tests/test_comms.py:766:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/comm/tests/test_comms.py:745: in check_comm_closed_explicit
await b_read
distributed/comm/tcp.py:188: in read
n_frames = await stream.read_bytes(8)
../../../miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/iostream.py:441: in read_bytes
self._try_inline_read()
../../../miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/iostream.py:913: in _try_inline_read
pos = self._read_to_buffer_loop()
../../../miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/iostream.py:815: in _read_to_buffer_loop
if self._read_to_buffer() == 0:
../../../miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/iostream.py:945: in _read_to_buffer
bytes_read = self.read_from_fd(buf)
../../../miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/iostream.py:1690: in read_from_fd
return self.socket.recv_into(buf)
../../../miniconda/envs/test-environment/lib/python3.6/ssl.py:1012: in recv_into
return self.read(nbytes, buffer)
../../../miniconda/envs/test-environment/lib/python3.6/ssl.py:874: in read
return self._sslobj.read(len, buffer)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ssl.SSLObject object at 0x7ff0afed5278>, len = 65536
buffer = bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x...0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
def read(self, len=1024, buffer=None):
"""Read up to 'len' bytes from the SSL object and return them.
If 'buffer' is provided, read into this buffer and return the number of
bytes read.
"""
if buffer is not None:
> v = self._sslobj.read(len, buffer)
E ssl.SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
../../../miniconda/envs/test-environment/lib/python3.6/ssl.py:631: SSLError
----------------------------- Captured stderr call -----------------------------
tornado.application - ERROR - Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7ff0afedb158>, <Task finished coro=<BaseTCPListener._handle_stream() done, defined at /home/travis/build/dask/distributed/distributed/comm/tcp.py:435> exception=TypeError("object NoneType can't be used in 'await' expression",)>)
Traceback (most recent call last):
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/tcpserver.py", line 297, in <lambda>
lambda f: f.result())
File "/home/travis/build/dask/distributed/distributed/comm/tcp.py", line 444, in _handle_stream
await self.comm_handler(comm)
TypeError: object NoneType can't be used in 'await' expression
___________________________ test_require_encryption ____________________________
addr = 'tls://10.20.0.241:43690', timeout = 5, deserialize = True
connection_args = {'require_encryption': True, 'ssl_context': <ssl.SSLContext object at 0x7ff0bc0dbee8>}
async def connect(addr, timeout=None, deserialize=True, connection_args=None):
"""
Connect to the given address (a URI such as ``tcp://127.0.0.1:1234``)
and yield a ``Comm`` object. If the connection attempt fails, it is
retried until the *timeout* is expired.
"""
if timeout is None:
timeout = dask.config.get("distributed.comm.timeouts.connect")
timeout = parse_timedelta(timeout, default="seconds")
scheme, loc = parse_address(addr)
backend = registry.get_backend(scheme)
connector = backend.get_connector()
comm = None
start = time()
deadline = start + timeout
error = None
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
raise IOError(msg)
# This starts a thread
while True:
try:
while deadline - time() > 0:
future = connector.connect(
loc, deserialize=deserialize, **(connection_args or {})
)
with ignoring(TimeoutError):
comm = await asyncio.wait_for(
future, timeout=min(deadline - time(), 1)
)
break
if not comm:
> _raise(error)
distributed/comm/core.py:218:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = 'in <distributed.comm.tcp.TLSConnector object at 0x7ff06869b438>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)'
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://10.20.0.241:43690' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff06869b438>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
During handling of the above exception, another exception occurred:
@pytest.mark.asyncio
async def test_require_encryption():
"""
Functional test for "require_encryption" setting.
"""
async def handle_comm(comm):
comm.abort()
c = {
"distributed.comm.tls.ca-file": ca_file,
"distributed.comm.tls.scheduler.key": key1,
"distributed.comm.tls.scheduler.cert": cert1,
"distributed.comm.tls.worker.cert": keycert1,
}
with dask.config.set(c):
sec = Security()
c["distributed.comm.require-encryption"] = True
with dask.config.set(c):
sec2 = Security()
for listen_addr in ["inproc://", "tls://"]:
async with listen(
listen_addr, handle_comm, connection_args=sec.get_listen_args("scheduler")
) as listener:
comm = await connect(
listener.contact_address,
> connection_args=sec2.get_connection_args("worker"),
)
distributed/tests/test_security.py:338:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
distributed/comm/core.py:227: in connect
_raise(error)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
error = "Timed out trying to connect to 'tls://10.20.0.241:43690' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff06869b438>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)"
def _raise(error):
error = error or "connect() didn't finish in time"
msg = "Timed out trying to connect to %r after %s s: %s" % (
addr,
timeout,
error,
)
> raise IOError(msg)
E OSError: Timed out trying to connect to 'tls://10.20.0.241:43690' after 5 s: Timed out trying to connect to 'tls://10.20.0.241:43690' after 5 s: in <distributed.comm.tcp.TLSConnector object at 0x7ff06869b438>: SSLError: [SSL: KRB5_S_TKT_NYV] unexpected eof while reading (_ssl.c:2309)
distributed/comm/core.py:203: OSError
=============================== warnings summary ===============================
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pytest_asyncio/plugin.py:39: 166 tests with warnings
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pytest_asyncio/plugin.py:39: PytestDeprecationWarning: direct construction of Function has been deprecated, please use Function.from_parent
item = pytest.Function(name, parent=collector)
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pytest_asyncio/plugin.py:45: 166 tests with warnings
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pytest_asyncio/plugin.py:45: PytestDeprecationWarning: direct construction of Function has been deprecated, please use Function.from_parent
item = pytest.Function(name, parent=collector) # To reload keywords.
distributed/protocol/tests/test_pandas.py:2
/home/travis/build/dask/distributed/distributed/protocol/tests/test_pandas.py:2: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
distributed/tests/test_variable.py:102
distributed/tests/test_variable.py:102: PytestCollectionWarning: yield tests were removed in pytest 4.0 - test_timeout_sync will be ignored
def test_timeout_sync(client):
distributed/dashboard/tests/test_worker_bokeh.py::test_simple
distributed/dashboard/tests/test_worker_bokeh.py::test_basic
distributed/dashboard/tests/test_worker_bokeh.py::test_basic
BokehDeprecationWarning: 'WidgetBox' is deprecated and will be removed in Bokeh 3.0, use 'bokeh.models.Column' instead
distributed/deploy/tests/test_spec_cluster.py::test_logs
distributed/deploy/tests/test_spec_cluster.py::test_logs
distributed/deploy/tests/test_spec_cluster.py::test_logs
distributed/deploy/tests/test_spec_cluster.py::test_logs
distributed/deploy/tests/test_spec_cluster.py::test_logs
/home/travis/build/dask/distributed/distributed/deploy/cluster.py:197: DeprecationWarning: logs is deprecated, use get_logs instead
warnings.warn("logs is deprecated, use get_logs instead", DeprecationWarning)
distributed/diagnostics/tests/test_task_stream.py::test_get_task_stream_save
distributed/tests/test_client.py::test_profile_bokeh
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/bokeh/io/saving.py:125: UserWarning: save() called but no resources were supplied and output_file(...) was never called, defaulting to resources.CDN
warn("save() called but no resources were supplied and output_file(...) was never called, defaulting to resources.CDN")
distributed/tests/test_actor.py::test_client_actions[True]
distributed/tests/test_actor.py::test_client_actions[False]
/home/travis/build/dask/distributed/distributed/tests/test_actor.py:64: RuntimeWarning: coroutine 'get_actor_attribute_from_worker' was never awaited
assert hasattr(counter, "n")
distributed/tests/test_actor.py::test_Actor
/home/travis/build/dask/distributed/distributed/tests/test_actor.py:123: RuntimeWarning: coroutine 'get_actor_attribute_from_worker' was never awaited
assert hasattr(counter, "n")
distributed/tests/test_client.py::test_client_async_before_loop_starts
/home/travis/build/dask/distributed/distributed/tests/test_client.py:5062: RuntimeWarning: coroutine 'Client._close' was never awaited
client.close()
distributed/tests/test_client.py::test_compute_retries
/home/travis/build/dask/distributed/distributed/tests/test_client.py:249: RuntimeWarning: coroutine 'Client._start' was never awaited
gc.collect()
distributed/tests/test_client.py::test_multiple_scatter
/home/travis/build/dask/distributed/distributed/tests/test_client.py:5467: RuntimeWarning: coroutine 'Client._scatter' was never awaited
x = c.scatter(1, direct=True)
distributed/tests/test_collections.py::test_dataframes
/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pandas/core/indexes/multi.py:684: RuntimeWarning: coroutine 'Future._result' was never awaited
ensure_index(lev, copy=copy)._shallow_copy() for lev in levels
distributed/tests/test_core.py::test_connection_pool_detects_remote_close
/home/travis/build/dask/distributed/distributed/tests/test_core.py:784: RuntimeWarning: coroutine 'ConnectionPool.close' was never awaited
p.close()
distributed/tests/test_steal.py::test_steal_host_restrictions
/home/travis/build/dask/distributed/distributed/worker.py:466: UserWarning: the ncores= parameter has moved to nthreads=
warnings.warn("the ncores= parameter has moved to nthreads=")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
========================== slowest 20 test durations ===========================
55.32s call distributed/tests/test_stress.py::test_close_connections
24.28s call distributed/tests/test_scheduler.py::test_file_descriptors
18.34s call distributed/tests/test_stress.py::test_stress_creation_and_deletion
14.89s call distributed/tests/test_diskutils.py::test_workspace_concurrency_intense
14.53s call distributed/tests/test_variable.py::test_race
12.21s call distributed/tests/test_client.py::test_threadsafe_get
12.04s call distributed/tests/test_queues.py::test_race
11.26s call distributed/tests/test_client.py::test_threadsafe_compute
9.89s call distributed/tests/test_utils_test.py::test_bare_cluster
9.85s call distributed/tests/test_stress.py::test_stress_gc[inc-1000]
9.58s call distributed/tests/test_client.py::test_normalize_collection_with_released_futures
9.33s call distributed/tests/test_stress.py::test_cancel_stress_sync
9.26s call distributed/tests/test_stress.py::test_stress_scatter_death
8.79s call distributed/tests/test_stress.py::test_cancel_stress
7.85s call distributed/tests/test_failed_workers.py::test_submit_after_failed_worker_sync
7.76s call distributed/tests/test_client.py::test_secede_balances
7.75s call distributed/tests/test_failed_workers.py::test_gather_after_failed_worker
7.31s call distributed/tests/test_failed_workers.py::test_broken_worker_during_computation
7.14s call distributed/tests/test_client.py::test_reconnect
6.81s call distributed/tests/test_client_executor.py::test_shutdown
=========================== short test summary info ============================
SKIPPED [1] /home/travis/build/dask/distributed/distributed/comm/tests/test_ucx.py:4: could not import 'ucp': No module named 'ucp'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/comm/tests/test_ucx_config.py:16: could not import 'ucp': No module named 'ucp'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_arrow.py:4: could not import 'pyarrow': No module named 'pyarrow'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_cupy.py:6: could not import 'cupy': No module named 'cupy'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_keras.py:5: could not import 'keras': No module named 'tensorflow'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_numba.py:5: could not import 'numba.cuda': No module named 'numba'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_rmm.py:5: could not import 'numba.cuda': No module named 'numba'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_sparse.py:5: could not import 'sparse': No module named 'sparse'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_torch.py:5: could not import 'torch': No module named 'torch'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/comm/tests/test_comms.py:521: could not import 'ucp': No module named 'ucp'
SKIPPED [1] distributed/comm/tests/test_comms.py:570: ipv6 required
SKIPPED [1] distributed/comm/tests/test_comms.py:595: ipv6 required
SKIPPED [1] distributed/comm/tests/test_comms.py:617: ipv6 required
SKIPPED [1] distributed/deploy/tests/test_local.py:1017: asyncio.all_tasks not implemented
SKIPPED [4] /home/travis/build/dask/distributed/distributed/protocol/tests/test_collection_cuda.py:11: could not import 'cupy': No module named 'cupy'
SKIPPED [4] /home/travis/build/dask/distributed/distributed/protocol/tests/test_collection_cuda.py:43: could not import 'cudf': No module named 'cudf'
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_numpy.py:124: could not import 'numpy.core.test_rational': No module named 'numpy.core.test_rational'
SKIPPED [1] distributed/protocol/tests/test_numpy.py:215: unconditional skip
SKIPPED [1] /home/travis/build/dask/distributed/distributed/protocol/tests/test_numpy.py:244: could not import 'blosc': No module named 'blosc'
SKIPPED [1] distributed/tests/test_client.py:1801: because
SKIPPED [1] distributed/tests/test_client.py:1881: condition: True
SKIPPED [2] distributed/tests/test_client.py:3583: TODO: intermittent failures
SKIPPED [7] distributed/utils_test.py:884: unconditional skip
SKIPPED [1] distributed/utils_test.py:884: Use fast random selection now
SKIPPED [1] distributed/utils_test.py:884: Now prefer first-in-first-out
SKIPPED [1] distributed/utils_test.py:884: known intermittent failure
SKIPPED [1] /home/travis/build/dask/distributed/distributed/tests/test_collections.py:177: could not import 'sparse': No module named 'sparse'
SKIPPED [1] distributed/tests/test_core.py:136: asynccontextmanager not avaiable before Python 3.7
SKIPPED [1] distributed/tests/test_ipython.py:25: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:46: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:63: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:83: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:111: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:138: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/tests/test_ipython.py:158: IPython kernel broken with Tornado 5
SKIPPED [1] distributed/utils_test.py:884: getting same client from main thread
SKIPPED [2] distributed/utils_test.py:884: <Skipped instance>
SKIPPED [1] distributed/utils_test.py:884: Should protect resource keys from optimization
SKIPPED [1] distributed/tests/test_scheduler.py:1734: asyncio.all_tasks not implemented
SKIPPED [1] /home/travis/build/dask/distributed/distributed/tests/test_utils.py:305: could not import 'pyarrow': No module named 'pyarrow'
SKIPPED [1] distributed/tests/test_utils_test.py:54: This hangs on travis
SKIPPED [2] /home/travis/build/dask/distributed/distributed/tests/test_worker.py:1529: could not import 'ucp': No module named 'ucp'
SKIPPED [1] distributed/utils_test.py:884: don't yet support uploading pyc files
SKIPPED [1] distributed/utils_test.py:884: Other tests leak memory, so process-level checks trigger immediately
SKIPPED [1] distributed/utils_test.py:884: Our logic here is faulty
SKIPPED [1] /home/travis/build/dask/distributed/distributed/tests/test_worker.py:1590: could not import 'pynvml': No module named 'pynvml'
= 4 failed, 1500 passed, 61 skipped, 5 deselected, 11 xfailed, 8 xpassed, 353 warnings in 1679.01s (0:27:59) =
Task was destroyed but it is pending!
task: <Task pending coro=<Worker.heartbeat() done, defined at /home/travis/build/dask/distributed/distributed/worker.py:882> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7ff06aa20348>()]> cb=[IOLoop.add_future.<locals>.<lambda>() at /home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/ioloop.py:719]>
Exception ignored in: <coroutine object gen_cluster.<locals>._.<locals>.test_func.<locals>.coro at 0x7ff057600888>
Traceback (most recent call last):
File "/home/travis/build/dask/distributed/distributed/utils_test.py", line 927, in coro
result = await future
File "/home/travis/build/dask/distributed/distributed/utils_test.py", line 840, in end_cluster
await asyncio.gather(*[end_worker(w) for w in workers])
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/tasks.py", line 602, in gather
fut = ensure_future(arg, loop=loop)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/tasks.py", line 519, in ensure_future
task = loop.create_task(coro_or_future)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/base_events.py", line 306, in create_task
self._check_closed()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/base_events.py", line 381, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'end_cluster.<locals>.end_worker' was never awaited
Task was destroyed but it is pending!
task: <Task pending coro=<gen_cluster.<locals>._.<locals>.test_func.<locals>.coro() done, defined at /home/travis/build/dask/distributed/distributed/utils_test.py:889> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7ff06f26b2b8>()]> cb=[IOLoop.add_future.<locals>.<lambda>() at /home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/ioloop.py:719]>
Task exception was never retrieved
future: <Task finished coro=<test_gpu_metrics() done, defined at /home/travis/build/dask/distributed/distributed/tests/test_worker.py:1588> exception=could not import 'pynvml': No module named 'pynvml'>
Traceback (most recent call last):
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/runner.py", line 244, in from_call
result = func()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/runner.py", line 217, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/runner.py", line 135, in pytest_runtest_call
item.runtest()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/python.py", line 1479, in runtest
self.ihook.pytest_pyfunc_call(pyfuncitem=self)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/python.py", line 184, in pytest_pyfunc_call
result = testfunction(**testargs)
File "/home/travis/build/dask/distributed/distributed/utils_test.py", line 957, in test_func
coro, timeout=timeout * 2 if timeout else timeout
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/ioloop.py", line 571, in run_sync
self.start()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/base_events.py", line 442, in run_forever
self._run_once()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/base_events.py", line 1462, in _run_once
handle._run()
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/home/travis/build/dask/distributed/distributed/tests/test_worker.py", line 1590, in test_gpu_metrics
pytest.importorskip("pynvml")
File "/home/travis/miniconda/envs/test-environment/lib/python3.6/site-packages/_pytest/outcomes.py", line 214, in importorskip
raise Skipped(reason, allow_module_level=True) from None
Skipped: could not import 'pynvml': No module named 'pynvml'
The command "if [[ $TESTS == true ]]; then source continuous_integration/travis/run_tests.sh ; fi" exited with 1.
0.01s$ if [[ $LINT == true ]]; then python -m pip install flake8 ; flake8 distributed ; fi
The command "if [[ $LINT == true ]]; then python -m pip install flake8 ; flake8 distributed ; fi" exited with 0.
0.01s$ if [[ $LINT == true ]]; then python -m pip install black ; black distributed --check; fi
The command "if [[ $LINT == true ]]; then python -m pip install black ; black distributed --check; fi" exited with 0.
Done. Your build exited with 1.
Top GitHub Comments
openssl !=1.1.1e would be preferred so openssl 1.1.1f can be installed when it is released, assuming it does not have the same issue.

Yes, openssl 1.1.1f reverted the change in 1.1.1e that was causing the failure. I created #3668 to remove the pin now that 1.1.1f is available via conda.
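For completeness, a small pre-flight guard along these lines (hypothetical, not part of dask/distributed or of #3668) would make this failure mode obvious up front instead of surfacing as connect() timeouts in the TLS tests:

# Hypothetical pre-flight check, not taken from the repository: fail fast if the
# interpreter is linked against the OpenSSL 1.1.1e build discussed above.
import ssl

def linked_against_broken_openssl():
    # ssl.OPENSSL_VERSION is a string such as "OpenSSL 1.1.1e  <release date>"
    return ssl.OPENSSL_VERSION.split()[:2] == ["OpenSSL", "1.1.1e"]

if __name__ == "__main__":
    if linked_against_broken_openssl():
        raise SystemExit(
            "Linked against OpenSSL 1.1.1e; the TLS test suite is expected to fail. "
            "Install a different build (this is what the openssl !=1.1.1e pin enforced "
            "until #3668 removed it in favour of 1.1.1f)."
        )
    print("OpenSSL build looks OK:", ssl.OPENSSL_VERSION)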