Test Errors and Failures while trying to Package for openSUSE
The issue:
The Qtile package for openSUSE has not been updated in a while and is currently failing some tests (though that is still better than Fedora, where the package no longer exists, I guess…), so I decided to actually try to update it and fix the failing tests. I had to use the newer ffibuild script from 8f70047ea5c24f484e527ea237e5b72c1ba5ff47, as openSUSE does not yet package pywayland or pywlroots, and I also had to apply the patch from #3518 to fix some of the build failures. (I also decided to add #3544, since it did not seem to affect the tests and looks like a nice feature.)
With that preface out of the way, here is the main issue: even after applying the above patches, there are still a number of test failures. The full log file is at https://build.opensuse.org/package/live_build_log/home:Pi-Cla:branches:X11:windowmanagers/qtile/openSUSE_Tumbleweed/x86_64, but the same errors are duplicated across many tests, so I will only list each one once. You can also see which patches I am currently using at https://build.opensuse.org/package/show/home:Pi-Cla:branches:X11:windowmanagers/qtile.
[ 2221s] _____________ ERROR at setup of test_basic[1-x11-FakeScreenConfig] _____________
[ 2221s]
[ 2221s] request = <SubRequest 'manager' for <Function test_basic[1-x11-FakeScreenConfig]>>
[ 2221s] manager_nospawn = <test.helpers.TestManager object at 0x7f92c8fd6170>
[ 2221s]
[ 2221s] @pytest.fixture(scope="function")
[ 2221s] def manager(request, manager_nospawn):
[ 2221s] config = getattr(request, "param", BareConfig)
[ 2221s]
[ 2221s] > manager_nospawn.start(config)
[ 2221s]
[ 2221s] test/conftest.py:139:
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s]
[ 2221s] self = <test.helpers.TestManager object at 0x7f92c8fd6170>
[ 2221s] config_class = <class 'test.test_fakescreen.FakeScreenConfig'>, no_spawn = False
[ 2221s] state = None
[ 2221s]
[ 2221s] def start(self, config_class, no_spawn=False, state=None):
[ 2221s] rpipe, wpipe = multiprocessing.Pipe()
[ 2221s]
[ 2221s] def run_qtile():
[ 2221s] try:
[ 2221s] os.environ.pop("DISPLAY", None)
[ 2221s]                     os.environ.pop("WAYLAND_DISPLAY", None)
[ 2221s] kore = self.backend.create()
[ 2221s] os.environ.update(self.backend.env)
[ 2221s] logger = init_log(self.log_level, log_path=None, log_color=False)
[ 2221s] if hasattr(self, "log_queue"):
[ 2221s] logger.addHandler(logging.handlers.QueueHandler(self.log_queue))
[ 2221s] Qtile(
[ 2221s] kore,
[ 2221s] config_class(),
[ 2221s] socket_path=self.sockfile,
[ 2221s] no_spawn=no_spawn,
[ 2221s] state=state,
[ 2221s] ).loop()
[ 2221s] except Exception:
[ 2221s] wpipe.send(traceback.format_exc())
[ 2221s]
[ 2221s] self.proc = multiprocessing.Process(target=run_qtile)
[ 2221s] self.proc.start()
[ 2221s]
[ 2221s] # First, wait for socket to appear
[ 2221s] if can_connect_qtile(self.sockfile, ok=lambda: not rpipe.poll()):
[ 2221s] ipc_client = ipc.Client(self.sockfile)
[ 2221s] ipc_command = command.interface.IPCCommandInterface(ipc_client)
[ 2221s] self.c = command.client.InteractiveCommandClient(ipc_command)
[ 2221s] self.backend.configure(self)
[ 2221s] return
[ 2221s] if rpipe.poll(0.1):
[ 2221s] error = rpipe.recv()
[ 2221s] raise AssertionError("Error launching qtile, traceback:\n%s" % error)
[ 2221s] > raise AssertionError("Error launching qtile")
[ 2221s] E AssertionError: Error launching qtile
[ 2221s]
[ 2221s] test/helpers.py:203: AssertionError
[ 2221s] ---------------------------- Captured stdout setup -----------------------------
[ 2221s] 2022-07-07 07:53:56,726 INFO libqtile bar.py:_configure():L315 The following widgets were renamed in qtile.widgets_map: sep_1, importerrorwidget_1, importerrorwidget_2, sep_2 To bind commands, rename the widget or use lazy.widget[new_name].
[ 2221s] 2022-07-07 07:53:56,839 INFO libqtile bar.py:_configure():L315 The following widgets were renamed in qtile.widgets_map: groupbox_1, windowname_1, clock_1 To bind commands, rename the widget or use lazy.widget[new_name].
[ 2221s] 2022-07-07 07:53:56,843 INFO libqtile bar.py:_configure():L315 The following widgets were renamed in qtile.widgets_map: groupbox_2, windowname_2, clock_2 To bind commands, rename the widget or use lazy.widget[new_name].
[ 2221s] 2022-07-07 07:53:56,847 INFO libqtile bar.py:_configure():L315 The following widgets were renamed in qtile.widgets_map: groupbox_3, windowname_3, clock_3 To bind commands, rename the widget or use lazy.widget[new_name].
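Looking at that first failure: TestManager.start() only receives a traceback over the pipe if the child process dies with a Python exception. If qtile instead crashes hard (for example a segfault inside a C extension such as cairocffi or xcffib), nothing is ever written to the pipe, so the parent can only raise the generic "Error launching qtile". Below is a minimal, simplified sketch of that pattern (my own illustration, not qtile's actual test/helpers.py) to show why the interesting error can be missing from the log:

# Minimal sketch (my illustration, not qtile's test/helpers.py): the child
# forwards Python tracebacks through a pipe, but a hard crash (signal) never
# reaches the except block, so the parent sees an empty pipe.
import multiprocessing
import traceback

def start_child(target):
    rpipe, wpipe = multiprocessing.Pipe()

    def run():
        try:
            target()
        except Exception:
            # Python-level failures are reported back to the parent...
            wpipe.send(traceback.format_exc())
        # ...but a SIGSEGV kills the process before anything is sent.

    proc = multiprocessing.Process(target=run)
    proc.start()
    proc.join(timeout=5)

    if rpipe.poll(0.1):
        raise AssertionError("child traceback:\n%s" % rpipe.recv())
    if proc.exitcode is not None and proc.exitcode < 0:
        # A negative exit code means the child was killed by a signal,
        # e.g. -11 for SIGSEGV.
        raise AssertionError("child killed by signal %d" % -proc.exitcode)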
[ 2221s] _____________________ test_cycle_layouts[1-x11-AllLayouts] _____________________
[ 2221s]
[ 2221s] self = <StreamReader eof transport=<_SelectorSocketTransport closed fd=56>>
[ 2221s] n = -1
[ 2221s]
[ 2221s] async def read(self, n=-1):
[ 2221s] """Read up to `n` bytes from the stream.
[ 2221s]
[ 2221s] If n is not provided, or set to -1, read until EOF and return all read
[ 2221s] bytes. If the EOF was received and the internal buffer is empty, return
[ 2221s] an empty bytes object.
[ 2221s]
[ 2221s] If n is zero, return empty bytes object immediately.
[ 2221s]
[ 2221s] If n is positive, this function try to read `n` bytes, and may return
[ 2221s] less or equal bytes than requested, but at least one byte. If EOF was
[ 2221s] received before any byte is read, this function returns empty byte
[ 2221s] object.
[ 2221s]
[ 2221s] Returned value is not limited with limit, configured at stream
[ 2221s] creation.
[ 2221s]
[ 2221s] If stream was paused, this function will automatically resume it if
[ 2221s] needed.
[ 2221s] """
[ 2221s]
[ 2221s] if self._exception is not None:
[ 2221s] raise self._exception
[ 2221s]
[ 2221s] if n == 0:
[ 2221s] return b''
[ 2221s]
[ 2221s] if n < 0:
[ 2221s] # This used to just loop creating a new waiter hoping to
[ 2221s] # collect everything in self._buffer, but that would
[ 2221s] # deadlock if the subprocess sends more than self.limit
[ 2221s] # bytes. So just call self.read(self._limit) until EOF.
[ 2221s] blocks = []
[ 2221s] while True:
[ 2221s] > block = await self.read(self._limit)
[ 2221s]
[ 2221s] /usr/lib64/python3.10/asyncio/streams.py:662:
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s]
[ 2221s] self = <StreamReader eof transport=<_SelectorSocketTransport closed fd=56>>
[ 2221s] n = 65536
[ 2221s]
[ 2221s] async def read(self, n=-1):
[ 2221s] """Read up to `n` bytes from the stream.
[ 2221s]
[ 2221s] If n is not provided, or set to -1, read until EOF and return all read
[ 2221s] bytes. If the EOF was received and the internal buffer is empty, return
[ 2221s] an empty bytes object.
[ 2221s]
[ 2221s] If n is zero, return empty bytes object immediately.
[ 2221s]
[ 2221s] If n is positive, this function try to read `n` bytes, and may return
[ 2221s] less or equal bytes than requested, but at least one byte. If EOF was
[ 2221s] received before any byte is read, this function returns empty byte
[ 2221s] object.
[ 2221s]
[ 2221s] Returned value is not limited with limit, configured at stream
[ 2221s] creation.
[ 2221s]
[ 2221s] If stream was paused, this function will automatically resume it if
[ 2221s] needed.
[ 2221s] """
[ 2221s]
[ 2221s] if self._exception is not None:
[ 2221s] raise self._exception
[ 2221s]
[ 2221s] if n == 0:
[ 2221s] return b''
[ 2221s]
[ 2221s] if n < 0:
[ 2221s] # This used to just loop creating a new waiter hoping to
[ 2221s] # collect everything in self._buffer, but that would
[ 2221s] # deadlock if the subprocess sends more than self.limit
[ 2221s] # bytes. So just call self.read(self._limit) until EOF.
[ 2221s] blocks = []
[ 2221s] while True:
[ 2221s] block = await self.read(self._limit)
[ 2221s] if not block:
[ 2221s] break
[ 2221s] blocks.append(block)
[ 2221s] return b''.join(blocks)
[ 2221s]
[ 2221s] if not self._buffer and not self._eof:
[ 2221s] > await self._wait_for_data('read')
[ 2221s]
[ 2221s] /usr/lib64/python3.10/asyncio/streams.py:669:
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s]
[ 2221s] self = <StreamReader eof transport=<_SelectorSocketTransport closed fd=56>>
[ 2221s] func_name = 'read'
[ 2221s]
[ 2221s] async def _wait_for_data(self, func_name):
[ 2221s] """Wait until feed_data() or feed_eof() is called.
[ 2221s]
[ 2221s] If stream was paused, automatically resume it.
[ 2221s] """
[ 2221s] # StreamReader uses a future to link the protocol feed_data() method
[ 2221s] # to a read coroutine. Running two read coroutines at the same time
[ 2221s] # would have an unexpected behaviour. It would not possible to know
[ 2221s] # which coroutine would get the next data.
[ 2221s] if self._waiter is not None:
[ 2221s] raise RuntimeError(
[ 2221s] f'{func_name}() called while another coroutine is '
[ 2221s] f'already waiting for incoming data')
[ 2221s]
[ 2221s] assert not self._eof, '_wait_for_data after EOF'
[ 2221s]
[ 2221s] # Waiting for data while paused will make deadlock, so prevent it.
[ 2221s] # This is essential for readexactly(n) for case when n > self._limit.
[ 2221s] if self._paused:
[ 2221s] self._paused = False
[ 2221s] self._transport.resume_reading()
[ 2221s]
[ 2221s] self._waiter = self._loop.create_future()
[ 2221s] try:
[ 2221s] > await self._waiter
[ 2221s] E asyncio.exceptions.CancelledError
[ 2221s]
[ 2221s] /usr/lib64/python3.10/asyncio/streams.py:502: CancelledError
[ 2221s]
[ 2221s] During handling of the above exception, another exception occurred:
[ 2221s]
[ 2221s] fut = <Task cancelled name='Task-851273' coro=<StreamReader.read() done, defined at /usr/lib64/python3.10/asyncio/streams.py:628>>
[ 2221s] timeout = 10
[ 2221s]
[ 2221s] async def wait_for(fut, timeout):
[ 2221s] """Wait for the single Future or coroutine to complete, with timeout.
[ 2221s]
[ 2221s] Coroutine will be wrapped in Task.
[ 2221s]
[ 2221s] Returns result of the Future or coroutine. When a timeout occurs,
[ 2221s] it cancels the task and raises TimeoutError. To avoid the task
[ 2221s] cancellation, wrap it in shield().
[ 2221s]
[ 2221s] If the wait is cancelled, the task is also cancelled.
[ 2221s]
[ 2221s] This function is a coroutine.
[ 2221s] """
[ 2221s] loop = events.get_running_loop()
[ 2221s]
[ 2221s] if timeout is None:
[ 2221s] return await fut
[ 2221s]
[ 2221s] if timeout <= 0:
[ 2221s] fut = ensure_future(fut, loop=loop)
[ 2221s]
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s]
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] try:
[ 2221s] return fut.result()
[ 2221s] except exceptions.CancelledError as exc:
[ 2221s] raise exceptions.TimeoutError() from exc
[ 2221s]
[ 2221s] waiter = loop.create_future()
[ 2221s] timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
[ 2221s] cb = functools.partial(_release_waiter, waiter)
[ 2221s]
[ 2221s] fut = ensure_future(fut, loop=loop)
[ 2221s] fut.add_done_callback(cb)
[ 2221s]
[ 2221s] try:
[ 2221s] # wait until the future completes or the timeout
[ 2221s] try:
[ 2221s] await waiter
[ 2221s] except exceptions.CancelledError:
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s] else:
[ 2221s] fut.remove_done_callback(cb)
[ 2221s] # We must ensure that the task is not running
[ 2221s] # after wait_for() returns.
[ 2221s] # See https://bugs.python.org/issue32751
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] raise
[ 2221s]
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s] else:
[ 2221s] fut.remove_done_callback(cb)
[ 2221s] # We must ensure that the task is not running
[ 2221s] # after wait_for() returns.
[ 2221s] # See https://bugs.python.org/issue32751
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] # In case task cancellation failed with some
[ 2221s] # exception, we should re-raise it
[ 2221s] # See https://bugs.python.org/issue40607
[ 2221s] try:
[ 2221s] > return fut.result()
[ 2221s] E asyncio.exceptions.CancelledError
[ 2221s]
[ 2221s] /usr/lib64/python3.10/asyncio/tasks.py:456: CancelledError
[ 2221s]
[ 2221s] The above exception was the direct cause of the following exception:
[ 2221s]
[ 2221s] self = <libqtile.ipc.Client object at 0x7f92c8b275b0>
[ 2221s] msg = ([], 'next_layout', (), {})
[ 2221s]
[ 2221s] async def async_send(self, msg: Any) -> Any:
[ 2221s] """Send the message to the server
[ 2221s]
[ 2221s] Connect to the server, then pack and send the message to the server,
[ 2221s] then wait for and return the response from the server.
[ 2221s] """
[ 2221s] try:
[ 2221s] reader, writer = await asyncio.wait_for(
[ 2221s] asyncio.open_unix_connection(path=self.socket_path), timeout=3
[ 2221s] )
[ 2221s] except (ConnectionRefusedError, FileNotFoundError):
[ 2221s] raise IPCError("Could not open {}".format(self.socket_path))
[ 2221s]
[ 2221s] try:
[ 2221s] send_data = _IPC.pack(msg, is_json=self.is_json)
[ 2221s] writer.write(send_data)
[ 2221s] writer.write_eof()
[ 2221s]
[ 2221s] > read_data = await asyncio.wait_for(reader.read(), timeout=10)
[ 2221s]
[ 2221s] libqtile/ipc.py:191:
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s]
[ 2221s] fut = <Task cancelled name='Task-851273' coro=<StreamReader.read() done, defined at /usr/lib64/python3.10/asyncio/streams.py:628>>
[ 2221s] timeout = 10
[ 2221s]
[ 2221s] async def wait_for(fut, timeout):
[ 2221s] """Wait for the single Future or coroutine to complete, with timeout.
[ 2221s]
[ 2221s] Coroutine will be wrapped in Task.
[ 2221s]
[ 2221s] Returns result of the Future or coroutine. When a timeout occurs,
[ 2221s] it cancels the task and raises TimeoutError. To avoid the task
[ 2221s] cancellation, wrap it in shield().
[ 2221s]
[ 2221s] If the wait is cancelled, the task is also cancelled.
[ 2221s]
[ 2221s] This function is a coroutine.
[ 2221s] """
[ 2221s] loop = events.get_running_loop()
[ 2221s]
[ 2221s] if timeout is None:
[ 2221s] return await fut
[ 2221s]
[ 2221s] if timeout <= 0:
[ 2221s] fut = ensure_future(fut, loop=loop)
[ 2221s]
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s]
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] try:
[ 2221s] return fut.result()
[ 2221s] except exceptions.CancelledError as exc:
[ 2221s] raise exceptions.TimeoutError() from exc
[ 2221s]
[ 2221s] waiter = loop.create_future()
[ 2221s] timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
[ 2221s] cb = functools.partial(_release_waiter, waiter)
[ 2221s]
[ 2221s] fut = ensure_future(fut, loop=loop)
[ 2221s] fut.add_done_callback(cb)
[ 2221s]
[ 2221s] try:
[ 2221s] # wait until the future completes or the timeout
[ 2221s] try:
[ 2221s] await waiter
[ 2221s] except exceptions.CancelledError:
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s] else:
[ 2221s] fut.remove_done_callback(cb)
[ 2221s] # We must ensure that the task is not running
[ 2221s] # after wait_for() returns.
[ 2221s] # See https://bugs.python.org/issue32751
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] raise
[ 2221s]
[ 2221s] if fut.done():
[ 2221s] return fut.result()
[ 2221s] else:
[ 2221s] fut.remove_done_callback(cb)
[ 2221s] # We must ensure that the task is not running
[ 2221s] # after wait_for() returns.
[ 2221s] # See https://bugs.python.org/issue32751
[ 2221s] await _cancel_and_wait(fut, loop=loop)
[ 2221s] # In case task cancellation failed with some
[ 2221s] # exception, we should re-raise it
[ 2221s] # See https://bugs.python.org/issue40607
[ 2221s] try:
[ 2221s] return fut.result()
[ 2221s] except exceptions.CancelledError as exc:
[ 2221s] > raise exceptions.TimeoutError() from exc
[ 2221s] E asyncio.exceptions.TimeoutError
[ 2221s]
[ 2221s] /usr/lib64/python3.10/asyncio/tasks.py:458: TimeoutError
[ 2221s]
[ 2221s] During handling of the above exception, another exception occurred:
[ 2221s]
[ 2221s] manager = <test.helpers.TestManager object at 0x7f92c80be140>
[ 2221s]
[ 2221s] @all_layouts_config
[ 2221s] def test_cycle_layouts(manager):
[ 2221s] manager.test_window("one")
[ 2221s] manager.test_window("two")
[ 2221s] manager.test_window("three")
[ 2221s] manager.test_window("four")
[ 2221s] manager.c.group.focus_by_name("three")
[ 2221s] assert_focused(manager, "three")
[ 2221s]
[ 2221s] # Cycling all the layouts must keep the current window focused
[ 2221s] initial_layout_name = manager.c.layout.info()["name"]
[ 2221s] while True:
[ 2221s] > manager.c.next_layout()
[ 2221s]
[ 2221s] test/layouts/test_common.py:511:
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s] libqtile/command/client.py:199: in __call__
[ 2221s] return self._command.execute(self._current_node, args, kwargs)
[ 2221s] libqtile/command/interface.py:232: in execute
[ 2221s] status, result = self._client.send((call.parent.selectors, call.name, args, kwargs))
[ 2221s] libqtile/ipc.py:171: in send
[ 2221s] return asyncio.run(self.async_send(msg))
[ 2221s] /usr/lib64/python3.10/asyncio/runners.py:44: in run
[ 2221s] return loop.run_until_complete(main)
[ 2221s] /usr/lib64/python3.10/asyncio/base_events.py:646: in run_until_complete
[ 2221s] return future.result()
[ 2221s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 2221s]
[ 2221s] self = <libqtile.ipc.Client object at 0x7f92c8b275b0>
[ 2221s] msg = ([], 'next_layout', (), {})
[ 2221s]
[ 2221s] async def async_send(self, msg: Any) -> Any:
[ 2221s] """Send the message to the server
[ 2221s]
[ 2221s] Connect to the server, then pack and send the message to the server,
[ 2221s] then wait for and return the response from the server.
[ 2221s] """
[ 2221s] try:
[ 2221s] reader, writer = await asyncio.wait_for(
[ 2221s] asyncio.open_unix_connection(path=self.socket_path), timeout=3
[ 2221s] )
[ 2221s] except (ConnectionRefusedError, FileNotFoundError):
[ 2221s] raise IPCError("Could not open {}".format(self.socket_path))
[ 2221s]
[ 2221s] try:
[ 2221s] send_data = _IPC.pack(msg, is_json=self.is_json)
[ 2221s] writer.write(send_data)
[ 2221s] writer.write_eof()
[ 2221s]
[ 2221s] read_data = await asyncio.wait_for(reader.read(), timeout=10)
[ 2221s] except asyncio.TimeoutError:
[ 2221s] > raise IPCError("Server not responding")
[ 2221s] E libqtile.ipc.IPCError: Server not responding
[ 2221s]
[ 2221s] libqtile/ipc.py:193: IPCError
[ 2221s] --------------------------- Captured stderr teardown ---------------------------
[ 2221s] Killing qtile forcefully
[ 2221s] qtile exited with exitcode: -9
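For what it's worth, the "qtile exited with exitcode: -9" line at teardown is just the test harness force-killing a qtile instance that stopped answering IPC (SIGKILL), so the IPCError("Server not responding") is a symptom rather than the root cause. A tiny helper like the following (my own snippet, not part of qtile) makes those negative exit codes easier to read; a -11 here would instead point at a real segfault inside qtile:

# Translate negative process exit codes into signal names.
import signal

def describe_exitcode(code: int) -> str:
    if code < 0:
        return "killed by %s" % signal.Signals(-code).name
    return "exited with status %d" % code

print(describe_exitcode(-9))   # killed by SIGKILL (the harness's forced kill)
print(describe_exitcode(-11))  # killed by SIGSEGV (a real crash)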
Required:
- I have searched past issues to see if this bug has already been reported.
I found a branch with 0.17 and am now testing it with cairocffi https://build.opensuse.org/package/show/home:jayvdb:cairocffi/cairo
Bummer. I guess the failing tests are probably the reason we have to disable in the future if this keeps up. It seems most of the failing tests come from cairocffi and xcb, and yet we haven't found anything conclusive about what is causing the tests to segfault or fail. One thing I am suspicious of, from the segfaults, is
DISPLAY=:$SERVERNUM XAUTHORITY=$AUTHFILE "$@" 2>&1
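If the suspicion is that the Xvfb wrapper hands the tests a broken DISPLAY, one quick way to rule that in or out would be a pre-test check along these lines (my own suggestion, assuming xcffib is available, which qtile's X11 backend already requires):

# Hypothetical pre-test sanity check: confirm the DISPLAY exported by the
# wrapper script actually accepts X connections before pytest starts.
import os
import xcffib

def display_is_alive(display: str | None = None) -> bool:
    display = display or os.environ.get("DISPLAY", "")
    try:
        conn = xcffib.connect(display=display)
    except (xcffib.ConnectionException, OSError):
        return False
    conn.disconnect()
    return True

if __name__ == "__main__":
    print("DISPLAY ok:", display_is_alive())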