
Discussion about how subtest failures should be displayed

One more interesting test case. This file:

import unittest

class T(unittest.TestCase):
    def test_fail(self):
        with self.subTest():
            self.assertEqual(1, 2)

No passing subtests, just one failure; yet it still shows PASSED. I would expect FAILED, with the bottom line showing 1 failed. Maybe pytest itself is calling pytest_runtest_logreport() at the end and causing an extra test to be counted.

pytest -v test_sub.py
=============================== test session starts ===============================
platform win32 -- Python 3.7.1, pytest-4.4.0, ...
plugins: subtests-0.2.0
collected 1 item

test_sub.py::T::test_fail FAILED                                         [100%]
test_sub.py::T::test_fail PASSED                                         [100%]

==================================== FAILURES =====================================
_____________________________ T.test_fail (<subtest>) _____________________________

self = <test_sub.T testMethod=test_fail>

    def test_fail(self):
        with self.subTest():
>          self.assertEqual(1, 2)
E          AssertionError: 1 != 2

test_sub.py:6: AssertionError
======================= 1 failed, 1 passed in 0.05 seconds ========================

_Originally posted by @okken in https://github.com/pytest-dev/pytest-subtests/issues/7#issuecomment-479983466_

cc @jurisbu @bskinn
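
One way to check the hypothesis above, that an extra report is being counted, is to log every report pytest emits from a local conftest.py. This is a minimal debugging sketch, not part of pytest-subtests; pytest_runtest_logreport is a standard pytest hook, and nodeid, when, and outcome are standard TestReport attributes:

# conftest.py -- print each report pytest emits, to see whether an extra
# "call" report with outcome "passed" arrives after the subtest failure.
def pytest_runtest_logreport(report):
    print(f"{report.nodeid} when={report.when} outcome={report.outcome}")

Running pytest -s test_sub.py with this in place shows exactly which reports reach the terminal reporter for test_fail.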

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
nicoddemus commented, Apr 5, 2019

(I’m short on time so I will read your post more carefully later @bskinn, thanks!)

> This relates to a broader pain I’ve been feeling, where AFAIK pytest has no mechanism for selectively marking test functions as xfail (“@pytest.mark.xfail_if(…)”, so to speak).

Just wanted to mention that xfail supports a condition argument for exactly that purpose. 😉
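
For reference, a minimal sketch of that condition argument (the test itself is invented for illustration; the xfail(condition, reason=...) form is standard pytest):

import os.path
import sys

import pytest

# Expected to fail on Windows, where os.path.sep is "\\";
# on other platforms the assertion holds and the test passes.
@pytest.mark.xfail(sys.platform == "win32", reason="separator is not '/' on Windows")
def test_path_separator():
    assert os.path.sep == "/"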

0 reactions
bskinn commented, Apr 5, 2019

🤦‍♂️ Thank you. 😃
