
pytest tests not marked as passed or failed if the test file name contains uppercase letters

See original GitHub issue

Hello,

I’ve recently encountered a small bug in the “Tests” tab. I am not 100% sure whether the bug is in VS Code itself or in the Python extension. According to the answer on the VS Code repo, it is caused by the extension, which is why I’m reposting it here.

Environment data

  • VS Code version: 1.34.0
  • Extension version (available under the Extensions sidebar): 2019.6.24221
  • OS and version: Windows_NT x64 6.1.7601
  • Python version (& distribution if applicable, e.g. Anaconda): 3.6.8 64 bit
  • Type of virtual environment used: conda
  • Relevant/affected Python packages and their versions: user interface of the test tab, pytest
  • Jedi or Language Server? “python.jediEnabled” = True

My settings.json file is:

{
    "python.testing.pytestArgs": [
        "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.nosetestsEnabled": false,
    "python.testing.pytestEnabled": true
}

Bug description:

When a test file name contains an uppercase letter, it is not displayed properly in the Tests tab. After running, instead of being marked as passed or failed, it is displayed with a grey question mark.

Steps to Reproduce:

  1. Create an empty project (note: use a conda environment where pytest is installed)
  2. Create a “tests” folder
     2.1. Create a test_working.py file:
import pytest
def test_this_one_works():
    assert 1==1

     2.2. Create a test_NotWorking.py file with the same code:

import pytest
def test_this_one_works():
    assert 1==1

In the Test tab of VS Code, run Discover Tests. If everything ran properly, you should see the following tree:

  • (?) tests
    • (?) test_notworking.py (with lower case!!!)
      • (?) test_this_one_does_not_work
    • (?) test_working.py
      • (?) test_this_one_works

In the Test tab of VS Code, run Run All Tests.
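The file setup above can also be scripted for a quick command-line check, independent of the VS Code UI. This is a sketch (the paths and the `python -m pytest` invocation are assumptions; it requires pytest to be installed in the active environment):

```python
import pathlib
import subprocess
import sys

# Body shared by both test files, as in the steps above.
TEST_BODY = "def test_this_one_works():\n    assert 1 == 1\n"

# Create the "tests" folder with one lowercase and one mixed-case file name.
tests = pathlib.Path("tests")
tests.mkdir(exist_ok=True)
(tests / "test_working.py").write_text(TEST_BODY)
(tests / "test_NotWorking.py").write_text(TEST_BODY)

# Run pytest directly, bypassing the VS Code UI; both tests pass either way,
# which confirms the bug is in the UI's display, not in pytest itself.
result = subprocess.run(
    [sys.executable, "-m", "pytest", "tests"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```

Running this from an empty project directory should report both tests as passed, matching the logs further below.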

Actual & expected behaviour

  • (green tick, OK) tests
    • (?) test_notworking.py (with lower case!!!) <- BUG. Expected behaviour: should have a green tick and the name in upper case
      • (?) test_this_one_does_not_work <- BUG. Expected behaviour: should have a green tick and the name in upper case
    • (green tick, OK) test_working.py
      • (green tick, OK) test_this_one_works


Logs

The tests seem to run regardless of what is displayed (the bug only concerns the user interface): indeed, in the status bar at the bottom (where my Python version and environment are shown), I see a “two tests passed” icon. When I run pytest tests on the command line, everything is fine.

Python Test log

python C:\Users\F400411\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\testing_tools\run_adapter.py discover pytest -- -s --cache-clear tests
============================= test session starts =============================
platform win32 -- Python 3.6.8, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: c:\Users\F400411\Documents\1_Datascience\3_MISSIONS\bug_test_vscode
collected 2 items

tests\test_NotWorking.py .                                               [ 50%]
tests\test_working.py .                                                  [100%]

- generated xml file: C:\Users\F400411\AppData\Local\Temp\tmp-5180zo59mydLbZ4a.xml -
========================== 2 passed in 0.03 seconds ===========================
python C:\Users\F400411\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\testing_tools\run_adapter.py discover pytest -- -s --cache-clear tests
============================= test session starts =============================
platform win32 -- Python 3.6.8, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: c:\Users\F400411\Documents\1_Datascience\3_MISSIONS\bug_test_vscode
collected 2 items

tests\test_NotWorking.py .                                               [ 50%]
tests\test_working.py .                                                  [100%]

- generated xml file: C:\Users\F400411\AppData\Local\Temp\tmp-5180SM9GGMumKvVy.xml -
========================== 2 passed in 0.02 seconds ===========================

Conclusion:

This bug is not critical at all since it only concerns the UI, but in my older projects I can’t see the tests in the UI (while it worked perfectly fine a few weeks ago). I definitely do not want to rename the test scripts (their names mirror the script names of my project).

Any hope this would be easy to fix?

Many thanks for reading, and for any further help!

Galileo

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Reactions: 2
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

1 reaction
ericsnowcurrently commented, Aug 13, 2019

This should be resolved now due to #6781 and #6877. Those changes are already in the insiders build of the extension and will be part of the September release.
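One way this kind of UI bug can arise is a file-path case-normalization mismatch. The sketch below is purely illustrative and is not the extension’s actual code: it assumes a stored lookup key was lowercased (as os.path.normcase does on Windows) while pytest reported the path in its original mixed case, so a naive comparison fails:

```python
import ntpath  # Windows path semantics, regardless of the host OS

# Path exactly as pytest reports it on Windows (mixed case).
reported = r"tests\test_NotWorking.py"

# A hypothetical key stored after normalization: ntpath.normcase() lowercases.
stored_key = ntpath.normcase(reported)

# A naive, case-sensitive comparison fails for mixed-case file names...
assert reported != stored_key

# ...whereas normalizing both sides before comparing matches reliably.
assert ntpath.normcase(reported) == stored_key
print("case-sensitivity mismatch reproduced")
```

Note that test_working.py, being already lowercase, is unaffected by either comparison, which matches the symptoms in the report.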

1 reaction
karrtikr commented, Jul 25, 2019

Thanks for reporting this @Galileo-Galilei

