Port test/test_functional_tensor.py to pytest
Currently, most tests in `test/test_functional_tensor.py` rely on `unittest.TestCase`. Now that we support pytest, we want to remove the use of the `unittest` module.
Instructions
There are many tests in this file, so I bundled them in multiple related groups below. If you’re interested in working on this issue, please comment below with “I’m working on <group X>, <group Y>, etc…” so that others don’t pick the same tests as you do. Feel free to pick as many groups as you wish, but please don’t submit more than 2 groups per PR in order to keep the reviews manageable. Before picking a group, make sure it wasn’t picked by another contributor first. Thanks!!
How to port a test to pytest
Porting a test from `unittest` to pytest is usually fairly straightforward. For a typical example, see https://github.com/pytorch/vision/pull/3907/files (a consolidated before/after sketch also follows the list below):
- take the test method out of the `Tester(unittest.TestCase)` class and just declare it as a function
- Replace `@unittest.skipIf` with `@pytest.mark.skipif(cond, reason=...)`
- remove any use of `self.assertXYZ`:
  - Typically `assertEqual(a, b)` can be replaced by `assert a == b` when `a` and `b` are pure python objects (scalars, tuples, lists); otherwise we can rely on `assert_equal`, which is already used in the file.
  - `self.assertRaises` should be replaced with the `pytest.raises(SomeException, match=...)` context manager, as done in https://github.com/pytorch/vision/pull/3907/files. Same for warnings with `pytest.warns`.
  - `self.assertTrue` should be replaced with a plain `assert`.
- When a function uses for loops to test multiple parameter values, one should use `pytest.mark.parametrize` instead, as done e.g. in https://github.com/pytorch/vision/pull/3907/files.
- It may make sense to keep related tests within a single class. For example here, the tests in group A could be grouped into a `TestToPILImage` class, the tests in group N could be in `TestPad`, etc. Not all groups need a dedicated class though; it’s on a case-by-case basis.
- Important: a lot of these tests rely on `self.device` because they need to be run on both CPU and GPU. For these, use `cpu_and_gpu()` from `common_utils` instead, e.g.:
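A minimal sketch of the pattern (`test_blah` is a placeholder name, not a test from the file):

```python
import pytest
import torch

from common_utils import cpu_and_gpu


@pytest.mark.parametrize('device', cpu_and_gpu())
def test_blah(device):
    # Runs once per device: 'cpu', and 'cuda' when a GPU is available
    img = torch.rand(3, 10, 10, device=device)
    ...
```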
and you can just replace `self.device` by `device` in the test.
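To make the bullets above concrete, here is a hedged before/after sketch of a typical port; the test body and values are illustrative, not copied from the file:

```python
import unittest

import torch
import torchvision.transforms.functional as F


# Before: unittest style, with a loop over dtypes and self.assertXYZ calls
class Tester(unittest.TestCase):
    def test_vflip(self):
        for dt in (torch.float32, torch.float64):
            img = torch.rand(3, 4, 4, dtype=dt)
            out = F.vflip(img)
            self.assertEqual(out.shape, img.shape)
            self.assertTrue(out.dtype == dt)
```

```python
import pytest
import torch
import torchvision.transforms.functional as F


# After: pytest style, a plain function, parametrize instead of a loop,
# and bare asserts instead of self.assertXYZ
@pytest.mark.parametrize('dt', [torch.float32, torch.float64])
def test_vflip(dt):
    img = torch.rand(3, 4, 4, dtype=dt)
    out = F.vflip(img)
    assert out.shape == img.shape
    assert out.dtype == dt
```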
CC @saswatpp as promised!
- Group A https://github.com/pytorch/vision/pull/3977
  - `test_assert_image_tensor` – parametrize over `(func, arg)`
  - `test_vflip`
  - `test_hflip`
  - `test_crop` – parametrize over `test_configs`

(EDIT: oops, I initially named both groups below “Group B”. Renamed into B1 and B2.)
- Group B1
  - `test_hsv2rgb`
  - `test_rgb2hsv`
  - `test_rgb_to_grayscale` – parametrize over `num_output_channels`
  - `test_center_crop`
  - `test_five_crop`
  - `test_ten_crop`
- Group B2 https://github.com/pytorch/vision/pull/3974
  - `test_pad` – parametrize over `dt`, pad configs
  - `test_resized_crop`
- Group C https://github.com/pytorch/vision/pull/3974. This one might be a bit tricky: it will require some parametrization, and we should probably split each sub-method like `_test_affine_all_ops`, `_test_affine_rect_rotations`, etc. into its own test function (or method, if you decide to bundle them into a `TestAffine` class, which would probably make sense; see the sketch after this list).
  - `test_affine`
- Group D https://github.com/pytorch/vision/pull/3983. Parametrize over `data`, `dt`, `a`, `e`, `f` (and maybe `c` if there’s no issue). We should split this test into multiple ones; for example, the `self.assertWarnsRegex` check should be in a separate test (also shown in the sketch after this list).
  - `test_rotate`
- Group E https://github.com/pytorch/vision/pull/3977. Parametrize over `tensor`, `dt`, `ksize`, `sigma`.
  - `test_gaussian_blur`
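For groups C and D, a hedged sketch of what the class grouping and the split-out warning test could look like; the class name, parameter values, and the warning trigger are assumptions for illustration, not the final API:

```python
import pytest
import torch
import torchvision.transforms.functional as F

from common_utils import cpu_and_gpu


class TestAffine:
    # Each former _test_affine_* sub-method becomes its own parametrized method
    @pytest.mark.parametrize('device', cpu_and_gpu())
    @pytest.mark.parametrize('angle', [-90, 0, 45, 90])
    def test_rect_rotations(self, device, angle):
        img = torch.rand(3, 26, 26, device=device)
        out = F.affine(img, angle=angle, translate=[0, 0], scale=1.0, shear=[0.0, 0.0])
        assert out.shape == img.shape


# The self.assertWarnsRegex check from test_rotate becomes its own small test
def test_rotate_warning():
    img = torch.rand(3, 26, 26)
    # Assumed trigger: the deprecated `resample` alias for `interpolation`
    with pytest.warns(UserWarning, match="deprecated"):
        F.rotate(img, angle=45.0, resample=2)
```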
Looks like we’re done with this file, thank you so much everyone for your help!! I’ll close this issue; for those who are interested, I opened a similar issue in https://github.com/pytorch/vision/issues/3987.
We can probably parametrize over a new `image_size` parameter, like so:
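A hedged sketch of the two options; `test_configs`, the concrete sizes, and the crop parameters are assumptions for illustration. First way, bundling the image size into a pre-built config list:

```python
import pytest
import torch
import torchvision.transforms.functional as F

# Each config carries its own image size alongside the crop parameters
test_configs = [
    {'image_size': (26, 34), 'top': 2, 'left': 3, 'height': 4, 'width': 5},
    {'image_size': (26, 26), 'top': 2, 'left': 3, 'height': 4, 'width': 5},
]


@pytest.mark.parametrize('config', test_configs)
def test_crop(config):
    img = torch.rand(3, *config['image_size'])
    out = F.crop(img, config['top'], config['left'], config['height'], config['width'])
    assert out.shape[-2:] == (config['height'], config['width'])
```

Second way, parametrizing over `image_size` independently with stacked decorators, so every size is combined with every crop config:

```python
import pytest
import torch
import torchvision.transforms.functional as F


@pytest.mark.parametrize('image_size', [(26, 34), (26, 26)])
@pytest.mark.parametrize('top, left, height, width', [(2, 3, 4, 5), (0, 0, 10, 10)])
def test_crop(image_size, top, left, height, width):
    img = torch.rand(3, *image_size)
    out = F.crop(img, top, left, height, width)
    assert out.shape[-2:] == (height, width)
```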
I prefer the second way, unless `test_configs` can/should be re-used somewhere else.