# Port test/test_transforms_tensor.py to pytest
Currently, most tests in test/test_transforms_tensor.py rely on `unittest.TestCase`. Now that we support pytest, we want to remove the use of the `unittest` module.
## Instructions
There are many tests in this file, so I bundled them in multiple related groups below. If you're interested in working on this issue, please comment below with "I'm working on <group X>, <group Y>, etc." so that others don't pick the same tests as you do. Feel free to pick as many groups as you wish, but please don't submit more than 2 groups per PR in order to keep the reviews manageable. Before picking a group, make sure it wasn't picked by another contributor first. Thanks!!
### How to port a test to pytest
Porting a test from unittest to pytest is usually fairly straightforward. For a typical example, see https://github.com/pytorch/vision/pull/3907/files:
- Take the test method out of the `Tester(unittest.TestCase)` class and just declare it as a function (a before/after sketch follows this list).
- Replace `@unittest.skipIf` with `pytest.mark.skipif(cond, reason=...)`.
- Remove any use of `self.assertXYZ`:
  - Typically `assertEqual(a, b)` can be replaced by `assert a == b` when `a` and `b` are pure Python objects (scalars, tuples, lists); otherwise we can rely on `assert_equal`, which is already used in the file.
  - `self.assertRaises` should be replaced with the `pytest.raises(Exp, match=...)` context manager, as done in https://github.com/pytorch/vision/pull/3907/files. Same for warnings with `pytest.warns`.
  - `self.assertTrue` should be replaced with a plain `assert`.
- When a function uses for loops to test multiple parameter values, use `pytest.mark.parametrize` instead, as done e.g. in https://github.com/pytorch/vision/pull/3907/files.
- It may make sense to keep related tests within a single class. Not all groups need a dedicated class though; it's on a case-by-case basis.
- Important: a lot of these tests rely on `self.device` because they need to be run on both CPU and GPU. For these, use `cpu_and_gpu()` from `common_utils` instead, e.g.:
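  ```python
  # 'test_blah' is just a placeholder name for whatever test is being ported.
  @pytest.mark.parametrize('device', cpu_and_gpu())
  def test_blah(device):
      ...
  ```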
  and you can just replace `self.device` by `device` in the test.
- The tests that only need a CPU should use the `cpu_only` decorator from `common_utils`, and the tests that need a CUDA device should use the `needs_cuda` decorator (unless they already use `cpu_and_gpu()`).
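To make the points above concrete, here is a minimal before/after sketch. The test body is a toy illustration, not an actual test from the file; a `@unittest.skipIf` decorator would become `@pytest.mark.skipif(cond, reason=...)` in the same way:

```python
import pytest
import torch
from torchvision import transforms as T

# Before (unittest style, as a method of the Tester(unittest.TestCase) class):
#
#     def test_random_horizontal_flip(self):
#         for p in (0., 1.):
#             tensor = torch.rand(3, 10, 10)
#             out = T.RandomHorizontalFlip(p=p)(tensor)
#             self.assertEqual(out.shape, tensor.shape)

# After (pytest style): a plain module-level function, with the for loop
# turned into a parametrization and self.assertEqual into a plain assert.
@pytest.mark.parametrize('p', [0., 1.])
def test_random_horizontal_flip(p):
    tensor = torch.rand(3, 10, 10)
    out = T.RandomHorizontalFlip(p=p)(tensor)
    assert out.shape == tensor.shape
```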
- group A (https://github.com/pytorch/vision/pull/3996) – these ones could be bundled into a single `test_random()` function and be parametrized over `func`, `method`, `fn_kwargs` and `match_kwargs` (a sketch follows the group list):
  - `test_random_horizontal_flip`
  - `test_random_vertical_flip`
  - `test_random_invert`
  - `test_random_posterize`
  - `test_random_solarize`
  - `test_random_adjust_sharpness`
  - `test_random_autocontrast`
  - `test_random_equalize`
- group B (https://github.com/pytorch/vision/pull/4008)
  - `test_color_jitter` – make 5 new test functions and parametrize 4 of them (over brightness, contrast, saturation, hue)
  - `test_pad` – parametrize over m and mul
  - `test_crop` – can be split and parametrized over different things, or not
  - `test_center_crop` – same
- group C (https://github.com/pytorch/vision/pull/4010)
  - `test_five_crop` and `test_ten_crop` – these 2 can probably be merged into `_test_op_list_output` (which should be renamed to e.g. `test_x_crop`) by parametrizing over `func`, `method`, `out_length` and `fn_kwargs`
  - `test_resize` – parametrize over all for loop variables
  - `test_resized_crop` – same
- group D (https://github.com/pytorch/vision/pull/4000)
  - `test_random_affine` – split this into different parametrized functions (one for shear, one for scale, etc.)
  - `test_random_rotate` – parametrize over all loop variables
  - `test_random_perspective` – parametrize over all loop variables
  - `test_to_grayscale` – parametrize over `meth_kwargs`
- group E (https://github.com/pytorch/vision/pull/4023)
  - `test_normalize`
  - `test_linear_transformation`
  - `test_compose`
  - `test_random_apply`
  - `test_gaussian_blur` – parametrize over `meth_kwargs`
- group F (https://github.com/pytorch/vision/pull/3996)
  - `test_random_erasing` – maybe split this one into different functions, and parametrize over `test_configs`
  - `test_convert_image_dtype` – parametrize over all loop variables and convert the `continue` and the assertRaises into a `pytest.xfail` (a sketch follows the group list)
  - `test_autoaugment` – parametrize over policy and fill
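For group A, here is a rough sketch of what the bundled `test_random()` could look like. It is a simplified stand-in, assuming a "functional op vs. transform with p=1" comparison; the real tests in the file would use `assert_equal` from `common_utils` and cover all eight ops, so treat the parameter list and body as illustrative only:

```python
import pytest
import torch
from torchvision import transforms as T
from torchvision.transforms import functional as F

# Abbreviated parameter list; the real test would cover all eight group A ops.
@pytest.mark.parametrize('func, method, fn_kwargs, match_kwargs', [
    (F.hflip, T.RandomHorizontalFlip, {}, {}),
    (F.vflip, T.RandomVerticalFlip, {}, {}),
    (F.invert, T.RandomInvert, {}, {}),
    (F.posterize, T.RandomPosterize, {'bits': 4}, {}),
])
def test_random(func, method, fn_kwargs, match_kwargs):
    tensor = (torch.rand(3, 10, 10) * 255).to(torch.uint8)
    expected = func(tensor, **fn_kwargs)
    # With p=1 the random transform always fires, so the output is deterministic.
    out = method(p=1, **fn_kwargs)(tensor)
    # match_kwargs could carry per-op comparison options (e.g. tolerances).
    torch.testing.assert_close(out, expected, **match_kwargs)
```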
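And for `test_convert_image_dtype` in group F, the `continue` / assertRaises cases can become an imperative `pytest.xfail()` inside the parametrized test. A minimal sketch, with a deliberately simplified dtype matrix (the real test covers many more combinations):

```python
import pytest
import torch
from torchvision.transforms import functional as F

@pytest.mark.parametrize('in_dtype', [torch.float32, torch.float64])
@pytest.mark.parametrize('out_dtype', [torch.uint8, torch.int64])
def test_convert_image_dtype(in_dtype, out_dtype):
    if out_dtype == torch.int64:
        # Float images can't be safely converted to int64, so the op raises.
        # Where the old loop would `continue` or assertRaises, we now xfail.
        pytest.xfail(f'{in_dtype} to {out_dtype} is not supported')
    image = torch.rand(3, 10, 10, dtype=in_dtype)
    out = F.convert_image_dtype(image, dtype=out_dtype)
    assert out.dtype == out_dtype
```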
## Comments

- I would like to work on D and E.
- I'll start working on group B and C.