
[RFC] Improve Transforms testing codebase


🚀 Feature

The current way of testing the transformations is not satisfactory due to random configurations, duplicated code and an unclear structure (test_functional_tensor.py, test_transforms.py, test_transforms_tensor.py). This feature request proposes to rewrite the transforms tests in order to tackle these problems.

Motivation

A structured approach to the transforms tests provides:

  • better coverage
  • simpler way to add tests for new transforms
  • extensibility to other transforms which could cover more input data types (images, masks, bboxes, etc.)

Pitch

What do we need:

  • Tests for functional operations (torchvision.transforms.functional)
    • code coverage checks for incorrect input
    • representative and deterministic operation configs to cover documented API
    • all possible input types:
      • PIL, accimage?, torch.Tensor, batch of tensors
      • tensor’s device: CPU/CUDA
      • tensor dtype: integrals and floats + half on CUDA
    • ensure that the same works for the torchscripted transform
    • check result correctness vs a stored or precomputed reference
    • consistency check for results: torchscripted, tensor and PIL (see the sketch after this list)
  • Tests for Transforms (torchvision.transforms)
    • code coverage checks for incorrect input
    • representative and deterministic transform configs to cover documented API
    • tests for generated random parameters
    • ensure that the same works for the torchscripted transform
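
For the consistency item above, here is a minimal sketch of what such a check could look like, using resize as an example. The ConsistencyTester class is hypothetical, and the constant input, the tolerance and the exact assertions are placeholders for discussion, not a proposed final form:

import unittest

import torch
import torchvision.transforms.functional as F


class ConsistencyTester(unittest.TestCase):

    def test_resize_consistency(self):
        # a constant image makes every interpolation mode produce the same
        # output, which gives a trivially deterministic reference
        tensor = torch.full((3, 12, 16), 128, dtype=torch.uint8)
        pil_img = F.to_pil_image(tensor)
        scripted_resize = torch.jit.script(F.resize)

        out_tensor = F.resize(tensor, size=[24, 32])
        out_scripted = scripted_resize(tensor, size=[24, 32])
        out_pil = F.pil_to_tensor(F.resize(pil_img, size=[24, 32]))

        # eager vs torchscripted result must match exactly
        self.assertTrue(torch.equal(out_tensor, out_scripted))
        # tensor vs PIL: allow a small tolerance for interpolation differences
        self.assertTrue(torch.allclose(out_tensor.float(), out_pil.float(), atol=1.0))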

How to do that:

Limitations:

  • pytest.mark.parametrize cannot be used due to certain internal reasons.

1) Inspiration from “Simplify and organize test_ops” PR

The common part of testing a transformation can be defined in a base class, and derived classes can configure the input type, etc. Example from the referenced PR:


import unittest


class RoIOpTester:
    # defines the functional tests to execute
    # on cpu, on cuda, other options, etc.
    pass


class RoIPoolTester(RoIOpTester, unittest.TestCase):

    def fn(self, *args, **kwargs):
        ...  # function to test

    def get_script_fn(self, *args, **kwargs):
        ...  # scripted function

    def expected_fn(self, *args, **kwargs):
        ...  # reference function
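
Applied to transforms, the same pattern could look roughly like the following. This is only an illustrative sketch: the TransformTester base class, its device/dtype attributes and the hflip reference are assumptions, not an agreed design:

import unittest

import torch
import torchvision.transforms.functional as F


class TransformTester:
    # common driver: loops over devices/dtypes and calls the hooks
    # (fn / get_script_fn / expected_fn) provided by the derived class
    devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])
    dtypes = [torch.uint8, torch.float32]

    def test_forward(self):
        for device in self.devices:
            for dtype in self.dtypes:
                img = torch.randint(0, 256, (3, 12, 16)).to(device=device, dtype=dtype)
                self.assertTrue(torch.equal(self.fn(img), self.expected_fn(img)))

    def test_scripted(self):
        for device in self.devices:
            img = torch.rand(3, 12, 16, device=device)
            self.assertTrue(torch.equal(self.fn(img), self.get_script_fn()(img)))


class HorizontalFlipTester(TransformTester, unittest.TestCase):

    def fn(self, img):
        return F.hflip(img)

    def get_script_fn(self):
        return torch.jit.script(F.hflip)

    def expected_fn(self, img):
        # flipping the last (width) dimension is an exact reference for hflip
        return img.flip(-1)


if __name__ == "__main__":
    unittest.main()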

2) Use torch.testing._internal

from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import dtypes, dtypesIfCUDA, instantiate_device_type_tests
from torch.testing import floating_types, floating_types_and_half, integral_types


class Tester(TestCase):

    @dtypes(*(floating_types() + integral_types()))
    @dtypesIfCUDA(*(floating_types_and_half() + integral_types()))
    def test_resize(self, device, dtype):
        img = self.create_image(h=12, w=16, device=device, dtype=dtype)
        ...


instantiate_device_type_tests(Tester, globals())

if __name__ == '__main__':
    run_tests()

this gives

TesterCPU::test_resize_cpu_float32 PASSED [ 14%]
TesterCPU::test_resize_cpu_float64 PASSED [ 28%]
TesterCPU::test_resize_cpu_int16 PASSED   [ 42%]
TesterCPU::test_resize_cpu_int32 PASSED   [ 57%]
TesterCPU::test_resize_cpu_int64 PASSED   [ 71%]
TesterCPU::test_resize_cpu_int8 PASSED    [ 85%]
TesterCPU::test_resize_cpu_uint8 PASSED   [100%]

Problems:

  • dtypes works well for torch.Tensor, but there is no simple way to add PIL as a dtype

3) parameterized package

The package looks promising and could potentially work around the pytest limitation (to be confirmed by FB).

import unittest

from torch.testing import floating_types, integral_types
from parameterized import parameterized


class Tester(unittest.TestCase):

    @parameterized.expand(
        [("cuda", dt) for dt in floating_types() + integral_types()] +
        [("cpu", dt) for dt in floating_types() + integral_types()]
    )
    def test_resize(self, device, dtype):
        pass


if __name__ == "__main__":
    unittest.main()

this gives

TestMathUnitTest::test_resize_00_cuda PASSED  [  7%]
TestMathUnitTest::test_resize_01_cuda PASSED  [ 14%]
TestMathUnitTest::test_resize_02_cuda PASSED  [ 21%]
TestMathUnitTest::test_resize_03_cuda PASSED  [ 28%]
TestMathUnitTest::test_resize_04_cuda PASSED  [ 35%]
TestMathUnitTest::test_resize_05_cuda PASSED  [ 42%]
TestMathUnitTest::test_resize_06_cuda PASSED  [ 50%]
TestMathUnitTest::test_resize_07_cpu PASSED   [ 57%]
TestMathUnitTest::test_resize_08_cpu PASSED   [ 64%]
TestMathUnitTest::test_resize_09_cpu PASSED   [ 71%]
TestMathUnitTest::test_resize_10_cpu PASSED   [ 78%]
TestMathUnitTest::test_resize_11_cpu PASSED   [ 85%]
TestMathUnitTest::test_resize_12_cpu PASSED   [ 92%]
TestMathUnitTest::test_resize_13_cpu PASSED   [100%]

Problems:

  • project’s adoption and maintenance: last commit in Apr 2020

4) Another approach inspired by torchaudio tests

Split the tests into 3 files, for example

We could similarly add a file for PIL input, e.g. torchscript_consistency_pil_test.py.
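
A rough sketch of how such a split could look for transforms, modelled on the torchaudio layout; the module and class names below are hypothetical:

import unittest

import torch
import torchvision.transforms.functional as F


# torchscript_consistency_impl.py: shared test definitions (hypothetical module)
class TorchscriptConsistency:
    device = None  # set by the concrete test class

    def _assert_consistency(self, fn, img, **kwargs):
        scripted = torch.jit.script(fn)
        self.assertTrue(torch.equal(fn(img, **kwargs), scripted(img, **kwargs)))

    def test_hflip(self):
        img = torch.randint(0, 256, (3, 12, 16), dtype=torch.uint8, device=self.device)
        self._assert_consistency(F.hflip, img)


# torchscript_consistency_cpu_test.py: imports the mixin and runs it on CPU only
class TestTorchscriptConsistencyCPU(TorchscriptConsistency, unittest.TestCase):
    device = "cpu"


# torchscript_consistency_cuda_test.py: a separate file that CPU-only environments can skip
@unittest.skipUnless(torch.cuda.is_available(), "CUDA not available")
class TestTorchscriptConsistencyCUDA(TorchscriptConsistency, unittest.TestCase):
    device = "cuda"

Keeping the CUDA classes in their own file is what allows an environment that cannot run CUDA tests to simply not collect that file.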

Open questions

  • How to do the operation’s configuration injection? One possibility is sketched after the snippet below.
img = ...
ref_fn = ...

for config in test_configs:
    output = fn(img, **config)
    true_output = ref_fn(img, **config)
    self.assertEqual(output, true_output)
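
One possibility, purely as a sketch (the ResizeConfigTester class and the config values are illustrative, not an agreed-upon set), is to declare deterministic configs on the test class and feed them through a loop like the one above:

import unittest

import torch
import torchvision.transforms.functional as F


class ResizeConfigTester(unittest.TestCase):
    # one possible injection mechanism: deterministic configs declared on the
    # test class and consumed by a generic loop (the values are illustrative)
    test_configs = [
        {"size": [32, 32]},
        {"size": [32, 48]},
        {"size": 32},  # scalar size: the smaller edge is matched
    ]

    def test_resize_configs(self):
        tensor = torch.randint(0, 256, (3, 44, 56), dtype=torch.uint8)
        pil_img = F.to_pil_image(tensor)
        for config in self.test_configs:
            output = F.resize(tensor, **config)
            # the PIL output serves as the reference here; exact equality is not
            # guaranteed across backends, so only the shapes are compared
            true_output = F.pil_to_tensor(F.resize(pil_img, **config))
            self.assertEqual(output.shape, true_output.shape)


if __name__ == "__main__":
    unittest.main()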

Additional context

Recent bugs (e.g. https://github.com/pytorch/vision/pull/2869) show unsatisfactory code coverage for transforms.

cc @vfdev-5 @fmassa @datumbox @mthrok


Top GitHub Comments

vfdev-5 commented, Jun 2, 2021

@datumbox if we can now go with pytest, which was not the case at the moment of creating this RFC, then we can close this issue.

mthrok commented, Oct 27, 2020

I love the direction this RFC is going.

One thing to note regarding the split of devices. The reason why I decided to split the GPU/CUDA tests of torchaudio into separate files is fbcode. CUDA tests cannot run in fbcode, and constantly skipping them will trigger alerts. There is no way in fbcode to disable such an alarm, because from the viewpoint of fbcode there is no point in having a test that is constantly skipped. PyTorch core considers it fine to receive the alerts and decided to ignore them (or probably because it’s too much work to do such a migration), but since torchaudio is small and new, I decided to split them into separate files and run only the CPU tests in fbcode.

I do not know if the same thing can be accomplished with PyTorch’s test utilities. I did not play around with it.
