Custom Runner and TestCase
Summary
I want to use a custom Runner and a custom TestCase.
I can’t override or configure those classes in the current lib.
I’m willing to open a PR.
Runner
The Behave runner for the lib is located at https://github.com/behave/behave-django/blob/6fccd9b7dc2c61c9a894fa915cf87a7758581c69/behave_django/runner.py#L33
The SimpleTestRunner inherits from DiscoverRunner.
CustomRunner
I want SimpleTestRunner to inherit from MyTestSuiteRunner instead.
My runner adds a performance measurement; this could be related to https://github.com/behave/behave-django/issues/61
```python
from time import time
from unittest.runner import TextTestResult, TextTestRunner

from django.conf import settings
from django.test.runner import DiscoverRunner


class TimedTextTestResult(TextTestResult):

    def __init__(self, *args, **kwargs):
        super(TimedTextTestResult, self).__init__(*args, **kwargs)
        self.clocks = dict()

    def startTest(self, test):
        self.clocks[test] = time()
        # Deliberately skip TextTestResult.startTest and reproduce its
        # output here, so the clock is started first.
        super(TextTestResult, self).startTest(test)
        if self.showAll:
            self.stream.write(self.getDescription(test))
            self.stream.write(" ... ")
            self.stream.flush()

    def addSuccess(self, test):
        # Again skip TextTestResult's implementation and reproduce its
        # output, appending the elapsed time.
        super(TextTestResult, self).addSuccess(test)
        if self.showAll:
            self.stream.writeln("time spent: %.6fs" % (time() - self.clocks[test]))
        elif self.dots:
            self.stream.write(".")
            self.stream.flush()


class TimedTextTestRunner(TextTestRunner):
    resultclass = TimedTextTestResult


class MyTestSuiteRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
        settings.TEST = True
```
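As a quick, self-contained check of the resultclass override pattern used above (the class names here are illustrative stand-ins, not part of behave-django, and no Django settings are required):

```python
import io
import time
import unittest

class TimedResult(unittest.TextTestResult):
    """Records a start time per test and prints the elapsed time on success."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.clocks = {}

    def startTest(self, test):
        self.clocks[test] = time.time()
        super().startTest(test)

    def addSuccess(self, test):
        super().addSuccess(test)
        if self.showAll:  # verbosity >= 2
            self.stream.writeln(
                "time spent: %.6fs" % (time.time() - self.clocks[test]))

class TimedRunner(unittest.TextTestRunner):
    resultclass = TimedResult  # plug in the custom result class

class Sanity(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

stream = io.StringIO()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(Sanity)
result = TimedRunner(stream=stream, verbosity=2).run(suite)
# result.wasSuccessful() is True; the output contains a "time spent" line
```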
Possibilities
I can see two possible ways to achieve this.
- we can pass a --behave-test-runner argument to the behave management command and add a BEHAVE_TEST_RUNNER config in settings.py (Django settings)
- we can use Django’s get_runner function (code and documentation)
In five minutes I got possibility 2 working.
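For reference, django.test.utils.get_runner resolves a dotted class path taken from settings; a minimal stand-alone sketch of that resolution mechanism (BEHAVE_TEST_RUNNER is only the setting name proposed above, not an existing one, and resolve_runner is a hypothetical helper):

```python
import importlib

def resolve_runner(dotted_path):
    """Resolve 'pkg.module.ClassName' to the class object, mirroring the
    mechanism django.test.utils.get_runner applies to settings.TEST_RUNNER."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# A BEHAVE_TEST_RUNNER setting could then hold a dotted path like this:
runner_cls = resolve_runner("unittest.TextTestRunner")
```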
TestCase
The behave test cases for the lib are located at https://github.com/behave/behave-django/blob/6fccd9b7dc2c61c9a894fa915cf87a7758581c69/behave_django/testcase.py
They are a little trickier to work with because Django has no TEST_CASE setting (analogous to TEST_RUNNER).
Possibilities
- set 3 configs, one for each test case in the lib, and let the runners use their specific test case
- set 1 config, BEHAVE_TEST_CASE; this way the runners could use a get_test_case function (which would work similarly to get_runner)
- ‘expose’ the django_test_runner on the context (in the method BehaveHooksMixin.patch_context) and raise the custom behave hook behave_run_hook(self, 'before_django_ready', context) before calling django_test_runner.setup_testclass; this way we could monkey-patch the django_test_runner.testcase_class attribute as needed in the before_django_ready hook
In five minutes I almost got possibility 3 working.
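Possibility 3 would look roughly like this in a project’s environment.py. Note that both the before_django_ready hook and a django_test_runner attribute on the context are proposals from this issue, not part of the released behave-django API; stub classes stand in for the real objects so the sketch is self-contained:

```python
class StubTestRunner:
    """Stand-in for behave-django's internal test runner object."""
    testcase_class = object  # default test case class

class StubContext:
    """Stand-in for behave's context object."""

class MyTestCase:
    """Custom test case we want behave-django to use."""

def before_django_ready(context):
    # Proposed hook: swap the test case class before
    # django_test_runner.setup_testclass is invoked.
    context.django_test_runner.testcase_class = MyTestCase

context = StubContext()
context.django_test_runner = StubTestRunner()
before_django_ready(context)
```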
_pre_setup and _post_teardown
I need to understand why the methods _pre_setup and _post_teardown have the additional flag run=False if these methods are always called with run=True.
test.__call__
I need to understand why the TestCase is called at all, considering that __call__ does nothing more than _pre_setup, run, and _post_teardown, and that the runTest method is empty!
Issue Analytics
- State:
- Created 2 years ago
- Comments: 19 (12 by maintainers)
I wrote a VERY basic PR to get the ball rolling. It should work.
@bittner any idea when behave 1.2.7 will be released? The issue is that I’m having a hell of a hard time testing this through tox: since it validates against PyPI, and v1.2.7.dev2 is not pushed there, it fails. This line works fine locally, but tox hates it.

PS. @bittner what’s going on with the behave-django CI? My tests are not running for some reason (not that they would pass, due to the aforementioned comment).

– Yes, the PR needs a little love.
This indeed solves the issue about the Runner class. Consequently, it should resolve the TestCase issue too, as @bittner mentioned in https://github.com/behave/behave-django/pull/123#issuecomment-1009458067
At this point I’m no longer using behave-django. We decided to use the manual integration from behave seeing as my case may be off the charts compared to the “common” way.
Therefore, for me, this issue can be closed.
@kingbuzzman ty for the PR 🙏