
[Feature] test.todo or test.manual

See original GitHub issue

testcase – a set of actions to perform to verify some functionality
test – an automated testcase

In our delivery process, all our testcases are stored as code:

test – an automated testcase
test.skip – a broken/outdated testcase (not the test itself!) which must not be checked at all
test.todo – a testcase which is valid and must be checked before release, but is not implemented as an autotest yet

Let's say we have two testcases:

test('testcase one')

test.todo('testcase two')

The second one will be uploaded to Allure EE as an in-progress testcase that still must be checked. A QA engineer can then mark it as Passed or Failed manually:

(screenshot of the Allure EE testcase status view omitted)
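The test / test.skip / test.todo split described above can be sketched as a tiny registry in plain JavaScript. This is illustrative only; real runners such as Jest or Playwright implement this internally, and the registry shape here is invented for the example:

```javascript
// Minimal sketch of a test registry with three states.
// Illustrative only; not how Jest or Playwright actually store tests.
const registry = [];

function test(title, fn) {
  registry.push({ title, fn, status: 'automated' });
}
// Broken/outdated testcase: kept in code, never executed.
test.skip = (title, fn) => registry.push({ title, fn, status: 'skipped' });
// Valid testcase with no automation yet: no callback is accepted.
test.todo = (title) => registry.push({ title, status: 'todo' });

test('testcase one', async () => { /* automated steps */ });
test.todo('testcase two');

// A reporter could then surface the "todo" entries for manual checking:
const manual = registry.filter(t => t.status === 'todo').map(t => t.title);
console.log(manual); // logs [ 'testcase two' ]
```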

So, our QA process is:

  1. Run all tests (this happens automatically in pull-request CI) → generate a report → upload it to Allure as a test launch
  2. Inspect the test launch – here some testcases have passed, some have failed, and some are not implemented
  3. Manually run all failed and not-implemented testcases (within the same test launch!)
  4. Close the test launch
  5. Merge the pull request if the launch is successful → run CD and make a release
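The merge rule in step 5 could be expressed as a small check (hypothetical helper in plain JavaScript; Allure's real launch model is richer than this):

```javascript
// Sketch: a launch is mergeable only when every testcase ended up passed,
// whether it passed automatically or was verified by hand (steps 3-5 above).
// The status values here are hypothetical, not Allure's actual model.
function launchSuccessful(results) {
  return results.every(r => r.status === 'passed');
}

const launch = [
  { title: 'testcase one', status: 'passed' },              // automated
  { title: 'testcase two', status: 'passed', manual: true } // checked by hand
];
console.log(launchSuccessful(launch)); // logs true
```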

What do you think about the idea of using autotests as the source of truth for all testcases, including those that cannot be automated (e.g. a hover state or some rare browser)?

What do you think about a test.todo or test.manual annotation for such cases?

Jest and CodeceptJS already have this. But Jest does not allow passing a callback, which is a problem because we cannot set allure.description anywhere but in the callback. allure.description is important for the QA engineer because it contains the body of the testcase (a human-readable set of actions to perform).

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
pavelfeldman commented, Oct 11, 2022

I would suggest introducing a convention for annotating those using fixme. Something like a tag, annotation or attachment:

test.fixme('add to cart @manual', async ({ page }) => {
  test.info().annotations.push({ type: 'manual' });
  test.info().annotations.push({ type: 'spec', description: `
    1. Log in
    2. Pick product
    3. Add to cart
  ` });
  await test.info().attach('spec', {
    body: `
      1. Log in
      1. Pick product
      1. Add to cart
    `,
    contentType: 'text/markdown'
  });
});
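A custom reporter could later pick out the manually annotated tests. A rough sketch of that filtering step, using plain objects in place of Playwright's reporter API (the object shape below is an assumption that mirrors the { type, description } annotations pushed above):

```javascript
// Sketch: filter tests whose annotations include type 'manual'.
// The test objects only mimic the shape a reporter would receive;
// this is not Playwright's actual reporter API.
function manualTests(tests) {
  return tests.filter(t => t.annotations.some(a => a.type === 'manual'));
}

const tests = [
  { title: 'add to cart', annotations: [{ type: 'manual' }, { type: 'spec', description: '1. Log in ...' }] },
  { title: 'search',      annotations: [] }
];
console.log(manualTests(tests).map(t => t.title)); // logs [ 'add to cart' ]
```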
0 reactions
pavelfeldman commented, Oct 12, 2022

Closing as per above, please feel free to open a new issue if this does not cover your use case.
