
Inconsistent handling of script errors between the three engines

See original GitHub issue

(This is with backstopjs@3.2.9)

I noticed an inconsistency in how the three engines handle script errors, with Puppeteer being the most troublesome of the three, imho.

I came up with this setup, which you can use to replicate the issue:

Three scenarios, one of which is designed to fail

"scenarios": [
  {
    "label": "Google",
    "url": "https://google.com"
  },
  {
    "label": "Google / Wait for wrong selector",
    "url": "https://google.com",
    "onReadyScript": "wait-for-wrong-selector.js"
  },
  {
    "label": "Google / Type",
    "url": "https://google.com",
    "onReadyScript": "type.js"
  }
]
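
For completeness, a minimal backstop.json wrapping these scenarios might look like the sketch below. Only the scenarios themselves, the "backstop_default" id (visible in the PNG filenames later on) and the desktop 1920x900 viewport (visible in the Casper log below) come from this issue; the paths and report settings are the usual defaults and are assumptions here. The "engine" field is what gets switched between "casper", "chromy" and "puppeteer" for each run.

{
  "id": "backstop_default",
  "viewports": [{ "label": "desktop", "width": 1920, "height": 900 }],
  "engine": "puppeteer",
  "scenarios": [
    { "label": "Google", "url": "https://google.com" },
    { "label": "Google / Wait for wrong selector", "url": "https://google.com", "onReadyScript": "wait-for-wrong-selector.js" },
    { "label": "Google / Type", "url": "https://google.com", "onReadyScript": "type.js" }
  ],
  "paths": {
    "bitmaps_reference": "backstop_data/bitmaps_reference",
    "bitmaps_test": "backstop_data/bitmaps_test",
    "engine_scripts": "backstop_data/engine_scripts",
    "html_report": "backstop_data/html_report",
    "ci_report": "backstop_data/ci_report"
  },
  "report": ["browser"]
}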

For the two scenarios with an “onReady” script, there is a variation of the script for each of the engines:

Casper

// casper/type.js
module.exports = casper => {
  casper.then(() => {
    casper.fillSelectors('form.tsf', {
      'input#lst-ib': 'foobarbar'
    });
  });
};

// casper/wait-for-wrong-selector.js
module.exports = casper => {
  casper.then(() => {
    casper.fillSelectors('form.tsf', {
      'input#lst-ib': 'foobarbar'
    });
  });

  casper.then(() => {
    casper.waitForSelector('.does-not-exist');
  });
};

Chromy

// chromy/type.js
module.exports = chromy => {
  chromy.type('input#lst-ib', 'foobarbar');
}

// chromy/wait-for-wrong-selector.js
module.exports = chromy => {
  chromy.type('input#lst-ib', 'foobarbar');
  chromy.wait('.does-not-exist');
}

Puppeteer

// puppet/type.js
module.exports = async puppet => {
  await puppet.type('input#lst-ib', 'foobarbar');
}

// puppet/wait-for-wrong-selector.js
module.exports = async puppet => {
  await puppet.type('input#lst-ib', 'foobarbar');
  await puppet.waitFor('.does-not-exist', { timeout: 1000 });
}
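
For context, the sketch below is a simplified, assumed illustration of how such a script ends up being invoked on the Puppeteer side (it is not BackstopJS's actual runner code). It matches the “######## Error running Puppeteer #########” line in the log further down: the runner awaits the exported function, catches whatever it throws, logs it, and carries on without capturing that scenario's screenshot.

// Hypothetical, simplified illustration of running an onReadyScript with the
// Puppeteer engine; not BackstopJS's actual runner code.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://google.com');

  const onReady = require('./puppet/wait-for-wrong-selector.js');
  try {
    await onReady(page); // the 1000ms waitFor timeout rejects here
  } catch (err) {
    // the error is logged, but execution continues and the run still "succeeds"
    console.error('######## Error running Puppeteer #########', err);
  }

  await browser.close();
})();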

Now backstop reference and backstop test are run with all three engines, and these are the results I’ve found:

Casper

COMMAND | Executing core for `reference`
# ...
CasperJS:  CREATING NEW REFERENCE FILES
CasperJS:  Current location is https://google.com
CasperJS:  Capturing screenshots for desktop (1920x900)
CasperJS:  Ready event received.
CasperJS:  Current location is https://google.com
CasperJS:  Wait timeout of 5000ms expired, exiting.

Testing script failed with code: 1
# ...
      COMMAND | Command `reference` ended with an error after [9.965s]

With CasperJS the script error makes the entire command immediately fail and exit, meaning that BackstopJS can’t capture the remaining screenshots in the list.

Given that reference fails, there’s no point in running test: it will fail as well and no report will be generated.

Chromy

COMMAND | Executing core for `reference`
# ...
CREATING NEW REFERENCE FILES
9222 INFO >  BackstopTools have been installed.
CREATING NEW REFERENCE FILES
9223 Chrome v65 detected.
9223 INFO >  BackstopTools have been installed.
CREATING NEW REFERENCE FILES
9224 Chrome v65 detected.
9224 INFO >  BackstopTools have been installed.

# ...
      COMMAND | Command `reference` successfully executed in [10.396s]

Chromy marks the task as successful and doesn’t report any error whatsoever, even though the screenshot for “Google / Wait for wrong selector” has not been generated.

Running test gives the same output, but it’s marked as failed. This is the report’s outcome:

COMMAND | Executing core for `report`
      compare | Chromy error: WaitTimeoutError. See scenario – Google / Wait for wrong selector (desktop)
      compare | OK: Google backstop_default_Google_0_document_0_desktop.png
      compare | OK: Google / Type backstop_default_Google___Type_0_document_0_desktop.png
      # ...
      report | 2 Passed
      report | 1 Failed
      # ...
      COMMAND | Command `report` ended with an error after [0.177s]
      COMMAND | Command `test` ended with an error after [10.845s]

Now report picks up the WaitTimeoutError and fails, and the HTML report shows the missing screenshot for “Google / Wait for wrong selector”.

(screenshot: Chromy HTML report flagging the failed “Google / Wait for wrong selector” scenario)

Puppeteer

COMMAND | Executing core for `reference`
# ...
CREATING NEW REFERENCE FILE
Browser Console Log 0: JSHandle:BackstopTools have been installed.
# ...
CREATING NEW REFERENCE FILE
Browser Console Log 0: JSHandle:BackstopTools have been installed.
######## Error running Puppeteer ######### Error: waiting failed: timeout 1000ms exceeded
# ...
CREATING NEW REFERENCE FILE
Browser Console Log 0: JSHandle:BackstopTools have been installed.

# ...

      COMMAND | Command `reference` successfully executed in [5.624s]

While it reports that something went wrong with a script (which is good for debugging), Puppeteer still lets the rest of the suite run, and the reference command is marked as successful.

Running test gives the same output, and it’s marked as successful as well. This is the report’s outcome:

COMMAND | Executing core for `report`
      compare | OK: Google / Type backstop_default_Google___Type_0_document_0_desktop.png
      compare | OK: Google backstop_default_Google_0_document_0_desktop.png
       # ...
       report | 2 Passed
       report | 0 Failed
       # ...
      COMMAND | Command `report` successfully executed in [0.476s]
      COMMAND | Command `test` successfully executed in [6.424s]

There is no mention whatsoever of “Google / Wait for wrong selector”, which at this point is simply ignored: report is marked as successful and the HTML report shows that everything is fine…but with only 2 screenshots instead of 3.

(screenshot: Puppeteer HTML report showing only 2 of the 3 scenarios, all passing)


While CasperJS is too eager to stop everything as soon as it hits an error, Puppeteer’s handling of script errors is imho worse, as it simply marks the whole run as successful even when screenshots are missing. If you have a big test suite (we have 120+ screenshots in our project), it’s very easy to trust the report and not realize that some screenshots are missing.

I think the best behaviour would be a mix of Chromy + Puppeteer: show the script error in the output for debugging purposes, let the command run to completion, and then mark test/report as failed and show the missing screenshot(s) in the report.
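
To make that concrete, here is a rough sketch of what the Puppeteer side could look like. This is not BackstopJS's actual code: runPuppeteerScenario, scenarioErrors and the filename handling are all made up for illustration.

// Hypothetical sketch of the proposed behaviour, not BackstopJS internals.
const path = require('path');

async function runPuppeteerScenario(puppet, scenario, scenarioErrors) {
  try {
    if (scenario.onReadyScript) {
      // each onReady module exports `async puppet => { ... }`, as above
      await require(path.resolve(scenario.onReadyScript))(puppet);
    }
    const fileName = scenario.label.replace(/\W+/g, '_') + '.png';
    await puppet.screenshot({ path: fileName });
  } catch (err) {
    // keep the run going (as Puppeteer already does) and log the error...
    console.error(`Script error in "${scenario.label}": ${err.message}`);
    // ...but also record it, so test/report can mark the scenario as failed
    // and the HTML report can flag the missing screenshot, like Chromy's compare step does
    scenarioErrors.push({ label: scenario.label, error: err.message });
  }
}

Anything collected in scenarioErrors would then make report exit with an error, the same way the Chromy run above ends with “Command `test` ended with an error”.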

Issue Analytics

  • State: closed
  • Created 5 years ago
  • Comments: 20 (18 by maintainers)

Top GitHub Comments

4 reactions
garris commented, Apr 24, 2018

Yes. That is very specifically exactly what I was thinking.

1 reaction
AkA84 commented, Apr 22, 2018

@garris no, I’ve never approved any report in any of the tests I did as part of this issue; that’s why I’m reporting this as a bug. I think the behaviour should be the one displayed in your screenshot


and the one that Chromy has (with the due differences)

Running test gives the same output, but it’s marked as failed. […] Now report reports on the WaitTimeoutError and it fails, showing in the html report the missing screenshot for “Google / Wait for wrong selector”

otherwise imho this could be even worse than it was before: if before you knew the total number of scenarios you could quickly tell that the number reported on wasn’t right, now you can’t even do that! You basically have to scroll down yourself and check whether any errors are reported (= you can’t trust the report)

And I’m not even considering the scenario where you are running backstop as part of a CI process, where neither of those two options is even possible 😕
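
As a rough stop-gap for CI until this is fixed, one option (purely a hypothetical sketch, not something BackstopJS provides) is to compare the number of reference bitmaps on disk with the expected scenario × viewport count after running backstop reference:

// check-screenshot-count.js: hypothetical CI guard, not part of BackstopJS.
// Fails the build if fewer reference bitmaps exist than scenarios x viewports.
const fs = require('fs');

const config = JSON.parse(fs.readFileSync('backstop.json', 'utf8'));
const expected = config.scenarios.length * config.viewports.length;

const referenceDir = config.paths.bitmaps_reference; // assumes the default layout
const actual = fs.readdirSync(referenceDir).filter(f => f.endsWith('.png')).length;

if (actual < expected) {
  console.error(`Expected ${expected} reference screenshots, found only ${actual}`);
  process.exit(1);
}
console.log(`All ${expected} reference screenshots are present.`);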

