
Some tests seem to be flaky and fail depending on the Python version used

See original GitHub issue

Some tests (e.g. for Modelsim) compare a reference Makefile against the Makefile generated by the tool run.

For example, the reference Makefile contains:

PLUSARGS      ?= plusarg_bool=1 plusarg_int=42 plusarg_str=hello

When running test_modelsim, the generated Makefile instead contains:

PLUSARGS      ?= plusarg_int=42 plusarg_bool=1 plusarg_str=hello

I was running Python 3.5.9. With a different version you might get a different ordering: dictionary iteration order is only guaranteed to follow insertion order from Python 3.7 onwards, so on older versions the plusargs can be emitted in any order.
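
One way to keep such a comparison from flaking (a minimal sketch, not the project's actual test code; the plusargs_set helper is invented for illustration) is to compare the PLUSARGS assignment as an unordered set rather than as a raw string:

def plusargs_set(makefile_text):
    """Return the space-separated plusargs from the PLUSARGS line as a set."""
    for line in makefile_text.splitlines():
        if line.startswith("PLUSARGS"):
            _, _, value = line.partition("?=")
            return set(value.split())
    return set()

reference = "PLUSARGS      ?= plusarg_bool=1 plusarg_int=42 plusarg_str=hello"
generated = "PLUSARGS      ?= plusarg_int=42 plusarg_bool=1 plusarg_str=hello"

assert plusargs_set(reference) == plusargs_set(generated)

With this normalization the test asserts on the content of the plusargs rather than on whatever order the dictionary happened to produce.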

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
GCHQDeveloper560 commented, Apr 30, 2020

Your diagnosis matches the problem reported in #132. Pull request #143 fixes the current tests. Avoiding assumptions about dict ordering is probably good for robustness in general, but it would be nice if the tests were a little more flexible about the order of things that don’t matter.
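
For reference, the usual way to remove the ordering assumption on the generator side (a hedged sketch, not necessarily the change made in #143; format_plusargs is a made-up name) is to emit the plusargs in sorted order so the output is deterministic on every Python version:

def format_plusargs(plusargs):
    # Sorting the keys makes the emitted order independent of dict ordering.
    return " ".join("{}={}".format(k, v) for k, v in sorted(plusargs.items()))

plusargs = {"plusarg_int": 42, "plusarg_bool": 1, "plusarg_str": "hello"}
print("PLUSARGS      ?= " + format_plusargs(plusargs))
# -> PLUSARGS      ?= plusarg_bool=1 plusarg_int=42 plusarg_str=hello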

I’d also like a better solution for mocking than our current crowd of scripts in mock_commands that all do the same thing. If we weren’t trying to support Windows I’d make them all symlinks to a single script, but I haven’t come up with a better cross-platform solution.
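
For illustration, one cross-platform alternative to the symlink idea (purely a sketch; write_mock_commands, mock_stub.py, and the MOCK_LOG variable are invented here, not part of the project) is to generate thin per-command wrappers at test time that all delegate to a single shared stub:

import os
import stat
import sys

STUB = """import os, sys
# Log how the mocked tool was invoked so the test can assert on it later.
with open(os.environ.get("MOCK_LOG", "mock.log"), "a") as log:
    log.write(" ".join(sys.argv[1:]) + "\\n")
"""

def write_mock_commands(names, mock_dir):
    stub = os.path.join(mock_dir, "mock_stub.py")
    with open(stub, "w") as f:
        f.write(STUB)
    for name in names:
        if os.name == "nt":
            # Windows: a .bat wrapper avoids needing symlinks or execute bits.
            wrapper = os.path.join(mock_dir, name + ".bat")
            with open(wrapper, "w") as f:
                f.write('@"{}" "{}" {} %*\n'.format(sys.executable, stub, name))
        else:
            wrapper = os.path.join(mock_dir, name)
            with open(wrapper, "w") as f:
                f.write('#!/bin/sh\nexec "{}" "{}" {} "$@"\n'.format(sys.executable, stub, name))
            os.chmod(wrapper, os.stat(wrapper).st_mode | stat.S_IEXEC)
    # Put the mock directory first on PATH so the wrappers shadow the real tools.
    os.environ["PATH"] = mock_dir + os.pathsep + os.environ["PATH"]

A test could then call something like write_mock_commands(["make", "vsim"], tmpdir) in a fixture and inspect the log to see which tools were invoked and with which arguments.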

0 reactions
olofk commented, May 4, 2020

Does this mean we can close the issue now? I agree with @GCHQDeveloper560 that we want a better long-term solution, though, especially for the mocking. I just implemented the first approach I could think of, but there are likely far better ways to accomplish this. It’s also a bit leaky since it uses the host’s make (and probably other tools), IIRC.
