
Flaky test: test_normalized_laplacian in tests.core.test_utils

See original GitHub issue

Describe the bug

The test_normalized_laplacian test in tests.core.test_utils sometimes fails.

To Reproduce

  1. Run the test many times?
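
One way to do this (a sketch, not part of the original report; the test node id is taken from the traceback below, so adjust the path if the repository layout differs) is to invoke pytest on the single test in a loop until it fails:

    # Re-run the single flaky test repeatedly and stop at the first failure.
    import subprocess
    import sys

    TEST_ID = "tests/core/test_utils.py::test_normalized_laplacian"

    for attempt in range(1, 101):
        # Use a fresh interpreter for each run so fixture state (including the
        # randomly generated example_graph) is rebuilt every time.
        result = subprocess.run([sys.executable, "-m", "pytest", "-q", TEST_ID])
        if result.returncode != 0:
            print(f"Test failed on attempt {attempt}")
            break
    else:
        print(f"No failure in {attempt} attempts")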

Observed behavior

https://buildkite.com/stellar/stellargraph-public/builds/2677

example_graph = 

    def test_normalized_laplacian(example_graph):
        Aadj = example_graph.to_adjacency_matrix()
        laplacian = normalized_laplacian(Aadj).todense()
        eigenvalues, _ = np.linalg.eig(laplacian)
    
        # min eigenvalue of normalized laplacian is 0
        # max eigenvalue of normalized laplacian is <= 2
        assert eigenvalues.min() == pytest.approx(0, abs=1e-7)
        assert eigenvalues.max() <= (2 + 1e-7)
        assert laplacian.shape == Aadj.get_shape()
    
        laplacian = normalized_laplacian(Aadj, symmetric=False)
>       assert 1 == pytest.approx(laplacian.sum(), abs=1e-7)
E       AssertionError: assert 1 == 2.0000000000000004 ± 1.0e-07
E        +  where 2.0000000000000004 ± 1.0e-07 = pytest.approx(2.0000000000000004, abs=1e-07)
E        +    and   2.0000000000000004 = <6x6 sparse matrix with 14 stored elements in Compressed Sparse Row format>.sum()

tests/core/test_utils.py:60: AssertionError
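
As an aside, the first half of the test checks standard spectral properties of the symmetric normalized Laplacian: its eigenvalues lie in [0, 2] and 0 is always attained. The following standalone sketch (not part of the test suite; it uses networkx, which appears in the environment list below, on an arbitrary random graph) illustrates those bounds:

    import networkx as nx
    import numpy as np

    # Any graph works here; the seed just makes the example deterministic.
    G = nx.erdos_renyi_graph(n=20, p=0.2, seed=0)

    # Symmetric normalized Laplacian; networkx handles degree-0 nodes by
    # zeroing the corresponding rows and columns.
    L = nx.normalized_laplacian_matrix(G).todense()
    eigenvalues = np.linalg.eigvalsh(L)

    assert abs(eigenvalues.min()) < 1e-7   # 0 is always an eigenvalue
    assert eigenvalues.max() <= 2 + 1e-7   # eigenvalues never exceed 2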

Expected behavior

The test shouldn’t fail.

Environment

Operating system: CI

Python version: 3.7

Package versions:

absl-py==0.9.0
ansiwrap==0.8.4
appdirs==1.4.3
astor==0.8.1
attrs==19.3.0
backcall==0.1.0
black==19.10b0
bleach==3.1.3
boto3==1.12.28
botocore==1.15.28
cachetools==4.0.0
certifi==2019.11.28
chardet==3.0.4
click==7.1.1
coverage==4.5.4
cycler==0.10.0
decorator==4.4.2
defusedxml==0.6.0
docopt==0.6.2
docutils==0.15.2
entrypoints==0.3
gast==0.2.2
gensim==3.8.1
google-api-core==1.16.0
google-auth==1.11.3
google-auth-oauthlib==0.4.1
google-cloud-core==1.3.0
google-cloud-storage==1.26.0
google-pasta==0.2.0
google-resumable-media==0.5.0
googleapis-common-protos==1.51.0
grpcio==1.27.2
h5py==2.10.0
idna==2.9
importlib-metadata==1.5.0
ipykernel==5.2.0
ipython==7.13.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
isodate==0.6.0
jedi==0.16.0
Jinja2==2.11.1
jmespath==0.9.5
joblib==0.14.1
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==6.1.0
jupyter-console==6.1.0
jupyter-core==4.6.3
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
llvmlite==0.31.0
Markdown==3.2.1
MarkupSafe==1.1.1
matplotlib==3.2.1
mistune==0.8.4
more-itertools==8.2.0
mplleaflet==0.0.5
nbclient==0.1.0
nbconvert==5.6.1
nbformat==5.0.4
networkx==2.4
notebook==6.0.3
numba==0.48.0
numpy==1.18.2
oauthlib==3.1.0
opt-einsum==3.2.0
packaging==20.3
pandas==1.0.3
pandocfilters==1.4.2
papermill==2.0.0
parso==0.6.2
pathspec==0.7.0
pexpect==4.8.0
pickleshare==0.7.5
pluggy==0.13.1
prometheus-client==0.7.1
prompt-toolkit==3.0.4
protobuf==3.11.3
ptyprocess==0.6.0
py==1.8.1
py-cpuinfo==5.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
Pygments==2.6.1
pyparsing==2.4.6
pyrsistent==0.16.0
pytest==5.3.1
pytest-benchmark==3.2.3
pytest-cov==2.8.1
python-dateutil==2.8.1
pytz==2019.3
PyYAML==5.3.1
pyzmq==19.0.0
qtconsole==4.7.1
QtPy==1.9.0
rdflib==4.2.2
regex==2020.2.20
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.0
s3transfer==0.3.3
scikit-learn==0.22.2.post1
scipy==1.4.1
seaborn==0.10.0
Send2Trash==1.5.0
six==1.14.0
smart-open==1.10.0
tenacity==6.1.0
tensorboard==2.1.1
tensorflow==2.1.0
tensorflow-estimator==2.1.0
termcolor==1.1.0
terminado==0.8.3
testpath==0.4.4
textwrap3==0.9.2
toml==0.10.0
tornado==6.0.4
tqdm==4.43.0
traitlets==4.3.3
treon==0.1.3
typed-ast==1.4.1
urllib3==1.25.8
wcwidth==0.1.9
webencodings==0.5.1
Werkzeug==1.0.0
widgetsnbextension==3.5.1
wrapt==1.12.1
zipp==3.1.0

Additional context

N/A

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

1 reaction
huonw commented, Oct 10, 2021

Ah, of course, thank you for that investigation! It does seem like we didn’t think about the random graph sometimes having disconnected regions (in addition to the purposeful isolated node), and thus the correct value changing (number of connected components - 1).

If you’re willing to file a PR that changes the 1 in assert 1 == ... to the correct value (and removes the flaky_xfail_mark annotation), that would be appreciated.

0 reactions
sgrigory commented, Oct 11, 2021

@huonw Sure, have a look at https://github.com/stellargraph/stellargraph/pull/1986

Upon closer inspection, it turned out that one should use not just the number of connected components but the number of degree-0 nodes: such nodes create problems when normalizing the Laplacian because the corresponding row of the adjacency matrix is all zeros. I tried to explain this in the PR description; let me know if that makes sense.
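
To illustrate the point, here is a minimal numpy-only sketch (independent of stellargraph's normalized_laplacian, and assuming the common convention that rows of D⁻¹A for degree-0 nodes are left as zeros): every row of the random-walk normalized Laplacian I − D⁻¹A sums to 0 for a node with neighbours, while each degree-0 node contributes a row summing to 1, so the total sum equals the number of degree-0 nodes rather than the hard-coded 1.

    import numpy as np

    # 4-node graph: edges 0-1 and 1-2, node 3 isolated (degree 0).
    A = np.array(
        [
            [0, 1, 0, 0],
            [1, 0, 1, 0],
            [0, 1, 0, 0],
            [0, 0, 0, 0],
        ],
        dtype=float,
    )

    degrees = A.sum(axis=1)
    # 1/degree, with 0 for degree-0 nodes so their rows of D^-1 A stay zero.
    inv_degrees = np.divide(1.0, degrees, out=np.zeros_like(degrees), where=degrees > 0)
    L_rw = np.eye(len(A)) - np.diag(inv_degrees) @ A

    num_degree_zero = int((degrees == 0).sum())
    # Here both are 1; in general the sum tracks the number of degree-0 nodes.
    assert np.isclose(L_rw.sum(), num_degree_zero)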

