
Test cases are flaking in the FakeOppiaClockTest

See original GitHub issue

Describe the bug
The test cases testGetCurrentCalendar_wallClockMode_returnsCalendarWithCurrentTimeMillis() and testGetCurrentTimeMs_wallClockMode_returnsCurrentTimeMillis() are flaky.

Failure output:

expected to be in range: (1616173019436..1616173019440)
but was                : 1616173019440

	at org.oppia.android.testing.time.FakeOppiaClockTest.isWithin(FakeOppiaClockTest.kt:262)
	at org.oppia.android.testing.time.FakeOppiaClockTest.testGetCurrentCalendar_wallClockMode_returnsCalendarWithCurrentTimeMillis(FakeOppiaClockTest.kt:194)

Expected behavior
The test cases should pass consistently.

Screenshots
When run 100 times locally, the test failed once (screenshot from 2021-03-19 in the original issue).

The tests also fail intermittently on CI checks (screenshot from 2021-03-19 in the original issue).
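The race can be illustrated with a standalone sketch (this is not Oppia code; `isWithinBuffer` is a hypothetical helper mirroring the test's range check). Two back-to-back wall-clock readings usually land in the same millisecond, but on a busy CI machine the thread can be preempted between them, so the second reading occasionally falls outside a ±2 ms window:

```kotlin
// Hypothetical helper mirroring the test's assertion: is reportedTimeMs
// within ±bufferMs of currentTimeMs?
fun isWithinBuffer(reportedTimeMs: Long, currentTimeMs: Long, bufferMs: Long): Boolean =
    reportedTimeMs in (currentTimeMs - bufferMs)..(currentTimeMs + bufferMs)

fun main() {
    var failures = 0
    repeat(100) {
        val currentTimeMs = System.currentTimeMillis()
        // If the thread is preempted (GC pause, scheduler hiccup) right here,
        // the second reading can land more than 2 ms after the first.
        val reportedTimeMs = System.currentTimeMillis()
        if (!isWithinBuffer(reportedTimeMs, currentTimeMs, 2)) failures++
    }
    // On an idle machine this usually prints 0; under load it occasionally
    // does not, which is exactly the 1-in-100 flake described above.
    println("Failures out of 100 runs: $failures")
}
```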

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 15 (15 by maintainers)

Top GitHub Comments

prayutsu commented, Mar 24, 2021 (1 reaction)

@Karanjot-singh If you carefully take a look at the assertion of the test -

assertThat(reportedTimeMs).isWithin((currentTimeMs - 2)..(currentTimeMs + 2))

you can see that we check whether reportedTime lies in the range [currentTime - 2, currentTime + 2], which is very narrow. Although currentTime and reportedTime are initialized one right after the other, the difference between their actual values can still exceed that window (for example, if the thread is preempted between the two reads). So the current buffer for reportedTime is 4 ms, and we want to widen it to 100 ms. Does that make sense?
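The proposed widening can be sketched as below. Note this is an illustration, not the actual patch: the comment only says the window should grow from 4 ms to 100 ms, so splitting the wider buffer as ±50 ms around currentTimeMs is an assumption, and plain Kotlin range checks stand in for the test's custom isWithin matcher.

```kotlin
fun main() {
    // Two consecutive wall-clock readings, as in the flaky test.
    val currentTimeMs = System.currentTimeMillis()
    val reportedTimeMs = System.currentTimeMillis()

    // Original assertion: a 4 ms total window, flaky under scheduler jitter.
    val narrowRange = (currentTimeMs - 2)..(currentTimeMs + 2)

    // Proposed: a 100 ms total window (split here as ±50 ms, an assumption).
    val wideRange = (currentTimeMs - 50)..(currentTimeMs + 50)

    println(reportedTimeMs in narrowRange) // usually true, occasionally false
    println(reportedTimeMs in wideRange)   // robust to small pauses
}
```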

anandwana001 commented, Mar 19, 2021 (1 reaction)

@prayutsu It would be helpful if you add the failure output.

