
[Discussion] Property testing ergonomics improvements


Context

Property testing in Kotest relies on the upfront commitment of arbs to checkAll, which needs to be placed inside a test. Internally, checkAll calls arb.generate with a given seed, which is immensely useful for tracking down the root cause of failures. It also orchestrates shrinking, classifications, before / after property hooks, etc.
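To make the seed mechanics concrete, here is a minimal sketch of the current flow using only existing Kotest APIs (the property itself is a placeholder): pinning the seed reported by a failing run via PropTestConfig regenerates the exact same inputs, which is what makes failures reproducible.

import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.shouldBe
import io.kotest.property.Arb
import io.kotest.property.PropTestConfig
import io.kotest.property.arbitrary.int
import io.kotest.property.arbitrary.string
import io.kotest.property.checkAll

class CurrentFlowSpec : FunSpec({
  test("replaying a failure with a pinned seed") {
    // the seed from a failing run can be pinned here so that checkAll
    // regenerates the exact same (a, b) pairs on every subsequent run
    checkAll(PropTestConfig(seed = 918273645L), Arb.int(), Arb.string()) { a, b ->
      (a.toString() + b).length shouldBe a.toString().length + b.length
    }
  }
})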

While the current mechanism is simple to understand and useful, I notice there are still gaps when it comes to tests requiring dependent arbitraries.

A use case I often see is the need for additional test setup that uses various arbs based on the values generated in checkAll. Developers do this in multiple ways, but one of the more popular ones seems to involve calling arb.single() without a random seed. Note: I observed that some users who are aware of the importance of the random seed figure out a way to recompute random seeds, or proxy the RandomSource via arbitrary { rs -> rs }, and propagate them to these dependent setups manually.
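A hedged illustration of both patterns, assuming single() accepts an optional RandomSource; Fixture, arbName and arbAge are hypothetical stand-ins:

import io.kotest.property.Arb
import io.kotest.property.RandomSource
import io.kotest.property.arbitrary.*

data class Fixture(val name: String, val age: Int)

val arbName: Arb<String> = Arb.string(1..10)
val arbAge: Arb<Int> = Arb.int(0..120)

// (1) the problematic pattern: single() draws from a fresh, unseeded
// RandomSource, so the fixture cannot be reproduced from the reported seed
fun fixtureUnseeded(): Fixture = Fixture(arbName.single(), arbAge.single())

// (2) the manual workaround: proxy the RandomSource through an Arb so the
// dependent setup is driven by the same seed as the test, used roughly as
//   checkAll(arbRandomSource, arbA) { rs, a -> val fixture = fixtureSeeded(rs); ... }
val arbRandomSource: Arb<RandomSource> = arbitrary { rs -> rs }

fun fixtureSeeded(rs: RandomSource): Fixture =
  Fixture(arbName.single(rs), arbAge.single(rs))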

In https://github.com/kotest/kotest/issues/2493 Kotest 5.x enables additional suspend setup via generateArbitrary. However, that doesn’t change the fact that one still has to install that arb inside a checkAll upfront.

Caveats:

  • We’ve established that, at a very high level, these are already possible. There are some intricate details that still don’t quite work, e.g. shrinking, because of the dynamic nature of the lambdas.

In a nutshell:

// currently 
test("should test something") {
  // observe primarily the upfront commitment of arbs
  checkAll(PropTestConfig(...), arbA, arbB, arbC) { a, b, c ->
     val expected = ...
     fn(a, b, c) shouldBe expected
  }
}

// what we sort-of expected to be able to do
proptest("should test something") {  // this: PropContext 
  // prop context carries a random seed in the coroutine context
  // devs can call .value() on an arb inside of a prop context
  val result = fn(
    arbA.value(),
    arbB.value(),
    arbC.value()
  )
  result shouldBe ...
}

interface PropContext {
  suspend fun <A> Arb<A>.value(): A
  suspend fun randomSource(): RandomSource // this is available in the coroutine context
}

In addition, inside of proptest(...) { ... } you’d also have some additional functions to configure the test itself, i.e.

interface PropTestContext : PropContext {
  suspend fun beforeProperty(fn: suspend () -> Unit)
  suspend fun afterProperty(fn: suspend () -> Unit)
  suspend fun configure(fn: suspend () -> PropTestConfig)
}
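
A rough usage sketch of how these hooks might read inside the proposed builder; proptest, the hook functions, and names such as database, repository and arbUser are part of the proposal or hypothetical, not existing Kotest API:

proptest("should reset external state between properties") { // this: PropTestContext
  configure { PropTestConfig(seed = 1234L) }
  beforeProperty { database.reset() }         // `database` is a hypothetical fixture
  afterProperty { database.assertNoLeaks() }  // runs after every property evaluation

  val user = arbUser.value()                  // arbUser: a hypothetical Arb<User>
  repository.save(user)
  repository.findById(user.id) shouldBe user
}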

If we have this, we also enable seed propagation via the coroutine context. This means developers can suddenly do something like this:

proptest("should propagate property-test-context") { // this: PropContext 
  val state = setup(arbParamA.value(), arbParamB.value()) 
  val expected = ...
  sut(state).doSomething() shouldBe expected
}

suspend fun setup(params...): TestState = propContext { 
  // this: PropContext - the random seed and value syntax is available here
  TestState(
    fooInStore = Arb.list(arbFoo, 1..10).value(),
    somethingElseStateful = arbXyz.value()
  )
} 
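
One possible way to wire this up, sketched under the assumption that the RandomSource is carried by a coroutine-context element and that samples are drawn via Arb.sample(rs); none of this is existing Kotest API:

import io.kotest.property.Arb
import io.kotest.property.RandomSource
import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext

// a coroutine-context element carrying the seed; proptest would install it via
// something like withContext(PropRandom(RandomSource.seeded(seed))) { ... }
class PropRandom(val rs: RandomSource) : AbstractCoroutineContextElement(PropRandom) {
  companion object Key : CoroutineContext.Key<PropRandom>
}

object DefaultPropContext : PropContext {
  override suspend fun randomSource(): RandomSource =
    coroutineContext[PropRandom]?.rs ?: error("no RandomSource in the coroutine context")

  // draw a single sample from the propagated seed; shrinking is not handled here,
  // which is exactly the caveat noted above and in the comments below
  override suspend fun <A> Arb<A>.value(): A = sample(randomSource()).value
}

// propContext resolves its receiver against the current coroutine context, so
// helpers like setup(...) above stay repeatable under the same seed
suspend fun <A> propContext(block: suspend PropContext.() -> A): A =
  DefaultPropContext.block()

Structured this way, the seed flows through suspend calls for free, but the framework still can’t see which arbs were used inside the body, which is why shrinking remains the open question.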

This is a very powerful feature because devs are suddenly able to build more complex property-test setups while keeping them repeatable, thanks to the propagated random seed.

Why

We wish to free developers from the upfront commitment of arbs in checkAll and allow them to call .value() from within their test code. This solves the various requirements described in the context above.

arb.next() and arb.single() are useful but unfortunately also very easy to misuse. I’ve observed developers who weren’t familiar with property testing best practices becoming increasingly frustrated because property tests made their suites flaky and they couldn’t figure out why. Oftentimes the blame was put on the test framework instead. They’re not exactly wrong: Kotest does “allow” users to do that. arb.next() is a feature that is both useful and a footgun at the same time.
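A hedged sketch of that failure mode, in the loose style of the snippets above (parse is a hypothetical function under test):

// flaky: the input is regenerated from a fresh RandomSource on every run,
// so a failure cannot be reproduced from any seed Kotest reports
test("parses arbitrary input") {
  val input = Arb.string(0..100).next()
  parse(input).shouldNotBeNull()
}

// the reproducible equivalent keeps generation inside checkAll, where the
// seed is tracked and reported on failure
test("parses arbitrary input, reproducibly") {
  checkAll(Arb.string(0..100)) { input ->
    parse(input).shouldNotBeNull()
  }
}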

How might we make the framework assist developers in doing the right thing while also improving the ergonomics? One possible alternative that comes to mind is to attach a warning to single() / next(). A more heavy-handed option is to only expose the no-argument variants for use within the property context.

Initial discussions and PR: #2529

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 12 (10 by maintainers)

Top GitHub Comments

2 reactions
sksamuel commented, Oct 17, 2021

My motive for the play-around code from last night was to support before and after easily for a property test, and less about new syntax for arbs. I think that having the Arb interface is useful for things like edge cases and shrinks, and so trying to skirt around that by having .value() means you might as well just not bother with the Arb structure at all. Just use a function that returns random values.

2 reactions
sksamuel commented, Oct 17, 2021

I think that without requiring a lambda from the user that has a specified number of inputs, we can’t make shrinking work. So right now that’s the final argument to checkAll.

Without requiring this, the code path can vary, and there’s no way for us to inject shrunk values into the code.
