
FP in Model testing

See original GitHub issue

First of all, I love this library, thank you very much 💛

💡 Idea

Could the Model testing interface be made more “functional” (that is, removing the need for classes)?

Motivation

TL;DR:

  • Using classes seems unnecessary here.
  • It seems odd to rely on mutating an argument (the model) that is injected on every call anyway.
  • I’m not sure what the best replacement for classes in Model Testing would be.

Long version:

fast-check has been an amazing functional library that elegantly abstracts all the “states” related to property testing.

So far fast-check is largely immutable and exposes common monadic interfaces (chain, map, etc.), but the model testing part pushes the mutable side (the model) to userland through an OOP interface, which forces the run command to lose its “purity”.

I see it is convenient and easy to wrap our heads around the idea “if Class = Command, then this & m.x = y are valid”, but the class part seems unnecessary.

If the constructor, model, and real arguments are passed in each time, and the classes are constantly re-instantiated, what is the instance even used for? It seems like a plain record of pure functions could do the job.
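To make the idea concrete, the “plain record of pure functions” could be typed roughly as follows. This is a sketch only, not fast-check’s actual API; the `FunctionalCommand` name and the counter model are illustrative assumptions:

```typescript
// Sketch only: a command as a plain record of pure functions, no class needed.
// These names are hypothetical, not part of fast-check's real API.
type FunctionalCommand<Model, Real> = {
  check: (m: Model) => boolean;       // precondition on the current model
  update: (m: Model) => Model;        // pure transition: returns a fresh model
  run: (m: Model, r: Real) => void;   // exercises the real system and asserts
  toString: () => string;
};

type CounterModel = { count: number };
type Counter = { inc: () => void; value: () => number };

const IncCommand: FunctionalCommand<CounterModel, Counter> = {
  check: () => true,
  update: (m) => ({ ...m, count: m.count + 1 }), // no mutation of `m`
  run: (_m, r) => { r.inc(); },
  toString: () => "Inc",
};
```

Because `update` returns a new model instead of mutating the injected one, the framework could thread the model through the command sequence itself and keep the run loop pure.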

Counters to this idea:

  • Change itself (any redesign would break the existing API).
  • Having just one update function might feel limiting: currently we are free to update the model anywhere inside the run function, and the pipeline might need something both before and after run.
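The second counter-point could perhaps be addressed with optional transitions around run. The sketch below is purely hypothetical (none of these names come from fast-check); it shows a runner that threads the model through the commands as a fold, so the runner stays pure with respect to the model:

```typescript
// Hypothetical sketch: optional pure transitions before and after `run`,
// so a single `update` function is not limiting. Not fast-check's API.
type Cmd<M, R> = {
  check: (m: M) => boolean;
  updateBefore?: (m: M) => M; // pure transition applied before run
  updateAfter?: (m: M) => M;  // pure transition applied after run
  run: (m: M, r: R) => void;
};

// Thread the model through the sequence as a fold over the commands.
const execute = <M, R>(cmds: Cmd<M, R>[], initial: M, real: R): M =>
  cmds.reduce((m, c) => {
    if (!c.check(m)) return m;              // precondition failed: skip
    const before = c.updateBefore?.(m) ?? m;
    c.run(before, real);
    return c.updateAfter?.(before) ?? before;
  }, initial);
```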

Example

export const AddTrackCommand =
  (position: number, trackName: string) => ({

    check: (m: MusicPlayerModel) => {
      return !m.tracksAlreadySeen[trackName];
    },

    update: (m: MusicPlayerModel, p: MusicPlayer) => {
      return {
        ...m,
        numTracks: m.numTracks + 1, // not m.numTracks++, which would mutate m
        tracksAlreadySeen: {
          ...m.tracksAlreadySeen,
          [trackName]: true
        }
      };
    },

    run: (m: MusicPlayerModel, p: MusicPlayer) => {
      const trackBefore = p.currentTrackName();
      p.addTrack(trackName, position % (m.numTracks - 1)); // old model
      assert.equal(p.playing(), m.isPlaying);
      assert.equal(p.currentTrackName(), trackBefore);
    },

    toString: () => {
      return `AddTrack(${position}, "${trackName}")`;
    }
  });
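For what it’s worth, such record-style commands could probably coexist with the current class-based ones through a thin adapter. The sketch below mirrors (rather than imports) the shape of a class-style command; the `toClassStyle` helper and `RecordCommand` type are assumptions for illustration, not fast-check code:

```typescript
// Sketch of a possible migration path: wrap a record-of-functions command
// into the mutable, class-style shape. All names here are hypothetical.
interface ClassStyleCommand<M, R> {
  check(m: M): boolean;
  run(m: M, r: R): void;
  toString(): string;
}

type RecordCommand<M extends object, R> = {
  check: (m: M) => boolean;
  update: (m: M) => M;        // pure transition
  run: (m: M, r: R) => void;
  toString: () => string;
};

const toClassStyle = <M extends object, R>(
  cmd: RecordCommand<M, R>
): ClassStyleCommand<M, R> => ({
  check: (m) => cmd.check(m),
  run: (m, r) => {
    cmd.run(m, r);
    Object.assign(m, cmd.update(m)); // mutate in place to honor the old contract
  },
  toString: () => cmd.toString(),
});
```

This way existing runners that expect in-place mutation would keep working while users write pure `update` functions.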

Hope this idea doesn’t sound too stupid. I feel like Model Testing is amazing, but at the moment it feels a bit awkward to work with.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 3
  • Comments: 9 (2 by maintainers)

Top GitHub Comments

1 reaction
dubzzz commented, Jun 1, 2020

Thanks for the feedback on the library, pretty cool to see positive feedback on it 😉

Concerning the class-based approach of model based testing in fast-check, I think the main reason is that I took my inspiration from RapidCheck, a C++ library for Property Based Testing. By the way, I recommend this article by the author of that library. You can see their model based testing approach code here.

Internally, fast-check does not rely on Command being a class for anything. So I believe a more functional API like the one you suggest might be possible 🤔

Anyway, it will need a bit of work and specs around it… and if it really becomes the new way to go for model based testing in fast-check, I’ll need to offer a migration path for existing users while preserving the two approaches for a certain amount of time (at least up to the next major).

0 reactions
stale[bot] commented, Aug 28, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.


