Decoupling test generation and test execution
Many of the equivalence oracles in LearnLib are based on testing (this includes `RandomWordEQOracle`, `W(p)MethodEQOracle`, `CompleteExplorationEQOracle`, …) and so each of these classes will contain code like this (in many flavours):
```java
// Compute the hypothesis output, pose the same query to the system under
// learning, and return the query as a counterexample if the outputs disagree.
D hypOutput = output.computeOutput(queryWord);
sulOracle.processQueries(Collections.singleton(query));
if (!Objects.equals(hypOutput, query.getOutput())) {
    return query;
}
```
It is a bit annoying to have to do this every time one comes up with a new test generation algorithm. My proposal is to introduce the concept of a test generator; then you implement the above code just once (i.e. one can make an equivalence oracle from any test generator).
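A minimal sketch of what that shared executor could look like, written against plain Java types so it stays independent of a concrete LearnLib version; the class name `TestExecutor` and the use of `Function` to stand in for `computeOutput`/`processQueries` are my own illustration, not existing API:

```java
import java.util.Iterator;
import java.util.Objects;
import java.util.function.Function;

// Hypothetical sketch: runs generated tests against the hypothesis and the real
// system, returning the first test on which the outputs disagree (a counterexample),
// or null if the generator runs out of tests.
final class TestExecutor<T, O> {

    private final Function<T, O> hypothesisOutput; // e.g. wraps hyp.computeOutput(...)
    private final Function<T, O> systemOutput;     // e.g. wraps sulOracle.processQueries(...)

    TestExecutor(Function<T, O> hypothesisOutput, Function<T, O> systemOutput) {
        this.hypothesisOutput = hypothesisOutput;
        this.systemOutput = systemOutput;
    }

    // The comparison loop from above, written exactly once for all test generators.
    T findCounterexample(Iterator<T> testGenerator) {
        while (testGenerator.hasNext()) {
            T test = testGenerator.next();
            if (!Objects.equals(hypothesisOutput.apply(test), systemOutput.apply(test))) {
                return test; // outputs disagree: counterexample found
            }
        }
        return null; // generator exhausted without finding a counterexample
    }
}
```

An equivalence oracle built from a particular test generation method is then just this executor applied to whatever iterator of test words that method produces.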
Pros:
- Implement test execution just once.
- Batch tests in order to execute them in parallel (right now you'd have to implement this batching in each class; `RandomWordEQOracle`, for example, does this).
- Bound the test generation without having to add a counter to each test generator (see the sketch below).
- Interleave different test generation methods.
There are no cons 😉, besides having yet another abstraction.
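To make the bounding point concrete: once tests come out of an iterator, limiting them becomes a generic wrapper around any generator rather than a counter inside each one. A minimal sketch, with a hypothetical `bound` helper (not existing LearnLib API):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

final class Generators {

    // Hypothetical helper: limits any (possibly infinite) test generator to at most n tests.
    static <T> Iterator<T> bound(Iterator<T> generator, int n) {
        return new Iterator<T>() {
            private int remaining = n;

            @Override
            public boolean hasNext() {
                return remaining > 0 && generator.hasNext();
            }

            @Override
            public T next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                remaining--;
                return generator.next();
            }
        };
    }
}
```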
Ideally we would implement this with coroutines, but Java does not have them, so I think iterators will do fine. Note that a collection is not sufficient, since many test generation methods are infinite. A supplier is also not sufficient, because it cannot stop (some test generation methods are finite).

In terms of interfaces, I think we would like to have two concepts. First there is the test generator, which may be just an iterator. Then there is the test generation method, which is a function taking a hypothesis and returning a test generator. I have no opinion on whether we should use standard Java interfaces (like iterator) or define our own.
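As a sketch of those two concepts with hypothetical names (one could equally well use `java.util.Iterator` and `java.util.function.Function` directly instead of defining new interfaces):

```java
import java.util.Iterator;

// A test generator: a stream of tests for one fixed hypothesis; it may be
// infinite (random words) or finite (complete exploration up to some depth).
interface TestGenerator<T> extends Iterator<T> {
}

// A test generation method: given a hypothesis, produces a fresh test generator for it.
interface TestGenerationMethod<H, T> {
    TestGenerator<T> generateTests(H hypothesis);
}
```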
What are your thoughts?
Also should this be in LearnLib, or as something separate?
Top GitHub Comments
I like the idea and think we should implement this in LearnLib!
The one method that does not match this pattern is a random walk - but I think that is OK.
So my plan was to have an interface `TestMethod` which returns a `TestGenerator` for every hypothesis, meaning that you get a new test generator for each hypothesis. But maybe a single entity with some `updateHypothesis` function is easier to use.

What I envision for active learning is something where we can write the following (where all these components are provided in the library):
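The code snippet from this comment is not preserved in this copy of the issue; judging from the explanation that follows, it composed the pieces roughly like the sketch below (all class names hypothetical, not existing LearnLib API):

```java
// Hypothetical reconstruction: bound a random test method to 1000 tests, interleave
// it with a W-method-style test method, and adapt the combination into an
// equivalence oracle that executes the tests against the SUL.
new TestingOracle<>(
        new Interleave<>(
                new Bound<>(1000, new RandomTests<>()),
                new WMethodTests<>()),
        sulOracle);
```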
Here `TestingOracle` adapts a test method into an equivalence oracle (i.e. it is the test executor), `Bound` just picks the first 1000 tests and then stops, and `Interleave` will zip the two test methods.