Epsilon reinitialized when continuing a sampling run
It can be handy to pause a sampling run to print some status information. This is easy to implement via a for loop:
```python
for i in range(desiredSteps):
    # run a single generation at a time
    abc.run(max_nr_populations=1)
    # do some other stuff based on the results
```
However, unfortunately, in this case the adaptive epsilon is always computed from the very first population. This is counterintuitive, since everything else behaves as if the sampling run is simply continued (e.g. `t` is incremented). Hence, I would expect epsilon to be updated based on the most recent population.
The reason for this issue is the cached implementation of `_get_initial_population`:
Caching like this would be fine if the method consistently returned the very first population. However, the method takes a `t` argument, which suggests that it returns the population at a specific time. I cannot judge whether this is really a bug, but for the application mentioned above it would be nice if the argument `t` were also considered when caching, e.g. the cached value would only be returned if the corresponding `t` values match.
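The t-aware caching suggested above could be sketched as follows. All names here (`Sampler`, `_compute_population`) are hypothetical stand-ins to illustrate the idea, not pyABC's actual implementation:

```python
class Sampler:
    """Minimal sketch of t-aware caching (hypothetical names, not pyABC's code)."""

    def __init__(self):
        # cache stores a (t, population) pair instead of just the population
        self._initial_population = None

    def _compute_population(self, t):
        # stand-in for the expensive sampling step
        return f"population at t={t}"

    def _get_initial_population(self, t):
        # only reuse the cache if it was computed for the same t
        if self._initial_population is None or self._initial_population[0] != t:
            self._initial_population = (t, self._compute_population(t))
        return self._initial_population[1]


sampler = Sampler()
assert sampler._get_initial_population(0) == "population at t=0"
# a later t invalidates the stale cache instead of returning the t=0 population
assert sampler._get_initial_population(3) == "population at t=3"
```

With the cache keyed by `t`, a continued run would recompute from the latest population automatically, without the manual reset described below.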
Quick fix:
To work around the issue for the specific application, setting `abc._initial_population = None` prior to each `abc.run` call solves the problem. It would be nice if this could be done more elegantly, though.
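The workaround can be sketched in the loop from above. Since a full pyABC run is not reproduced here, `AbcStub` is a hypothetical stand-in that only mimics the cached `_initial_population` attribute:

```python
class AbcStub:
    """Hypothetical stand-in mimicking pyABC's cached _initial_population."""

    def __init__(self):
        self._initial_population = None
        self.t = 0

    def run(self, max_nr_populations=1):
        # recompute the "initial" population only if the cache was cleared
        if self._initial_population is None:
            self._initial_population = f"population at t={self.t}"
        self.t += max_nr_populations
        return self._initial_population


abc = AbcStub()
results = []
for _ in range(3):
    # clear the cache so epsilon is based on the most recent population
    abc._initial_population = None
    results.append(abc.run(max_nr_populations=1))

# each generation now sees its own population, not the very first one
assert results == ["population at t=0", "population at t=1", "population at t=2"]
```

Without the reset line, every iteration would return `"population at t=0"`, which is exactly the behavior reported in this issue.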
Issue Analytics
- State:
- Created 2 years ago
- Comments: 7 (4 by maintainers)

Thanks 😃 I will try to suggest a modification tomorrow, then it should be on PyPI some time next week at the latest, if that works.
Thank you @yannikschaelte! This looks perfect!