Parallelism via function vectorization?
Hi @yannikschaelte - I’m really enjoying using pyABC. Thanks for putting together such a powerful and easy-to-use tool!
A common scenario I find myself in is having a very fast function that can be passed vector arguments. A simple example is a plain NumPy function; a more realistic case is a trained Gaussian process regression surrogate. In either case, passing several hundred points is almost as fast as a single-point call, since Python function-call overhead dominates.
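To illustrate the point about call overhead, here is a minimal sketch (the model function and its shapes are my own hypothetical example, not from pyABC): a single NumPy call evaluates one parameter vector or several hundred at essentially the same cost.

```python
import numpy as np

def model(theta):
    """Hypothetical vectorized model: theta has shape (n, d).

    One NumPy call evaluates all n parameter vectors at once, so the
    Python function-call overhead is paid only once per batch.
    """
    theta = np.atleast_2d(theta)
    return np.sum(np.sin(theta) ** 2, axis=1)

# One point vs. several hundred points: each is a single function call.
one = model(np.zeros((1, 3)))     # shape (1,)
many = model(np.zeros((500, 3)))  # shape (500,)
```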
Is there an easy way to use a sampler to make vectorized calls? I poked around at MappingSampler, but it seems like one would need to dig into the function being mapped, since the list being passed only controls the number of samples drawn (as far as I can tell).
I’d be happy to help implement this feature, but I want to make sure that it’s not already in place (or easy to hack). Thanks!
Issue Analytics
- Created: 3 years ago
- Comments: 17 (10 by maintainers)

Ah, no problem! Out of interest: Does the pymc3 MCMC implementation provide vectorization, or is it just parallelized?
But if this “guessing” is sufficiently sophisticated, this could work very well. Implementation-wise, this would mean a slight API extension from simulate_one to simulate_many (m: int), and then defining a new sampler pypesto.sampler.VectorizedSomethingSampler.
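A rough sketch of what such a sampler could look like, under my own assumptions: the class name VectorizedSampler, the Sample dataclass, and the toy simulate_many simulator below are all hypothetical stand-ins, not pyABC's actual API; only the simulate_one-to-simulate_many idea comes from the comment above.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:
    parameter: float
    accepted: bool

def simulate_many(m):
    """Toy vectorized simulator (hypothetical): draw m parameters in
    one vectorized call and accept those within a distance threshold."""
    theta = np.random.uniform(0, 1, size=m)
    dist = np.abs(theta - 0.5)  # one vectorized distance computation
    return [Sample(t, d < 0.2) for t, d in zip(theta, dist)]

class VectorizedSampler:
    """Hypothetical sketch: instead of calling simulate_one() n times,
    request whole batches via simulate_many(m), so a vectorized model
    pays the Python call overhead only once per batch."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size

    def sample_until_n_accepted(self, n, simulate_many):
        accepted = []
        while len(accepted) < n:
            # A real implementation would guess the batch size from the
            # running acceptance rate; here we use a fixed batch.
            batch = simulate_many(self.batch_size)
            accepted.extend(s for s in batch if s.accepted)
        return accepted[:n]

sampler = VectorizedSampler(batch_size=200)
pop = sampler.sample_until_n_accepted(50, simulate_many)
```

The key design question is the batch-size "guessing": too small wastes the vectorization benefit, too large wastes simulations once enough particles are accepted.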
I hope this is consistent with what I wrote earlier this year 😆