
Rename from_pymc3(prior) arg to prior_predictive and add prior arg


Tell us about it

From reading the docs it is unclear whether prior means prior_predictive or prior samples: https://arviz-devs.github.io/arviz/generated/arviz.from_pymc3.html#arviz.from_pymc3

From the tests, though, it looks like this argument should be prior predictive: https://github.com/arviz-devs/arviz/blob/master/arviz/tests/test_data.py#L640

The proposal is to rename prior to prior_predictive and add a way for prior samples to be stored in az.InferenceData.
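
To make the ambiguity concrete: pm.sample_prior_predictive returns draws for both the latent variables and the observed variables, yet the whole dict is passed through the single prior argument. A minimal sketch with a made-up toy model (mu, sigma, y and the data are illustrative, not from the issue):

```python
import numpy as np
import pymc3 as pm
import arviz as az

# Hypothetical toy data/model, for illustration only.
y_obs = np.random.normal(loc=0.0, scale=1.0, size=100)

with pm.Model() as model:
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    y = pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)

    # Dict of draws for BOTH the latent variables ("mu", "sigma") and the
    # observed variable ("y") -- i.e. prior AND prior predictive samples.
    prior_draws = pm.sample_prior_predictive(samples=500)
    trace = pm.sample(500, tune=500)

# Everything goes through a single `prior=` argument, so it is not obvious
# which of the two meanings the docs intend.
idata = az.from_pymc3(trace=trace, prior=prior_draws)
```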

Thoughts on implementation

Rename the argument to be specific, like PyStan's method: https://arviz-devs.github.io/arviz/generated/arviz.from_pystan.html#arviz.from_pystan
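
Continuing the toy model above, a hedged sketch of what the renamed interface could look like, with explicit groups as in from_pystan; the prior_predictive keyword here is the proposal, not the existing from_pymc3 signature:

```python
# Hypothetical proposed call -- `prior_predictive=` does not exist yet; this is
# what the issue proposes, modelled on from_pystan's explicit group arguments.
prior_draws = pm.sample_prior_predictive(samples=500)

idata = az.from_pymc3(
    trace=trace,
    prior={name: prior_draws[name] for name in ("mu", "sigma")},    # latent draws
    prior_predictive={name: prior_draws[name] for name in ("y",)},  # observed draws
)
```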

@ColCarroll @aloctavodia Let me know what you think and I can go ahead and do this

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 20 (19 by maintainers)

Top GitHub Comments

1 reaction
rpgoldman commented, Jan 8, 2020

The thing my PR fixes is that you can’t determine what is constant and what is not based on the trace that sample_prior_predictive returns (or really, the return of any PyMC3 sampling function). The reason is that PyMC3 allows the user to control what variables do and do not appear in the traces that the sampler returns. That is why, as far as I can tell, we must give from_pymc3 the PyMC3 model as an argument: without access to the model, the translator simply doesn’t have enough information to allocate variables to groups. Note also that using the backdoor from the posterior trace doesn’t generally work, either, since if you are doing predictions out of sample, you must create a new PyMC3 model and use the old trace with the new model – meaning that the model cached in the posterior trace is the wrong model.

That was kind of my designer’s notes for the Pull Request!

0 reactions
rpgoldman commented, Mar 2, 2020

I agree: we can look at the model and the trace(s) and figure out how to allocate the information between the two groups ourselves, instead of asking the users to.
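
A rough sketch of that allocation idea (split_prior_dict is an illustrative helper, not ArviZ code): given the PyMC3 model, draws for observed random variables go to the prior_predictive group and everything else to the prior group.

```python
def split_prior_dict(prior_dict, model):
    """Split sample_prior_predictive output into prior and prior predictive
    groups using the model, instead of asking the user to do it."""
    observed_names = {rv.name for rv in model.observed_RVs}
    prior = {k: v for k, v in prior_dict.items() if k not in observed_names}
    prior_predictive = {k: v for k, v in prior_dict.items() if k in observed_names}
    return prior, prior_predictive

# With the toy model above: prior holds "mu"/"sigma", prior_predictive holds "y".
prior_group, prior_predictive_group = split_prior_dict(prior_draws, model)
```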


