Feature Request: Nested priors for common terms
Hi,
I’m trying to play around with sparse models and would like to place a NormalMixture prior with Dirichlet weights on the common terms. The code would look like:
```python
import bambi as bmb

model = bmb.Model("response ~ 1 + d", data)
w_prior = bmb.Prior("Dirichlet", a=[1.0, 1.0])
subject_prior = bmb.Prior("NormalMixture", w=w_prior, mu=[0, 0], sigma=[0.1, 4.5])
model.set_priors({"d": subject_prior})
model.build()
```
It crashes in Aesara with `NotImplementedError: Cannot convert Dirichlet(a: [1.0 1.0]) to a tensor variable.`
I think it would be a fairly simple fix to build the priors recursively for `CommonTerm` too, as is already done for `GroupSpecificTerm`. (Apologies for this not being a pull request.)
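For intuition, the prior being requested is a spike-and-slab style mixture: conditional on the Dirichlet-distributed weights `w`, each coefficient gets density `w[0]·N(0, 0.1) + w[1]·N(0, 4.5)`. A minimal plain-Python sketch of that conditional density (just the math, not Bambi/PyMC API):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a Normal(mu, sigma) distribution evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, w, mus, sigmas):
    """Density of a finite normal mixture with weights w, evaluated at x."""
    return sum(wi * normal_pdf(x, m, s) for wi, m, s in zip(w, mus, sigmas))

# With equal weights, the density at zero is dominated by the narrow
# "spike" component (sigma = 0.1), which is what encourages sparsity;
# far from zero only the wide "slab" (sigma = 4.5) contributes.
w = [0.5, 0.5]
print(mixture_pdf(0.0, w, [0.0, 0.0], [0.1, 4.5]))  # ≈ 2.039
print(mixture_pdf(3.0, w, [0.0, 0.0], [0.1, 4.5]))  # slab dominates here
```

Placing a Dirichlet prior on `w` then lets the data decide how much mass sits in the spike versus the slab.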
Issue Analytics
- Created a year ago
- Comments: 7
That seems to work, also with a formula like

```
response ~ 0 + d0 + d1 + d2 + d3 + d4 + d5 + d6 + d7 + d8 + d9
```

(I had never seen the `c(d0, d1, ...)` syntax before.) I don’t have proper references for this apart from the blog post I already linked, https://betanalpha.github.io/assets/case_studies/modeling_sparsity.html. I guess sparsity is not a thing with Bayes, when you can just sum over probability-zero events.
I’m leaving this open because I don’t have time right now to finish off this PR (and to understand how it works, too!). But it is definitely very interesting.
Edit

For the record, `c(d0, d1, ..., d9)` is not the same as `d0 + d1 + ... + d9`. The first creates a single model term which is ten-dimensional; the second creates ten model terms, each of which is one-dimensional.
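To make that distinction concrete, here is a small plain-Python illustration (hypothetical data, not Bambi internals) of how the two formulas partition the same ten predictor columns into model terms:

```python
# Ten predictor columns d0..d9 over five observations (made-up values).
n_obs, n_cols = 5, 10
data = [[float(i * n_cols + j) for j in range(n_cols)] for i in range(n_obs)]

# c(d0, ..., d9): ONE term whose value is ten-dimensional,
# i.e. a single (n_obs x 10) design block.
c_term = data

# d0 + ... + d9: TEN terms, each one-dimensional,
# i.e. ten separate (n_obs x 1) design blocks.
separate_terms = [[[row[j]] for row in data] for j in range(n_cols)]

print(len(c_term[0]))             # columns in the single term: 10
print(len(separate_terms))        # number of terms: 10
print(len(separate_terms[0][0]))  # columns in each term: 1
```

This matters for priors: the single ten-dimensional term takes one (multivariate) prior, while the ten separate terms each take their own prior.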