Log likelihood of covariance incorrect?
First of all, I apologize if it's just me making a mistake here. However, I can't seem to reconcile my understanding of the Gaussian likelihood function with the one in `empirical_covariance.py`.
In particular:
```python
log_likelihood_ = - np.sum(emp_cov * precision) + fast_logdet(precision)
```
The term `emp_cov * precision` should be multiplied by p. In the multivariate Gaussian density, it is the scatter matrix, not the covariance matrix, whose Frobenius inner product with the precision is being computed.
Am I missing something here or is this incorrect?
EDIT: Here’s a source that seems to confirm my interpretation (slide 34).
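For reference, one way to reconcile the two views (a standard identity, written out here to make the factor-of-n bookkeeping explicit): for n i.i.d. zero-mean samples with empirical covariance S, scatter matrix nS, and precision Θ = Σ⁻¹, the total log likelihood is

```latex
\ell(\Theta)
= -\tfrac{1}{2}\operatorname{tr}\!\bigl(nS\,\Theta\bigr)
  + \tfrac{n}{2}\log\det\Theta
  - \tfrac{np}{2}\log 2\pi
= \tfrac{n}{2}\Bigl(-\operatorname{tr}(S\,\Theta) + \log\det\Theta - p\log 2\pi\Bigr)
```

So the scatter matrix appears in the *total* likelihood, while dividing by n leaves the empirical covariance in the *per-sample mean*, which is what the quoted line computes (the constant term and the overall factor of 1/2 are applied elsewhere in the scikit-learn function).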
Issue Analytics
- Created: 4 years ago
- Comments: 9 (7 by maintainers)
Top GitHub Comments
For me there is no n (a.k.a. n_samples in our formula), as it's the mean log likelihood.
@ashaffer I don't know where the p would come from. You can do a numerical test if you want to confirm: simulate data that match the model and check that the log likelihood tends to 0.
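A minimal sketch of such a numerical check. Instead of checking a limit, this compares the mean-log-likelihood formula directly against an average `scipy.stats` log pdf over simulated data; the sample size, seed, and covariance construction are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, p = 50_000, 3

# Simulate zero-mean Gaussian data with a known covariance.
A = rng.standard_normal((p, p))
cov = A @ A.T + p * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Empirical covariance (zero mean assumed) and the true precision.
emp_cov = X.T @ X / n
precision = np.linalg.inv(cov)

# Mean log likelihood via the formula in empirical_covariance.py
# (the line quoted in the issue, plus the -p*log(2*pi) and /2 steps
# that the surrounding scikit-learn function applies).
_, logdet = np.linalg.slogdet(precision)
formula = (-np.sum(emp_cov * precision) + logdet - p * np.log(2 * np.pi)) / 2

# Direct evaluation: average the log pdf over the sample.
direct = multivariate_normal(np.zeros(p), cov).logpdf(X).mean()

# The two agree up to floating point: no extra factor of n or p.
print(abs(formula - direct))
```

If the formula were missing a factor of n (or p), the two numbers would differ by orders of magnitude rather than agreeing to machine precision.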