More support for non-normal maximum likelihood estimation #974
Replies: 2 comments
-
Actually, the easiest way to get this functionality right now without any changes to …
-
@jagerber48 Well, lmfit supports using different functions for "reducing" a residual array to a scalar. The default is "sum of squares", but "negentropy" and "neglogcauchy" are available as built-in options, or a custom callable can be used. I'm unsure what to make of your concern about a factor of 2. How is a scale factor an issue?
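For reference, using a built-in reduction with the current API looks something like this (a minimal sketch; the straight-line model, the data, and the parameter names are made up for illustration):

```python
import numpy as np
import lmfit

# Made-up data: a straight line with Gaussian noise plus a few outliers.
np.random.seed(0)
x = np.linspace(0, 10, 101)
y = 3.0 * x + 1.5 + np.random.normal(scale=0.5, size=x.size)
y[::20] += 8.0  # outliers that would skew an ordinary least-squares fit

def residual(params, x, y):
    """Return the residual array; reduce_fcn turns it into the scalar cost."""
    return params['slope'] * x + params['intercept'] - y

params = lmfit.Parameters()
params.add('slope', value=1.0)
params.add('intercept', value=0.0)

# A scalar minimizer plus the built-in 'neglogcauchy' reduction gives a robust
# fit based on a Cauchy (Lorentzian) likelihood instead of plain least squares.
result = lmfit.minimize(residual, params, args=(x, y),
                        method='nelder', reduce_fcn='neglogcauchy')
print(lmfit.fit_report(result))

# reduce_fcn can also be any callable mapping the residual array to a scalar:
result_l1 = lmfit.minimize(residual, params, args=(x, y),
                           method='nelder', reduce_fcn=lambda r: np.sum(np.abs(r)))
```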
-
This "idea" may actually be a question if it turns out there are just details about lmfit that I'm missing. This post is motivated by this stack exchange question which Newville and myself have provided answers to.
Historically, I think lmfit arose out of the need for a more unified API for the scipy least-squares minimization routines, so much of the code is oriented towards least-squares fitting. However, least-squares minimization is just a specific type of maximum likelihood estimation. As I point out in my answer to the question, if the data points are normally distributed, the sum-of-squares-of-residuals cost function is equal to -2 times the log-likelihood function (up to an additive constant), so minimizing the sum of squares of the residuals corresponds to maximizing the log-likelihood.
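To spell out that relationship: for independent, normally distributed data with known uncertainties $\sigma_i$ and model $f(x_i;\theta)$,

$$
\ln L(\theta) = -\frac{1}{2}\sum_i \left(\frac{y_i - f(x_i;\theta)}{\sigma_i}\right)^{2} - \sum_i \ln\!\left(\sigma_i \sqrt{2\pi}\right)
\qquad\Longrightarrow\qquad
-2\ln L(\theta) = \sum_i \left(\frac{y_i - f(x_i;\theta)}{\sigma_i}\right)^{2} + \text{const},
$$

so the maximum of $\ln L$ and the minimum of the weighted sum of squares occur at the same $\theta$.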
The idea I'm proposing here is to bake this realization about the relationship between maximum likelihood estimation (MLE) and least squares estimation (LSE) into the `lmfit` source code in such a way that `lmfit` becomes more user friendly for users wanting to do maximum likelihood estimation for non-normally distributed data.
Right now, if I wanted to use `lmfit` to do maximum likelihood estimation on non-normally distributed data, I would do the following: write an objective function that returns the negative log-likelihood, minimize it with one of the scalar minimizers, and then correct the quantities reported by `lmfit` by hand: scale them by 1/2 and, I think, also multiply all standard errors etc. by a factor of 1/sqrt(2).
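As a concrete sketch of that workflow (the exponential-decay model, the simulated Poisson counts, and the parameter names below are all made up for illustration; only the lmfit and scipy calls are real API):

```python
import numpy as np
from scipy.stats import poisson
import lmfit

# Made-up Poisson-distributed counts from a decaying signal on a flat background.
np.random.seed(1)
t = np.linspace(0, 5, 50)
counts = np.random.poisson(20.0 * np.exp(-t / 1.3) + 2.0)

def neg2_loglike(params, t, counts):
    """Scalar objective: -2 * log-likelihood for Poisson-distributed counts."""
    mu = params['amp'] * np.exp(-t / params['tau']) + params['bkg']
    return -2.0 * np.sum(poisson.logpmf(counts, mu))

params = lmfit.Parameters()
params.add('amp', value=10.0, min=0)
params.add('tau', value=1.0, min=0.01)
params.add('bkg', value=1.0, min=0)

# A scalar objective requires one of the scalar minimizers (not leastsq).
result = lmfit.minimize(neg2_loglike, params, args=(t, counts), method='nelder')
print(lmfit.fit_report(result))
```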
Maybe the easiest way to make `lmfit` more user friendly for non-normal log-likelihoods would be to create an MLE mode in `lmfit` where the user supplies a log-likelihood function directly and `lmfit` handles the minus sign, the factor of 2, and any corrections to the reported statistics internally.
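One way to picture what such a mode would do, using only the existing API, is a thin wrapper that applies the minus sign and factor of 2 on the user's behalf (a rough, hypothetical sketch; `minimize_loglikelihood` is not an existing lmfit function):

```python
import lmfit

def minimize_loglikelihood(loglike, params, **kws):
    """Hypothetical convenience wrapper: accept a log-likelihood function and
    hand lmfit the corresponding -2*log(L) scalar cost. A real 'MLE mode' could
    do this, plus any corrections to the reported statistics, internally."""
    def neg2_loglike(pars, *args, **kwargs):
        return -2.0 * loglike(pars, *args, **kwargs)
    kws.setdefault('method', 'nelder')  # a scalar cost needs a scalar minimizer
    return lmfit.minimize(neg2_loglike, params, **kws)
```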
Additionally, for user convenience, some common likelihood cost-function calculators could be provided. For example, factory functions that:

- take the data and return a log-likelihood function of the parameters `theta` using `scipy.stats.poisson.logpmf`
- take the data and return a log-likelihood function of the parameters `theta` using `scipy.stats.binom.logpmf`
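A sketch of what such factory functions could look like (the names and signatures here are hypothetical, not existing lmfit API; only `scipy.stats.poisson.logpmf` and `scipy.stats.binom.logpmf` are real):

```python
import numpy as np
from scipy.stats import binom, poisson

def poisson_loglikelihood(model, data):
    """Hypothetical factory: given model(params, *args) returning expected counts,
    build a log-likelihood function of the parameters from poisson.logpmf."""
    def loglike(params, *args):
        mu = model(params, *args)
        return np.sum(poisson.logpmf(data, mu))
    return loglike

def binomial_loglikelihood(model, data, n_trials):
    """Hypothetical factory for binomial data: model(params, *args) should return
    the success probability; the log-likelihood uses binom.logpmf."""
    def loglike(params, *args):
        p = model(params, *args)
        return np.sum(binom.logpmf(data, n_trials, p))
    return loglike
```

Functions like these could be passed straight to an MLE mode (or, with the minus sign and factor of 2 applied by hand, to `lmfit.minimize` with a scalar method) without the user ever touching the likelihood formulas.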
These changes are, of course, not strictly necessary, because users can just write these residual functions themselves (just remember to include a negative sign, because `lmfit` only does minimization and won't ever provide the negative sign on its own).

As a note, since least squares estimation is a specific case of maximum likelihood estimation, if the MLE machinery described above were set up, it could be possible, under the hood, to reconfigure the least-squares machinery in `lmfit` to take advantage of the maximum likelihood code. It would essentially amount to redirecting the least-squares cost function to include a factor of -2 so that it becomes the maximum likelihood cost function.
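As a quick numerical sanity check of that equivalence (made-up straight-line data with a known noise level; the two fits should land on essentially the same parameter values):

```python
import numpy as np
import lmfit

# Made-up data: straight line with Gaussian noise of known sigma.
np.random.seed(2)
x = np.linspace(0, 10, 200)
sigma = 0.3
y = 2.0 * x + 0.7 + np.random.normal(scale=sigma, size=x.size)

def residual(params, x, y):
    """Weighted residual array for the usual least-squares path."""
    return (params['m'] * x + params['b'] - y) / sigma

def neg2_loglike(params, x, y):
    """Gaussian -2*log(L) is the weighted sum of squares plus a constant,
    so minimizing it is the same problem as least squares."""
    return np.sum(residual(params, x, y) ** 2)

params = lmfit.Parameters()
params.add('m', value=1.0)
params.add('b', value=0.0)

fit_lsq = lmfit.minimize(residual, params, args=(x, y))                      # leastsq
fit_mle = lmfit.minimize(neg2_loglike, params, args=(x, y), method='nelder')
print(fit_lsq.params['m'].value, fit_mle.params['m'].value)  # should agree closely
```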