How do I use autograd to do maximum likelihood estimation?

Basically, I am doing MLE on a 1024-dimensional normal vector with a Matérn covariance matrix. I want to estimate the parameters of the Matérn covariance function, but when I call autograd.grad(negloglik) and then plug in a value for the first parameter, I get the following error:

"ufunc 'kv' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''"

The function kv is the modified Bessel function of the second kind of order v. It is imported from scipy.
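For anyone hitting the same wall: autograd can only differentiate functions built from its own wrapped numpy, and scipy.special.kv has no derivative registered with it. One workaround (a sketch, not the only fix) is to wrap kv as an autograd primitive and register its x-derivative by hand, using K_v'(x) = -(K_{v-1}(x) + K_{v+1}(x))/2:

import autograd.numpy as np
from autograd.extend import primitive, defvjp
import scipy.special

# wrap scipy's kv so autograd can trace through it
kv = primitive(scipy.special.kv)

# register d/dx K_v(x) = -(K_{v-1}(x) + K_{v+1}(x))/2; the None means no
# gradient w.r.t. the order v (it has no simple closed form)
defvjp(kv,
       None,
       lambda ans, v, x: lambda g: g * -0.5 * (scipy.special.kv(v - 1, x) + scipy.special.kv(v + 1, x)))

negloglik then has to be written against this wrapped kv (and autograd.numpy) so that autograd.grad can reach it.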

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/RaunchyAppleSauce
πŸ“…︎ Nov 09 2021
🚨︎ report
confusion between loss functions, maximum likelihood estimation, linear regression, and linear classification

Hi guys... so, I know what a loss function (like mean squared error) is, and if I understand correctly, we use a linear regression function to classify data in a problem of linear classification... but where does maximum likelihood estimation fit in? Does it take the place of the mean squared error function?
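One way to see where it fits (a sketch): assume a regression model with Gaussian noise; maximizing the likelihood is then exactly minimizing the mean squared error, so MLE does not replace the loss, it is where the loss comes from:

y_i = w^T x_i + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2)

-\log L(w) = \frac{n}{2}\log(2\pi\sigma^2) + \frac{1}{2\sigma^2}\sum_i (y_i - w^T x_i)^2

\arg\max_w L(w) = \arg\min_w \sum_i (y_i - w^T x_i)^2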

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Ok-War-9040
πŸ“…︎ Oct 23 2021
🚨︎ report
Suppose that you know that a random variable has density (1/α)e^(-αx) for some value α. You obtain a single observation from the random variable, which is the number 3. What is the maximum likelihood estimate for α? Is my solution correct?

I have a task which goes as follows:

>Suppose that you know that a random variable has density (1/α)e^(-αx) for some value α. You obtain a single observation from the random variable, which is the number 3. What is the maximum likelihood estimate for α?

Here's my solution. Is it correct?

Thank you in advance!
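For reference, a worked sketch: as written, (1/α)e^(-αx) does not integrate to 1 on (0, ∞), so the exercise presumably means the exponential density with mean α, f(x) = (1/α)e^(-x/α). With the single observation x = 3:

\ell(\alpha) = \log f(3) = -\log\alpha - 3/\alpha

\ell'(\alpha) = -1/\alpha + 3/\alpha^2 = 0 \implies \hat{\alpha} = 3

(Under the rate parameterization f(x) = αe^(-αx), the same steps give \hat{\alpha} = 1/3.)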

πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Jun 23 2021
🚨︎ report
[Q] Any sources for implementing general functional relationship estimation by maximum likelihood (FREML)?

Hello you beautiful people! I'm trying to expand my engineering toolset. Robust parameter estimation that accounts for errors in both inputs and outputs of arbitrary functions is not easy to find in the wild, but it is applicable to many situations in practical engineering life. There is a paper on this from the Royal Society of Chemistry, but its treatment is too dense for my poor engineering applied-math skills. Does anyone have a reference or a scheme I could use to implement it in Excel, using Solver to maximize the log-likelihood?
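Not FREML itself, but a closely related errors-in-variables fit that could serve as a sanity check: SciPy's orthogonal distance regression accepts stated uncertainties on both inputs and outputs. A minimal sketch (the model function and data below are made up):

import numpy as np
from scipy.odr import ODR, Model, RealData

def f(beta, x):                              # arbitrary model y = f(x; beta)
    return beta[0] * np.exp(beta[1] * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x) + rng.normal(0.0, 0.1, 20)

data = RealData(x, y, sx=0.05, sy=0.1)       # uncertainties in inputs and outputs
fit = ODR(data, Model(f), beta0=[1.0, 1.0]).run()
print(fit.beta, fit.sd_beta)                 # estimates and standard errors

As I understand it, the FREML likelihood additionally treats the unknown true inputs as fit parameters, which is part of what makes a spreadsheet implementation awkward.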

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/Engine_engineer
πŸ“…︎ Dec 17 2021
🚨︎ report
ELI5 Targeted Maximum Likelihood Estimation (TMLE) [Q]

I'm not a statistician, but I'm trying to suggest that maybe TMLE is better than propensity score matching for a retrospective observational cohort study. I read that it is, but I'm not really understanding how TMLE (with machine learning) works. Lots of stats jargon and stuff. Hope one of y'all experts can help me. Ty in advance!

πŸ‘︎ 22
πŸ’¬︎
πŸ‘€︎ u/afatamatai
πŸ“…︎ Sep 20 2020
🚨︎ report
Does anyone know of a simple explanation of the Probit Model and Maximum likelihood estimation in general?

For my Econometrics project I am doing a probit model, but part of the report requires me to explain the economic model of my project. I don't understand at all what I am meant to put here with respect to the probit model or MLE. This is my first research project, so apologies if I am missing something simple here.

What should I put down for this section?
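For what it's worth, a sketch of the statistical side (the "economic model" part is usually the theory motivating your regressors): the probit model posits a probability driven by a linear index through the standard normal CDF \Phi,

P(y_i = 1 \mid x_i) = \Phi(x_i'\beta)

and MLE chooses the coefficients that make the observed outcomes most probable:

\hat{\beta} = \arg\max_\beta \sum_i \left[ y_i \log\Phi(x_i'\beta) + (1 - y_i)\log(1 - \Phi(x_i'\beta)) \right]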

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/dowckv
πŸ“…︎ Dec 21 2021
🚨︎ report
Which is the maximum likelihood estimate, OLS or the analytic method of linear regression?

Two widely used methods for finding linear regression parameters are ordinary least squares (OLS) and the analytic (closed-form) method. Of these two, which is considered a maximum likelihood estimate (MLE)? Are they both MLEs?

Can we say that any method which estimates parameters is an MLE?
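For what it's worth, a quick numerical sketch of the usual answer: the analytic (normal-equations) solution and a numerical maximizer of the Gaussian likelihood land on the same coefficients, because OLS is exactly the MLE when the errors are assumed Gaussian (the toy data below is made up):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=200)

beta_analytic = np.linalg.solve(X.T @ X, X.T @ y)   # closed-form normal equations

nll = lambda b: np.sum((y - X @ b) ** 2)            # Gaussian neg. log-likelihood up to constants
beta_mle = minimize(nll, x0=np.zeros(2)).x          # numerical MLE

print(beta_analytic, beta_mle)                      # agree to optimizer tolerance

So "OLS vs. analytic" is not two estimators: the analytic formula is just the closed-form solution of the OLS problem. And no, not every estimator is an MLE; an estimator is only an MLE relative to an assumed likelihood.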

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/begooboi
πŸ“…︎ Jan 20 2021
🚨︎ report
Maximum Likelihood Estimation

Hi, could someone please give some advice on this code? Why is it not working?

My dataset: a fair coin that was tossed 30 times first (p=0.5), then a switch to a biased coin tossed another 70 times (p=0.2):

coin_toss <- c(rbinom(30, 1, 0.5), rbinom(70, 1, 0.2))

I want to create the log-likelihood function so I can optimise it over my dataset with optim():

mix_2bin_nll <- function(prob, obs, switch){
  # optim() minimises, so return the NEGATIVE log-likelihood; the parameter
  # vector goes in the first formal argument, because optim() passes par there
  -sum(log(switch*dbinom(obs, size=1, prob=prob[1]) + (1-switch)*dbinom(obs, size=1, prob=prob[2])))
}

# bound the probabilities inside (0,1) so dbinom() never sees an invalid prob
mle_100 <- optim(par=c(0.51, 0.24), fn=mix_2bin_nll, switch=0.3, obs=coin_toss,
                 method="L-BFGS-B", lower=1e-6, upper=1-1e-6)

mle_100$par  # optim() returns a list, so summary(mle_100) is uninformative; inspect $par

My problems:

  1. Lots of warnings, such as: "Warning message: In dbinom(obs, size = 1, prob = prob[2]) : NaNs produced"
  2. The outputs are non-numeric.

Thank you!

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/MattyPoppetMoew
πŸ“…︎ Nov 06 2021
🚨︎ report
Why do we need to use Maximum Likelihood Estimation in Logistic Regression?

Why can't we use the sum of squared errors (least squares)?
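A sketch of the standard answer: for a Bernoulli outcome with p_i = \sigma(x_i'\beta) = 1/(1 + e^{-x_i'\beta}), the MLE minimizes the cross-entropy

-\log L(\beta) = -\sum_i \left[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \right]

which is convex in \beta, while the squared-error loss \sum_i (y_i - p_i)^2 composed with the sigmoid is non-convex, and, because the outcome is not Gaussian, its minimizer is not the maximum likelihood estimator.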

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/faithecup
πŸ“…︎ Oct 15 2019
🚨︎ report
[Econometrics/Statistics: Regression] HELP!! Does anybody here have any references or recommended readings on Non-Linear Multiple Regression Maximum Likelihood Estimation?

I am absolutely DESPERATE. I've been looking for HOURS for how to calculate MLE for non-linear multiple regression in 4 ways (manual calculation, Excel, EViews, SPSS). If you know any books, websites, or videos that could help with this, please do share them. You don't even have to explain it to me. I'm willing to do the work. I just want to know if there are any suggested readings or viewings that could help me learn it on my own.

Thanks everyone and have a great day :)

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/rojhin213
πŸ“…︎ Nov 10 2021
🚨︎ report
"Analysis of Maximum Likelihood Parameter Estimates" in R

Is it possible to do "Analysis of Maximum Likelihood Parameter Estimates" in R like in SAS?

I did logistic regression on both and they produce similar coefficients.

https://preview.redd.it/aaowexqc39u41.png?width=1920&format=png&auto=webp&s=4c0a4abbc8780cf3f5d6fbc623824dc5eee6e88d

https://preview.redd.it/ioyvkwu439u41.png?width=1920&format=png&auto=webp&s=e9be23c8bd6995283c62da69637071467ec28b95

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/intern524
πŸ“…︎ Apr 21 2020
🚨︎ report
Could someone give a hand finding the Maximum Likelihood Estimate of mu for the normal distribution where sd = mu?
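A worked sketch for anyone searching: with X_1, ..., X_n i.i.d. N(\mu, \mu^2) and \mu > 0,

\ell(\mu) = -n\log\mu - \frac{1}{2\mu^2}\sum_i (x_i - \mu)^2 + \text{const}

Setting \ell'(\mu) = 0 and multiplying through by \mu^3 reduces to the quadratic

n\mu^2 + \mu\sum_i x_i - \sum_i x_i^2 = 0

whose positive root is the MLE:

\hat{\mu} = \frac{-\sum_i x_i + \sqrt{(\sum_i x_i)^2 + 4n\sum_i x_i^2}}{2n}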
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/dunkerz69
πŸ“…︎ Nov 22 2019
🚨︎ report
[P] Maximum Likelihood Estimation in Jax

I created the following Jupyter notebook that illustrates maximum likelihood estimation in Jax:

Maximum Likelihood in Jax

Any questions, comments, or corrections are appreciated. Also, any advice on which other forums might be interested would be welcome.

Thanks!
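For readers who just want the flavor, a minimal illustrative sketch (not the linked notebook): write the negative log-likelihood, let jax.grad differentiate it, and descend.

import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

# fit a normal's mean and log-std by gradient descent on the mean NLL
def negloglik(params, x):
    mu, log_sigma = params
    return -jnp.mean(norm.logpdf(x, mu, jnp.exp(log_sigma)))

x = 3.0 + 2.0 * jax.random.normal(jax.random.PRNGKey(0), (1000,))  # true mu=3, sigma=2

params = jnp.zeros(2)
grad_fn = jax.jit(jax.grad(negloglik))
for _ in range(2000):
    params = params - 0.1 * grad_fn(params, x)

print(params[0], jnp.exp(params[1]))  # roughly (3, 2)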

πŸ‘︎ 8
πŸ’¬︎
πŸ“…︎ Jul 24 2021
🚨︎ report
ELI5: What is the Maximum Likelihood Estimate (MLE) and how do you use it?

We are learning this concept in a statistical programming course for research methods, and the professor couldn't be less clear if he tried. Can someone explain the maximum likelihood estimate (MLE) like I'm five?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/GwynethAnne
πŸ“…︎ Sep 19 2019
🚨︎ report
Where is a good resource to learn Marginal maximum likelihood estimation with EM (MMLE with EM)?

I am still quite a beginner in statistics, and I do not have a good command of calculus and linear algebra. I would prefer a resource that is relatively easy.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/sweett96
πŸ“…︎ Jul 29 2021
🚨︎ report
Maximum Likelihood Estimation

Hello! I haven’t used R in years and am having some difficulties getting back into it.

I was reading an article and noticed they had maximum likelihood estimates presented in the paper. I have watched a few videos on YouTube and read articles as well.

I find the gap between overly simplistic and very complex examples a bit daunting.

I have attached the information here for reference. Can anyone assist with how these numbers were produced for Anaplasma spp.?

If you could post your code, that would be appreciated as well. I have gotten stuck, since the only information provided is the number of positive pools, and I am uncertain how to calculate p.

If I am doing this entirely wrong, please correct me and enlighten me.

Example: 2/52 pools positive. Pathogen: Anaplasma spp. MIR 0.8% (95% CI 0.0, 1.8); MLE 0.8% (95% CI 0.1, 2.5).
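If every pool had the same size, p has a closed-form MLE; here is a sketch (the pool size m is an assumption on my part, since the post only gives the pool counts):

# Each pool tests positive with probability 1 - (1 - p)^m, so with k of n
# pools positive the MLE of the individual-level prevalence p is closed-form.
# Unequal pool sizes require maximizing the product of per-pool likelihoods
# numerically instead.
k, n, m = 2, 52, 5                  # positive pools, total pools, assumed pool size
p_hat = 1 - (1 - k / n) ** (1 / m)
print(p_hat)                        # ~0.0078; with m = 5 this lands near the reported 0.8%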

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Sep 03 2021
🚨︎ report
Is the Monte Carlo estimate the same as the Maximum Likelihood estimate?

Say I want to find out the probability of success p of an experiment. So I carry out the experiment n times and obtain k successes. Then the simple Monte Carlo estimate gives p = k/n.

The other way is using the likelihood function. Given the outcome, the probability of that outcome as a function of p \in [0,1] is my likelihood function L(p). The value of p at which L(p) is maximized is my maximum likelihood estimate of p.

My question is, are both of these estimates equal? And if so, is there a way of proving it?
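For the record, the two coincide here; a short derivation:

L(p) = \binom{n}{k} p^k (1 - p)^{n-k}

\frac{d}{dp}\log L(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0 \implies \hat{p} = \frac{k}{n}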

The third way to estimate is to take the expected value of p, treating p as a random variable with PDF c*L(p), where c is some scaling constant. Is this estimate better than the previous ones? Is there a way to prove this?

And a last question: how does one obtain a confidence interval for the estimate obtained from the likelihood function?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/TheFoolVoyager
πŸ“…︎ May 31 2018
🚨︎ report
Can someone help me with this exercise on maximum likelihood estimation?
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/HandHot498
πŸ“…︎ Jun 02 2021
🚨︎ report
[P] Maximum Likelihood Estimation in Jax /r/MachineLearning/commen…
πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Jul 24 2021
🚨︎ report
Why is least *squares* the maximum likelihood estimator? Or more generally, why not some other least ____ estimate?

I have felt unconvinced by the explanations I've read; when they were more mathematical, I didn't quite follow what they were doing well enough to grasp the significance of the proof.

Is it simply an artifact of defining variance as the squared error? I've read that the least squares method minimizes the variance of the errors.

If I instead defined the "variance" as a third absolute moment, Var(x) = E(|x - mu|^3), would the least absolute cubes estimate be the best linear estimator in terms of minimizing this "variance"?

What if I made my "variance" function simply the absolute deviation?

Or are my thoughts simply off completely, and these properties come out of the behavior of the Gaussian distribution?
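A sketch of the usual resolution: the exponent in the loss is dictated by the assumed noise density, not by how variance is defined. If the errors are i.i.d. with density f(\epsilon) \propto e^{-|\epsilon|^q / c}, then

-\log L = \text{const} + \frac{1}{c}\sum_i |y_i - \hat{y}_i|^q

Gaussian noise (q = 2) gives least squares; Laplace noise (q = 1) gives least absolute deviations. So least squares is the MLE precisely because of the Gaussian assumption, and a "least absolute cubes" fit would be the MLE under e^{-|\epsilon|^3}-type noise.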

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/mandelbrony
πŸ“…︎ Mar 23 2014
🚨︎ report
About Parametric Inference and Maximum Likelihood Estimation (description below)

What are the prerequisites for learning Parametric Inference and Maximum Likelihood Estimation, and in what order should I learn things so that no detail is vague along the way? There are a few lectures on YouTube, but they mostly went over my head, maybe because I skipped a few important chapters. I need some help.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/masteroffappets
πŸ“…︎ May 17 2021
🚨︎ report
[Q] Can you provide a dummies explanation of the purpose and the difference between maximum likelihood estimation and Bayesian estimation?

Hi, so I'm a 3rd-year CS student and I'm really struggling to understand this topic. I can't even understand the simple purpose of these two methods or what they mean. Can someone explain it to me in a very easy way? Even if it's superficial and not mathsy, I just want to know, at the top level of abstraction, what these two things are for and what their difference is.
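At the top level of abstraction (a sketch): both start from the likelihood p(data | \theta), the probability of your data for a candidate parameter \theta.

MLE: \hat{\theta} = \arg\max_\theta \, p(\text{data} \mid \theta), a single best parameter value.

Bayesian: p(\theta \mid \text{data}) \propto p(\text{data} \mid \theta)\, p(\theta), a whole distribution over parameter values, blending the likelihood with a prior p(\theta).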

πŸ‘︎ 46
πŸ’¬︎
πŸ‘€︎ u/Br3ikros
πŸ“…︎ Oct 27 2020
🚨︎ report
using maximum likelihood to estimate volatility in the Black-Scholes model

Hi. Can someone please show how one can use maximum likelihood theory to find an estimate for the volatility in the BS model? Wikipedia has an equation here, but I cannot derive it.
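A sketch of the standard derivation: under Black-Scholes dynamics, log-returns over intervals of length \Delta t are i.i.d. normal,

r_i = \log(S_{t_i} / S_{t_{i-1}}) \sim N\!\left((\mu - \sigma^2/2)\Delta t, \; \sigma^2 \Delta t\right)

so maximizing the normal likelihood gives the usual variance MLE rescaled by \Delta t:

\hat{\sigma}^2 = \frac{1}{n\,\Delta t}\sum_{i=1}^n (r_i - \bar{r})^2, \qquad \bar{r} = \frac{1}{n}\sum_i r_i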

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/FailedLifeForm
πŸ“…︎ Mar 25 2011
🚨︎ report
Help with maximum likelihood estimation

Hi, I have a problem regarding the MLE of a negative binomial distribution. It has the pmf f(z) = ((z-1) choose (j-1)) (0.75)^j (0.25)^(z-j), and I have to find the MLE of j. I wrote the log-likelihood function and tried to differentiate it, but I am not sure how to differentiate all of the terms. Specifically, I do not know how to differentiate ln((z-1) choose (j-1)). Could anyone help me out?
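One standard route (a sketch): write the binomial coefficient with gamma functions, so its log differentiates via the digamma function \psi = (\log\Gamma)':

\log\binom{z-1}{j-1} = \log\Gamma(z) - \log\Gamma(j) - \log\Gamma(z-j+1)

\frac{d}{dj}\log\binom{z-1}{j-1} = -\psi(j) + \psi(z-j+1)

(Strictly, j is an integer, so either treat it as continuous and round the stationary point, or compare L(j+1)/L(j) directly.)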

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/ok6579748
πŸ“…︎ Apr 30 2021
🚨︎ report
[Graduate Probability] Large deviation estimates of maximum likelihood estimators

I'm searching for a reference on large deviations of maximum likelihood estimators.

Specifically, I'm searching for a theorem that says that under some regularity conditions, the probability that the MLE deviates from its limit decreases exponentially with the distance.

I want the upper bound, and it would be even better if the upper bound is uniform in n.

I've searched in van der Vaart's book without success...

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Aftermath12345
πŸ“…︎ Aug 21 2017
🚨︎ report
[Q] Rewriting the iteratively reweighted least squares maximum likelihood estimator for the logit model

Does anyone have any recommendations for resources showing that the iteratively reweighted least squares maximum likelihood estimator for the logit model can be expressed as:

b = (X'VX)^(-1) X'V y*

where y* = Xb + V^(-1)(y - p)
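Not a reference, but the derivation is short (a sketch). For the logit log-likelihood \ell(b) = \sum_i [y_i x_i'b - \log(1 + e^{x_i'b})], the score and Hessian are

\nabla\ell = X'(y - p), \qquad \nabla^2\ell = -X'VX, \qquad V = \text{diag}(p_i(1 - p_i))

so one Newton step is

b_{new} = b + (X'VX)^{-1}X'(y - p) = (X'VX)^{-1}X'V\left[Xb + V^{-1}(y - p)\right] = (X'VX)^{-1}X'Vy^*

i.e. a weighted least-squares regression of the working response y* on X, re-solved at each iteration.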

Thanks! Any resource recommendations would be appreciated.

πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Mar 15 2021
🚨︎ report
[Q] Pseudo Maximum Likelihood Estimation for Bivariate t-Copula (in Python)

Hi all,

I've been working on a personal project where I try to make a pairs trading strategy based on conditional probabilities computed from a t-copula. I've been coding everything from scratch (more or less) and have done several sanity checks against the "copula" package for R, so I know that I am computing the correct copula density for my pseudo-observations.

I am, however, struggling to estimate the degrees of freedom of the copula. I essentially have two datasets: Return series of two assets sampled daily and return series of the same two assets sampled at 6H intervals.

I am "somewhat" able to estimate the degrees of freedom using the daily data, but not at all able using the 6H data. I've used the estimated degrees of freedom and maximized log likelihood produced by the copula package in R as a benchmark for my own estimation. With the daily data, I get quite close but for the 6H data, the log likelihood I calculate for seemingly any degrees of freedom is "inf" in Python.

I also noticed that R estimates the degrees of freedom on the daily data a lot faster than my Python implementation does. I am very inexperienced with optimization, so I don't really care about these kinds of performance issues; I just want to get close to the benchmark estimates I get in R.

This is how I compute the log likelihood and how I estimate the degrees of freedom:

  1. For a given degrees-of-freedom value, compute the copula density for each pair of pseudo-observations
  2. Take the cumulative product of these densities
  3. Take the log of this product and multiply by -1. This returns "-inf" in Python. (Would taking the log of the densities before taking the product make a difference for Python? I am using numpy.cumprod(); see the sketch after this list.)
  4. Use scipy.optimize.minimize to find the degrees of freedom that minimizes the negative log likelihood
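The "-inf" is almost certainly underflow: a product of hundreds of densities leaves double-precision range and rounds to 0, whose log is -inf, so yes, step 3 should sum log-densities instead of logging a product. A minimal sketch, assuming a bivariate t-copula with a fixed correlation rho (the toy pseudo-observations below stand in for real ones):

import numpy as np
from scipy import stats
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def t_copula_loglik(nu, u, v, rho):
    # per-pair log copula density: log f2(x, y) - log f1(x) - log f1(y),
    # summed over observations (no raw product of densities anywhere)
    x = stats.t.ppf(u, df=nu)
    y = stats.t.ppf(v, df=nu)
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    log_f2 = (gammaln((nu + 2) / 2) - gammaln(nu / 2) - np.log(nu * np.pi)
              - 0.5 * np.log(1 - rho**2) - (nu + 2) / 2 * np.log1p(q / nu))
    return np.sum(log_f2 - stats.t.logpdf(x, df=nu) - stats.t.logpdf(y, df=nu))

# toy pseudo-observations (in practice: ranks of your return series / (n + 1))
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

res = minimize_scalar(lambda nu: -t_copula_loglik(nu, u, v, rho=0.5),
                      bounds=(2.05, 60.0), method="bounded")
print(res.x)  # fitted degrees of freedom

The log-sum never underflows, so the same change should make the 6H fit produce finite likelihoods.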

I didn't use the "actual" log likelihood function because I do not trust myself to derive it correctly (the copula density function is already quite nasty-looking) and I have not been able to find the log likelihood function for a d-dimensional t-copula online.

My question is, firstly, how can I get closer to the benchmark estimates of the degrees of freedom from R, and secondly, what do I need to change about my approach in order to even get an estimate for the 6H data?

Thanks!

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/blacksiddis
πŸ“…︎ Feb 23 2021
🚨︎ report
Is there a way to estimate the mean, given the maximum likelihood estimate (e.g., from non-linear least squares), using gradient and Hessian information?

Given a likelihood L(\theta) and the MLE solution \theta*, is there a way to approximate the expected value of \theta under L(\theta)? I have the ability to calculate the gradient and Hessian of L(\theta). L(\theta) is the non-linear least squares likelihood, so the expected value is not necessarily the same as the MLE.
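One standard tool here (a sketch): the Laplace approximation. Expanding the log-likelihood around the MLE,

\log L(\theta) \approx \log L(\theta^*) - \tfrac{1}{2}(\theta - \theta^*)' H (\theta - \theta^*), \qquad H = -\nabla^2 \log L(\theta^*)

so the normalized likelihood is approximately N(\theta^*, H^{-1}), under which E[\theta] \approx \theta^*. Any gap between the mean and the MLE comes from asymmetry of L(\theta), which requires third-order terms beyond the Hessian to capture.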

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/brownck
πŸ“…︎ Apr 24 2016
🚨︎ report
Maximum Likelihood Estimation - Python Guide - Analytics India Magazine analyticsindiamag.com/max…
πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/analyticsindiam
πŸ“…︎ Apr 19 2021
🚨︎ report
Short Maximum Likelihood Estimation Tutorial in R youtube.com/watch?v=w3drL…
πŸ‘︎ 47
πŸ’¬︎
πŸ‘€︎ u/OperaMetrics
πŸ“…︎ Nov 22 2020
🚨︎ report
Huber-White 'Robust' standard errors for Maximum Likelihood, and meaningless parameter estimates. Any thoughts on this? Not a terribly long paper. stat.berkeley.edu/~census…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/econometrician
πŸ“…︎ May 31 2012
🚨︎ report
