Carl Hickman

89 Reputation

2 Badges

17 years, 247 days

MaplePrimes Activity


These are replies submitted by Carl Hickman

Hi Nico,

Treating the least-squares estimate as the maximum-likelihood estimate (MLE) for a linear model with normally distributed noise (see the reference below), the slope and intercept can be computed as:

$mle=maple("
# residuals of the line y = a*x + b at the data points
V := Vector([seq($L[i]-a*$X[i]-b, i=1..numelems($X))]):
# likelihood of the residuals under Normal(0, d) noise; d=1 fixes the scale
likelihood := subs(d=1, Statistics[Likelihood](Normal(0, d), V)):
# maximize the likelihood over the slope a and intercept b
mle := Optimization[Maximize](likelihood, initialpoint={a=0, b=0}):
# return only the fitted values of a and b
map(rhs, mle[2])
");
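To see why this works, here is a small numeric sketch (in Python, since a second Maple example would be redundant here; the data, and the helper names `ls_fit` and `log_likelihood`, are illustrative, not from the original reply). With the noise scale fixed, the Gaussian log-likelihood is a constant minus half the sum of squared residuals, so the closed-form least-squares solution is exactly the likelihood maximizer:

```python
def ls_fit(x, y):
    """Closed-form least-squares slope and intercept (the normal equations)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

def log_likelihood(x, y, a, b):
    """Gaussian log-likelihood with sigma = 1, up to an additive constant."""
    return -0.5 * sum((yi - a * xi - b) ** 2 for xi, yi in zip(x, y))

# Illustrative data (not from the original post):
x = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.2, 6.8]
a, b = ls_fit(x, y)

# The least-squares estimate beats nearby (a, b) perturbations in likelihood:
assert all(log_likelihood(x, y, a, b) >= log_likelihood(x, y, a + da, b + db)
           for da in (-0.1, 0.0, 0.1) for db in (-0.1, 0.0, 0.1))
```

The numerical `Optimization[Maximize]` call in the Maple code above finds the same point this closed form gives; the `initialpoint` option merely seeds that search.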

Sincerely,

Carl

 

Reference: Ernie Croot, "Maximum likelihood estimators and least squares"

Dear @nhlbooij,

I understand our Help team has reached out to you concerning MLE in Maple. You can also provide them with an export of the actual Maple T.A. question, so we can help streamline the coding and answer any follow-up questions you may have.

Thank you.

Sincerely,

Carl Hickman

Maplesoft

Dear Dr. Kaplan,

Try making the following change to your LaTeX code. Instead of:

\maple{evalb(simplify(diff($RESPONSE,x)-$b)=0);}

try:

\maple*{evalb(simplify(diff($RESPONSE,x)-$b)=0);}

The asterisk (*) should suppress the extra evalb(..) wrapper. Let me know how it goes.

Carl Hickman, Maplesoft