In a recent conversation I explained why LSODE was giving wrong results (http://www.mapleprimes.com/questions/210948-Can-We-Trust-Maple#comment230167). After a lot of confusion and answers that went around in circles, it turned out that the Newton-Raphson iteration was not being done properly.

Both LSODE and MEBDFI are currently incompletely implemented (only one iteration is done instead of iterating Newton-Raphson to convergence). Maplesoft should update the help files accordingly.

The post below shows how better results are obtained with method = mgear. To run method = mgear you will need Maple 6 or an earlier version; for lsode, any current version is fine. Unfortunately Maple deprecated an algorithm that worked fine: from Maple 8 onward, the stiff algorithm moved to Rosenbrock methods, which is still not ideal.

Since Maple once had a working algorithm, I am hoping the Maplesoft folks will consider bringing it back in future versions (at least with the same functionality as in Maple 6).

Please note: the issue is not with solving this example (it is very simple). The example is chosen to show how a popular algorithm from the literature is wrongly implemented.

 

Here Maple's lsode is forced to take only one step and use the first-order backward-difference formula to integrate from t = 0 to t = 0.1. With the options given below, LSODE mimics backward Euler. The worksheet shows that LSODE does not iterate Newton-Raphson to convergence and performs only a single iteration of the nonlinear equations.
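As a hand check (my own, not part of the worksheet below), the single backward-Euler step for this linear equation is (y1 - 1)/0.1 = -y1, i.e. 1.1*y1 = 1, giving y1 = 1/1.1 = 0.909090909..., which is the value the forced LSODE call should reproduce (and does, see (5) and (7)).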

restart;

Digits:=15;

Digits := 15

(1)

eq:=diff(y(t),t)=-y(t);

eq := diff(y(t), t) = -y(t)

(2)

C:=array([0$22]);

C := Vector[row](22, {(1) = 0, (2) = 0, (3) = 0, (4) = 0, (5) = 0, (6) = 0, (7) = 0, (8) = 0, (9) = 0, (10) = 0, (11) = 0, (12) = 0, (13) = 0, (14) = 0, (15) = 0, (16) = 0, (17) = 0, (18) = 0, (19) = 0, (20) = 0, (21) = 0, (22) = 0})

(3)

C[9]:=1;

C[9] := 1

(4)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .909090909090834]

(5)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1

(6)

fsolve(%,y1=0.5);

.909090909090909

(7)

While lsode gave the expected result for this linear problem, it gives wrong results for nonlinear problems.
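The reason the linear case comes out right deserves a remark (my own): the backward-Euler residual for eq is F(y1) = (y1 - 1)/0.1 + y1 = 11*y1 - 10, which is affine, so a single Newton step from any starting guess lands exactly on the root 10/11 = 0.909090..., and one iteration happens to be enough. The shortcut can therefore only be exposed by a nonlinear right-hand side, as below.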

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .904837355407810]

(8)

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-10*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-10*y(t)*(1+0.1e-1*exp(y(t)))

(9)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .501579294869466]

(10)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-10*y1*(1+0.1e-1*exp(y1))

(11)

fsolve(%,y1=1);

.488691779256025

(12)

sol1:=dsolve({eq,y(0)=1},type=numeric):

The expected (accurate) answer is correctly obtained with the default method and tolerances:

sol1(0.1);

[t = .1, y(t) = .349614721994122]

(13)

The result LSODE returned above is even worse than a single Newton iteration carried out by hand with the exact Jacobian:

eq2:=(lhs-rhs)(subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq));

eq2 := 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1))

(14)

jac:=unapply(diff(eq2,y1),y1);

jac := proc (y1) options operator, arrow; 20.+2*y1*exp(-y1)-y1^2*exp(-y1)+.10*exp(y1)+.10*y1*exp(y1) end proc

(15)

f:=unapply(eq2,y1);

f := proc (y1) options operator, arrow; 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1)) end proc

(16)

y0:=1;

y0 := 1

(17)

dy:=-evalf(f(y0)/jac(y0));

dy := -.508796088545793

(18)

ynew:=y0+dy;

ynew := .491203911454207

(19)
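For comparison, iterating Newton-Raphson to convergence with the f and jac defined above (a minimal sketch of my own, not part of the original worksheet) reaches the fsolve value of (12) within a few iterations:

ynr := 1.0:
for k from 1 to 4 do
    ynr := ynr - f(ynr)/jac(ynr);   # one full Newton-Raphson correction per pass
end do:
ynr;   # about .488691779, in agreement with the fsolve result in (12)

This is exactly the extra work a complete backward-Euler step is supposed to do before accepting the solution.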

The following runs confirm that lsode is indeed calling the right-hand-side procedure only at x = 0 and x = 0.1, with backdiag giving slightly better results.

myfun := proc(x, y)   # return unevaluated for symbolic arguments; log each numeric evaluation
  if not type(x, 'numeric') or not type(evalf(y), numeric) then 'procname'(x, y);
  else lprint(`Request at x=`, x); -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y)); end if;
end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(20)

sol1:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol1(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .501579304183583]

(21)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backdiag],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .497831388424072]

(22)

 

Download Lsodeanalysistrunc.mws

 

Next, see how dsolve with method = mgear works just fine in Maple 6 (it gives the expected answer to about 3 digits of accuracy). To run this code you will need Maple 6 or an earlier version. Maple 7 still has this algorithm, but I don't know how to use it since it is hidden. I would like support from other members to get Maplesoft's attention and bring this algorithm back.

If M*dy/dt = f(y) could be solved with the mgear algorithm (instead of dy/dt = f), then one would have a good DAE solver based on it (with M allowed to be singular).
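As a simple illustration of that form (my own example, not from the worksheet): the semi-explicit DAE diff(y1(t),t) = f1(y1(t), y2(t)), 0 = g(y1(t), y2(t)) is M*dy/dt = f with M = Matrix([[1,0],[0,0]]) and f the vector <f1, g>, so a backward-difference code that accepts a singular mass matrix M handles index-1 DAEs directly.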

 

restart;

myfun := proc(x, y)   # return unevaluated for symbolic arguments; log each numeric evaluation
  if not type(x, 'numeric') or not type(evalf(y), numeric) then 'procname'(x, y);
  else lprint(`Request at x=`, x); -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y)); end if;
end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(1)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},{y(x)},numeric,method=mgear[mstepnum],stepsize=0.1,minstep=0.1,errorper=1):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .4887165263]

(2)

 

 

Download Mgearworks.mws

