Allan Wittkopf

245 Reputation

7 Badges

19 years, 118 days

MaplePrimes Activity


These are replies submitted by Allan Wittkopf

@mmcdara To be fair, I'm the author of the code/help, which is how I knew about the method. I had thought it was documented, but I see that is incorrect.

How about this for the help page (replacing the bullet about the generic method):

The first mode of operation uses the default theta method, which is an implicit scheme that is centered in both time and space, and is capable of finding solutions for higher order PDE or PDE systems. The PDE systems must be sufficiently close to a standard form for the method to find the numerical solution.
 Note that the default value for the theta parameter is 1/2, which for a second order PDE corresponds to the Crank-Nicolson scheme. A theta parameter value of 0 gives the forward Euler scheme, while a theta parameter value of 1 gives the backward Euler scheme.
 The value of the theta parameter can be controlled by passing 'method'=theta[v] as an option with v being the desired theta parameter value (in the range 0..1).
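
 For example, the scheme could be selected along these lines (the heat equation problem here is only an illustrative stand-in, not part of the help page):

# u_t = u_xx on 0 <= x <= 1 (illustrative problem)
PDE := diff(u(x,t),t) = diff(u(x,t),x,x):
IBC := {u(x,0) = sin(Pi*x), u(0,t) = 0, u(1,t) = 0}:
# method=theta[1] selects backward Euler; theta[0] gives forward Euler,
# and the default theta[1/2] is Crank-Nicolson
sol := pdsolve(PDE, IBC, numeric, method = theta[1]):
sol:-plot(t = 0.1);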

 

@tomleslie 

That didn't happen for me on 64-bit Linux in Maple 2016, but I don't have Windows, so I cannot check. Also I'm not 100% sure I have the correct system, as I had to copy/paste it from earlier in these threads, then manually edit for line breaks, as there were things like:

..... * a
bs(...)

in the text - i.e. line breaks in the middle of functions and/or numbers.

Win64 is a bit of a special case for compile - you need to install the Microsoft Visual Studio compiler and the Windows SDK for the compile option to work there.

As for the maxfun, just set it to 0. This *should* work for you, though I do not have Win64 to check:

dsolve(MC, numeric, method=rosenbrock, maxfun=0);

Note though that if you are running into the maxfun limit, this suggests that dsolve is working pretty hard to solve the problem - is there any chance that the solution is near a singularity, and the round-off difference between Linux and Windows is sufficient to make that large a change? Do you know how stable the solution is?

@tomleslie 

Simply change the method option in your dsolve call from:

sol := dsolve(MC, numeric, method=lsode[adamsfull]);

to:

sol := dsolve(MC, numeric, method=rosenbrock);

or better yet:

sol := dsolve(MC, numeric, method=rosenbrock, compile=true);

if efficiency is a concern.

When you run the point comparisons, the results for "what odeplot gives" and "what sol(t) gives" are identical.

See my earlier post "Maple's ODE solvers" for more of an explanation why this is so, but put simply, the rosenbrock (stiff), rkf45 and ck45 solvers are better set up to work with interactive point querying due to the fact that they are implemented to use a natural step, and do not need to integrate to a specific point to get the value at that point.
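
As a quick check of this (a generic sketch - the ODE below is just a stand-in for your MC system):

sys := {diff(y(t),t) = -y(t), y(0) = 1}:   # stand-in for the MC system
sol := dsolve(sys, numeric, method = rosenbrock):
# a point query uses the natural-step interpolant directly, so the
# values agree with what odeplot extracts from the same procedure
sol(1.0);
plots[odeplot](sol, [t, y(t)], 0 .. 2);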

Procedure form is not currently in the plans.

Please file a request with a use case. The mechanism might be a bit complicated to use.

The delay-DE handling currently present in Maple 18.02 is an alpha version of functionality to be made fully available in a later Maple release.

The integration engine was extended to support DDEs in the Maple 18.02 update for MapleSim support for delay components.

@Dr. Venkat Subramanian

Hi Venkat,

Actually this might be a bit more versatile than you indicated. The fsolve call is part of the initialization problem, and as such occurs outside the external/compiled code, so the computation occurs in Maple proper, and one can independently specify the precision there. The subsequent numerical integration of the ODE can then be performed at any precision you may want.

Modulo a bug in the auto-detection of the boundary/initial points of a problem (a bug that will be fixed), the following will provide a packaged solution for the problem at hand, though it is a bit involved:

> ODE := A*diff(h(t),t)^2+B*diff(h(t),t)*(h(t)+C)+E*h(t)=F*cos(diff(h(t),t)):
> init_xpr := eval(subs(diff(h(t),t)=x,ODE),h(t)=0):
> find_der := unapply('fsolve'(init_xpr,x),[A,B,C,F],numeric):
> dsn := dsolve({ODE,h(0)=0,D(h)(0)=find_der(A,B,C,F)},numeric,
>    method=rkf45_dae,parameters=[A,B,C,E,F], known={find_der}):
> dsn(parameters=[0,0,1,1,1]):
> plots[odeplot](dsn,0..3):

The idea is that the parametrized form is used to construct a parametrized 'fsolve' call that obtains the initial value for diff(h(t),t).

One nice feature about this approach is that a more elaborate version of 'find_der' can be manually constructed if, say, one wanted certain properties for the chosen solution (i.e. which root of the nonlinear equation should be chosen when multiple roots are present).
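
For instance, a hand-written find_der could restrict fsolve to a range so that a particular root is chosen (the nonnegative-root restriction here is just an illustrative choice):

# assumes init_xpr has been built as above
find_der := proc(a, b, c, f)
    # select only the nonnegative root of the initial equation
    fsolve(eval(init_xpr, [A = a, B = b, C = c, F = f]), x, 0 .. infinity);
end proc: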

Basically the _dae methods, when faced with leading nonlinear equations, perform a prolongation (differentiation) that converts the equation to a higher order equation that is leading linear, and treat the lower order nonlinear equation as a constraint, applying Newton-Raphson to the constraint.
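
Concretely, for the example ODE the prolongation step looks roughly like this (a sketch of the idea, not the internal code):

ODE := A*diff(h(t),t)^2+B*diff(h(t),t)*(h(t)+C)+E*h(t)=F*cos(diff(h(t),t)):
# differentiating once produces an equation that is linear in the
# leading (second) derivative diff(h(t),t,t)
prolonged := diff(lhs(ODE) - rhs(ODE), t):
# the original equation is kept as a constraint, enforced by
# Newton-Raphson at each step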

Sadly, at this time, the above fails, as the presence of find_der(A,B,C,F) in the initial conditions causes 'A' to be mistakenly thought to be a boundary point.

An inelegant hack that allows this to work in the released version is to add a fake variable as the first argument to find_der, as follows:

ODE := A*diff(h(t),t)^2+B*diff(h(t),t)*(h(t)+C)+E*h(t)=F*cos(diff(h(t),t)):
init_xpr := eval(subs(diff(h(t),t)=x,ODE),h(t)=0):
find_der := unapply('fsolve'(init_xpr,x),[q,A,B,C,F],numeric):
dsn := dsolve({ODE,h(0)=0,D(h)(0)=find_der(0,A,B,C,F)},numeric,
   method=rkf45_dae,parameters=[A,B,C,E,F], known={find_der}):
dsn(parameters=[0,0,1,1,1]):
plots[odeplot](dsn,0..3):

I think the reduced accuracy is allowing a false solution to be returned in this case.

The issue is more visible if one attempts to use Maple's IVP solver on the DE:

> eq:=(1+theta(x))*diff(theta(x),x)+x*diff(theta(x),x)^2+x*(1+theta(x))
>    *diff(theta(x),x,x)-theta(x)^3-diff(theta(x),x)=0:
> ics:=theta(0)=1, D(theta)(0) = 0:
> Sol:=dsolve({eq, ics}, numeric);
Error, (in dsolve/numeric/checksing) ode system has a removable singularity at
x=0. Initial data is restricted to one of the following 2 options:
{{diff(theta(x),x) = theta(x)^2}, {theta(x) = 0}}

This tells us that the equation has a singularity at x=0, and for a solution to exist the initial data must satisfy one of the provided constraints. Given this, if we require that D(theta)(0)=0, the only option for theta(0) is 0, which gives us the trivial solution.

If the theta(1)=1 constraint is the more important of the two (i.e. is the one we really need satisfied), then one can find values of theta(0), D(theta)(0) that provide this under the constraint D(theta)(0)=theta(0)^2. The value I get is somewhere around theta(0)=0.5978939.
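
One way to arrive at such a value is a small shooting computation on the parametrized form, with the constraint D(theta)(0)=theta(0)^2 built into the initial data (a sketch; the search range is an illustrative guess):

eq := (1+theta(x))*diff(theta(x),x)+x*diff(theta(x),x)^2+x*(1+theta(x))
      *diff(theta(x),x,x)-theta(x)^3-diff(theta(x),x)=0:
# shoot on theta(0)=a with D(theta)(0)=a^2, and require theta(1)=1
dsn := dsolve({eq, theta(0) = a, D(theta)(0) = a^2}, numeric, parameters = [a]):
shoot := proc(av)
    dsn(parameters = [av]);
    eval(theta(x), dsn(1)) - 1;
end proc:
fsolve(shoot, 0.5 .. 0.7);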

- Allan

Hello Venkat,

I started looking at the PDE problem in your worksheet earlier this week, and unfortunately it's not in the class of problems that the 'implicit' option to dsolve/numeric will work for.

One issue that comes to mind is that the solver techniques that can be used in MapleSim are a bit more advanced than the ones in use by dsolve/numeric, but there is a reason for this. MapleSim deals with systems that originate from Modelica, which guarantees a 1-1 correspondence between equations and unknowns in the input system; we have no such guarantee for systems that are entered directly in a call to dsolve/numeric.

Much of the symbolic processing time that you see for this problem is required to detect and confirm this correspondence, and simultaneously form the set of equations used to solve for the unknowns.

I am currently thinking about approaches where one can specify additional information to dsolve/numeric to tell it that this is the case, and which variables to use in which equations, so that the symbolic preprocessing can be avoided altogether, and better techniques can then be applied for the solution process.

I will get back to you when I come up with something.

- Allan

With regards to the points Venkat raised in the prior post, I'll discuss each of these in turn.

(1)   Use sparse storage for jacobian (when it is written or created). I see the jacobian to be a regular matrix in the current solvers

Yes, the Jacobian is always a regular Matrix in the current implementation. A banded solver might be possible, but a sparse solver is quite difficult due to the current design of the integration engine. Use of Maple's current sparse solvers via a callback into native Maple would lose a great deal of the efficiency obtained by having compiled code, and implementing a reasonable sparse solver together with a sparse data structure directly into the dsolve code-base is a significant undertaking.

(2)   Use parallel sparse solver for the linear solver part of the code (very few codes in the world do this, so this will be a great advantage).

See '1'

(3)   Provide options for numerical jacobian to be used.

This is in progress now.

(4)   Solve the system Mdy/dt=f or f(dy/dt,y) directly without any rearrangement or reduce.

This should already be possible, but straightforward use requires a pure ODE system. This is the 'implicit' option, and requires that the system be linear in the derivatives (which is your first form M dy/dt = f). When using 'implicit' very little simplification or additional processing is performed for the system (just the minimum needed to obtain the coefficients). Note though that computing the Jacobian for a system written in implicit form can be very expensive, as the computation of each row of the Jacobian requires a numerical solve be applied (i.e. the LU decomposition of your 'M').
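
As a usage sketch (the mass-matrix system here is purely illustrative):

# M dy/dt = f with a nondiagonal mass matrix (illustrative system)
sys := {diff(x(t),t) + 2*diff(y(t),t) = -x(t),
        diff(x(t),t) - diff(y(t),t) = y(t),
        x(0) = 1, y(0) = 0}:
sol := dsolve(sys, numeric, method = rosenbrock, implicit = true):
sol(0.5);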

(5)   Enable sparsity pattern to be provided or detected for numerical parallel, compiled sparse linear solvers.

This requires the implementation of '1' above.

(6)   Last, but not the least, provide a way to send the code as an exe to others. I am not sure why this is restricted (available only with MapleSim for certain solvers. However dsolve has better options for events compared to MapleSim). For example, dlls created from the dsolve if we can export as exe, that would be great. MATLAB provides for files to be sent to non matlab users. This will enable faster adoption of codes outside the groups. Currently, we convert codes to C (the solvers are faster and slightly less robust compared to Maple) and send that as exe codes.

At this time, the core solvers (rkf45, rosenbrock and ck45) use a great deal of Maple code and code in the Maple kernel, making it impossible to detach from Maple without rewriting all the parts in separate implementations. The MapleSim solvers that are available for export to C code are fixed step solvers, removing the need for advanced rootfinding techniques needed to locate events, etc.

Question: is the main purpose for '6' to be able to provide a solution procedure for other Maple users to use, or is it to provide a true stand-alone solver that can run independently of Maple?

- Allan

A graze condition is when the zero line passes through a corner (and only the corner) of a cell.

For example, consider the implicitplot of x+y=0, and the cell for x=-1/10,0 and y=-1/10,0. The upper triangle of that cell has values:

-1/10    0
          -1/10

so the zero in the corner is considered a graze condition because the zero line only grazes the cell. If the values were instead:

-1/10    0
         1/10

Then this is not a graze condition because there are two zeros on the borders of the cell, one in the corner, and one in the midpoint of the diagonal. Make sense?

- Allan

If you look at the explanation of how cells work, the reason why comes from there.

When you sample (x+y)^2 over a uniform grid, the values in the cells along the relevant line look like:

0    >0
        0

which will draw a line on the diagonal, and:

0
>0    0

which will draw a line on the diagonal.

If you shift the function slightly (e.g. (x+y+1e-5)^2) then the line no longer lines up with the diagonal of the cells, and no line is drawn.
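
This is easy to try directly (a sketch using the default grid):

plots[implicitplot]((x+y)^2, x = -1 .. 1, y = -1 .. 1);       # zeros lie on cell diagonals: line drawn
plots[implicitplot]((x+y+1e-5)^2, x = -1 .. 1, y = -1 .. 1);  # zeros miss the diagonals: empty plot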

The implicitplot command in Maple is a sample-based technique, which is stated in the help page:

"Because the implicitplot command samples the function being plotted and builds the final image from the sample, it does not, by default, detect discontinuities in the function. Instead, the function is interpolated across the discontinuities. To change this behavior, see the rational and signchange options (described in the Options section).
For similar reasons, a sample based method cannot be used to detect isolated points which should be plotted."

The last sentence is the important one. In your example you have an isolated line as a solution, i.e. a zero line where all points away from the line have positive sign.

This means that unless you get really lucky, none of the sample triangles will exactly line up with the zeros of the function, so the zero line will be invisible.

The function is divided into a grid, and each grid cell is divided into two triangles - the triangle division occurs from the upper left corner of the cell to the lower right. Now in the example where you get a plot, the zeros of the function *exactly* line up with the edges of the triangles, so the zero line is detected and plotted. In the other example the zeros do not line up, and each triangular cell along where the zeros occur looks like either:

>0
 0       >0

or:

>0       0
         >0

Both of which look like a 'graze' condition, so neither will generate line segments.

To see just how much of a fluke the original plot is, simply change the function being plotted to:

(x+y+eps)^2

where 'eps' is any nonzero value that is not an exact multiple of the grid spacing, and the plot will be empty.

 

For the exact same reason, the original function that started this thread, plotting the solutions of:

sqrt(x^2+y^2)-sqrt((x-4)^2+(y-3)^2) = 5

is equally ill-defined, because:

sqrt(x^2+y^2)-sqrt((x-4)^2+(y-3)^2) <= 5

for all points in the region - the distance between (0,0) and (4,3) is exactly 5, so by the triangle inequality the difference can only attain 5 on a degenerate ray. This means any grid cell will likely have all values of one sign, or only one zero value, so no line segments will be generated.

One can get an approximation of the solution by plotting:

sqrt(x^2+y^2)-sqrt((x-4)^2+(y-3)^2) = 4.9999

or

sqrt(x^2+y^2)-sqrt((x-4)^2+(y-3)^2) = 4.99999
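
For example (the ranges and refinement level are illustrative choices):

plots[implicitplot](sqrt(x^2+y^2) - sqrt((x-4)^2+(y-3)^2) = 4.9999,
                    x = 0 .. 12, y = 0 .. 10, gridrefine = 2);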

- Allan

Hi Patrick,

Actually it's pretty complex. There's a great deal of data stored with a dsolve/numeric procedure, including event control procedures, the ode evaluation procedure, and numerous data tables for storing computed events, the current integrator state, etc. This cannot all be stored in the remember table, as then 'forget' would also cause the procedure to forget how to integrate the ODE system.

The only reasonable way to reset the integrator is to use a re-initialization.
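
In practice the reset can be done through the 'initial' keyword on the returned procedure (a sketch; see the dsolve/numeric interactive documentation for the exact calling sequence):

sol := dsolve({diff(y(t),t) = -y(t), y(0) = 1}, numeric, method = rosenbrock):
sol(2.0):                  # advance the integrator
ini := sol('initial');     # query the stored initial data
sol('initial' = ini);      # setting it again re-initializes the integrator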

- Allan

 
