MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Post
  • In order to change Maple for the better, I used to submit SCRs. However, as I was kindly
    informed by Bryon (a copy of his e-letter available on request), MaplePrimes is under reconstruction and does not
    work properly. At least three of my messages sent through the Contact button were lost.
    I have unsuccessfully tried to reach beta.maplesoft.com (see the result of ping in the screen screen.29.08.16.docx).
    Please help me!

    Hi,

    In the following example I introduce some commutation rules that are standard in Quantum Mechanics. A major feature of the Maple Physics Package is that it is possible to define tensors as quantum operators. This is of great interest because powerful tensor simplification rules can then be used in Quantum Mechanics; for an example, see the commutation rules of the components of the angular momentum operator in ?Physics,Examples. Here, I focus on a possible issue: when destroying all quantum operators, the pre-defined commutation rules still apply, which should not be the case. As shown in the post, this is linked to the fact that these operators are also tensors.
     


     

    Physics:-Version()[2]

    `2016, August 16, 18:56 hours`

    (1)


    restart; with(Physics); interface(imaginaryunit = I)

    First, set up a 3D Euclidean space

    Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

    [dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

    (2)

    Define two rank 1 tensors

    Define(x[k], p[k])

    `Defined objects with tensor properties`

     

    {Physics:-Dgamma[a], Physics:-Psigma[a], Physics:-d_[a], Physics:-g_[a, b], p[k], x[k], Physics:-KroneckerDelta[a, b], Physics:-LeviCivita[a, b, c]}

    (3)

    Now, further define these tensors as quantum operators and give the usual commutation rules between position and momentum operators (Quantum Mechanics).

    Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`ℏ`*KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, realobjects = {`ℏ`})

    [algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`ℏ`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`ℏ`}]

    (4)

    As expected:

    (%Commutator = Commutator)(p[a], x[b])

    %Commutator(p[a], x[b]) = -I*`ℏ`*Physics:-KroneckerDelta[a, b]

    (5)

    Now, reset all the Hermitian operators, so that all quantum operators are destroyed. This is useful if, for instance, one needs to compare some results with the commutative case.

    Setup(redo, hermitianoperators = {})

    [hermitianoperators = none]

    (6)

    As expected, there are no quantum operators anymore...

    Setup(quantumoperators)

    [quantumoperators = {}]

    (7)

    ...so that the following expressions should commute (result should be true)

    Library:-Commute(p[a], x[b])

    false

    (8)

    Result should be 0

    Commutator(p[a], x[b])

    -I*`ℏ`*Physics:-KroneckerDelta[a, b]

    (9)

    p[a], x[b]

    p[a], x[b]

    (10)


    Below is just a copy & paste of the section above. The only difference is that "Define(x[k], p[k])" has been commented out, so that x[k] and p[k] are not tensors. In that case, everything behaves as expected (but, of course, the interesting feature of tensors is not available).


    restart; with(Physics); interface(imaginaryunit = I)

    First, set up a 3D Euclidean space

    Physics:-Setup(mathematicalnotation = true, dimension = 3, signature = `+`, spacetimeindices = lowercaselatin, quiet)

    [dimension = 3, mathematicalnotation = true, signature = `+ + +`, spacetimeindices = lowercaselatin]

    (11)

    #Define two rank 1 tensors

    Now, further define these tensors as quantum operators and give the usual commutation rules between position and momentum operators (Quantum Mechanics).

    Physics:-Setup(hermitianoperators = {p, x}, algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = Physics:-`*`(Physics:-`*`(I, `ℏ`), Physics:-KroneckerDelta[k, l]), %Commutator(x[k], x[l]) = 0}, realobjects = {`ℏ`})

    [algebrarules = {%Commutator(p[k], p[n]) = 0, %Commutator(x[k], p[l]) = I*`ℏ`*Physics:-KroneckerDelta[k, l], %Commutator(x[k], x[l]) = 0}, hermitianoperators = {p, x}, realobjects = {`ℏ`}]

    (12)

    As expected:

    (%Commutator = Physics:-Commutator)(p[a], x[b])

    %Commutator(p[a], x[b]) = -I*`ℏ`*Physics:-KroneckerDelta[a, b]

    (13)

    Now, reset all the Hermitian operators, so that all quantum operators are destroyed.

    Physics:-Setup(redo, hermitianoperators = {})

    [hermitianoperators = none]

    (14)

    As expected, there are no quantum operators anymore...

    Physics:-Setup(quantumoperators)

    [quantumoperators = {}]

    (15)

    ...so that the following expressions should commute (result should be true)

    Physics:-Library:-Commute(p[a], x[b])

    true

    (16)

    Result should be 0

    Physics:-Commutator(p[a], x[b])

    0

    (17)

    p[a], x[b]

    p[a], x[b]

    (18)



    Download Quantum_operator_as_Tensors_August_23_2016.mw

    Bruce Jenkins is President of Ora Research, an engineering research and advisory service. Maplesoft commissioned him to examine how systems-driven engineering practices are being integrated into the early stages of product development, the results of which are available in a free whitepaper entitled System-Level Physical Modeling and Simulation. In the coming weeks, Mr. Jenkins will discuss the results of his research in a series of blog posts.

    This is the first entry in the series.

    Discussions of how to bring simulation to bear starting in the early stages of product development have become commonplace today. Driving these discussions, I believe, is growing recognition that engineering design in general, and conceptual and preliminary engineering in particular, face unprecedented pressures to move beyond the intuition-based, guess-and-correct methods that have long dominated product development practices in discrete manufacturing. To continue meeting their enterprises’ strategic business imperatives, engineering organizations must move more deeply into applying all the capabilities for systematic, rational, rapid design development, exploration and optimization available from today’s simulation software technologies.

    Unfortunately, discussions of how to simulate early still fixate all too often on 3D CAE methods such as finite element analysis and computational fluid dynamics. This reveals a widespread dearth of awareness and understanding—compounded by some fear, intimidation and avoidance—of system-level physical modeling and simulation software. This technology empowers engineers and engineering teams to begin studying, exploring and optimizing designs in the beginning stages of projects—when product geometry is seldom available for 3D CAE, but when informed engineering decision-making can have its strongest impact and leverage on product development outcomes. Then, properly applied, systems modeling tools can help engineering teams maintain visibility and control at the subsystems, systems and whole-product levels as the design evolves through development, integration, optimization and validation.

    As part of my ongoing research and reporting intended to help remedy the low awareness and substantial under-utilization of system-level physical modeling software in too many manufacturing industries today, earlier this year I produced a white paper, “System-Level Physical Modeling and Simulation: Strategies for Accelerating the Move to Simulation-Led, Systems-Driven Engineering in Off-Highway Equipment and Mining Machinery.” The project that resulted in this white paper originated during a technology briefing I received in late 2015 from Maplesoft. The company had noticed my commentary in industry and trade publications expressing the views set out above, and approached me to explore what they saw as shared perspectives.

    From these discussions, I proposed that Maplesoft commission me to further investigate these issues through primary research among expert practitioners and engineering management, with emphasis on the off-highway equipment and mining machinery industries. In this research, focused not on software-brand-specific factors but instead on industry-wide issues, I interviewed users of a broad range of systems modeling software products including Dassault Systèmes’ Dymola, Maplesoft’s MapleSim, The MathWorks’ Simulink, Siemens PLM’s LMS Imagine.Lab Amesim, and the Modelica tools and libraries from various providers. Interviewees were drawn from manufacturers of off-highway equipment and mining machinery as well as some makers of materials handling machinery.

    At the outset, I worked with Maplesoft to define the project methodology. My firm, Ora Research, then executed the interviews, analyzed the findings and developed the white paper independently of input from Maplesoft. That said, I believe the findings of this project strongly support and validate Maplesoft’s vision and strategy for what it calls model-driven innovation. You can download the white paper here.

    Bruce Jenkins, Ora Research
    oraresearch.com

    Just reading this story... a large quantity of information does not add up for me. For instance, he denies it is his work and says it was done by Hamilton? Also, it's blasted across the net that his proof is valid, so he can't say he doesn't want fame, because it's been all over the internet. And the sheer lack of logic of refusing money based on a moral code of conduct? Then give it to charity, pay for a dozen scholarships. Or is it that the money is dirty?

    Pretty sure this whole thing is a pile of crap made up by someone.

    A more honest and specific version of lemma 3.

    CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw


    Download CONGRUENT_FUNCTIONS_OF_THE_FRACTIONAL_PART_OVER_Q_LEMMA_4.mw

    Using a mouse's back/forward buttons or Alt+Left Arrow / Alt+Right Arrow did not work in Maple 2015, and they still don't work in Maple 2016.

    Also, if you click anywhere in the search bar, it automatically selects all the words. I sometimes (most of the time) want to change only one word.

    I'm using x64 Windows 10, and it also did this when running Windows 8/8.1. Does anybody else have this problem?

    Can it be fixed, please?

    If you try to scroll a page up or down, it just selects the content. How hard would it be to fix this?

    In a recent blog post, I found a single rotation that was equivalent to a sequence of Givens rotations, the underlying message being that teaching, learning, and doing mathematics is more effective and efficient when implemented with a tool like Maple. This post has the same message, but the medium is now the Householder reflection.

    Given a vector x, the Householder matrix H = I - 2 uu^T reflects x to y = Hx, where I is the appropriate identity matrix, u = (x - y) / ||x - y|| is a unit normal for the plane (or hyperplane) across which x is reflected, and y necessarily has the same norm as x. The matrix H is orthogonal but its determinant is -1, making it a reflection instead of a rotation.

    Starting with x and u, H can be constructed and the reflection y calculated. Starting with x and y, u and H can be determined. But what does any of this look like? Besides, when the Householder matrix is introduced as a tool for upper-triangularizing a matrix, or for putting it into upper Hessenberg form, a recipe such as the one stated in Table 1 is the starting point.

    In other words, the recipe in Table 1 reflects x to a vector y in which all entries below the kth are zero. Again, can any of this be visualized and rendered more concrete? (The chair who hired me into my first job averred that there are students who can learn from the general to the particular. Maybe some of my classmates in graduate school could, but in 40 years of teaching, I've never met one such student. Could that be because all things are known through the eyes of the beholder?)

    In the attached worksheet, Householder matrices that reflect x = <5, -2, 1> to vectors y along the coordinate axes are constructed. These vectors and the reflecting planes are drawn, along with the appropriate normals u. In addition, the recipe in Table 1 is implemented, and the recipe itself examined. If you look at the worksheet, I believe you will agree that without Maple, the explorations shown would have been exceedingly difficult to carry out by hand.
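    To make the construction concrete, here is a small NumPy sketch (my own illustration, not the attached worksheet): reflect x = <5, -2, 1> onto the first coordinate axis and check the stated properties of H.

```python
import numpy as np

x = np.array([5.0, -2.0, 1.0])

# Target y along the first coordinate axis, with ||y|| = ||x||;
# the sign choice avoids cancellation when forming x - y.
y = np.zeros(3)
y[0] = -np.sign(x[0]) * np.linalg.norm(x)

u = (x - y) / np.linalg.norm(x - y)   # unit normal of the reflecting plane
H = np.eye(3) - 2.0 * np.outer(u, u)  # Householder matrix H = I - 2 u u^T

print(H @ x)             # equals y: all entries below the first are zero
print(np.linalg.det(H))  # ≈ -1: a reflection, not a rotation
```

    The same three lines, with y chosen along the other coordinate axes, produce the other reflections drawn in the worksheet.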

    Attached: RHR.mw

    It is very important that you learn to pose and solve equations for practical problems. Ernst Mach, a famous scientist of the nineteenth century, said that algebra is characterized by a lightening of the mind: once the equation for a problem has been built, you can "forget" the whole practical situation and focus on the mathematical expression; everything that is not necessary for solving the problem no longer interferes with your mind. Another famous scientist, Isaac Newton, wrote that the language of algebra is the equation: to solve a problem concerning abstract relations of numbers or quantities, simply translate the problem from colloquial language into algebraic language. Here I leave an application for first-order equations developed in Maple 2016.

     

    Aplicativo_Ecuaciones.mw

    (In Spanish)

    Lenin Araujo Castillo

    Ambassador of Maple - Perú

     

    Hello, I was just looking back on some stuff I did a few months back. Although I'm aware there is already a function for generating the prime subset up to a given number featured in a Maple package, I'm just curious to know how this one measures up in terms of computational efficiency, etc.

    Anyway, this is the code. If anyone has the time to give it a try and let me know what they think, i.e. a faster or more logical way to go about it, any feedback is appreciated. Cheers.

     

    restart;
    interface(showassumed = 0, rtablesize = infinity);
    with(plots); with(numtheory); with(Statistics); with(LinearAlgebra); with(RandomTools); with(codegen, makeproc); with(combinat); with(Maplets[Elements]);
    unprotect(real, rational, integer, complex);
    alias(P[In] = CurveFitting[PolynomialInterpolation]); alias(L[In] = CurveFitting[LeastSquares]); alias(R[In] = CurveFitting[RationalInterpolation]); alias(S[In] = CurveFitting[Spline]); alias(B[In] = CurveFitting[BSplineCurve]); alias(L[In] = CurveFitting[ThieleInterpolation], rho = frac); alias(`&Nscr;` = Count); alias(`&Dopf;` = numtheory:-divisors); alias(sigma = numtheory:-sigma); alias(`&Fscr;` = ListTools['Flatten']); alias(`&Sopf;` = seq);
    delta := proc (x, y) options operator, arrow; piecewise(x = y, 1, x <> y, 0) end proc;
    `&Mopf;` := proc (X, Y) options operator, arrow; map(X, Y) end proc;
    `&Cscr;`[S, L] := proc (X) options operator, arrow; convert(X, 'list') end proc;
    `&Cscr;`[L, S] := proc (X) options operator, arrow; convert(X, 'set') end proc;
    `&Popf;` := proc (N) options operator, arrow; `minus`({`&Sopf;`(k*delta(`&Nscr;`(`&Fscr;`(`&Cscr;`[S, L](`&Mopf;`(`&Cscr;`[S, L], `&Mopf;`(`&Dopf;`, `&Dopf;`(k)))))), 3), k = 1 .. N)}, {0}) end proc;
    n[P] := proc (N) options operator, arrow; `&Nscr;`(`&Cscr;`[S, L](`&Popf;`(N)))-1 end proc;
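    For readers who want the idea without the alias layer: the procedure marks k as prime when the flattened list of divisors-of-divisors of k has exactly 3 entries (for a prime p, divisors(p) = {1, p}, whose divisor sets {1} and {1, p} flatten to 3 elements). A rough Python sketch of that test, for illustration only, not a performance comparison:

```python
def divisors(n):
    # all positive divisors of n (naive trial division)
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime_by_divisor_count(k):
    # flatten the divisor lists of every divisor of k;
    # exactly 3 entries occur precisely when k is prime
    flat = [e for d in divisors(k) for e in divisors(d)]
    return len(flat) == 3

def primes_up_to(N):
    # k = 1 gives a single entry, so it is excluded automatically,
    # matching the `minus {0}` in the Maple procedure
    return [k for k in range(1, N + 1) if is_prime_by_divisor_count(k)]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

    As in the Maple version, the cost is dominated by the repeated divisor computations, which is why a sieve will be much faster for large N.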




    Download prime_subset_up_to_N.mw

    The 5th International Congress on Mathematical Software was held in Berlin from July 11 to July 14, 2016.

    I'd like to draw attention to the following sessions:

    The talks demonstrate what is going on at the frontiers of mathematical software nowadays, and what the cutting-edge research topics in it are.

    This post is about the relationship between the number of processors used in parallel processing with the Threads package and the resultant real times and cpu times for a computation.

    In the worksheet below, I perform the same computation using each possible number of processors on my machine, one through eight. The computation is adding a list of 32 million pre-selected random integers. The real times and CPU times are collected from each run, and these are analyzed with a variety of metrics that I devised. Note that garbage-collection (gc) time is not an issue in the timings; as you can see below, the gc times are zero throughout.

    My conclusion is that there are severely diminishing returns as the number of processors increases. There is a major benefit in going from one processor to two; there is a not-as-great-but-still-substantial benefit in going from two processors to four. But the real-time reduction in going from four processors to eight is very small compared to the substantial increase in resource consumption.

    Please discuss the relevance of my six metrics, the soundness of my test technique, and how the presentation could be better. If you have a computer capable of running more than eight threads, please modify and run my worksheet on it.

    Diminishing Returns from Parallel Processing: Is it worth using more than four processors with Threads?

    Author: Carl J Love, 2016-July-30 

    Set up tests

     

    restart:

    currentdir(kernelopts(homedir)):
    if kernelopts(numcpus) <> 8 then
         error "This worksheet needs to be adjusted for your number of CPUs."
    end if:
    try fremove("ThreadsData.m") catch: end try:
    try fremove("ThreadsTestData.m") catch: end try:
    try fremove("ThreadsTest.mpl") catch: end try:

    #Create and save random test data
    L:= RandomTools:-Generate(list(integer, 2^25)):
    save L, "ThreadsTestData.m":

    #Create code file to be read for the tests.
    fd:= FileTools:-Text:-Open("ThreadsTest.mpl", create):
    fprintf(
         fd,
         "gc();\n"
         "read \"ThreadsTestData.m\":\n"
         "CodeTools:-Usage(Threads:-Add(x, x= L)):\n"
         "fd:= FileTools:-Text:-Open(\"ThreadsData.m\", create, append):\n"
         "fprintf(\n"
         "     fd, \"%%m%%m%%m\\n\",\n"
         "     kernelopts(numcpus),\n"
         "     CodeTools:-Usage(\n"
         "          Threads:-Add(x, x= L),\n"
         "          iterations= 8,\n"
         "          output= [realtime, cputime]\n"
         "     )\n"
         "):\n"
         "fclose(fd):"
    ):
    fclose(fd):

    #Code review
    fd:= FileTools:-Text:-Open("ThreadsTest.mpl"):
    while not feof(fd) do
         printf("%s\n", FileTools:-Text:-ReadLine(fd))
    end do:

    fclose(fd):

    gc();
    read "ThreadsTestData.m":
    CodeTools:-Usage(Threads:-Add(x, x= L)):
    fd:= FileTools:-Text:-Open("ThreadsData.m", create, append):
    fprintf(
         fd, "%m%m%m\n",
         kernelopts(numcpus),
         CodeTools:-Usage(
              Threads:-Add(x, x= L),
              iterations= 8,
              output= [realtime, cputime]
         )
    ):
    fclose(fd):

     

    Run the tests

    restart:

    kernelopts(numcpus= 1):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.79MiB, alloc change=0 bytes, cpu time=2.66s, real time=2.66s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=2.26s, real time=2.26s, gc time=0ns

     

    Repeat above test using numcpus= 2..8.

     

    restart:

    kernelopts(numcpus= 2):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.79MiB, alloc change=2.19MiB, cpu time=2.73s, real time=1.65s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=2.37s, real time=1.28s, gc time=0ns

     

    restart:

    kernelopts(numcpus= 3):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.79MiB, alloc change=4.38MiB, cpu time=2.98s, real time=1.38s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=2.75s, real time=1.05s, gc time=0ns

     

    restart:

    kernelopts(numcpus= 4):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.80MiB, alloc change=6.56MiB, cpu time=3.76s, real time=1.38s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=3.26s, real time=959.75ms, gc time=0ns

     

    restart:

    kernelopts(numcpus= 5):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.80MiB, alloc change=8.75MiB, cpu time=4.12s, real time=1.30s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=3.74s, real time=910.88ms, gc time=0ns

     

    restart:

    kernelopts(numcpus= 6):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.81MiB, alloc change=10.94MiB, cpu time=4.59s, real time=1.26s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=4.29s, real time=894.00ms, gc time=0ns

     

    restart:

    kernelopts(numcpus= 7):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.81MiB, alloc change=13.12MiB, cpu time=5.08s, real time=1.26s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=4.63s, real time=879.00ms, gc time=0ns

     

    restart:

    kernelopts(numcpus= 8):
    currentdir(kernelopts(homedir)):
    read "ThreadsTest.mpl":

    memory used=0.82MiB, alloc change=15.31MiB, cpu time=5.08s, real time=1.25s, gc time=0ns

    memory used=0.78MiB, alloc change=0 bytes, cpu time=4.69s, real time=845.75ms, gc time=0ns

     

    Analyze the data

    restart:

    currentdir(kernelopts(homedir)):

    (R,C):= 'Vector(kernelopts(numcpus))' $ 2:
    N:= Vector(kernelopts(numcpus), i-> i):

    fd:= FileTools:-Text:-Open("ThreadsData.m"):
    while not feof(fd) do
         (n,Tr,Tc):= fscanf(fd, "%m%m%m\n")[];
         (R[n],C[n]):= (Tr,Tc)
    end do:

    fclose(fd):

    plot(
         (V-> <N | 100*~V>)~([R /~ max(R), C /~ max(C)]),
         title= "Raw timing data (normalized)",
         legend= ["real", "CPU"],
         labels= [`number of processors\n`, `%  of  max`],
         labeldirections= [HORIZONTAL,VERTICAL],
         view= [DEFAULT, 0..100]
    );

    The metrics:

     

    R[1] /~ R /~ N:          Gain: The gain from parallelism expressed as a percentage of the theoretical maximum gain given the number of processors

    C /~ R /~ N:               Evenness: How evenly the task is distributed among the processors

    1 -~ C[1] /~ C:           Overhead: The percentage of extra resource consumption due to parallelism

    R /~ R[1]:                   Reduction: The percentage reduction in real time

    1 -~ R[2..] /~ R[..-2]:  Marginal Reduction: Percentage reduction in real time by using one more processor

    C[2..] /~ C[..-2] -~ 1:  Marginal Consumption: Percentage increase in resource consumption by using one more processor
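    The six metrics can be sketched in plain Python (an illustrative translation of the elementwise /~ expressions, using the real and CPU times from the second Usage line of each run above; not part of the original worksheet):

```python
import numpy as np

# measured times from the runs above, for numcpus = 1..8
R = np.array([2.26, 1.28, 1.05, 0.95975, 0.91088, 0.894, 0.879, 0.84575])  # real (s)
C = np.array([2.26, 2.37, 2.75, 3.26, 3.74, 4.29, 4.63, 4.69])             # CPU (s)
N = np.arange(1, 9)

gain      = R[0] / R / N        # fraction of the ideal N-fold speedup achieved
evenness  = C / R / N           # how evenly the task spreads over the processors
overhead  = 1 - C[0] / C        # extra resource consumption due to parallelism
reduction = R / R[0]            # real time relative to the single-processor run

marginal_reduction   = 1 - R[1:] / R[:-1]  # real-time gain from one more processor
marginal_consumption = C[1:] / C[:-1] - 1  # CPU-time cost of one more processor

print(np.round(gain, 3))  # falls steadily below 1: the diminishing returns
```

    The gain metric drops monotonically as processors are added, which is the numerical form of the conclusion stated at the top of the post.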

     

    plot(
         [
              (V-> <N | 100*~V>)~([
                   R[1]/~R/~N,             #gain from parallelism
                   C/~R/~N,                #how evenly distributed
                   1 -~ C[1]/~C,           #overhead
                   R/~R[1]                 #reduction
              ])[],
              (V-> <N[2..] -~ .5 | 100*~V>)~([
                   1 -~ R[2..]/~R[..-2],   #marginal reduction rate
                   C[2..]/~C[..-2] -~ 1    #marginal consumption rate        
              ])[]
         ],
         legend= typeset~([
              'r[1]/r/n',
              'c/r/n',
              '1 - c[1]/c',
              'r/r[1]',
              '1 - `Delta__%`(r)',
              '`Delta__%`(c) - 1'       
         ]),
         linestyle= ["solid"$4, "dash"$2], thickness= 2,
         title= "Efficiency metrics\n", titlefont= [HELVETICA,BOLD,16],
         labels= [`number of processors\n`, `% change`], labelfont= [TIMES,ITALIC,14],
         labeldirections= [HORIZONTAL,VERTICAL],
         caption= "\nr = real time,  c = CPU time,  n = # of processors",
         size= combinat:-fibonacci~([16,15]),
         gridlines
    );

     

     

    Download Threads_dim_ret.mw

    What's up with Mapleprimes's timing today? Every time I click on a Post or Question, the main post loads at the normal speed (which is slow to start with), and then it takes a full minute for the Answers or Replies (or lack thereof) to load. This has been consistent for every Post or Question that I've read today. Is anyone else experiencing this?

    In a recent conversation I explained why LSODE was giving wrong results (http://www.mapleprimes.com/questions/210948-Can-We-Trust-Maple#comment230167). After a lot of confusion and weird infinite loops for answers, it turned out that Newton-Raphson was not properly done.

    Both LSODE and MEBDFI are currently incompletely implemented (only one iteration is done instead of Newton-Raphson until convergence). Maplesoft should update the help files accordingly.

    The post below explains how better results are obtained with method = mgear. To run the command mgear you will need Maple 6 or an earlier version. For lsode, any current version is fine. Unfortunately, Maple deprecated an algorithm that worked fine; from Maple 8 on, it moved to Rosenbrock methods for stiff equations. This is still not ideal.

    If Maple had a working algorithm, I am hoping that Maplesoft folks would consider bringing it back in future versions. (At least with the same functionality as in Maple 6).

    PLEASE NOTE: the issue is not with solving this example (it is very simple). The example is chosen to show how a popular algorithm from the literature is wrongly implemented.

     

    Here Maple's lsode is forced to take only one step and use a first-order backward difference formula to integrate from 0 to 1. With the options given below, LSODE mimics backward Euler. The post shows that LSODE does not do Newton-Raphson and performs only one iteration for the nonlinear equations.

    restart;

    Digits:=15;

    Digits := 15

    (1)

    eq:=diff(y(t),t)=-y(t);

    eq := diff(y(t), t) = -y(t)

    (2)

    C:=array([0$22]);

    C := Vector[row](22, {(1) = 0, (2) = 0, (3) = 0, (4) = 0, (5) = 0, (6) = 0, (7) = 0, (8) = 0, (9) = 0, (10) = 0, (11) = 0, (12) = 0, (13) = 0, (14) = 0, (15) = 0, (16) = 0, (17) = 0, (18) = 0, (19) = 0, (20) = 0, (21) = 0, (22) = 0})

    (3)

    C[9]:=1;

    C[9] := 1

    (4)

    sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

    sol(0.1);

    [t = .1, y(t) = .909090909090834]

    (5)

    subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

    0.1e2*y1-0.1e2 = -y1

    (6)

    fsolve(%,y1=0.5);

    .909090909090909

    (7)

     While it gave the expected result for this linear problem, it gives wrong results for nonlinear problems.

    sol1:=dsolve({eq,y(0)=1},type=numeric):

    sol1(0.1);

    [t = .1, y(t) = .904837355407810]

    (8)

    eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-10*y(t)*(1+0.01*exp(y(t)));

    eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-10*y(t)*(1+0.1e-1*exp(y(t)))

    (9)

    sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

    sol(0.1);

    [t = .1, y(t) = .501579294869466]

    (10)

    subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

    0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-10*y1*(1+0.1e-1*exp(y1))

    (11)

    fsolve(%,y1=1);

    .488691779256025

    (12)

    sol1:=dsolve({eq,y(0)=1},type=numeric):

     The expected answer is correctly obtained with the default tolerance:

    sol1(0.1);

    [t = .1, y(t) = .349614721994122]

    (13)

     The result obtained is worse than a single Newton iteration using the Jacobian.

    eq2:=(lhs-rhs)(subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq));

    eq2 := 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1))

    (14)

    jac:=unapply(diff(eq2,y1),y1);

    jac := proc (y1) options operator, arrow; 20.+2*y1*exp(-y1)-y1^2*exp(-y1)+.10*exp(y1)+.10*y1*exp(y1) end proc

    (15)

    f:=unapply(eq2,y1);

    f := proc (y1) options operator, arrow; 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1)) end proc

    (16)

    y0:=1;

    y0 := 1

    (17)

    dy:=-evalf(f(y0)/jac(y0));

    dy := -.508796088545793

    (18)

    ynew:=y0+dy;

    ynew := .491203911454207

    (19)

     The following procedures confirm that it is indeed calling the procedure only at 0 and 0.1, with backdiag giving slightly better results.

    myfun:= proc(x,y) if not type(x,'numeric') or not type(evalf(y),numeric)then 'procname'(x,y);
        else lprint(`Request at x=`,x); -y^2*exp(-y(x))-10*y*(1+0.01*exp(y)); end if; end proc;

    myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y(x))-10*y*(1+0.1e-1*exp(y)) end if end proc

    (20)

    sol1:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

    sol1(0.1);

    `Request at x=`, 0.

    `Request at x=`, 0.

    `Request at x=`, .1

    `Request at x=`, .1

    [x = .1, y(x) = .501579304183583]

    (21)

    sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backdiag],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

    sol2(0.1);

    `Request at x=`, 0.

    `Request at x=`, 0.

    `Request at x=`, .1

    `Request at x=`, .1

    [x = .1, y(x) = .497831388424072]

    (22)

     

    Download Lsodeanalysistrunc.mws

     

    Next, see how dsolve with method = mgear works just fine in Maple 6 (it gives the expected answer up to 3 digits of accuracy). To run this code you will need Maple 6 or an earlier version. Maple 7 has this algorithm, but I don't know how to use it, as it is hidden. I would like to get support from other members to get Maplesoft's attention to bring this algorithm back.

    If M dy/dt = f(y) is solved using the mgear algorithm (instead of dy/dt = f), then one can have a good DAE solver based on this (M being singular).

     

    restart;

    myfun:= proc(x,y) if not type(x,'numeric') or not type(evalf(y),numeric)then 'procname'(x,y);
        else lprint(`Request at x=`,x); -y^2*exp(-y(x))-10*y*(1+0.01*exp(y)); end if; end proc;

    myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y(x))-10*y*(1+0.1e-1*exp(y)) end if end proc

    (1)

    sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},{y(x)},numeric,method=mgear[mstepnum],stepsize=0.1,minstep=0.1,errorper=1):

    sol2(0.1);

    `Request at x=`, 0.

    `Request at x=`, .1

    `Request at x=`, .1

    `Request at x=`, .1

    [x = .1, y(x) = .4887165263]

    (2)

     

     

    Download Mgearworks.mws
