acer


MaplePrimes Activity


These are replies submitted by acer

If you know particular values for n and t, as well as the range for R, then you can use Maple to construct H = P(R)/Q(R), where P(R) and Q(R) are polynomials in R with float coefficients, which approximates your expression quite well.

For example (there are other, similar ways; you can even get an estimate of the error bound):

restart:

ee := solve(R = exp(t/MTBF) * (t/MTBF) * Sum(1/(i!), i=0..n),MTBF);

t/LambertW(R/(Sum(1/factorial(i), i = 0 .. n)))

P := plot(value(eval(ee,[n=15,t=0.3])),R=0..3,color=green):

H := eval(numapprox:-chebpade(value(eval(ee,[n=15,t=0.3])),R=0.1..3,[4,4]),
     T = orthopoly[T]);

(-.1046598996+.3979979186*R+.2094792186*((20/29)*R-31/29)^2+0.2631635411e-1*((20/29)*R-31/29)^3+0.7344468442e-3*((20/29)*R-31/29)^4)/(-.6116268169+.8210912092*R+.6724478367*((20/29)*R-31/29)^2+.1360729330*((20/29)*R-31/29)^3+0.7230731934e-2*((20/29)*R-31/29)^4)

PH := plot(H, R=0..3, color=red, style=point, numpoints=20):

plots:-display(P, PH, view=-10..10);

 

 

Download LW2.mw
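The same idea can be sketched outside Maple. Here is a rough Python analogue (an illustration only, not Maple's numapprox:-chebpade): it evaluates t/LambertW(R/S) with scipy.special.lambertw and fits a [4/4] rational P(R)/Q(R) by linearized least squares on R=0.1..3, with the assumed normalization q0 = 1.

```python
import numpy as np
from scipy.special import lambertw
from math import factorial

# target function: t/LambertW(R/S) with n=15, t=0.3 (as in the Maple example)
n, t = 15, 0.3
S = sum(1.0 / factorial(i) for i in range(n + 1))
f = lambda R: t / lambertw(R / S).real

# linearized least-squares fit of a [4/4] rational P/Q with q0 = 1:
#   P(R) - f(R)*(q1*R + ... + q4*R^4) = f(R)
R = np.linspace(0.1, 3.0, 200)
y = f(R)
V = np.vander(R, 5, increasing=True)           # columns 1, R, ..., R^4
A = np.hstack([V, -y[:, None] * V[:, 1:]])     # unknowns p0..p4, q1..q4
c, *_ = np.linalg.lstsq(A, y, rcond=None)
p, q = c[:5], np.concatenate([[1.0], c[5:]])

# evaluate the fitted rational approximation
H = lambda r: np.polyval(p[::-1], r) / np.polyval(q[::-1], r)
print(np.max(np.abs(H(R) - y)))                # max error on the fit range
```

This is a least-squares fit rather than a Chebyshev-Padé construction, so it lacks chebpade's near-equioscillation property, but it shows the same P/Q-in-floats shape of result.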

 

@tomleslie Saving to .m format was not always discouraged. It's a convenient way to save the results of time-consuming computations, and it's easier than storing them in an .mla archive, especially if there are several such results to handle separately.

One major reason for its being discouraged, AFAIK, is that it cannot handle everything. It can't handle modules and records in general, and localness is a problem. In many instances, however, those limitations don't matter.

Some of the best resources on basic and introductory Maple programming are older texts and web resources, some of which use this functionality as a stock tool.

So it's not so surprising that people would utilize the functionality.

@tomleslie The order of terms in the SUM DAG assigned to the name p is at issue here. It may be that conjoining the restart and the initial assignment to p produces a SUM with the terms stored in a different order than otherwise. But that is not the only way to get a SUM which exhibits the problem.

For example (and this is just showing a way to force the problem -- in general we would not know how p was formed and by what commands),


restart;

p:=x^2*y-2*y*z+3*x^2+2*y-z:
sort(p,order=plex(x,y,z), ascending);

cl:=[coeffs(p, [x,y,z])]:

[seq(`if`(cl[j]>0,seq(op(j,p)/cl[j], i=1..cl[j]),NULL),j=1..numelems(cl))];

                                             2    2  
                       -z + 2 y - 2 y z + 3 x  + x  y
               [  1      1            1  2    1  2    1  2  ]
               [- - z, - - z, -2 y z, - x  y, - x  y, - x  y]
               [  2      2            3       3       3     ]

restart;

p:=x^2*y-2*y*z+3*x^2+2*y-z:
sort(p,order=plex(x,y,z), descending);

cl:=[coeffs(p, [x,y,z])]:

[seq(`if`(cl[j]>0,seq(op(j,p)/cl[j], i=1..cl[j]),NULL),j=1..numelems(cl))];

                         2        2                  
                        x  y + 3 x  - 2 y z + 2 y - z
                [1  2    1  2              1      1      1  ]
                [- x  y, - x  y, -2 y z, - - z, - - z, - - z]
                [2       2                 3      3      3  ]

 


Download sumdagf.mw

As for the question of restart and additional commands in the same execution group: I have observed the problem occurring for commands which are intercepted by the GUI. By this I mean commands which get executed by the Java Standard GUI (!) rather than by the Maple kernel (aka mserver, aka engine). In particular the interface command falls in this problematic class, but I believe that there are others. (The difference can be established by running particular code that calls the intercepted commands at the top level in the GUI versus inside a procedure executed in the GUI.) For example, an interface call following a restart, in the same execution group, can get ignored by the GUI. So, sure, using a separate execution group for restart, alone, is the right way to go.

The main point is that the terms in a polynomial such as p may be sorted internally in different ways.

And, with a local X...


restart:

aliasedlatex:=proc(e)

  local lookup;

  lookup:=op(eval(alias:-ContentToGlobal)):

  :-latex(subs(lookup,e));

  NULL;

end proc:

local X;

alias(X=:-X(a,b,c)):

test:=diff(:-X(a,b,c),a);

diff(X(a, b, c), a)

latex(test);

{\frac {\partial }{\partial a}}X \left( a,b,c \right)

 

aliasedlatex(test);

{\frac {\rm d}{{\rm d}a}}X

 


Download aliasedlatexlocal.mw

Hmm, I guess that had better be subs instead of eval, or else derivatives become 0.


restart:

aliasedlatex:=proc(e)

  local lookup;

  lookup:=op(eval(alias:-ContentToGlobal)):

  :-latex(subs(lookup,e));

  NULL;

end proc:

alias(Y=X(a,b,c)):

test:=diff(X(a,b,c),a);

diff(Y, a)

latex(test);

{\frac {\partial }{\partial a}}X \left( a,b,c \right)

 

aliasedlatex(test);

{\frac {\rm d}{{\rm d}a}}Y

 


Download aliasedlatexsubs.mw

@Carl Love I did it that way so that the name UpdatingRead wouldn't itself appear in the Variable Manager, if the code was just pasted into a worksheet. It wasn't the only way to get that effect, but served that purpose.

Of course, one could instead savelib just the assigned UpdatingRead procedure to a .mla archive located in libname in future sessions.

I wanted the OP to see the addition of his .m file's missing names to the Variable Manager, using the posted Answer code in the simplest way, without the Variable Manager being cluttered by the name UpdatingRead itself.

Who knows... maybe someone will fix assign so that it too updates the Variable Manager, in which case the behaviour would be the same either way. It was intended only as the most minor aspect, and is not a big deal.

Dr. Subramanian, your line of questioning is valid, but please note that Roman's Post is about exact rational data. And your line of questioning above is about floating-point data.

You are right: Maple 2015's LinearAlgebra:-LinearSolve is missing the ability to do direct sparse (LU-based) real floating-point linear solving at working precision Digits>15. That's a regression and I've submitted a bug report. It can, however, still do indirect (iterative) sparse real floating-point linear solving using functions from chapter F11 of the NAG library.

I mentioned above (true at the time I wrote it) that LinearSolve was using NAG routine f01brf to LU factorize a real float Matrix and routine f04axf to do the subsequent solving for a given real float RHS. In Maple 2015 this is done at hardware working precision using functions (bundled in Maple in an external shared library) from UMFPACK.

You asked above about how to re-use the real floating-point sparse LU factorization. The following sets up and uses functions from UMFPACK to accomplish that. It works in my 64bit Linux version of Maple 2015 (and seems to also work in the 64bit Linux versions from Maple 18.02 back as far as 15.01). It also seems to run ok with my 64bit Maple 2015.1 on Windows 7.

I've coded it with three stages: LU factorization, linear solving for one or multiple RHSs, and freeing of the LU factorization data. The idea -- which I believe is what you are looking for -- is the ability to factorize the LHS Matrix just once and then to do several separate linear-solving steps using different RHSs. Indeed that special use pattern is the only reason to use this code. Otherwise one would simply call LinearSolve.

One must be careful with the so-called prefactored data. It's stored internally in memory, and is not explicitly assigned to any Matrix or other structure. (The module code below simply saves a memory reference. Using an invalid memory reference would likely crash.)

The code below consists of 1D Maple Notation input, interspersed with prettyprinted plaintext output.

SparseLU:=module()
   option package;
   export LU,Solve,Free;
   local extlib, Anz, NumericA, numrows;
   LU:=proc(M::Matrix(:-storage=:-sparse,:-datatype=:-float[8],
                      :-order=:-Fortran_order))
      local extfun, numcols;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatFactor',
                                              extlib);
      (numrows,numcols):=op(1,M);
      (Anz,NumericA):=extfun(numrows,numcols,M);
      NULL;
   end proc:
   Solve:=proc(numrows::posint, V::{Matrix,Vector})
      local B,extfun,res;
      if type(V,'Matrix'(':-storage'=':-sparse',':-datatype'=':-float[8]',
                         ':-order'=':-Fortran_order')) then
         B:=V;
      else
         B:=Matrix(V,':-storage'=':-sparse',':-datatype'=':-float[8]',
                   ':-order'=':-Fortran_order');
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatMatSolve',
                                              extlib);
      if not(Anz::posint and NumericA::posint and numrows::posint) then
          error "invalid factored data";
      end if;
      res:=extfun(numrows,op([1,2],B),Anz,NumericA,B);
      res;
   end proc:
   Free:=proc()
      local extfun;
      if not NumericA::posint then
         error "nothing valid to free";
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_FreeNumeric',
                                              extlib);
      extfun(NumericA);
      NumericA:=-1;
      NULL;
   end proc:
end module:
 

MM:=Matrix([[0.,0.,-25.],[-53.,-7.,0.],[0.,-70.,0.]],
           ':-storage'=':-sparse',':-datatype'=':-float[8]');
                               [ 0.      0.     -25.]
                               [                    ]
                         MM := [-53.    -7.      0. ]
                               [                    ]
                               [ 0.     -70.     0. ]

 
with(SparseLU);

                               [Free, LU, Solve]


# precompute the sparse LU factorization
LU(MM);
 
# solve MM.ans=VV with VV a Vector
VV:=Vector([1,1,1]);

                                         [1]
                                         [ ]
                                   VV := [1]
                                         [ ]
                                         [1]

ans:=Solve(3,VV);

                                [-0.0169811320754717]
                                [                   ]
                         ans := [-0.0142857142857143]
                                [                   ]
                                [-0.0400000000000000]

MM,VV; # untouched, ok

                          [ 0.      0.     -25.]  [1]
                          [                    ]  [ ]
                          [-53.    -7.      0. ], [1]
                          [                    ]  [ ]
                          [ 0.     -70.     0. ]  [1]

# verify by computing forward error
LinearAlgebra:-Norm(MM.ans - Matrix(VV));

                                      0.

# solve MM.ans=VV with VV a Matrix
VV:=LinearAlgebra:-RandomMatrix(3,2,':-datatype'=':-float[8]');

                                    [-53.    40.]
                                    [           ]
                              VV := [21.     97.]
                                    [           ]
                                    [-25.    43.]

ans:=Solve(3,VV);

                      [-0.443396226415094    -1.74905660377358 ]
                      [                                        ]
               ans := [0.357142857142857     -0.614285714285714]
                      [                                        ]
                      [ 2.12000000000000     -1.60000000000000 ]

MM,VV; # untouched, ok

                     [ 0.      0.     -25.]  [-53.    40.]
                     [                    ]  [           ]
                     [-53.    -7.      0. ], [21.     97.]
                     [                    ]  [           ]
                     [ 0.     -70.     0. ]  [-25.    43.]

# verify by computing forward error
LinearAlgebra:-Norm(MM.ans - Matrix(VV));

                                      0.

 
# free the internal LU factorization data
Free();
 
# the factorization data is gone
Solve(3,VV);
Error, (in Solve) invalid factored data
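For comparison only (this is not the Maple module above): SciPy exposes the same factor-once, solve-many pattern through scipy.sparse.linalg.splu, which wraps SuperLU rather than UMFPACK. A minimal sketch using the same 3x3 Matrix:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# same 3x3 sparse matrix as in the Maple example above
M = csc_matrix(np.array([[  0.,   0., -25.],
                         [-53.,  -7.,   0.],
                         [  0., -70.,   0.]]))

lu = splu(M)                     # sparse LU factorization, computed once

x = lu.solve(np.ones(3))         # solve M.x = [1,1,1]
print(x)                         # matches the Maple output: [-0.016981..., -0.014285..., -0.04]

# reuse the same factorization for a multi-column RHS
B = np.array([[-53., 40.], [21., 97.], [-25., 43.]])
X = lu.solve(B)
```

One design difference worth noting: the SuperLU object frees its internal factorization data automatically when `lu` is garbage-collected, so no explicit analogue of the module's Free() is needed.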

After I posted this Answer, Markiyan provided a link to the source of this problem: http://math.stackexchange.com/questions/1342291/evaluate-an-integral

I had not previously seen that stackexchange post (...I only follow the "maple" tag on that site). At least one responder there used the same method as I did to arrive at Int(-1/5*sin(w)^3/w^(3/5),w=0..infinity).

I only solved that last form numerically, in my Answer above. But the cited stackexchange thread contains a hint to a way (apart from Axel's other fine suggestion to use a Laplace transform) to get an exact result. The hint was that int(sin(u)*u^p, u=0..infinity) ought to be known. And Maple does know it. Now I'm sad I didn't test with the trig identity earlier, to get a difference of terms in that form. I was too happy with the form that had just the single sin call, as it would make my numeric approach with discretization easier!

Using 64bit Maple 2015.1 on Windows 7 (and it also worked when tried in my Maple 18.02),

 

restart:

P := Int(-1/5*sin(w)^3/w^(3/5),w=0..infinity);

Int(-(1/5)*sin(w)^3/w^(3/5), w = 0 .. infinity)

combine(P);

Int((1/20)*(sin(3*w)-3*sin(w))/w^(3/5), w = 0 .. infinity)

value(%);

(1/1440)*72^(4/5)*Pi^(1/2)*GAMMA(7/10)/GAMMA(4/5)-(3/40)*2^(2/5)*Pi^(1/2)*GAMMA(7/10)/GAMMA(4/5)

simplify(%);

(1/120)*Pi^(1/2)*2^(2/5)*(3^(3/5)-9)*GAMMA(7/10)/GAMMA(4/5)

 

 

Download resexact.mw

@Axel Vogt I was a little surprised that Maple didn't find an answer for int(-1/5*sin(w)^3/w^(3/5),w=0..infinity) on its own. Good job with the Laplace transform.

I got the numeric result by discretizing the oscillatory integrand and then using evalf/Sum to accelerate convergence with its Levin u-transform. Axel, you and I have discussed that approach before. In this case the period was constant (2*Pi), but for a varying period NextZero may be used. It seems, though, that this approach sometimes needs a better implementation of that acceleration algorithm than what's currently in evalf/Sum.
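As an independent cross-check of that discretize-and-accelerate approach (an illustration, not the evalf/Sum computation above): mpmath's quadosc also integrates such oscillatory integrands piece-by-piece over the constant period and extrapolates the resulting sequence of partial sums.

```python
from mpmath import mp, mpf, sin, pi, sqrt, gamma, quadosc

mp.dps = 25

# the integrand of Int(-1/5*sin(w)^3/w^(3/5), w=0..infinity)
f = lambda w: -sin(w)**3 / (5 * w**(mpf(3)/5))

# integrate over pieces of the constant period 2*Pi and extrapolate
val = quadosc(f, [0, mp.inf], period=2*pi)

# the exact closed form derived in the Answer:
# (1/120)*Pi^(1/2)*2^(2/5)*(3^(3/5)-9)*GAMMA(7/10)/GAMMA(4/5)
exact = sqrt(pi) * mpf(2)**(mpf(2)/5) * (mpf(3)**(mpf(3)/5) - 9) \
        * gamma(mpf(7)/10) / gamma(mpf(4)/5) / 120
print(val, exact)
```

The two values agree to well beyond double precision, confirming the exact result numerically.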

 

@Preben Alsholm This sounds like it fits the pattern of a known regression for 1D Maple Notation input in the Standard GUI of Maple 2015.0 which was fixed in Maple 2015.1. Namely, in a single execution group, an error gets emitted if there are multiple prompts and some aren't separated by a statement terminator. As you say, normally that is not supposed to matter.

In Maple 2015.1 it's ok. In Maple 18.02 and earlier it was ok.

@Markiyan Hirnyk It did make a difference. Instead of having to waffle around with calls to both eliminate and solve, the formula for Y as a function of only X can be obtained using just eliminate. I was also addressing Carl's concerns about lexicographic dependency and luck.

And now, negate theta. X stays the same and Y flips sign.

@Markiyan Hirnyk Why not eliminate both Y and theta? The results include formulas for Y in terms of X alone, and for theta in terms of X alone (and restrictions on X, which happen to be null).

restart;
eq1:=convert( X=cos(theta) + 0.8e-1*cos(3.*theta), rational):
eq2:=convert( Y=-sin(theta)+ 0.8e-1*sin(3.*theta), rational):
sols:=[eliminate({eq1,eq2},{Y,theta})]:
seq(sols[i][2], i=1..nops(sols));  # restrictions on X (NULL)
S:=seq(eval(Y,sols[i][1]),i=1..nops(sols)):
plot(S[1],X=-1-0.8e-1..1+0.8e-1,color=red);

Now, you may wish to try to show programmatically that only S[1] of the three results in sols is real. And then you might justify the negation Y=-S[1].

@Carl Love It might also be worth noting in this context, to get the x-axes alignment effect that the aligncolumns option of plots:-display provides,

_PLOTARRAY( Matrix(2,1, [P1,P2]), _ALIGNCOLUMNS(true) );

@Carl Love I checked, and see that I submitted a bug report (SCR) on the setattribute and floats problem in the Standard GUI in July 2010.

(I have an idea that it might have regressed between Maple 11 and Maple 12, but I haven't double-checked that.)

@rlewis You might want to ensure that you're using a completely new name for the .gif target file (in case the unrebooted machine is confused about the old 0-byte file -- possibly even an orphaned kernel?).

Note that for "large" 3D animations the whole export process can use a lot of memory and a lot of time. It gets more severe as the number of frames and the size of the plotted structures grow. 100 frames can be "large" in the 3D case. Also, using plottools:-sphere rather than pointplot3d with symbol=solidsphere incurs more cost.

The Maple Standard GUI can also leak memory when rendering involved 3D plots. This can hamper 3D plot animation export, which itself can use a lot of resources. It can help to export from a GUI session in which you haven't otherwise displayed the animation inline. Or you could run the code that generates the export fully programmatically in another interface (such as the command-line interface, CLI).

If it happens for a modest number of frames, with an entirely new filename, using the symbol=solidsphere way, then you might check that the OS is not running any errant mserver processes. I.e., if all instances of Maple owned by you are shut down and some mserver process owned by you is still running, then that process could be killed (or the host restarted).
