MaplePrimes Activity


These are replies submitted by acer

Here is a revision with the style of 2D Math output customized to black (Format > Styles from the main menubar), and the text containing inlined 2D Math put into a GUI Table with custom properties.

Download inlinedmathtable.mw

Sure, the rtable subtype of the result returned by SparseLU:-Solve can be made to match that of the RHS of the linear system. See the code revision below.

As for the zeroes of a sparse Matrix, it depends on how you construct the Matrix. If you use the Matrix command with explicit data then, yes, you need to specify all the scanned data. But I only used the command because it was convenient. You can also create a sparse Matrix with the syntax M:=Matrix(3,3,storage=sparse,datatype=float[8]), where no entries are initially assigned, and then start assigning entries, like M[2,3]:=3.12, say. Or you can import the whole sparse Matrix from a specially formatted file, using ImportMatrix.
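For instance, here is a minimal sketch of both of those routes (the file name and the MatrixMarket source format are just placeholders for illustration):

# create an empty 3x3 sparse float[8] Matrix; only assigned entries are stored
M := Matrix(3, 3, ':-storage'=':-sparse', ':-datatype'=':-float[8]'):
M[2,3] := 3.12:
M[3,1] := -7.0:
M;
# or read the whole sparse Matrix from a specially formatted file, e.g.
#   A := ImportMatrix("mydata.mtx", ':-source'=':-MatrixMarket'):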


restart:

SparseLU:=module()
   option package;
   export LU,Solve,Free;
   local extlib, Anz, NumericA, numrows;
   LU:=proc(M::Matrix(:-storage=:-sparse,:-datatype=:-float[8],
                      :-order=:-Fortran_order))
      local extfun, numcols;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatFactor',
                                              extlib);
      (numrows,numcols):=op(1,M);
      (Anz,NumericA):=extfun(numrows,numcols,M);
      NULL;
   end proc:
   Solve:=proc(numrows::posint, V::{Matrix,Vector})
      local B,extfun,res;
      if type(V,'Matrix'(':-storage'=':-sparse',':-datatype'=':-float[8]',
                         ':-order'=':-Fortran_order')) then
         B:=V;
      else
         B:=Matrix(V,':-storage'=':-sparse',':-datatype'=':-float[8]',
                   ':-order'=':-Fortran_order');
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatMatSolve',
                                              extlib);
      if not(Anz::posint and NumericA::posint and numrows::posint) then
          error "invalid factored data";
      end if;
      res:=extfun(numrows,op([1,2],B),Anz,NumericA,B);
      if rtable_options(res,':-subtype') <> rtable_options(V,':-subtype') then
          res := convert(res,rtable_options(V,':-subtype'));
      end if;
      res;
   end proc:
   Free:=proc()
      local extfun;
      if not NumericA::posint then
         error "nothing valid to free";
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_FreeNumeric',
                                              extlib);
      extfun(NumericA);
      NumericA:=-1;
      NULL;
   end proc:
end module:

MM:=Matrix([[0.,0.,-25.],[-53.,-7.,0.],[0.,-70.,0.]],
           ':-storage'=':-sparse',':-datatype'=':-float[8]'):

with(SparseLU):

LU(MM);

VV:=Vector([1,1,1]);

VV := Vector(3, {(1) = 1, (2) = 1, (3) = 1})

ans:=Solve(3,VV);

ans := Vector(3, { sparse_data }, datatype = float[8], storage = sparse)

rtable_options(ans,':-subtype');

Vector[column]

LinearAlgebra:-Norm(MM.ans - VV);

0.

VV:=LinearAlgebra:-RandomMatrix(3,2,':-datatype'=':-float[8]');

VV := Matrix(3, 2, {(1, 1) = -53.0, (1, 2) = 40.0, (2, 1) = 21.0, (2, 2) = 97.0, (3, 1) = -25.0, (3, 2) = 43.0}, datatype = float[8])

ans:=Solve(3,VV);

ans := Matrix(3, 2, { sparse_data }, datatype = float[8], storage = sparse)

rtable_options(ans,':-subtype');

Matrix

LinearAlgebra:-Norm(MM.ans - VV);

0.


Download sparselu.mw

@ClearBlueSky In my follow-up, S is a formula for approximating MTBF. It is devoid of LambertW and GAMMA related functions.

So, you would need a function to compute the finite summation Sum(1/i!, i=0..n). Plug that and the values for R and t into the formula given by S.  I presume you can do that in Excel.

Thanks for adding information about the range of R=0.0..1.0 to your post.

It seems to me that a series approximation fits the MTBF well for R in that range, with t>0, n::posint, etc.

The approximating formula is given by S below. You indicated that you could handle a finite sum (which appears in S).

Let me know if I've misunderstood. (I discounted this earlier, since I didn't know the range of R, and the series approach breaks down for certain t and n when R>1.)

The plot and the Explore are meant to illustrate that ee, eeG, and S agree quite well in the given ranges of the parameters.

restart:

ee := solve(R = exp(t/MTBF) * (t/MTBF) * Sum(1/(i!), i=0..n),MTBF);

t/LambertW(R/(Sum(1/factorial(i), i = 0 .. n)))

eeG := solve(R = exp(t/MTBF) * (t/MTBF) * sum(1/(i!), i=0..n),MTBF);

t/LambertW(exp(-1)*R*GAMMA(n+1)/GAMMA(n+1, 1))

S := convert(series(ee,R=0,6),polynom);

(Sum(1/factorial(i), i = 0 .. n))*t/R+t-(1/2)*t*R/(Sum(1/factorial(i), i = 0 .. n))+(2/3)*t*R^2/(Sum(1/factorial(i), i = 0 .. n))^2-(9/8)*t*R^3/(Sum(1/factorial(i), i = 0 .. n))^3

eplot:=proc(T,N)
   plot(eval([ee, S, eeG],[t=T, n=N]),
        R=0..1,
        style=[line,point,point],
        symbol=[solidcircle,diagonalcross],
        symbolsize=[8,16],
        color=[green,black,red],
        numpoints=20);
end proc:

eplot(33.0, 5);

#Explore(eplot(t,n), parameters=[t=0..100, n=1..10]);

 


Download MTBF1.mw
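As a concrete numeric check of the "plug the values into S" recipe above (the values n=5, t=33.0, R=0.5 are chosen only for illustration), the finite sum and the formula can be evaluated directly:

restart:
ee := solve(R = exp(t/MTBF) * (t/MTBF) * Sum(1/(i!), i=0..n), MTBF):
S := convert(series(ee, R=0, 6), polynom):
# the finite summation that would be computed in Excel, here for n=5
evalf(value(eval(Sum(1/i!, i=0..n), n=5)));   # 163/60 = 2.716666667
# plug n, t, and R into the approximating formula S, and compare with the exact ee
evalf(value(eval(S, [n=5, t=33.0, R=0.5])));
evalf(value(eval(ee, [n=5, t=33.0, R=0.5])));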

@Markiyan Hirnyk He didn't ask for a closed form. He asked about something he could use in Excel.

Note that the latter part of my Answer (not Comment) was about what was giving him grief, not what I was suggesting to do. And the first part got rid of the GAMMA calls he seemed to not want. That left the LambertW. So in my followup Comment I discussed obtaining an approximating expression.

Now, obtaining a black-box approximating mechanism that allows for arbitrary float R and posint n, and that works in Excel, is more work...

If you know particular values for n and t as well as the range for R then you can use Maple to construct H = P(R)/Q(R) which approximates your expression quite well, where P(R) and Q(R) are polynomials in R with float coefficients.

For example (there are other similar ways... you can even get an estimate of the error bound).

restart:

ee := solve(R = exp(t/MTBF) * (t/MTBF) * Sum(1/(i!), i=0..n),MTBF);

t/LambertW(R/(Sum(1/factorial(i), i = 0 .. n)))

P := plot(value(eval(ee,[n=15,t=0.3])),R=0..3,color=green):

H := eval(numapprox:-chebpade(value(eval(ee,[n=15,t=0.3])),R=0.1..3,[4,4]),
     T = orthopoly[T]);

(-.1046598996+.3979979186*R+.2094792186*((20/29)*R-31/29)^2+0.2631635411e-1*((20/29)*R-31/29)^3+0.7344468442e-3*((20/29)*R-31/29)^4)/(-.6116268169+.8210912092*R+.6724478367*((20/29)*R-31/29)^2+.1360729330*((20/29)*R-31/29)^3+0.7230731934e-2*((20/29)*R-31/29)^4)

PH := plot(H, R=0..3, color=red, style=point, numpoints=20):

plots:-display(P, PH, view=-10..10);

 

 

Download LW2.mw
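And for an estimate of how good that chebpade fit is, one rough way (a sketch, reusing ee and H from the worksheet above, with the same n=15 and t=0.3) is to measure the maximum deviation over the fitted range with numapprox:-infnorm:

F := value(eval(ee, [n=15, t=0.3])):
# maximum absolute difference between the original expression and the
# rational approximant H over the range used for the fit
numapprox:-infnorm(F - H, R = 0.1 .. 3);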

 

@tomleslie Saving to .m format was not always discouraged. It's a convenient way to save results of time consuming computations. It's easier to do than to store in an .mla archive, especially if there are several such results to handle separately.

One major reason it is discouraged, AFAIK, is that it cannot handle everything: it can't handle modules and records in general, and localness is an issue. In many instances, however, that doesn't matter.

Some of the best resources on basic and introductory Maple programming are older texts and web resources, some of which use this functionality as a stock tool.

So it's not so surprising that people would utilize the functionality.
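For what it's worth, the basic usage pattern is as simple as this (a sketch; the file name and the "expensive" computation are just stand-ins):

bigresult := LinearAlgebra:-RandomMatrix(2000, 2000):  # stand-in for a long computation
save bigresult, "bigresult.m";   # write the assigned name to a .m file in currentdir()
restart;
read "bigresult.m";              # later session: the name bigresult is assigned again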

@tomleslie The order of the terms in the SUM DAG assigned to the name p is at issue here. It may be that conjoining the restart and the initial assignment to p produces a SUM with the terms stored in a different order than otherwise. But that is not the only way to get a SUM which exhibits the problem.

For example (and this is just showing a way to force the problem -- in general we would not know how p was formed and by what commands),


restart;

p:=x^2*y-2*y*z+3*x^2+2*y-z:
sort(p,order=plex(x,y,z), ascending);

cl:=[coeffs(p, [x,y,z])]:

[seq(`if`(cl[j]>0,seq(op(j,p)/cl[j], i=1..cl[j]),NULL),j=1..numelems(cl))];

                                             2    2  
                       -z + 2 y - 2 y z + 3 x  + x  y
               [  1      1            1  2    1  2    1  2  ]
               [- - z, - - z, -2 y z, - x  y, - x  y, - x  y]
               [  2      2            3       3       3     ]

restart;

p:=x^2*y-2*y*z+3*x^2+2*y-z:
sort(p,order=plex(x,y,z), descending);

cl:=[coeffs(p, [x,y,z])]:

[seq(`if`(cl[j]>0,seq(op(j,p)/cl[j], i=1..cl[j]),NULL),j=1..numelems(cl))];

                         2        2                  
                        x  y + 3 x  - 2 y z + 2 y - z
                [1  2    1  2              1      1      1  ]
                [- x  y, - x  y, -2 y z, - - z, - - z, - - z]
                [2       2                 3      3      3  ]

 


Download sumdagf.mw

As for the question of restart and additional commands in the same execution group, I have observed the problem occurring for commands which are intercepted by the GUI. By this I mean commands which get executed by the Java Standard GUI (!) rather than by the Maple kernel (aka mserver, aka engine). In particular the interface command falls into this problematic class, but I believe that there are others. (The difference can be established by running particular code that calls the intercepted commands at the top level in the GUI versus inside a procedure executed in the GUI.) For example, an interface call following a restart, in the same execution group, can get ignored by the GUI. So, sure, using a separate execution group for restart, alone, is the right way to go.

The main point is that the terms in a polynomial such as p may be sorted internally in different ways.

And, with a local X...


restart:

aliasedlatex:=proc(e)
  local lookup;
  lookup:=op(eval(alias:-ContentToGlobal)):
  :-latex(subs(lookup,e));
  NULL;
end proc:

local X;

alias(X=:-X(a,b,c)):

test:=diff(:-X(a,b,c),a);

diff(X(a, b, c), a)

latex(test);

{\frac {\partial }{\partial a}}X \left( a,b,c \right)

 

aliasedlatex(test);

{\frac {\rm d}{{\rm d}a}}X

 


Download aliasedlatexlocal.mw

Hmm, I guess that had better be subs instead of eval, or else derivatives become 0.


restart:

aliasedlatex:=proc(e)
  local lookup;
  lookup:=op(eval(alias:-ContentToGlobal)):
  :-latex(subs(lookup,e));
  NULL;
end proc:

alias(Y=X(a,b,c)):

test:=diff(X(a,b,c),a);

diff(Y, a)

latex(test);

{\frac {\partial }{\partial a}}X \left( a,b,c \right)

 

aliasedlatex(test);

{\frac {\rm d}{{\rm d}a}}Y

 


Download aliasedlatexsubs.mw

@Carl Love I did it that way so that the name UpdatingRead wouldn't itself appear in the Variable Manager, if the code was just pasted into a worksheet. It wasn't the only way to get that effect, but served that purpose.

Of course, one could instead savelib just the assigned UpdatingRead procedure to a .mla archive located in libname in future sessions.

I wanted the OP to see the addition of his .m file's missing names to the Variable Manager, using the posted Answer code in the simplest way, without the Variable Manager being cluttered by the name UpdatingRead itself.

Who knows... maybe someone will fix assign so that it too updates the Variable Manager, in which case the behaviour would be the same either way. It was intended only as the most minor aspect, and is not a big deal.
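For completeness, the .mla route mentioned above could look something like this (just a sketch; the directory and archive name are hypothetical, and UpdatingRead is assumed to be already assigned):

archive := "/home/me/maple/toolbox.mla":
LibraryTools:-Create(archive):             # create an empty .mla archive, once
LibraryTools:-Save(UpdatingRead, archive): # store the procedure in the archive
# in future sessions, put the archive in libname and the name is found on demand
libname := archive, libname: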

Dr. Subramanian, your line of questioning is valid, but please note that Roman's Post is about exact rational data. And your line of questioning above is about floating-point data.

You are right, Maple 2015's LinearAlgebra:-LinearSolve is missing the ability to do direct sparse (LU based) real floating-point linear solving at working precision Digits>15. That's a regression and I've submitted a bug report. It can however still do indirect, iterative sparse real floating-point linear solving using functions from chapter F11 of the NAG library.
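A rough sketch of what that iterative route looks like at hardware precision (the tridiagonal Matrix below is just a small stand-in; the relevant part is the method=SparseIterative option):

A := Matrix(100, 100, ':-storage'=':-sparse', ':-datatype'=':-float[8]'):
for i to 100 do
   A[i,i] := 4.0;
   if i < 100 then A[i,i+1] := -1.0; A[i+1,i] := -1.0; end if;
end do:
b := Vector(100, ':-fill'=1.0, ':-datatype'=':-float[8]'):
x := LinearAlgebra:-LinearSolve(A, b, ':-method'=':-SparseIterative'):
LinearAlgebra:-Norm(A . x - b);   # small residual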

I mentioned above (true at the time I wrote it) that LinearSolve was using NAG routine f01brf to LU factorize a real float Matrix and routine f04axf to do the subsequent solving for a given real float RHS. In Maple 2015 this is done at hardware working precision using functions (bundled in Maple in an external shared library) from UMFPACK.

You asked above about how to re-use the real floating-point sparse LU factorization. The following sets up and uses functions from UMFPACK to accomplish that. It works in my 64bit Linux version of Maple 2015 (and seems to also work in the 64bit Linux versions of Maple 18.02 back as far as, hmm, 15.01). It also seems to run ok in my 64bit Maple 2015.1 on Windows 7.

I've coded it with three stages: LU factorization, linear solving for one or multiple RHSs, and freeing of the LU factorization data. The idea -- which I believe is what you are looking for -- is the ability to factorize the LHS Matrix just once and then to do several separate linear-solving steps using different RHSs. Indeed that special use pattern is the only reason to use this code. Otherwise one would simply call LinearSolve.

One must be careful with the so-called prefactored data. It's stored internally in memory, and is not explicitly assigned to any Matrix or other structure. (The module code below simply saves a memory reference. Using an invalid memory reference would likely crash.)

The code below consists of 1D Maple Notation input, interspersed with prettyprinted plaintext output.

SparseLU:=module()
   option package;
   export LU,Solve,Free;
   local extlib, Anz, NumericA, numrows;
   LU:=proc(M::Matrix(:-storage=:-sparse,:-datatype=:-float[8],
                      :-order=:-Fortran_order))
      local extfun, numcols;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatFactor',
                                              extlib);
      (numrows,numcols):=op(1,M);
      (Anz,NumericA):=extfun(numrows,numcols,M);
      NULL;
   end proc:
   Solve:=proc(numrows::posint, V::{Matrix,Vector})
      local B,extfun,res;
      if type(V,'Matrix'(':-storage'=':-sparse',':-datatype'=':-float[8]',
                         ':-order'=':-Fortran_order')) then
         B:=V;
      else
         B:=Matrix(V,':-storage'=':-sparse',':-datatype'=':-float[8]',
                   ':-order'=':-Fortran_order');
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_MatMatSolve',
                                              extlib);
      if not(Anz::posint and NumericA::posint and numrows::posint) then
          error "invalid factored data";
      end if;
      res:=extfun(numrows,op([1,2],B),Anz,NumericA,B);
      res;
   end proc:
   Free:=proc()
      local extfun;
      if not NumericA::posint then
         error "nothing valid to free";
      end if;
      if extlib='extlib' then
         extlib:=ExternalCalling:-ExternalLibraryName("linalg",':-HWFloat');
      end if;
      extfun:=ExternalCalling:-DefineExternal(':-hw_SpUMFPACK_FreeNumeric',
                                              extlib);
      extfun(NumericA);
      NumericA:=-1;
      NULL;
   end proc:
end module:
 

MM:=Matrix([[0.,0.,-25.],[-53.,-7.,0.],[0.,-70.,0.]],
           ':-storage'=':-sparse',':-datatype'=':-float[8]');
                               [ 0.      0.     -25.]
                               [                    ]
                         MM := [-53.    -7.      0. ]
                               [                    ]
                               [ 0.     -70.     0. ]

 
with(SparseLU);

                               [Free, LU, Solve]


# precompute the sparse LU factorization
LU(MM);
 
# solve MM.ans=VV with VV a Vector
VV:=Vector([1,1,1]);

                                         [1]
                                         [ ]
                                   VV := [1]
                                         [ ]
                                         [1]

ans:=Solve(3,VV);

                                [-0.0169811320754717]
                                [                   ]
                         ans := [-0.0142857142857143]
                                [                   ]
                                [-0.0400000000000000]

MM,VV; # untouched, ok

                          [ 0.      0.     -25.]  [1]
                          [                    ]  [ ]
                          [-53.    -7.      0. ], [1]
                          [                    ]  [ ]
                          [ 0.     -70.     0. ]  [1]

# verify by computing forward error
LinearAlgebra:-Norm(MM.ans - Matrix(VV));

                                      0.

# solve MM.ans=VV with VV a Matrix
VV:=LinearAlgebra:-RandomMatrix(3,2,':-datatype'=':-float[8]');

                                    [-53.    40.]
                                    [           ]
                              VV := [21.     97.]
                                    [           ]
                                    [-25.    43.]

ans:=Solve(3,VV);

                      [-0.443396226415094    -1.74905660377358 ]
                      [                                        ]
               ans := [0.357142857142857     -0.614285714285714]
                      [                                        ]
                      [ 2.12000000000000     -1.60000000000000 ]

MM,VV; # untouched, ok

                     [ 0.      0.     -25.]  [-53.    40.]
                     [                    ]  [           ]
                     [-53.    -7.      0. ], [21.     97.]
                     [                    ]  [           ]
                     [ 0.     -70.     0. ]  [-25.    43.]

# verify by computing forward error
LinearAlgebra:-Norm(MM.ans - Matrix(VV));

                                      0.

 
# free the internal LU factorization data
Free();
 
# the factorization data is gone
Solve(3,VV);
Error, (in Solve) invalid factored data

After I posted this Answer, Markiyan provided a link as the source of this problem:  http://math.stackexchange.com/questions/1342291/evaluate-an-integral

I had not previously seen that stackexchange post (...I only follow the "maple" tag on that site). At least one responder there used the same method as I did to arrive at Int(-1/5*sin(w)^3/w^(3/5),w=0..infinity).

I only solved that last form numerically, in my Answer above. But the cited stackexchange thread contains a hint to a way (apart from Axel's other fine suggestion to use a Laplace transform) to get an exact result. The hint was that int(sin(u)*u^p,u=0..infinity) ought to be known. And Maple does know it. Now I'm sad I didn't test with the trig identity earlier, to get a difference of terms in that form. I was too happy with the form that had just the single sin call, as it would make my numeric approach with discretization easier!
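That known integral is the classical result Int(u^p*sin(u), u=0..infinity) = GAMMA(p+1)*cos(Pi*p/2) for -2 < p < 0, and Maple can produce an equivalent closed form directly, e.g. (a sketch):

int(sin(u)*u^p, u = 0 .. infinity) assuming p > -2, p < 0;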

Using 64bit Maple 2015.1 on Windows 7 (and it also worked when tried in my Maple 18.02),

 

restart:

P := Int(-1/5*sin(w)^3/w^(3/5),w=0..infinity);

Int(-(1/5)*sin(w)^3/w^(3/5), w = 0 .. infinity)

combine(P);

Int((1/20)*(sin(3*w)-3*sin(w))/w^(3/5), w = 0 .. infinity)

value(%);

(1/1440)*72^(4/5)*Pi^(1/2)*GAMMA(7/10)/GAMMA(4/5)-(3/40)*2^(2/5)*Pi^(1/2)*GAMMA(7/10)/GAMMA(4/5)

simplify(%);

(1/120)*Pi^(1/2)*2^(2/5)*(3^(3/5)-9)*GAMMA(7/10)/GAMMA(4/5)

 

 

Download resexact.mw

@Axel Vogt I was a little surprised that Maple didn't find an answer for int(-1/5*sin(w)^3/w^(3/5),w=0..infinity) on its own. Good job with the Laplace transform.

I got the numeric result by discretizing the oscillatory integrand and then using evalf/Sum, whose Levin's u-transform accelerates the convergence. Axel, you and I have discussed that approach before. In this case the period was constant (2*Pi), but for a varying period NextZero may be used. But it seems this approach sometimes needs a better implementation of that acceleration algorithm than what's currently in evalf/Sum.
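In case anyone wants to reproduce that, here is a rough sketch of the discretization (the unevaluated-return idiom keeps the terms from being computed prematurely for a symbolic index):

# numeric integral over the k-th period of length 2*Pi; return unevaluated for symbolic k
F := proc(k)
   if not type(k, ':-numeric') then return 'procname'(k); end if;
   evalf(Int(-1/5*sin(w)^3/w^(3/5), w = 2*Pi*k .. 2*Pi*(k+1)));
end proc:
# evalf/Sum accelerates the convergence of this slowly converging infinite sum
evalf(Sum(F(k), k = 0 .. infinity));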

 

@Preben Alsholm This sounds like it fits the pattern of a known regression for 1D Maple Notation input in the Standard GUI of Maple 2015.0, which was fixed in Maple 2015.1. Namely, in a single execution group, an error gets emitted if there are multiple prompts and some aren't separated by a statement terminator. As you say, normally that is not supposed to matter.

In Maple 2015.1 it's ok. In Maple 18.02 and earlier it was ok.
