acer



These are answers submitted by acer

You can use the time() command to measure how long a computation takes. See the help-page ?time for more details.

You might also be interested in ?profile and ?nprofile. The routine showtime() seldom gets mentioned, so maybe it is unpopular.

Note that, for a general program (which may not do numeric solving in repeated steps) the concept of "iterations" is not always meaningful. Even a command like Matlab's flops() doesn't make much sense in a computer algebra system.
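For example, a minimal sketch (the integral here is just an arbitrary computation to time):

st := time():
int(1/(x^4+1), x):       # some arbitrary computation to be timed
time() - st;             # CPU seconds consumed by the computation

The ?time help-page also describes time[real]() for wall-clock rather than CPU time.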

It's likely off topic, but you might possibly be interested in ?SoftwareMetrics .

ps. flops is no longer available in current Matlab. Here is some discussion of that, picked at pseudorandom.

acer

Try emailing it to support@maplesoft.com .

acer

You may want to contact support@maplesoft.com

Without seeing the actual code it's difficult to tell whether it is exiting due to some limit near 500MB, or whether it is sitting at 500MB allocated and failing to allocate memory well past 2GB for some new rtable object. I would guess the latter, but without seeing the code I cannot tell how it may be made to work.

You might also consider uploading your code to this site, using the green up-arrow to access the site's FileManager.

acer

The method=float should be OK, as it should call externally if it can. The code shows that,

kernelopts(opaquemodules=false):
showstat(LinearAlgebra:-LA_Main:-`Determinant/float`);

Now, it sounds like Maple is creating too many copies of your data, along the way. Let's see how to cut that down.

Don't create your Matrix with storage=sparse, because the fast compiled external routine exists only for non-sparse storage (eg, full rectangular). For a sparse-storage Matrix, Maple is going to copy the data to full rectangular storage anyway.

And, as Alec says, create it with datatype=float[8]. So that's one or two copies removed.

Each copy of a 6000x6000 float[8] Matrix will take about 36*8 = 288 MB of memory to store.
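That figure can be checked directly (each float[8] entry is a hardware double of 8 bytes):

6000 * 6000 * 8;      # 288000000 bytes per copy
evalf(% / 10^6);      # about 288 MB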

Now, Maple usually has to create at least one copy of your Matrix, so that it can do an LU decomposition in-place on the copy without overwriting your own original data. So it could take about 600MB of allocated memory to get the job done if you were simply to call Determinant() on a float[8] rectangular-storage 6000x6000 Matrix. It sounds to me as if your machine has enough memory for that.

The rest is just for fun, in case you want to halve the memory requirement further.

If the original Matrix data is not important to you then you can do the steps yourself and act in-place on your original. That would save a copy. And the total memory use should then only be about 300MB or so. The tricky thing about doing it this way is to get the sign of the scalar result correct. Here's some code that does it on a random Matrix,

> N:= 6000;
                                   N := 6000

> with(LinearAlgebra):

> M := RandomMatrix(N,generator=-0.1..0.1,density=0.05,
>                   outputoptions=[datatype=float[8]]):
> for i from 1 to N do M[i,i]:=1.0: end do:

> # for testing only
> #origM := copy(M);
> #Digits:=trunc(evalhf(Digits)):Determinant(origM);

> P,M := LUDecomposition(M,inplace=true,output=['NAG']):

> d := proc(M::Matrix,n)
> local i,res;
>   res := 1.0:
>   for i from 1 to n do
>     res := res*M[i,i];
>   end do;
> end proc:

> evalhf(d(M,N));
                              9.68585766336265408
 
> quit
memory used=277.3MB, alloc=276.8MB, time=62.88

As I mentioned, getting the sign correct needs a little more work. It can be done by walking through the permutation Vector P created above. I don't have time at the moment to figure out code for that, but someone may find it an amusing exercise. It's in the format of parameter IPIV of the CLAPACK routine dgetrf.
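One way to get that sign (a sketch, untested, and assuming P is in dgetrf's IPIV format, where P[i] holds the index of the row that was interchanged with row i): each actual interchange flips the sign of the determinant, so count the entries where P[i] differs from i.

sgn := 1:
for i from 1 to N do
  if P[i] <> i then sgn := -sgn; end if;  # one row interchange flips the sign
end do:
sgn * evalhf(d(M,N));                     # signed determinant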

Can anyone think of a way to get the determinant efficiently from sparse float[8] Matrix by doing a LinearSolve? I mention it because there is a sparse linear solver, whose use would not require copying to full rectangular storage.

acer

It's good to point out these things, and ask these questions.

What if you wanted to get your hands on that 244 value for Pu, programmatically, so that your code could use it regardless?

What should we make of the Scientific Constants assistant (top menubar's Tools -> Assistants -> Scientific Constants) ? When the same query is made within that assistant, a Maplet error pops up and the value can't even be cut'n'pasted (Linux).

This is one of those interesting situations where Maple wants to let you know something special, as well as give a partial result. Warnings aren't satisfying in these situations since they can't be trapped programmatically. Issuing errors makes the user feel that he's done something wrong, and can make it very difficult to get at the value.

The `solve` routine does this sort of thing by setting a global variable _SolutionsMayBeLost, which may be queried programmatically after a computation. But would Maple be better with more use of global or environment variables like this?

So, what's the best solution?

ps. Why does querying ?_EnvSolutionsMayBeLost return no help-page?

acer

Trying to follow up on Joe's suggestion to rework the code to avoid the problem: Maple can collect garbage and reclaim the memory of variables no longer referenced. You might be able to put the looping action inside a procedure, and then call that procedure (several times) to work in batches, all from within Maple. Such a procedure could save the variables which you need to keep and, each time the procedure is exited, could allow other transient data to be collected and some memory to be reclaimed for the same ongoing session.
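Here is a rough sketch of that idea (the procedure name and the busy-work inside it are just placeholders): the locals become collectible once each call returns, and only the returned value is retained.

dobatch := proc(batchsize::posint)
  local i, transient, keep;
  keep := 0;
  for i from 1 to batchsize do
    transient := [seq(k^2, k = 1 .. 10000)];   # large temporary data
    keep := keep + transient[10000];           # retain only a summary value
  end do;
  return keep;
end proc:

total := add(dobatch(100), batch = 1 .. 20):   # garbage collectible between calls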

It would be easier to make concrete suggestions if you could post a small but representative piece of code which illustrates what you are trying already.

acer

It's how synthetic elements are handled, I think. See the third paragraph here.

There are also several references which you might check this with, in that article.

acer

Try setting Digits a little higher, say to 13 or more.

The fsolve routine may be encountering difficulty in ascertaining that the residuals are acceptably small (after having added some guard digits). This may be an example where it would be nicer to be able to forcibly keep fsolve's working precision and the tolerances (stopping criteria) separate.

acer

> convert(987654321,base,10);
                          [1, 2, 3, 4, 5, 6, 7, 8, 9]

> ListTools:-Reverse(%);
                          [9, 8, 7, 6, 5, 4, 3, 2, 1]
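If the goal is the reversed digits recombined as a single integer, the list can be summed against powers of ten; a small sketch (the reversed list, read least-significant digit first, encodes the digit-reversed number):

R := ListTools:-Reverse(convert(987654321, base, 10)):  # [9, 8, ..., 1]
add(R[i] * 10^(i-1), i = 1 .. nops(R));                 # 123456789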

You might also search this site and find some older posts on related topics. Here's an example.

acer

Note that abs(H)^(1/2) * sign(H) is not actually a Maple function. It is an expression. Now, plot supports both functional and expression forms. And confusing the two, in plot calls especially, is a FAQ.

Maple's first action upon doing a plot() call is to evaluate the arguments. And for unknown and unassigned H the subexpression sign(H) evaluates immediately to simply 1 (one). That's why this next call below doesn't produce the plot that you expected: abs(H)^(1/2) * sign(H) evaluates up front to simply abs(H)^(1/2) .

plot( abs(H)^(1/2) * sign(H), H=-1..1);

Using the functional or operator form of the inputs to plot(), or using unevaluation quotes with the expression form, will produce what was expected.

plot( H->abs(H)^(1/2) * sign(H), -1..1);

plot( abs(H)^(1/2) * 'sign'(H), H=-1..1);

acer

The described behaviour sounds like what happens when the machine runs out of physical memory (RAM) and then starts to use virtual memory. Virtual memory, as RAM imitated by hard disk, is much, much slower than real RAM. The shifting of memory in and out of such virtual space (swapping) is so slow that the machine can appear to be frozen (even if it's only temporary, if for a long time).

If the machine is greatly "swapped out", then the OS itself may be very slow. And hence it may appear that the Maple red "stop-sign" button is not functioning.

Computer algebra systems, which do exact symbolic computations, are well-known to consume vast amounts of memory. It's often intrinsic to exact symbolic computation, some might say.

I would suggest that you find out how to use the Maple start-up options which control the amount of memory that can be allocated. On Unix/Linux/OSX this can be done by launching the program with the -T switch. I am not sure how it is done using graphical launcher buttons on MS-Windows or OSX, but I'm sure that someone here can state it. When started with a hard memory limit, Maple will stop when it reaches the limit, but I believe that saving the worksheet should then be OK. The key bit would be to make the hard limit less than the total amount of physical RAM in your computer (not just less than the total amount of OS virtual memory).

As far as Matrices and LinearAlgebra go, you made another post a short while back about being confused and frustrated by matrices, Matrices, <<>> notation, the Matrix() constructor, LinearAlgebra, and LinearAlgebra[Generic]. I can offer this advice on that:

  • Avoid lowercase matrix, vector, and array. They are for the deprecated linalg package. Use uppercase Matrix and Vector.
  • The angle-bracket <<>> notation builds an uppercase Matrix, just like the Matrix() constructor routine does. Similarly for <> and Vector().
  • Don't load either of LinearAlgebra or LinearAlgebra[Generic] (using the `with` command) if you intend to use routines from each side by side. Instead use their long-form names, such as LinearAlgebra[MatrixAdd], etc, to keep it explicitly clear. Read their help-pages if you are unsure which package does what.
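To illustrate the last two points (a small sketch, with neither package loaded): the angle-bracket notation and the Matrix() constructor build the same object, and long-form names work without with().

A := << 1, 2 > | < 3, 4 >>:        # 2x2 Matrix via angle-bracket notation
B := Matrix([[1, 3], [2, 4]]):     # the same Matrix via the constructor
LinearAlgebra[Equal](A, B);        # true
LinearAlgebra[MatrixAdd](A, B);    # long-form name, no with() needed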

acer

The term abstract linear algebra is quite often used to cover these tasks.

If you enter abstract linear algebra into the mapleprimes top-bar search field you can see that it's on several people's wish-lists.

acer

The Statistics[NonlinearFit] command can also do this sort of problem. I believe that it actually calls LSSolve to do the work. So which you prefer to use may be a matter of taste.

X:=map(t->op(1,t),data):
Y:=map(t->op(2,t),data):

model := ((A*exp((2*Pi*x)/L))/(1+exp((2*Pi*x)/L)))+((B*exp((2*Pi*x)/L))/(1+exp((2*Pi*x)/L))^2):

Statistics[NonlinearFit](model,X,Y,x);

The results I saw were the same as what Robert obtained with LSSolve.

acer

> RootFinding:-NextZero(x->BesselJ(0, x),0);
                                  2.404825558
 
> RootFinding:-NextZero(x->BesselJ(0, x),%);
                                  5.520078110
 
> RootFinding:-NextZero(x->BesselJ(0, x),%);
                                  8.653727913
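The repetition above can be wrapped in a loop; a sketch that collects the first several zeros into a Vector:

zlist := Vector(5):
pt := 0:
for i from 1 to 5 do
  pt := RootFinding:-NextZero(x -> BesselJ(0, x), pt);  # next zero past pt
  zlist[i] := pt;
end do:
zlist;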

acer

When you see 0.5e-1*x it means 0.5e-1 times x. And the "e" in 0.5e-1 is scientific notation (base 10). The "e-1" on the end means that it is scaled by 10^(-1).

So, 0.5e-1 = 0.5*10^(-1) = 0.05
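A quick sanity check of that in Maple:

evalb( 0.5e-1 = 0.05 );   # true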

On the other hand, the "e" in y=e^0.05 is the base of the natural logarithm, which gets translated to the exponential function exp().

Does that make sense? Can you copy and paste it back from notepad into Maple and get the correct object once again?

acer
