acer

31404 Reputation

29 Badges

19 years, 133 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

> kernelopts(printbytes=false):
>
> p := "(0.575849 Cos[0.383015 t] Sin[0.383015 t] -\
>     1.66533*10^-16 Cos[2.33523 t] Sin[0.383015 t] +\
>     1.11022*10^-16 Cos[0.383015 t] Sin[2.33523 t] +\
>     1.01122 Cos[2.33523 t] Sin[2.33523 t])^2 + (0.462311 Cos[\
>       0.383015 t]^2 + 3.33067*10^-16 Cos[0.383015 t] Cos[2.33523 t] +\
>     0.537689 Cos[2.33523 t]^2 - 0.462311 Sin[0.383015 t]^2 -\
>     1.38778*10^-16 Sin[0.383015 t] Sin[2.33523 t] -\
>     0.537689 Sin[2.33523 t]^2)^2 == 1":
>
> eq := MmaTranslator:-FromMma(p):
>
> Digits:=30:
> s1 := fsolve(eq, t);
                  s1 := 0.111758779601256604676477981151e-7
 
> eval(eq,t=s1);
                     0.999999999999999999999999999998 = 1
 
>
> s2 := fsolve(eq, t, avoid={t=s1});
                 s2 := -0.111758779601256604676477981151e-7

acer

Is t\[Tau] = 0 a solution? Did you need some other, nontrivial solution?

Did you want a real solution? (I'm not sure that one exists.) If not, then how about,

t\[Tau] = 1.0
HBar = -0.7334865113 - 0.7613376564*I

Or are you looking for some sort of characterization or parametrization of all the solutions as a relation between t\[Tau] and HBar?

ps. It is easier for us if you use Maple syntax.

acer

As shown above, solve() can deal with this example nicely. But LinearSolve does not handle the initial problem nicely, and won't produce an answer until the RHS is instantiated at some partial solution (i.e., an instantiation of the RHS which makes the system's conditional consistency more apparent).

A natural question to ask is whether LinearSolve might be made smarter, to get the solution itself right away on the original input with the symbolic RHS.

For LinearSolve with method=LU, it might compute something like the following.

> LUDecomposition(Matrix([a,b]),output=':-U')[Rank(a)+1,1..-1];
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 
    -2 + exp(-2 B J_1 + B J_2) + exp(2 B J_1 + B J_2)]

The question then would be how to show a qualifying proviso alongside a returned result, one which usefully indicates that the right-most augmented entries above must all equal zero for the result to hold.

And then there is method=solve to deal with. That is actually the method currently being used for LinearSolve(a,b), as userinfo messages show. It forms the equations and then calls SolveTools:-Linear, and that call is returning NULL. So it'd be desirable to figure out why, and how it might be avoided or followed up with a more general `solve` call, etc.
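For reference, those userinfo messages can be displayed by raising infolevel before the call (with a and b being the Matrix and Vector from the example):

infolevel[LinearAlgebra] := 5:      # show LinearAlgebra's userinfo messages
LinearAlgebra:-LinearSolve(a, b);   # the messages report which method is used
infolevel[LinearAlgebra] := 0:      # restore the default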

acer

I see no Matrix in use in this code. What I see are tables M and L, in the 2nd code.

Your claim, "changed my code to multi-threaded type" doesn't look true to me. Both 1st code and 2nd code use Threads in the same way, no? Isn't the only difference that of using tables versus using integers?

I don't know what paper you refer to, or what sort of data it deals with. Maybe it is discussing access to Maple's immediate integers versus other data. A more appropriate comparison might be small integers versus entries of a pre-allocated datatype=integer[8] Matrix. But then, I'm not sure why you'd want to muddy the waters of such a comparison by wrapping it all up in Threads use.

acer

Have a look at the help-page ?INTERFACE_HELP

I believe that a general strategy could be something like this,

  • copy an entire existing help-page to an empty session Worksheet.
  • edit it to taste
  • save it as some new .mw file
  • run INTERFACE_HELP on that .mw file

I'm not sure whether you have to worry about libname, and create a new empty .hdb help archive file (found first in libname), so as to ensure that no previous "system" .hdb gets written to by mistake. The INTERFACE_HELP help-page suggests that it is so, however. So, before running INTERFACE_HELP, make sure that some location owned by you is first in libname. (And then it remains to be seen whether it will form a new .hdb for you, or not...)
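That libname adjustment might look something like this (the directory path here is hypothetical; substitute a writable location of your own):

# prepend a user-owned directory to libname, so that any newly created
# .hdb help archive lands there rather than in a system location
libname := "/home/me/maple/myhelp", libname: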

acer

In your first attempt, you are trying LinearSolve with the zero Vector as the right-hand side. Maple is saying that that system is inconsistent. But then you use quite a different, nonzero rhs in your solve call.

The solve call (as below, and not wrapped in an assign call) produces three results. For each of those, the rhs seems to evaluate to a Vector with all entries equal to 1 (just with differing values of B, J_1, and J_2 to get that).

gl1 := t[11] + t[13] + t[31] + t[33] = exp(2*B*J_1 + B*J_2):
gl2 := t[21] + t[23] + t[31] + t[33] = 1:
gl3 := t[11] + t[13] + t[41] + t[43] = 1:
gl4 := t[12] + t[13] + t[32] + t[33] = 1:
gl5 := t[11] + t[14] + t[31] + t[34] = 1:
gl6 := t[21] + t[23] + t[41] + t[43] = exp(-2*B*J_1 + B*J_2):
gl7 := t[22] + t[23] + t[32] + t[33] = exp(2*B*J_1 - B*J_2):
gl8 := t[21] + t[24] + t[31] + t[34] = exp(-2*B*J_1 - B*J_2):
gl9 := t[12] + t[13] + t[42] + t[43] = exp(-2*B*J_1 - B*J_2):
gl10 := t[11] + t[14] + t[41] + t[44] = exp(2*B*J_1 - B*J_2):
gl11 := t[12] + t[14] + t[32] + t[34] = exp(-2*B*J_1 + B*J_2):
gl12 := t[12] + t[14] + t[42] + t[44] = 1:
gl13 := t[22] + t[24] + t[32] + t[34] = 1:
gl14 := t[21] + t[24] + t[41] + t[44] = 1:
gl15 := t[22] + t[23] + t[42] + t[43] = 1:
gl16 := t[22] + t[24] + t[42] + t[44] = exp(2*B*J_1 + B*J_2):
 
a,b:=LinearAlgebra:-GenerateMatrix(
 {gl1,gl2,gl3,gl4,gl5,gl6,gl7,gl8,gl9,gl10,gl11,gl12,gl13,gl14,gl15,gl16},
 [t[11],t[12],t[13],t[14],t[21],t[22],t[23],t[24],t[31],t[32],t[33],t[34],
  t[41],t[42],t[43],t[44]]);
                                                                                
LinearAlgebra:-LinearSolve(a,b);
 
s := solve({gl1, gl2, gl3, gl4, gl5, gl6, gl7, gl8, gl9, gl10, gl11,
            gl12, gl13, gl14, gl15, gl16});
 
nops([s]);
 
seq(eval(b,[s][i])^%T,i=1..nops([s]));
 
LinearAlgebra:-LinearSolve(a,eval(b,[s][1]));

acer

An important difference between,

(evalf@eval(f))/10^Digits;

and,

(evalf@f)()/10^Digits;

is that the latter involves a function call. The latter applies evalf@f to (), which means it calls it with null arguments. That's why it returns a float when used in exactly that way inside `uniform`.

On the other hand, the former variant doesn't apply evalf@eval(f) to anything, and so just gives a procedure as the returned result from `uniform` instead of some specific float value. That procedure result is what gets assigned to U, in the programming guide. And calls to that U then result in varying numbers.

Your posted version of `uniform` contained the form (evalf@f)()/10^Digits, which doesn't match the version in the guide. It differs in the way described above, by being a function call, as well as by omitting the eval().

Maybe you would be interested in forming procedure `uniform` with the variant,

(evalf@f)/10^Digits;

and then considering the behaviour of that? Or has your understanding of the need for the eval() already been made clear, and it's just the difference between applying a proc and "applying" a number (which simply returns that same number) that you wonder about?
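To experiment with the distinction, here is a minimal sketch in the spirit of the guide's `uniform` (the range handling is simplified and assumed, not quoted from the guide):

myuniform := proc()
  local f;
  f := rand(0 .. 10^Digits - 1);  # rand returns a procedure, not a number
  (evalf@eval(f))/10^Digits;      # a composed procedure is returned, not called
end proc:
U := myuniform():  # U is itself a procedure
U(); U();          # successive calls produce varying floats between 0 and 1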

acer

Are you after something like this,

> restart:
> assume(b > 0);
> x := b+1;
                                  x := b~ + 1
 
> x:=subs(b='b',x);
                                  x := b + 1
 
> b:='b';
                                    b := b
 
> about(b);
b:
  nothing known about this object

acer

What happens if you try to assign evalf(Pi,1000000) to a name (variable) and then end the line with a colon, so as to suppress printing of the result?
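Spelled out, that experiment would be:

p := evalf(Pi, 1000000):  # the colon suppresses the million-digit printout
length(p);                # a cheap check that the value really was computed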

acer

You can use the time() command to measure how long a computation takes. See the help-page ?time for more details.
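For example, a basic timing might look like this (the integral here is just a stand-in for whatever computation interests you):

st := time():
int(sin(x)/x, x = 0 .. 1):  # the computation being timed
time() - st;                # elapsed CPU time, in seconds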

You might also be interested in ?profile and ?nprofile. The routine showtime() seldom gets mentioned, so maybe it is unpopular.

Note that, for a general program (which may not do numeric solving in repeated steps) the concept of "iterations" is not always meaningful. Even a command like Matlab's flops() doesn't make much sense in a computer algebra system.

It's likely off topic, but you might possibly be interested in ?SoftwareMetrics .

ps. flops is no longer available in current Matlab.

acer

Try emailing it to support@maplesoft.com .

acer

You may want to contact support@maplesoft.com

Without seeing the actual code it's difficult to tell whether it is exiting due to some limit near 500MB, or whether it is sitting at 500MB and failing to allocate memory far past 2GB for some new rtable object. I would guess the latter, but without seeing the code I cannot tell how it may be made to work.

You might also consider uploading your code to this site using the green up-arrow to access the site's FileManager.

acer

The method=float should be OK, as it should call externally if it can. The code shows that,

kernelopts(opaquemodules=false):
showstat(LinearAlgebra:-LA_Main:-`Determinant/float`);

Now, it sounds like Maple is creating too many copies of your data, along the way. Let's see how to cut that down.

Don't create your Matrix with storage=sparse, because the fast compiled external routine exists only for non-sparse storage (eg, full rectangular). So for a sparse-storage Matrix, Maple is going to copy it to full rectangular anyway.

And, as Alec says, create it with datatype=float[8]. So that's one or two copies removed.

Each copy of a 6000x6000 float[8] Matrix will take about 6000*6000*8 bytes, or roughly 288 MB, of memory to store.

Now, Maple usually has to create at least one copy of your Matrix, so that it can do an LU decomposition on that copy in-place. It makes a copy so that it doesn't have to overwrite your own original data. So, it could take about 600MB of allocated memory to get the job done if you were simply to call Determinant() on a float[8] rectangular-storage 6000x6000 Matrix. It sounds to me as if your machine has enough memory for that.

The rest is just for fun, in case you want to halve the memory requirement further.

If the original Matrix data is not important to you then you can do the steps yourself and act in-place on your original. That would save a copy. And the total memory use should then only be about 300MB or so. The tricky thing about doing it this way is to get the sign of the scalar result correct. Here's some code that does it on a random Matrix,

> N:= 6000;
                                   N := 6000

> with(LinearAlgebra):

> M := RandomMatrix(N,generator=-0.1..0.1,density=0.05,
>                   outputoptions=[datatype=float[8]]):
> for i from 1 to N do M[i,i]:=1.0: end do:

> # for testing only
> #origM := copy(M);
> #Digits:=trunc(evalhf(Digits)):Determinant(origM);

> P,M := LUDecomposition(M,inplace=true,output=['NAG']):

> d := proc(M::Matrix,n)
> local i,res;
>   res := 1.0:
>   for i from 1 to n do
>     res := res*M[i,i];
>   end do;
> end proc:

> evalhf(d(M,N));
                              9.68585766336265408
 
> quit
memory used=277.3MB, alloc=276.8MB, time=62.88

As I mentioned, getting the sign correct needs a little more work. It can be done by walking through the permutation Vector P created above. I don't have time at the moment to figure out code for that, but someone may find it an amusing exercise. It's in the format of the IPIV parameter of the CLAPACK routine dgetrf.

Can anyone think of a way to get the determinant efficiently from sparse float[8] Matrix by doing a LinearSolve? I mention it because there is a sparse linear solver, whose use would not require copying to full rectangular storage.

acer

It's good to point out these things, and ask these questions.

What if you wanted to get your hands on that 244 value for Pu, programmatically, so that your code could use it regardless?

What should we make of the Scientific Constants assistant (top menubar's Tools -> Assistants -> Scientific Constants) ? When the same query is made within that assistant, a Maplet error pops up and the value can't even be cut'n'pasted (Linux).

This is one of those interesting situations where Maple wants to let you know something special, as well as give a partial result. Warnings aren't satisfying in these situations since they can't be trapped programmatically. Issuing errors makes the user feel that he's done something wrong, and can make it very difficult to get at the value.

The `solve` routine does this sort of thing by setting a global variable _SolutionsMayBeLost, which may be queried programmatically after a computation. But would Maple be better with more use of global or environment variables like this?

So, what's the best solution?

ps. Why does querying ?_EnvSolutionsMayBeLost return no help-page?

acer

Trying to follow up on Joe's suggestion to rework the code to avoid the problem: Maple can collect garbage and reclaim the memory of variables which are no longer referenced. You might be able to put the looping action inside a procedure, and then call that procedure several times, to work in batches, all from within Maple. Such a procedure might save the variables which you need to keep and, each time the procedure is exited, could allow other transient data to be collected and some memory to be reclaimed for the same ongoing session.
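A rough sketch of that batching idea (the procedure name and the transient computation are hypothetical placeholders):

dobatch := proc(lo, hi)
  local i, transient, total;
  total := 0;
  for i from lo to hi do
    transient := [seq(j^2, j = 1 .. 1000)];  # stand-in for large transient data
    total := total + transient[-1];
  end do;
  total;  # only the returned value is kept; on exit the transient
          # data becomes collectible, so memory can be reclaimed
end proc:
results := [seq(dobatch(100*(k-1)+1, 100*k), k = 1 .. 10)]: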

It would be easier to make concrete suggestions if you could post a small but representative piece of code which illustrates what you are trying already.

acer
