acer

32587 Reputation

29 Badges

20 years, 38 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

The strange digits at the very end of the displayed numbers in the results are merely artefacts and can be safely ignored.

If you lprint() the results you will see that the Vector (or Matrix) in question has a float[8] datatype. That means hardware double precision, and comes about because those floating-point results are computed using a precompiled library (external to the main Maple kernel). Now, hardware double precision is a base-2 (binary) representation, but Maple shows the results in base-10 so that we can recognize them.

Those strange trailing digits are artefacts of the conversion from the underlying base-2 stored value to the (nearest possible) base-10 number. Notice how those artefacts lie in decimal digit places even beyond the 14th or 15th, despite that being past trunc(evalhf(Digits)). Personally, I would prefer that those digits weren't displayed at all, and that only trunc(evalhf(Digits)) digits were shown for float[8] datatype Vector/Matrix/Array objects.
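
For example, a quick check along these lines (the data below is made up) shows both the datatype and the hardware working precision:

> M := Matrix([[0.1, 0.2]], datatype=float[8]):  # stored as hardware doubles (base 2)
> lprint(M);                                     # reveals the float[8] datatype
> trunc(evalhf(Digits));                         # decimal digits carried by a hardware double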

acer

An example. First I create a .mla Maple archive (in a GUI or commandline session, it doesn't matter which).

> restart:
> libname:=kernelopts(homedir),libname:
> LibraryTools:-Create(cat(kernelopts(homedir),kernelopts(dirsep),"foo.mla"));

Then I save a procedure to that archive. This bit could be automated by a nice batch file written to run cmaple against input source files which define procedures. The point is that the procedure below would be defined in a plaintext file, and not a .mw or .mws file. The source file would get read into cmaple, not the GUI. The LibraryTools:-Save call could be present in the source file or concatenated by the batch script.

> restart:
> p := proc(x) x^2; end proc:
> LibraryTools:-Save(p,cat(kernelopts(homedir),kernelopts(dirsep),"foo.mla"));
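
For instance, the contents of such a plaintext source file might be little more than the following sketch (the file layout is hypothetical, and it would be fed to cmaple, say as redirected input, by the batch script). Note that the LibraryTools:-Save call can live right in the file:

# hypothetical plaintext source file, read by cmaple rather than the GUI
p := proc(x)
    x^2;
end proc:

LibraryTools:-Save(p, cat(kernelopts(homedir), kernelopts(dirsep), "foo.mla"));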

Then, in a new session (GUI or commandline, your choice), the procedure is available.

> restart:
> libname:=kernelopts(homedir),libname:
> p(3);

                                       9

acer

You could think of the process as similar to that of compilation of C source and linking of the .o object into a dynamic library.

In that C analogy, when one wishes to amend the code, one doesn't usually disassemble the object file (though it's possible, sometimes with varying degrees of difficulty). Usually one goes back and edits the source file, and then recompiles and relinks it into a .so/.dll/.dylib.

The same is usually true for Maple, provided that you have the source file. Edit the source file, run maple against it, and then issue a savelib() or LibraryTools call once again to get it into a .mla archive.

If you lack the source file for a procedure, you can often reproduce it by setting interface(verboseproc=3) and calling eval() on the procedure name. If you have issued a writeto() before that eval() call, then the output is redirected to a file. A minor amount of cleanup, such as adding a (semi)colon after the `end proc`, might well be enough. This sort of approach can reproduce a working, reloadable source file for quite a lot of procedures. But I wouldn't bother with it if I had the source, and I certainly wouldn't bother doing it more than once for a given procedure.
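
A minimal sketch of that recovery, assuming a procedure named p and a hypothetical output file name:

> interface(verboseproc=3):
> writeto("p_source.txt"):    # subsequent output is redirected to this file
> eval(p);
> writeto(terminal):          # restore output to the session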

So, I would keep the sources of larger projects in plaintext files. I'd edit them with a fast, lightweight editor such as `vi`. I'd familiarize myself with the commandline maple interface (cmaple on Windows), and with how to save to .mla archives. As plaintext, I'd never have to worry about losing valuable source embedded in a more easily corruptible or unviewable Worksheet or Document. And I could set my .mapleinit or maple.ini file so that my .mla archives were easily accessible across maple sessions.
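
The initialization file (.mapleinit on Linux/OSX, maple.ini on Windows) needs only a line along these lines, where the archive directory shown is hypothetical:

libname := cat(kernelopts(homedir), kernelopts(dirsep), "maple_archives"), libname: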

Others may prefer to do it differently.

acer

I'm not sure that I trust `is` to get to the simple type checks quickly enough. So I code it explicitly, with the type-check first. It's mostly (but not entirely) a habit.
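
A minimal sketch of the pattern, with a made-up predicate for illustration:

> ispos := proc(x)
>     # cheap explicit type check first; `is` is consulted only for symbolic input
>     type(x, positive) or is(x > 0);
> end proc:
> ispos(2);                   # decided by type() alone
> ispos(a) assuming a > 0;    # falls through to is()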

acer

One can see which external compiled function Maple's LinearAlgebra:-Eigenvectors uses, for a generalized eigenvector problem A.x=lambda.B.x where A is symmetric floating-point and B is symmetric positive-definite floating-point.

> A := Matrix(1,shape=symmetric,datatype=float):
> B := Matrix(1,shape=symmetric,datatype\
> =float,attributes=[positive_definite]):
> infolevel[LinearAlgebra]:=1:
> LinearAlgebra:-Eigenvectors(A,B):
Eigenvectors:   "calling external function"
Eigenvectors:   "NAG"   hw_f02fdf

A web search for f02fdf gives this link which documents that function. It says, of the parameter array B, "the upper or lower triangle of B (as specified by UPLO) is overwritten by the triangular factor U or L from the Cholesky factorization of B".

This suggests that, with the shape and attributes on Maple Matrices A and B as in the example above, LinearAlgebra:-Eigenvectors will use a method involving the Cholesky factorization of B. The documentation goes on to say, in the Further Comments section, "F02FDF calls routines from LAPACK in Chapter F08."

For some years now, Matlab has been using LAPACK. See this link from 2000 for an early note on that. It appears from a mirror of the release notes of Matlab 6.0 that the eig function was enhanced to solve exactly the same positive-definite symmetric generalized eigenproblem with the syntax eig(A,B,'chol').

I wouldn't be surprised if these two products' schemes were (at least when originally introduced) very similar implementations of an alternative to the usual QZ/QR algorithm. Notice, however, a difference in behaviour between the two systems. In Maple it is the shape and attributes of the Matrices which allow the routine to select the algorithm; the algorithm cannot otherwise be forced. In Matlab the data is pretty much without qualities, and I don't think any clever method deduction can be done. But the method can be forced by an option to the routine.
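
For instance, a small concrete problem set up the same way (the data here is made up) should be steered down that same Cholesky-based path, as the infolevel message should confirm:

> A := Matrix([[2.0, 1.0], [1.0, 3.0]], shape=symmetric, datatype=float):
> B := Matrix([[2.0, 0.0], [0.0, 1.0]], shape=symmetric, datatype=float,
>             attributes=[positive_definite]):
> infolevel[LinearAlgebra] := 1:
> LinearAlgebra:-Eigenvectors(A, B);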

ps. Is "without qualities" an eigen-pun, in German? Is the word qualities better translated here as Qualitaten or as Eigenschaften?

acer

Thank you very much. How did you know to do that? From somewhere here on MaplePrimes, or just experience with MapleNet?

acer

This sounds similar to a known issue with Maple 9.5, which was addressed in the point-release 9.5.2.

acer

Those advanced mathematical propositions are not within the scope of evalb's design or purpose.

They are not even within the current scope of the "mathematical" verifier `is`. (I don't see a continuous property or type, so I suppose that `is` couldn't make use of an assumption such as D(F)::continuous.) It would be lovely to learn that I'm wrong on that point.
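
A small illustration of the difference in scope:

> evalb( (x + 1)^2 = x^2 + 2*x + 1 );    # false: evalb compares the two sides structurally
> is( (x + 1)^2 = x^2 + 2*x + 1 );       # true: is applies mathematical simplification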

acer

The loop has n go from 1 to N.

Each time through the loop it computes y[n+1] for the current value of n.

The last (Nth) time through the loop, n=N.

So the last (Nth) time through, it computes y[N+1].
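
In other words, in a sketch like the following (the update rule is just a placeholder), the last assignment the loop makes is to y[N+1]:

> N := 5:
> y[1] := 0:
> for n from 1 to N do
>     y[n+1] := y[n] + 1;
> end do:
> y[N+1];    # assigned on the final pass, when n = N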

acer
