acer

32313 Reputation

29 Badges

19 years, 311 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Good things to know and think about, Jacques. You've shown that display and user-based control are a contentious part of extensions, and other good material besides.

Looking a little harder at that system routine `print/rtable` shows that not only does it query the interface and printlevel, it also stuffs the memory address of the rtable into an RTABLE() call. Presumably that's so that the interfaces can get the sort of handle on the object that you describe their needing. It's interesting too, since each of the Maple interfaces must have its own mechanism for displaying RTABLE calls.

I especially like how context-menus in the Standard GUI work on the displayed result of a "bogus" RTABLE() call like RTABLE(32,MATRIX([a]),Matrix). I'm half surprised it doesn't crash. So, the unprotected and undocumented names RTABLE and MATRIX are removed from the space of names fully available for the user.

This produces different unpleasant results in the TTY and Standard interfaces.

> MATRIX := proc(x) "hi"; end proc;
> <<17>>;

What about Typesetting? I'd like to be able to turn off the default of having subscripts in 2D Math be interpreted as table references. But what about customizing typesetting of named function calls, similar to `print/foo`, etc?

I wonder how people feel about the relative importance of a New & Improved latex() functionality, versus customized typesetting. What about programmatic control of GUI elements -- something else that Mathematica 6 claims to have?

acer

Yes, I agree that the documentation should show much more clearly how to use Statistics:-Sample in the optimal ways possible.

But that example above, where the x in Poisson(x) is an unassigned name and the procedure returned by Sample() passes the undeclared name x as a numeric value to an external call, must be a bug. It has the look of an oversight.

In such a usage case, the procedure returned by Sample() would be better off accepting an additional numeric argument for the distribution's parameter. Or more than one extra argument, in the case of several parameters.

Since the number of parameters might vary according to the particular distribution, the first argument of Sample() should remain the posint that specifies the sample size.

So both the simple and the parametrized cases of calling Sample() repeatedly and efficiently should be clearly documented, and in one of the cases it should document a fixed behaviour.
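As a sketch of what the parametrized case could look like (this is a hypothetical calling sequence -- the extra parameter argument does not exist in the current Sample):

> X := Statistics:-RandomVariable(Poisson(x)):
> f := Statistics:-Sample(X):
> f(10^4, 5.0);   # hypothetical: sample size first, then a value for the parameter x

That keeps the posint sample size as the first argument, with any distribution parameters following it.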

acer

Just to be clear, if the candidate form of the nonlinear equation is good then there doesn't seem to be much reason to prefer stats[fit] over Statistics[Fit] here.

The following produced a curve nearly identical to the rhs(c3) returned by stats[fit] in the above post.

Statistics[Fit](a+b*x+c*x^2+d*x^3,data,x);
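The same calling sequence also handles a genuinely nonlinear candidate form; for instance (the exponential model here is only an illustration, not tied to the data above):

> Statistics[Fit](a*exp(b*x)+c, data, x);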

acer

The strange digits at the very end of the displayed numbers in the results are merely artefacts and can be safely ignored.

If you lprint() the results you will see that the Vector (or Matrix) in question has a float[8] datatype. That means hardware double precision, and comes about because those floating-point results are computed using a precompiled library (external to the main Maple kernel). Now, hardware double precision is a base-2 (binary) representation, but Maple shows the results in base-10 so that we can recognize them.

Those trailing strange digits are artefacts of the conversion from the underlying base-2 stored value to the (nearest possible) base-10 number. Notice how those artefacts lie in decimal digit places even beyond the 14th or 15th, despite that being past trunc(evalhf(Digits)). Personally, I would prefer that those digits weren't displayed at all, and that only trunc(evalhf(Digits)) digits were shown for float[8] datatype Vector/Matrix/Array objects.
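A quick way to reproduce the effect (the exact trailing digits may vary with platform and Maple version):

> V := Vector([1/3.0], datatype=float[8]):
> lprint(V);
> evalhf(1/3);

The lprint output confirms the float[8] datatype, and the evalhf result shows the same sort of artefact digits past the 15th decimal place.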

acer

An example. First I create a .mla Maple archive (GUI or commandline session, no matter)

> restart:
> libname:=kernelopts(homedir),libname:
> LibraryTools:-Create(cat(kernelopts(homedir),kernelopts(dirsep),"foo.mla"));

Then I save a procedure to that archive. This bit could be automated by a nice batch file written to run cmaple against input source files which define procedures. The point is that the procedure below would be defined in a plaintext file, and not a .mw or .mws file. The source file would get read into cmaple, not the GUI. The LibraryTools:-Save call could be present in the source file or concatenated by the batch script.

> restart:
> p := proc(x) x^2; end proc:
> LibraryTools:-Save(p,cat(kernelopts(homedir),kernelopts(dirsep),"foo.mla"));

Then, in a new session (GUI or commandline session, your choice) the procedure is available.

> restart:
> libname:=kernelopts(homedir),libname:
> p(3);

                                       9

acer

You could think of the process as similar to that of compilation of C source and linking of the .o object into a dynamic library.

In that C analogy, when one wishes to amend the code, one doesn't usually disassemble the object file (though it's possible, sometimes with varying degrees of difficulty). Usually one goes back and edits the source file, and then recompiles and relinks it into a .so/.dll/.dylib.

The same is usually true for Maple, provided that you have the source file. Edit the source file, run maple against it, and then issue a savelib() or LibraryTools call once again to get it into a .mla archive.

If you lack the source file for a procedure, you can often reproduce it by setting interface(verboseproc=3) and calling eval() on the procedure name. If you have issued a writeto() before that eval() call, then you can have the output redirected to a file. A minor amount of cleanup, such as adding a (semi)colon after the `end proc` might well be enough. This sort of approach can reproduce a working, reloadable source file for quite a lot of procedures. But I wouldn't bother with it if I had the source, and I certainly wouldn't bother doing it more than once for a given procedure.
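That recovery approach can be sketched as follows (here `foo` is just a placeholder procedure name, and the filename is arbitrary):

> interface(verboseproc=3):
> writeto("foo_source.mpl"):
> lprint(eval(foo)):
> writeto(terminal):

After that, edit foo_source.mpl to put `foo := ` in front of the proc and a terminator after the `end proc`, and the file can be read back in or saved to an archive.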

So, I would keep the sources of larger projects in plaintext files. I'd edit them with a fast, lightweight editor such as `vi`. I'd familiarize myself with the commandline maple interface (cmaple on Windows), and with how to save to .mla archives. As plaintext, I'd never have to worry about losing valuable source embedded in a more easily corruptible or unviewable Worksheet or Document. And I could set my .mapleinit or maple.ini file so that my .mla archives were easily accessible across maple sessions.

Others may prefer to do it differently.

acer

I'm not sure that I trust `is` to get to doing the simple type checks quickly enough. So I code it explicitly, with the type-check first. It's mostly (but not entirely) a habit.
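That habit looks something like this (a minimal sketch; `ispos` is a made-up name):

> ispos := proc(x)
>   if type(x, 'numeric') then   # cheap structural type check first
>     evalb(x > 0);
>   else
>     is(x > 0);                 # fall back to `is` only for symbolic input
>   end if;
> end proc: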

acer

One can see which external compiled function Maple's LinearAlgebra:-Eigenvectors uses, for a generalized eigenvector problem A.x=lambda.B.x where A is symmetric floating-point and B is symmetric positive-definite floating-point.

> A := Matrix(1,shape=symmetric,datatype=float):
> B := Matrix(1,shape=symmetric,datatype\
> =float,attributes=[positive_definite]):
> infolevel[LinearAlgebra]:=1:
> LinearAlgebra:-Eigenvectors(A,B):
Eigenvectors:   "calling external function"
Eigenvectors:   "NAG"   hw_f02fdf

A web search for f02fdf gives this link which documents that function. It says, of the parameter array B, "the upper or lower triangle of B (as specified by UPLO) is overwritten by the triangular factor U or L from the Cholesky factorization of B".

This suggests that, with the shape and attributes on Maple Matrices A and B as in the example above, LinearAlgebra:-Eigenvectors will use a method involving the Cholesky factorization of B. It goes on to say, in the Further Comments section, "F02FDF calls routines from LAPACK in Chapter F08."

For some years now, Matlab has been using LAPACK. See this link from 2000 for an early note on that. It appears from a mirror of the release notes of Matlab 6.0 that the eig function was enhanced to solve exactly the same positive-definite symmetric generalized eigenproblem with the syntax eig(A,B,'chol').

I wouldn't be surprised if these two products' schemes were (at least when originally introduced) very similar implementations of an alternative to the usual QZ/QR algorithm. Notice however a difference in behaviour of the two systems. In Maple it is the shape and attributes of the Matrices which allow the routine to select the algorithm. The algorithm cannot otherwise be forced. In Matlab the data is pretty much without qualities, and no clever method deduction can be done, I think. But the method can be forced by an option to the routine.

ps. Is "without qualities" an eigen-pun, in German? Is the word qualities better translated here as Qualitäten or as Eigenschaften?

acer

Thank you very much. How did you know to do that? From somewhere here on mapleprimes, or just experience with maplenet?

acer
