acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

To create Matrices and Vectors with double precision real (C double) entries, stored in contiguous memory arrays, use the option datatype=float[8] when calling the Matrix and Vector constructors. Similarly, use datatype=complex[8] for pairs of hardware doubles representing complex entries.

To ensure that LinearAlgebra operations on these float[8] Matrices and Vectors are always done using hardware double precision external libraries (and not arbitrary precision software float external libraries), do one of the following:

  • set UseHardwareFloats:=true;
  • (or) keep Digits less than or equal to evalhf(Digits)

You might also try setting infolevel[LinearAlgebra]:=2, since that allows LinearAlgebra commands to print additional information about which external routine is being used, and about whether copying from hardware to software datatypes is taking place. (The external function hw_f06ecf is the hardware BLAS function daxpy, for example, while sw_f06ecf is its software float equivalent. The hw_ or sw_ prefix appearing in this printed information is thus the key.)

The hfloat option for a procedure does not accomplish any of the items above. That is to say, it does not enable float[8] Matrix and Vector construction without the explicit datatype option being provided, it does not enforce use of hardware external libraries, and it does not prevent internal software float Matrix/Vector copying.

Instead, the hfloat option of a procedure affects scalars. It toggles the automatic creation of scalar floats as HFloat objects, and it toggles the retrieval of scalar entries from float[8] Matrices and Vectors also to be HFloat objects.
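
A short sketch of those points taken together (illustrative only; the exact infolevel output varies by platform and Maple version):

> restart:
> UseHardwareFloats := true:
> infolevel[LinearAlgebra] := 2:
> M := Matrix(2, 2, [[1.0, 2.0], [3.0, 4.0]], datatype = float[8]):
> V := Vector(2, [1.0, 1.0], datatype = float[8]):
> LinearAlgebra:-MatrixVectorMultiply(M, V);

With infolevel set, the printed name of the external routine should carry the hw_ prefix, confirming that the hardware double precision libraries were used and that no software float copies were made.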

acer

This sounds similar to what I had suggested here (last paragraph). I actually implemented a rough version of this, which allowed me to extract the representation Maple uses internally to store the GMP result of an evalf[n](Pi) call. A quick conversion to base 2 was then easy, but I didn't bother to fix it up nicely for general use. I used an external call to DAXPY to copy the memory to an appropriate rtable, using an offset from the address of the DAG as the copying source. Presumably one could do the same thing in assembler, again using the address of the DAG of the stored GMP number.

The essence is just that Maple already has a nice internal representation in some 2^m base of the evalf[n](Pi) result, so there's not much more to do ideally than to examine just that.

Of course, as Jacques mentioned, there may be even better techniques that directly generate a result in base 2^m, avoiding the radix-10 high precision computation of the number as the initial step. That wouldn't be done in Maple proper, I guess.

acer

I would think that this can now be done even faster in Maple 12 using the new Bits package.

(edited: By "this" I mean conversion to base 2, not the original stated end goal of generating long bit strings with certain statistical properties.)
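
In Maple 12 that might look like the following (a sketch; Bits:-Split is assumed here to return the base-2 digits of a nonnegative integer, low order digit first):

> with(Bits):
> n := trunc(evalf[30](Pi)*10^29):  # an integer built from high precision digits of Pi
> L := Split(n):                    # its base-2 digits, low order first
> nops(L);                          # the number of bits obtained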

acer

Hi Joe, is BytesPerWord the same as kernelopts(wordsize)/8 ?

You might prefer to make that integer datatype's width dynamic in your code above, rather than hardcoding it at preprocessor (i.e. read) time with a $define.
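
For instance, a sketch of the dynamic alternative (assuming the width is only needed for an rtable datatype):

> bpw := kernelopts(wordsize)/8:            # 4 on a 32bit kernel, 8 on a 64bit kernel
> A := Array(1 .. 10, datatype = integer[bpw]):
> rtable_options(A, 'datatype');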

acer

The problem appears to be line 30 (Maple 12) of the routine ArrayTools:-AddAlongDimension2D .

> restart:
> kernelopts(opaquemodules=false):
> showstat(ArrayTools:-AddAlongDimension2D);

The line (30) which creates the object to contain the result looks like,

  x := Vector[row](nrows,('datatype') = Datatypey);

It should probably instead be,

  x := Vector[row](ncols,('datatype') = Datatypey);

That Vector is acted on in-place by NAG f06ecf, which is really just the BLAS function daxpy. Line 33 shows that it will try to add ncols entries from y to whatever is already in ncols entries of x (since incx, the x stride, is 1). So x had better be of length at least ncols (and not just nrows, which in this example is smaller).
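
Until that is fixed, the column sums can be computed by hand as a workaround (a sketch, assuming the goal is adding down the rows of a 2-D Matrix):

> A := Matrix(2, 3, [[1., 2., 3.], [4., 5., 6.]], datatype = float[8]):
> m, n := LinearAlgebra:-Dimensions(A):
> Vector[row]([seq(add(A[i, j], i = 1 .. m), j = 1 .. n)]);  # column sums: 5., 7., 9.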

acer

That's nice to hear, thanks Will.

acer

It looks to me like a (2-D flattened to 1-D) sequence of text fields, with hard-coded 5x5 dimensions.

To view some of the source, try,

kernelopts(opaquemodules=false):
eval(Student:-LinearAlgebra:-MatrixBuilder:-ModuleApply);
eval(Student:-LinearAlgebra:-MatrixBuilder:-GetMatrix);

There's a local procedure named GetMatrix which tries this,

Raw := [Maplets:-Tools:-Get(
    seq(seq("TF" || i || j, j = 1 .. cols), i = 1  .. rows))];
M := Matrix(rows, cols, [seq([seq(CheckTextField(
    Raw[(i - 1)*cols + j], 'algebraic',
    `_MessageCatalogue/GetMessage`("an algebraic expression",
    "Maplets"), 0), j = 1 .. cols)], i = 1 .. rows)])

And those text fields appear to get set up in the ModuleApply with this,

seq(
    seq(Maplets:-Elements:-TextField["TF" || i || j](
    'value' = eval(initM[i, j]), 'width' = 10, mapletLightColor,
    'tooltip' =
    `_MessageCatalogue/GetMessage`("Enter a value", "Maplets"),
    'enabled' = true), j = 1 .. max), i = 1 .. max)

acer

This site has momentum. But some amount of energy still has to be put into the system, or it will slow down.

Having the site be down for 60 hours straight every few weeks can't be helping.

acer

Is it possible to start maple11.02 Classic GUI under Windows with the mtserver.exe multi-threaded kernel, and see whether that too is a problem alongside AVG 8?

acer

Aren't the Optimization, GlobalOptimization, and evalf/Int external processes already interruptible during callbacks to Maple proper? They make "eval" and "evalhf" callbacks to evaluate objectives and integrands.

Isn't external software precision LinearAlgebra interruptible during garbage collection?

acer

RootFinding:-Analytic is often not an efficient tool for finding a sequence of roots on the real axis. Simply by the way it works, it does a lot of complex contour work to separate the roots, which is not necessary for this problem. (And that's on top of the potential problems with nonreal roots and the question of how fine to make the complex bounding box -- not a problem here, but true in general.)

On the other hand, RootFinding:-NextZero, while not perfect, is designed for precisely this purpose: finding an ordered sequence of zeros on the real axis, starting from a given initial left-most point.

But of course Robert's answer to use BesselJZeros is best. (I had forgotten that the routine existed!)
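
For illustration, the two approaches might be compared on BesselJ(0, x) like this (a sketch; the first positive zero of J0 is near 2.40):

> f := x -> BesselJ(0, x):
> z1 := RootFinding:-NextZero(f, 0.1):   # first zero to the right of 0.1
> z2 := RootFinding:-NextZero(f, z1):    # the next one after that
> evalf(BesselJZeros(0, 1 .. 2));        # the same two zeros, directly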

acer

The result from the first of those looks nice. Wouldn't it be nicer still, though, if int() could return that under conditions on a and b? Let's call it a "weakness".

acer

Something like this works, I think, to get a result for a and b as posints greater than or equal to 2. But I did it by hand.

> sol := Sum((-1)^i*binomial(b-1,i)*t^(a-1+i+1)/(a-1+i+1),i = 0 .. b):

> value(eval(sol,[a=3,b=4,t=2]));
                                     -4/5

acer
