acer


MaplePrimes Activity


These are replies submitted by acer

Z:=evalhf(exp(354+I));

                          153                         153  
    2.97086840260185732 10    + 4.62685339914530199 10    I

evalhf(1/Z);

                         -155                         -154  
   9.82630470844021329 10     - 1.53035628577376246 10     I

Z:=evalhf(exp(355+I));

                          153                         154  
    8.07565759353578019 10    + 1.25770915178406420 10    I

evalhf(1/Z);

                               0.

I think that some hardware float libraries (eg. Cephes? I forget offhand; Axel probably knows) might be able to do boundary-case reciprocals like this with less problem (ie. by scaling rather than dividing by abs(Z) directly, say, or by taking other special care).

evalhf(abs(Z));

                        Float(infinity)
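For readers curious what that "special care" can look like: below is a rough Python sketch (not Maple, and not necessarily what any particular library does) of Smith's scaled formula for the complex reciprocal, which factors out the larger component so that the overflowing quantity a^2 + b^2 is never formed.

```python
import math

def robust_recip(z: complex) -> complex:
    # Smith-style scaled reciprocal of z = a + b*I: divide through by
    # the larger component so no intermediate reaches a^2 + b^2, which
    # overflows for |z| near the double-precision limit (~1.8e308).
    a, b = z.real, z.imag
    if abs(a) >= abs(b):
        r = b / a
        d = a + b * r          # = (a^2 + b^2)/a, safely scaled
        return complex(1 / d, -r / d)
    else:
        r = a / b
        d = a * r + b          # = (a^2 + b^2)/b, safely scaled
        return complex(r / d, -1 / d)

z = complex(8.07565759353578019e153, 1.25770915178406420e154)  # ~ exp(355+I)
print(z.real**2 + z.imag**2)   # naive denominator overflows to inf
print(robust_recip(z))         # finite reciprocal
```

The unscaled textbook formula (a - b*I)/(a^2 + b^2) hits infinity at exactly the boundary shown above, while the scaled variant stays finite.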

acer


@okoolo You can see a list of the possible outputs from Statistics:-Fit in the Help system, here and here.

One of the key aspects that makes the Affine Scaling interior point method important is that it can be implemented efficiently.

The above implementation is, unfortunately, quite inefficient. It produces a lot of temporary objects which must be memory-managed. The collectible garbage consists of many unnecessary temporary non-scalar objects (eg. Matrices, Vectors, and lists). Every line but one in the do-loop involves avoidable creation of one or more Matrix/Vector/list objects, and managing that memory is costly. An efficient implementation at the Maple "Library" level would involve operating inplace on float[8] datatype Matrices and Vectors (or Arrays).

Also, what is posted is not the Affine Scaling algorithm, which handles an arbitrary LPP. It's a hard-coded example for just a single given LPP.

Readers might be interested to know that Maple 15 has an interior point method implementation suitable for larger, sparse LPPs.

acer

When I qualified my suggestion with "if efficiency matters" and "or critical code" I had in mind other scenarios like pure computation, as opposed to this task of printing and viewing. Sorry if I wasn't clear enough about that.

acer


@Robert Israel That's like what you answered in another Question posted by the same member.

@mzh It would take away too much from Maple if constructing such things produced an error. For one thing, such constructions may have different interpretations, and thus should be "OK" to exist as intermediate objects.

Consider, as just one example

myequation := 32 = 64;

                            32 = 64

evalb(myequation); # of course

                             false

myequation mod 2;

                             0 = 0

evalb(myequation mod 2);

                              true

So, construction of `myequation` might get done as part of some involved computation involving several values, and only later might the intermediate objects (equations) get tested modulo some value. It wouldn't be possible to code that, if the construction immediately produced an error. In general, there are many more such examples.
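The same idea can be mimicked outside Maple. Below is a rough Python analogue (the `Eq` class is hypothetical, invented just for this illustration): the relation is stored as data rather than judged at construction time, so it can be reinterpreted later.

```python
class Eq:
    """Toy analogue of an unevaluated Maple equation a = b: the relation
    is stored, not judged immediately, so it can be reinterpreted later
    (for example, modulo some base)."""
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def __mod__(self, m):
        # Map the reduction over both sides, as Maple's `mod` does.
        return Eq(self.lhs % m, self.rhs % m)
    def holds(self):
        # Analogue of evalb: only here is the relation actually judged.
        return self.lhs == self.rhs

e = Eq(32, 64)
print(e.holds())          # False
print((e % 2).holds())    # True: both sides reduce to 0
```

If `Eq(32, 64)` raised an error at construction, the later `% 2` interpretation could never be expressed.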


Another way to handle this is to ensure that the arguments are row-column. For this column-column example one could transpose the first Vector. This causes LinearAlgebra:-Multiply to do the work, without the conjugation,

<a, b, c> . <d, e, f>;
                        _     _     _  
                        a d + b e + c f

<a, b, c>^%T . <d, e, f>;

                        a d + b e + c f

<a|b|c> . <d,e,f>; # same orientations as above

                        a d + b e + c f

Transposition can be done inplace (highly memory- and speed-efficient) on a Vector, either by using the LinearAlgebra:-Transpose command or by using the `subtype` parameter of the `rtable_options` command.

Other straightforward ways to do it include making the assumption that the relevant names are purely real (or to use a command like `evalc` which does that in a blanketing way),

evalc(<a, b, c> . <d, e, f>);

                        a d + b e + c f

<a, b, c> . <d, e, f> assuming real;

                        a d + b e + c f

<a, b, c> . <d, e, f> assuming a::real, b::real, c::real;

                        a d + b e + c f
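The conjugation in the column-column case is the standard Hermitian inner product. A quick Python sketch (plain stdlib, not Maple) of the difference between the two products, and why assuming the entries real makes it vanish:

```python
# Hermitian inner product conjugates the first vector (as the
# column . column product above does); the plain bilinear form does
# not. For real entries the two coincide, which is why `assuming real`
# or evalc removes the conjugate bars.
u = [1 + 2j, 3j]
v = [2 - 1j, 1 + 1j]

hermitian = sum(a.conjugate() * b for a, b in zip(u, v))
bilinear  = sum(a * b for a, b in zip(u, v))

print(hermitian)  # (3-8j)
print(bilinear)   # (1+6j)
```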

acer

I overlooked the addition of Jacobi-based SVD as new functions dgejsv and dgesvj in version 3.2 of LAPACK. (See section 9.5 of the v3.2 release notes.)

If you were to compile CLAPACK 3.2.1, and its accompanying reference BLAS, as a pair of .dll libraries with those entry-point functions exported, then you should be able to use them directly from Maple via the `define_external` command. (I might try building this one first, perhaps. The only pre-built libraries for this that I know of are .lib, and you'd need dynamic .dll instead. So you'd need to build it yourself.)

acer


Very nice explanation, with more sophistication indeed than in my answer.

And thanks for `style=point`, which looks much better! I was trying to steer him towards bounding the error (or "last" term), but didn't want to give away what might be a homework question too easily.

This reminds me of a usenet thread from 2010, which revolved around confusion due to Maple's use of double-precision `sin` which is (on some platforms) not always the same as what the operating system's runtime provides. Axel provided a fascinating reference, during that discussion, BTW.

Any readers interested in the range reduction from values of much higher multiples of Pi into (0, Pi/4) could start with Muller's great book, Elementary Functions: Algorithms and Implementation. (Axel, we've discussed that book once before, I think...)
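For a quick taste of why that reduction is delicate, here is a small Python check (stdlib only; the crux is that `math.pi` is merely the double nearest the true Pi):

```python
import math

# An argument far outside (0, Pi/4): a million-ish periods plus 0.25.
x = 1_000_000 * math.pi + 0.25

# IEEE remainder against the *double* 2*Pi is computed exactly, but the
# modulus itself is off from the true 2*Pi by ~2.4e-16, and that error
# accumulates over the ~500000 periods removed. Good libraries instead
# reduce against many extra digits of Pi (Payne-Hanek style).
r = math.remainder(x, math.tau)    # r in (-Pi, Pi]

print(r)                                 # close to 0.25
print(abs(math.sin(x) - math.sin(r)))    # drift of roughly 1e-10 on typical platforms
```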

Posted code in responses here by me and Axel contains conversion to Horner's form, and therein lies a whole other deep area. Which reminds me of something else I had sheets on for potential blogging: Horner's form for Matrix polynomial evaluation. And then there is Estrin's form for use with Threads...
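For readers who haven't met it, Horner's form in, say, Python looks like this (a generic sketch, not the code posted in those responses):

```python
import math

def horner(coeffs, x):
    # Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n using
    # n multiplications and n additions, with better rounding behaviour
    # than summing explicitly formed powers of x.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Degree-5 Taylor coefficients of sin, for a reduced argument:
sin5 = [0.0, 1.0, 0.0, -1.0/6.0, 0.0, 1.0/120.0]
print(horner(sin5, 0.1))   # close to math.sin(0.1)
```

Estrin's form restructures the same nested evaluation into independent halves so that the multiplications can proceed in parallel, which is where Threads comes in.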

acer
