acer


MaplePrimes Activity


These are replies submitted by acer

@Carl Love I have no initialization file in use. I was using Maple 17.02, and I get the same results using both the 32bit and the 64bit versions, run on Windows 7 Pro.

I also get the same results running each of Maple 11.02, 12.02, 13.01, 14.01, 15.01, and 16.02 in each of 32bit or 64bit Maple as available, all under Windows 7 in the Standard GUI. And I get the same results also from each of 10.03, 11.02, 12.02, 13.01, 14.01, 15.01, 16.02, and 17.02 in both 32bit and 64bit commandline Maple running on ubuntu 10.04. In all cases no initialization file was in use.

Also, the results are (to me) expected. Pi is approximated in floating-point by `evalf` via lookup, for up to 10000 digits. (Mma has 1000000 digits hard-coded, if I recall...)

For example,

restart:
anames();
                       debugger/no_output

kernelopts(version);
    Maple 17.02, X86 64 WINDOWS, Sep 5 2013, Build ID 872941

UseHardwareFloats;
                            deduced

printlevel:= 1000:

evalf(3/Pi, 1);
{--> enter evalf/constant/Pi, args = 
                               3.
<-- exit evalf/constant/Pi (now at top level) = 3.}
                              0.9

Regardless of how it is obtained, for the first example `evalf` will evaluate (and round) Pi to just 1 decimal digit, obtaining 3. as that part of the intermediate computation. It doesn't matter whether that is obtained via lookup, or via evalhf, or an external call to libhf, or whatever.

Please pardon me, but I am going to repeat the aspects that I consider key to Christopher2222's question. What matters is that `evalf` will approximate Pi in the input to one digit (result: 3.), then invert that and again round to one digit (result: 0.3), then multiply that by the one-digit approximation of the literal 3 (final result: 0.9).
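That step-by-step rounding at a fixed working precision can be mimicked outside Maple. Here is a rough analogy using Python's decimal module with a context precision of one significant digit; this is only an illustration of the rounding behaviour, not Maple's actual code path:

```python
from decimal import Decimal, getcontext

# Work at a precision of 1 significant digit, analogous to evalf(..., 1).
getcontext().prec = 1

pi_approx = Decimal(3)              # Pi already rounded to one digit: 3.
inverted = Decimal(1) / pi_approx   # 1/3. rounded to one digit: 0.3
result = Decimal(3) * inverted      # 3. * 0.3, rounded to one digit: 0.9

print(result)   # 0.9
```

Each intermediate value gets rounded at the working precision, so the final 0.9 falls out just as in the Maple session above.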

@nm I have no problem with your approach; it is simple and understandable.

For fun, here is another pair (the second of which is similar to your own).

restart:
eq := (x+y)^2 + 1/(x + y):
G := ee -> `if`(ee::`+`,map(`*`,ee,denom(ee))/denom(ee),ee):
G(eq);

                                 3    
                          (x + y)  + 1
                          ------------
                             x + y    

restart:
eq := (x+y)^2 + 1/(x + y):
thaw(normal(subs(x+y=freeze(x+y),eq)));

                                 3    
                          (x + y)  + 1
                          ------------
                             x + y    

Being able to distribute a multiplication across a sum, without expanding, comes up now and then.

Are you using 17.02? It has some fixes for issues with international keyboards. See here.

acer

The key thing here is that evalf[d](...) or evalf(...,d) is not a request for `d` digits of accuracy in general. Rather, this syntax is a request to use a certain working precision, and intermediate values in the computation will get rounded at that working precision.

@Carl Love It will call `evalf/constant/Pi`, which rounds a precomputed result in the body of that procedure since Digits<51. (And _bigPi is another precomputed result, for slightly larger values of Digits, and then there is also `evalf/constant/bigPi` with 10000 digits).

> restart:         
> printlevel:=1000:
> evalf(3/Pi,1);   
{--> enter evalf/constant/Pi, args = 
                                      3.

<-- exit evalf/constant/Pi (now at top level) = 3.}
                                      0.9

And so now we have a hint to try,

restart:
showstat(`evalf/constant/Pi`);

@Alejandro Jakubi Hmm, I saw symbols displayed for pretty much all (or maybe all) of the values in 380..450, so the fact that all but five of those are excluded from the UnicodeToEntity list seems (to me) to indicate that the list is not all related to font availability.

But I still wonder whether no symbol displayed for value 9813 is a font issue.

@Alejandro Jakubi I don't understand how that relates. I see only five entries in the list returned by `UnicodeToEntity` with (lhs) numbers between 380 and 450, say.

kernelopts(opaquemodules=false):
T := table(Typesetting:-UnicodeToEntity):
for i from 380 to 450 do
  try
    if assigned(T[i]) then print(i, T[i], cat(`&#`, i, `;`)); end if;
  catch:
  end try;
end do;

But I see the Standard GUI of Maple 17.02 running on Windows 7 Pro as being able to display symbols for many more numbers from that range. (I'm not sure that I can get them inlined in this comment, however.) All of the following print in my Maple 17.02 Standard GUI, as special symbols.

seq([i,cat(`&#`,i,`;`)],i=380..450);

But if I attempt to print for all unicode values then quite a few symbols do not get shown as anything except an empty box. To me this suggests a font issue.

It doesn't appear that the unicode entity need be recognized by Typesetting as an "Entity", in order for it to just print in the Std GUI.

@Alejandro Jakubi When I mentioned forming valid .mw I was thinking more of the XML structure and the need for an up-to-date validating schema (more than, say, the encoding of dotm) for the modern .mw worksheet.

@Alejandro Jakubi Turning expressions into the format stored in .mw can involve base64 encoding of "dotm" (aka .m format, like produced by sprintf with the %m descriptor), and sometimes Typesetting as well. Programmatically forming valid .mw files, to contain such encodings, is additional work.

It might be easier to just savelib results to a private .mla archive or save to a .m file, and then access them manually from there by making calls to evaluate or print them in a Standard GUI session.

@Pavel Holoborodko "memory used" or the result from kernelopts(bytesused) is the amount of memory that has been processed by Maple's garbage collector (memory manager). It is not the amount of allocated memory in use by the Maple kernel.

You have misunderstood that "memory used" figure of 4.65GB. It is not a measure of the total allocation at any given moment.

The command kernelopts(bytesalloc) should report on the allocation of memory done by the Maple kernel in the running session. The memory management system of Maple's kernel allows the total allocation to be higher than the minimal amount required to store the currently referenced objects, ie. unreferenced garbage may be allowed to stay around for a while.

I see that your site presently shows results of a 100-digit comparison of sparse solvers for direct LU only. As you are no doubt aware, there are often several options for such solvers (controlling the degree of fill-in, or what have you). It can be tricky to find optimal values for those on a problem-by-problem basis, which makes comparison amongst solvers tricky too. Also there is the attained accuracy to consider. For this and other reasons I believe that examples with indirect (iterative) solvers would be more interesting. And explicit statements about accuracy targets should probably be mentioned.

I see that your test collection has examples for sparse direct LU where the timings ratio is as low as a factor of 3 and as high as a factor of 140. I look forward to seeing the indirect methods' results. Seeing how the dedicated high-precision solvers' performance changes as the number of digits gets very high would also be interesting. I.e., how do they scale with very high Digits?

You have reported here the results just for dense solvers in the 34-digit case, where (of course) the dedicated quad-precision solver greatly outperforms the arbitrary-precision solver. That should be no surprise, I think.

@spradlig Maple will simplify 8^(1/3) to 2 and the `surd` command is not required in order to attain that.

Your post suggests that you also expect (-8)^(1/3) to produce the same real result of -2, and you seem to be suggesting that this is what all proper mathematicians would want. That is false.

The Description section in the Maple help page for `sqrt` explains that Maple is using the "principal square root", as exp(1/2*ln(x)). And the help page for `root` explains the analogous result for x^(1/n), and so that page mentions the `surd` command, etc. This is a convention that makes good sense for computing in the complex plane, which is Maple's default.
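The same principal-branch convention shows up in other systems that compute over the complex plane. Here is a small Python sketch of the exp((1/n)*ln(x)) formula described above; the function name is my own, purely for illustration:

```python
import cmath

# Principal n-th root via exp((1/n)*ln(x)), the convention described above.
def principal_root(x, n):
    return cmath.exp(cmath.log(x) / n)

z = principal_root(-8, 3)
# z is roughly 1 + 1.732*I, i.e. 2*exp(I*Pi/3), not the real root -2,
# yet z**3 still recovers -8 (up to roundoff).
```

For what it's worth, Python 3's own (-8)**(1/3) also returns (approximately) this complex principal value rather than -2, so the convention is not peculiar to Maple.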

Complaining that Maple works with the complex numbers by default and uses a particular convention for choice of branches is probably not going to get you more joy than it has for the large number of individuals who have made the same points before you.

@Yankel If you already have an image then a "sampling" step is already done. You don't need to pepper the domain with random points and do any kind of Monte Carlo simulation. Instead, just walk all the pixels, count how many are shaded, and then divide by the total.

Now, it may be that things are not so simple: you may not have a clear boundary, the background may not be monotone, etc. And such things could be handled. But they'd need to be handled regardless of whether you used the full existing image pixel data for a simple (possibly weighted) average or whether you sampled that given data randomly. I'm just pointing out that you seem to already have a finite sampling.
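The pixel-walk idea can be sketched in a few lines. Here is a minimal Python illustration with made-up binary pixel data (a real image would first need thresholding into shaded versus background):

```python
# A toy "image" as rows of pixels: 1 = shaded, 0 = background.
# (Hypothetical data, purely for illustration.)
image = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]

shaded = sum(pixel for row in image for pixel in row)
total = sum(len(row) for row in image)
fraction = shaded / total    # shaded area as a fraction of the whole region

print(fraction)   # 0.5 for this toy data
```

Multiply that fraction by the known area of the full rectangular domain and you have the area estimate, with no randomness involved.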

I stopped reading soon after realizing that they were comparing (only) a dedicated quad precision implementation against an arbitrary precision implementation.

If they compared iterative solvers at each of 50, 100, & 500 decimal digits and showed results for 64bit Linux (and giving Maple a libgmp upgrade), then it would be much more interesting.

It's no secret that Maple's timing for numerical linear algebra takes a steep jump when crossing the threshold from compiled double precision over to higher arbitrary precision. And this is true for both dense and sparse cases. And it's true for other areas of numerical computation. Duplication of all those external libraries (clapack, nag, cblas, etc) with quad-precision implementations would be very nice for Maple.

The Advanpix site indicates that their product also does (higher than quad) arbitrary precision, and so much more interesting would be a thorough performance comparison including even higher precision and more interesting solver choice(s). That a dedicated quad-precision implementation soundly beats a general arbitrary-precision (gmp based) one is not at all surprising. So more interesting would be a comparison of apples with apples, stacking the two arbitrary-precision implementations up against each other.

A comparison on Linux would also be interesting; I quite often see a mid-range 64bit Linux Intel quad-core i5 outperforming a more high-end 64bit Windows 7 Intel i7 on the same computations (both built with the Intel icc compiler).

acer

@J4James Perhaps multiple surfaces might also be of some use, eg.

plots:-display(
    seq(plot3d(eval(P,[R3=-5,k=4,R1=0.0006,Q=q]),
               x=0..1,PL=0..1,labels=[x,PL,'P'],
               color=RGB(0,1-q/4,q/4)),
        q=[1.2,2.0,4.0]));
