acer

33188 Reputation

29 Badges

20 years, 209 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Pavel Holoborodko "memory used" or the result from kernelopts(bytesused) is the amount of memory that has been processed by Maple's garbage collector (memory manager). It is not the amount of allocated memory in use by the Maple kernel.

You have misunderstood that "memory used" figure of 4.65GB. It is not a measure of the total allocation at any given moment.

The command kernelopts(bytesalloc) should report on the allocation of memory done by the Maple kernel in the running session. The memory management system of Maple's kernel allows the total allocation to be higher than the minimal amount required to store the currently referenced objects, i.e. unreferenced garbage may be allowed to stay around for a while.
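A quick way to see the distinction in a running session (a minimal sketch; the actual figures will of course vary by session and platform):

```
# bytesused: cumulative bytes processed by the memory manager
kernelopts(bytesused);

# bytesalloc: bytes actually allocated by the kernel
kernelopts(bytesalloc);

# a throwaway computation producing lots of garbage inflates
# bytesused far more than bytesalloc
seq(igcd(i, i+6), i=1..10^6):
kernelopts(bytesused), kernelopts(bytesalloc);
```

After the loop, bytesused will typically have grown by far more than bytesalloc, since most of those temporaries were collected rather than kept allocated.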

I see that your site presently shows results of a 100-digit comparison of sparse solvers for direct LU only. As you are no doubt aware, there are often several options for such solvers (controlling degree of fill-in, or what have you). It can be tricky to find optimal values for such options on a problem-by-problem basis, which makes comparison amongst solvers tricky too. Also there is the attained accuracy to consider. For this and other reasons I believe that examples with indirect (iterative) solvers would be more interesting. And explicit statements about accuracy targets should probably be mentioned.

I see that your test collection has examples for sparse direct LU where the timings ratio is as low as a factor of 3 and as high as a factor of 140. I look forward to seeing indirect methods' results. Seeing how the dedicated high-precision solvers' performance changes as digits gets very high would also be interesting, i.e. how do they scale with very high digits.

You have reported here the results just for dense solvers in the 34-digit case, where (of course) the dedicated quad-precision solver greatly outperforms the arbitrary precision solver. That should be no surprise, I think.

@spradlig Maple will simplify 8^(1/3) to 2 and the `surd` command is not required in order to attain that.

Your post suggests that you also expect (-8)^(1/3) to produce the same real result of -2, and you seem to be suggesting that this is what all proper mathematicians would want. And that is false.

The Description section in the Maple help page for `sqrt` explains that Maple is using the "principal square root", as exp(1/2*ln(x)). And the help page for `root` explains the analogous result for x^(1/n). And so that page mentions the `surd` command, etc. This is a convention that makes good sense for computing in the complex plane, which is Maple's default.
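For example (a short illustration; exact output display may differ by version):

```
simplify(8^(1/3));     # 2, no surd needed
evalc((-8)^(1/3));     # the principal branch: 1 + I*3^(1/2)
evalf((-8)^(1/3));     # approximately 1.0 + 1.732*I, not -2
surd(-8, 3);           # the real branch: -2
```

The principal cube root of -8 is 2*exp(I*Pi/3) = 1 + I*sqrt(3), which is what x^(1/3) = exp((1/3)*ln(x)) produces; `surd` is the command for the real branch.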

Complaining that Maple works with the complex numbers by default and uses a particular convention for choice of branches is probably not going to get you more joy than it has for the large number of individuals who have made the same points before you.

@Yankel If you already have an image then a "sampling" step is already done. You don't need to pepper the domain with random points and do any kind of Monte Carlo simulation. Instead, just walk all the pixels and count how many are shaded, and then divide by the total.

Now, it may be that things are not so simple: you may not have a clear boundary, the background may not be monotone, etc. And such things could be handled. But they'd need to be handled regardless of whether you used the full existing image pixel data for a simple (possibly weighted) average or whether you sampled that given data randomly. I'm just pointing out that you seem to already have a finite sampling.
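As a minimal sketch of the pixel-counting approach (the filename and the 0.5 intensity threshold here are placeholders, and a real image may well need the boundary/background handling mentioned above):

```
with(ImageTools):
# hypothetical image file; Read returns an Array of intensities in 0..1
img := ToGrayscale(Read("region.jpg")):
total := numelems(img):
# count pixels darker than the (assumed) threshold
shaded := add(`if`(v < 0.5, 1, 0), v in img):
evalf(shaded/total);   # fraction of the domain that is shaded
```

The weighted-average variant would simply replace the hard threshold with a per-pixel weight.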

I stopped reading soon after realizing that they were comparing (only) a dedicated quad precision implementation against an arbitrary precision implementation.

If they compared iterative solvers at each of 50, 100, & 500 decimal digits and showed results for 64bit Linux (and gave Maple a libgmp upgrade), then it would be much more interesting.

It's no secret that Maple's timing for numerical linear algebra has a deep step when crossing the threshold from compiled double precision over to higher arbitrary precision. And this is true for both dense and sparse cases. And it's true for other areas of numerical computation. Duplication of all those external libraries (clapack, nag, cblas, etc) with quad precision implementations would be very nice for Maple.

The Advanpix site indicates that their product also does (higher than quad) arbitrary precision, and so much more interesting would be a thorough performance comparison including: even higher precision, and more interesting solver choice(s). That dedicated quad precision implementation soundly beats a general arbitrary precision (gmp based) is not at all surprising. So more interesting would be the comparison of apples with apples, stacking up the two arbitrary precision implementations against each other.

A comparison on Linux would also be interesting; I quite often see a mid-range 64bit Linux Intel quad-core i5 outperforming a more high-end 64bit Windows 7 Intel i7 on the same computations (both built with the Intel icc compiler).

acer

@J4James Perhaps multiple surfaces might also be of some use, eg.

plots:-display(
    seq(plot3d(eval(P,[R3=-5,k=4,R1=0.0006,Q=q]),
               x=0..1,PL=0..1,labels=[x,PL,'P'],
               color=RGB(0,1-q/4,q/4)),
        q=[1.2,2.0,4.0]));

@Carl Love Yes, thanks, I meant LimitTutor.

I was too hasty, and misread the citations at the end of that linked page. I apologize to all. Stewart is merely named as a reference, on that linked page. I had mistakenly thought that the whole thing was from his Calculus text. But rather it's just some online course. I don't know what the current Stewart Calculus edition does for this derivation.

But the overall message for the OP is the same: quite often sources give a logically faulty derivation of this limit, as explained. You might try here, for better.

Before you were taught the definition of a derivative you ought to have been taught how to do such limits.

You should know that you can treat the x^3 and the 3*cos(x) separately, and then add. And the 3 will factor out, for the 3*cos(x) part. And you should be able to handle the x^3 easily (as indicated by your comments).

That leaves you with the job of figuring out the derivative of cos(x) using the usual definition (as a limit). You should be able to do the following steps by hand.

f:=(cos(x+h)-cos(x))/h;

                               cos(x + h) - cos(x)
                          f := -------------------
                                        h         

# using identity for cos(A+B)
T:=expand(f);

                      cos(x) cos(h)   sin(x) sin(h)   cos(x)
                 T := ------------- - ------------- - ------
                            h               h           h   

T:=collect(T,h);

                      cos(x) cos(h) - sin(x) sin(h) - cos(x)
                 T := --------------------------------------
                                        h                   

T:=collect(T,cos(x));

                       (cos(h) - 1) cos(x)   sin(x) sin(h)
                  T := ------------------- - -------------
                                h                  h      

So now you need to take the limit of the above as h->0. The above is the sum of two expressions, and you can treat them separately. The cos(x) and -sin(x) factors can be taken outside the limit. So, how do you find the value of the following? You could use Maple's limit tutor and get its hints. It may suggest an application of l'Hopital's rule.

Limit( sin(h)/h, h = 0 )

The same goes for the other part, Limit( (cos(h)-1)/h, h=0 ). And that is how some texts explain this example, by using l'Hopital's rule to find the limits of both (cos(h)-1)/h and sin(h)/h as h->0. See, for example, this online AP Calculus source. The bright student might object, at this point, on the grounds that it is circular logic to allow use of the derivative of numerator cos(h)-1 during an application of l'Hopital's rule while attempting to derive the derivative of cos(x) via its limit definition. And so it often goes, in a first Calculus course. (Geometric proof, or application of the squeeze theorem, may be beyond what you are responsible for in your course.)
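Maple will confirm each piece, and the assembled result (here `limit` is doing the work that the tutor walks through step by step):

```
limit(sin(h)/h, h = 0);                    # 1
limit((cos(h) - 1)/h, h = 0);              # 0
# putting the two pieces back together gives the derivative of cos(x)
# straight from the limit definition:
limit((cos(x + h) - cos(x))/h, h = 0);     # -sin(x)
```

So the cos(x) part of your expression contributes -3*sin(x), which you can add to the result for the x^3 part.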

acer

@digerdiga In 64bit Maple 17.02 running on Windows 7 Pro I am seeing `simplify(eq)` produce a result of `1` when running your uploaded worksheet.

Could you please show the exact sequence of commands where `simplify` alone (without that composed around `evalc`) fails under those assumptions?

Either plaintext 1D Maple notation, or an uploaded worksheet, would be helpful.

acer

@yaseen Which elements, and how do you wish to pair them in order to take products?

@Mac Dude I certainly did not advocate suppressing the warning. I showed how to suppress the warning, and then I advocated dealing with it via explicit declaration instead of suppressing the warning. I suggested that dealing with it via explicit declaration was better; I gave an example of the suppression potentially leading to problems.

The act of showing how to do something does not make any suggestion that it's a good idea.

Also possible is to use %a in the format string, rather than %e.

That should avoid any reliance on Digits or the length of the float.
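For instance (a small illustration; `%a` emits the exact Maple representation of the float, independent of the working precision):

```
x := evalf[20](Pi):
sprintf("%e", x);    # rounded to the default %e precision
sprintf("%a", x);    # the full 20-digit float, as Maple stores it
```
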

acer
