acer

32627 Reputation

29 Badges

20 years, 45 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Carl Love It will call `evalf/constant/Pi`, which rounds a precomputed result in the body of that procedure since Digits<51. (And _bigPi is another precomputed result, for slightly larger values of Digits, and then there is also `evalf/constant/bigPi` with 10000 digits).

> restart:         
> printlevel:=1000:
> evalf(3/Pi,1);   
{--> enter evalf/constant/Pi, args = 
                                      3.

<-- exit evalf/constant/Pi (now at top level) = 3.}
                                      0.9

And so now we have a hint to try,

restart:
showstat(`evalf/constant/Pi`);

@Alejandro Jakubi Hmm, I saw symbols displayed for pretty much all (or maybe all) of the values in 380..450, so the fact that all but five of those are excluded from the UnicodeToEntity list seems (to me) to indicate that the list is not related solely to font availability.

But I still wonder whether no symbol displayed for value 9813 is a font issue.

@Alejandro Jakubi I don't understand how that relates. I see only five entries in the list returned by `UnicodeToEntity` for (lhs) numbers between 380 and 450, say.

kernelopts(opaquemodules=false):
T := table(Typesetting:-UnicodeToEntity):
for i from 380 to 450 do
  try
    if assigned(T[i]) then print(i, T[i], cat(`&#`, i, `;`)); end if;
  catch:
  end try;
end do;

But I see the Standard GUI of Maple 17.02 running on Windows 7 Pro as being able to display symbols for many more numbers from that range. (I'm not sure that I can get them inlined in this comment, however.) All of the following print in my Maple 17.02 Standard GUI, as special symbols.

seq([i,cat(`&#`,i,`;`)],i=380..450);

But if I attempt to print for all unicode values then quite a few symbols do not get shown as anything except an empty box. To me this suggests a font issue.

It doesn't appear that the unicode entity need be recognized by Typesetting as an "Entity", in order for it to just print in the Std GUI.

@Alejandro Jakubi When I mentioned forming valid .mw I was thinking more of the XML structure and the need for an up-to-date validating schema (more than, say, encoding of dotm) for the modern .mw worksheet.

@Alejandro Jakubi Turning expressions into the format stored in .mw can involve base64 encoding of "dotm" (aka .m format, like produced by sprintf with the %m descriptor), and sometimes Typesetting as well. Programmatically forming valid .mw files, to contain such encodings, is additional work.

It might be easier to just savelib results to a private .mla archive or save to a .m file, and then access them manually from there by making calls to evaluate or print them in a Standard GUI session.
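A rough sketch of that savelib workflow (the archive path and the name `myresult` here are made up for illustration):

LibraryTools:-Create("C:/temp/private.mla"):   # create the archive once
savelibname := "C:/temp/private.mla":
myresult := int(exp(-x^2), x = 0 .. 1):
savelib('myresult'):

# Later, in a fresh Standard GUI session, prepend the archive to libname:
restart:
libname := "C:/temp/private.mla", libname:
myresult;   # the stored result is found in the archive and printed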

@Pavel Holoborodko "memory used" or the result from kernelopts(bytesused) is the amount of memory that has been processed by Maple's garbage collector (memory manager). It is not the amount of allocated memory in use by the Maple kernel.

You have misunderstood that "memory used" figure of 4.65GB. It is not a measure of the total allocation at any given moment.

The command kernelopts(bytesalloc) should report on the allocation of memory done by the Maple kernel in the running session. The memory management system of Maple's kernel allows the total allocation to be higher than the minimal amount required to store the currently referenced objects, i.e., unreferenced garbage may be allowed to stay around for a while.
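A quick way to see the distinction between the two figures (the actual numbers will vary by session and platform):

restart:
st_used  := kernelopts(bytesused):
st_alloc := kernelopts(bytesalloc):
for k to 50 do add(1.0/i, i = 1 .. 10^4); end do:   # churn through temporaries
kernelopts(bytesused)  - st_used;    # large: cumulative, counts collected garbage
kernelopts(bytesalloc) - st_alloc;   # much smaller: net change in allocation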

I see that your site at present shows results of a 100-digit comparison of sparse solvers for direct LU only. As you are no doubt aware, there are often several options for such solvers (controlling degree of fill-in, or what have you). It can be tricky to find optimal values for such options on a problem-by-problem basis, which makes comparison amongst solvers tricky too. Also there is the attained accuracy to consider. For this and other reasons I believe that examples with indirect (iterative) solvers would be more interesting. And explicit statements about accuracy targets should probably be mentioned.

I see that your test collection has examples for sparse direct LU where the timings ratio is as low as a factor of 3 and as high as a factor of 140. I look forward to seeing the indirect methods' results. Seeing how the dedicated high-precision solvers' performance changes as the number of digits gets very high would also be interesting, i.e., how do they scale with very high digits.

You have reported here the results just for dense solvers in the 34-digit case, where (of course) the dedicated quad-precision solver greatly outperforms the arbitrary precision solver. That should be no surprise, I think.

@spradlig Maple will simplify 8^(1/3) to 2 and the `surd` command is not required in order to attain that.

Your post suggests that you also expect (-8)^(1/3) to produce the same real result of -2, and you seem to be suggesting that this is what all proper mathematicians would want. And that is false.

The Description section in the Maple help page for `sqrt` explains that Maple is using the "principal square root", as exp(1/2*ln(x)). And the help page for `root` explains the analogous result for x^(1/n). And so that page mentions the `surd` command, etc. This is a convention that makes good sense for computing in the complex plane, which is Maple's default.
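For example, at the default Digits=10:

8^(1/3);           # 2, since the principal cube root of 8 is real
evalf((-8)^(1/3)); # 1.000000000 + 1.732050808*I, the principal root
exp(ln(-8.0)/3);   # the same value, from the definition above
surd(-8, 3);       # -2, the real cube root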

Complaining that Maple works with the complex numbers by default and uses a particular convention for choice of branches is probably not going to get you more joy than it has for the large number of individuals who have made the same points before you.

@Yankel If you already have an image then a "sampling" step is already done. You don't need to pepper the domain with random points and do any kind of Monte Carlo simulation. Instead, just walk all the pixels, count how many are shaded, and then divide by the total.

Now, it may be that things are not so simple: you may not have a clear boundary, the background may not be monotone, etc. And such things could be handled. But they'd need to be handled regardless of whether you used the full existing image pixel data for a simple (possibly weighted) average or whether you sampled that given data randomly. I'm just pointing out that you seem to already have a finite sampling.
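In the simple case of a clear boundary on a monotone background, the direct count is only a few lines with ImageTools. A sketch, assuming a grayscale image file (the file name and the 0.5 brightness threshold are placeholders):

with(ImageTools):
img := ToGrayscale(Read("region.jpg")):   # hypothetical file
shaded := add(add(`if`(img[i, j] < 0.5, 1, 0),
                  j = 1 .. Width(img)), i = 1 .. Height(img)):
evalf(shaded / (Height(img) * Width(img)));   # fraction of shaded pixels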

I stopped reading soon after realizing that they were comparing (only) a dedicated quad precision implementation against an arbitrary precision implementation.

If they compared iterative solvers at each of 50, 100, & 500 decimal digits and showed results for 64bit Linux (and giving Maple a libgmp upgrade), then it would be much more interesting.

It's no secret that Maple's timing for numerical linear algebra has a steep step when crossing the threshold from compiled double precision over to higher arbitrary precision. And this is true for both dense and sparse cases. And it's true for other areas of numerical computation. Duplication of all those external libraries (clapack, nag, cblas, etc) with quad precision implementations would be very nice for Maple.

The Advanpix site indicates that their product also does (higher than quad) arbitrary precision, and so much more interesting would be a thorough performance comparison including: even higher precision, and more interesting solver choice(s). That dedicated quad precision implementation soundly beats a general arbitrary precision (gmp based) is not at all surprising. So more interesting would be the comparison of apples with apples, stacking up the two arbitrary precision implementations against each other.

A comparison on Linux would also be interesting; I quite often see a mid-range 64bit Linux Intel quad-core i5 outperforming a more high-end 64bit Windows 7 Intel i7 on the same computations (both built with the Intel icc compiler).

acer

@J4James Perhaps multiple surfaces might also be of some use, eg.

plots:-display(
    seq(plot3d(eval(P,[R3=-5,k=4,R1=0.0006,Q=q]),
               x=0..1,PL=0..1,labels=[x,PL,'P'],
               color=RGB(0,1-q/4,q/4)),
        q=[1.2,2.0,4.0]));

@Carl Love Yes, thanks, I meant LimitTutor.

I was too hasty, and misread the citations at the end of that linked page. I apologize to all. Stewart is merely named as a reference, on that linked page. I had mistakenly thought that the whole thing was from his Calculus text. But rather it's just some online course. I don't know what the current Stewart Calculus edition does for this derivation.

But the overall message for the OP is the same: quite often sources give a logically faulty derivation of this limit, as explained. You might try here for a better one.

Before you were taught the definition of a derivative, you ought to have been taught how to do such limits.

You should know that you can treat the x^3 and the 3*cos(x) separately, and then add. And the 3 will factor out, for the 3*cos(x) part. And you should be able to handle the x^3 easily (as indicated by your comments).

That leaves you with the job of figuring out the derivative of cos(x) using the usual definition (as a limit). You should be able to do the following steps by hand.

f:=(cos(x+h)-cos(x))/h;

                               cos(x + h) - cos(x)
                          f := -------------------
                                        h         

# using identity for cos(A+B)
T:=expand(f);

                      cos(x) cos(h)   sin(x) sin(h)   cos(x)
                 T := ------------- - ------------- - ------
                            h               h           h   

T:=collect(T,h);

                      cos(x) cos(h) - sin(x) sin(h) - cos(x)
                 T := --------------------------------------
                                        h                   

T:=collect(T,cos(x));

                       (cos(h) - 1) cos(x)   sin(x) sin(h)
                  T := ------------------- - -------------
                                h                  h      

So now you need to take the limit of the above as h->0. The above is the sum of two expressions, and you can treat them separately. The cos(x) and -sin(x) factors can be taken outside the limit. So, how do you find the value of the following? You could use Maple's limit tutor and get its hints. It may suggest an application of l'Hopital's rule.

Limit( sin(h)/h, h = 0 )

The same goes for the other part, Limit( (cos(h)-1)/h, h=0 ). And that is how some texts explain this example, by using l'Hopital's rule to find the limits of both (cos(h)-1)/h and sin(h)/h as h->0. See, for example, this online AP Calculus source. The bright student might object, at this point, on the grounds that it is circular logic to allow use of the derivative of numerator cos(h)-1 during an application of l'Hopital's rule while attempting to derive the derivative of cos(x) via its limit definition. And so it often goes, in a first Calculus course. (Geometric proof, or application of the squeeze theorem, may be beyond what you are responsible for in your course.)
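Maple can confirm both pieces, and the combined limit, directly:

limit(sin(h)/h, h = 0);                 # 1
limit((cos(h) - 1)/h, h = 0);           # 0
limit((cos(x + h) - cos(x))/h, h = 0);  # -sin(x)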

acer

@digerdiga In 64bit Maple 17.02 running on Windows 7 Pro I am seeing `simplify(eq)` produce a result of `1` when running your uploaded worksheet.

Could you please show the exact sequence of commands where `simplify` alone (without that composed around `evalc`) fails under those assumptions?

Either plaintext 1D Maple notation, or an uploaded worksheet, would be helpful.

acer
