acer

32460 Reputation

29 Badges

20 years, 3 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Thank you Carl, that is what I meant (but failed to convey adequately, sorry). A wrapping procedure around the integrand can raise working precision (Digits). In this way the functional evaluations of the integrand can be computed to higher accuracy while the numeric quadrature scheme can still be the externally-compiled/hardware-double-precision NAG function d01ajc (which has some decent handling around singularities).

However I don't quite use the form that Carl showed for the wrapper procedure around the integrand calls, for a reason I'll now try to explain.

@vv Some years ago the evalhf mechanism was enhanced to allow for temporary escape from within a procedure running in/under evalhf-mode. It was also amended to allow Digits to be set within such a procedure. But such a setting of Digits would only affect the temporary "escapes", not the evalhf running of the current proc. (I think that this is a slightly flawed design because I imagine that it can confuse people -- tripping them up into thinking that such a Digits change might affect the running proc itself...)

Here are some examples which I hope will illustrate more. Personally I usually throw in a dummy list reference to forcibly disable evalhf when I really want to be 100% sure I've disabled evalhf-mode under the `evalf/Int` or Optimization or plotting call that I'm testing.

[edit. Extra calls to forget(evalf) might assist insight in some cases, even if it doesn't alter the results... ]

restart;

Digits:= 25:

F := proc(x)
       if 1 = true then  # holds only under evalhf, where true is treated as 1
         print("running under evalhf");
       end if;
       sin(x);
end proc:


S1:= proc(x::realcons)
    Digits:= 50;
    evalf(F(x));
end proc:

evalhf(S1(1.2));

"running under evalhf"

.932039085967226288

evalf(S1(1.2));

.9320390859672263496701344

S2:= proc(x::realcons)
    []; # this is now non-evalhf'able
    Digits:= 50;
    evalf(F(x));
end proc:

evalhf(S2(1.2));

Error, unable to evaluate expression to hardware floats: []

evalf(S2(1.2));

.9320390859672263496701344

S3:= proc(x::realcons)
    Digits:= 50;
    eval(evalf(F(x)));
end proc:

evalhf(S3(1.2));

.932039085967226288

evalf(S3(1.2));

.9320390859672263496701344

S4:= proc(x::realcons)
    Digits:= 50;
    F(x);
end proc:

evalhf(S4(1.2));

"running under evalhf"

.932039085967226288

evalf(S4(1.2));

.9320390859672263496701344

 

Download evalhf_notes.mw

One of my points is this: when I want to deliberately prevent computation under evalhf (because, say, I'm testing performance and wish to examine the case without it) then I have to take special care to prevent it altogether. Otherwise the "escaped evalhf" business can be a little too tricky for me to know exactly what the control flow and effective working precision have been.

I know that some Optimization/evalf-Int/plotting commands have userinfo messages that indicate which of the evalhf/evalf modes they are using. And some respect UseHardwareFloats=false for the control flow. But those things are no guarantee against implementation bugs. When I want a wrapper that definitely won't work under evalhf, or which is guaranteed to force and utilize some higher working precision, then I make my proc non-evalhf-able. (An exception might be when a no-op list reference confused codegen's gradient. Such fun.)

ps. Yes, I meant to convey that forcing a tolerance through the epsilon option is more careful and robust than passing a small value for the digits option, for evalf(Int(...)).

What would you do with such a solution, if say it turned out to involve expressions that took 100 pages to pretty-print?

It's a serious question: what, precisely, are your plans for such a symbolic solution?

@mmcdara It is certainly not true that evalf(Int(...)) cannot be used with Digits>15.

Even the specific method=_d01ajc can be used alongside functional evaluations of the integral that are done at higher-than-double precision (ie. Digits>15 and non-evalhf). But some care is needed for some variants.
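As a minimal sketch of that variant (with a stand-in integrand of my own choosing): the dummy list reference makes the wrapper non-evalhf'able, so the raised Digits actually takes effect for each integrand evaluation, while the quadrature itself still runs as the hardware-precision d01ajc scheme.

G := proc(x)
       []; # dummy list reference: makes this proc non-evalhf'able
       Digits := 30;
       evalf(sin(x)/sqrt(x));
end proc:

evalf(Int(G, 0..1, method = _d01ajc));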

It was not clear whether the OP needed values very close to the cusp. The curve's cusp can't be visualized well except over a very narrow range of theta, so I am not sure that I understand the motivation for doing so.

The OP makes a mistake by applying evalf to A1 and A2 prior to the subsequent integration and plotting. That already incurs numeric error which could skew attempts at doing the integration/plotting at higher working precision. I had removed those early evalf calls in my Answer. And I have indeed run the whole thing at significantly higher non-hardware precision. So far I haven't detected major inaccuracy for the fast hardware double-precision evalhf approach, except so very close to the cusp that (IMO) it hardly matters.

But the OP has not provided insight into what he wants from the plot: eg. qualitative insight, or quantitative optimization estimates, or something else...

ps. Your call,
     int(eval(A1+A2,omega=20.099752),theta=0..Pi, numeric, digits=5)
is not the right way to get 5 accurate digits in general, or even to force a coarse tolerance near say 1e-5. Also, one should not rely upon evalhf'ability of the integrand, IMO. More appropriate is to specify the accuracy tolerance explicitly, instead of having it be implied by an unholy low value for the working precision. (I am not sure of the value you obtained for this, btw.)

In future please use the green up-arrow in the Mapleprimes editor to upload and attach your worksheet .mw file.

Providing only an image of your code is unhelpful. Nobody else should have to retype it.

@Carl Love I understand now, you wanted longer subtickmarks. Thanks.

And you don't like multiplying by `°` because the ensuing spacing is too wide. (I think you could get by with `if`(irem(k,4)=0, cat(k*45/4,"°"), "") in Maple 2018/2021, though it's not a great deal terser.)

@Carl Love I'm not sure that I understand why that use of `if` and irem is needed. For flexibility? Perhaps more straightforward for the common case might be, say,

plots:-polarplot(phi, phi=0..2*Pi,
                 axis[angular]=[tickmarks=[seq(i*Pi=i*180*`°`,
                                               i=0..2,1/4)],
                                gridlines=[32,majorlines=4]]);

I don't mean this as any kind of criticism.

@Kitonum As Carl mentioned, a mix of both approaches looks nice and is reasonably legible.

And Unit(degree) could be used in the range, instead of using the Pi/180 factor for each occurrence of the variable.

polaraxisdesgrees.mw

This also works in Maple 2018.2,

plots:-polarplot(cos(3*phi*Pi/180), phi=0..360,
                 axis[angular]=[tickmarks=[seq(i=cat(i,"°"),
                                               i=0..360,45)],
                                gridlines=[32,majorlines=4]],
                 angularunit=degrees);

plots:-polarplot(cos(3*phi), phi=0..360*Unit(degree),
                 axis[angular]=[tickmarks=[seq(i=cat(i,"°"),
                                               i=0..360,45)],
                                gridlines=[32,majorlines=4]],
                 angularunit=degrees, labels=["",""]);

@tarik_mohamadi Substituting Bi=1 after simplifying/computing under the assumption Bi>1 seems to me like the kind of thing that can lead to invalid results.

@tarik_mohamadi You could divide out that multiplicative factor Bi-1, although I don't see what mathematical justification you have for doing so. The results of that are no longer identically zero when Bi=1, but only you would know what meaning they'd provide.
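A toy sketch of that effect, with a hypothetical expression standing in for the actual determinant:

     d := (Bi - 1)*(Bi^2 + 2):     # stand-in expression carrying the factor
     dd := normal(d/(Bi - 1));     # divide out the multiplicative factor Bi-1
     eval(dd, Bi = 1);             # no longer identically zero at Bi=1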

Determinant_ac.mw

@PsiSquared I see the same problematic effect when exporting to PDF in Maple 2021.0 for 64bit Linux.

@nm I do not understand why you would not provide one of your actual LaTeX string examples when originally submitting the Question.

Here is some more, trying to represent multiple x-y pairs across various numeric E-values.

Once the roots for a fixed set of E values are computed (first plot with that numpoints) then repeated calls are quick for those same E-values, allowing easier customization of view, slicing/dicing, etc. Hence the use of adaptive=false and a common numpoints.

There is a simple approach, putting all x-red and y-blue together. This shows some ostensible curves, but loses the x-y groupings.

There is also an attempt at "tracking", keeping the colors but using different point symbols. A similar thing could be done with, say, 3 shades of red and 3 shades of blue. This kind of identification of roots is difficult. The differing scales don't help. Sorting is an alternative scheme. But all schemes can be defeated by some example.

If you know the expected ranges for x and y then this is the time to say so...

Error_John2020_acc.mw

 

@John2020 You have attempted to use the name Z as a dummy variable for the plot call.

But you have already assigned a numeric value to Z, as one of the parameters in your equations' set-up. So your call to plot is invalid.

So you could try unassigning Z just before plotting, or use another dummy variable name for the plot call, or use the operator-form calling sequence of the plot command, etc. In the attachment I show one simple alternative.
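For instance, a sketch with a simple stand-in expression in place of the OP's system:

     Z := 3.7:        # Z was assigned a numeric value as a parameter
     # Alternative 1: unassign just before plotting
     Z := 'Z':
     plot(sin(Z), Z = 0 .. 1);
     # Alternative 2: use the operator-form calling sequence (no dummy name needed)
     plot(sin, 0 .. 1);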

Error_John2020_a.mw

I do not make any effort at efficiency through memoization in this attachment. It can be done, and it ought to be done, but first you need to figure out how you want to handle the fact that your equations might have multiple x-y solution pairs for each E value in your domain. Do you want to "track" solution pairs that appear to lie along curves, or point-plot them all together (but still with x=red and y=blue), etc, etc.
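Memoization here could be as simple as an option remember wrapper around the root-solving step. A sketch, with a hypothetical one-root system standing in for the OP's equations: option remember caches the result for each E-value, so a second plot call over the same range with the same numpoints hits the cache.

     xroot := proc(E)
            option remember;  # caches the computed root for each E-value
            fsolve(x^3 - E*x + 1 = 0, x = 0 .. 1);
     end proc:

     plot(xroot, 10 .. 20, adaptive = false, numpoints = 51, style = point);
     plot(xroot, 10 .. 20, adaptive = false, numpoints = 51, style = point,
          view = [default, 0 .. 0.2]);   # quick: reuses the remembered roots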

@John2020 Please don't submit it as another, separate Question thread.

@Carl Love I do not prefer that mechanism using ToInert, specfunc, and pointto, because it has less direct connection to the nature (or location, or details) of the mystery procedure.

Your comment about reliance on opaquemodules=false is not really an accurate description of that code. I favor a mechanism where that setting doesn't come into play if unrelated; it's not needed for the routine to be accessed at runtime, so a mechanism that reveals it independently of that setting is less kludgey.

Also, your comment about results from LibraryTools is not very on target here, since it applies to far more usual procedures than the very special case considered here, partition1.
