acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@C_R Quite a lot of your last response indicates that you consider ()() as a distinct piece of syntax. But it isn't, really. It is the combined use of two separate bits of syntax: one pair of parentheses for grouping or delimiting terms that will be applied (like an operator), and a second pair of parentheses that denotes function application (and groups the arguments of that application, if any).

Parentheses thus serve two different syntactic purposes here. Both of these purposes (grouping or delimiting terms, and denoting function application) can also arise separately. You can use them together, but the combination is not a single piece of syntax.

The case of grouping or delimiting expressions (that are to be used as an operator) isn't just useful for composition and arithmetic of the terms. Another important class of example is using parentheses to delimit an expression or an anonymous procedure.

There are currently at least seven examples of using the grouping/delimiting parentheses together with parentheses for function application, on the examples/functionaloperators Help page.

I like your example using parentheses to delimit an equation used in function application. An example such as (f=g)(x) would be useful on that Help page.
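To illustrate with a small sketch (f and g here are just unassigned names), those two uses of parentheses can occur separately or together:

restart;
(f + g)(x);      # grouping, then function application: f(x) + g(x)
(x -> x^2)(3);   # delimiting an anonymous operator, then applying it: 9
(f @ g)(x);      # composition, then application: f(g(x))
(f = g)(x);      # applying a delimited equation: f(x) = g(x)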

ps. I think that whattype isn't highly useful and shouldn't be emphasized as if it were.

pps. Round-bracket parentheses can also denote indexing of rtables.
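For example, a minimal sketch:

M := Matrix([[1, 2], [3, 4]]):
M(1, 2);   # round-bracket (programmer) indexing of an rtable
M[1, 2];   # the usual square-bracket indexing gives the same entry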

@rlopez Here are some examples (using row Vectors to render results side-by-side without separating commas). You should be able to see which names in which subscripts are in italic versus upright roman. 

restart;

 

makesub:=(e,s)->nprintf("#msub(mi(\"%a\"),mn(\"%a\"));",e,s):

 

makesub2:=(e,s)->nprintf("#msub(mi(\"%a\"),%a);", e,
                         subs(['mi'='mn'],
                              convert(Typesetting:-Typeset(s),`global`))):

 

< x[p] | x__p | makesub(x,p) | makesub2(x,p)>;

Vector[row](4, {(1) = x[p], (2) = `#msub(mi("x"),mi("p"))`, (3) = x__p, (4) = x__p})

< U[Y] | U__Y | makesub(U,Y) | makesub2(U,Y) >;

Vector[row](4, {(1) = U[Y], (2) = `#msub(mi("U"),mi("Y"))`, (3) = U__Y, (4) = U__Y})

< U[U] | U__U | makesub(U,U) | makesub2(U,U) >;

Vector[row](4, {(1) = U[U], (2) = `#msub(mi("U"),mi("U"))`, (3) = U__U, (4) = U__U})

< U[4] | U__4 | makesub(U,4) | makesub2(U,4) >;

Vector[row](4, {(1) = U[4], (2) = `#msub(mi("U"),mi("4"))`, (3) = U__4, (4) = U__4})

< U[sin(x)] | `U__sin(x) ` | makesub(U,sin(x)) | makesub(U,sin(x)) >;

Vector[row](4, {(1) = U[sin(x)], (2) = `#msub(mi("U"),mi("sin(x) "))`, (3) = `#msub(mi("U"),mn("sin(x)"));`, (4) = `#msub(mi("U"),mn("sin(x)"));`})

< U[sqrt(H)] | `U__sqrt(H) ` | makesub(U,sqrt(H)) | makesub2(U,sqrt(H)) >;

Vector[row](4, {(1) = U[sqrt(H)], (2) = `#msub(mi("U"),mi("sqrt(H) "))`, (3) = `#msub(mi("U"),mn("H^(1/2)"));`, (4) = `#msub(mi("U"),msqrt(mn("H")))`})

expr := Int(f(x),x=a..b):
< U[expr] | makesub(U,expr) | makesub2(U,expr) >;

Vector[row](3, {(1) = U[Int(f(x), x = a .. b)], (2) = `#msub(mi("U"),mn("Int(f(x),x = a .. b)"));`, (3) = `#msub(mi("U"),mrow(msubsup(mo("&int;"),mn("a"),mn("b")),mn("f"),mo("&ApplyFunction;"),mfenced(mn("x")),mo("&DifferentialD;"),mn("x")))`})

Download subscriptroman.mw

That makesub2 is a little "stronger" than makesub, since it can handle pretty-printing of some additional compound expressions.

ps. I see only mention of indexed names in the current version of the Question. Did the OP actually ever ask about "subscripted" names? Otherwise, I don't understand why Carl wrote at length here about so-called atomic, subscripted names versus indexed names.

Thank you Carl, that is what I meant (but failed to convey adequately, sorry). A wrapping procedure around the integrand can raise working precision (Digits). In this way the functional evaluations of the integrand can be computed to higher accuracy while the numeric quadrature scheme can still be the externally-compiled/hardware-double-precision NAG function d01ajc (which has some decent handling around singularities).

However I don't quite use the form that Carl showed for the wrapper procedure around the integrand calls, for a reason I'll now try to explain.

@vv Some years ago the evalhf mechanism was enhanced to allow for temporary escape from within a procedure running in/under evalhf-mode. It was also amended to allow Digits to be set within such a procedure. But such a setting of Digits would only affect the temporary "escapes", not the evalhf running of the current proc. (I think that this is a slightly flawed design because I imagine that it can confuse people -- tripping them up into thinking that such a Digits change might affect the running proc itself...)

Here are some examples which I hope will illustrate more. Personally I usually throw in a dummy list reference to forcibly disable evalhf when I really want to be 100% sure I've disabled evalhf-mode under the `evalf/Int` or Optimization or plotting call that I'm testing.

[edit. Extra calls to forget(evalf) might assist insight in some cases, even if it doesn't alter the results... ]

restart;

Digits:= 25:

F := proc(x)
       # Under evalhf the boolean true becomes the hardware float 1.0,
       # so this comparison holds only when running in evalhf mode.
       if 1 = true then
         print("running under evalhf");
       end if;
       sin(x);
end proc:


         

S1:= proc(x::realcons)
    Digits:= 50;
    evalf(F(x));
end proc:

evalhf(S1(1.2));

"running under evalhf"

.932039085967226288

evalf(S1(1.2));

.9320390859672263496701344

S2:= proc(x::realcons)
    []; # this is now non-evalhf'able
    Digits:= 50;
    evalf(F(x));
end proc:

evalhf(S2(1.2));

Error, unable to evaluate expression to hardware floats: []

evalf(S2(1.2));

.9320390859672263496701344

S3:= proc(x::realcons)
    Digits:= 50;
    eval(evalf(F(x)));
end proc:

evalhf(S3(1.2));

.932039085967226288

evalf(S3(1.2));

.9320390859672263496701344

S4:= proc(x::realcons)
    Digits:= 50;
    F(x);
end proc:

evalhf(S4(1.2));

"running under evalhf"

.932039085967226288

evalf(S4(1.2));

.9320390859672263496701344

 

Download evalhf_notes.mw

One of my points is this: when I want to deliberately prevent computation under evalhf (because, say, I'm testing performance and wish to examine the case without it) then I have to take special care to prevent it altogether. Otherwise the "escaped evalhf" business can be a little too tricky for me to know exactly what the control flow and effective working precision have been.

I know that some Optimization/evalf-Int/plotting commands have userinfo messages that indicate which of the evalhf/evalf modes they are using. And some respect UseHardwareFloats=false for the control flow. But those things are no guarantee against implementation bugs. When I want a wrapper that definitely won't work under evalhf, or which is guaranteed to force and utilize some higher working precision, then I make my proc non-evalhf-able. (An exception might be when a no-op list reference confused codegen's gradient. Such fun.)

ps. Yes, I meant to convey that forcing a tolerance through the epsilon option is more careful and robust than passing a small value for the digits option, for evalhf(Int(...)) .

What would you do with such a solution, if say it turned out to involve expressions that took 100 pages to pretty-print?

It's a serious question: what, precisely, are your plans for such a symbolic solution?

@mmcdara It is certainly not true that evalf(Int(...)) cannot be used with Digits>15.

Even the specific method=_d01ajc can be used alongside functional evaluations of the integrand that are done at higher-than-double precision (ie. Digits>15 and non-evalhf). But some care is needed for some variants.

It was not clear whether the OP needed values very close to the cusp. The curve cusp can't easily be very well visualized except on a very narrow range of theta, so I am not sure that I understand what the motivation is for doing so.

The OP makes a mistake by applying evalf to A1 and A2 prior to the subsequent integration and plotting. That already incurs numeric error which could skew attempts at doing the integration/plotting at higher working precision. I had removed those early evalf calls in my Answer. And I have indeed run the whole thing at significantly higher non-hardware precision. So far I haven't detected major inaccuracy for the fast hardware double-precision evalhf approach, except so very close to the cusp that (IMO) it hardly matters.

But the OP has not provided insight into what he wants from the plot: eg. qualitative insight, or quantitative optimization estimates, or something else...

ps. Your call,
     int(eval(A1+A2,omega=20.099752),theta=0..Pi, numeric, digits=5)
is not the right way to get 5 accurate digits in general, or even to force a coarse tolerance near say 1e-5. Also, one should not rely upon evalhf'ability of the integrand, IMO. More appropriate is to specify the accuracy tolerance explicitly, instead of having it be implied by an unholy low value for the working precision. (I am not sure of the value you obtained for this, btw.)
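A sketch of what I mean, reusing the A1 and A2 as assigned earlier in the thread (the epsilon option requests the accuracy tolerance directly, without abusing the working precision):

evalf(Int(eval(A1 + A2, omega = 20.099752), theta = 0 .. Pi,
          epsilon = 1e-5, method = _d01ajc));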

In future please use the green up-arrow in the Mapleprimes editor to upload and attach your worksheet .mw file.

Providing only an image of your code is unhelpful. Nobody else should have to retype it.

@Carl Love I understand now, you wanted longer subtickmarks. Thanks.

And you don't like multiplying by `&deg;` because the ensuing spacing is too wide. (I think you could get by with `if`(irem(k,4)=0, cat(k*45/4,"&deg;"), "") in Maple 2018/2021, though it's not a great deal terser.)

@Carl Love I'm not sure that I understand why `if` and irem are used there. For flexibility? Perhaps more straightforward for the common case might be, say,

plots:-polarplot(phi, phi=0..2*Pi,
                 axis[angular]=[tickmarks=[seq(i*Pi=i*180*`&deg;`,
                                               i=0..2,1/4)],
                                gridlines=[32,majorlines=4]]);

I don't mean this as any kind of criticism.

@Kitonum As Carl mentioned, a mix of both approaches looks nice and is reasonably legible.

And Unit(degree) could be used in the range, instead of using the Pi/180 factor for each occurrence of the variable.

polaraxisdesgrees.mw

This also works in Maple 2018.2,

plots:-polarplot(cos(3*phi*Pi/180), phi=0..360,
                 axis[angular]=[tickmarks=[seq(i=cat(i,"&deg;"),
                                               i=0..360,45)],
                                gridlines=[32,majorlines=4]],
                 angularunit=degrees);

plots:-polarplot(cos(3*phi), phi=0..360*Unit(degree),
                 axis[angular]=[tickmarks=[seq(i=cat(i,"&deg;"),
                                               i=0..360,45)],
                                gridlines=[32,majorlines=4]],
                 angularunit=degrees, labels=["",""]);

@tarik_mohamadi Substituting Bi=1 after simplifying/computing under the assumption Bi>1 seems to me like the kind of thing that can lead to invalid results.

@tarik_mohamadi You could divide out that multiplicative factor Bi-1, although I don't see what mathematical justification you have for doing so. The results of that are no longer identically zero when Bi=1, but only you would know what meaning they'd provide.

Determinant_ac.mw

@PsiSquared I see the same problematic effect when exporting to PDF in Maple 2021.0 for 64bit Linux.

@nm I do not understand why you would not provide one of your actual LaTeX string examples when originally submitting the Question.

Here is some more, trying to represent multiple x-y pairs across various numeric E-values.

Once the roots for a fixed set of E values are computed (first plot with that numpoints) then repeated calls are quick for those same E-values, allowing easier customization of view, slicing/dicing, etc. Hence the use of adaptive=false and common numpoints.

There is a simple approach, putting all x-red and y-blue together. This shows some ostensible curves, but loses the x-y groupings.

There is also an attempt at "tracking", keeping the colors but using different point symbols. A similar thing could be done with, say, 3 shades of red and 3 shades of blue. This kind of identification of roots is difficult. The differing scales don't help. Sorting is an alternative scheme. But all schemes can be defeated by some example.

If you know the expected ranges for x and y then this is the time to say so.

Error_John2020_acc.mw

 

@John2020 You have attempted to utilize the name Z as a dummy variable, for the plot call.

But you have already assigned a numeric value to Z, as one of the parameters in your equation setup. So your call to plot is invalid.

So you could try unassigning Z just before plotting, or use another dummy variable name for the plot call, or use the operator-form calling sequence of the plot command, etc. In the attachment I show one simple alternative.

Error_John2020_a.mw
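A minimal sketch of the unassignment alternative (with Z standing in for the assigned parameter):

Z := 1.5:                     # Z was assigned as a parameter
plot(sin(Z), Z = 0 .. 2*Pi);  # error: Z is no longer a valid dummy name
Z := 'Z':                     # unassign Z (unassign('Z') also works)
plot(sin(Z), Z = 0 .. 2*Pi);  # now Z can serve as the plotting variable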

I do not make any effort for efficiency through memoization in this attachment. It can be done, and it ought to be done, but first you need to figure out how you want to handle the fact that your equations might have multiple x-y solution pairs for each E value in your domain. Do you want to "track" solution pairs that appear to lie along curves, or point-plot them all together (but still with x=red and y=blue), etc, etc.?
