acer

MaplePrimes Activity


These are replies submitted by acer

Thanks, so it seems like the differences are either bugs, implicit multiplication, or the following singleton:

restart:
# works differently in 1D and 2D
m!=n;

  "This is, as you observed, by design. When you enter != or
   paste it, it immediately turns into <>, so at least users
   are not misled into thinking it has a different meaning." (pchin)

Even better than turning this into m <> n would be to leave it as m! = n, which was already meaningful mathematical syntax in Maple for many releases. It's wrong to change it, on several different levels.

  • It already meant something well understood mathematically, for many Maple releases.
  • It's rather gratuitous clobbering of Maple 1D syntax with some non-universally understood (C...) syntax.
  • It creates a difference between the parsers, which should be avoided unless it brings a really significant functionality improvement (which this does not). See Jacques' characterization in the link.
  • It will incorrectly rewrite any piece of Maple code containing it, that anyone finds on the web or in course notes and pastes into Standard with its default 2D Math mode.
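For contrast, here is a small 1D-notation sketch; the comments show the parses I'd expect from lprint, though details may vary by release.

restart:
# In 1D Maple notation the postfix ! binds to m first, so this
# is the equation factorial(m) = n, not the inequation m <> n.
lprint( m!=n );     # factorial(m) = n
lprint( m <> n );   # m <> n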

acer

Thanks. It might be interesting to see what makes the int@diff method fall down, compared to the double limit method in the OP's Mma code fragment (...taken from Mma's `Re` help-page).

acer

I can explain a little more what I mean when I state that the following is a bug.

# valid in 2D but not in 1D
2.sin(x);

In 1D Maple notation, the decimal point has greater parsing precedence than the infix `.` multiplication operator. So the above example is actually implicit multiplication by the floating-point value 2. In other words, this is really just another implicit multiplication example.

The system has anomalies. Precedence is not merely an operator issue (see ?operators,precedence ), but is also a parsing issue. Where can one see it documented that the decimal point has a "greater parsing precedence" than does infix `.`, but less than infix `..`?

# In either entry mode, this gives a range
2..sin(x);
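As a sketch of how spacing disambiguates these parses in 1D notation (the comments show the readings I'd expect):

restart:
lprint( 2..sin(x) );    # the range 2 .. sin(x), in either entry mode
lprint( 2 . sin(x) );   # spaced infix `.` is multiplication: 2*sin(x)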

At the very least, this is a missing documentation bug (much like this ). Why doesn't ?syntax mention 2D Math's implicit multiplication, or link to a 2D Math syntax page?

While I'm at it, the ?syntax help-page describes the ambiguity of a^b^c and the need for brackets, in 1D Maple notation where it makes sense. But the ?operators,precedence help-page has the same explanation in 2D Math notation, where it doesn't make any sense.

acer

Thanks, Axel.

There are quite a few different floating-point computational modes in Maple now. It would be interesting to test more than just Maple's "usual" interpreter.

There is the usual interpreter, the evalhf interpreter, the runtime provided by Compiler:-Compile, and the usual interpreter running inside a proc with `option hfloat`. Then there are settings such as UseHardwareFloats=true/false, and whether one uses usual software floats (SFloats) or HFloats. Some of those situations can be mixed together. What else have I missed?
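A rough sketch of exercising a few of those modes on one tiny procedure (names and behaviour as I understand them; results can differ in the last ulps):

restart:
p := proc(x) sin(x) + x^2 end proc:
evalf( p(0.5) );               # usual (software float) interpreter
evalhf( p(0.5) );              # evalhf double-precision interpreter
ph := proc(x) option hfloat; sin(x) + x^2 end proc:
ph( 0.5 );                     # hfloat-optimized interpreted proc
cp := Compiler:-Compile( proc(x::float) sin(x) + x^2 end proc ):
cp( 0.5 );                     # native code from Compiler:-Compile
UseHardwareFloats := false:    # force SFloats even at hardware Digits
evalf( p(0.5) );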

I'm deliberately not mentioning Maple's rounding modes. There are paranoia tests for rounding specifically, and obviously these would give different results for non-default Maple rounding mode. I'd probably only look at the default Maple rounding mode (at least, at first).

acer

The recursive version calls itself; your original version (and Joe's amended version) do not. That is the difference: one of them recurses on itself, and the others do not.

acer

I will submit an SCR against evalhf returning Float(infinity) when evaluating abs(z) for your example of z=1e200+I*1e200, Axel.

I notice that Compiler:-Compile seems to do OK with that.

> restart:

> z:=10^200+I*10^200:
> evalhf(abs(z));
Float(infinity)

> p:=proc(x::complex(numeric)) abs(x); end proc:
> cp:=Compiler:-Compile(p):
> trace(evalf):
> cp(z); # is not using evalf
                  0.141421356237309504 10^201

It would be better if evalhf and Compiler:-Compile shared their more robust (complex arithmetic) runtime features.
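Presumably evalhf's complex abs squares the components naively, and (1e200)^2 overflows the double range near 1.8e308. A scaled, hypot-style formulation avoids that even under evalhf; here is a sketch, with a made-up name habs:

restart:
habs := proc(x, y)   # |x + I*y| without intermediate overflow
   local m, t;
   m := max(abs(x), abs(y));
   if m = 0. then
      0.
   else
      t := min(abs(x), abs(y))/m;
      m * sqrt(1. + t^2);
   end if;
end proc:
evalhf( habs(1e200, 1e200) );  # about 1.414e200, no Float(infinity)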

acer

It is recursive because the procedure bisection calls itself.

It calls itself differently, according to whether f(c) has the same sign as f(a). (I just used your logic there. You could improve it.)

When it calls itself, it replaces either argument a or b by c, according to your logic as mentioned.

That whole process, of bisection calling itself with new arguments, over and over, happens until the tolerance eps is met.

You can first issue trace(bisection) before calling it with example inputs, to see more printed detail of what it does when it runs. See the ?trace help-page.
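As a sketch of the shape being described (hypothetical names and tolerance; the procedure you have differs in its details):

bisection := proc(f, a, b, eps)
   local c;
   c := evalf( (a + b)/2 );
   if abs(b - a) < eps then
      c
   elif signum(f(c)) = signum(f(a)) then
      bisection(f, c, b, eps)   # replace argument a by c
   else
      bisection(f, a, c, eps)   # replace argument b by c
   end if;
end proc:
bisection( x -> x^2 - 2, 1.0, 2.0, 1e-9 );  # converges to sqrt(2)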

acer

yankyank (with its extra simplify) isn't actually needed here; yank will do. Indeed, if you hit it with a final combine then you get pretty much what I posted above, i.e. I did what yank does.

acer

I often get a feeling of surprise that floating-point accuracy (as opposed to working precision!) does not come up here on MaplePrimes as a topic of intense discussion.

Routines such as evalf/Int and Optimization's exports have numerical tolerance parameters. And "atomic arithmetic operations", single calls to trig, and some special functions compute results accurate within so many ulps. But other than that, for compound expressions, all bets are off. The expression might be badly conditioned for floating-point evaluation.

Now, Maple has Digits as a control over working precision, but it has no more general specifier of requested accuracy. Compare with Mathematica, which claims to have both. So, in Maple one must either do the analysis by hand or raise Digits to some problem-specific mystery value. The evalr command (and its less well-known friend, shake) is not really strong enough to help much. An argument that I have sometimes heard against making any progress in this area is that it's an open field of study, and that partial, fuzzily-bounded coverage is to be avoided. If we all accepted that sort of thinking all the time, Maple might have normal but no radnormal, evala/Normal, or simplify/trig, let alone simplify proper.
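For what it's worth, the existing tools look roughly like this (a sketch; the exact intervals depend on Digits and the release):

restart:
Digits := 10:
shake( Pi );                # an INTERVAL enclosing Pi at current Digits
evalr( sqrt( shake(2.0) ) ); # interval arithmetic pushed through sqrt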

acer
