While doing some Maple plotting I found myself asking why a high-end scientific computation application like Maple, which is capable of essentially arbitrary-precision floating-point computation, sometimes makes only crude use of hardware-precision plot drivers. I looked around a bit, and found that this and related issues are not restricted to Maple.

Let's look first at Maxima. Here's my first example, in Maxima (not Maple) syntax,

   wxplot2d([exp(x-714)], [x,5,10]);

That produces the following plot,

Now I make a small edit,

   wxplot2d([exp(x-715)], [x,5,10]);

And the above produces an empty plot.

The reason why is easy to guess: a compiled plot driver is unable to handle values smaller than the smallest double-precision floating-point number, which is usually not much bigger than 10^(-308). I.e., in Maple,

> evalhf(DBL_MIN);

1.00000010000000001 10^(-307)

Notice how there are no tick marks on that empty plot. It'd be up to the user to figure out what range of values is being computed, if the whole thing is to be manually corrected by some scaling tricks. (More on that, below.)
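
As a quick check on the scale of the values involved, Maple's software floats (which don't underflow at these exponents) give roughly 4.5 10^(-309) and 6.6 10^(-307) for the endpoints of that second example -- straddling the hardware threshold:

> evalf(exp(5-715)), evalf(exp(10-715));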

Pretty much the same thing happens on WolframAlpha. This URL produces a plot, while this URL produces an empty plot.

There seem to be at least two hurdles here. One involves actually producing valid plot data for the problematic domain. That may in fact be what is tripping up Maxima and WolframAlpha above. (Their plot drivers may not even be seeing good data for this example.)

Many users are not prepared for the kinds of explanation usually given for such issues. The argument may be made that, if these systems are at all capable of figuring out the right plot automatically, then they should do so.

For the next example below, Maple's Standard GUI generates an improper plot at its default working precision (due to using evalhf). Yet for some values smaller than what hardware doubles allow, no correct plot is rendered regardless of the computational working precision (i.e. even when forcibly using the "software float" evalf interpreter).

> plot(proc(x) exp(x-752); end proc, 5..10, numpoints=500);

 

Ideally, the above plot should not be jagged. The computational engine has actually produced jagged data inside the plot structure here, at default precision. It didn't fail entirely and produce an all-zero plot, however.
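
One way to verify that is to inspect the plot data structure directly (assuming the typical layout, in which the first operand of the PLOT structure is a CURVES holding the data):

> P := plot(proc(x) exp(x-752); end proc, 5..10, numpoints=500):
> op(1, op(1, P)); # the raw x-y data; at default Digits the y-values repeat in clumps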

Now, if one raises Digits to 1400+, puts something non-evalhf'able inside the proc, etc., the problem persists. The data in the plot structure is then fine and gently varying. But the plot still incorrectly comes out jagged. We can do a pointplot and demonstrate that gently varying values are plotted clumpily. The issue is in the plot driver, not the computational engine.

Note the nicely varying y-values for the x-values from 7.0 to 7.2 below. Those y-values all end up being shown incorrectly as the same value, in the rendered plot. The plot data is right, but the rendering goes wrong.

> Digits:=1400:

> G:=proc(x) []; exp(x-752); end proc:  # the [] makes the proc non-evalhf'able

> [seq(G(x), x in [7.0, 7.05, 7.1, 7.15, 7.2])]:

> evalf[10](%);

[2.822350730 10^(-324), 2.967055747 10^(-324), 3.119179948 10^(-324),
 3.279103724 10^(-324), 3.447226967 10^(-324)]

> L:=[seq([5+5*(x-1)/100,G(5+5*(x-1)/100)], x=1..101)]:
> L:=evalf[10](L):
> L[41..45];

[[7., 2.822350730 10^(-324)], [7.050000000, 2.967055747 10^(-324)],
 [7.100000000, 3.119179948 10^(-324)], [7.150000000, 3.279103724 10^(-324)],
 [7.200000000, 3.447226967 10^(-324)]]

> plots:-pointplot(L);

My argument is that this behaviour is not justified, in any of these systems. I'd understand if a through-and-through double-precision system like Matlab did it. But Maple, Mathematica, and Maxima can all do essentially arbitrary-precision float computation. All that's needed to correctly render such a plot is to detect that the data is out of hardware range, scale it, and then render the scaled data with accurate axis tickmarks.

At some point the `plot` (or related) command itself internally invokes the plot driver and passes along the data. Before that hand-off, the range of the data could be checked and, if detected as being outside `hardware' range, the data rescaled and appropriate tickmarks deduced. (Obviously, the plot driver cannot do such a check, since it is apparently incapable of properly handling such values. But other parts of Maple could, earlier.)

Look, the following may not be super efficient, but it works, and it could be done programmatically after an automatic range check. It's not changing how the y-values are spaced relative to each other -- they were already fine. It's just scaling them all by the same multiplicative factor. That takes the above incorrect plot and turns it into the one below.

> Digits:=10:
> plots:-pointplot(map(t->[t[1],t[2]*1e323],L),
> ytickmarks=evalf[2]([seq(i*L[-1][2]/5*1e323
> =convert(i*L[-1][2]/5,string),i=0..6)]));

All that did was scale the y-values and forcibly dictate the ytickmarks. Those tasks, including computing a decent scaling factor, could be automated. Detecting whether all y-values lie between -1e-307 and 1e-307 (or similar exponents) is a simple test.
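
Here is a minimal sketch of that automation for point data, assuming a list L of [x,y] pairs with small nonnegative y-values; the name ScaledPointplot and the 1e-307 cutoff are illustrative choices, not anything built in:

> ScaledPointplot := proc(L::list)
>   local t, ymax, f;
>   ymax := max(seq(t[2], t in L));
>   if ymax > 0 and ymax < 1e-307 then
>     # a power-of-ten factor that brings the data into hardware range
>     f := 10^(-ilog10(ymax));
>     plots:-pointplot(map(t -> [t[1], t[2]*f], L),
>       ytickmarks = [seq(evalf(i*ymax*f/5)
>                       = convert(evalf[2](i*ymax/5), string), i = 0..5)]);
>   else
>     plots:-pointplot(L);
>   end if;
> end proc:
>
> ScaledPointplot(L); # reproduces the rescaled pointplot above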

The same kind of scaling could be done on the data produced by a regular `plot` command, and tickmarks inserted.

> restart:
> Digits:=320:
> P:=plot(exp(x-752), x=5..10):
> P;

> L:=map2(op,2,op(1,op(1,P))):
> A,B := evalf[10](min(L)), evalf[10](max(L)):
> plots:-display(plottools:-scale(P,1.0,1e323),
> ytickmarks=[seq((A+i*(B-A)/6)*1e323
> =convert(evalf[2](A+i*(B-A)/6),string),
> i=0..5)]);
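
Again, those steps could be wrapped up. A minimal sketch, assuming a 2-D plot structure whose first operand is a CURVES holding the data, with positive y-values (the name ScaledDisplay is hypothetical):

> ScaledDisplay := proc(P)
>   local L, A, B, f;
>   # pull the y-values out of the first CURVES substructure
>   L := map2(op, 2, op(1, op(1, P)));
>   A, B := evalf[10](min(L)), evalf[10](max(L));
>   # power-of-ten scaling factor deduced from the largest y-value
>   f := 10^(-ilog10(B));
>   plots:-display(plottools:-scale(P, 1.0, f),
>     ytickmarks = [seq((A + i*(B-A)/6)*f
>                     = convert(evalf[2](A + i*(B-A)/6), string), i = 0..5)]);
> end proc:
>
> ScaledDisplay(P);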

acer

