acer


MaplePrimes Activity


These are replies submitted by acer

We've gotten a little side-tracked onto the topic of 3D plot sizing and aspect ratios, rather than the original question about meshing and grid appearance on surfaces. My comments below continue on that side-topic.

3D plot rendering in the Standard GUI changed in several ways after Maple 12. In modern Maple there is also access to all three rotation angles (via the orientation option, and in manual rotation with the mouse too).

And while stretching of the rendered 3D plot (following manual restretching of the 2-dimensional "plot window") provided another bit of functionality in Maple 12, it had its limitations. For example, it was still hampered by the inability to manually rotate in the 3rd orientation angle. It was only available in the scaling=unconstrained case, and it still forced the x-y axes into a 1-1 aspect ratio. And the aspect ratios of the x-z or y-z axes could only be adjusted roughly with mouse action, not precisely and programmatically.

By coincidence I have been working on assembling a document which I hope will illustrate a more complete set of desirable cases. Basically I was after something like this:

1) The `size` option for 3D plots, to allow forced and precise control of the 2-dimensional viewing window, so that the user can avoid wasted white-space (above and below), ensure that long labels or long tickmark values are visible, etc.

2) Aspect ratios for the 3D plot axes, to get full, precise, programmatic control of the relative lengths of the visible portions of the axes (ie, pairwise relative). This detail should probably be stored as a new bit of data in the PLOT3D data structure's axis substructure, and made available to commands via the `axis[n]` options, say.

After giving it considerable thought I am pretty sure that 1) and 2) need to be separate. Ie, a three-valued `size` option would not suffice, and would be a misimplementation that does not adequately cover enough situations.

What I'm trying to assemble is a set of 3D plots, in a worksheet, that exhibit the varied scenarios, so that a full and proper solution might be realized. I happen to be constructing these using Library-side techniques of re-scaling, manually made tickmarks, and a little programmatic content generation and embedding. (And I suspect that this might be useful as an interim stop-gap for people who want to force a particular appearance in a document, now.) But that's just to build the cases -- I still want a PLOT3D data structure and GUI rendering implementation as the ideal.

How does that code you showed have anything to do with setting a time limit?

Explain how Maple's timelimit command doesn't provide what you want. Are you attempting numeric dsolve computations? Are you hoping to get partial numeric results, upon timing out?
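For reference, a minimal sketch of how `timelimit` is typically used (the 10-second limit and the integrand here are placeholder assumptions, not your actual computation):

```maple
# Abort the computation if it exceeds the given time limit (in seconds).
# On timeout, timelimit raises an error with the message "time expired".
try
    result := timelimit(10, int(exp(-x^2)*sin(1/x), x = 0 .. 1, numeric));
catch "time expired":
    result := FAIL;   # no partial numeric result survives the timeout
end try;
```

Note that `timelimit` discards the interrupted computation entirely, which is why the question of partial results matters.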

acer

Supply the source code. Be clear in explaining your objections.

Use the same range for both plots. You have one plot with x from -2 to 2 and another from -2 to 0.

 

@Kitonum Thanks for that.

While compoly is certainly a useful addition to one's toolbag, there are still some easy examples where both it and simplify miss the mark.

The following example is not even in the harder class in which the terms which need to be added/subtracted also contain the relevant variables (eg, my 7th power expansion above). And the degrees are low.

We can observe that compoly requires just the right variable names in order to make progress (which may still be incomplete). The CompleteSquare command does better, with less guidance.

restart;

C := proc(p, vars::set(name):=NULL)
  local u;
  # compoly returns a sequence (q, x = r) such that subs(x = r, q) = p,
  # or FAIL if no composition is found.
  u := [compoly(p, vars)];
  if u = [FAIL] then return FAIL
  else subs([u[2]], u[1]) end if;
end proc:

p := expand( (2*a+b)^2+c+128 );

4*a^2+4*a*b+b^2+c+128

C(p, {a,b});

(2*a+b)^2+c+128

C(p);

FAIL

 

p := expand( (2*a+b)^2+(c+128)^2 );

4*a^2+4*a*b+b^2+c^2+256*c+16384

C(p, {a,b});

(2*a+b)^2+c^2+256*c+16384

C(p);

FAIL

C(p, {c});

FAIL

Student:-Precalculus:-CompleteSquare(p);

(2*a+b)^2+(c+128)^2

Student:-Precalculus:-CompleteSquare(p, {a,b});

(2*a+b)^2+c^2+256*c+16384

Student:-Precalculus:-CompleteSquare(p, {c});

(c+128)^2+4*a^2+4*a*b+b^2

simplify(p,size);

(c+128)^2+4*a^2+4*a*b+b^2

Download compoly.mw

I don't pretend that this isn't a hard task in general for higher degrees, of course.

@vv Sure, and I expect that many of us have been frustrated with that before.

Example, to get from x^7 - 7*x^6 + 21*x^5 - 35*x^4 + 35*x^3 + 7*x - 1 to (x-1)^7 + 21*x^2 .

But it makes me a little sad that when I want to do something (which ought to be) easy like completing-the-square I reach for something as unintuitive as an export of the Student:-Precalculus package.

evalf[4] looks like a really poor choice there.

Note that its main effect on evaluating those compound expressions (containing exp calls and things raised to the 0.5 power) will be to enforce only 4 digits of working precision. That's just asking for a lot of round-off error. If you want to round the Matrix entries to 4 places then do it only after computing those expressions at adequate working precision.

For example, your procedure could map evalf[4] onto a whole Matrix, just before returning. Or those individual Matrix element computations in the loops could each compute at adequate working precision, assign to a temp name, and only then could the Matrix element assignment be made using evalf[4] applied to the temp's running value.

Wrapping those compound expressions in evalf[4] does not mean that you'll get an answer accurate to 4 decimal places. It means that those scalar float computations will be done using only 4 decimal digits of precision (and quite possibly an inadequate few guard digits for the atomic operations like the call to exp, float powering, etc).

Never wrap a floating-point scalar computation in low-precision `evalf` calls if the purpose is just to round the results.
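To make that concrete, here is a small sketch (the particular expression is just an illustration): evaluate at adequate working precision first, and only then round.

```maple
expr := exp(-1.234)*5.678^0.5;

# Poor: the entire evaluation is done with only 4 digits of
# working precision, inviting round-off error.
bad := evalf[4](expr);

# Better: evaluate at default (or higher) working precision,
# and only round the final float to 4 places.
tmp  := evalf[15](expr);
good := evalf[4](tmp);
```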

acer

@tomleslie As I wrote before, the attributes are on the returned function calls to Units:-Unit. That is done by the Units:-Unit command (which happens to return function calls to itself as its return value, albeit with attributes).

When you multiply something by exact 1 in Maple then it automatically simplifies to that same thing. The value assigned to A is a returned Units:-Unit function call. That value of A (which is a Units:-Unit function call) has attributes. The attributes are put there by the Units:-Unit command.

In contrast, the values assigned to B and C are not Units:-Unit function calls. Rather, their values are products, and their zeroth operands are both `*` and not Units:-Unit. That is why the values of B and C do not have attributes on them.

But B and C each do have a generated function call to Units:-Unit as one of their respective multiplicands (operands). And those Units:-Unit multiplicands in the values of B and C do indeed have the same kinds of attributes as does the Units:-Unit function call which is the value of A.

The meaning of the previous paragraph above is demonstrated in my earlier code, via the indets command which picked off the Units:-Unit subexpressions of B and C (which happen to be multiplicands).

I see now what can work better.

At first I made the legend color blobs like,

  `#mrow(mn(".      .",mathbackground="#00ff00"))`;

but that showed the black dots. So then I tried it as,

  `#mrow(mn(".      .",mathcolor="#00ff00",mathbackground="#00ff00"))`;
but that doesn't render as colored at all. So I fudged it like, say,

  `#mrow(mn(".      .",mathcolor="#00ff00",mathbackground="#00ff01"))`;

And now I see that it could be better done as,

  Typesetting:-mrow(Typesetting:-mn("        ",mathbackground="#00ff01"));

And that is nicer because it also allows sizing. Eg,

  Typesetting:-mrow(Typesetting:-mn("            ",mathbackground="#00ff01",size=8));
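As a usage sketch, such a structure can then be supplied as a plot legend entry (the curve and color here are arbitrary, and I'm assuming the `legend` option accepts a typeset structure, as in my use above):

```maple
# Build a colored blob as a typeset structure, then use it as a legend.
blob := Typesetting:-mrow(Typesetting:-mn("            ",
            mathbackground = "#00ff01", size = 8)):
plot(sin(x), x = 0 .. 2*Pi, color = "#00ff01", legend = blob);
```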

@tomleslie Your #2 is not as weird as it looks. It's only the calls to Units:-Unit which have such attributes, and not the products or arithmetic expressions which contain them.

restart:

A:=1*Units:-Unit(m):
B:=2*Units:-Unit(m):
C:=1.0*Units:-Unit(m):

lprint(A);

Units:-Unit(m)

map(`[]`@attributes,[indets(A,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

lprint(B);

2*Units:-Unit(m)

map(`[]`@attributes,[indets(B,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

lprint(C);

1.0*Units:-Unit(m)

map(`[]`@attributes,[indets(C,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1, metre, contexts:-SI), inert]]

F:=2.3*K*Unit(sec^2/mile)+1.7*Unit(day)/(Unit(m)+Unit(cm));

2.3*K*Units:-Unit(('s')^2/('mi'))+1.7*Units:-Unit('d')/(Units:-Unit('m')+Units:-Unit('cm'))

lprint(F);

2.3*K*Units:-Unit(s^2/mi)+1.7*Units:-Unit(d)/(Units:-Unit(m)+Units:-Unit(cm))

map(`[]`@attributes,[indets(F,specfunc(Units:-Unit))[]]);

[[Units:-UnitStruct(1/100, metre, contexts:-SI), inert], [Units:-UnitStruct(1, day, SI), inert], [Units:-UnitStruct(1, metre, contexts:-SI), inert], [Units:-UnitStruct(1, second, SI)^2/Units:-UnitStruct(1, mile, standard), inert]]

 


Download NoSoStrange.mw

Commands such as Units:-Standard:-`+` and its `*` separate the Units:-Unit calls from their coefficients, to change into SI base units and combine them. It's when they look for these attributes on the Units:-Unit calls themselves that it goes awry.

 

@AmusingYeti I didn't really show how to get faster performance in the case that the working precision for the evaluations of the inner integrand has to be greater than 15 (ie, trunc(evalhf(Digits)) ). It just happens that for your example a higher working precision is not required, it seems.

In fact using the _cuhre method does just as well, if not better, at Digits=15. That's understandable, as its purpose is to avoid the hit of doing iterated single-integrals numerically. (That difference can get more severe as the number of dimensions gets higher.) I illustrate this in the attached worksheet.

So if you encounter an example for which Digits>15 is in fact required so as to keep the roundoff error of the inner integrand down, let us know. It's trickier to set up, but often there is something that can be done even then.

L_sum_int_3.mw

A few guidelines may help:

1) evalf(Int(...)) offers controls for both working precision and target accuracy (tolerance), via its `digits` and `epsilon` options. These are often useful, especially if you need to use increased working precision to obtain only a coarser result.

2) Using an operator for the integrand can help prevent evalf@Int from expanding the integrand (as an expression). It doesn't like a mix of operators and expressions between its nested calls, though, hence the somewhat convoluted use in my code of the `eval` and `unapply` commands in the inner integral form.

3) When using the `epsilon` option in a (nested) multiple-integral case it is usually necessary to ensure that the inner integral's accuracy target is finer than that of the outer integral. This is because the outer scheme needs results that are stable in a few more digits, so that it can ascertain that it is succeeding. The degree to which the inner integral's `epsilon` tolerance needs to be finer (smaller) than the tolerance at the level one-higher can be problem specific -- for this example it seems that it needs to be at least two or more factors of 10 finer.

4) Avoid calling `simplify` with no options on expressions that contain floating-point coefficients (and radicals, or those and fractions of polynomials, etc) because it can introduce coefficients with much higher powers of 10 (and their purported cancelling factors, elsewhere). This can sometimes make roundoff issues more severe, perhaps even leading to a false impression that higher working precision is strictly necessary.

With those guidelines, it can often help to have the mindset that the problem Can Indeed Be Done Quickly, and that one's task is to Find The Way. Sometimes this means that, if it appears to be computing slowly, one must interrupt and adjust. Often the adjustment will involve finding the balance between inner and outer tolerances, or the digits vs epsilon balance at a single level. And repeat...

It can also help to take an instance of the inner integral at a particular value of the variable coming from the outer level, to test whether the inner integral computes quickly enough (given that fixed value). Eg, given a numeric `r2` value, how does the inner integration perform under the devised scheme?
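A rough sketch of guidelines 1) and 3) together, for a generic nested double integral (the integrand and ranges are placeholders, not your example): the inner `epsilon` is set two factors of 10 finer than the outer.

```maple
# Inner integral as an operator in the outer variable r2, with a
# tolerance two orders of magnitude finer than the outer one.
inner := r2 -> evalf(Int(r1 -> exp(-r1^2 - r2^2), 0 .. 1,
                         digits = 15, epsilon = 1e-10)):

# Outer integral, at the coarser target tolerance.
result := evalf(Int(inner, 0 .. 1, digits = 15, epsilon = 1e-8));
```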

I rather doubt that the reported result in the sheet is very accurate, for at least two reasons: 1) it is dependent on using 35 partitions in the boole method, and 2) the raw `simplify` command can introduce very large and very small exponents (base 10) in coefficients of the altered integrand. Matching very many digits of the originally reported result doesn't appear to be an especially good criterion for success, though I'd be glad to hear why someone considers it much more accurate.

What is the degree of accuracy that you want? It's a key detail.

You may have raised Digits to help deal with roundoff error during evaluations of the integrands. But it may be that you don't want nearly such a tight accuracy tolerance as anything like 10^(-30).

Your method using the Quadrature command (from a package) is giving a result like 0.26683977722550432669157299491441 when using Digits=32 and the `boole` method and partitions=35. But I suspect that only about ten decimal digits of that are correct. Either that is unacceptable accuracy for you (in which case you need to take another tack), or it is acceptable. But there are faster ways to get ten correct decimal digits for your given example. So I am also curious as to why you are using the Quadrature command.

acer

@Bendesarts Create it somewhere else other than the Maple installation location! You should not be experimenting with creating/modifying/deleting files there, at risk of breaking something. (Trying all this with OS admin privileges would be even worse.)

Just make a folder somewhere else. Some new folder under your Documents, say. You don't need to adjust libname in order to build the module and save it to the .mla archive, and in fact it's safer if you don't. Just make a new folder, and assign `path` accordingly.

You only need to adjust `libname` if you want to be able to call and test and run the exports of your module after `restart`, when you aren't building the .mla or executing the code that defines the module.

Your process would be like this:

Step 0, create the archive file

path:=cat(myfolder, "/mypackage.mla");
LibraryTools:-Create(path);

Step 1, build the module and save it

mypackage := module()
                ...
             end module:
path:=cat(myfolder, "/mypackage.mla");
LibraryTools:-Save(mypackage, path);

Step 2, restart and test or use it

restart;
libname:=cat(myfolder, "/mypackage.mla"), libname;
with(mypackage);

Did you first execute a call to LibraryTools:-Create to construct that .mla file?

You don't need to do it before each call to LibraryTools:-Save, but you need to have done it once.

The argument you pass could be the same as the second argument to your call to LibraryTools:-Save.

Eg,

fn := cat(libname[1], "/TrigoTransform.mla");

LibraryTools:-Create(fn);

LibraryTools:-Save(TrigoTransform, fn);

acer
