acer

32385 Reputation

29 Badges

19 years, 340 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@steweb You could name your assisting procedures, as I did in the example with `thatproc`. That seems to allow the results from `simplify` to also use that same name where appropriate in the output.

If you only ever need to apply the resulting procedure, then you might even be ok leaving it as is. By which I mean: since the escaped local `f` seems to evaluate to the right procedural body, applying it to some arguments might work ok. It's just the display of the simplified operators that looks goofy.

I noticed that an earlier Post of yours also involved heavy manipulation of operators or procedures. I might be mistaken, and I don't know your motivating task, but it did make me wonder whether you really need procedures at all. Are expressions inadequate for the programming aspects of your task?

You see (and if I am wholly off the mark here then I apologize), sometimes people new to Maple inadvertently latch on to an approach that is not best for their goal. Sometimes this is repetition: what worked for an earlier job should work here too. And sometimes it's mimicry: it was advice given to others, etc. And so we sometimes see these worksheets with a ton of nested, lexically scoping operators, all defined in mixed order, when at the end of the day it's all just passed to `int` as some function call like P(foo,Q(bar..)..) which will evaluate up front to an expression anyway. Experts can have a rough time debugging such monsters, and the less experienced user can flounder. It can sometimes be much more straightforward, when feasible, to use expressions from the get-go. Just a thought.
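As a small generic illustration (not your actual example): `int` accepts both forms, and the expression form is often easier to build up and debug than a nest of operators.

f := x -> x^2:             # operator form
int( f, 0 .. 1 );          # operator calling sequence of int
int( x^2, x = 0 .. 1 );    # expression form, same result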

acer


@kh2n In general, a complex root can be taken as a real root only when its imaginary component is zero (or very small in magnitude relative to, say, the working precision).
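For example (a small sketch of the sort of check I mean): fnormal discards components that are negligible at the current working precision, and simplify with its zero option then drops the explicit 0.*I.

r := 1.234567890 + 0.2e-12*I:
simplify( fnormal(r), 'zero' );   # returns 1.234567890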


@Markiyan Hirnyk Windows 7 Pro 64-bit, Maple 15.01, Standard GUI, Worksheet:

restart:
expr:=1.778895759*Sigma-1831241.099/(76553.66445-.576e-5*Sigma^2)
+6600.970252*Sigma/(76553.66445-.576e-5*Sigma^2)
+.5739576533e-1*Sigma^2/(76553.66445-.576e-5*Sigma^2)
+.4735119433e-4*exp(.7618258041e-2*Sigma)*Sigma^2/exp(.9051395693e-5*Sigma)
/(76553.66445-.576e-5*Sigma^2)
-39332.76308*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))/exp(.9051395693e-5*Sigma)
/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-629324.2088*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)/
(76553.66445-.576e-5*Sigma^2)-1.778895759*exp(.7618258041e-2*Sigma)
*Sigma/exp(.9051395693e-5*Sigma)
-8.220693466*(41.17+1/(1-exp(-160)))*Sigma^2/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-323.1910570+8.220693466*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))
*Sigma^2/exp(.9051395693e-5*Sigma)/(-69.17083220+Sigma)/
(-69.17083220-Sigma)+323.1910568*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)
+39332.76308*(41.17+1/(1-exp(-160)))/(-69.17083220+Sigma)/(-69.17083220-Sigma) = 0:

plot(z->evalf[1000](subs(Sigma=z,lhs(expr))), -69.4..-69.0);

But it doesn't matter whether the plot is refined enough (small granularity) to detect it or not. Once you suspect that it's there, confirming it by other means is not hard.

It doesn't look like Digits must be very high to find it.

restart:

Digits:=20:

expr:=1.778895759*Sigma-1831241.099/(76553.66445-.576e-5*Sigma^2)
+6600.970252*Sigma/(76553.66445-.576e-5*Sigma^2)
+.5739576533e-1*Sigma^2/(76553.66445-.576e-5*Sigma^2)
+.4735119433e-4*exp(.7618258041e-2*Sigma)*Sigma^2/exp(.9051395693e-5*Sigma)
/(76553.66445-.576e-5*Sigma^2)
-39332.76308*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))/exp(.9051395693e-5*Sigma)
/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-629324.2088*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)/
(76553.66445-.576e-5*Sigma^2)-1.778895759*exp(.7618258041e-2*Sigma)
*Sigma/exp(.9051395693e-5*Sigma)
-8.220693466*(41.17+1/(1-exp(-160)))*Sigma^2/(-69.17083220+Sigma)/(-69.17083220-Sigma)
-323.1910570+8.220693466*exp(.7618258041e-2*Sigma)*(41.17+1/(1-exp(-160)))
*Sigma^2/exp(.9051395693e-5*Sigma)/(-69.17083220+Sigma)/
(-69.17083220-Sigma)+323.1910568*exp(.7618258041e-2*Sigma)/exp(.9051395693e-5*Sigma)
+39332.76308*(41.17+1/(1-exp(-160)))/(-69.17083220+Sigma)/(-69.17083220-Sigma) = 0:

sol1:=fsolve(lhs(expr),Sigma=-100-10*I..100+10*I,complex);

                 -69.170832173780333987 + 0. I

Raising Digits very high serves to corroborate that it truly is a real-valued root.
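For instance (reusing expr from above): repeat the complex root-search at much higher precision, and check that the imaginary part of the result stays at zero.

Digits := 200:
fsolve( lhs(expr), Sigma = -100-10*I .. 100+10*I, complex );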

acer


@DJKeenan

If this is causing you huge grief, then you could of course write your own gradient procedure which does numeric estimation (differencing, say) without getting too worried about precision (or accuracy).
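A minimal sketch of such a thing (hypothetical name; plain central differencing at a fixed step, with no precision management at all):

mygrad := proc(F, pt::list, h::numeric)
    local i, n;
    n := nops(pt);
    [ seq( ( F( op(subsop(i = pt[i]+h, pt)) )
           - F( op(subsop(i = pt[i]-h, pt)) ) )/(2*h), i = 1 .. n ) ];
end proc:

# e.g.  mygrad( (x,y) -> x^2*y, [1.0, 2.0], 1e-6 );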

I think I see what you mean: why isn't there a relationship between Optimization's `optimality` (or other) tolerances and the precision demanded for computing the gradient? Sounds like a good question, though in practice the answer may depend on quantitative qualities of the specific objective function.

Optimization is invoking `fdiff` without supplying fdiff's workprec=n option, thus allowing its default behaviour. But that optional argument just controls the factor by which fdiff might add even more guard digits -- the lowest value is the default, workprec=1.

Note that Optimization has already set Digits=15 and so that is what fdiff sees as the inbound value of Digits, rather than the original session default value of Digits=10. When Digits comes in at 10, then fdiff raises it to 17.
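One can watch that happen with a small probe (a sketch, assuming fdiff's procedure-form calling sequence fdiff(f, [1], [pt])): the objective simply reports the Digits setting under which it gets called.

restart:
probe := proc(x) print( 'Digits' = Digits ); x^2 end proc:
Digits := 10:
fdiff( probe, [1], [2.3] );   # the printed values show the raised Digits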

Glancing over showstat(fdiff) it looks like it might even add guard digits on top of whatever workprec=n specifies (I'm not sure).

I haven't looked very hard at fdiff, but writing a numeric routine which tries to handle as many kinds of examples as possible, with as little intervention by the user to tweak optional controls (tolerances, etc), is difficult.

There are other known instances where nested calls to Library routines cause a cascade, each one trying to be clever and augment Digits by guard digits. Some routines like dsolve/numeric and fsolve seem to have some environment variables to help prevent this, by allowing them each to test whether they have been called from within one another. (Something about this mechanism makes me uneasy. What if there are ten such routines, with different rules for one another?)

One reason this is a big deal is that, for things like evalf/Int, the value of Digits can dictate whether evalhf or double-precision external-calling gets used. So keeping Digits at 15 can sometimes mean the difference between good and bad performance. (This might mean that it's at least problematic for fdiff to raise Digits from session default value 10 to 17.)
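As a small illustration of that threshold (a sketch; on typical hardware the cutoff sits at evalhf(Digits), about 15):

Digits := 15:  evalf( Int( exp(-x^2), x = 0 .. 1 ) );   # eligible for fast hardware-precision quadrature
Digits := 30:  evalf( Int( exp(-x^2), x = 0 .. 1 ) );   # forced to slower software floats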

It sounds tricky to get this right and make as many examples as possible work best (and fastest, too). Maybe Optimization should temporarily reduce Digits (e.g., invert what fdiff will itself do! Invert its formula, to set Digits=8 when Digits=15. Sheesh). Or maybe the "best" thing is to expose everything for user control, allowing a new optional argument like digits=value and fdiffoptions=[...]. Some people are bound to find that too onerous.

But that's the rub. Since every numeric method can usually be broken by some tricky example, numerically, then exposure of all tolerances and controls is often the Best Way to go. Trying to handle it all, magically and invisibly, is a hard game.

acer

Things like this below are worrisome, indicating that codegen[GRADIENT] may be problematic for some innocuous-looking objective procedures.

> restart:
> f := proc(x)
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # ok

                          proc(x) return 2*x end proc;

> evalhf( %(4.5) );

                                     9.


> restart:
> f := proc(x)
>     print(f);
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # ok

                    proc(x) print(f); return 2*x; end proc;

> evalhf( %(4.5) );

                             4.50000000000000000
                                     9.

> restart:
> f := proc(x)
>     [];  # just another no-op, you'd imagine
>     x^2;
> end proc:

> codegen[GRADIENT](f);  # jeepers

                          proc(x) return  end proc;

> evalhf( %(4.5) );

                              Float(undefined)

The last example was an objective that itself was non-evalhfable. But the following example is more troublesome, and produces a wrong numeric result for the gradient.

> restart:

> f := proc(x)
>     5.7;  # not an assignment, but also not a return value!
>     x^2;
> end proc:

> codegen[GRADIENT](f);
                           proc(x) return 0 end proc

> %(4.5);
                                       0

All this means that `fdiff` may sometimes be more attractive as a means to get a useful gradient (even if numerical differentiation is "frowned upon" on stability grounds).

The procedure in the top-post could be changed to automatically handle an (indeterminate) variable number of arguments (to the original objective). It could also be altered to supply optional arguments to `fdiff`, such as the 'workprec'=n argument which controls the working precision that `fdiff` sets for itself internally. This would help in the case that one wanted `fdiff` not to raise Digits too high (internally, temporarily) when calling the objective procedure for its computation of finite differences.
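A sketch of those two adjustments together (hypothetical names, assuming fdiff's procedure-form calling sequence): build a gradient procedure for an objective F of n arguments, forwarding any supplied options such as 'workprec' along to `fdiff`.

> gradmaker := proc(F, n::posint, fdiffopts::list := [])
>     proc()
>         local pt, i;
>         pt := [ args ];
>         [ seq( fdiff( F, [i], pt, op(fdiffopts) ), i = 1 .. n ) ];
>     end proc;
> end proc:

> G := gradmaker( (x,y) -> x*exp(y), 2, ['workprec' = 1] ):
> G( 1.0, 0.5 );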

acer

@PatrickT It's great that you have a workaround. (Sorry for not mentioning the change I made to the quoting, too.)

I did not know that the Standard driver's `plotoptions` height and width options worked at all, until trying them today without the `pt` units appended. I suspect that this might be the secret that unlocks their usefulness for quite a few people (since it means the functionality is not 100% broken, just very much obscured).
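For reference, the sort of call that worked for me (a sketch; the driver and filename here are just placeholders):

plotsetup( ps, plotoutput = "myplot.ps",
           plotoptions = "height=300,width=400" );   # note: no `pt` appended
plot( sin(x), x = 0 .. 2*Pi );
plotsetup( default );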

I have taken the great liberty of branching your followup into a post in its own right. I agree with you that it is a very important topic, and should be addressed with great seriousness. I expect that several people will add their own commentary to it, so perhaps it's best to have it be separate.

acer


The expression is not real-valued for -3220 < cp < 3220, and moreover the imaginary component is nonzero on the region 0 < cp < 3220, 0 < x < 14. So the equation expression = 0 does not hold in that region, and thus the implicit plot is empty there.

expr:=tan(x*sqrt(cp^2/3220^2-1))*(cp^2/3220^2-2)^2
      +4*tan(x*sqrt(cp^2/6450^2-1))*sqrt((cp^2/3220^2-1)*(cp^2/6450^2-1));

plot3d(Im(expr),x = 0 .. 14, cp = 0 .. 7000,axes=box);

acer


@PatrickT How about changing one line to the following (so that the float value R[j-1] gets truncated to an integer before being used as an index),

for j from 2 to 51 do R[j]:= X[trunc(R[j-1])][j] end do:
