These are replies submitted by acer

@fbackelj Right. This side discussion about Normalizer concerns only the possible forms of valid exact results for your example. I hope that no other readers accidentally get the idea that I'm suggesting something necessary to get a valid exact result equivalent to exact 120 for your example.

For your example:

1) If you leave it as the default Normalizer=eval(normal) then LinearAlgebra:-LUDecomposition detects that and examines the Matrix. It finds the trig terms in your example Matrix and assigns Normalizer:=(x -> normal(simplify(x, 'trig'))). This results in a valid but more complicated answer, with the virtue that Testzero is stronger and less likely to use hidden zeros as pivots, because it can reduce the intermediate expressions involving trig calls. That virtue may not bear on your example Matrix.

showstat(LinearAlgebra:-LUDecomposition,241..242); # Maple 2015.0

 241     if eval(Normalizer) = eval(normal) then
 242       Normalizer := DeduceNormalizer(mU)
         end if;

2) If you force the assignment Normalizer:=(x->normal(x)) then LinearAlgebra:-LUDecomposition detects that it's been assigned away from the default, since now evalb(eval(Normalizer) = eval(normal)) returns false, and it respects your choice and does not automatically examine whether it should make Normalizer something stronger. This happens to result in a less complicated valid result for your particular example Matrix.

As I mentioned, notice that these are not identical,

evalb( eval(x->normal(x)) = eval(normal) );

                                     false

If in situation 2) above you don't also reassign to Testzero then Testzero will only be as strong as your forced choice of Normalizer. And of course normal is not generally suitable for detecting that expressions containing trig calls may be zero. So it's dubious to assign to Normalizer without considering whether Testzero remains strong enough.
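If one really wants to go that route, a minimal sketch of doing both together might look like this (using the stronger Testzero suggested further down in this thread):

restart;
Normalizer := x -> normal(x):                        # only as strong as normal, yet distinct from it
Testzero := proc(z) evalb(simplify(z)=0); end proc:  # compensating, stronger zero test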

So I did something subtle in my followup comment. I dumbed down Normalizer so that it was only as strong as normal, without actually being normal itself. I was interested in seeing whether ConditionNumber would produce a different or more compact result, so I deliberately disabled the automatic promotion of Normalizer which LUDecomposition performs when Normalizer actually is identically normal.

Within LUDecomposition (for the exact symbolic case) Testzero is used to test whether candidate pivots are nonzero, while Normalizer is used during row-reduction to keep down expression-swell.

 

@fbackelj You still sound like you think it is not working "fine" in the exact case, by default. Such a claim, which you originally put forth in your Question, is not true.

In fact your posted example did not exhibit a bug in ConditionNumber. The exact result you got for n=5 is, well, exactly equal to 120.

And ConditionNumber is returning a valid exact result that can be simplified (somehow) to exact 120, whether one uses the default Normalizer or not.

Your Question did exhibit the fact that the simplify command is not powerful enough on its own to reduce every constant trig expression to a rational (when it happens to equal one).

I changed Normalizer only to show that a more compact exact result could be attained. That doesn't affect the fact that the default answer is also valid in the exact case. In both cases an expression is returned which involves constant trig terms, and which some form of exact simplification can reduce to 120.

Sorry if you found my remarks about Normalizer unclear.

In LinearAlgebra one can determine that MatrixInverse and LUDecomposition call an internal DeduceNormalizer routine.

That DeduceNormalizer routine checks whether eval(Normalizer)=eval(normal) and if so it examines the types of the Matrix entries and tries to set Normalizer appropriately. This happens automatically. And when it happens then Testzero is strengthened by virtue of the fact that by default Testzero calls Normalizer. The old linalg package does not do this. The old linalg package will produce some incorrect results if it uses a hidden zero (not detectable by just normal) as an elimination pivot.
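In a fresh session one can confirm that the default condition which DeduceNormalizer tests for does hold (a quick check, assuming no prior assignment to Normalizer):

restart;
evalb( eval(Normalizer) = eval(normal) );   # true by default, so automatic promotion can occur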

When I set Normalizer:=x->normal(x) then I am making Normalizer only as strong as normal, but (and this is key) it is no longer true that evalb(eval(Normalizer)=eval(normal)) will return true. So in doing that I have disabled the safer computation mode of LinearAlgebra. For your example that may not affect correctness, as a hidden-zero pivot situation may not have arisen, but it does affect the size of the exact result. That is all. I was trying to say that if one does (unwisely!) forcibly dumb down Normalizer in order to obtain such more compact exact results then one had better (!) forcibly strengthen Testzero so that it does not stay as weak as normal.

In other words: If I leave Normalizer=eval(normal) then LinearAlgebra will detect that default situation and attempt to strengthen Normalizer (and Testzero, in consequence) to protect against accidentally using a hidden zero as an elimination pivot. But if I assign anything else to Normalizer then LinearAlgebra will recognize that and use my assigned choice, in which case I should prudently ensure that Testzero is adequately strong.

Note that x->normal(x) is not the very same procedure as normal. They have different addresses and are distinct under evalb. They happen to do the same thing, of course.
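One way to see that they are distinct procedures is to compare their addresses, using the kernel's addressof command:

evalb( addressof(eval(x->normal(x))) = addressof(eval(normal)) );

                                     false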

Are you just trying to test whether any entry of a Matrix is negative? Is each of the entries constant?

Or do you need to detect the minus sign inside an entry like, say, exp(-x) ?

acer

What do you see if you lprint it?

acer

@akiel123 Did you intend Pi/4 instead of 90 inside your calls to tan? Maple's trig functions work with radians.

Whether 90 or Pi/4, some of your constraints are evaluating to impossible inequalities, such as 0 <= -6 . You'll need to track those down and correct them.
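For example, the radian interpretation can be checked directly:

tan(Pi/4);        # 1, i.e. the tangent of 45 degrees expressed in radians
evalf(tan(90));   # the tangent of 90 radians, probably not what was intended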

@akiel123 You defined it as minCorner but you call it as MinCorner.

Also, you can get a smaller form by forcing use of normal as the Normalizer (in say MatrixInverse).

Note that by itself this may produce an invalid result if a "hidden zero" is used as a pivot during elimination. LinearAlgebra will try to use a stronger zero detection (Testzero) and Normalizer if it sees that Normalizer is still eval(normal): for example t->simplify(t,trig) if it detects trig terms, or even simplify itself. So it might be safer to also force a stronger and "suitable" Testzero if one is going to forcibly dumb down Normalizer. I have seen examples involving trig terms where linalg produces wrong inverse results for just these reasons.

 

restart;

with( LinearAlgebra ):

n := 5;

                                      5

M := [ seq( cos( Pi*j/n ), j=0..n ) ]:

V := VandermondeMatrix( M ):

Normalizer := t -> normal(t):

ConditionNumber(V);

6*(cos((1/5)*Pi)^2+cos((2/5)*Pi)^2)/((cos((2/5)*Pi)+1)*(cos((1/5)*Pi)^2*cos((2/5)*Pi)-cos((1/5)*Pi)^2-cos((2/5)*Pi)+1))-6*(cos((2/5)*Pi)^2+1)/((cos((1/5)*Pi)^2-1)*(cos((1/5)*Pi)+cos((2/5)*Pi))*cos((1/5)*Pi)*(-cos((2/5)*Pi)+cos((1/5)*Pi)))-6*(cos((1/5)*Pi)^2+1)/(cos((2/5)*Pi)*(cos((1/5)*Pi)*cos((2/5)*Pi)+cos((2/5)*Pi)^2+cos((1/5)*Pi)+cos((2/5)*Pi))*(-cos((2/5)*Pi)+cos((1/5)*Pi))*(cos((2/5)*Pi)-1))

simplify(%);

120

 

 

Download condnormal.mw

Perhaps a somewhat safer Testzero might be,

Testzero:=proc(z) evalb(simplify(z)=0); end proc:

in the case that one had set Normalizer as I did above. By default Testzero simply checks evalb(Normalizer(z) = 0) for input z.
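That default behaviour amounts to something like this sketch (not the verbatim library source):

Testzero := proc(z) evalb( Normalizer(z) = 0 ); end proc: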

@Markiyan Hirnyk 


kernelopts(version);

`Maple 2015.2, X86 64 LINUX, Nov 13 2015, Build ID 1087698`

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

length(sprintf("%a",ans));

6582

op([2,10],ans);

15720304037068800*cos(1)^14*exp(2)

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

length(sprintf("%a",ans));

6581

op([2,10],ans);

-11523037593600*exp(2)*cos(1)^8*cos(4)

 


Download resmore.mw

@Markiyan Hirnyk The exact results differ in form from session to session. At default working precision of Digits=10 they evaluate to slightly different floats due to roundoff, but at higher working precision they evaluate to results which agree to more decimal figures.

It is just as Preben has said.


kernelopts(version);

`Maple 2015.2, X86 64 LINUX, Nov 13 2015, Build ID 1087698`

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

op(0,ans); # exact symbolic integration succeeded

`*`

evalf(ans);

0.5951084845e-3

evalf[100](ans): evalf[10](%);

0.5951230437e-3

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

op(0,ans); # exact symbolic integration succeeded

`*`

evalf(ans);

0.5951112410e-3

evalf[100](ans): evalf[10](%);

0.5951230437e-3

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

op(0,ans); # exact symbolic integration succeeded

`*`

evalf(ans);

0.5951129853e-3

evalf[100](ans): evalf[10](%);

0.5951230437e-3

restart;

ans:=int((exp(x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(-1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))-exp(-x)*(4420*cos(4)*sin(4)-544*cos(4)^2+148147*exp(1)-4225*cos(4)-215203)/(71825*exp(1)-71825*exp(-1))+(32/4225)*cos(4*x)^2+(1/71825)*(4225+(2210*x-6630)*sin(4*x))*cos(4*x)+x^2+8434/4225)^2, x = 0 .. 1):

op(0,ans); # exact symbolic integration succeeded

`*`

evalf(ans);

0.5950996605e-3

evalf[100](ans): evalf[10](%);

0.5951230437e-3

 


Download res.mw

@akiel123 If you use the big green arrow in the Mapleprimes editor to upload your worksheet then we could work on it directly, thanks. It's quite difficult to make suggestions based on just an image of your code (at any resolution).

Also, you can now see that there is no real need to have the procedure OptimizeSpring, since all it does is call Maximize. You might as well just call Maximize directly.

Your procedure is very strangely written.

I can't tell whether you are trying to use function calls as if they were local variables (via remember tables), or whether you are using a nonstandard syntax for operator definition.

What do you think that call to d__plP is supposed to be, used on the left hand side of an assignment statement inside that procedure OptimizeSpring?

Whichever it is, you defined d__plP as having 4 parameters, but then call it with only 2 arguments when using it on the right-hand side of the assignment to d__min(...). You make this same mistake in several places.

Since you didn't get an error about insufficient arguments, we can see that Maple thinks you've assigned to function calls (remember table assignments), rather than that you've defined operators.
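A small illustration with a hypothetical name f shows the distinction: assigning to a function call merely stores a remember-table entry for those exact arguments.

restart;
f(1,2) := 10:   # remember-table assignment, not an operator definition
f(1,2);         # 10, the remembered value
f(3,4);         # returns unevaluated f(3,4), since no general rule was defined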

Define operators like this:

d__plP := (P__1x,P__1y,P__2x,P__2y) -> some_expression_like_you_have_above;

Note that it's not very efficient to have your procedure OptimizeSpring redefine the same operators each time it's called. It's not clear whether you really want and need operators at this point. Please clarify your intent.

Or if you just wanted to use function calls as if you had local variables (which is not a very usual thing to do) then make sure that 1) you declare your locals, and 2) the number of arguments matches across all instances of each call. Even better would be to not use function calls like local variables unless you completely understand all these aspects of usage. Use simple names, or indexed names, if you're just trying to use local variables.
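For contrast, here is a sketch of an operator definition called with all of its declared arguments (the body is just an illustrative placeholder, not your actual formula):

d__plP := (P__1x,P__1y,P__2x,P__2y) -> sqrt((P__2x-P__1x)^2+(P__2y-P__1y)^2):
d__plP(0,0,3,4);   # all four arguments supplied, as declared

                                      5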

acer

@software_c This appears to be merely a non-general variant of Joe's Answer.

@Carl Love It's some mysterious part of the GUI plot renderer that figures out how to draw the surface, given an ISOSURFACE structure. I don't know what interpolating mechanism it uses. I have wondered whether in principle it could be converted to some MESH.

There was another beast for which I have a recollection that transforming might not work. Hmm. I suspect it may have been 3D contour plot structures. I recall having difficulty doing a 3D->2D projection transformation on such (and possibly also in getting at the data otherwise).

@Carl Love An interesting example of "any two- or three-dimensional plot" might be,

plots:-implicitplot3d(r=(1.3)^x*sin(y), r=0.1..5, x=-1..2*Pi, y=0..Pi, coords=spherical);

Why not upload the worksheet that contains the plot, and we could then see what might be possible?

acer
