MaPal93

MaplePrimes Activity


These are replies submitted by MaPal93

@dharr thanks.

1. I understand now. In the file I attach under point 2, I got rid of gamma^9*sigma__d^9 but cannot get rid of the rest...

2. What am I missing in the single-equation case: Case_with_radicals.mw? And how do I extend this to a system of two equations (the two at the bottom of the script)?

@dharr thanks a lot. I have two follow-up questions:

  1. In Case_1.mw (your script), dividing by sigma__d1^4 does not seem to change the difference away from 0 (see the lines highlighted in yellow). Why doesn't it matter?
  2. In Case_with_radicals.mw I tried to apply your script to two much simpler expressions, but with a (simple) radical... the matrix of exponents gets weird. I suspect that I need to divide the radical out before that step, but I am not sure (a minimal sketch of what I mean follows below). Would you please take a look?
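Here is what I mean, as a minimal sketch on a hypothetical term (not taken from Case_with_radicals.mw): under positivity assumptions the radical collapses to a monomial, so every term keeps integer exponents and the exponent matrix stays well defined.

term := sqrt(gamma^2*sigma__d^2*sigma__v^2):
simplify(term) assuming gamma > 0, sigma__d > 0, sigma__v > 0;
                                  # gamma*sigma__d*sigma__v -- a plain monomial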

Thanks a lot.
[I confirm that p3_complex is fine (see Case_2.mw) but that p3_mostcomplex requires one more nondim var (see Case_3.mw). However, Case 3 simply collapses to Case 2 if the "extra" nondim var is set to 1, exactly as I expected.]
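As a sanity check of that collapse, something along these lines (p_case2, p_case3, and extra are hypothetical names for the two nondimensionalized polynomials and the extra nondim variable):

simplify(eval(p_case3, extra = 1) - p_case2);   # expect 0 if Case 3 collapses to Case 2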

@dharr thanks a lot for all the details!

I added:

allvals := map(simplify, [allvalues(Lambda)]):
plot3d(%,Gamma=0..10,Psi=0..10);


at the bottom of your script, but it crashes during execution (after about 30 s). I have Maple 2023. Moreover, I get this prompt when trying to save the file (before I even try to execute):

[screenshot of the save prompt not reproduced]
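For reference, a lighter variant of the same plot that I could try instead (assuming Lambda, Gamma, and Psi as defined in your script): build one surface per root branch at a coarser grid and display them together.

allvals := map(simplify, [allvalues(Lambda)]):    # the root branches of Lambda
plots:-display(
    map(v -> plot3d(v, Gamma = 0 .. 10, Psi = 0 .. 10, grid = [25, 25]), allvals)
);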

Separately, do you think two non-dim variables will also be enough for the last two degree-10 polynomials in non-dimensionalization.mw?

@dharr thanks a lot!

I included two simpler examples for a 4th-degree polynomial and a 6th-degree polynomial, respectively. The examples are from @dharr.

@mmcdara @Kitonum can I assume that Maple is always (consistently) sensitive to the sign of terms under square roots?

In other words, whenever Maple outputs a result (for whatever computation I ask it to do, not just limit) that does not include signum(), should I conclude that (A) Maple didn't encounter any term of ambiguous sign, or (B) Maple did encounter ambiguous terms but implicitly assumed their signs, say positive, so that its final output is without signum()?
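To make the question concrete, a tiny generic illustration (plain x, not one of my expressions) of the behaviour I am asking about: without an assumption the sign ambiguity stays explicit in the output, with an assumption it disappears.

simplify(sqrt(x^2));                  # csgn(x)*x -- sign kept explicit
simplify(sqrt(x^2)) assuming x > 0;   # x -- ambiguity resolved by the assumption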

@mmcdara very interesting...

I ran your worksheet with Maple 2023. Perhaps Maple 2023 thinks my expression (b) is simpler than yours (a):

a := 2*`α__1`^4*`λ__1`^2*`σ__d`^4+4*`α__1`^2*`α__1s`^2*`λ__1`*`λ__2`*`σ__d`^4+4*`α__1`^2*`α__2s`^2*`λ__1`^2*`σ__d`^4+8*`α__1`^2*`β__1`^2*`λ__1`^2*`σ__ν`^2*`σ__d`^2+8*`α__1`*`α__1s`*`α__2`*`α__2s`*`λ__1`*`λ__2`*`σ__d`^4+16*`α__1`*`α__1s`*`β__1`*`β__2`*`λ__1`*`λ__2`*`σ__ν`^2*`σ__d`^2+2*`α__1s`^4*`λ__2`^2*`σ__d`^4+4*`α__1s`^2*`α__2`^2*`λ__2`^2*`σ__d`^4+8*`α__1s`^2*`β__2`^2*`λ__2`^2*`σ__ν`^2*`σ__d`^2+2*`α__2`^4*`λ__2`^2*`σ__d`^4+4*`α__2`^2*`α__2s`^2*`λ__1`*`λ__2`*`σ__d`^4+8*`α__2`^2*`β__2`^2*`λ__2`^2*`σ__ν`^2*`σ__d`^2+16*`α__2`*`α__2s`*`β__1`*`β__2`*`λ__1`*`λ__2`*`σ__ν`^2*`σ__d`^2+2*`α__2s`^4*`λ__1`^2*`σ__d`^4+8*`α__2s`^2*`β__1`^2*`λ__1`^2*`σ__ν`^2*`σ__d`^2+2*`α__3`^4*`λ__3`^2*`σ__d3`^4+8*`α__3`^2*`β__3`^2*`λ__3`^2*`σ__ν`^2*`σ__d3`^2+8*`β__1`^4*`λ__1`^2*`σ__ν`^4+16*`β__1`^2*`β__2`^2*`λ__1`*`λ__2`*`σ__ν`^4+16*`β__1`^2*`β__3`^2*`λ__1`*`λ__3`*`σ__ν`^4+8*`β__2`^4*`λ__2`^2*`σ__ν`^4+16*`β__2`^2*`β__3`^2*`λ__2`*`λ__3`*`σ__ν`^4+8*`β__3`^4*`λ__3`^2*`σ__ν`^4+4*`α__1`^3*`λ__1`^2*`σ__d`^4+4*`α__1`*`α__1s`^2*`λ__1`*`λ__2`*`σ__d`^4+4*`α__1`*`α__1s`*`α__2s`*`λ__1`*`λ__2`*`σ__d`^4+4*`α__1`*`α__2s`^2*`λ__1`^2*`σ__d`^4+8*`α__1`*`β__1`^2*`λ__1`^2*`σ__ν`^2*`σ__d`^2+4*`α__1s`^2*`α__2`*`λ__2`^2*`σ__d`^4+4*`α__1s`*`α__2`*`α__2s`*`λ__1`*`λ__2`*`σ__d`^4+8*`α__1s`*`β__1`*`β__2`*`λ__1`*`λ__2`*`σ__ν`^2*`σ__d`^2+4*`α__2`^3*`λ__2`^2*`σ__d`^4+4*`α__2`*`α__2s`^2*`λ__1`*`λ__2`*`σ__d`^4+8*`α__2`*`β__2`^2*`λ__2`^2*`σ__ν`^2*`σ__d`^2+8*`α__2s`*`β__1`*`β__2`*`λ__1`*`λ__2`*`σ__ν`^2*`σ__d`^2+4*`α__3`^3*`λ__3`^2*`σ__d3`^4+8*`α__3`*`β__3`^2*`λ__3`^2*`σ__ν`^2*`σ__d3`^2-4*`α__1`^2*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2+2*`α__1`^2*`λ__1`^2*`σ__d`^4-4*`α__1`*`α__1s`*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2-4*`α__1`*`α__1s`*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2-4*`α__1s`^2*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2+`α__1s`^2*`λ__2`^2*`σ__d`^4+2*`α__1s`*`α__2s`*`λ__1`*`λ__2`*`σ__d`^4-4*`α__2`^2*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2+2*`α__2`^2*`λ__2`^2*`σ__d`^4-4*`α__2`*`α__2s`*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2-4*`α__2`*`α__2s`*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2-4*`α__2s`^2*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2+`α__2s`^2*`λ__1`^2*`σ__d`^4-8*`α__3`^2*`β__3`*`λ__3`*`σ__ν`^2*`σ__d3`^2+2*`α__3`^2*`λ__3`^2*`σ__d3`^4-8*`β__1`^3*`λ__1`*`σ__ν`^4-8*`β__1`^2*`β__2`*`λ__1`*`σ__ν`^4-16*`β__1`^2*`β__3`*`λ__1`*`σ__ν`^4+2*`β__1`^2*`λ__1`^2*`σ__ν`^2*`σ__d`^2-8*`β__1`*`β__2`^2*`λ__2`*`σ__ν`^4-8*`β__1`*`β__3`^2*`λ__3`*`σ__ν`^4-8*`β__2`^3*`λ__2`*`σ__ν`^4-16*`β__2`^2*`β__3`*`λ__2`*`σ__ν`^4+2*`β__2`^2*`λ__2`^2*`σ__ν`^2*`σ__d`^2-8*`β__2`*`β__3`^2*`λ__3`*`σ__ν`^4-16*`β__3`^3*`λ__3`*`σ__ν`^4+2*`β__3`^2*`λ__3`^2*`σ__ν`^2*`σ__d3`^2-2*`α__1`*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2-2*`α__1s`*`β__1`*`λ__1`*`σ__ν`^2*`σ__d`^2-2*`α__2`*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2-2*`α__2s`*`β__2`*`λ__2`*`σ__ν`^2*`σ__d`^2-4*`α__3`*`β__3`*`λ__3`*`σ__ν`^2*`σ__d3`^2+`α__1`^2*`σ__ν`^2*`σ__d`^2+`α__1s`^2*`σ__ν`^2*`σ__d`^2+`α__2`^2*`σ__ν`^2*`σ__d`^2+`α__2s`^2*`σ__ν`^2*`σ__d`^2+2*`α__3`^2*`σ__ν`^2*`σ__d3`^2+3*`β__1`^2*`σ__ν`^4+2*`β__1`*`β__2`*`σ__ν`^4+8*`β__1`*`β__3`*`σ__ν`^4+3*`β__2`^2*`σ__ν`^4+8*`β__2`*`β__3`*`σ__ν`^4+8*`β__3`^2*`σ__ν`^4

8*beta__1^4*`σ__ν`^4*lambda__1^2+16*beta__1^2*beta__2^2*`σ__ν`^4*lambda__1*lambda__2+16*beta__1^2*beta__3^2*`σ__ν`^4*lambda__1*lambda__3+8*beta__1^2*`σ__ν`^2*alpha__1^2*lambda__1^2*sigma__d^2+8*beta__1^2*`σ__ν`^2*alpha__2s^2*lambda__1^2*sigma__d^2+16*beta__1*beta__2*`σ__ν`^2*alpha__1*alpha__1s*lambda__1*lambda__2*sigma__d^2+16*beta__1*beta__2*`σ__ν`^2*alpha__2*alpha__2s*lambda__1*lambda__2*sigma__d^2+8*beta__2^4*`σ__ν`^4*lambda__2^2+16*beta__2^2*beta__3^2*`σ__ν`^4*lambda__2*lambda__3+8*beta__2^2*`σ__ν`^2*alpha__1s^2*lambda__2^2*sigma__d^2+8*beta__2^2*`σ__ν`^2*alpha__2^2*lambda__2^2*sigma__d^2+8*beta__3^4*`σ__ν`^4*lambda__3^2+8*beta__3^2*`σ__ν`^2*alpha__3^2*lambda__3^2*sigma__d3^2+2*alpha__1^4*lambda__1^2*sigma__d^4+4*alpha__1^2*alpha__1s^2*lambda__1*lambda__2*sigma__d^4+4*alpha__1^2*alpha__2s^2*lambda__1^2*sigma__d^4+8*alpha__1*alpha__1s*alpha__2*alpha__2s*lambda__1*lambda__2*sigma__d^4+2*alpha__1s^4*lambda__2^2*sigma__d^4+4*alpha__1s^2*alpha__2^2*lambda__2^2*sigma__d^4+2*alpha__2^4*lambda__2^2*sigma__d^4+4*alpha__2^2*alpha__2s^2*lambda__1*lambda__2*sigma__d^4+2*alpha__2s^4*lambda__1^2*sigma__d^4+2*alpha__3^4*lambda__3^2*sigma__d3^4+8*beta__1^2*`σ__ν`^2*alpha__1*lambda__1^2*sigma__d^2+8*beta__1*beta__2*`σ__ν`^2*alpha__1s*lambda__1*lambda__2*sigma__d^2+8*beta__1*beta__2*`σ__ν`^2*alpha__2s*lambda__1*lambda__2*sigma__d^2+8*beta__2^2*`σ__ν`^2*alpha__2*lambda__2^2*sigma__d^2+8*beta__3^2*`σ__ν`^2*alpha__3*lambda__3^2*sigma__d3^2+4*alpha__1^3*lambda__1^2*sigma__d^4+4*alpha__1*alpha__1s^2*lambda__1*lambda__2*sigma__d^4+4*alpha__1*alpha__1s*alpha__2s*lambda__1*lambda__2*sigma__d^4+4*alpha__1*alpha__2s^2*lambda__1^2*sigma__d^4+4*alpha__1s^2*alpha__2*lambda__2^2*sigma__d^4+4*alpha__1s*alpha__2*alpha__2s*lambda__1*lambda__2*sigma__d^4+4*alpha__2^3*lambda__2^2*sigma__d^4+4*alpha__2*alpha__2s^2*lambda__1*lambda__2*sigma__d^4+4*alpha__3^3*lambda__3^2*sigma__d3^4-8*beta__1^3*`σ__ν`^4*lambda__1-8*beta__1^2*beta__2*`σ__ν`^4*lambda__1-16*beta__1^2*beta__3*`σ__ν`^4*lambda__1+2*beta__1^2*`σ__ν`^2*lambda__1^2*sigma__d^2-8*beta__1*beta__2^2*`σ__ν`^4*lambda__2-8*beta__1*beta__3^2*`σ__ν`^4*lambda__3-4*beta__1*`σ__ν`^2*alpha__1^2*lambda__1*sigma__d^2-4*beta__1*`σ__ν`^2*alpha__1*alpha__1s*lambda__1*sigma__d^2-4*beta__1*`σ__ν`^2*alpha__2*alpha__2s*lambda__1*sigma__d^2-4*beta__1*`σ__ν`^2*alpha__2s^2*lambda__1*sigma__d^2-8*beta__2^3*`σ__ν`^4*lambda__2-16*beta__2^2*beta__3*`σ__ν`^4*lambda__2+2*beta__2^2*`σ__ν`^2*lambda__2^2*sigma__d^2-8*beta__2*beta__3^2*`σ__ν`^4*lambda__3-4*beta__2*`σ__ν`^2*alpha__1*alpha__1s*lambda__2*sigma__d^2-4*beta__2*`σ__ν`^2*alpha__1s^2*lambda__2*sigma__d^2-4*beta__2*`σ__ν`^2*alpha__2^2*lambda__2*sigma__d^2-4*beta__2*`σ__ν`^2*alpha__2*alpha__2s*lambda__2*sigma__d^2-16*beta__3^3*`σ__ν`^4*lambda__3+2*beta__3^2*`σ__ν`^2*lambda__3^2*sigma__d3^2-8*beta__3*`σ__ν`^2*alpha__3^2*lambda__3*sigma__d3^2+2*alpha__1^2*lambda__1^2*sigma__d^4+alpha__1s^2*lambda__2^2*sigma__d^4+2*alpha__1s*alpha__2s*lambda__1*lambda__2*sigma__d^4+2*alpha__2^2*lambda__2^2*sigma__d^4+alpha__2s^2*lambda__1^2*sigma__d^4+2*alpha__3^2*lambda__3^2*sigma__d3^4-2*beta__1*`σ__ν`^2*alpha__1*lambda__1*sigma__d^2-2*beta__1*`σ__ν`^2*alpha__1s*lambda__1*sigma__d^2-2*beta__2*`σ__ν`^2*alpha__2*lambda__2*sigma__d^2-2*beta__2*`σ__ν`^2*alpha__2s*lambda__2*sigma__d^2-4*beta__3*`σ__ν`^2*alpha__3*lambda__3*sigma__d3^2+3*beta__1^2*`σ__ν`^4+2*beta__1*beta__2*`σ__ν`^4+8*beta__1*beta__3*`σ__ν`^4+3*beta__2^2*`σ__ν`^4+8*beta__2*beta__3*`σ__ν`^4+8*beta__3^2*`σ__ν`^4+`σ__ν`^2*alpha__1^2*sigma__d^2+`σ__ν`^2*alpha__1s^2*sigma__d^2+`σ__ν`^2*alpha__2^2*sigma__d^2+
`σ__ν`^2*alpha__2s^2*sigma__d^2+2*`σ__ν`^2*alpha__3^2*sigma__d3^2

(1)

b := (8*`λ__1`^2*`β__1`^4-8*`β__1`^2*(-2*`β__2`^2*`λ__2`-2*`β__3`^2*`λ__3`+`β__1`+`β__2`+2*`β__3`)*`λ__1`+8*`λ__2`^2*`β__2`^4-8*`β__2`^2*(-2*`β__3`^2*`λ__3`+`β__1`+`β__2`+2*`β__3`)*`λ__2`+3*`β__1`^2+(-8*`β__3`^2*`λ__3`+2*`β__2`+8*`β__3`)*`β__1`+3*`β__2`^2+(-8*`β__3`^2*`λ__3`+8*`β__3`)*`β__2`+8*`β__3`^2*(`β__3`*`λ__3`-1)^2)*`σ__ν`^4+((8*`β__1`^2*(`α__2s`^2+(`α__1`+1/2)^2)*`λ__1`^2+16*`β__1`*(`β__2`*((`α__1`+1/2)*`α__1s`+`α__2s`*(`α__2`+1/2))*`λ__2`+(-(1/4)*`α__1`-1/8)*`α__1s`-(1/4)*`α__1`^2-(1/4)*`α__2`*`α__2s`-(1/4)*`α__2s`^2-(1/8)*`α__1`)*`λ__1`+8*`β__2`^2*(`α__1s`^2+(`α__2`+1/2)^2)*`λ__2`^2-(4*(`α__1`*`α__1s`+`α__1s`^2+(`α__2s`+`α__2`)*(`α__2`+1/2)))*`β__2`*`λ__2`+`α__1`^2+`α__2`^2+`α__1s`^2+`α__2s`^2)*`σ__d`^2+8*(`λ__3`*(`α__3`+1/2)*`β__3`-(1/2)*`α__3`)^2*`σ__d3`^2)*`σ__ν`^2+((2*`α__2s`^4+4*(`α__1`+1/2)^2*`α__2s`^2+2*`α__1`^2*(`α__1`+1)^2)*`λ__1`^2+(4*((`α__1`^2+`α__1`)*`α__1s`^2+(2*(`α__2`+1/2))*(`α__1`+1/2)*`α__1s`*`α__2s`+`α__2`*`α__2s`^2*(`α__2`+1)))*`λ__2`*`λ__1`+2*`λ__2`^2*(`α__1s`^4+2*(`α__2`+1/2)^2*`α__1s`^2+`α__2`^2*(`α__2`+1)^2))*`σ__d`^4+2*`λ__3`^2*`σ__d3`^4*`α__3`^2*(`α__3`+1)^2

(8*lambda__1^2*beta__1^4-8*beta__1^2*(-2*beta__2^2*lambda__2-2*beta__3^2*lambda__3+beta__1+beta__2+2*beta__3)*lambda__1+8*lambda__2^2*beta__2^4-8*beta__2^2*(-2*beta__3^2*lambda__3+beta__1+beta__2+2*beta__3)*lambda__2+3*beta__1^2+(-8*beta__3^2*lambda__3+2*beta__2+8*beta__3)*beta__1+3*beta__2^2+(-8*beta__3^2*lambda__3+8*beta__3)*beta__2+8*beta__3^2*(beta__3*lambda__3-1)^2)*`σ__ν`^4+((8*beta__1^2*(alpha__2s^2+(alpha__1+1/2)^2)*lambda__1^2+16*beta__1*(beta__2*((alpha__1+1/2)*alpha__1s+alpha__2s*(alpha__2+1/2))*lambda__2+(-(1/4)*alpha__1-1/8)*alpha__1s-(1/4)*alpha__1^2-(1/4)*alpha__2*alpha__2s-(1/4)*alpha__2s^2-(1/8)*alpha__1)*lambda__1+8*beta__2^2*(alpha__1s^2+(alpha__2+1/2)^2)*lambda__2^2-4*(alpha__1*alpha__1s+alpha__1s^2+(alpha__2s+alpha__2)*(alpha__2+1/2))*beta__2*lambda__2+alpha__1^2+alpha__2^2+alpha__1s^2+alpha__2s^2)*sigma__d^2+8*(lambda__3*(alpha__3+1/2)*beta__3-(1/2)*alpha__3)^2*sigma__d3^2)*`σ__ν`^2+((2*alpha__2s^4+4*(alpha__1+1/2)^2*alpha__2s^2+2*alpha__1^2*(alpha__1+1)^2)*lambda__1^2+4*((alpha__1^2+alpha__1)*alpha__1s^2+2*(alpha__2+1/2)*(alpha__1+1/2)*alpha__1s*alpha__2s+alpha__2*alpha__2s^2*(alpha__2+1))*lambda__2*lambda__1+2*lambda__2^2*(alpha__1s^4+2*(alpha__2+1/2)^2*alpha__1s^2+alpha__2^2*(alpha__2+1)^2))*sigma__d^4+2*lambda__3^2*sigma__d3^4*alpha__3^2*(alpha__3+1)^2

(2)

simplify(a-b)

0

(3)


Download simp.mw

Anyway, best answer of course!

@mmcdara thanks a lot for all the work. But what about the simplify issue I mentioned?

@mmcdara so the mean and variance of my A+B+C sum would simply be these:

Separately, I noticed (also on other occasions) that simplify() does not behave consistently. Your simplify() on Cov(A,B) gives you a clean expression (no awkward fractions), which is easier to read than mine with fractions. How can I make Maple "prioritize" your simplify() output over mine?
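To make the question concrete, this is roughly the comparison I have in mind (expr is a hypothetical placeholder for any of the covariance expressions in the worksheet below; length() used as a crude size measure):

cands := [simplify(expr), simplify(expr, size), factor(expr)]:
sort(cands, (u, v) -> length(u) < length(v))[1];   # keep the shortest form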

restart

with(Statistics):

RVS := [
         Nu__1    = RandomVariable(Normal(0, sigma__nu)),
         Nu__2    = RandomVariable(Normal(0, sigma__nu)),
         Delta__1 = RandomVariable(Normal(0, sigma__d)),
         Delta__2 = RandomVariable(Normal(0, sigma__d)),
         Delta__3 = RandomVariable(Normal(0, sigma__d3))
       ]

[Nu__1 = _R, Nu__2 = _R0, Delta__1 = _R1, Delta__2 = _R2, Delta__3 = _R3]

(1)

X__1 := beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2;
X__2 := beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1;
X__3 := beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3;

beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2

 

beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1

 

beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3

(2)

A := X__1*(-lambda__1*X__1-lambda__1*Delta__1+Nu__1);
B := X__2*(-lambda__2*X__2-lambda__2*Delta__2+Nu__2);
C := X__3*(-lambda__3*X__3-lambda__3*Delta__3+Nu__1+Nu__2);

(beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2)*(-(beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2)*lambda__1-lambda__1*Delta__1+Nu__1)

 

(beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1)*(-(beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1)*lambda__2-lambda__2*Delta__2+Nu__2)

 

(beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3)*(-(beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3)*lambda__3-lambda__3*Delta__3+Nu__1+Nu__2)

(3)

`Cov(A, B)` := Covariance(eval(A, RVS), eval(B, RVS)):

`Cov(A, B)` := simplify(`Cov(A, B)`);

2*((alpha__1^2+alpha__1)*alpha__1s^2+2*(alpha__1+1/2)*(alpha__2+1/2)*alpha__1s*alpha__2s+alpha__2*alpha__2s^2*(alpha__2+1))*lambda__2*lambda__1*sigma__d^4+8*sigma__nu^2*(beta__1*(beta__2*((alpha__1+1/2)*alpha__1s+alpha__2s*(alpha__2+1/2))*lambda__2+(-(1/4)*alpha__1-1/8)*alpha__1s-(1/4)*alpha__2*alpha__2s)*lambda__1-(1/4)*lambda__2*beta__2*(alpha__1*alpha__1s+alpha__2s*(alpha__2+1/2)))*sigma__d^2+8*sigma__nu^4*beta__1*beta__2*(beta__1*(beta__2*lambda__2-1/2)*lambda__1-(1/2)*beta__2*lambda__2+1/8)

(4)

# and so on for Cov(A, C) and Cov(B, C)
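# A hedged sketch of the analogous calls referred to above (same pattern as
# Cov(A, B); these two lines are not in the original worksheet):
`Cov(A, C)` := simplify(Covariance(eval(A, RVS), eval(C, RVS))):
`Cov(B, C)` := simplify(Covariance(eval(B, RVS), eval(C, RVS))):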

Obj := A+B+C;

(beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2)*(-(beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2)*lambda__1-lambda__1*Delta__1+Nu__1)+(beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1)*(-(beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1)*lambda__2-lambda__2*Delta__2+Nu__2)+(beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3)*(-(beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3)*lambda__3-lambda__3*Delta__3+Nu__1+Nu__2)

(5)

`Var(Obj)` := Variance(eval(Obj, RVS)):

`Var(Obj)` := simplify(`Var(Obj)`);

(8*lambda__1^2*beta__1^4-8*beta__1^2*(-2*beta__2^2*lambda__2-2*beta__3^2*lambda__3+beta__1+beta__2+2*beta__3)*lambda__1+8*lambda__2^2*beta__2^4-8*beta__2^2*(-2*beta__3^2*lambda__3+beta__1+beta__2+2*beta__3)*lambda__2+3*beta__1^2+(-8*beta__3^2*lambda__3+2*beta__2+8*beta__3)*beta__1+3*beta__2^2+(-8*beta__3^2*lambda__3+8*beta__3)*beta__2+8*beta__3^2*(beta__3*lambda__3-1)^2)*sigma__nu^4+((8*(alpha__2s^2+(alpha__1+1/2)^2)*beta__1^2*lambda__1^2+16*(beta__2*((alpha__1+1/2)*alpha__1s+alpha__2s*(alpha__2+1/2))*lambda__2+(-(1/4)*alpha__1-1/8)*alpha__1s-(1/4)*alpha__1^2-(1/4)*alpha__2*alpha__2s-(1/4)*alpha__2s^2-(1/8)*alpha__1)*beta__1*lambda__1+8*beta__2^2*(alpha__1s^2+(alpha__2+1/2)^2)*lambda__2^2-4*beta__2*(alpha__1s^2+alpha__1*alpha__1s+(alpha__2s+alpha__2)*(alpha__2+1/2))*lambda__2+alpha__1^2+alpha__2^2+alpha__1s^2+alpha__2s^2)*sigma__d^2+8*sigma__d3^2*(lambda__3*(alpha__3+1/2)*beta__3-(1/2)*alpha__3)^2)*sigma__nu^2+((2*alpha__2s^4+4*(alpha__1+1/2)^2*alpha__2s^2+2*alpha__1^2*(alpha__1+1)^2)*lambda__1^2+4*((alpha__1^2+alpha__1)*alpha__1s^2+2*(alpha__1+1/2)*(alpha__2+1/2)*alpha__1s*alpha__2s+alpha__2*alpha__2s^2*(alpha__2+1))*lambda__2*lambda__1+2*(alpha__1s^4+2*(alpha__2+1/2)^2*alpha__1s^2+alpha__2^2*(alpha__2+1)^2)*lambda__2^2)*sigma__d^4+2*lambda__3^2*sigma__d3^4*alpha__3^2*(alpha__3+1)^2

(6)

`Exp(Obj)` := Mean(eval(Obj, RVS)):

`Exp(Obj)` := simplify(`Exp(Obj)`);

(-2*beta__1^2*lambda__1-2*beta__2^2*lambda__2-2*beta__3^2*lambda__3+beta__1+beta__2+2*beta__3)*sigma__nu^2+((-alpha__1^2-alpha__2s^2-alpha__1)*lambda__1-lambda__2*(alpha__1s^2+alpha__2^2+alpha__2))*sigma__d^2-lambda__3*sigma__d3^2*alpha__3*(alpha__3+1)

(7)
 


Download No_problem_MaPal.mw

@acer good catch. My mistake. See new worksheet in question body. 

@dharr thanks a lot! Your three answers all make sense to me now.

@dharr thanks a lot. I really appreciate your help.

This should be correct if I am not overlooking anything: derivatives_final.mw. The file only includes lambda-lambda, beta-beta, and alpha-alpha comparisons. Next, I will try lambda-beta, lambda-alpha, and beta-alpha comparisons (6 additional comparisons in total, i.e., for each pair with respect to sigma_d and sigma_v). Hopefully I can get back to you if I encounter issues.

Three minor questions:

  1. Two second derivatives change sign. For each, I find the zero with solve(), but it returns two zeros instead of the single zero I expected. Why?
  2. I find minima and maxima with DirectSearch:-GlobalOptima, which is accurate most of the time but quite slow. Is there a more efficient alternative? I write "most of the time" because occasionally it gives me very random results, completely off from the peaks and troughs I can eyeball from the plots.
  3. My axes are always gamma*sigma_d^2 vs gamma*sigma_d*sigma_v. Are there any other parameter combinations that I could use? Would it make sense to rescale the plots together with the actual axes, e.g., dividing both by gamma*sigma_d so that my new y-axis and x-axis are, respectively, sigma_d and sigma_v? How would I do so? Perhaps that would make the plots easier to interpret (a sketch of what I mean follows below)...
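For question 3, this is the kind of axis change I have in mind, as a hedged sketch (P is a hypothetical name for one of the existing plots, gamma0 a fixed value of gamma): since y = gamma*sigma_d^2 and x = gamma*sigma_d*sigma_v, a point (x, y) maps to (sigma_v, sigma_d) = (x/sqrt(gamma0*y), sqrt(y/gamma0)).

gamma0 := 1:                                       # hypothetical fixed gamma
T := plottools:-transform((x, y) -> [x/sqrt(gamma0*y), sqrt(y/gamma0)]):
plots:-display(T(P));                              # assumes the plotted y-range stays away from 0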

Thanks a lot.

 

@dharr working_example2.mw works great. Thanks a lot for the helpful comments too!

About the derivatives, please check derivatives_2.mw (with alias, without approximation) or derivatives_2_(1).mw (without alias, with approximation). Perhaps in the second one the derivatives look nicer? I am not sure...anyway...

Questions:

  1. How do I determine the signs? For example, the partial derivative of lambda with respect to sigma_v should be positive according to the inert form (given that Lambda is positive, at least for the root I care about, and that d(Lambda)/d(Gamma) is negative), but my if-else loop always gives me negative... How can I fix it? (A sketch of the kind of check I have in mind follows after this list.)
  2. Are my second derivatives correct? (See the header of the worksheet and the 8 additional derivatives compared to your version.) Specifically, does the symmetry of second derivatives always hold in my cases? If not, should I differentiate first with respect to sigma_d or sigma_v and then with respect to gamma, or vice versa, if I want to compare the 8 derivatives with each other?
  3. The scaling seemed easy, but I ran into issues when plotting. See, for example, the comparison at the bottom of the script.
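For question 1, this is roughly the check I have been attempting (dLambda_dsv is a hypothetical name standing for the derivative expression in derivatives_2.mw): test the sign symbolically under explicit assumptions, rather than with an if-else on a sampled value.

is(dLambda_dsv > 0) assuming gamma > 0, sigma__d > 0, sigma__v > 0, Lambda > 0;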

Thank you for looking into it.

@dharr I think I managed to do the beta1-beta3 comparison: working_example.mw. Could you please check if it is correct?

Questions:

  1. However, I have sigma_d3/sigma_d on the y-axis instead of sigma_d/sigma_d3, as in the lambda1-lambda3 comparison you did. Does it make sense to have an inverted y-axis, or should I try to use the same y-axis to facilitate interpretation?
  2. For the alpha-alpha_s comparison, I found that alpha > alpha_s regardless of the parameter values. However, I am having a tough time doing the beta-alpha and beta-alpha_s comparisons. Can you help?
  3. For the partial derivatives, I set them up using the chain rule and your previous code. Are they all correct (see the inert forms)? Now, if I want to scale them so that I can compare them with each other, I would first need to visualize them with aliases, but that is not working as I expected (see my script). However, even if I am successful in using the aliases, the manual scaling still seems quite hard.

What do you propose? (and should I migrate this to a separate question which I would title "Non-dimensionalization and finding the key pair of parameters combinations"?)    

(Notation: L is lambda_1=lambda_2, B is beta_1=beta_2, A is alpha_1=alpha_2, As is alpha_2s=alpha_1s.)
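For concreteness, the substitution I have in mind for this symmetric case, as a hedged sketch (expr is a hypothetical placeholder for any of the expressions in working_example.mw):

sym := {lambda__1 = L, lambda__2 = L, beta__1 = B, beta__2 = B,
        alpha__1 = A, alpha__2 = A, alpha__1s = As, alpha__2s = As}:
expr_sym := simplify(eval(expr, sym));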

 

@dharr thank you, as always! After reading your details, I understand that what you propose is the simplest and most effective way to assess magnitudes and do what I wanted to do. (Best answer went to @mmcdara because his approach was proposed first and effectively tackled my initial request for simpler functions. I admit that I later changed my request to a more complicated case, and I would have given you best answer had I asked for help with the complicated case from the beginning.)

Back to your approach, I think I got it and I have a working example I will upload here once MaplePrimes allows me to upload files again (are you also encountering issues?)

In summary, I think:

  1. Your approach always requires manual scaling tailored to the specific functions at hand, but it is very effective.
  2. Working with Lambda, which is just a quartic in implicit form, is actually simpler than working with its approximate form.
  3. The underlying idea is to obtain two parameter combinations that I can plot as x vs y and identify the equality line (a minimal sketch follows below).
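Something along these lines (F1 and F2 are hypothetical placeholders for the two quantities being compared, written in the nondimensional variables Gamma and Psi):

plots:-implicitplot(F1 - F2 = 0, Gamma = 0 .. 10, Psi = 0 .. 10);   # the equality line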

I will try to make it work for my derivatives too now and get back to you here.
