MaPal93

MaplePrimes Activity


These are replies submitted by MaPal93

@tomleslie thank you for the exhaustive answer!

@dharr understood. Do you mind double-checking whether I am missing anything from my analysis? 250523_SOLUTION_analytical.mw

I am not confident about it; I might have missed some corner solutions, as I got confused by the different domains...

From my (imperfect) analysis, it seems that for all 6 calibrations there is no solution in which both lambda_2 and lambda_3 are positive. Could you confirm this?

@tomleslie thank you very much for your clear answer. Yes, I realized the index issue but did not know how to collect each iteration in the right data structure. Your plots are very useful. I have two follow-up questions:

  1. Can you show me calibration 12 and just one representative example for calibrations 9 to 11? These are correlations, so I'll need something like `for INDEX from -1 by 0.002 to 1 do` for calibrations 9 to 11 and `for INDEX from -0.998 by 0.002 to 1 do` for calibration 12 (I noticed that starting from -1 gives me an error for that calibration).
  2. You wrote "You just have to decide exactly what you want to see". I wrote that in the body of my question: I'd like to use subplots. In particular, for the lambda plots I want to combine multiple calibrations (please refer to what I wrote above), while for the 12 beta plots I think it is visually better to split the 6 data series so that each sub-plot has just 2 (though I do agree with you that there is no need for dual-axis plots).
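For reference, this is roughly how I imagine collecting each iteration into a data structure for plotting (a sketch only; `lambda2_expr` and `rho` are placeholder names for a solution expression and the swept correlation parameter, not from the actual worksheet):

```maple
# Hypothetical sketch: sweep the correlation INDEX and collect
# [INDEX, lambda__2] pairs for plotting with plots:-pointplot.
results := [];
for INDEX from -0.998 by 0.002 to 1 do
    lambda2val := eval(lambda2_expr, rho = INDEX);  # placeholder names
    results := [op(results), [INDEX, lambda2val]];
end do:
plots:-pointplot(results);
```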

@dharr thank you for the detailed analysis for gamma.

"if you look at RegularChains:-Triangularize that PolynomialSysyem is using, you can see how to more specifically add inequalities.."

  • How about I just add the inequalities to the set of equations I pass to the solver (regardless of the solver I use)?

EDIT: apparently I cannot do this, since the only acceptable inputs are polynomials.

"...but it might be faster to process them after as needed."

  • Does it mean that the quadratic solutions I obtain from the solver would be the same regardless of the assumptions on my variables and parameters (and that only the roots of these quadratic equations depend on the assumptions)?
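If post-processing is indeed the way to go, I imagine something like the following (a sketch only; `sols` stands for whatever the solver returns as a list of substitution sets):

```maple
# Sketch: post-filter solver output, keeping only solutions where both
# lambda__2 and lambda__3 are provably real and positive (is() may return
# FAIL, which the explicit "= true" comparison filters out).
positive_sols := select(s ->
    is(eval(lambda__2, s) > 0) = true and is(eval(lambda__3, s) > 0) = true,
    sols);
```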

@dharr a related question: is it possible that the absence of all-positive solutions is because the solver I used may ignore assumptions on the parameters (in the main script I defined them in the top execution block, same as I did in 220523_SOLUTION_reducedform_calibrations.mw)?

Because I know that standard solve() sometimes ignores these (sometimes even with the "assuming" option), but at least it outputs a warning...while I don't see such a warning in the output of SolveTools:-PolynomialSystem. Does that mean it took the assumptions into account while solving?
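For comparison, this is the pattern I know for making standard solve respect assumptions (a toy equation, not my actual system):

```maple
# Toy example: plain solve returns both roots of x^2 = 4, while the
# useassumptions option together with assuming should keep only the
# root consistent with x > 0.
solve(x^2 = 4, x);
solve(x^2 = 4, x, useassumptions) assuming x > 0;
```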

@dharr understood, thank you. I guess this is good news in the sense that from now on I can work with a 2-equation system. Would you mind sharing your script for gamma with me? What about the other equations?

@dharr thank you. I think the symmetry is intrinsic to the problem I am dealing with. Just to clarify: if I evaluate a 3-equation system at lambda_1=lambda_2 and nops() gives me 2, is that definitive proof that one equation is redundant for such a system? If so, then I have observed that even more general systems (not all free parameters set to 1) can be reduced to a 2-equation system and solved very fast.
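To make the check concrete, this is what I mean (a sketch; `EqN` is the set of three polynomial equations):

```maple
# EqN is a set, so identical elements collapse automatically: nops dropping
# from 3 to 2 means two equations became the same polynomial on this slice.
sys2 := simplify(eval(EqN, lambda__2 = lambda__1));
nops(sys2);
```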

This is exactly what I did. I have 6 parameters (gamma, rho_u, rho_v, sigma_u, sigma_v, sigma_epsilon). For each calibration, I keep only one as a free parameter and fix the others to 1. [The run in which I keep all 6 as free parameters is still evaluating.] These are my solutions: 

SOLUTION_LAMBDAS_cal_gamma.pdf

SOLUTION_LAMBDAS_cal_rhou.pdf

SOLUTION_LAMBDAS_cal_rhov.pdf

SOLUTION_LAMBDAS_cal_sigmau.pdf

SOLUTION_LAMBDAS_cal_sigmav.pdf

SOLUTION_LAMBDAS_cal_sigmaeps.pdf

Can you kindly help me analyze just the quadratic solution for each of these 6? I collected them here (with startup code included): 220523_SOLUTION_reducedform_calibrations.mw. I got a bit confused applying Descartes' rule of signs to these (you can see my attempts)...I am still looking only for real and positive closed-form solutions for lambda_2 (= lambda_1) and lambda_3.
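For the record, this is my understanding of Descartes' rule of signs, as a small helper for numeric coefficient lists (a generic sketch, not one of my actual calibrated polynomials; `SignChanges` is my own name):

```maple
# Count sign changes in a numeric coefficient list: by Descartes' rule of
# signs, this is an upper bound (tight up to an even number) on the number
# of positive real roots of the corresponding polynomial.
SignChanges := proc(L::list)
    local nz, i;
    nz := remove(x -> x = 0, L);  # zero coefficients are skipped
    add(`if`(nz[i]*nz[i+1] < 0, 1, 0), i = 1 .. nops(nz) - 1);
end proc:

SignChanges([1, -3, 2]);  # x^2 - 3*x + 2 = (x-1)*(x-2): 2 sign changes, 2 positive roots
```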

Thanks a lot. 

@dharr with an alternative methodology to create the EqNs to be fed into the solver, it seems that I managed to obtain a real and positive root for my lambda_1, lambda_2, and lambda_3 (from the 5th solving equation). My analysis is in: 210523_SOLUTION_normalized_version2.mw (input solution file is SOLUTION_LAMBDAS_normalized_v2.pdf and startup code includes the new EqNs).

Can you confirm that my analysis is correct? In particular:

  • Do you confirm that such a positive and real solution from the 5th solving equation is unique? That is, do you confirm that the 1st solving equation (bottom of my analysis script) does produce infinitely many real solutions but none positive (by Descartes' rule of signs)?

Thank you!

 

@dharr sorry for my late reply. Your script and your answers really helped me understand more about the normalized case. Thank you.

You wrote: "There are some interesting symmetries that might help you. For example EqN[2] and EqN[3] are the same if lambda__1 and lambda__2 are exchanged (and this was true for sol[1]). Is this expected?"

How does EqN map to MyEqs (both in black)? When you say EqN[2] and EqN[3], do you refer to the normalized 'eql1 - lambda_1' and 'eql2 - lambda_2'? Then absolutely: the equations for lambda_1 and lambda_2 are "specular" by construction. Now look at indets(MyEqs) minus MyVars: these are all my parameters.

MyEqs  := {eql1 - lambda__1, eql2 - lambda__2, eql3 - lambda__3}:
MyVars := {lambda__1, lambda__2, lambda__3}:

indets(MyEqs) minus MyVars;

#print~([ seq([i=indets(MyEqs[i], name) intersect MyVars], i=1..nops(MyEqs)) ]):
 

{Cov_S12, Cov_u12, Cov_u13, Cov_u23, Var_S1, Var_S2, Var_nu1, Var_nu2, Var_u1, Var_u2, Var_u3, gamma, Cov_nu12, Cov_nu12_S, Var_nu1_S, Var_nu2_S, theta__11, theta__12, theta__21, theta__22}


StringTools:-FormatTime("%H:%M:%S");
infolevel[solve] := 4;
P := indets(MyEqs, name) minus MyVars;
EqN := ((numer@evala@:-Norm@numer)~@eval)(MyEqs, P =~ 1);
kernelopts(numcpus);
SOLUTION_LAMBDAS_parallel_triade := CodeTools:-Usage(
    SolveTools:-PolynomialSystem(EqN, MyVars, engine = triade,
        backsubstitute = false, preservelabels));
(length, length~, nops)(SOLUTION_LAMBDAS_parallel_triade);
StringTools:-FormatTime("%H:%M:%S");


Download EqN.mw

I am trying to move progressively away from the normalized case (all params set to 1) by setting the parameters to other values: either to the same common value for a few parameters, or to zero. If I use cal1 (which I prefer over cal2), the solver evaluates for about 1h and then I get "kernel connection lost - stack limit reached". I have plenty of computational resources and time to spend on my problem. What exactly is the source of this error, why do I get it so early (instead of, for example, the solver continuing to evaluate for a few days straight), and why didn't I get it in the normalized case? Is there a way to bypass this and force the solver to keep working until I stop the command manually?
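One thing I plan to try is raising the kernel's stack limit before solving (a sketch; I am not certain this is what triggers the "stack limit reached" message, and the operating system may impose its own cap):

```maple
kernelopts(stacklimit);             # query the current kernel stack limit (in KiB)
kernelopts(stacklimit = 4*1024^2);  # attempt to raise it before the long solve
```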

# Calibration:

P := indets(MyEqs, name) minus MyVars;

cal1 := [
  Cov_S12 = Cov_nu,
  Cov_u12 = Cov_u,
  Cov_u13 = Cov_u,
  Cov_u23 = Cov_u,
  Var_S1 = Var_S,
  Var_S2 = Var_S,
  Var_nu1 = Var_nu,
  Var_nu2 = Var_nu,
  Var_u1 = Var_u,
  Var_u2 = Var_u,
  Var_u3 = Var_u,
  Cov_nu12 = Cov_nu
]:

cal2 := [
  Cov_S12 = 0,
  Cov_u12 = 0,
  Cov_u13 = 0,
  Cov_u23 = 0,
  Var_S1 = Var_S,
  Var_S2 = Var_S,
  Var_nu1 = Var_nu,
  Var_nu2 = Var_nu,
  Var_u1 = Var_u,
  Var_u2 = Var_u,
  Var_u3 = Var_u,
  Cov_nu12 = 0
]:

MyEqs_cal := eval(MyEqs,cal1):
EqN_cal := ((numer@evala@:-Norm@numer)~@eval)(MyEqs_cal):
EqN := simplify(EqN_cal,size):
P_cal := indets(EqN, name) minus MyVars;

{Cov_S12, Cov_u12, Cov_u13, Cov_u23, Var_S1, Var_S2, Var_nu1, Var_nu2, Var_u1, Var_u2, Var_u3, gamma, Cov_nu12, Cov_nu12_S, Var_nu1_S, Var_nu2_S, theta__11, theta__12, theta__21, theta__22}

 

{Cov_nu, Cov_u, Var_S, Var_nu, Var_u, gamma, Cov_nu12_S, Var_nu1_S, Var_nu2_S, theta__11, theta__12, theta__21, theta__22}


Download caltriade.mw

I will now try to feed EqN calibrated to cal2 to see if I obtain anything, but I still expect to get the kernel-connection-lost error and, in any case, I would lose the covariance terms and, therefore, the richness of my model.

Back to the symmetry between EqN[2] and EqN[3]:

Can you think of any calibration of my P or P_cal that takes advantage of such symmetry? Is it even conceivable, for example, to rewrite polynomials EqN[2] and EqN[3] as a single polynomial to be put in a system with only EqN[1]? More generally, how can I use the fact that lambda_1 and lambda_2 are "exchangeable"?
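As a first check of the symmetry itself, I imagine something like this (a sketch; if EqN[2] and EqN[3] really swap under the exchange, their difference should be antisymmetric):

```maple
# If EqN[3] equals EqN[2] with lambda__1 and lambda__2 swapped, their
# difference is antisymmetric and must carry the factor (lambda__1 - lambda__2).
d := factor(EqN[2] - EqN[3]);
# One could then solve the symmetric pair instead:
#   EqN[2] + EqN[3]  and  (EqN[2] - EqN[3])/(lambda__1 - lambda__2),
# rewritten in s = lambda__1 + lambda__2 and p = lambda__1*lambda__2.
```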

@dharr thank you.

Let me be clear about my goal. I am looking for a (i) closed-form, (ii) real, and (iii) positive solution. Even just one solution satisfying all three constraints would be fantastic.

Now, it seems that sol[1] (2 equations in the lambdas) and sol[5] (3 equations in the lambdas) do not look weird and are the most promising for further analysis.

My questions:

1) How did you find the unique and real but negative lambdas from sol[5]?

2) You showed me an example in which sol[1] also produces a real but negative solution. In the same comment you mentioned "but real and positive is harder". Why is that the case? If real and positive solutions are not mathematically ruled out a priori, is there a more systematic way to find them? (Even just a loop that tries many substitutions and stops as soon as one is found would do, I guess?)

3) The benefit of dividing out the common factor in the system with all parameters normalized to 1 is not clear to me (since we already found solutions and, factorized or not, these solutions would be the same). The benefit would be clear if such a removable common factor existed in the uncalibrated equations EqN := ((numer@evala@:-Norm@numer)~@eval)(MyEqs). How can I verify this possibility?

4) Surely a naive question, but would a solution found as in 2) also solve the uncalibrated system? That is, would simplify(eval(EqN,sol)) still give me 0 for EqN := ((numer@evala@:-Norm@numer)~@eval)(MyEqs) instead of EqN := ((numer@evala@:-Norm@numer)~@eval)(MyEqs, P =~ 1)?

5) Related to 4). My end goal is to study how lambda_1, lambda_2, and lambda_3 vary with my parameters. Is it legitimate to pick a real and positive lambda_2 and lambda_3 (found as in 2)), plug them back into the uncalibrated EqN[1] and solve for lambda_1; then pick lambda_1 and lambda_3, plug them back into the uncalibrated EqN[2] and solve for lambda_2; and finally pick lambda_1 and lambda_2, plug them back into the uncalibrated EqN[3] and solve for lambda_3? See my original script 160523_stylized_triade.mw: the first equation is for lambda_1, the second for lambda_2, and the third for lambda_3.
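What I have in mind for 5) is something like the following (a sketch; `lam2val` and `lam3val` are placeholder names for the candidate values found as in 2)):

```maple
# Hypothetical plug-back: fix two lambdas at candidate values, then solve
# each uncalibrated equation for the remaining lambda in turn.
lam1sol := solve(eval(EqN[1], {lambda__2 = lam2val, lambda__3 = lam3val}), lambda__1);
lam2sol := solve(eval(EqN[2], {lambda__1 = lam1sol, lambda__3 = lam3val}), lambda__2);
lam3sol := solve(eval(EqN[3], {lambda__1 = lam1sol, lambda__2 = lam2sol}), lambda__3);
```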

Really thank you for helping me out with this.

@dharr lambdas are not eigenvalues but some coefficients derived from a conjecture. Let me think if I can somehow derive them as eigenvalues (but I don't think so).

Good catch about the common factor (and yes, knowing my system, I think it makes sense that there are some common factors across the 3 equations). Can you show me which common factor this is in the example above? And can you show me how to rewrite the 3 equations accordingly?

I tried once to pass MyEqs directly while keeping the parameters (with backsubstitute=true), and I got the error "kernel connection lost - stack limit reached" after about 45 minutes. So how could I verify whether such a common factor across the 3 equations also exists in the uncalibrated system? Can I assume that it does, ex ante instead of ex post, and somehow (how?) rewrite my 3 equations in a simplified way before feeding them to the solver?
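Is the following the right way to check, without running the full solver (a sketch on the uncalibrated polynomials, assuming they are not too large for gcd)?

```maple
# Common polynomial factor of the three equations (g = 1 if there is none);
# dividing it out shrinks the system before it is fed to the solver.
g := foldl(gcd, op(EqN));
EqN_reduced := map(e -> normal(e/g), EqN);
```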

@dharr thank you for your answers. I think it is great to at least know that the system has solutions and that these can be found with triade. Now I need to recover some interpretability. Should I now try to:

  1. Keep my parameters instead of normalizing them all to 1?
  2. Pass MyEqs directly instead of EqN? (I expect it would take much longer to solve but, apart from that, the system should still be solvable. Am I correct? Are there any mathematical reasons why my system could become unsolvable if I keep my parameters?)

Any other suggestions? I don't know much about RealTriangularize...but what if lambda_3 is always going to be imaginary?
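From the help page, my understanding is that RealTriangularize can take positivity constraints directly, something like this for the normalized system, where the lambdas are the only remaining names (a sketch only; I have not run it on my system and may have the calling sequence wrong):

```maple
with(RegularChains):
R := PolynomialRing([lambda__1, lambda__2, lambda__3]);
# Real solutions with all three lambdas strictly positive; an empty
# decomposition would prove that no all-positive real solution exists.
dec := RealTriangularize([op(EqN), lambda__1 > 0, lambda__2 > 0, lambda__3 > 0], R);
```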

@dharr wait a minute...

1) What exactly is EqN? Could you load my solution instead of hard-coding the expressions?

2) is this system indeterminate (infinitely many solutions)?

3) Would it be preferable if I chose backsubstitute = true?

4) what happened to my parameters P := indets(MyEqs, name) minus MyVars ? I was hoping to obtain solutions as functions of those.

5) is this solving approach (a) reliable and (b) preferable to engine=groebner or standard solve()?

6) how to enforce RealDomain?

Thanks a lot! I much appreciate your explanations.

@dharr thanks for your feedback. On a related note, are you aware of SolveTools:-PolynomialSystem with engine=triade?

I am using a procedure recommended to me a long time ago by @Carl Love and @acer, but in all honesty I do not understand the output I just obtained: 160523_stylized_triade.mw

It took about 1h to finish the TriangularDecomposition. Please take a look at my saved output SOLUTION_triade.pdf (convert it to .m for further analysis). Thanks a lot!

@mmcdara, I would totally agree with you if 240323_simpler_mmcdara.mw didn't give us a one-line-long, interpretable and beautifully understandable solution, but it did! See:

Yes, it was after "unveiling" the large coefficients; yes, it was after calibration; and yes, it was after simplify(allvalues) and combine(radical). But we finally did it and obtained results that made sense to me. I really don't see why the same cannot be achieved with a 6x6 instead of a 4x4 system.

Again, I have already calibrated my system before attempting to solve it and, again, there exists a 3x3 sub-system which is linear in the 3 mus and thus solvable independently (the other 3 equations do not depend on the mus). Standard solve() is still evaluating and there's no kernel-lost error, so I am hopeful:

 

UPDATE: it took about 3 days on my machine but I solved the linear sub-system for the mus with standard solve(). The solution is incredibly simple and it makes sense to me: {mu_1=nu_0, mu_2=nu_0, mu_3=2*nu_0}
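For future reference, since that sub-system is linear in the mus, it could presumably also be solved much faster with linear algebra (a sketch; `mu_eqs` is a placeholder for the three equations involving the mus):

```maple
with(LinearAlgebra):
# Rewrite the linear mu sub-system as A . mu = b and solve it directly,
# avoiding the general-purpose machinery of solve().
A, b := GenerateMatrix(mu_eqs, [mu__1, mu__2, mu__3]);
musol := LinearSolve(A, b);
```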

Now, I am trying to solve the remaining 3x3 sub-system for the 3 lambdas (using SolveTools:-PolynomialSystem with engine=groebner). infolevel reads: "GroebnerBasis: system has a low, positive Hilbert dimension" and then "GroebnerBasis: computing a factored plex basis using Groebner[Solve]". Is this positive news, and should I be hopeful of obtaining a solution if I am patient enough?

My machine has 128 cores (256 threads) and over 1TB of RAM. GPU is an NVIDIA Tesla A100. kernelopts(numcpus) gives me 256 and parallelization is definitely working:

 

However, I have a question for you. All six equations depend on the 3 lambdas non-linearly, but I am solving only the first 3 equations (those not depending on the mus) for the lambdas. Is this correct? If this sub-system happens to have no solution for the lambdas (or other issues), should I instead solve the last 3 equations (those giving me the mus) for the lambdas?

Thank you guys!
