MaPal93

175 Reputation

6 Badges

2 years, 331 days

MaplePrimes Activity


These are replies submitted by MaPal93

@dharr thanks. Perhaps this is good enough for f_1 (= f_2) and f_3, but it can be complicated to customize this for "other functions". In contrast, @mmcdara's solve() approach seems more standard: I just plug in my two functions and it returns the precise inequality relationship (between the parameters, or products/ratios of them) that dictates the relative magnitudes (without any scaling, which I think is only needed for 2D plotting). So, how do I fix complicated_comparison.mw? Or perhaps there are advantages to your "plotting approach" over solve() that I am not seeing? (See my last paragraph below.)

@dharr What do I mean by "other functions"? I mean (a) functions built on combinations of f_1 (I call them g_1, g_2, and g_3 in my script right below, and beta_1, beta_2, and beta_3 in the script at the bottom of my comment), and (b) partial derivatives of f_1 and f_3 with respect to my underlying parameters. For both (a) and (b) it is perhaps useful to use the approximate form of Lambda instead of Lambda itself, but your "plotting approach" can get tricky: complicated_comparison2.mw.

What do I mean by (b)? See partial_derivatives.mw. My END GOAL is to do exactly the same type of comparisons I did here for the limit of gamma to infinity, but for a finite gamma (so I will have additional partial derivatives with respect to gamma). Hence the convenience of using the approximate form of Lambda instead of Lambda itself (as the partial derivatives of the latter get too messy). I am interested in both the sign and the relative absolute magnitude of the partial derivatives. Perhaps for the finite-gamma case (i) there are sign switches for some partial derivatives, e.g., a derivative being positive for one range of parameter values but negative for another, and (ii) your 2D plotting approach can actually be handy for the relative magnitudes in cases where the boundary between the two domains is easier to plot than to express as a formula (in contrast to the gamma-to-infinity case, where plotting is not needed since the inequalities are very simple).


@mmcdara thanks. I fixed my mistakes and adapted your script for constructing piecewise comparisons for my other simple functions depending on just 3 parameters. 

Now I have a more complicated case with 4 parameters: gamma, sigma__v, sigma__d, sigma__d3 which are all strictly positive. I am pretty sure this is still simple enough for solve() to tackle. Note that the first function depends only on gamma, sigma__v, and sigma__d while the second function only on sigma__v and sigma__d3.

Precise specification: I want to build a piecewise function that finds the parameter ranges such that (a) f_1 > f_2, (b) f_1 < f_2, and (c) f_1 = f_2. I need:

  1. To exclude the trivial solutions param = param and param > 0 (e.g., sigma__v = sigma__v and sigma__v > 0 and the same for the other 3 params)
  2. To express all the other solutions in the most meaningful way, e.g., perhaps it is simplest/most compact to express param1 > ...combination of the other 3 params... rather than param2 > ...combination of the other 3 params... (which of the 4 variants is simplest?), or perhaps it's simpler and more meaningful to have products of params on both sides of the inequality, e.g., param1*param2 > param3*param4 or similar.
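To make the setup concrete, here is a minimal sketch of the kind of call I have in mind (f__1 and f__2 below are hypothetical stand-ins with the stated dependence structure, not my real functions; note that plain gamma is Euler's constant in Maple, so I write gma for my gamma):

```
# Hypothetical stand-ins: f__1 depends on gma, sigma__v, sigma__d;
# f__2 depends only on sigma__v, sigma__d3. All parameters positive.
restart;
assume(gma > 0, sigma__v > 0, sigma__d > 0, sigma__d3 > 0);
f__1 := gma*sigma__d^2/(8*sigma__v^2):
f__2 := sigma__d3^2/(4*sigma__v^2):
# Solving for one chosen parameter makes the answer read as
# "param <inequality> combination of the other parameters",
# which avoids the trivial param = param branches:
solve(f__1 > f__2, [sigma__d3]);
solve(f__1 = f__2, [sigma__d3]);
```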

My failed attempt: 

complicated_comparison.mw

Thanks!

@mmcdara Please check the following file. What am I not seeing about the f_1 and f_3 comparison? Note that sigma__d is different from sigma__d3.

restart

beta = (5*sigma__d^2)/(8*sigma__v^2):
f__1 := rhs(%);
alpha = sqrt(5)*sigma__d/(8*sigma__v):
f__2 := rhs(%);
beta__3 = (sigma__d3^2)/(4*sigma__v^2):
f__3 := rhs(%);

(5/8)*sigma__d^2/sigma__v^2

(1/8)*5^(1/2)*sigma__d/sigma__v

(1/4)*sigma__d3^2/sigma__v^2

(1)

a__1 := (solve([f__1 > f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:
b__1 := (solve([f__1 < f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:
c__1 := (solve([f__1 = f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:

piecewise(
  remove(has, a__1, 0)[], ('f__1' > 'f__2'),
  remove(has, b__1, 0)[], ('f__1' < 'f__2'),
  remove(has, c__1, 0)[], ('f__1' = 'f__2')
)

piecewise(sigma__v < sigma__d*sqrt(5), f__2 < f__1, sigma__d*sqrt(5) < sigma__v, f__1 < f__2, sigma__v = sigma__d*sqrt(5), f__1 = f__2)

(2)

a__2 := (solve([f__1 > f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
b__2 := (solve([f__1 < f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
c__2 := (solve([f__1 = f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:

piecewise(
  remove(has, a__2, 0)[], ('f__1' > 'f__3'),
  remove(has, b__2, 0)[], ('f__1' < 'f__3'),
  remove(has, c__2, 0)[], ('f__1' = 'f__3')
)

Error, extra argument required to apply `has` predicate
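For what it's worth, a hedged guess at where this error comes from (my own diagnosis, not confirmed): if one of the three solve() calls returns the empty list [] under these assumptions, then e.g. a__2 := ([])[] is the empty sequence NULL, and remove(has, a__2, 0) collapses to remove(has, 0), so has is applied with a single argument, hence "extra argument required". A possible guard:

```
# Check the raw solve() result before unpacking it with []:
raw := solve([f__1 > f__3, sigma__v > 0], [sigma__v])
       assuming sigma__d > 0, sigma__d3 > 0;
if raw = [] then
    print("no solution found under these assumptions");
else
    a__2 := raw[];
    print(remove(has, a__2, 0)[]);
end if;
```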

a__3 := (solve([f__2 > f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
b__3 := (solve([f__2 < f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
c__3 := (solve([f__2 = f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:

piecewise(
  remove(has, a__3, 0)[], ('f__2' > 'f__3'),
  remove(has, b__3, 0)[], ('f__2' < 'f__3'),
  remove(has, c__3, 0)[], ('f__2' = 'f__3')
)

piecewise(2*sigma__d3^2*sqrt(5)/(5*sigma__d) < sigma__v, f__3 < f__2, sigma__v < 2*sigma__d3^2*sqrt(5)/(5*sigma__d), f__2 < f__3, sigma__v = 2*sigma__d3^2*sqrt(5)/(5*sigma__d), f__2 = f__3)

(3)


Download comparisons.mw

@mmcdara thanks. I will get back to you if I have any issue with adapting your script to other f_1 and f_2. 

You write "there is no such thing in the worksheet". I know it's quite odd (I never encountered this before), but the blur is on the plot in the actual worksheet. Here's a screenshot; my plot looks exactly like this in the worksheet:

(This is a minor concern but I was curious about it.)

@mmcdara thanks a lot! Both the plotting approach and the piecewise approach are useful. I think this should be enough. I will run this for the multitude of comparisons I want to do, and I hope I can get back to you if I encounter any issues with other f_1 and f_2.

When you write "For a more complex case than yours" in the first script, you mean more convoluted forms of f_1 and f_2 but still depending on just two variables, right? If I have, let's say, three variables, then I can use add_on.mw, right?

A minor follow-up: why is the plot significantly blurred (legend, numbers on axes, axis titles, inequalities on the plot, and even the y=x blue line)?

@dharr it runs smoothly now. Thanks!

Explore(plot) is indeed a nice way to pin down good approximations.

@dharr what am I doing wrong? I was trying to play with the Explore command to replicate your a and b: MaPal93approx(3).mw

@dharr that's such a simple expression and so accurate!

I managed to get 2.6% for (f(0) - f(infinity))*exp(a*x)*(1 + c_1*x + c_2*x^2) + f(infinity) if I interpolate the quadratic polynomial at the two roots of the Laguerre polynomial, but yours is truly impressive.

What's the rationale for abandoning the P(x)? Replicating P(x)'s role of "submission" to the decaying exponential at infinity, while introducing a denominator as in the conventionally more accurate rational functions? How did you come up with the -1/8 and the 5/6?
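For reference, here is how I built my 2.6% version mentioned above, sketched with placeholder values (ftarget, f0, finf, and a below are illustrative stand-ins, not the actual objects from the thread):

```
# Interpolation nodes: the two roots of the degree-2 Laguerre
# polynomial, i.e., 2 - sqrt(2) and 2 + sqrt(2).
restart;
x1, x2 := solve(LaguerreL(2, x) = 0, x);
f0, finf, a := 1, 0, -1:          # placeholder limits and decay rate
ftarget := x -> exp(-x)/(1 + x):  # hypothetical target function
form := (f0 - finf)*exp(a*x)*(1 + c1*x + c2*x^2) + finf:
# Choose c1, c2 so the form matches the target at the two nodes:
sol := solve({eval(form, x = x1) = ftarget(x1),
              eval(form, x = x2) = ftarget(x2)}, {c1, c2});
fapprox := eval(form, sol);
```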

@dharr My objective, as you guessed, was indeed (1). I was getting hooked after I learned more and more about the different approaches from you and acer. Sorry for not making that explicit enough.

I have been seeking a simple and interpretable approximation since the very beginning, so (1) is more aligned to my goal. However, I think it was still interesting to maximize the accuracy of the approximation on the shorter scale, but that wouldn't be my primary concern.

I really appreciate all the details!

@dharr thanks. Your efforts, comprehensiveness, and explanations deserve best answer.

"One possibility is to use roots of Laguerre polynomials, which are spread out in this way, and are used in Gauss-Laguerre quadrature for functions in a semi-infinite range; see the end of the file." Do you mean that if I use those 5 specific points from your last command (plus my derivative at 0) as interpolation points for our manually constructed approximation function together with DS, even a P(x) of degree 5 would do well? From your file, my randomly chosen interpolation points seem very badly chosen...

Moreover, I found that in the range of interest your manually constructed approximation function performs better than @acer's polynomial approximation, while @acer's rational polynomial approximation performs best by far (virtually zero error/oscillations for that range?). Is this justified? approximations.mw

@dharr great! Sometimes fsolve() doesn't go through for smaller-degree polynomials: Approx_new(1)_error.mw

@dharr thanks for the explanations.

  1. Yes, after a second read I realize those references are useful for data fitting, which I don't need to care about for now.
  2. But the command plot((fapprox - f(x))/f(x)) returns another plot? How do I turn that into numbers? It would be interesting to have the average error (in %) for the whole curve, and perhaps also the min and max error with the associated x values at which these occur? Just to have a measure of the interpolation accuracy beyond the visuals... or what else do you suggest?
  3. I must be doing something wrong to find the decay coefficient a: Approx_new.mw 
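On point 2, the kind of numbers I have in mind could be computed like this (f and fapprox below are placeholders; the real ones are in my worksheets):

```
# Turn the visual error check into numbers: max and average relative error.
restart;
f := x -> exp(-x)/(1 + x):           # placeholder "exact" function
fapprox := x -> exp(-x)*(1 - x/2):   # placeholder approximation
relerr := x -> abs(fapprox(x) - f(x))/abs(f(x)):
# Worst-case relative error on 0..10 and the x where it occurs:
Optimization:-Maximize(relerr(x), x = 0 .. 10);
# Average relative error over the same range:
evalf(Int(relerr(x), x = 0 .. 10))/10;
```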

Thanks

@dharr thanks for giving more context. I like the idea of trying to "hack things that work well at both ends". Not only could it lead to simple approximate expressions, but it should also help with interpretability, which is important to me.

You write "but you interpolated the polynomial not the overall function fapprox.". Yes, my mistake.

Since you mention fitting vs interpolation, and since simplicity of the function is important to me, which approach do you recommend? If I understood correctly, the paper shows that the beautiful and simple Eq. (13) can replace Eq. (2) well for both low and high frequencies. I read "complex nonlinear least squares fitting was done using Maple’s NLPSolve routine using the nonlinearsimplex option [15], with a custom calling program [16] that also derives the standard errors in the usual way, i.e., from the values of the derivatives of the impedance with respect to the parameters at the minimum [17]."

  1. Should I consider looking into [15,16,17] to try to find a similarly accurate and simple approximation for my function? Would that be challenging yet worthwhile? Would you help?
  2. You also mention "relative errors (with respect to Eq. (2))" and "systematic error in the parameters can be estimated by individually varying the parameters to find the minimum in the residual sum of squares". I think it could be interesting to quantify the errors for my approximation as well.
  3. Talking about interpolation instead, you mention "Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values" and "c1*x term will mess up the derivative at zero". Which replacement term would preserve the derivative at 0?
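On point 3, here is my own check (not dharr's answer) of why a c1*x term breaks the derivative at zero while a c1*x^2 term does not:

```
# Multiply a generic f(x) by (1 + c1*x) vs (1 + c1*x^2) and compare
# the derivatives at x = 0:
restart;
g1 := f(x)*(1 + c1*x):
g2 := f(x)*(1 + c1*x^2):
eval(diff(g1, x), x = 0);   # D(f)(0) + c1*f(0): derivative is shifted
eval(diff(g2, x), x = 0);   # D(f)(0): value and derivative preserved
```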

Thanks

@acer both approximations look nice. I confirm that the rational polynomial, while stranger-looking, performs slightly better: approx_ac2_moreplots.mw

@dharr thanks.

3. is indeed true!

Is this what you meant in 4.? Approx.mw (which could perhaps be implemented more concisely using loops?)
