dharr

Dr. David Harrington

University of Victoria
Professor or university staff
Victoria, British Columbia, Canada

I am a retired professor of chemistry at the University of Victoria, BC, Canada. My research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

MaplePrimes Activity


These are replies submitted by dharr

If I comment out (with #) the line setting values for the various variables and then look at the output of the main loop, I notice:

rps1.mw

1. Theta' is being interpreted as a derivative with respect to x. You probably want a derivative with respect to eta, which you can get with declare(prime=eta).

2. You have multiplication before square brackets [] in two places, which Maple does not know how to interpret; if these are just for grouping, you should use ().

3. The derivative with respect to eta, taken m-2 times, has not worked - the "d" should appear upright, not in italics. Use the Calculus palette to form this, or use diff( ... , eta $ (m-2)).

4. You have both Theta and Theta[a] and Theta with other subscripts. If it has subscripts you should not use Theta without subscripts as well.

5. If you are intending Theta without subscripts to be a function of eta you need to write Theta(eta)  or use declare so that Maple does not think it is a constant when it is differentiating.

6. Use add instead of sum when just adding a finite number of terms.

Perhaps after fixing these, it will be clearer what you want to do, but at the moment I do not understand.
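As an illustrative sketch of points 1, 3, 5, and 6 (the names Theta, eta, and m come from your worksheet; the specific values here are made up):

```
restart;
with(PDEtools):
declare(Theta(eta), prime = eta):          # points 1 and 5: Theta' now means d Theta/d eta
m := 4:                                    # illustrative value
dTheta := diff(Theta(eta), eta $ (m - 2)); # point 3: the (m-2)-th derivative w.r.t. eta
s := add(a[i]*eta^i, i = 0 .. 3);          # point 6: add, not sum, for a finite sum
```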

In Windows, if you click on the plot, the controls appear.

@Anthrazit Here's the CompleteGraph version (in traveling salesman problems there are usually not roads between every combination of cities). If you want shortest paths between two vertices that don't have to visit all vertices, DijkstrasAlgorithm or BellmanFordAlgorithm can be used - both can find shortest paths from vertex 1 to all other vertices, and you can use an undirected graph (a path will not revisit an edge). If that is what you want, I can set it up.
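A minimal sketch of the Dijkstra approach (the graph and edge weights here are invented):

```
with(GraphTheory):
# small weighted undirected graph: each edge given as [{u, v}, weight]
G := Graph({[{1, 2}, 3], [{2, 3}, 1], [{1, 3}, 5]}):
DijkstrasAlgorithm(G, 1);      # shortest paths from vertex 1 to every other vertex
DijkstrasAlgorithm(G, 1, 3);   # just the shortest path from vertex 1 to vertex 3
```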

TravelingSalesmanCompleteGraph.mw

@C_R Yes, I forgot the abs; it works OK with it. I only used it without abs for a single "period" where I knew the sign, when I was playing around with it. I agree that the integral to infinity without abs is not relevant to @Mariusz Iwaniuk's integral, but I'm not sure whether or not the answer is wrong.

@Rouben Rostamian  PDEtools:-dchange gives the simpler result that you found, but comparing this with the IntegrationTools:-Change result shows that they are equivalent, so this is not a bug.

I1:=Int(sin(x^4)/(sqrt(x)+x^2),x=0..infinity);
PDEtools:-dchange(x=u^(1/4),I1);

dchange.mw

warnlevel = 0 works for me in Maple 2023.2.1
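For reference, warnlevel is an interface setting; a minimal check (the proc is just something that would normally trigger a warning):

```
interface(warnlevel = 0):         # suppress warnings
f := proc() y := 1; y end proc:   # would normally warn: `y` is implicitly declared local
interface(warnlevel = 3):         # restore the default level
```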

warnlevel.mw

@jalal To draw an arc representing the angle A in the triangle ABS, you can first find the cross product of the vectors AB and AS to use as a rotation axis. Then find the rotation matrix for rotation about this axis by an arbitrary angle t. Apply this to a shortened version of vector AB, and its components (offset by A) form a parametric curve for the arc with parameter t, where t runs from zero to the angle A.

triangle(ABS, [A, B, S]);
angleSAB := FindAngle(ABS, A);
vAB := Vector(coordinates(B) - coordinates(A));
vAS := Vector(coordinates(S) - coordinates(A));
normalA := LinearAlgebra:-CrossProduct(vAB, vAS);
arcBAS := convert(0.2*((Student:-LinearAlgebra:-RotationMatrix(t, normalA)) . vAB), list);
plotarcBAS := spacecurve(coordinates(A) + arcBAS, t = 0 .. angleSAB, color = red, thickness = 2);
display(draw(ABS), plotarcBAS);

The code is at the end of the document.

rotation.mw

@MaPal93 Not sure why that wasn't working. I added simplify and it worked.

 MaPal93approx(3).mw

[But now I can delete that line and it still works.]

@MaPal93 You did pretty well with the quadratic.

It's not easy to explain exactly how I came up with it, because I just played around with different things. I tried some things where P(x) was linear, playing with the interpolation point, and I tried some polynomials on a transformed x-axis, w = 1 - exp(-x), which runs from 0 to 1 and preserves derivatives at the origin - it looks close to linear but it was hard to improve. Most of these were below f(x). Then I tried some simple rational functions that preserve the slope at the origin. At some point I had something higher than f(x) and then added back the exp(), then simplified some more.

Once I had the general form (f0-finf)*exp(-a*x)/(1+b*x), I solved for b that gave the correct slope at the origin. That left only a as a parameter. I just used Explore on the plot and adjusted the slider for a. That looked good, so I played with the slider on the relative error plot - there is a range of a that works well, and 0.126 seemed roughly the best. That was close enough to 1/8, for which b was then 0.837, and 5/6 = 0.833 was close enough and (perhaps) nicer.

Simple and reasonably accurate

restart;

f := u -> RootOf(8*_Z^4 + 12*u*_Z^3 + (5*u^2 - 4)*_Z^2 - 4*u*_Z - u^2):
f0 := 1/sqrt(2):
finf := 1/sqrt(5):
Df0 := -1/4:

fapprox:=(f0-finf)*exp(-x/8)/(1+(5/6)*x)+finf; # 5/6 could be 0.837 for D(fapprox)(0) = -0.25002
evalf(eval(diff(fapprox,x),x=0));

((1/2)*2^(1/2)-(1/5)*5^(1/2))*exp(-(1/8)*x)/(1+(5/6)*x)+(1/5)*5^(1/2)

-.2490643028

plot([f(x),fapprox],x=0..20,0..0.8,color=[red,blue]);

Maximum relative error for 0 to infinity <2%

plot((fapprox-f(x))/f(x),x=0..20);


Download MaPal93approx.mw

@MaPal93 I remain confused about what you actually want:

1. a simple function (a few parameters) that gives good agreement for the function at 0 and infinity and for the derivative at 0, but is not necessarily that accurate.

2. an accurate approximation over a limited range.

3. an accurate approximation over the full range.

My objective was (1), because you asked for a 1-line function and you seemed to want to interpret it easily. That was also what I did in that paper. But now you are comparing as though you want (2), which you did ask for at some point and @acer gave a nice answer to. For (3) you can just use the function itself.

Since the comparison you provided is for (2), a fair comparison requires the same number of parameters, which is 8 here in each case (though maybe 9 for the rational function; I'm not sure). (That's more than I was originally thinking of for (1).) Now plot them out to 200 and you will see that mine is OK at infinity (by design), but has the oscillation. The second one is a polynomial and will become large at infinity. The rational function can go to a constant value at infinity if the degrees of the numerator and denominator polynomials are the same, and @acer chose [4,4], presumably for this reason. As @acer pointed out, the rational functions are usually better than the polynomials anyway, and you see that here.

The numapprox routines are focussed on accuracy over the whole range and they do well for that. They should also do well for derivatives at zero since they are based on some series expansions around that point. I'm assuming they are based on evaluating the function at evenly spaced points across the chosen interval or some criterion that treats all the interval evenly. If you get to choose the interpolation points then you can do better than if they are evenly spaced. In Gaussian quadrature, you optimize this, so (from memory) an (n/2)-degree polynomial with optimally chosen points can do as well as an n-degree polynomial at evenly spaced points. This is not quadrature, so I don't think the Laguerre points are optimal, but the basic idea is that they should spread out with more near the origin (where the function is changing) and fewer at large values (where it isn't changing much). But that again is assuming you want to approximate out to infinity. And then why not (3)? 
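For reference, a [4,4]-type rational fit comes from numapprox:-minimax; a sketch with a stand-in function and interval:

```
with(numapprox):
# degree-4 over degree-4 rational minimax approximation on 0..5
r := minimax(exp(-x), x = 0 .. 5, [4, 4], 1, 'maxerr'):
maxerr;   # the maximum absolute error achieved on the interval
```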
 

@MaPal93 Nonlinear equations can be tricky. DirectSearch, an external package from the Maple Application Centre, can solve this. You need some interpolation points closer to the origin, then spreading out with fewer later, where the function is featureless. One possibility is to use the roots of Laguerre polynomials, which are spread out in this way and are used in Gauss-Laguerre quadrature for functions on a semi-infinite range; see the end of the file. But that is just a guess.
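A sketch of generating such points (n = 8 is arbitrary):

```
with(orthopoly):
n := 8:
pts := sort([fsolve(L(n, x), x)]);  # the n real roots of the degree-n Laguerre polynomial
# the roots cluster near the origin and spread out at large x
```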

Approx_new_DS.mw

@MaPal93 That's a nice fit (0.2%) over that range. Df0 looks good. If you want more than the probe accuracy, you can use numapprox:-infnorm. You have oscillations outside the cutoff. If you want it to work over a longer range, you will have to spread the interpolation points out. High-degree polynomials can oscillate - you can likely remove the oscillations with a lower degree polynomial and still get reasonable accuracy. But maybe it is fine as it is.
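A sketch of the infnorm check (fapprox, f, and the range 0..20 stand in for your approximation, the exact function, and the fitted interval):

```
# maximum of |relative error| over the range; xmax returns where the maximum occurs
maxrelerr := numapprox:-infnorm((fapprox - f(x))/f(x), x = 0 .. 20, 'xmax');
xmax;
```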

Approx_new.mw

@MaPal93 

  1. Should I consider looking into [15,16,17] to try and find a similarly accurate and simple approximation for my function? Would that be challenging yet worthy? Would you help?
    Those refs are about fitting to experimental data and are not relevant. I developed that approximation by playing around using series and asympt and some intuition. In fact a referee asked me to explain how I had "derived it", but I was unable to answer that question and so only the vague description is in the paper. For your case the series/asympt expressions are complicated, so I don't think they will help; another approach will be required. Notice that the approximation was not accurate (up to 4%); the errors only had to be small compared to the experimental errors. I'll take a look to see if anything is obvious.
  2. You also mention "relative errors (with respect to Eq. (2))" and "systematic error in the parameters can be estimated by individually varying the parameters to find the minimum in the residual sum of squares". I think it could be interesting to quantify the errors for my approximation as well.
    The relative error is just from a plot of (fapprox-fexact)/fexact. The other errors are related to the fit to experimental data.
  3. Talking about interpolation instead, you mention "Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values" and "c1*x term will mess up the derivative at zero". Which replacement term would preserve the derivative at 0?​​
    The simplest would be to replace the Df0 exponential decay constant with a parameter a, differentiate the whole thing and set its value at zero equal to Df0 as another equation to be solved.
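A sketch of that replacement (a and b are the free parameters; f0, finf, and Df0 are the exact values at 0 and infinity and the slope at 0, as defined earlier):

```
fapprox := (f0 - finf)*exp(-a*x)/(1 + b*x) + finf:
slope_eq := eval(diff(fapprox, x), x = 0) = Df0:  # extra equation enforcing the slope at 0
solve(slope_eq, b);                               # b in terms of a (or add it to the fit equations)
```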

@Preben Alsholm Vote up. Nice analysis.
