acer

32587 Reputation

29 Badges

20 years, 38 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

The printed message about nonconvergence makes the most sense outside and after the loop, which is why I moved it there. But a test on abs(x[k]-x[k-1]) is not out of place inside the loop, for a different purpose: it would be quite sensible to put such a check inside the loop so that an early return can be made if convergence occurs while k is still less than N. You may not want to continue with the full N iterations once the tolerance has already been met.
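For instance, an in-loop check could look like this sketch (a hypothetical variant, NR2early, of the NR2 procedure discussed below; the name and the exact tolerance handling are my own):

```maple
# Sketch of Newton iteration with an early return when the tolerance is met.
NR2early := proc(f, x0, N::posint, eps)
  local x, k;
  x[0] := x0;
  for k to N do
    x[k] := evalf( x[k-1] - f(x[k-1])/D(f)(x[k-1]) );
    if abs(x[k] - x[k-1]) < eps then
      return x[k];   # converged early: skip the remaining iterations
    end if;
  end do;
  printf("Convergence has not been achieved after %a iterations!\n", N);
end proc:
```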

acer

You need to make sure that x[k] and x[k-1] have been assigned numeric values at every point where they are compared, for the value of k at that moment. Also, stick with either a scheme of indexed names (x[k], x[k-1], etc.) or a scheme of plain names (x, xnew, xold, etc.). You were mixing plain x with indexed x[k], which cannot work.

Also, you indicated that you only wanted to print the message if convergence failed for all k from 1 to N, so put it after the loop and not inside the loop.

> NR2:=proc(f::mathfunc,x0::complex,N::posint,eps)
> local x,k:
>   x[0] := x0:
>   for k to N do:
>     x[k] := evalf( x[k-1]-f(x[k-1])/D(f)(x[k-1]) );
>   end do;
>   if abs(x[N]-x[N-1]) >= eps then
>     printf("Convergence has not been achieved after %a iterations!\n",N);
>   else
>     return x[N];
>   end if;
> end proc:
>
> f:= x-> x^5-1:
>
> NR2(f,0.6+I*0.6,10,0.00001);
Convergence has not been achieved after 10 iterations!
> NR2(f,0.2+I*0.6,10,0.00001);
                         0.3090169944 + 0.9510565163 I

Side tip: when a Maple for-loop finishes, its counter ends up one step beyond the last value used inside the loop. For example, a for-loop counting k from 1 to 10 will have value 11 after it has finished. This matters if you plan to refer to x[k] after the loop. Notice that I referred to x[N] after the loop; I could also have referred to x[k-1] (which is x[10]), but not to x[k] (which would be x[11], and unassigned).
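A minimal illustration of that counter behaviour:

```maple
for k to 10 do end do:   # empty loop body, counting k from 1 to 10
k;                       # 11, one step past the last used value
```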

Lastly, Robert's suggestion to use evalf was so that a large (potentially huge) symbolic expression does not accumulate during the iterative process. Using evalf can cure that, but only if it is applied before assigning to x or x[k]. You had applied it only as a separate task afterwards; I put it right in the iterative step.

acer


You could add the option below to your call to DEplot. I used the layout palette to obtain the typesetting incantation for x-dot.

labels=[t,typeset(`#mover(mi("x"),mrow(mo("⁢"),mo(".")))`)]
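For context, here is how that option might sit inside a full call. The ODE and initial condition here are made up purely for illustration:

```maple
with(DEtools):
# Hypothetical ODE and initial condition, just to show the labels option in place.
DEplot( diff(x(t),t) = -x(t), x(t), t = 0..5, [[x(0) = 1]],
        labels = [t, typeset(`#mover(mi("x"),mrow(mo("⁢"),mo(".")))`)] );
```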

acer


I find problems like this can be tough to do with Maple.

Ferr:=-10*(.7845815999*u2+3.141592654)*sinh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*cos(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*u2/((100+fout)^(1/2)*(.1998118316*u2+1))+200*(.7845815999*u2+3.141592654)^2*sinh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*sin(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))/((100+fout)*(.1998118316*u2+1)^2)+10*(.7845815999*u2+3.141592654)*cosh(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))*u2*sin(10*(.7845815999*u2+3.141592654)/((100+fout)^(1/2)*(.1998118316*u2+1)))/((100+fout)^(1/2)*(.1998118316*u2+1)):
plots:-implicitplot(Ferr,u2=0..50,fout=-1..1,numpoints=30000, gridlines=true);

Judging from the graph, these look right for the maximum and minimum points:

> Optimization:-Maximize(fout,{Ferr=0},
>        initialpoint=[u2=0,fout=0],u2=5..50,fout=-1..1);

[0.00826786487008719304,
    [u2 = 13.6048493803282895, fout = 0.00826786487008719304]]

> Optimization:-Minimize(fout,{Ferr=0},
>        initialpoint=[u2=0,fout=0],u2=0..10,fout=-1..1);

[-0.0594716927922686461,
    [u2 = 1.19113556326925552, fout = -0.0594716927922686461]]

Maple seemed to need a (feasible?) initial point in order to proceed above.

acer


The term least squares is used to refer to a method for solving various different problems. Roughly, it means minimizing a sum of squares (usually of differences).

In this case, you indicated that you wanted to use it as a method for finding a line of best fit. The two routines that I showed can both serve this purpose of fitting a line to data. The results they returned are both equations of a line, i.e. p*t+q, which is the form you requested. (I couldn't make it p*x+q because you had already assigned to the name x.)

But there is also, for example, least squares as a means of solving an overdetermined system of linear equations. Indeed, this can be the way that the abovementioned fitting computation can be done, behind the scenes. If you really wanted to, you could figure out how to use your data to construct such an overdetermined linear system, and then call Optimization:-LSSolve on it, and then re-interpret the Vector result to get the equation of the line. I guessed that you'd prefer having one of those two fitting routines do all that bookkeeping for you.
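As a rough sketch of that "by hand" route, assuming LSSolve's Matrix form LSSolve([c, A]), and with made-up data points and names purely for illustration:

```maple
# Hypothetical data (t_i, y_i); fit the line p*t + q by least squares.
T := Vector([1., 2., 3., 4.]):
Y := Vector([2.1, 3.9, 6.2, 7.8]):
# Each data point gives one row p*T[i] + q = Y[i] of an overdetermined system.
A := Matrix([seq([T[i], 1], i = 1..4)]):
sol := Optimization:-LSSolve([Y, A]):   # minimizes ||A.<p,q> - Y||^2
p, q := sol[2][1], sol[2][2]:
p*t + q;   # the equation of the line, re-interpreted from the Vector result
```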

acer


As already mentioned above, the lowess option of ScatterPlot does a form of weighted least squares. And a Vector of weights may be provided to NonlinearFit. It may be useful to think about the differences of these two approaches. An interesting issue is the possible availability of the fitted function and all its computed parameter values.

The way to supply weights to NonlinearFit is clear from its help-page, which describes the weights option for this. I don't quite understand how those weights are then used, since weights don't seem to be an option for Optimization:-LSSolve. I understand that in weighted least squares problems with data errors it is usual for such weights to be taken from the variance of the data, but I don't know exactly how the Maple solver works here. What I suspect is that the xerrors and yerrors optional parameters of ScatterPlot may be used to compute weights that are then passed on to NonlinearFit. I haven't confirmed this.
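In outline, supplying weights might look like this sketch; the data, the linear model, and the weight values here are all invented for illustration:

```maple
with(Statistics):
X := Vector([1., 2., 3., 4., 5.]):
Y := Vector([1.2, 1.9, 3.1, 3.8, 5.3]):
# Hypothetical weights, e.g. reciprocals of the estimated variances.
W := Vector([1., 1., 0.5, 1., 0.25]):
NonlinearFit(a*x + b, X, Y, x, weights = W);
```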

It's not clear from the ScatterPlot help-page exactly how the weights for lowess smoothing are chosen. Its three options related to lowess smoothing are degree, robust, and lowess. Nor is it clear from that help-page in what way (if any) the xerrors or yerrors options tie into weighting; I suspect that they don't relate at all. And then there is the question of whether a formulaic fitting result is wanted, since the lowess method does not make one available. The lowess method uses a series of weighted least squares fits at different points, where the weights modify the influence of nearby neighboring points (rather than correcting for measurement uncertainty directly). I now believe that this is not what the original poster wants.

So here's a question. When passing xerrors and yerrors data to ScatterPlot, when supplied with the fit option, is estimated variance of that extra data used to produce the weights which are then passed along to NonlinearFit? Tracing the Maple computation in the debugger might show whether this is true. If it is, then it may be possible to extract the method for doing it "by hand". In such a way, it may be possible to extract the parameter values that result from the nonlinear fit.

I know that, when calling ScatterPlot with the fit option, Statistics:-NonlinearFit is called, and that Optimization:-LSSolve is also called. It remains to figure out exactly how xerrors and yerrors are used, and whether they modify the above to produce weights for NonlinearFit.

acer


I see what you are after, now. As far as I know the x- and y-errors are not used in the fitting calculation, even when using the lowess (weighted least squares) smoothing. But it seems (now, to me) that you are after a statistical (or stochastic) model, and not the sort of deterministic formulaic model that NonlinearFit gives.

The sort of regression analysis of time series data that you describe (and which was hinted at in the image URL you posted) isn't implemented directly in Maple as far as I know. If you have access to a numeric library like NAG then you might be able to get what you are after using a GARCH process or similar from their g13 routines.

Do you have a URL for that Origin software? I am curious about what they might document, for any routine of theirs which does what you describe.

acer


No. evalf(3/4) will give as many zeros as makes sense at the current Digits setting.

I suspect that your fundamental difficulty lies in thinking that 0.75 is somehow the best (exact, judging by your followup) floating-point representation of 3/4 the exact rational. What I tried to explain earlier is that 0.75 is merely one of many possible representations of an approximation to an exact value. It is not, in itself, exact.

What I tried to argue was that in some sense the number of trailing zeros is an indicator of how accurately the system knows the floating-point value. I'm not actually saying that this is why Maple behaves this way. (It isn't, really. That's why the explanation breaks down for 3/4. as your first example. To get such careful accuracy and error handling one would have to go to a special package such as the two I mentioned above.) But this behaviour for conversion (approximation) of exact rationals via evalf can be somewhat useful, because it has a somewhat natural interpretation in terms of accuracy.

The idea is that when you write 0.75000 you are claiming something about the accuracy of the approximation -- namely that it is accurate to within 0.000005 (or half an ulp). Similarly, writing 0.7500000000 makes an even stronger claim about the accuracy. So, if you start off with the exact value 3/4, how many zeros should its floating-point approximation get? There's not much sense in giving it more zeros than is justified by the current working precision, so Maple gives a number of trailing zeros that reflects the current value of Digits (depending, of course, on how many nonzero leading digits precede them).
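That dependence on Digits is easy to see directly:

```maple
evalf(3/4);                # 0.7500000000 at the default Digits = 10
Digits := 5:  evalf(3/4);  # 0.75000
Digits := 20: evalf(3/4);  # 0.75000000000000000000
```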

acer

