acer

32333 Reputation

29 Badges

19 years, 323 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@BarKop That option is spelled summarize.

You can also get programmatic access to additional details such as the "rsquared" statistic, etc, using the solutionmodule output.

restart;

Rok := Vector([2013, 2014, 2015, 2016, 2017, 2018], datatype = float):
TrzbyCelkemEmco := Vector([1028155, 1134120, 1004758, 929584, 995716, 1152042], datatype = float):

Sol := Statistics:-PolynomialFit(3, Rok, TrzbyCelkemEmco, x, svdtolerance=1e-20,
                               summarize, output=solutionmodule):

Summary:
----------------
Model: -.14312624e15+.21307569e12*x-.10573703e9*x^2+17490.365*x^3
----------------
Coefficients:
              Estimate            Std. Error            t-value  P(>|t|)
Parameter 1   -143126242466195.0625    46387428274554.6406     -3.0855   0.0909
Parameter 2    213075694358.4329       69046084657.8044         3.0860   0.0909
Parameter 3   -105737030.9423          34257558.0979           -3.0865   0.0909
Parameter 4    17490.3649              5665.6838                3.0871   0.0909
----------------
R-squared: 0.8874, Adjusted R-squared: 0.7185

Sol:-Results("rsquared");

HFloat(0.8873925487823793)

Sol:-Results("rsquaredadjusted");

HFloat(0.7184813719559482)

W := Sol:-Results("leastsquaresfunction");
 

-HFloat(1.4312624246619506e14)+HFloat(2.1307569435843292e11)*x-HFloat(1.0573703094228968e8)*x^2+HFloat(17490.364890056557)*x^3

# Same as earlier, using W
#p1:=plot(W,x=min(Rok)..max(Rok),color=blue):
#p2:=plot(Rok,TrzbyCelkemEmco,style=point,symbol=solidcircle,symbolsize=15):
#plots:-display(p1,p2,view=min(TrzbyCelkemEmco)..max(TrzbyCelkemEmco),size=[500,400]);

Download svdfit_solmod.mw

@Joe Riel I thought that you were describing incorrect behavior, i.e. a bug. The example that you've shown in your last Comment contains correct output, unless I'm missing some point.

But the original behavior shown -- with the Typesetting stuff -- was clearly incorrect and illustrates a bug.

If the OP was trying to ask something about the output from statements within do-loops then it could have been stated more clearly (because it is not specific to rand, etc). In that case, sure, terminating the end do with a colon, or setting printlevel, etc, could be appropriate.
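For example, a minimal sketch of both of those alternatives (the loop below is hypothetical, just in the spirit of the original rand example):

restart;
die := rand(1 .. 6):          # hypothetical stand-in for the OP's generator
for i from 1 to 3 do
  die();
end do:                       # colon on end do: results inside the loop are not displayed
# Alternatively, keep `end do;` and set  printlevel := 0  beforehand,
# which suppresses display of results from statements executed within loops.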

This is interesting, and will require a careful read. Thank you.

I have been reading posts over at the Wolfram Community site, for the past few weeks. In particular:
 - Robert Rimmer's articles, including here (updated here), and here (related to here).
 - Robert Nachbar's article here on epidemiological models.
 - Vitaliy Kaurov's compendium article here.

[edit] An interesting and more recent post (by the aforementioned Robert Rimmer) on a logistic growth model is here.

@Joe Riel Hi Joe, please could you be more specific about what's still added, and with examples (regressions, or not)? Thanks.

@Christopher2222 I don't understand what you mean, once again. Why do you post comments about issues without full accompanying code?!

The spline fit grows large for values above the largest entry in xx (i.e. once the animation parameter y exceeds 22). Extrapolation isn't a great idea.

restart;
yy:=[3,3,3,4,6,7,8,6,7,13,8,9,6,7,12,14,16,34,42,23,32,45]:
xx:=[seq(i, i = 1 .. nops(yy))]:
s:=CurveFitting:-Spline(xx, yy, v):
plots:-animate(plot3d,[[s, th, v], v=0..y, th=0..2*Pi,
                       coords=cylindrical, style=surface],
               y=0..25);
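
As a quick follow-on (continuing with s from the lines just above), the extrapolation can be inspected directly:

eval(s, v = 25);       # uses the spline's last cubic piece, beyond the data at v = 22
plot(s, v = 1 .. 30);  # shows how the fit behaves outside the range of xx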

@Christopher2222 Without resorting to weird and arcane methods, I'd expect it to compare well, yes.

What I showed computed on my machine about 13 times faster than rotate@SurfaceOfRevolution (as Tom originally had it), and about 8 times faster if rotate were skipped (since that's just an orientation thing).

Do you need it to construct fast or play fast? (Explore vs `animate`, say).

I might mention that I used the very same (pretty straightforward) method for both your examples.

ps. plottools:-rotate can be expensive, in time and memory. Use orientation or viewpoint instead, if possible.
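
A small sketch of that postscript, using a made-up surface rather than the examples above:

restart;
P := plot3d(x^2 + y^2, x = -1 .. 1, y = -1 .. 1):
# Changing the orientation only alters the initial viewing angles; the plot data is untouched.
plots:-display(P, orientation = [45, 60]);
# By contrast, plottools:-rotate transforms every grid point in the PLOT3D structure,
# which costs time and memory for large surfaces.
plots:-display(plottools:-rotate(P, Pi/2, [[0, 0, 0], [0, 0, 1]]));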

@Carl Love I was mistaken about a detail, yes, although the overall rationale is right. Thanks! It's not the float approximation to Pi per se that hobbles the expensive checking. Changing the coefficient 2 to 2.0 within the sqrt call inside P1 is adequate (and that's what was happening in Kitonum's suggestion to unapply the evalf'd expression to become Jhk).

For example (significantly faster, but not fastest),

P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2.0*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=(s,shk,R)-> evalf((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..2*R)):

CodeTools:-Usage(plot(Jhk(s,2.14,38), s=0..5)):
memory used=441.40MiB, alloc change=139.01MiB, cpu time=4.58s, real time=4.47s, gc time=293.50ms

Also quicker is altering the upper end-point of the range of integration slightly, i.e. r=0..(2-10^(-9))*R, with P1 left as is, even with no floats present and no evalf of the unapplied expression.

restart;
P1:=(r,R)->(2/Pi)*(arccos(r/(2*R))-(r/(2*R))*sqrt(1-(r/(2*R))^2)):
J0:=(r,shk)-> BesselJ(0, 2*Pi*r*shk):
Jhk:=(s,shk,R)-> ((1/s)*Int(P1(r,R)*J0(r,shk)*sin(2*Pi*r*s), r=0..(2-10^(-9))*R)):

CodeTools:-Usage(plot(Jhk(s,214/100,38), s=0..5));
memory used=447.00MiB, alloc change=133.01MiB, cpu time=4.62s, real time=4.50s, gc time=299.21ms

I am going to submit an SCR for a keyword option to disable expensive checking.

@rowlesmr The two main effects are:
1) It avoids calling J0 and P1 for each different numeric value of s generated by plot.
2) Since J0 and P1 are called before evalf, the symbolic Pi within the J0 call (within the unapply call) gets evaluated to a float, so 2*Pi becomes the approximation 6.28... . And that has a beneficial effect on how evalf(Int(...)) "wastes" time poking about the integrands looking for discontinuities.

If you remove the evalf but keep that outer unapply, then it can also take a long time.

Sorry that it's tricky to explain thoroughly. See my Answer for more technical comments.

@Christopher2222 Are you using 2D Input mode? Do you have the right-panel (aka context-panel) open, and if so does the problem recur if you keep it closed?

@ttonon Yes, you can define such procedures (f2, f3, or f[2], f[3], etc) and then generate equations whose terms call them.
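For instance, here is a small hypothetical sketch of that idea; the procedures f[2], f[3] and the two equations below are invented purely for illustration and are not meant to resemble your actual system:

restart;
f[2] := x -> x^2 - 3:
f[3] := x -> sin(x) - x/2:
# Equations whose terms call those procedures:
eqs := { f[2](a) + b = 1, f[3](b) - a = 0 }:
# For nonlinear systems, supplying ranges can help fsolve converge.
fsolve(eqs, {a = 2 .. 3, b = -5 .. 0});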

The limits (in principle) on how many variables and equations you can have are very large, and are not approached by what you've described so far. What may well happen, though, is that you run into practical limits.

As the number of terms in the equations (and how involved they are) grows, the working precision needed to avoid numeric round-off error may also grow. And if the equations are highly nonlinear, then the difficulty of generating starting points which converge to a solution may well also increase. These considerations may greatly affect how long it takes to find an acceptable solution. It may also happen that some of the equations' residuals only get close to zero (but never attain zero, or cross it if sign is accounted for, depending on how each is expressed), in which case it may be better treated as an optimization problem rather than a root-finding problem. It can get trickier still if you only want some of the equations' residuals to become very small but not others, or if they vary significantly in scale. These issues affect any numeric solver, not just those implemented in Maple.

That is why it's difficult to say more without good details of your equations. A smaller working example could help, which you might upload and attach here.

If it turns out that your examples are better approached as global optimization problems then it might be that the DirectSearch add-on package (free, in this case) would be one of the better ways to try and tackle it from within Maple.

What compression format do both have, if any? Perhaps it differs.

Show how far you've gotten with this homework question so far.

@Stretto You wrote, "It's hard to describe", while still not uploading a worksheet that reproduces your issue.

What have you been able to do so far with this homework question?

The Maple 2018 changes were for the univariate case, not multivariate systems.
