acer


MaplePrimes Activity


These are replies submitted by acer

@mmcdara You may wish to measure the difference in time[real]() readings, using an explicit loop where each time through the loop you forget that procedure (which would have to be a module export, or else have kernelopts(opaquemodules=false) set so that forget can be called on it as a module local).
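Here is a rough sketch of that kind of loop (hedged: `work` is a made-up stand-in for the actual procedure under test, and the sizes are arbitrary):

restart;
work := proc(n) option remember; add(evalf(sin(k)), k=1..n); end proc:

N  := 100:
t0 := time[real]():
for i to N do
    work(10^4);
    forget(work);      # clear the remember table so each pass recomputes
end do:
avg_real := (time[real]() - t0)/N;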

 

@mmcdara It might be interesting to compare Proc2 with iterations=100 alongside Proc1 with a loop from 1 to 100.

And you should probably extract the "real time" from Usage, rather than the "cpu time", and ensure your machine is not otherwise under significant load.

But the results still might vary. And neither may be a clean representation of the arithmetic average of the running time. One reason for that is memoization, where some intermediate results of the computations are cached/remembered. It's not always possible to `forget` all of those between iterations.

There is also the possibility that garbage collection is being triggered differently between your two approaches, and it could be skewing either or both of the timing results. You could try to pare off any timing contribution from the garbage collector by using the "real gc time" value (which is not normally printed by Usage) and subtracting it from the "real time" result. But in modern Maple it is tricky to force a call to gc() to take effect right away.

And so it becomes very difficult to distinguish meaningfully between garbage-collection overhead that occurs during a computation and that which happens afterwards, when gc is triggered on remember tables that had `option remember, system`, in order to clean up garbage accrued during the measured calculations.

Measuring the effective average time for a short computation is thus difficult to do properly. And it's not even always clear what "properly" means, because of the gc mess.

A practical approach may make sense: the timing performance that matters should be measured in a way that matches the expected, eventual use. If a short computation is to be done just once then that is all you can properly measure. You can only properly measure the timing of many short computations (even repeats) by qualifying its meaning to correspond to an equivalent usage scenario. E.g., if the eventual end-usage includes the case of many short computations, including memory management, then it is sensible to measure the timing of all of that together.

@anton_dys Yes,  I happen to know that someone is working on interactive plots and animations in the new interactive PlotBuilder.

I think it reasonably likely that the old Maplets-based interactive plot builder will not be removed as an accessible stand-alone command (ie. plots:-interactive) merely on the grounds that the new Embedded-Components-based PlotBuilder might get all that functionality. Maplesoft has a long history of not removing old commands, even when deprecated or superseded. Examples include LinearAlgebra and linalg, Statistics and stats, NumberTheory and numtheory, etc, where the older packages still exist in the product.

But the older Maplets-based interactive plot builder may only be available in user-interfaces that support Maplets and Java popups. So it's not available in the MaplePlayer, or in interactive Apps in the MapleCloud. But those interfaces have growing support for Embedded Components.

By the way, have you used the new Embedded-Components-based interactive plot builder in the right-side panel? I haven't seen comments about it on this forum.

@sand15 

In general one could collect both the time() and the time[real]() data, ie the "cpu time" and the "real time" respectively.

The "cpu time" as reported by Usage is an accumulation from all kernel threads that might be used during your calculation. But if there is successful parallelism then that "cpu time" can include a sum of timings of subcalculuations that may have overlapped temporally. So under successful kernel-level parallelism "cpu time" can be significantly and deceptively larger than the wall-clock duration of the computation. (By "wall-clock" I mean the time as measured by your kitchen clock or wristwatch.)  Your manual measurements obtained as the difference of time() calls is this cpu time.

The "real time" as reported by Usage is the wall-clock timing. It can be larger than the "cpu time" if your machine is loaded by other running programs or services.  On an otherwise unloaded machine the "real time" is usually the one that matters. Manual measurements of the difference of time[real]() calls can mirror this real time.

@anton_dys 

You could try out this attachment.

oldPBmenu.mw

@nm My previous comment also contained a second attachment, with another way, using just indets and no subs. 

Your task starts off with the requirement that y is the dependent variable and x is the independent variable. But dsolve will approach it from another direction -- examining all the derivatives and the function calls within them, so as to infer the class of the problem and sort out the dependencies. I think it highly likely that it uses indets for a significant part of that (or some utility procedures which in turn use indets). But its validation will differ from yours, because the requirements do.

@nm Does this do what you want?

restart;

F:=ee->`if`(indets(subs(y(x)=__y(x),ee),
                   {identical(y),'specfunc'(y)})={},
            "OK","not OK"):

expr1:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x);

F(expr1);
                              "OK"

expr2:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x)
       +y^2;

F(expr2);
                            "not OK"

expr3:=y(x)^2+diff(y(x),x,x)+g(y(x))+diff(y(x),x)^2+Int(f(x),x)
       +diff(y(x,z),x,x)+g(y(z))+diff(y(z),z)^2+Int(f(z),z);

F(expr3);
                            "not OK"

expr4:=y+y(x)+y(x)^2;

F(expr4);
                            "not OK"

expr5:=y(x)+y(x)^2;

F(expr5);
                              "OK"

expr6:=sin(y)*cos(y(x))*sin(y(x)^2);

F(expr6);
                            "not OK"

expr7:=cos(y(x))*sin(y(x)^2);

F(expr7);
                              "OK"

expr8:=sin(y(z))*cos(y(x))*sin(y(x)^2);

F(expr8);
                            "not OK"

Download indets_subs.mw

Or you might try either F or G here.

indets_subs2.mw

@Fabio92 This approach only works if both worksheets are opened and executed.

That means that -- every time he wants to work in any worksheet that doesn't define/construct the modules and procedures -- he first also has to open the defining worksheet and execute it in parallel.

That's quite a lot of ongoing effort.

@nm I didn't see before that the plain `y` was unwanted, as my eye caught a call y() in one of your later examples. Adjusting the type used by indets can accommodate that easily. (I'm away from my computer but will add it later.)

Using `has` to attempt this kind of thing is not The Way, IMO. The subsequent predicate used in remove/select ends up having to do the heavy lifting, and then either needs to utilize indets or be some complicated, fragile mess that simulates indets.

@rahinui I'm glad if it helps.

The conditional stuff with `tools/genglobal/name_counter` is just there so that the first substitution is done with _F1 or "higher", rather than with _F or _F0. I just thought it looked more sensible that way.

The heavy lifting is done by `tools/genglobal`, which is quite commonly used in system Library commands' code for the purpose of creating new, unassigned global names based upon some given name-stem.
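For instance (hedged: `tools/genglobal` is an internal, largely undocumented routine, so the exact names returned may vary by version and session state):

`tools/genglobal`('_F');    # returns a fresh unassigned global, e.g. _F or _F0
`tools/genglobal`('_F');    # later calls continue, e.g. _F0, _F1, _F2, ...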

 

@Carl Love Yes, thanks, it's clear what you meant now.

@Kitonum 

restart;
Kx:=tau->(10+2*cos(5*tau)*cos(3*tau))*exp(-tau^2):

temp:=(D@@2)(r->-Kx(r));

    temp := r -> -(-68*cos(5*r)*cos(3*r) + 60*sin(5*r)*sin(3*r))*exp(-r^2)
                 + 4*(-10*sin(5*r)*cos(3*r) - 6*cos(5*r)*sin(3*r))*r*exp(-r^2)
                 + 2*(10 + 2*cos(5*r)*cos(3*r))*exp(-r^2)
                 - 4*(10 + 2*cos(5*r)*cos(3*r))*r^2*exp(-r^2)

temp(0);                                          
                                              92

@Carl Love Sorry, I don't understand what you're trying to convey by that last sentence, "Since the display is not done by side effect, it can be blocked by error."

@Kitonum I don't see how that approach would be robust in cases where the terms in expr are a mix of valid and invalid calls to y.

For example, it looks like the call to y(z,x) here should be noticed as not OK.

expr := diff(y(x),x)*diff(y(z,x),x) + diff(y(x),x,x):

candidates:=convert(select(has,expr,y),list):

lprint(candidates);
[(diff(y(x), x))*(diff(y(z, x), x)), diff(diff(y(x), x), x)]

map(t->`if`(has(t,y(x)),"OK","not ok"), candidates);

                          ["OK", "OK"]

Have you tried it using the inert Int command instead of the lowercase active int command?
