MaplePrimes Activity


These are replies submitted by acer

@Markiyan Hirnyk I agree, it is easier to set up this kind of problem in Mma, and the surface is rendered more cleanly. Here it is in Maple, with some additional look & feel tweaks.

lts:=[x^2+x*(y+2*z)-1, y^2+y*(x+2*z)-3]:
eqs:=y*z*(x+y+2*z)^3-8:

plots:-implicitplot3d( min(eqs,-max(lts)),
                       x=-10..10, y=-10..10, z=-10..10, grid=[80,80,160],
                       style=surfacecontour, contours=14, orientation=[-62,54,0],
                       color=gold, lightmodel=light2, glossiness=0.7 );

I half expected the Std GUI to get bogged down at that grid resolution, but in 64bit Maple 18.01 for Windows the computation and rendering times were just a few seconds, and manual plot rotation stayed reasonably responsive for me. In Classic for 32bit Maple 16.02 the rendering took much longer but the manual rotation stayed fast.

@Carl Love As far as I know all other kernel threads (Grid child mservers excepted) stop while gc is working. That's why it can make sense to think of the gc real time as being a simple portion of the total real time for a computation.

It's great that gc can be done using multiple threads acting in parallel in Maple 18. But the number reported as `gc time` is the sum over all such gc threads (acting concurrently), so it can't sensibly be compared with other timing numbers. That's similar to how `cpu time` is the sum of the computation time over all (possibly concurrent) threads, and as such cannot sensibly be compared with other timing results.

But the wall-clock `real gc time` and the wall-clock `real time` are two numbers that can be compared sensibly.

And indeed CodeTools:-Usage already computes `gc real time` internally. It just doesn't include that by default as part of the printed results.
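For the record, here is a minimal sketch of capturing the numbers programmatically rather than reading them from the default printout. I am going from memory on the output keyword names and the quiet option that Usage accepts (check ?CodeTools,Usage in your version), and the LinearAlgebra call is just an arbitrary stand-in workload.

restart:
# Sketch: capture real time and gc time programmatically; the keywords
# 'realtime' and 'gctime' and the 'quiet' option are from memory and may
# differ by version.
(rt, gt) := CodeTools:-Usage( LinearAlgebra:-RandomMatrix(2000)
                              . LinearAlgebra:-RandomMatrix(2000),
                              output=['realtime','gctime'], quiet ):
# In Maple 18 gt is summed over the parallel gc threads, so this ratio is
# only a rough indicator, not a true wall-clock fraction.
evalf(gt/rt);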

It is interesting how the non-kernel dispatch works, in that it invokes the non-kernel procedure body of the same name (like how procname works), and not the Library Vector procedure.

So, in the second example below it is f that gets invoked, and not the Library Vector command.

restart:
f:=proc() option builtin=ConstructRow; J; end proc:
trace(f):

f(x,y,z); # computed in the kernel
execute f, args = x, y, z
                                  [x, y, z]

f(sin(x),y,z);  # not computed in the kernel
execute f, args = sin(x), y, z
{--> enter f, args = sin(x), y, z
                                      J
<-- exit f (now at top level) = J}
                                      J

@Alejandro Jakubi Yes, I did know this. But it doesn't affect my wish for the Vector and Matrix commands to be made into faster builtins (like the Array command).

I did not write that `<|>` is of type builtin. I wrote that it is a builtin, which to me means that it has option builtin and can compute some results via a built-in kernel function. But one could also sensibly call it partially builtin.

The behavior is a bit complicated.

<sin(x)|y|Pi|sqrt(2)> gets dispatched to the Library Vector procedure, while <x|y|Pi|sqrt(2)> does not.

The presence of (at least) function calls, tables, sets, or names which evaluate to procedures can cause dispatch to the interpreted Library procedure. But some nonnumeric data gets handled entirely by the kernel.

If Pi has been assigned to name `a` then <a> is done by the kernel but <'a'> is dispatched to the Library procedure.

It can also make a difference whether one is at the top level. At the top level the following will be done by the kernel.

b:=a: a:=Pi: <b>;

But inside a proc the following (with 1-level evaluation of LOCALs) gets dispatched to the Library procedure.

proc() local a,b; b:=a: a:=Pi: <b>; end proc();
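One way to check where a given example lands is the same trace technique used with f earlier: only an "execute" line is printed when the kernel handles the call, while the procedure body is shown being entered when the Library procedure gets invoked. A small sketch of that, for the assigned-name case above (I am assuming tracing `<|>` and `<,>` distinguishes the two paths the same way it did for f):

restart:
a := Pi:
trace(`<|>`, `<,>`):
<a>;    # claim above: handled in the kernel, so trace shows at most an "execute" line
<'a'>;  # dispatched to the interpreted Library body, so trace shows it being entered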

This has become a tangent discussion.

@Carl Love Thanks for catching my typo. But `<|>` is a builtin now. Don't be misled by its also having a procedure body...

restart:
kernelopts(version);

   Maple 18.01, X86 64 WINDOWS, Mar 28 2014, Build ID 935137

op(3,eval(`<|>`))[1];

                         builtin = 495

This thread reminds me of a few things:

I think that CodeTools:-Usage should report `gc real time`, and perhaps not even bother with `gc time`, when used in this common way where it prints its results. In Maple 18 the garbage collector can operate in parallel threads (with respect to itself only). So `gc time` is the sum of the time spent by all gc threads. More interesting, and something that could be compared with `real time`, would be `gc real time`. Then we could gauge how much of the total wall-clock time is spent in memory management.

For example, CartProd produces several lists. I don't offhand see how to reduce that here, but in general the portion of real time spent in gc might indicate some measure of possible saving (provided a layer of temps could even possibly be removed, of course).

The other item is the Matrix constructor, introduced in Maple 6. When the shortcut `<|>` appeared it was originally a Library procedure and generally less efficient than `Matrix`. Now `<|>` is a kernel builtin and is much faster. I think that `Matrix` also needs to be a builtin, because there are many important scenarios where its extra functionality is needed. And for constructing very many very small Matrices the overhead of the Library-level constructor is relatively crushing.
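As a rough way to see that per-call overhead, something like the following comparison could be run (a quick sketch only; the 2x2 example values and repetition count are arbitrary, and the numbers will of course vary by machine and version):

restart:
# Library-level Matrix constructor, called many times on tiny inputs.
CodeTools:-Usage( seq( Matrix(2,2,[1,2,3,4]), i=1..10^5 ) ):
# The kernel builtin angle-bracket shortcut building the same 2x2 Matrices.
CodeTools:-Usage( seq( <<1|2>,<3|4>>, i=1..10^5 ) ):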

As seen in this example (starting with my earlier revision to your final Matrix-constructing map), we could not do it with a single call to rtable since rtable does not wrap around when processing a flat list. That is, for a list L of length m*n the rtable constructor will drop all but the first m entries when called as, say, rtable(1..m,1..n,L,subtype=Matrix). Hence the need for ArrayTools:-Alias, for reshaping a 1-by-m*n Matrix to an m-by-n Matrix. It should be easier than this to attain near-maximal speed.
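In isolation the reshape step looks like the following (a small sketch with arbitrary symbolic entries; the timed code below maps this over CartProd's output):

V := rtable(1..1, 1..6, [a,b,c,d,e,f], subtype=Matrix):
# Alias reuses V's underlying data as a 3x2 Matrix; rtables default to
# Fortran_order, so the entries fill down the columns.
ArrayTools:-Alias(V, [3,2]);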

@Carl Love Yes, thanks, I ought to have tried that variant. But I believe that it's not quite right, as subtype=Matrix will require a second dimension (range 1..1) and thus use of map[3].

Here are timings. So mapping of Matrix construction across your CartProd procedure's output has gone from about 22.6sec down to 7.9sec and now down to 5.7sec on my machine for the m,n,p=3,4,3 case. Not bad compared with the Iterator getting about 3.2sec for that case.

restart:
CartProd:= proc(L::list(list))
local S, _i, V:= _i||(1..nops(L));
 [eval(subs(S= seq, foldl(S, [V], (V=~ L)[])))]
end proc:

(m,n,p):= (2,3,3):
CodeTools:-Usage(map(ArrayTools:-Alias, 
                     map[3](rtable,1..1,1..m*n,
                            CartProd([[$0..p-1] $m*n]),
                            subtype=Matrix),[m,n])):
memory used=1.98MiB, alloc change=0 bytes, cpu time=16.00ms, real time=21.00ms, gc time=0ns

(m,n,p):= (3,3,3):
CodeTools:-Usage(map(ArrayTools:-Alias, 
                     map[3](rtable,1..1,1..m*n,
                            CartProd([[$0..p-1] $m*n]),
                            subtype=Matrix),[m,n])):
memory used=19.03MiB, alloc change=36.61MiB, cpu time=109.00ms, real time=111.00ms, gc time=0ns

(m,n,p):= (3,4,3):
CodeTools:-Usage(map(ArrayTools:-Alias, 
                     map[3](rtable,1..1,1..m*n,
                            CartProd([[$0..p-1] $m*n]),
                            subtype=Matrix),[m,n])):
memory used=0.57GiB, alloc change=0.55GiB, cpu time=8.28s, real time=5.68s, gc time=5.05s

@Markiyan Hirnyk I omitted Aladjev's code (procedures 'tuples' and 'allmatrices', from the link in your Comment to this thread) because it is very slow and the coding style is quite poor. For the 2x3 and 3x3 matrix cases with three possible entry values I got these results:

CodeTools:-Usage(allmatrices(M, 2, 3, {"0", "1","2"})):
memory used=22.27MiB, alloc change=32.00MiB, cpu time=94.00ms, real time=98.00ms, gc time=0ns

CodeTools:-Usage(allmatrices(M, 3, 3, {"0", "1","2"})):
memory used=4.84GiB, alloc change=187.49MiB, cpu time=108.51s, real time=106.87s, gc time=4.87s

I lost patience in the 3x4 matrix case and stopped the computation after about 4500 seconds.

Your links also had a few other approaches, timings for which I include below. 

restart:
AllMatrices := proc(A::set, k::posint, n::posint)
  local B, C, E;
  B := [[]];
  C := proc()
    B := [seq(seq([A[i], op(B[j])], i = 1 .. nops(A)), j = 1 .. nops(B))];
  end proc;
  E := (C@@(k*n))(B);
  seq(Matrix(k, n, E[m]), m = 1 .. nops(A)^(k*n));
end proc:

CodeTools:-Usage(AllMatrices({0, 1, 2}, 2, 3)):
memory used=3.10MiB, alloc change=0 bytes, cpu time=31.00ms, real time=25.00ms, gc time=0ns

CodeTools:-Usage(AllMatrices({0, 1, 2}, 3, 3)):
memory used=85.58MiB, alloc change=44.01MiB, cpu time=671.00ms, real time=669.00ms, gc time=62.40ms

CodeTools:-Usage(AllMatrices({0, 1, 2}, 3, 4)):
memory used=2.33GiB, alloc change=0.63GiB, cpu time=26.71s, real time=22.53s, gc time=8.69s

restart:
F:=(m,n)->op(Matrix~(combinat[permute]([0$(m*n),1$(m*n),2$(m*n)], m*n),n,m)):

CodeTools:-Usage(F(2,3)):
memory used=3.41MiB, alloc change=32.00MiB, cpu time=31.00ms, real time=37.00ms, gc time=0ns

CodeTools:-Usage(F(3,3)):
memory used=90.53MiB, alloc change=14.01MiB, cpu time=655.00ms, real time=652.00ms, gc time=46.80ms

CodeTools:-Usage(F(3,4)):
memory used=2.50GiB, alloc change=0.63GiB, cpu time=29.22s, real time=24.26s, gc time=10.34s

And here is how my machine runs on Kitonum's CartProd1.

restart;
CartProd1 := proc(L::list, N::posint)
  local It;
  It := proc(M::list)
    [seq(seq([L[i], op(M[j])], i=1..nops(L)), j=1..nops(M))];
  end proc;
  (It@@(N-1))(L);
end proc:

CodeTools:-Usage(map(Matrix,CartProd1([0,1,2],6),2,3)):
memory used=3.12MiB, alloc change=0 bytes, cpu time=31.00ms, real time=25.00ms, gc time=0ns

CodeTools:-Usage(map(Matrix,CartProd1([0,1,2],9),3,3)):
memory used=86.03MiB, alloc change=44.16MiB, cpu time=656.00ms, real time=661.00ms, gc time=62.40ms

CodeTools:-Usage(map(Matrix,CartProd1([0,1,2],12),3,4)):
memory used=2.35GiB, alloc change=0.67GiB, cpu time=28.72s, real time=23.56s, gc time=10.48s

@Joe Riel I would suggest that the time to compute the call to MixedRadixTuples should be considered as part of the cost.

My Maple 18 for 64bit Windows has a functioning Compiler:-Compile. And I see a better timing for a second identical call to MixedRadixTuples. Is this because Compiler overhead is avoided on the second attempt?

I believe that the overhead of the Matrix constructor (a Library procedure) can be removed from Carl's example, giving better timings on an otherwise unchanged use of his CartProd generator.

On my machine I see Joe's Iterator method (including MixedRadixTuples generation cost) being slower than a revision to Carl's CartProd method, on the first attempt of the call to MixedRadixTuples after a restart. But I see the Iterator method as being faster than the revised CartProd method, on a repeated attempt without restart.

restart:
(m,n,p):= (3,3,3):
CartProd:= proc(L::list(list))
local S, _i, V:= _i||(1..nops(L));
 [eval(subs(S= seq, foldl(S, [V], (V=~ L)[])))]
end proc:
CodeTools:-Usage(map[3](Matrix, m, n, CartProd([[$0..p-1] $ m*n]))):
memory used=85.78MiB, alloc change=46.01MiB, cpu time=655.00ms, real time=651.00ms, gc time=46.80ms

restart:
(m,n,p):= (3,3,3):
CartProd:= proc(L::list(list))
local S, _i, V:= _i||(1..nops(L));
 [eval(subs(S= seq, foldl(S, [V], (V=~ L)[])))]
end proc:
CodeTools:-Usage(map(L->ArrayTools:-Alias(rtable(1..1,1..m*n,L,subtype=Matrix),
                                                [n,m]),
                           CartProd([[$0..p-1] $ m*n]))):
memory used=20.71MiB, alloc change=38.01MiB, cpu time=171.00ms, real time=176.00ms, gc time=0ns

restart:
with(Iterator):
(m,n,p):= (3,3,3):
P := CodeTools:-Usage(MixedRadixTuples([p $ m*n])):
(h,g) := ModuleIterator(P):
M := ArrayTools:-Alias(g(),[m,n]):
CodeTools:-Usage([seq(M[], p in P)]):
memory used=7.34MiB, alloc change=32.00MiB, cpu time=250.00ms, real time=253.00ms, gc time=0ns
memory used=6.90MiB, alloc change=192.00KiB, cpu time=78.00ms, real time=72.00ms, gc time=0ns

P := CodeTools:-Usage(MixedRadixTuples([p $ m*n])):
(h,g) := ModuleIterator(P):
M := ArrayTools:-Alias(g(),[m,n]):
CodeTools:-Usage([seq(M[], p in P)]):
memory used=2.48MiB, alloc change=0 bytes, cpu time=62.00ms, real time=56.00ms, gc time=0ns
memory used=6.90MiB, alloc change=192.00KiB, cpu time=78.00ms, real time=75.00ms, gc time=0ns

I don't offhand see an easy way to use map[4] and the 'rtable' constructor while avoiding the extra layer of a custom operator (as in my revision above).

Increasing the problem parameters gives a broader spread of timings.

restart:
(m,n,p):= (3,4,3):
CartProd:= proc(L::list(list))
local S, _i, V:= _i||(1..nops(L));
 [eval(subs(S= seq, foldl(S, [V], (V=~ L)[])))]
end proc:
ans1:=CodeTools:-Usage(map[3](Matrix, m, n, CartProd([[$0..p-1] $ m*n]))):
memory used=2.34GiB, alloc change=0.68GiB, cpu time=27.35s, real time=22.63s, gc time=9.66s

restart:
(m,n,p):= (3,4,3):
CartProd:= proc(L::list(list))
local S, _i, V:= _i||(1..nops(L));
 [eval(subs(S= seq, foldl(S, [V], (V=~ L)[])))]
end proc:
ans2:=CodeTools:-Usage(map(L->ArrayTools:-Alias(rtable(1..1,1..m*n,L,subtype=Matrix),
                                                [n,m]),
                           CartProd([[$0..p-1] $ m*n]))):
memory used=0.58GiB, alloc change=0.58GiB, cpu time=11.56s, real time=7.86s, gc time=6.79s

restart:
with(Iterator):
(m,n,p):= (3,4,3):
P := CodeTools:-Usage(MixedRadixTuples([p $ m*n])):
(h,g) := ModuleIterator(P):
M := ArrayTools:-Alias(g(),[m,n]):
CodeTools:-Usage([seq(M[], p in P)]):
memory used=7.34MiB, alloc change=32.00MiB, cpu time=265.00ms, real time=258.00ms, gc time=0ns
memory used=193.90MiB, alloc change=435.46MiB, cpu time=3.68s, real time=2.93s, gc time=1.44s

P := CodeTools:-Usage(MixedRadixTuples([p $ m*n])):
(h,g) := ModuleIterator(P):
M := ArrayTools:-Alias(g(),[m,n]):
CodeTools:-Usage([seq(M[], p in P)]):
memory used=2.48MiB, alloc change=0 bytes, cpu time=63.00ms, real time=68.00ms, gc time=0ns
memory used=193.90MiB, alloc change=133.62MiB, cpu time=4.51s, real time=3.34s, gc time=2.14s

@casperyc Here is something... showing an updating plot of just the objective value.

test_ec.mw

@Aakanksha Could you attach a .mw worksheet containing the code you've used, with its output, and also, at the end of it, include a call to the command

kernelopts(version);

@casperyc Do you want to see results for all the objective evaluations computed internally by DirectSearch, or only those which improve on the previous best?

Do you want to see the objective value, or a 3D point plot of the x,y,z values?

@Axel Vogt It seems to me to have been a bug in the numerical evaluation of MeijerG in the special case of floats being present in certain parameters. The problem was present in Maple 9.5, and fixed some time later (Maple 11?).

In Maple 9.5 the errant result occurred no matter how high I raised Digits.

And in Maple 9.5 the conversion of those parameters to exact rationals is a workaround (with no extra precision required).
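The kind of workaround meant is just this (a sketch; `expr` stands in for the MeijerG call whose parameters contain the problematic floats):

# Convert the float parameters to exact rationals before evaluating numerically.
evalf( convert(expr, 'rational', 'exact') );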

@casperyc I think I cautioned against having multiple calls to `p` and running with the menubar's triple-exclamation (execute worksheet). It seems necessary for the GUI to finish inserting components before `p` is called again, or else perhaps it gets confused about the names of the inserted components.

It does seem ok to call `p` (once each) in separate and multiple execution groups or document blocks, as long as they don't occur in too rapid a succession.

Capital P is the local variable which gets assigned the plot. I'm not sure why it might return unevaluated. If you run into issues with this then you could send me a private message and your code. I didn't use any try..catch mechanisms or other checks that plots had indeed been created. Were you planning on calling it many times, or were you just trying to stress test it all?

@casperyc Here is a version in which the procedure creates and embeds a Plot Component, updates it while computing, and then finally returns the final plot (after apparently clearing off the embedded component).

Calling procedure `p` shows intermediate results by updating an embedded Plot Component, but the return value of a completed call to `p` is an actual plot and can be assigned like any other result (as long as the call to `p` is allowed to complete).

It works best in a Worksheet, in an execution group. It may work in a Document, in a paragraph (document block). But it appears not to work in an execution group in a Document. It likely works best in Maple 18.01, and not at all in anything before Maple 18.

The technique is full of undocumented commands. The basic idea is also a trick, relying on the fact that in M18 (and perhaps M17) a procedure call in an input region can have not only a usual output region but also a task region. (This is how Grading:-Quiz and Explore work, btw.)

The attached worksheet may only work as intended if the separate examples of calls to procedure `p` are in separate execution groups.

ticker2.mw

Upon reflection, it might be best to keep the efficiency handling as simple as possible here. I've pretty much just relied on the garbage collector doing a decent job. And an easy improvement is to only update the plot every 100 (or whatever) iterations, as sketched below.
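Here is a minimal sketch of that throttling idea, outside the attached worksheet's machinery. The component name "Plot0", the loop bound N, the running list `vals`, and ComputeNextObjective are assumptions for illustration, not names from ticker2.mw.

# Sketch: refresh the embedded Plot Component only every 100th iteration.
for iter from 1 to N do
    vals := [ op(vals), ComputeNextObjective() ];   # hypothetical objective step
    if irem(iter, 100) = 0 then
        DocumentTools:-SetProperty( "Plot0", ':-value',
            plots:-pointplot([seq([i, vals[i]], i=1..nops(vals))]) );
    end if;
end do;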
