These are replies submitted by acer

@Alejandro Jakubi Yes, all that information is provided by CodeTools:-Usage if I invoke it like,

CodeTools:-Usage(...something...,quiet,output=[output,realtime,cputime,bytesused,bytesalloc]);

But that's more effort and typing than I care for. That's why I like the idea of `tools/bench`, or something shorter still and universally available, which has the default behaviour that I want. (You write yourself that you like `tools/bench`, but of course you too could just use CodeTools:-Usage with the appropriate options.) It's just nicer to have a short alias. Of course I could stick a short alias for it in all my personal initialization files, and merely suffer a little when using other non-networked machines. I guess it's human nature to imagine that everyone else wants the same defaults as oneself -- you, John, and I are probably as prone to that as anyone.
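For what it's worth, here is roughly the kind of short alias I mean, suitable for a personal initialization file. (The name U is arbitrary; this is just a sketch around the Usage call above.)

U := proc(e::uneval)
  CodeTools:-Usage( eval(e), quiet,
      output = [output, realtime, cputime, bytesused, bytesalloc] );
end proc:

U( int(sin(x)^10, x) );  # result, then realtime, cputime, bytesused, bytesalloc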

Which reminds me of something I've been wondering: if Maple can share worksheets on the Cloud then why not also allow "login" via Maple's GUI to load GUI preferences (there are so many!) and even .mapleinit material?

Since `tools/bench` has been mentioned here, I'd like to add that I usually want more pieces of information, including the change in time, the change in memory "used", and the change in memory allocated. Eg,

# capture baselines before the computation
(st,kbu,kba):=time(),kernelopts(:-bytesused),kernelopts(:-bytesalloc):
 ...some computation...
# cpu-time delta, bytesused delta, bytesalloc delta
time()-st,kernelopts(:-bytesused)-kbu,kernelopts(:-bytesalloc)-kba;

My reason is that the change in bytesused reflects more how much memory is being "managed", while I am often more interested in how much extra memory Maple allocates on account of the computation. (I do realize that there is interplay between the two concepts, and that once the gcfreq limit is hit Maple might collect memory before it allocates more. So care is needed in interpreting bytesused and bytesalloc if done mid-session.)

W.r.t. measuring Threads-based applications, I'm also interested in the change in wall-clock time. The time() command measures the cycles used by all Threads and adds them up, so in order to effectively estimate the true elapsed computation time I have to resort to time[real] (and hope that other processes on my machine aren't slowing things down and skewing the wall-clock difference). So these days I find I do,

# also capture wall-clock time, for Threads-based computations
(swt,st,kbu,kba):=time[:-real](),time(),kernelopts(:-bytesused),kernelopts(:-bytesalloc):
 ...some computation...
# wall-clock delta, cpu-time delta, bytesused delta, bytesalloc delta
time[:-real]()-swt,time()-st,kernelopts(:-bytesused)-kbu,kernelopts(:-bytesalloc)-kba;

I have another quibble. I often want to benchmark a computation and see both the performance stats and the actual computational result. I find the invocation time(...some command...) disappointing in that it doesn't provide the actual result, and `tools/bench` suffers in the same way. I'd prefer it if `tools/bench` also included its actual result eval(e) as part of the returned expression sequence; I don't want to have to wrap my computational argument to `tools/bench` in `assign`.
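In other words, I'd like something closer to this sketch (bench here is just a stand-in name, not the actual `tools/bench`):

bench := proc(e::uneval)
  local swt, st, kbu, kba, res;
  (swt,st,kbu,kba) := time[:-real](), time(),
                      kernelopts(:-bytesused), kernelopts(:-bytesalloc);
  res := eval(e);   # perform the actual computation
  # wall-clock, cpu time, and memory deltas, followed by the result itself
  time[:-real]()-swt, time()-st,
      kernelopts(:-bytesused)-kbu, kernelopts(:-bytesalloc)-kba, res;
end proc: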

As for the ::uneval topic, I'll mention this use (which I'd only mentioned to Jacques privately, before I saw that this is a popular Post). It's not something most people will ever want to do, I think, but basically one can uneval top-level commands and then manipulate them as one wishes. Of course this too does not prevent automatic simplification, as Claire has mentioned.
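Here is a bare-bones illustration of what I mean (grab is a throwaway name):

grab := proc(e::uneval) e end proc:

expr := grab( int(x^2, x) ):   # stores the int call itself, unevaluated
op(0, eval(expr, 1));          # inspect the captured command: int
eval(expr);                    # now actually compute it: x^3/3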

I enjoyed this series of related posts very much.

They do remind me of a related topic which was discussed (a few different times) on this site some years ago: the issue of how to efficiently generate many different medium-sized samples of a given distribution. At that time, the focus was mostly on reducing overhead within the Statistics package, or on reducing the overhead of garbage collection (a.k.a. memory management) of the Array/Vector containers for such samples. The judgement then (IIRC) favoured generating a single sample up front -- as large as one can bear in terms of memory allocation.

Now it seems to me that you have mostly been discussing user-defined distributions, and that presumably there are better/dedicated methods for sampling well-known stock distributions. Please correct me if that is wrong. So my related question is: for those distributions, how can the overhead of repeatedly recomputing all these subdivisions (that you have shown above) be avoided, in the case that one wishes to resample a distribution many different times? Eg, if I wish to generate finite-sized samples of 10,000 entries, repeated say 100,000 times.
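If I recall correctly, the one-argument calling sequence of Statistics:-Sample returns a re-usable sampling procedure, which is the sort of thing I mean for a stock distribution. A rough sketch (the loop bounds are just for illustration):

with(Statistics):
X := RandomVariable(Normal(0,1)):
gen := Sample(X):          # build the sampling procedure just once
for i to 100000 do
    A := gen(10000);       # each call draws a fresh 10000-entry sample
    # ...process A...
end do: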

acer

@JacquesC Interesting. I only use the cloud to share stuff that needs Components, is otherwise worksheet-centric, or is less critical. It's... handy.

For serious code I use a revision control system (even at home), and that means Maple code too. But I want useful "diffs", and a revision control system in which I can resolve code differences in a text editor. These are super strong reasons for keeping my code in plaintext files, and not in worksheets.

Of course, other important reasons to keep Maple code in plaintext files include the fact that only Maple's GUI can read worksheets. And, more crucially, worksheets can get corrupted. (See recent posts here for examples of that.)

The way I see it, there are two classes of things to share: GUI documents or apps with Components, and code/procedures. The tasks of sharing those two classes are quite different. I like the Cloud for the former.

On the other hand, I think that source code should be shared amongst individuals as plaintext, and accessed across sessions via an .mla archive or via $include directives. There have been no developments with code edit regions, inter-worksheet referencing, etc, that have come even close to changing my opinion on that. I don't think that the worksheet should contain the best code, since the best code should be easily re-usable elsewhere, safe from corruption, and editable outside of the GUI. I feel that better documentation and better tools for the great underlying mechanisms of re-using code via library archives would be preferable to more, less-useful mechanisms (like start-up, code-edit, and hidden block regions) for embedding code inside worksheets. These are important but contentious issues, which means they deserve the very best forethought.
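As a rough sketch of the workflow I mean (the file and module names here are hypothetical):

read "mytools.mpl":                            # plaintext source, kept under revision control
LibraryTools:-Create("mytools.mla"):           # create the archive once
LibraryTools:-Save('MyTools', "mytools.mla"):  # store the module for re-use across sessions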

acer

I tried that code, to get a rank-4 approximation from one of the posted images. But what I got was this:

acer

How far have you gotten with this so far, on your own?

What Maple procedures have you found in the help system, that deal with checking primality or generating primes?

Do you think that it would be easier to check whether the number which is two greater than a given prime is itself prime (repeating that test for every prime in your range), or to check which consecutive pairs of entries in the ordered set of primes in your range happen to be two apart?

Work out the basics of how you want your code to work, first (even if only in words to yourself or "pseudocode"), and then try writing it in Maple.

acer

I've been working on some blog post(s) on this topic, including some code, in a few spare moments. I intend it as a branched continuation of this interesting thread.

Please stay tuned. I'll add a link here when I post.

The code improves on the naive approach of doubling-Digits-until-the-result-stops-changing, by instead using shake at each increase of the working precision (Digits). I started off focusing on constant expressions, but then got a little side-tracked with 2-argument `eval`.

Since I'm using `shake`, my approach relies on the existence of an appropriate `evalr` extension in order to handle function calls. So it doesn't do anything for hypergeom, for example.
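The core of the idea looks roughly like this (approx is a hypothetical name, the precision schedule is arbitrary, and e is assumed to be a constant expression that shake/evalr can handle):

approx := proc(e, tol::positive)
  local d, r, lo, hi;
  for d from 20 by 20 to 200 do
    Digits := d;            # environment variable, so it's restored on exit
    r := shake(e);          # INTERVAL(lo .. hi) enclosure at current precision
    (lo,hi) := op(op(1, r));
    if hi - lo < tol then
      return (lo + hi)/2;   # enclosure is tight enough; return its midpoint
    end if;
  end do;
  FAIL;
end proc:

approx( sin(1) + exp(2), 1e-20 );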

To the original poster: Note that some numerical commands in Maple provide optional arguments to specify an error tolerance. Eg, dsolve/numeric or evalf/Int.

acer

@Axel Vogt Hey Axel.

It needs the Standard GUI of Maple 14 or later, since accessing Maple Cloud worksheets can only be done at present via a special "cloud" entry in the palette panel.

So yes, one can upload a .mws worksheet, once one has opened it in the Standard GUI. But when someone else downloads it, it isn't either .mw or .mws until it gets saved to a file.

cheers,
acer

Not as terse as may be done in APL, but still,

Vector(5, exp);     # apply exp to each index at construction

exp~(<($1..5)>);    # elementwise exp over a Vector

exp~([$1..5]);      # elementwise exp over a list

More seriously, could it be that the Original Poster's issue is that exp(1), exp(2), etc are exact quantities rather than float approximations? If so, then just apply the command evalf to the results of most of the earlier responses.
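For instance,

evalf(exp~([$1..5]));   # [2.718281828, 7.389056099, 20.08553692, 54.59815003, 148.4131591]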

acer

You have not told us what platform or operating system you are using.

Can you use keyboard shortcuts instead? (Ie. on Windows, Ctrl-v instead of right-click -> Paste.)

acer

restart:

# Memoized count of Collatz (3n+1) steps: returns the pair (x, number of
# iterations for x to reach 1).
Mnemosyne := proc(x::posint)
  option remember;
  local Xc;
  if x = 1 then return x, 0; end if;
  if x = 2 then Xc := 1, 0;
  elif irem(x, 2) = 0 then Xc := procname(x/2);   # even: halve
  else Xc := procname(3*x+1);                     # odd: 3x+1
  end if;
  x, Xc[2] + 1;
end proc:

st := time():
plots:-pointplot([seq(Mnemosyne(i), i = 1..10000)]);
time() - st;

acer

How about applying it to a quasirandom sequence generated by Rule 30?

(nb. Rule 30 must be somebody's obsession, by rules 34 and 36.)
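If it helps, here is a rough, unoptimized sketch of generating such a sequence (Rule30 is a hypothetical helper; the quasirandom bits are the centre column of the automaton):

Rule30 := proc(n::posint)
  local cells, nxt, bits, i, j, w;
  w := 2*n + 1;              # wide enough that the zero boundary can't matter
  cells := Array(1..w);
  cells[n+1] := 1;           # start from a single live centre cell
  bits := Array(1..n);
  for i to n do
    bits[i] := cells[n+1];   # record the centre cell at each step
    nxt := Array(1..w);
    for j from 2 to w-1 do
      # Rule 30: new cell = left XOR (centre OR right)
      nxt[j] := irem(cells[j-1] + `if`(cells[j] = 1 or cells[j+1] = 1, 1, 0), 2);
    end do;
    cells := nxt;
  end do;
  convert(bits, list);
end proc:

Rule30(20);   # first 20 bits of the quasirandom sequence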

acer
