acer


MaplePrimes Activity


These are replies submitted by acer

@Kitonum It might be interesting to see whether the Bits package makes at least some aspects of these kinds of computation easier (to author, or understand), or not.

@AdamBarker I suspect that there is a scaling of the iteration count that can visually convey that information, yes. The code comment "## Optional, scale the saturation by the time to converge" marks where I did something along those lines, but it works visually only for a smaller maximal iteration limit.

It likely can be done with a suitable call to `ln`, say. I'll play with it and let you know if it looks nicer. I'm not sure in which channel (H, S, or V) it'd be more useful. I should also try to make the color-selector faster, where it assigns a color based on which root the iteration is converging towards.
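Something along these lines is what I have in mind, as a rough sketch only (the names satscale and maxiter are just for illustration, not from the actual fractal code):

satscale := proc(iters::nonnegint, maxiter::posint)
  # Map the iteration count onto 0..1 with a logarithmic scaling, so that
  # a large maximal iteration limit doesn't wash out the saturation.
  evalf( ln(1 + iters) / ln(1 + maxiter) );
end proc:

satscale(5, 500), satscale(50, 500), satscale(500, 500);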

How do you expect that to work when you define ex1 and ex2 as operators but then pass them like expressions (in x, t, and z) when calling implicitplot3d?

What's that extra `e` doing inside ex2?
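To clarify what I mean, here are the two consistent ways to call implicitplot3d, using a generic example rather than your actual ex1: either pass an expression together with ranges of the form name=a..b, or pass an operator together with plain numeric ranges.

restart;
ex1 := x^2 + t^2 + z^2 - 1:                 # expression in the names x, t, z
plots:-implicitplot3d(ex1, x=-2..2, t=-2..2, z=-2..2);

F := (x,t,z) -> x^2 + t^2 + z^2 - 1:        # operator (procedure) form
plots:-implicitplot3d(F, -2..2, -2..2, -2..2);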

acer

You aren't going to be able to distinguish amongst roots with the same absolute value, using that Newton command. So if you're trying to color by the value of the roots then that command won't always get you there.

But it should be possible using the IterativeMaps:-Escape command, by using the real and imaginary parts of the converged value for two separate layers in the result.

What platform and OS are you using? 32bit, 64bit, OSX, Windows, Linux? Any other OS details? Does the Compiler work for you, otherwise?

I can't see the images of code you posted in your question. Perhaps upload a worksheet (green arrow) along with OS details.

acer

@Clarins The example you're citing relates to 3D data and interpolation, and the input points (at which one wants to compute interpolated values) are expected to be supplied in an Array with the corresponding dimension(s).

That input Array just looks a little funny when there is just one single new point at which to interpolate. But the structure could alternatively hold multiple such points.

If you intend 1/(x*(1-x))*(1+2*x) then type it in that way, rather than as 1/(x(1-x))(1+2*x). What you had involved function calls, not multiplication.
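For example (with x an unassigned name), the following shows how differently those two forms are interpreted:

e1 := 1/(x*(1-x))*(1+2*x);     # multiplication, presumably what you intend
e2 := 1/(x(1-x))(1+2*x);       # here x(1-x) is a function call
indets(e2, function);          # reveals the unintended function calls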

acer

@Markiyan Hirnyk 

restart;

showstat((Optimization::GlobalUnivariate)::FindLConstant,2):


(Optimization::GlobalUnivariate):-FindLConstant := proc(func, a, b, useevalhf)
local i, L, L1, L2, n, temp1, temp2, V, Fv, fpoly, fpolyvec, reven, rodd, qeven, qodd;
       ...
   2   V := Vector(n,i -> a+(i-1)*(b-a)/n,('datatype') = ('float'));
       ...
end proc

proc()
  local oldkop,T;
  try
    # Temporarily allow access to locals of opaque modules.
    oldkop:=kernelopts(':-opaquemodules'=false);
    T:=:-ToInert(eval(:-Optimization:-GlobalUnivariate:-FindLConstant));
    # Only patch if the inert form still has the expected operand at that
    # position (so the patch isn't applied twice, or to a changed version).
    if op([5,2,2,2,2,5,1,2,3,1],T)=':-_Inert_LEXICAL_LOCAL'(3) then
      unprotect(:-Optimization:-GlobalUnivariate:-FindLConstant);
      # Replace that operand by operand-1, i.e. (b-a)/n becomes (b-a)/(n-1).
      :-Optimization:-GlobalUnivariate:-FindLConstant:=
        :-FromInert(:-subsop([5,2,2,2,2,5,1,2,3,1]
                             =':-_Inert_SUM'(':-_Inert_LEXICAL_LOCAL'(3),
                                             ':-_Inert_INTNEG'(1)),T));
      protect(:-Optimization:-GlobalUnivariate:-FindLConstant);
    end if;
  catch:
  finally
    # Reprotect and restore the original opaquemodules setting.
    protect(:-Optimization:-GlobalUnivariate:-FindLConstant);
    kernelopts(':-opaquemodules'=oldkop);
  end try;
  NULL;
end proc();

showstat((Optimization::GlobalUnivariate)::FindLConstant,2):


(Optimization::GlobalUnivariate):-FindLConstant := proc(func, a, b, useevalhf)
local i, L, L1, L2, n, temp1, temp2, V, Fv, fpoly, fpolyvec, reven, rodd, qeven, qodd;
       ...
   2   V := Vector(n,i -> a+(i-1)*(b-a)/(n-1),('datatype') = ('float'));
       ...
end proc

g:=(c,d)->Optimization:-NLPSolve(x^4+x^3+c*x^2+d*x-c-1, x=-1..1, maximize,
                                 method=branchandbound)[1]:

CodeTools:-Usage( plot3d(g, -5..5, -5..5, style=surface, color="DarkOliveGreen",
                         orientation=[-25,35,-5],
                         lightmodel=Light1, glossiness=0.9) );

memory used=1.39GiB, alloc change=36.00MiB, cpu time=8.06s, real time=8.07s, gc time=604.00ms

 

bandbpatch.mw

 

@lg674 Kitonum's code works for me in Maple 18.02. I attach it in a Worksheet.

curve.mw

I also added the option numpoints=500 to the calls to spacecurve, to get a smoother curve (for the full range of t) in that Maple version.

@Axel Vogt Hi Axel, I've made a few posts about customization of context-menus in the past, e.g., here, here, here, and a few others. But none of those generated much commentary or response, even from those more expert at Maple programming.

Maybe I should write the code to augment the stock context-menus with calls to the DirectSearch package's commands (possibly in the submenu used for Optimization?)

@Chris It must be tough to use a Maple Document or Worksheet, including the stock context-menus, for instructing primary school students. How do you help them avoid becoming confused by mathematical jargon that is years beyond their current knowledge?

There are some interactive popup applications (Maplet-based "applets") available from the main menubar, via Tools->Tutors (or Assistants). The ones from the Precalculus package might provide a gentle experience for the very young or mathematically inexperienced.

I should probably mention that the third argument of the Entries:-Add command in the code snippet I gave above is the type which the right-clicked expression must match in order for the context-menu item to appear. Now, (x-1)/(x-2) will trigger it, since that is of type ratpoly which is the type I used in that code example. But y=(x-1)/(x-2) is not of that type, and so won't trigger it. You could either relax that type to be something like, literally, anything instead of ratpoly so that it is matched by more variants of expression. Or you could construct a more involved type that more expressions would satisfy.

You can also augment the context-menu with even more items, such as more of the available tutors. Or if you find (or author) even more popup applets then you could augment the context-menu with those too.

In my code snippet above the first line makes a full copy of the "stock" Library-side context-menus. And you can add your own customized items into existing submenus (like I did above), or make your own submenus for them. You could even start from scratch with an essentially empty set rather than use the Copy command as I did above, and add only your own items. Or you could experiment with the ContextMenu[CurrentContext][Entries][Disable] command if your students are overwhelmed by the jargon in the slew of stock items. The system is pretty flexible, but setting up a customization does require programming.

This is an area for which I haven't seen a lot of interest in customization by users, over the years. It's possible that this is because there are only programmatic means available for customization. I wonder whether a Maplet or Embedded-Components based graphical interface to customization would make a big difference.

Why do some of the equations use lambda and sigma as multiplicative terms (e.g., note the `*` in 3*P[0](s)*lambda), while the latter equations also make function calls to lambda and sigma, e.g., lambda(P[3](s)...)?

Or did you intend the latter equations to be more like the following? Note the `*` following lambda.

P[3](s) = 2*lambda*(P[1](s)+P[2](s))/(s+2*mu[1]+lambda)

Also, it's not clear what you meant by "rearrange", and your comments about functions and assignment are also unclear. Why don't you 1) upload a worksheet, and 2) use -> notation for creating operators (if that's what you really want)? Why would making operators help anyway, when you could instead create equations? Are you just trying to solve for, or eliminate, some names (or function calls P[i](s))?
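As a small illustration (using a made-up equation, not your actual system), solving an equation for one of the P[i](s) is straightforward once everything is entered with explicit multiplication:

eq := P[3](s) = 2*lambda*(P[1](s)+P[2](s))/(s+2*mu[1]+lambda):
solve(eq, P[1](s));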

acer

@Carl Love This is with 64-bit Maple 2015.2 on Linux.

First, this is for many inner lists, each with few entries. I'm not really sure, but the cost of the function calls seems to be less than the cost of production and reclamation(?).

restart: kernelopts(printbytes=false):
LL:=[seq([seq(i*j,i=1..100)],j=1..100000)]:
CodeTools:-Usage(map(parse@cat@op, LL)):
memory used=1.17GiB, alloc change=448.00MiB, cpu time=7.58s, real time=6.47s, gc time=1.69s

restart: kernelopts(printbytes=false):     
LL:=[seq([seq(i*j,i=1..100)],j=1..100000)]:
CodeTools:-Usage(map(`@`(parse,cat,op), LL)):
memory used=1.17GiB, alloc change=448.00MiB, cpu time=7.59s, real time=6.48s, gc time=1.68s

restart: kernelopts(printbytes=false):       
LL:=[seq([seq(i*j,i=1..100)],j=1..100000)]:  
CodeTools:-Usage(map(L-> parse(cat(L[])), LL)):
memory used=297.72MiB, alloc change=448.00MiB, cpu time=3.68s, real time=3.29s, gc time=592.00ms

Now, with more entries per inner list, and with fewer inner lists,

restart: kernelopts(printbytes=false):         
LL:=[seq([seq(i*j,i=1..1000000)],j=1..10)]:    
CodeTools:-Usage(map(parse@cat@op, LL)):
memory used=0.96GiB, alloc change=75.43MiB, cpu time=5.96s, real time=5.95s, gc time=24.00ms

restart: kernelopts(printbytes=false):     
LL:=[seq([seq(i*j,i=1..1000000)],j=1..10)]:
CodeTools:-Usage(map(`@`(parse,cat,op), LL)):
memory used=0.96GiB, alloc change=75.43MiB, cpu time=6.04s, real time=6.04s, gc time=24.00ms

restart: kernelopts(printbytes=false):       
LL:=[seq([seq(i*j,i=1..1000000)],j=1..10)]:  
CodeTools:-Usage(map(L-> parse(cat(L[])), LL)):
memory used=0.96GiB, alloc change=75.43MiB, cpu time=6.02s, real time=6.01s, gc time=20.00ms

Are these results affected by the choice of example? Perhaps random entries would give significantly different results?
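For example, one way to repeat the comparison with random entries (the sizes and range here are chosen just for illustration):

restart: kernelopts(printbytes=false):
r := rand(1..10^6):                           # procedure returning random integers
LL := [seq([seq(r(), i=1..100)], j=1..100000)]:
CodeTools:-Usage(map(L-> parse(cat(L[])), LL)):

and similarly, after a restart, for the other two variants.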

Are you asking a question similar to this?

acer

@Markiyan Hirnyk It is a curious bug in the branchandbound method, which seems to be triggered only for some ranges. The `nodelimit` option doesn't help.

I will submit a bug report.

restart;
p := -x^2+1:

Optimization:-NLPSolve(p,x=0..2,method=branchandbound);

           [-2.24, [x = 1.8]]

eval(p,x=2.0);

                             -3.00

Optimization:-NLPSolve(p,x=0..2.1,method=branchandbound);

               [-3.41, [x = 2.1]]

eval(p,x=2.1);

                             -3.41

@Bendesarts I haven't forgotten this. It is a challenge to do this kind of thing exhaustively while being efficient and reducing repeated computations (i.e., the memoization can be tricky to do "well").

But I've run into this kind of problem before, and that makes me suspect that it could be worthwhile and useful.

I have your example running in 10-15 minutes on a fast machine, if I do it exhaustively and somewhat naively. I'm working on making it more efficient.

I can add a few bells and whistles (such as extraction of the best found result after early termination, optional early termination on a time-limit, optional early termination upon achieving a target "size", etc). But making it more efficient is still something that I'm working on.
