These are answers submitted by acer

I ran the code below on an Intel i7-7700 3.60GHz, which has 4 physical cores with hyperthreading (8 threads or logical cores). Since Maple doesn't recognize this as having "new" improved hyperthreading, it sets kernelopts(numcpus) to 4 by default. So in some of the examples below I set kernelopts(numcpus) to 7 and kernelopts(gcmaxthreads) to 7 or 8 (which seems optimal on this host for some examples).

I also ran it on an Intel i5-7400 3.00GHz, which has 4 physical cores without hyperthreading (4 threads or logical cores). The relative timing behavior among the examples was generally the same, though the timings were not as good. I left kernelopts(numcpus) and kernelopts(gcmaxthreads) alone, at their default value of 4, since raising them resulted in worse performance.

I did all that using 64-bit Maple 2019.0 on Linux.

I observed that almost half of the total computation time is spent in memory management (garbage collection). I found that by reducing the setting of kernelopts(gcthreadmemorysize) from its default I could improve the total timing. This benefit seemed to persist across the other setting variants.

I made sure that the machines were not otherwise significantly loaded, since the timing measurements are in real (wall clock) time. I made sure that for size nelems=10^7 the machines had at least 2.5GB of free RAM and so would not swap out the computation.

One interesting result is that simple use of Threads:-Seq attained almost as good performance as the more complicated Threads:-Task setup, under the tweaked kernelopts settings.

For the Threads:-Task setup I tried to ensure that few temporary Arrays were created (i.e. without concatenation). The exception was that I did not find any improvement from replacing the seq calls with in-place map calls, or with straight Array construction via an initializer.

restart;
'numcpus'=kernelopts(numcpus), 'gcmaxthreads'=kernelopts(gcmaxthreads),
'gcthreadmemorysize'=kernelopts(gcthreadmemorysize);

nelems := 10000000:
n := 374894756873546859847556:

str,sgctr := time[real](), kernelopts(gctotaltime[real]):
A := Array(1 .. 4, 1 .. nelems):
A[1,1..nelems] := Array([seq( i^10*n, i=1..nelems)]):
A[2,1..nelems] := Array([seq( length(A[1,i])-1, i=1..nelems)]):
A[3,1..nelems] := Array([seq(iquo(A[1,i], 10^(A[2,i]-2)),i=1..nelems)]):
A[4,1..nelems] := Array([seq(irem(A[1,i],1000),i=1..nelems)]):
RT,GCRT := time[real]()-str, kernelopts(gctotaltime[real])-sgctr:
print(sprintf("total real time: %a secs   gc real time: %a secs",
              evalf[4](RT), evalf[4](GCRT)));

numcpus = 4, gcmaxthreads = numcpus, gcthreadmemorysize = 67108864

"total real time: 53.82 secs   gc real time: 29.47 secs"

restart;
kernelopts(numcpus=7): kernelopts(gcmaxthreads=7): kernelopts(gcthreadmemorysize=2^17):
'numcpus'=kernelopts(numcpus), 'gcmaxthreads'=kernelopts(gcmaxthreads),
'gcthreadmemorysize'=kernelopts(gcthreadmemorysize);

nelems := 10000000:
n := 374894756873546859847556:

str,sgctr := time[real](), kernelopts(gctotaltime[real]):
A := Array(1 .. 4, 1 .. nelems):
A[1,1..nelems] := Array([seq(i^10*n, i=1..nelems)]):
A[2,1..nelems] := Array([seq(length(A[1,i])-1, i=1..nelems)]):
A[3,1..nelems] := Array([seq(iquo(A[1,i], 10^(A[2,i]-2)),i=1..nelems)]):
A[4,1..nelems] := Array([seq(irem(A[1,i],1000),i=1..nelems)]):
RT,GCRT := time[real]()-str, kernelopts(gctotaltime[real])-sgctr:
print(sprintf("total real time: %a secs   gc real time: %a secs",
              evalf[4](RT), evalf[4](GCRT)));

numcpus = 7, gcmaxthreads = 7, gcthreadmemorysize = 131072

"total real time: 44.62 secs   gc real time: 20.62 secs"

restart;
kernelopts(numcpus=7): kernelopts(gcmaxthreads=7): kernelopts(gcthreadmemorysize=2^17):
'numcpus'=kernelopts(numcpus), 'gcmaxthreads'=kernelopts(gcmaxthreads),
'gcthreadmemorysize'=kernelopts(gcthreadmemorysize);

nelems := 10000000:
n := 374894756873546859847556:

str,sgctr := time[real](), kernelopts(gctotaltime[real]):
A := Array(1 .. 4, 1 .. nelems):
A[1,1..nelems] := Array([Threads:-Seq(i^10*n, i=1..nelems)]):
A[2,1..nelems] := Array([Threads:-Seq(ilog10(A[1,i]), i=1..nelems)]):
A[3,1..nelems] := Array([Threads:-Seq(iquo(A[1,i], 10^(A[2,i]-2)),i=1..nelems)]):
A[4,1..nelems] := Array([Threads:-Seq(irem(A[1,i],1000),i=1..nelems)]):
RT,GCRT := time[real]()-str, kernelopts(gctotaltime[real])-sgctr:
print(sprintf("total real time: %a secs   gc real time: %a secs",
              evalf[4](RT), evalf[4](GCRT)));

numcpus = 7, gcmaxthreads = 7, gcthreadmemorysize = 131072

"total real time: 18.00 secs   gc real time: 9.000 secs"

restart;
kernelopts(numcpus=7): kernelopts(gcmaxthreads=7): kernelopts(gcthreadmemorysize=2^17):
'numcpus'=kernelopts(numcpus), 'gcmaxthreads'=kernelopts(gcmaxthreads),
'gcthreadmemorysize'=kernelopts(gcthreadmemorysize);

subtask := proc(lo,hi,AA,nn) local i;
  AA[1,lo..hi] := Array([seq(i^10*nn, i=lo..hi)]);   # use the passed parameter nn, not the global n
  AA[2,lo..hi] := Array([seq(length(AA[1,i])-1, i=lo..hi)]);
  AA[3,lo..hi] := Array([seq(iquo(AA[1,i],10^(AA[2,i]-2)), i=lo..hi)]);
  AA[4,lo..hi] := Array([seq(irem(AA[1,i],1000), i=lo..hi)]);
  return NULL;
end proc:                  
fulltask := proc(num::posint,n::posint,N::posint) local A,i;
  A := Array(1..4,1..num,order=C_order);
  Threads:-Task:-Start(null, Task=[subtask, 1, ceil(num/N), A, n],
                       seq(Task=[subtask, ceil((i-1)*num/N)+1, ceil(i*num/N), A, n],
                           i=2..N));
  return A;
end proc:

nelems := 10000000:
n := 374894756873546859847556:
N := kernelopts(numcpus):
A,RT,GCRT := CodeTools:-Usage( Threads:-Task:-Start( fulltask, nelems, n, N),
                               output=[output, realtime, gcrealtime], quiet ):
print(sprintf("total real time: %a secs   gc real time: %a secs",
              evalf[4](RT), evalf[4](GCRT)));

numcpus = 7, gcmaxthreads = 7, gcthreadmemorysize = 131072

"total real time: 17.54 secs   gc real time: 9.306 secs"

 

Download threadthingm.mw

On these Linux machines the best performing variant above was about 3.1 times as fast as the first (unoptimized) variant. The timings do change slightly upon re-execution, however. It was also faster than other versions posted so far for this Question. It would be interesting to see results for a 4-core (modern) hyperthreaded host on 64-bit MS-Windows.

Using Maple versions 17.02 (2013) through to 2019.0,

solve([x^2+y^2+z^2-4*x+6*y-2*z-11 = 0, 2*x+2*y-z = -18],
      [x, y, z], parametric, real);

           [[x = -4/3, y = -19/3, z = 8/3]]

 

  • Widget-driven interface (displayed inline in a worksheet in the Maple GUI, Maple Cloud, or Maple Player; created via source code or mouse layout):
       Embedded Components, a newer alternative to Maplets (see the sketch after this list)
    See Help topics:
       - EmbeddedComponents
       - examples,ProgrammaticContentGeneration
       - DocumentTools,Layout
  • "Clickable Math" access to commands applied to 2D output or input (right-click popup menu in Maple 2017 and earlier, and side-panel menu in Maple 2018 and later):
       customized Context-Menu entries
    See Help topics:
       - ContextMenu
       - examples,ContextMenu
    See more ideas:
       - package-specific context-menu additions
       - context-menu additions for "phasor" package (see ModuleLoad)
       - context-driven help
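
For illustration, here is a rough sketch of programmatic content generation with a single embedded Button component. This is only a sketch from memory: the exact component options (e.g. action, identity) may need adjusting for your version, so consult the Help topics listed above.

restart;
with(DocumentTools): with(DocumentTools:-Layout): with(DocumentTools:-Components):
# Build a Button component and wrap it in worksheet layout elements.
B := Button("Say hello", 'action' = "print(HELLO)"):
xml := Worksheet(Group(Input(Textfield(B)))):
# Insert the generated content inline, below the current execution group.
InsertContent(xml):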

Here are a few more workarounds.

restart;
solve({x^2 = 2, x^3 = sqrt(2)^3}, [x,W]):
subs((W=W)=NULL, %);

                         [[     (1/2)]]
                         [[x = 2     ]]

Of course, these next two are restricted to a particular kind of problem or domain.

restart;
SolveTools:-PolynomialSystem({x^2 = 2, x^3 = sqrt(2)^3});

                          /     (1/2)\ 
                         { x = 2      }
                          \          / 

restart;
solve({x^2 = 2, x^3 = sqrt(2)^3}, [x], real);

                         [[     (1/2)]]
                         [[x = 2     ]]

When solve behaves weirdly I usually look at eliminate as well.
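
For example, a minimal sketch on the same little system (eliminate returns a list: a set of substitution equations for the chosen variables, followed by the set of remaining expressions, if any, that must still vanish):

restart;
eliminate({x^2 = 2, x^3 = sqrt(2)^3}, {x});   # solve for x and report any leftover conditions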

Call the randomize command first with the same seed value.
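
For example, a minimal sketch (the seed value 12345 is arbitrary):

restart;
randomize(12345):        # fix the seed so the pseudo-random sequence is reproducible
r := rand(1 .. 10):
seq(r(), i = 1 .. 5);    # restarting and reseeding with the same value reproduces this sequence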

restart:

kernelopts(version);

`Maple 2019.0, X86 64 LINUX, Mar 9 2019, Build ID 1384062`

(1)

with(DETools):

a1:=m^2*((1-N)/(2-N))*(r/2)*Dp:
a2:=((1-N)*m^2/(2-N))*((G[r]*(Nb-Nt)/64)*(r^5/6-h^2*r/2)-(B[r]/4)*(r^3/4-h^2*r/2)*(Nt/Nb)):
a3:=a1-a2:
a4:=r^2*a3:
DE1:=r^2*diff(v(r),r,r)+r*diff(v(r),r)-(m^2*r^2+1)*v(r)=a4:

b1:=dsolve(DE1,v(r));

v(r) = BesselI(1, m*r)*_C2+BesselK(1, m*r)*_C1-(1/128)*(-1+N)*((G[r]*(-(1/3)*r^4+h^2)*Nb^2+(-(-(1/3)*r^4+h^2)*Nt*G[r]+64*Dp)*Nb-16*B[r]*(h^2-(1/2)*r^2)*Nt)*m^4+(-8*Nb^2*r^2*G[r]+8*Nb*Nt*r^2*G[r]+64*Nt*B[r])*m^2-64*Nb*G[r]*(Nb-Nt))*r/(Nb*m^4*(-2+N))

(2)

sort(N-1,order=plex(N)):  # establish the session's printing order for these sums,
sort(N-2,order=plex(N)):  # so later output shows N-1 and N-2 rather than -1+N and -2+N

 

F:=proc(u) local hu,ho;
     if u::`*` then
       hu,ho:=selectremove(t->depends(t,[r,m]) or t::rational,u);
       sort(expand(`*`(op(hu))),order=plex(r))*`*`(op(ho));
     else u; end if;
end proc:

 

collect(rhs(b1), [BesselI, BesselK, G[r], B[r]], u->F(simplify(u,size)));

BesselI(1, m*r)*_C2+BesselK(1, m*r)*_C1+((1/384)*r^5+(1/16)*r^3/m^2-(1/128)*h^2*r+(1/2)*r/m^4)*(N-1)*(Nb-Nt)*G[r]/(N-2)+(-(1/16)*r^3+(1/8)*h^2*r-(1/2)*r/m^2)*(N-1)*Nt*B[r]/(Nb*(N-2))-(N-1)*Dp*r/(2*N-4)

(3)

 

Download collect_example.mw


[edit] That F procedure is only there to get the polynomial terms in r into the form you showed. You may wish to omit it. I.e.,

collect(rhs(b1), [BesselI, BesselK, G[r], B[r]], factor);

BesselI(1, m*r)*_C2+BesselK(1, m*r)*_C1-(1/384)*(N-1)*(-m^4*r^4+3*h^2*m^4-24*m^2*r^2-192)*r*(Nb-Nt)*G[r]/(m^4*(N-2))+(1/16)*(N-1)*Nt*(2*h^2*m^2-m^2*r^2-8)*r*B[r]/(m^2*Nb*(N-2))-(1/2)*(N-1)*Dp*r/(N-2)

(1)

collect(rhs(b1), [BesselI, BesselK, G[r], B[r]], u->simplify(u,size));

BesselI(1, m*r)*_C2+BesselK(1, m*r)*_C1-(1/128)*(N-1)*(-64+(-(1/3)*r^4+h^2)*m^4-8*m^2*r^2)*r*(Nb-Nt)*G[r]/(m^4*(N-2))+(1/16)*(N-1)*Nt*(2*h^2*m^2-m^2*r^2-8)*r*B[r]/(m^2*Nb*(N-2))-(N-1)*Dp*r/(2*N-4)

(2)

 

Is this what you want?

restart;

with(plots):

R := 5; alpha := (1/9)*Pi; beta := (1/3)*Pi; n := 100; dt := 2*Pi/n;

5

(1/9)*Pi

(1/3)*Pi

100

(1/50)*Pi

C1 := plot([R*cos(t), R*sin(t), t = 0 .. 2*Pi], color = blue):

local O := [0, 0];  # declare O local, since the global name O is protected

M := [R*cos(beta), R*sin(beta)];

[5/2, (5/2)*3^(1/2)]

OM := plot([O, M]):

A := [R*cos(alpha), R*sin(alpha)];

[5*cos((1/9)*Pi), 5*sin((1/9)*Pi)]

B := [R*cos(alpha+Pi), R*sin(alpha+Pi)];

[-5*cos((1/9)*Pi), -5*sin((1/9)*Pi)]

AB := plot([A, B]):

P := [R*cos(t0*dt), R*sin(t0*dt)];

[5*cos((1/50)*t0*Pi), 5*sin((1/50)*t0*Pi)]

Q := [R*cos(dt*t0+Pi), R*sin(dt*t0+Pi)];

[-5*cos((1/50)*t0*Pi), -5*sin((1/50)*t0*Pi)]

tp := textplot([[A[1]+.3, A[2], "A"], [B[1]-.3, B[2], "B"], [M[1]+.3, M[2]+.3, "M"]]):

 

all := [seq(display(AB, OM, C1, tp,
                    plot([M, P]), plot([M, Q]),
                    plot([P, Q], color = green)),
            t0 = 0 .. n)]:

 

display(all, insequence = true);


OManim.mw

The behavior in this attachment may be what the author of Statistics:-DataSummary had intended.

QuestionDataWeights_acc.mw

A more robust way to discern the position of the weights optional parameter could be devised. I only checked against Maple 2019.0, Maple 2018.0 and Maple 2018.2, but chances are reasonable that weights corresponds to either PARAM(4) or PARAM(5) of the InertForm in earlier versions that have DataSummary.

I use ToInert/subsop/FromInert to repair what seems like a typo in the source of DataSummary. On the source line that assigns to w,ww, the first argument passed to PreProcessData is edited from X to weights. See showstat(Statistics:-DataSummary) for clarification.
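
The general pattern is sketched below on a toy procedure. The substitution here is purely illustrative (it swaps the literal 1 for 2 inside the body); the actual DataSummary repair uses subsop to edit the specific operand holding the argument passed to PreProcessData.

f := proc(x) x + 1 end proc:
inert := ToInert(eval(f)):
# Edit only within the inert statement sequence (the procedure body), then rebuild.
patched := FromInert(subsindets(inert, 'specfunc(_Inert_STATSEQ)',
                     s -> subs(_Inert_INTPOS(1) = _Inert_INTPOS(2), s))):
patched(3);   # the rebuilt procedure computes x + 2, so this returns 5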

I'll submit a bug report.

The use of nested loops to assign distinct deep copies of a Record to the Array entries seems somewhat unnatural and stilted to me.

The Array command accepts an initializer procedure as an optional argument, which can be used to populate each entry. The distinct Records (and Vectors in their fields) can be created on the fly.

This gets rid of the need for nested loops (as many as there are dimensions) to assign all the entries their Records. It gets rid of the need to copy or deep-copy a Record. The values of the indices (i,j,k) could also be passed to the constructing procedure, in case some Record field should be set up depending on their values (see the sketch below).

The Records could also share some Vector(s), if desired. The copy approach gets in the way of that.
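
For instance, here is a minimal sketch (the pos field is purely illustrative) in which the initializer procedure receives the index values:

create_nodeprop2 := (i, j, k) -> Record('inode' = 0, 'pos' = Vector([i, j, k])):
node2 := Array(1 .. 2, 1 .. 3, 1 .. 1, create_nodeprop2):
node2[2, 3, 1]:-pos;    # the Vector <2, 3, 1>, built from that entry's own indices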

restart;

create_nodeprop := () -> Record(

                    'inode'      = 0 ,
                    'nnb'        = 6 ,
                    'nind'       = Vector( 1 .. 6 ,  0  ) ,
                    'U'          = 100. ,
                    'Ul'         = 100. ,
                    'Uprev'      = 100. ,
                    'Ueq'        = 100. ,
                    'Ueql'       = 100. ,
                    'blin'       = 0.   ,
                    'q'          = 0.   ,
                    'rx'         = 2.   ,
                    'ry'         = 2.   ,
                    'rz'         = 2.   ,
                    'rfin'       = Vector( 1 .. 6 , -1. ) ,
                    'conduct'    = Vector( 1 .. 6 , -1. ) ,
                    'c'          = 2.   ,
                    'tau'        = 0.   ,
                    'k'          = 0.   ,
                    'sumconduct' = 0.   ,
                    'sharedB'    = `if`(B::rtable,ArrayTools:-Alias(B),NULL)
                  ) :

nx  := 10  :    # X direction dimension.
ny  :=  4  :    # Y direction dimension.
nz  :=  1  :    # Z direction dimension.

 

B := Vector([1,2,3,4]):

 

node := Array( 1 .. ny , 1 .. nx , 1 .. nz,
               (i,j,k) -> create_nodeprop() ) :

ic := 0 :
for k from 1 to nz do
    for j from 1 to ny do
        for i from 1 to nx do

            ic := ic + 1 :
            
            node[  j , i , k ]:-inode := ic :
            
        od : # for i from 1 to nx do
    od : # for j from 1 to ny do
od : # for k from 1 to nz do

for k from 1 to nz do
    for j from 1 to ny do
        for i from 1 to nx do

            ic := 0 :

#          +X
            if ( i + 1 <= nx ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j , i + 1 , k ]:-inode :
            end if : # if ( i + 1 <= nx )

#          -X
            if ( 1 <= i - 1 ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j , i - 1 , k ]:-inode :
            end if : # if ( 1 <= i - 1 )

#          +Y
            if ( j + 1 <= ny ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j + 1 , i , k ]:-inode :
            end if : # if ( j + 1 <= ny )

#          -Y
            if ( 1 <= j - 1 ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j - 1 , i , k ]:-inode :
            end if : # if ( 1 <= j - 1 )

#          +Z
            if ( k + 1 <= nz ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j , i , k + 1 ]:-inode :
            end if : # if ( k + 1 <= nz )

#          -Z
            if ( 1 <= k - 1 ) then
                 ic := ic + 1 :
                 node[ j , i , k ]:-nind[ic] := node[ j , i , k - 1 ]:-inode :
            end if : # if ( 1 <= k - 1 )

            node[ j , i , k ]:-nnb := ic : # Store number of neighbors.

        od : # for i from 1 to nx do
     od : # for j from 1 to ny do
od : # for k from 1 to nz do

 

node[2,3,1]:-nind

Vector(6, {(1) = 14, (2) = 12, (3) = 23, (4) = 3, (5) = 0, (6) = 0})

 

node[2,3,1]:-nind   should be [ 14 , 12 , 23 , 3 , 0 , 0 ]

 

 

node[3,6,1]:-nind

Vector(6, {(1) = 27, (2) = 25, (3) = 36, (4) = 16, (5) = 0, (6) = 0})

 

node[ 3,6,1 ]:-nind   should be [ 27 , 25 , 36 ,16 , 0 , 0 ]

 


node[ 1 , 1 , 1 ]:-inode;

1

node[ 2 , 2 , 1 ]:-inode;

12

 

We've already seen that the Records get their own distinct Vectors.

But they can also share certain designated Vectors.

 

node[1,2,1]:-sharedB[3] := 27:

node[4,3,1]:-sharedB;

Vector(4, {(1) = 1, (2) = 2, (3) = 27, (4) = 4})

 

We can even unassign the global name `B`, for re-use.

 

B := 'B';

B

node[4,3,1]:-sharedB[2] := 64:

node[1,2,1]:-sharedB;

Vector(4, {(1) = 1, (2) = 64, (3) = 27, (4) = 4})

 


Download Why_initializer.mw

Are you looking for something like this?

GAMMA(3/2);

                      1/2
                    Pi
                    -----
                      2

convert((x)!, GAMMA);

                 GAMMA(x + 1)                                                  
                                                                                                              
convert((1/2)!, GAMMA);

                      1/2
                    Pi
                    -----
                      2

restart;

kernelopts(version);

`Maple 2018.0, X86 64 LINUX, Mar 9 2018, Build ID 1298750`

G := solve(sin(x*Pi/180)=sin(x),x,allsolutions);

180*Pi*(2*_Z2+_B2)/(Pi+360*_B2-180)

b := select(is,indets(G,name),OrProp(0,1)); # 0,1 valued

{_B2}

r := select(is,indets(G,name) minus b,integer); # integer valued

{_Z2}

S:=[seq(seq(eval(G,[b[1]=i,r[1]=j]),i=[0,1]),j=[-2,-1,0,1,2])];

[-720*Pi/(Pi-180), -540*Pi/(180+Pi), -360*Pi/(Pi-180), -180*Pi/(180+Pi), 0, 180*Pi/(180+Pi), 360*Pi/(Pi-180), 540*Pi/(180+Pi), 720*Pi/(Pi-180), 900*Pi/(180+Pi)]

evalf(S);

[12.78959109, -9.263106254, 6.394795546, -3.087702085, 0., 3.087702085, -6.394795546, 9.263106254, -12.78959109, 15.43851042]

Student:-Calculus1:-Roots(sin(x*Pi/180) = sin(x), x=-4*Pi .. 4*Pi);

Warning, some roots are returned as numeric approximations

[-9.263106254, -6.394795546, -3.087702085, 0, 3.087702085, 6.394795546, 9.263106254]

fsolve(sin(x*Pi/180) = sin(x), x, -4*Pi .. 4*Pi, maxsols=20);

-9.263106258, -6.394795545, -3.087702086, 0., 3.087702085, 6.394795544, 9.263106257

 

Download solve_example.mw

You can get closer to the exact form you requested.

Be careful about the multiplication signs in the definition of Gc. Notice them in the terms k*(-b*s+1) and s*(t*c+b).

restart;

Gc := (s^2*t^2+2*s*t*x+1)*(-b*s+1)/(k*(-b*s+1)*s*(t*c+b));

(s^2*t^2+2*s*t*x+1)/(k*s*(c*t+b))

new := expand(convert(Gc,parfrac,s));

t^2*s/(k*(c*t+b))+2*t*x/(k*(c*t+b))+1/(k*s*(c*t+b))

temp := sort(collect(new,s,u->u/coeff(new,s,0)),
             order=plex(s),ascending);

1+(1/2)/(t*x*s)+(1/2)*t*s/x

new2 := coeff(new,s,0)*temp;

2*t*x*(1+(1/2)/(t*x*s)+(1/2)*t*s/x)/(k*(c*t+b))

coeff(new,s,0); # A

2*t*x/(k*(c*t+b))

1/coeff(temp,s,-1); # B

2*t*x

coeff(temp,s,1); # C

(1/2)*t/x

new3 := `%*`(coeff(new,s,0), temp):
InertForm:-Display(new3, inert=false);

2*t*x*(1+1/(2*t*x*s)+t*s/(2*x))/(k*(c*t+b))

simplify(Gc-new);
simplify(Gc-new2);
simplify(Gc-expand(new3));

0

0

0

 

Download form.mw

The problem seems to be that the back-end MapleNet server of this site cannot handle the new default rtablesize="[10, 10]" attribute in the MapleNet-Properties section of the XML-format .mw file.

If I execute the interface command using the older form (a single value) then the worksheet gets saved in Maple 2019.0 with that form in the .mw XML file's MapleNet-Properties. A subsequent upload to this site then succeeds.

#
# Recursive Fibonacci generator
#
  myFib:= proc(n::integer)
               option remember;
               if   n=1
               then return 1
               elif n=0
               then return 0
               else return myFib(n-1)+myFib(n-2):
               fi:
          end proc:

  seq(myFib(j), j=0..20);

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765

(1)

interface(rtablesize=10):

 


Download fibon2019_rtablesize.mw

I can even delete that input line (as long as I've executed it). And it still uploads OK here.

#
# Recursive Fibonacci generator
#
  myFib:= proc(n::integer)
               option remember;
               if   n=1
               then return 1
               elif n=0
               then return 0
               else return myFib(n-1)+myFib(n-2):
               fi:
          end proc:

  seq(myFib(j), j=0..20);

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765

(1)

Download fibon2019_rtablesize_deleted.mw

I can even put interface(rtablesize=10) inside my personal ~/.mapleinit file on Linux, launch the Maple 2019.0 GUI without the -s option, and save and upload OK here.

#
# Recursive Fibonacci generator
#
  myFib:= proc(n::integer)
               option remember;
               if   n=1
               then return 1
               elif n=0
               then return 0
               else return myFib(n-1)+myFib(n-2):
               fi:
          end proc:

  seq(myFib(j), j=0..20);

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765

(1)

 

Download fibon2019_rtablesize_mapleinit.mw

So for now I suppose that a last step before saving/uploading could be to execute the interface command with the older syntax of a single value for rtablesize, and then delete that line if you choose.

It's a bit of an effort, but not nearly as much of an extra effort as having to execute all 2D plots with gridlines=false just so that the back-end server doesn't render grid-lines by default (another, much older bug here).

I would really like to see the back-end server of this site updated, so that it would handle default 2D plot grid-lines properly, not have this rtablesize issue, and also allow inlining of Task regions as embedded by Explore, PlotBuilder, ImageTools:-Embed, DocumentTools:-Tabulate, etc.

How would applying Grid:-Seq to the call to f (for each M[i,j]) improve things via parallelization? I don't see how the syntax you gave makes any sense.

How about using Grid:-Map to apply f to the Matrix M? There's no guarantee it will always improve performance but that seems like the natural syntax for the job as you've described it.

Or (slightly less direct to code), how about using Grid:-Seq to generate the n*m results of f(M[i,j]), with a range argument supplied to Grid:-Seq itself?
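
For instance, here is a minimal sketch with stand-ins for your f and M (here f merely squares each entry):

f := x -> x^2:
M := Matrix(3, 4, (i, j) -> 10*i + j):
R := Grid:-Map(f, M);   # apply f to every entry of M, distributing the work across Grid nodes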

Why don't you show us your f and your Matrix M, in a complete working example as an uploaded worksheet?

Just for fun. If you really want to generate the frames with their own calls to plot with colorscheme, then the key is to have the end-points of the curve's parametrization depend on the animating variable, and to use the "linear" color scheme.

restart;
N:=75:

S := [seq(plot([t + N/10*(1 + cos(a)), N/10*(1 + sin(a)),
                a = 0 - Pi/2 - t/(N-1)*2*Pi .. 2*Pi - Pi/2 - t/(N-1)*2*Pi],
               thickness=3,
               colorscheme = ["linear", ["Blue", "Yellow"]]),
          t=0..N)]:

plots:-display(S, gridlines=false, size=[700, 200],
               scaling=constrained, insequence=true);
