acer

32632 Reputation

29 Badges

20 years, 45 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Christopher2222 The eventual slowdown in the Standard GUI that you described when animating the globe also occurs in later Maple versions. I believe that it is due (effectively) to some kind of memory leak in the GUI that occurs when displaying many 3D plots/frames, which you might observe using the MS-Windows Task Manager or the Unix/Linux `top` utility.

I encourage you (too) to submit an SCR on it. The more such problems are reported, the better the chance it might get fixed.

Consider x^3-4*x+2=0 for which the solve command returns three explicit expressions containing I. Hence, some simplification may be necessary before testing for the presence of I.

restart:

p := x^3-4*x+2:
#plot(p, x=-3..3);

S := [solve( p=0, x )]:

remove( has, S, I );

                               []

remove(has, map(simplify,S,constant), I):
evalf(%);

           [1.675130870, -2.214319743, 0.5391888726]

Note that simplify(...,constant) may not suffice in general. And `is` will depend upon Digits (akin to fnormal after evalf), so it can also be caught out. Applying evalc@Im turns this into zero-testing of the imaginary part.
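For comparison outside Maple, here is a rough Python sketch of the same phenomenon: Cardano's formula for this cubic passes through complex intermediates even though all three roots are real (the casus irreducibilis), so one ends up zero-testing the imaginary parts, much like applying evalc@Im. The tolerance and construction here are my own illustrative choices.

```python
import cmath

# Cardano's method for x^3 + p*x + q = 0, here with p=-4, q=2.
# The discriminant quantity is negative, so all three roots are real,
# yet the radical formulas go through complex intermediates -- the same
# reason Maple's solve returns explicit expressions containing I.
p, q = -4.0, 2.0
disc = (q/2)**2 + (p/3)**3            # negative => three distinct real roots
u = (-q/2 + cmath.sqrt(disc))**(1/3)  # principal complex cube root
v = -p/(3*u)                          # companion cube root, with u*v = -p/3
w = cmath.exp(2j*cmath.pi/3)          # primitive cube root of unity
roots = [u*w**k + v*w**(-k) for k in range(3)]

# The analogue of evalc@Im as zero-testing: accept a root as real
# only when its imaginary part is numerically zero.
tol = 1e-12
real_roots = sorted(r.real for r in roots if abs(r.imag) < tol)
print(real_roots)
```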

acer

@Markiyan Hirnyk I was just querying what you wanted. It's not hard to get it as a sum (of products of two combination counts, i.e. combinat[numbcomb] calls).

N:=(s,k)->add(combinat[numbcomb](s, i)*combinat[numbcomb](i, k-i),i=0..s);

But you want a closed form for that as a symbolic sum? Can it be in terms of calls to GAMMA and hypergeom (...I'm guessing not)?

Is it just this, for n=0,1,2,3,4,...?

 PolynomialTools:-CoefficientList(expand( (1+x+x^2)^n ),x)

So, do you want something in terms of factorials?
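To illustrate the relationship numerically (a sketch in Python rather than Maple; the helper names are mine), the coefficient list of (1+x+x^2)^n does agree with that sum of products of combination counts:

```python
from math import comb

def trinomial_coeffs(n):
    """Coefficient list of (1 + x + x^2)^n, by repeated polynomial product."""
    coeffs = [1]
    for _ in range(n):
        new = [0]*(len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c; new[i+1] += c; new[i+2] += c
        coeffs = new
    return coeffs

def N(s, k):
    # sum over i of C(s,i)*C(i,k-i); the guard mimics numbcomb
    # returning 0 for out-of-range second arguments.
    return sum(comb(s, i)*comb(i, k-i) for i in range(s+1) if 0 <= k-i <= i)

n = 4
direct = trinomial_coeffs(n)
via_formula = [N(n, k) for k in range(2*n + 1)]
print(direct)
print(via_formula)
```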

acer

@J4James Since eq2 and bc are both sets, pass the DE system to the dsolve command as the argument,

eq2 union bc

rather than as,

{eq2 union bc}
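The distinction matters because the extra braces produce a set whose single element is itself a set. A rough analogue in Python (frozensets standing in for the Maple sets; purely illustrative):

```python
# eq2 and bc each stand in for a set of equations.
eq2 = frozenset({"ode1", "ode2"})
bc  = frozenset({"bc1", "bc2"})

flat   = eq2 | bc      # one flat set of four equations -- what dsolve wants
nested = {eq2 | bc}    # a set containing ONE element (which is itself a set)

print(len(flat), len(nested))
```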

@Mac Dude Try setting up your links to the saved help-page using Command,package in the hyperlink. So, its hover-over balloon would say, "Help:Command,package".

(It seemed to work ok for me in Maple 15.01, using a link with either Package,command or Package/command. I saved the help page as topic Package/command. Perhaps if you posted the precise `makehelp` incantation that you used to save the page?)

Which applications (from the Application Center) are these that you looked at, which run in Maple V but not in modern Maple?

Did you write that they run in the Classic GUI, but not in the Standard GUI?

acer

Hi Axel,

I'm not sure that it is ok to measure the accuracy of the approximant by evaluating it under Digits=18 of radix-10 working precision. The Questioner mentioned using the result as C source.

Please pardon any mistakes, but here is some attempt at accuracy measurement:

 

restart:

H := proc(z) option inline;
 (.466433940928848490e-3+(1.11523043757015811+(-1.72989528300231501
 +(4.49729788979802074+(-.319393610864014770+(.545253820724829192e-2
 -.228858722931549036e-4*z)*z)*z)*z)*z)*z)/(.139930182278641728e-2
 +(3.34513159198135990+(-6.52796503942793992+(15.5613951451798833+
 (-6.34533642326600689+(.374356830875784296+(-.590832471659976894e-2
 +(.235408172559887276e-4+.461516094568396518e-9*z)*z)*z)*z)*z)*z)*z)*z);
end proc:
approx:= proc(x::float) option inline; H(x^2)*x^3+x end proc:
capprox:=Compiler:-Compile(approx):

J := proc(x::float)
  local t1;
  t1 := x^2;
  (.4290978931547805+(-0.2150922379775017e-1+(0.1194845227917730e-4
   +(0.1351777597551553e-6+0.1467942640582486e-8*t1)*t1)*t1)*t1)*t1*x
   /(1.287293679464341+(-.5794451431789617+0.2339492595289925e-1*t1)*t1)
   +x;
end proc:
cJ:=Compiler:-Compile(J):

populate_approx:=proc(V::Vector(datatype=float[8]),
                      N::integer[4])
  local i::integer[4], q::float[8], x::float[8];
  q:=Pi/(4.0*N);
  for i from 1 to N do
     x:=(i-1.0)*q;
     V[i]:=approx(x);
  end do:
  NULL:
end proc:
cpopulate_approx:=Compiler:-Compile(populate_approx):

populate_J:=proc(V::Vector(datatype=float[8]),
                 N::integer[4])
  local i::integer[4], q::float[8], x::float[8], t1::float[8];
  q:=Pi/(4.0*N);
  for i from 1 to N do
    x:=(i-1.0)*q;
    t1 := x^2;
    V[i]:=(.4290978931547805+(-0.2150922379775017e-1+(0.1194845227917730e-4
           +(0.1351777597551553e-6+0.1467942640582486e-8*t1)*t1)*t1)*t1)*t1*x
           /(1.287293679464341+(-.5794451431789617+0.2339492595289925e-1*t1)*t1)
           +x;
  end do:
  NULL:
end proc:
cpopulate_J:=Compiler:-Compile(populate_J):

populate_approx_diff:=proc(V::Vector,N::posint)
   local i, q, x;
   q:=evalf(Pi/(4.0*N));
   for i from 1 to N do
      x:=(i-1.0)*q;
      V[i]:=abs(capprox(x)-tan(x));
   end do:
   NULL:
end proc:

populate_J_diff:=proc(V::Vector,N::posint)
   local i, q, x, t1;
   q:=evalf(Pi/(4.0*N));
   for i from 1 to N do
      x:=(i-1.0)*q;
      V[i]:=abs(cJ(x)- tan(x));
   end do:
   NULL:
end proc:

 

We can compute a large number of values using these candidate replacements for the tangent function. By inlining the formulae right into procedures that do the calculations in-place on float[8] Vectors, we can use Compiler:-Compile and obtain compiled (from C) programs that run until finished without pausing to call back to Maple.

 

So these next timings illustrate how quickly each function can do (nc=10^7) executions when compiled from C source. There is just the initial overhead of making the external call, but no extra overhead per individual execution of the compiled tangent candidates.

 

Unsurprisingly, compiled `J` is faster at populating the double-precision array, as it does fewer arithmetic operations than does compiled `approx`. (Another potential benefit of "smaller" code may be that it's easier to shoehorn into the limited space on an embedded device.)

 

nc:=10^7:

v1:=Vector[row](nc,datatype=float[8]):
CodeTools:-Usage( cpopulate_approx(v1,nc) );

memory used=0.51KiB, alloc change=0 bytes, cpu time=375.00ms, real time=368.00ms

v2:=Vector[row](nc,datatype=float[8]):
CodeTools:-Usage( cpopulate_J(v2,nc) );

memory used=0.51KiB, alloc change=0 bytes, cpu time=234.00ms, real time=238.00ms

 

Now we wish to compute the absolute errors of our candidate functions.

 

If we compared against values of tan(x) computed at the same hardware double precision then we would not be taking into account that values from the double-precision builtin `tan` also have some error. We want to compare our candidate functions against more correct values of tangent, and avoid inadvertently conjoining any errors from the builtin. Hence we should raise the working precision when computing the baseline tangent values, and it's also a good idea to do the subtraction at higher precision. Below, we'll use Digits=30, which ought to suffice.

 

But it's important to note that the candidate replacements for tangent should themselves not be run under higher working precision, since the end goal is to use them in compiled C, say. The easiest way to get their values is to run Maple's own Compiler:-Compile against them, which generates something similar to CodeGeneration[C] output and then compiles/links it into a shared library that Maple can access via its external calling. All of that process is normally hidden by the Compiler.

 

Digits:=30:

nc:=10^5:

v1diff:=Vector(nc,datatype=float[8]):
populate_approx_diff(v1diff,nc):

v2diff:=Vector(nc,datatype=float[8]):
populate_J_diff(v2diff,nc);

 

We can find the largest absolute error in both of those collections of computed values. These are very close, about 2.5 ulps each.

 

max(v1diff), max(v2diff);

0.252205478780976976e-15, 0.248056073686935993e-15

 

We can compute the average absolute error in both of those collections of computed values. Again, they are quite close.

 

[add(t, t in v1diff), add(t, t in v2diff)]/nc: evalf[5](%)[];

0.28429e-16, 0.28208e-16
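As a rough cross-check outside Maple (purely illustrative, and with the caveat made above: here the baseline is the double-precision library tan itself, so its own ulp error gets conjoined), a Python transcription of `J` gives maximum and average absolute errors of the same order:

```python
import math

def J(x):
    # Transcription of the second tan approximant on [0, Pi/4),
    # coefficients copied from the Maple procedure above.
    t1 = x*x
    num = (.4290978931547805 + (-0.02150922379775017 + (1.194845227917730e-5
          + (1.351777597551553e-7 + 1.467942640582486e-9*t1)*t1)*t1)*t1)*t1*x
    den = 1.287293679464341 + (-0.5794451431789617 + 0.02339492595289925*t1)*t1
    return num/den + x

nc = 10**5
qq = math.pi/(4.0*nc)
errs = [abs(J((i - 1.0)*qq) - math.tan((i - 1.0)*qq)) for i in range(1, nc + 1)]
maxerr, avgerr = max(errs), sum(errs)/nc
print(maxerr, avgerr)
```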

 

We can also plot the two collections of absolute errors.

 

xpts:=Vector(nc,i->evalhf((i-1.0)*Pi/(4.0*nc)),datatype=float[8]):

plots:-display(Array([
    plot(Matrix([xpts,v1diff],datatype=float[8]),style=point,
         view=0..3e-16,symbolsize=1),
    plot(Matrix([xpts,v2diff],datatype=float[8]),style=point,
         view=0..3e-16,symbolsize=1)
                      ]));

 

Download compiledcompared2.mw

acer

Code that produced the second procedure:

restart:

corra,corrm:=x,x^3:

usefun:=proc(T) local t;
   Digits:=70;
   t:=sqrt(T);
   evalf((tan(t)-eval(corra,x=t))/eval(corrm,x=t));
end proc:

Digits:=19:

this:=numapprox:-minimax('usefun'(x),x=0..sqrt(Pi/3),[4,2],1,'maxerror'):

ans:=subs(x=x^2,this)*corrm+corra:
tanfun:=unapply(evalf[16](ans),[x::float],'proc_options'=[]):
tanfun:=codegen[optimize](tanfun);

Note that generating such approximations at a high working precision (e.g. Digits=19, or 24, or 40, etc) and then rounding the coefficients to 16 (or 18) decimal digits is a bit dodgy. The final rounded procedure will not give the same accuracy as the original one with the high number of decimal-digit coefficients, and that is even more so if the rounded procedure is compiled while the original is run at some higher working precision. So the value of `maxerror` as computed above is not a promise of accuracy about the final approximant. It's not even clear that finding the pair [m,n] of degrees for numerator and denominator which produces a minimal `maxerror` will lead to the most accurate procedure after rounding coefficients to 16 or so places.
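The degradation from rounding coefficients can be seen directly. Here is an illustrative Python sketch (not the minimax computation itself; the helper names are mine): take the double-precision coefficients of the `J` approximant above and round them to roughly 8 significant digits, and the maximum absolute error against the library tan jumps by several orders of magnitude:

```python
import math

def make_J(sig=None):
    # Coefficients of the J approximant; optionally round each to
    # sig+1 significant digits to mimic coarser coefficient rounding.
    c = [.4290978931547805, -0.02150922379775017, 1.194845227917730e-5,
         1.351777597551553e-7, 1.467942640582486e-9,
         1.287293679464341, -0.5794451431789617, 0.02339492595289925]
    if sig is not None:
        c = [float(f"{v:.{sig}e}") for v in c]
    a0, a1, a2, a3, a4, b0, b1, b2 = c
    def J(x):
        t1 = x*x
        return ((a0 + (a1 + (a2 + (a3 + a4*t1)*t1)*t1)*t1)*t1*x
                / (b0 + (b1 + b2*t1)*t1) + x)
    return J

def max_err(J, n=20000):
    q = math.pi/(4.0*n)
    return max(abs(J(i*q) - math.tan(i*q)) for i in range(n))

e_full = max_err(make_J())
e_rounded = max_err(make_J(sig=7))   # coefficients kept to 8 significant digits
print(e_full, e_rounded)
```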

@PatrickT If the Array of color values has fewer entries than are needed for the grid or mesh of the plot structure's surface representation then one can get that error message from the GUI, and the surface may even be rendered as entirely black.

If you create a 3D plot with m*n surface data points (by passing the grid=[m,n] option to plot3d, say) then the float[8] Array of color values has to contain at least three times as many values, i.e. m*n*3 entries. Each data point on the surface gets a triple of values when coloring the surface by the RGB scheme. These triples can be taken from the RGB layers of an image Array.

If you start off with a 600x400 pixel image then you'll need to do at least these things: 1) create a 3D plot using the grid=[m,n] option where m and n are in the same ratio as 600 and 400, and 2) resize the image's Array to match the dimensions m*n*3. That's why I like using ImageTools:-Preview, since it rejigs the Array and can set the size to match what's in the plot.
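The resizing step can be mimicked outside Maple. A minimal Python sketch (nearest-neighbour only; ImageTools:-Preview does its own rejigging, so this is just an illustration of the size bookkeeping) showing that the resampled image supplies exactly m*n*3 color values:

```python
def resize_to_grid(img, m, n):
    """Nearest-neighbour resample of an image (a list of h rows, each a
    list of w [r,g,b] triples) down to m rows by n columns, so that it
    supplies exactly m*n*3 color values for an m-by-n plot grid."""
    h, w = len(img), len(img[0])
    return [[img[(i*h)//m][(j*w)//n] for j in range(n)] for i in range(m)]

# A synthetic 600x400-pixel "image" (400 rows of 600 RGB triples).
img = [[[x/600, y/400, 0.5] for x in range(600)] for y in range(400)]

small = resize_to_grid(img, 60, 90)   # 60 and 90 keep the 400:600 ratio
total = sum(len(px) for row in small for px in row)
print(len(small), len(small[0]), total)
```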

You should be able to use single forward-slashes instead of double backslashes in Maple on Windows. It works for me.

Perhaps you were using the Classic GUI when you avoided implosion during the animation.

The image of planet Mars which your comment cites (as a URL) is not in the right projection for the simple coords=cylindrical approach that John showed. And the projection is not the only problem: the image also has a lot of extraneous white space in its wide side borders, plus text, etc. All of that would get used in the transferral, and of course that's not what you want.

The site that John mentioned also has images of other planets of our solar system.

 

restart:

 

# Image source:
# http://www.vendian.org/mncharity/dir3/planet_globes/TemporaryURL/mars0_src_smaller.jpg

im:=ImageTools:-Read(cat(kernelopts('homedir'),
                     "/Downloads/mars0_src_smaller.jpg")):

n := 128;

128

p:=ImageTools:-Preview(ImageTools:-Transpose(im),2*n,n):

q:=plot3d(1, x=0..2*Pi, y=0..Pi, coords=spherical, style=surface,
          grid=[2*n,n]):

plots:-display(PLOT3D(MESH(op([1,1],q), op([1,4..-1],p)), op(2..-1,q)));

Download marsglobe1.mw 

 
