acer

32597 Reputation

29 Badges

20 years, 41 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

There are several aspects to improving performance here.

One aspect is the Maple-level overhead that occurs before the external call to the compiled cuhre library. Another is what optimization gains can be had from parallelization.

It's often a good idea to make sure that the serial version runs as fast as possible before trying to parallelize.

Let's consider Library-level overhead first.
 - Don't load all of Student:-MultivariableCalculus. If you need one of its exported commands (and it appears that you don't) then call it by its explicit long-form name.
 - Ensure that the integrand is evalhf'able (yours was, so good).
 - Use the operator form calling sequence, rather than expression form, for the integrand. (That saves Maple from having to figure out just how to call `makeproc`.)

I am not sure that the usual entry-points of evalf(Int(...)) or int(...,numeric) are thread-safe at the Library level, even when using operator form. At least once I got a seg-fault from the external routine, which I suspect happened because the threads were mashing data at the Library level (but I'm not 100% sure why).

If I get time I may look at what overhead could be avoided by calling `evalf/int/MultipleInt`:-cuhre directly with an operator-form integrand, or possibly even by duplicating its methodology. I suspect that Library-level overhead is a measurable portion of even the best timing I show below.

As far as parallelization goes, there is still the distinction between the timing cost of the external code and that of the preliminary Library-level lead-up to the external call.
 - The call_external to the cuhre library may be thread-blocking (I haven't checked yet), and if so then there may be an even greater parallelization benefit in toggling that off (presuming it's thread-safe).
 - There are some small float[8] Arrays that get created each time, and it's plausible that making them thread-local to a rewritten module implementation of `evalf/int/MultipleInt`:-cuhre could allow them to be re-used and save on memory management overhead.
 - For this kind of work I'm often only satisfied when I see the amount of garbage collection ("memory used") get very small. At N=30000 it is still 500MB in the best case below. The total gc real-time may be only 0.25-0.5 sec, but I'd prefer it were closer to 0 sec.

Anyway, there's lots to consider. Maybe I'll find some spare time for it.

Your original with operator form: thread_mp_ac1.mw

Here's a larger example (starting from Carl's example), with various flavors of operator versus expression form. Run in 64bit Linux Maple 2019.0 on a quad-core i5.

Using Threads:-Add along with operator form integrands has the best performance of these.

restart; randomize(37):

L1:=10: N:=30000: xx:=Statistics:-Sample(Uniform(0,100),N):

# without Threads, expression form
CodeTools:-Usage(
   add(int(sin(beta)/(100+ZZ*sin(beta)-xx[i]*cos(beta))^(5/2),
           [beta=0..1,ZZ=0..L1],numeric,epsilon=0.01,method=_cuhre),
       i=1..N));

memory used=0.97GiB, alloc change=36.00MiB, cpu time=12.64s, real time=12.64s, gc time=391.34ms

313.5727603

restart; randomize(37):

L1:=10: N:=30000: xx:=Statistics:-Sample(Uniform(0,100),N):

# with Threads, expression form
CodeTools:-Usage(
   Threads:-Add(int(sin(beta)/(100+ZZ*sin(beta)-xx[i]*cos(beta))^(5/2),
                    [beta=0..1,ZZ=0..L1],numeric,epsilon=0.01,method=_cuhre),
                i=1..N));

memory used=1.00GiB, alloc change=414.57MiB, cpu time=16.97s, real time=4.86s, gc time=1.17s

313.5727603

restart; randomize(37):

L1:=10: N:=30000: xx:=Statistics:-Sample(Uniform(0,100),N):

# without Threads, operator form without scoping
F:=(beta,ZZ)->sin(beta)/(100+ZZ*sin(beta)-__XX*cos(beta))^(5/2):
CodeTools:-Usage(
   add(int(subs(__XX=xx[i],eval(F)),
           [0..1,0..L1],numeric,epsilon=0.01,method=_cuhre),
       i=1..N));

memory used=0.51GiB, alloc change=36.00MiB, cpu time=7.61s, real time=7.61s, gc time=209.40ms

313.5727603

restart; randomize(37):

L1:=10: N:=30000: xx:=Statistics:-Sample(Uniform(0,100),N):

# with Threads, operator form with scoping
CodeTools:-Usage(
   Threads:-Add(int((beta,ZZ)->sin(beta)/(100+ZZ*sin(beta)-xx[i]*cos(beta))^(5/2),
                    [0..1,0..L1],numeric,epsilon=0.01,method=_cuhre),
                i=1..N));

memory used=508.07MiB, alloc change=206.57MiB, cpu time=10.28s, real time=3.91s, gc time=580.84ms

313.5727603

restart; randomize(37):

L1:=10: N:=30000: xx:=Statistics:-Sample(Uniform(0,100),N):

# with Threads, operator form without scoping

F:=(beta,ZZ)->sin(beta)/(100+ZZ*sin(beta)-__XX*cos(beta))^(5/2):
CodeTools:-Usage(
   Threads:-Add(int(subs(__XX=xx[i],eval(F)),
                    [0..1,0..L1],numeric,epsilon=0.01,method=_cuhre),
                i=1..N));

memory used=0.53GiB, alloc change=318.57MiB, cpu time=10.56s, real time=3.44s, gc time=685.54ms

313.5727603

 

Download cuhreperf.mw

 

(It's more polite to include a link to your uploaded worksheet, so that nobody else has to type it all out.)

There are a variety of possible ways.

There's less need for extra care and effort if you know that j+1 won't appear anywhere but as an index of T.

restart;

PDE:=k*diff(T(x,t),x$2)=diff(T(x,t),t);

k*(diff(diff(T(x, t), x), x)) = diff(T(x, t), t)

tay2_imp:=(T[i+1,j+1]-2*(T[i,j+1])+T[i-1,j+1])/(Dx^2);

(T[i+1, j+1]-2*T[i, j+1]+T[i-1, j+1])/Dx^2

taylot:=(T[i,j+1]-T[i,j])/(Dt);

(T[i, j+1]-T[i, j])/Dt

PDE_imp:=expand(subs({diff(T(x,t),x$2)=tay2_imp,diff(T(x,t),t)=taylot},PDE));

k*T[i+1, j+1]/Dx^2-2*k*T[i, j+1]/Dx^2+k*T[i-1, j+1]/Dx^2 = T[i, j+1]/Dt-T[i, j]/Dt

# crude way
(lhs-rhs)(map[2](select,has,PDE_imp,j+1))
= (rhs-lhs)(map[2](remove,has,PDE_imp,j+1));

k*T[i+1, j+1]/Dx^2-2*k*T[i, j+1]/Dx^2+k*T[i-1, j+1]/Dx^2-T[i, j+1]/Dt = -T[i, j]/Dt

mytype:=And(typeindex(identical(T)),patindex[reverse](anything,identical(j+1))):

 

# somewhat more robust way
ans := (lhs-rhs)(map[2](select,hastype,PDE_imp,mytype))
= (rhs-lhs)(map[2](remove,hastype,PDE_imp,mytype));

k*T[i+1, j+1]/Dx^2-2*k*T[i, j+1]/Dx^2+k*T[i-1, j+1]/Dx^2-T[i, j+1]/Dt = -T[i, j]/Dt

# and for fun
collect(ans,[indets(ans,typeindex(identical(T)))[]]);

(-2*k/Dx^2-1/Dt)*T[i, j+1]+k*T[i+1, j+1]/Dx^2+k*T[i-1, j+1]/Dx^2 = -T[i, j]/Dt

Download isol_index.mw

Here is something you might be able to work with.

The basic idea is this. If both sides of the equation are products of terms then do as follows:
 - freeze the multiplicands of both sides
 - divide both sides by the intersection of those multiplicands
 - thaw and simplify

It's quite possible that the freezing/thawing might not be necessary. I put them in to strive for generality. I would not be surprised if there were examples where it helped and other examples where it hindered.

I used simplify(expand(...),size) up front to try to get both sides into simplified product form. But other examples might be helped by other means, eg. factor.

By the way, why do you start out with (black) active sum calls instead of (gray) inert Sum calls? Those sum calls don't seem to compute closed forms (ie, they return as unevaluated sum calls). But every time you manipulate or simplify the expressions you have to wait while Maple tries in vain to compute closed forms for the newer sum calls.
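That distinction can be illustrated with a small hypothetical summand (the u[m] here is just a placeholder symbol, not taken from your worksheet):

```maple
# Active sum immediately tries to find a closed form each time it is built;
# inert Sum merely represents the summation until explicitly evaluated.
restart;
sum(u[m]*exp(-2*Pi*I*m/J), m = 0 .. J-1);       # attempts (and may waste time on) evaluation
S := Sum(u[m]*exp(-2*Pi*I*m/J), m = 0 .. J-1):  # stays inert; no evaluation attempt
value(S);                                        # evaluation only when requested
```

With inert Sum the repeated simplification steps don't trigger fresh closed-form attempts.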

restart;

E2 := (sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j+1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j-1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)-4*(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l+1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l-1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L) = h^2*(sum(sum(`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)

(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j+1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j-1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)-4*(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l+1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l-1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L) = h^2*(sum(sum(`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)

E3 := E2*J*L; E4 := simplify(lhs(E3)) = simplify(rhs(E3))

J*L*((sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j+1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(j-1)*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)-4*(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l+1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)+(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*(l-1)*n/L), n = 0 .. L-1), m = 0 .. J-1))/(J*L)) = h^2*(sum(sum(`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*j*m/J)*exp(-(2*I)*Pi*l*n/L), n = 0 .. L-1), m = 0 .. J-1))

sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j+1)*L+l*n*J)*Pi/(J*L)), n = 0 .. L-1), m = 0 .. J-1)+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j-1)*L+l*n*J)*Pi/(J*L)), n = 0 .. L-1), m = 0 .. J-1)-4*(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1))+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(n*(l+1)*J+m*L*j)/(J*L)), n = 0 .. L-1), m = 0 .. J-1)+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*n*(l-1)+m*L*j)/(J*L)), n = 0 .. L-1), m = 0 .. J-1) = h^2*(sum(sum(`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1))

 

T := subsindets(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j+1)*L+l*n*J)*Pi/(J*L)), n = 0 .. L-1), m = 0 .. J-1)+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j-1)*L+l*n*J)*Pi/(J*L)), n = 0 .. L-1), m = 0 .. J-1)-4*(sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1))+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(n*(l+1)*J+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1)+sum(sum(`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*n*(l-1)+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1) = h^2*(sum(sum(`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L)), n = 0 .. L-1), m = 0 .. J-1)), specfunc({Sum, sum}), proc (S) options operator, arrow; op(1, S) end proc)

`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j+1)*L+l*n*J)*Pi/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j-1)*L+l*n*J)*Pi/(J*L))-4*`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(n*(l+1)*J+m*L*j)/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*n*(l-1)+m*L*j)/(J*L)) = h^2*`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L))

 

simplify((`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j+1)*L+l*n*J)*Pi/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*(m*(j-1)*L+l*n*J)*Pi/(J*L))-4*`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(n*(l+1)*J+L*j*m)/(J*L))+`#mover(mi("u"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*n*(l-1)+L*j*m)/(J*L)) = h^2*`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]*exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L)))*(1/exp(-(2*I)*Pi*(J*l*n+L*j*m)/(J*L))))

2*`#mover(mi("u"),mo("ˆ"))`[m, n]*(-2+cos(2*Pi*m/J)+cos(2*Pi*n/L)) = h^2*`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]

TT := simplify(expand((T)),size):
if lhs(TT)::`*` and rhs(TT)::`*` then
  TTT := map[2](map,freeze,TT);
  comm := `*`(op({op(lhs(TTT))} intersect {op(rhs(TTT))}));
  new := simplify(thaw(lhs(TTT)/comm=rhs(TTT)/comm));
end if:
new;

2*`#mover(mi("u"),mo("ˆ"))`[m, n]*(-2+cos(2*Pi*m/J)+cos(2*Pi*n/L)) = h^2*`#mover(mi("ρ",fontstyle = "normal"),mo("ˆ"))`[m, n]

 

Download common_factors_ac.mw

restart;

kernelopts(version);

`Maple 2019.0, X86 64 LINUX, Mar 9 2019, Build ID 1384062`

T:=JSON:-ParseFile("https://api.myjson.com/bins/16knzm");

[table( [( "proc" ) = "getName := proc() return 'Maple' end proc:" ] ), table( [( "proc" ) = "getAge := proc() return 2018 end proc:" ] )]

parse(T[1]["proc"],statement);

proc () return 'Maple' end proc

proc () return 'Maple' end proc

getName();

Maple

parse(T[2]["proc"],statement);

proc () return 2018 end proc

proc () return 2018 end proc

getAge();

2018

# Alternative to JSON:-ParseFile
Import("https://api.myjson.com/bins/16knzm");

[table( [( "proc" ) = "getName := proc() return 'Maple' end proc:" ] ), table( [( "proc" ) = "getAge := proc() return 2018 end proc:" ] )]

 

Download JSON_example.mw

The red spam square is visible to you because you have elevated moderator status on this site. Most members do not.

The red box only shows for postings by new members which have not yet received vote-ups (and possibly Answers, I'm not sure).

If -- as moderator -- you were to remove a posting using the red spam box then that member's account would also be deactivated.

I delete (as spam) postings from one or two advertisers (cleaning products, rivet wholesalers, etc.) each day. This forum used to be inundated with such. It was a big problem.

There is also a mechanism for moderators to delete unanswered single items (as "Inappropriate", "Duplicate", etc.) without affecting the poster's membership status.

You added material since first asking. I wrote this to address the original bit, where you seem to be asking to reverse the "product rule". First I yank off the extra 1/r^2 factor. Then I integrate wrt r, then do "by parts", and then differentiate wrt r. And then I multiply by the extra factor.

I don't know how much of that you want to automate.

I have not yet looked at the addendum.

# Using `diff`
restart;

expr:='1/r^2*diff(r^2*diff(v(r),r),r)';

(diff(r^2*(diff(v(r), r)), r))/r^2

t1,t2:=op(expr);

1/r^2, 2*r*(diff(v(r), r))+r^2*(diff(diff(v(r), r), r))

T2:=map(Int,t2,r);

Int(2*r*(diff(v(r), r)), r)+Int(r^2*(diff(diff(v(r), r), r)), r)

T2b := op(1,T2)+IntegrationTools:-Parts(op(2,T2),r^2);

r^2*(diff(v(r), r))

t1*'diff'(T2b,r)

(diff(r^2*(diff(v(r), r)), r))/r^2

# Using `Diff`
restart;

expr:='1/r^2*diff(r^2*diff(v(r),r),r)';

(diff(r^2*(diff(v(r), r)), r))/r^2

t1,t2:=op(expr);

1/r^2, 2*r*(diff(v(r), r))+r^2*(diff(diff(v(r), r), r))

new:=subs(diff=Diff,t2);

2*r*(Diff(v(r), r))+r^2*(Diff(Diff(v(r), r), r))

T2:=map(Int,new,r);

Int(2*r*(Diff(v(r), r)), r)+Int(r^2*(Diff(Diff(v(r), r), r)), r)

T2b := op(1,T2)+IntegrationTools:-Parts(op(2,T2),r^2);

(Diff(v(r), r))*r^2

t1*Diff(T2b,r)

(Diff((Diff(v(r), r))*r^2, r))/r^2

 

Download rev_prodrule.mw

The following does a "hot" edit of VectorCalculus:-D, replacing two uses of apply with direct function application.

It seems to behave so far with Maple 2016.2 through 2019.0.  Let me know if not.

restart;

 

# Hot edit, only needs running once per restart.
# Suitable for personal initialization file.
#
try
  ver:=convert(kernelopts(':-version'),string)[7..];
  ver:=parse(ver[1..StringTools:-Search(",",ver)-1]);
  if ver>=2016.0 and ver<=2019.0 then
    unprotect(:-VectorCalculus:-D);
    __Q:=ToInert(eval(:-VectorCalculus:-D)):
    if op([5,2,3,2,3,3,3,2],__Q)
       = _Inert_FUNCTION(_Inert_NAME("apply"),
                         _Inert_EXPSEQ(_Inert_LOCAL(3), _Inert_LOCAL(6))) then
      __Q:=subsop([5,2,3,2,3,3,3,2]
                  = _Inert_FUNCTION(_Inert_LOCAL(3),
                                    _Inert_EXPSEQ(_Inert_LOCAL(6))),__Q);
      print("VectorCalculus:-D updated");
    end if;
    if op([5,2,3,2,3,1,2,2,1,2,1,2,1],__Q)
       = _Inert_FUNCTION(_Inert_NAME("apply"),
                         _Inert_EXPSEQ(_Inert_PARAM(1), _Inert_LOCAL(6))) then
      :-VectorCalculus:-D:=FromInert(subsop([5,2,3,2,3,1,2,2,1,2,1,2,1]
                                     = _Inert_FUNCTION(_Inert_PARAM(1),
                                                       _Inert_EXPSEQ(_Inert_LOCAL(6))),__Q));
    end if;
  end if;
catch:
finally
  __Q:='__Q'; protect(:-VectorCalculus:-D);
end try:

"VectorCalculus:-D updated"

 

with(VectorCalculus):

assume(x, 'real');
assume(y, 'real');

 

D[1]((x,y)->x^2-y^2);

proc (x, y) options operator, arrow; 2*x end proc

D[2]((x,y)->x^2-y^2);

proc (x, y) options operator, arrow; -2*y end proc

D(x->sin(x^2));

proc (x) options operator, arrow; 2*x*cos(x^2) end proc

 

Download VCDhotedit.mw

The regression bug in apply has been reported, along with the unnecessary use of apply in VectorCalculus:-D.

This is just for general interest. I expect the OP may be busy implementing his own Emacs Lisp-based CAS (since Maple, Mathematica, Maxima, Sympy, and Axiom have all failed him) so as to finish implementing his significant improvement over FFT.

It seems that plot is having difficulty recognizing or handling some jump discontinuities (eg, at x=0.0) when that is used as the left end-point of the plotted domain.

Here I'm using Maple 12.02 to try and match your setup.

For three of the plots, if I supply a slightly wider domain (range for x) then the discont=true option seems to kick in ok and the jump is not displayed.

For plt02 I had to use an even wider range, and that one may be suffering a slightly different issue.

ps. I changed the B value for plt11 from 21 to 1, to agree with the other in that group. Perhaps a typo.

It might be possible that in modern Maple the issues could be handled by using discont=[fdiscont=[...]] with some judicious choice of options. I didn't fiddle with that because Maple 12 doesn't offer it.
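The domain-widening workaround can be sketched with a hypothetical jump function (this f is a placeholder for illustration, not the expression in the actual worksheet):

```maple
# Placeholder expression with a jump discontinuity at x = 0.
f := piecewise(x <= 0, -1, exp(-x)):
plot(f, x = 0 .. 2, discont = true);      # jump sits exactly at the left end-point
plot(f, x = -0.01 .. 2, discont = true);  # slightly wider domain; the jump is not drawn
```

The idea is simply to move the troublesome discontinuity into the interior of the plotted range, where discont=true can deal with it.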

I see that you've ignored my previous suggestion to compute all these plots much faster using fsolve to solve all the roots at once (per x value). Oh well.

file_discont_ac.mw

 

Is this adequate for your purposes?

with(inttrans):
sinc := x->piecewise(x = 0, 1, sin(Pi*x)/x/Pi):
S := x->sinc(x):
SE := unapply(simplify(sinc(x)*exp(-x^2)),x);
abs(fourier(convert(SE(x),Heaviside),x,s));
plot(%,s=0..5);

The original definition of SE := x->sinc(x)*exp(-x^2) gets the same plot, if used above instead. The main difference was the conversion of the piecewise to Heaviside before applying fourier. Hopefully I have not misunderstood.

I don't really understand the point of constructing an operator for SE just to plot SE(x), when an expression would do. Maybe you need/prefer an operator later on...

[edit] I see that Carl posted his working revision only moments after I posted this; doubtless we were composing at the same time. The only difference being that Carl put the Heaviside into the defn of SE while I left SE with a piecewise and only converted for the purpose of calling fourier.[/edit]

It looks like gfun:-listtorec can handle this.

You can also use rsolve(...,makeproc) to get a procedure.

Or you can construct a procedure from the recursive formula (with option remember for better efficiency). I could cut and paste the recursive formula obtained from listtorec, but for fun I build it programmatically below.

restart;

raw := gfun:-listtorec([1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365], a(n));

[{a(n+2)-a(n+1)-2*a(n), a(0) = 1, a(1) = 3}, ogf]

rsolve(raw[1], a(n));

(4/3)*2^n-(1/3)*(-1)^n

F := rsolve(raw[1], a(n), makeproc):

seq(F(i), i=0..11);

1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731

restart;

raw := gfun:-listtorec([1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365], a(n));

[{a(n+2)-a(n+1)-2*a(n), a(0) = 1, a(1) = 3}, ogf]

temp1,temp2 := selectremove(type,raw[1],`=`):
G := subs(a=procname,unapply(rhs(isolate(subs(n=n-2,temp2[1]),a(n))),
                             n,proc_options=[remember]));
assign(subs(a=G,temp1)):

proc (n) option remember; procname(n-1)+2*procname(n-2) end proc

seq(G(i), i=0..11);

1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731

 

Download listtorec.mw

I converted your Post to a Question. Please submit your future questions as Questions rather than Posts.

restart;

f := (x,y) -> x^2-y^2;

proc (x, y) options operator, arrow; x^2-y^2 end proc

Vector(2,(i)->D[i](f));

Vector(2, {(1) = proc (x, y) options operator, arrow; 2*x end proc, (2) = proc (x, y) options operator, arrow; -2*y end proc})

Vector[row](2,(i)->D[i](f));

Vector[row](2, {(1) = proc (x, y) options operator, arrow; 2*x end proc, (2) = proc (x, y) options operator, arrow; -2*y end proc})

Vector(nops([op(1,eval(f))]), (i)->D[i](f));

Vector(2, {(1) = proc (x, y) options operator, arrow; 2*x end proc, (2) = proc (x, y) options operator, arrow; -2*y end proc})

 

Download Jac_proc.mw

It's not entirely clear to me from your question whether you want a Vector of procedures or a Vector-valued procedure (ie. a single procedure which returns a Vector of the derivatives evaluated at subsequent input points). Your earlier Question about a bug under assume and with VectorCalculus loaded looked as if you wanted a Vector of procedures.

But often a Vector-valued procedure is convenient to utilize subsequently. Hence my use of codegen[JACOBIAN] below.

For example (and accepting a procedure with an arbitrary number of arguments):

restart;

makeJ := (F::procedure)->Vector(nops([op(1,eval(F))]), (i)->D[i](F)):

g := (x,y,z,a) -> x^2-y^2+sin(a*z);

proc (x, y, z, a) options operator, arrow; x^2-y^2+sin(a*z) end proc

gJ := makeJ(g);

Vector(4, {(1) = proc (x, y, z, a) options operator, arrow; 2*x end proc, (2) = proc (x, y, z, a) options operator, arrow; -2*y end proc, (3) = proc (x, y, z, a) options operator, arrow; a*cos(a*z) end proc, (4) = proc (x, y, z, a) options operator, arrow; z*cos(a*z) end proc})

map(u->u(a,b,c,d), gJ);

Vector(4, {(1) = 2*a, (2) = -2*b, (3) = d*cos(d*c), (4) = c*cos(d*c)})

map(u->u(1.2, c, 3, 2/5), gJ);

Vector(4, {(1) = 2.4, (2) = -2*c, (3) = (2/5)*cos(6/5), (4) = 3*cos(6/5)})

altmakeJ := (F::procedure)->subs(array=Vector,
                                 codegen[':-JACOBIAN']( [F] )):

gJ := altmakeJ( g );

proc (x, y, z, a) local df, grd, t1, t2, t4; t1 := x^2; t2 := y^2; t4 := sin(a*z); df := Vector(1 .. 3); df[3] := 1; df[2] := -1; df[1] := 1; grd := Vector(1 .. 4); grd[1] := 2*df[1]*x; grd[2] := 2*df[2]*y; grd[3] := df[3]*a*cos(a*z); grd[4] := df[3]*z*cos(a*z); return grd end proc

gJ(a,b,c,d);

Vector(4, {(1) = 2*a, (2) = -2*b, (3) = d*cos(d*c), (4) = c*cos(d*c)})

gJ(1.2, c, 3, 2/5);

Vector(4, {(1) = 2.4, (2) = -2*c, (3) = (2/5)*cos(6/5), (4) = 3*cos(6/5)})

 

Download Jac_proc_alt.mw

You haven't shown the full example that demands loading all of VectorCalculus, or using assume instead of, say, assuming. So it's not possible to properly address subsequent -- but as yet unstated -- issues you may have.

However,

restart;

kernelopts(version);

`Maple 2018.0, X86 64 LINUX, Mar 9 2018, Build ID 1298750`

f := (x, y) -> x^2-y^2;

proc (x, y) options operator, arrow; x^2-y^2 end proc

with(VectorCalculus):

assume(x, 'real');
assume(y, 'real');

:-D[1](f);

proc (x, y) options operator, arrow; 2*x end proc

:-D[2](f);

proc (x, y) options operator, arrow; -2*y end proc

:-D[1]( unapply(x^2-y^2, [x,y]) );

proc (x, y) options operator, arrow; 2*x end proc

:-D[2]( unapply(x^2-y^2, [x,y]) );

proc (x, y) options operator, arrow; -2*y end proc

 

Download VCD.mw

eqs := {a + b + c = 1,
        a^2 + b^2 + c^2 = 2,
        a^3 + b^3 + c^3 = 3}:

a^4 + b^4 + c^4 = eval(R,solve(eqs union {a^4 + b^4 + c^4=R}));

                        4    4    4   25
                       a  + b  + c  = --
                                      6 
a^5 + b^5 + c^5 = eval(R,solve(eqs union {a^5 + b^5 + c^5=R}));

                         5    5    5    
                        a  + b  + c  = 6

I recall having the same questions, when ColorTools first appeared.

At that time I was able to find, in the internal code, a transformation Matrix corresponding to D50 (which confirmed a few empirical tests). In a subsequent release I noticed that had changed to correspond to D65. But there seems to be little or no documentation about that.

And I believe that by "RGB" the ColorTools package means sRGB, not a wider gamut like Adobe RGB. (Some monitors allow one to toggle amongst such. But I have never tested what effect that might have on what the Maple GUI shows. A missing bit of knowledge is which color space the Java GUI's renderer is actually using.)

Another interesting detail might be just how ColorTools makes certain colors (when converted to RGB from, say, Lab or Luv) "displayable". It's not always clear how it maps back into the displayable gamut, ie. does it regress linearly back toward the white point, or what?

Eventually I wrote my own converters (also via the XYZ colorspace) because I wanted them to Compile and be fast on datatype=float[8] rtables representing many colors (eg, images). That gives me full control over the mathematical conversion, though details of the GUI's renderer may still be a missing link in the chain if I want to visualize.
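For general interest, the kind of sRGB-to-XYZ conversion involved can be sketched as follows, using the widely published sRGB linearization and D65 matrix. This is only an illustration of the math, not the compiled float[8] converter mentioned above.

```maple
# Sketch: one sRGB triple (components in 0..1) converted to CIE XYZ,
# relative to the D65 reference white.
linearize := c -> `if`(c <= 0.04045, c/12.92, ((c + 0.055)/1.055)^2.4):
M := Matrix([[0.4124, 0.3576, 0.1805],
             [0.2126, 0.7152, 0.0722],
             [0.0193, 0.1192, 0.9505]]):
sRGBtoXYZ := (r, g, b) -> M . Vector(map(linearize, [r, g, b])):
sRGBtoXYZ(1.0, 1.0, 1.0);  # roughly the D65 white point (0.9505, 1.0, 1.089)
```

A fast bulk version would instead loop over a float[8] Array of triples in a compilable procedure, avoiding per-color Matrix-Vector allocations.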

What form do you want for the results?

restart;

kernelopts(version);

`Maple 2015.2, X86 64 LINUX, Dec 20 2015, Build ID 1097895`

f:=(coth(x)^(1/3)-tanh(x)^(1/3))*(coth(x)^(2/3)+tanh(x)^(2/3)+1);

(coth(x)^(1/3)-tanh(x)^(1/3))*(coth(x)^(2/3)+tanh(x)^(2/3)+1)

simplify(expand(f)) assuming x>0;

1/(sinh(x)*cosh(x))

h:=simplify(combine(f,trig)) assuming x>0;

1/(sinh(x)*cosh(x))

combine(h);

2/sinh(2*x)

g:=(coth(x)^(1/3)-tanh(x)^(1/3))^2+4;

(coth(x)^(1/3)-tanh(x)^(1/3))^2+4

simplify(map(simplify,combine(g)),trigh) assuming x>0;

2+tanh(x)^(2/3)+coth(x)^(2/3)

P:=u->simplify(map(simplify,expand(simplify(expand(u)))),trigh):

P(f) assuming x>0;

1/(sinh(x)*cosh(x))

P(g) assuming x>0;

cosh(x)^(2/3)/sinh(x)^(2/3)+sinh(x)^(2/3)/cosh(x)^(2/3)+2

Q:=u->simplify(combine(convert(u,tanh)),trigh):

Q(f) assuming x>0;

1/tanh(x)-tanh(x)

Q(g) assuming x>0;

1/tanh(x)^(2/3)+tanh(x)^(2/3)+2

 

Download trighsimp.mw
