dharr

Dr. David Harrington

University of Victoria
Professor or university staff
Victoria, British Columbia, Canada

I am a retired professor of chemistry at the University of Victoria, BC, Canada. My research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

MaplePrimes Activity


These are replies submitted by dharr

@Alfred_F I edited to show that the wolf always catches the goat (at least for the equal-speed case). The simple formulation of the equations comes from Barton J C and Eliezer C J, J. Austral. Math. Soc. Ser. B 41 (2000) 358, but with both speeds constant. In this case the goat can't follow any evasive strategy and is sometimes moving towards the wolf. Barton and Eliezer's formulation keeps the wolf/goat speed ratio constant but, I think, allows each speed to vary in time, though that is also not a realistic scenario.

It's a nice feature of dsolve that you can feed it a mix of DEs and algebraic equations and it finds a numeric solution (via DAE methods); you don't have to eliminate variables yourself as a preliminary step.
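For example, here is a minimal sketch (a toy semi-explicit DAE of my own, not the pursuit problem) of handing dsolve one ODE together with one algebraic equation:

restart;
sys := {diff(x(t), t) = -y(t), y(t) = x(t) + sin(t)};  # one ODE plus one algebraic equation
ics := {x(0) = 1, y(0) = 1};                           # initial values consistent with the constraint
sol := dsolve(sys union ics, numeric);                 # dsolve should detect the DAE and use a DAE method
sol(2.0);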

"he and the goat will be moving at the same speed at the beginning of the hunt and throughout the pursuit" - I assume this means (1) a constant speed for each with each the same, and not (2) the speeds may vary with time, so long as at any time the wolf's speed is equal to the goat's speed.

Are we to assume the wolf never "second guesses" the goat's position, so that the wolf's velocity vector is always pointed directly at the goat? (I assume this is what @janhardo means by focused.)

@salim-barzani I understand the general strategy of what you are doing, but not exactly what you want. My best guess is that you want to convert from one Jacobi function to another. The help page shows how to do this; some conversions are simple and some are not.

restart

de := (diff(G(xi), xi))^2 = lambda[0]+lambda[2]*G(xi)^2+lambda[4]*G(xi)^4

(diff(G(xi), xi))^2 = lambda[0]+lambda[2]*G(xi)^2+lambda[4]*G(xi)^4

We only get the JacobiSN solution

snsol := rhs(select(has, [dsolve(de)], JacobiSN)[])

JacobiSN((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2))*lambda[0]*2^(1/2)/(lambda[0]*(-lambda[2]+(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)))^(1/2)

nssol := convert(snsol, JacobiNS)

lambda[0]*2^(1/2)/(JacobiNS((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2))*(lambda[0]*(-lambda[2]+(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)))^(1/2))

amsol := convert(snsol, JacobiAM)

sin(JacobiAM((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)))*lambda[0]*2^(1/2)/(lambda[0]*(-lambda[2]+(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)))^(1/2)

From the help page

JacobiDN(z, k)^2 = 1-k^2*JacobiSN(z, k)^2;
dntosnzk := JacobiSN(z, k) = solve(%, JacobiSN(z, k))[1];
dntosn := eval(dntosnzk, {k = sqrt(-(2*(lambda[2]*sqrt(-4*lambda[0]*lambda[4]+lambda[2]^2)+2*lambda[0]*lambda[4]-lambda[2]^2))*lambda[0]*lambda[4])/(lambda[2]*sqrt(-4*lambda[0]*lambda[4]+lambda[2]^2)+2*lambda[0]*lambda[4]-lambda[2]^2), z = (1/2)*sqrt(2*sqrt(-4*lambda[0]*lambda[4]+lambda[2]^2)-2*lambda[2])*xi+c__1})

JacobiDN(z, k)^2 = 1-k^2*JacobiSN(z, k)^2

JacobiSN(z, k) = (1-JacobiDN(z, k)^2)^(1/2)/k

JacobiSN((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)) = (1-JacobiDN((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2))^2)^(1/2)*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)/(-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)

dnsol := subs(dntosn, snsol)

(1-JacobiDN((1/2)*(2*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)-2*lambda[2])^(1/2)*xi+c__1, (-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)/(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2))^2)^(1/2)*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*2^(1/2)/((-2*(lambda[2]*(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)+2*lambda[0]*lambda[4]-lambda[2]^2)*lambda[0]*lambda[4])^(1/2)*(lambda[0]*(-lambda[2]+(-4*lambda[0]*lambda[4]+lambda[2]^2)^(1/2)))^(1/2))


Download convertJacobi.mw

@salim-barzani "can we convert it from trigonometric to that shape?". No, you can go from Jacobi to trig by specifying m, but this loses information, so you can't go the other way. If dsolve gives Jacobi, then you can convert it to trig by specifying m.

"and there is any way to do automatically test all that solution in tht table for ode?" If you have the table with the conditions on the parameters, then you just do odetest as you have been. Are you asking about automatically generating the table? Then I do not think so - some variations are far from obvious. What would be the criterion for deciding the table had no further rows? 

I really don't understand the point of the tables. For other cases, I thought the point was to list explicitly real solutions, but here we have solutions such as sn(delta) + I*cn(delta) - is this real for this parameter set? I don't know, and it would be hard to decide.

I'm still confused as to what exactly you want. 

@salim-barzani It seems like you know how to do this, so I am not sure what you are asking. Please specify a single exact problem and what you want done.

@salim-barzani Converting abs to conjugate doesn't by itself always solve the problem of applying diff to expressions with abs, but adding the Physics package (or just its diff) usually does.
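A small sketch of the pattern (a toy expression, not your pde; the worksheet below uses the same approach):

restart;
with(Physics, diff):                 # Physics diff handles derivatives involving conjugate(u(x, t))
expr := abs(u(x, t))^2*u(x, t);
expr2 := convert(expr, conjugate);   # u(x, t)^2*conjugate(u(x, t))
diff(expr2, x);                      # derivatives now in terms of u and conjugate(u), no abs(1, ...) terms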

For the substitution u(x,t) = f(x,t)*exp(I*g(x,t)), it makes sense to me that one would choose f(x,t) and g(x,t) real, since then they are simply interpreted as amplitude and phase functions, and the subsequent math using real functions is simpler. Since Eq 11 in the original paper in this thread has no conjugate(theta), I think those authors chose theta real.
In the last paper, f(x,t) was allowed to be complex, and so the equivalent of Eq 11 had complex conjugates. But then the phase is no longer simply g(x,t), which leads to difficulties in interpretation. Bottom line: you can do it either way.

Edit:

If one just assumes Q(x,t) is real, then one finds that m is always real, which makes sense since it is part of the phase under the assumption f(x,t) and g(x,t) are real:

p5.mw

Perhaps there is some subtle physics reason to have a complex amplitude function, but it looks like different papers make different choices.

@salim-barzani Yes, I assumed Q(x,t) was real. Rather than the awkward hand-coding of abs(u(x,t)) as U, it is easier just to enter the abs as written and then use convert(..., conjugate). So here is this case, which comes out correctly. [Edit: p4 is slightly more general than the earlier p3.]

restart

with(PDEtools); with(LinearAlgebra); with(Physics, diff)

undeclare(prime, quiet)

declare(u(x, t), quiet); declare(Q(x, t), quiet)

pde := convert(I*(diff(u(x, t), t))+diff(u(x, t), `$`(x, 2))+abs(u(x, t))^2*u(x, t), conjugate)

I*(diff(u(x, t), t))+diff(diff(u(x, t), x), x)+u(x, t)^2*conjugate(u(x, t))

T := u(x, t) = (sqrt(a)+Q(x, t))*exp(I*a*t)

u(x, t) = (a^(1/2)+Q(x, t))*exp(I*a*t)

pde2 := `assuming`([eval(pde, T)], [a::positive, x::real, t::real])

I*((diff(Q(x, t), t))*exp(I*a*t)+I*(a^(1/2)+Q(x, t))*a*exp(I*a*t))+(diff(diff(Q(x, t), x), x))*exp(I*a*t)+(a^(1/2)+Q(x, t))^2*(exp(I*a*t))^2*exp(-I*a*t)*(a^(1/2)+conjugate(Q(x, t)))

E := expand(simplify(collect(pde2/exp(I*a*t), exp), exp))

Q(x, t)*a+a*conjugate(Q(x, t))+2*a^(1/2)*conjugate(Q(x, t))*Q(x, t)+a^(1/2)*Q(x, t)^2+conjugate(Q(x, t))*Q(x, t)^2+I*(diff(Q(x, t), t))+diff(diff(Q(x, t), x), x)

This needs to be linearized with respect to Q: just throw away second- and higher-order terms in Q and conjugate(Q). We can't use coeff on Q(x,t), so we do it in jet notation.

PJ := ToJet(E, Q(x, t), jetnotation = jetvariableswithbrackets);
Q_vars := select(has, indets(%), Q);
P2 := FromJet(remove(q -> 1 < degree(q, Q_vars), PJ), Q(x, t))

{Q[], Q[t], Q[x, x], conjugate(Q[])}

Q(x, t)*a+a*conjugate(Q(x, t))+I*(diff(Q(x, t), t))+diff(diff(Q(x, t), x), x)

We assume Q(x, t) has the form

TT := Q(x, t) = r[1]*exp(I*(l*x-m*t))+r[2]*exp(-I*(l*x-m*t))

Q(x, t) = r[1]*exp(I*(l*x-m*t))+r[2]*exp(-I*(l*x-m*t))

We substitute this in P2 and collect the coefficients of the two exp terms

P3 := `assuming`([eval(P2, TT)], [real])

(r[1]*exp(I*(l*x-m*t))+r[2]*exp(-I*(l*x-m*t)))*a+a*(r[1]*exp(-I*(l*x-m*t))+r[2]*exp(I*(l*x-m*t)))+I*(-I*r[1]*m*exp(I*(l*x-m*t))+I*r[2]*m*exp(-I*(l*x-m*t)))-r[1]*l^2*exp(I*(l*x-m*t))-r[2]*l^2*exp(-I*(l*x-m*t))

P4 := collect(P3, exp); indets(%)

(-l^2*r[2]+a*r[1]+a*r[2]-m*r[2])*exp(-I*(l*x-m*t))+(-l^2*r[1]+a*r[1]+a*r[2]+m*r[1])*exp(I*(l*x-m*t))

{a, l, m, t, x, r[1], r[2], exp(-I*(l*x-m*t)), exp(I*(l*x-m*t))}

subs({exp(-I*(l*x-m*t)) = X, exp(I*(l*x-m*t)) = Y}, P4); eqs := [coeff(%, X), coeff(%, Y)]

[-l^2*r[2]+a*r[1]+a*r[2]-m*r[2], -l^2*r[1]+a*r[1]+a*r[2]+m*r[1]]

Set these up as a matrix equation in the unknowns r[1] and r[2]

M, b := GenerateMatrix(eqs, [r[1], r[2]])

Matrix([[a, a-l^2-m], [a-l^2+m, a]]), Vector[column]([0, 0])

There is a nontrivial solution to these equations iff the determinant is zero. The determinant is a quadratic in m.

detM := Determinant(M)

-l^4+2*a*l^2+m^2

And we solve for m to define the dispersion relation.

ans := sort([solve(detM, m)])

[(l^2-2*a)^(1/2)*l, -(l^2-2*a)^(1/2)*l]

m is real when the expression under the square root is positive

ans[1]

(l^2-2*a)^(1/2)*l

Since detM is a quadratic, we can just find the discriminant.

expand(discrim(detM, m)/(4*l^2))

l^2-2*a


Download p4.mw

@acer Thanks for that improvement. (I played around with this at various times, and thought I'd tried that and found it inferior, but I must have misremembered.) I expected that _d01amc would be the default for a semi-infinite range, but infolevel suggests the range was split in two, with the part from 15 to infinity done by a non-NAG routine.

I guess the take-home message is that if you are about to do a plot requiring thousands of integrations, choose a specific NAG method (after testing it against the slower default) so that the overhead of the singularity analysis etc. is not repeated thousands of times.
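Something like the following (a generic integrand, with the _d01amc routine mentioned above, just to show the call), so that every plotted point goes straight to the chosen NAG routine:

f := x -> evalf(Int(exp(-x*s)/(1 + s^2), s = 0 .. infinity, method = _d01amc)):
f(0.5);            # a single fast evaluation
plot(f, 0 .. 5);   # many evaluations without repeating the method-selection overhead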

@Paras31 Here's an analysis that shows how to get both fast and accurate results from the Fokas method.

restart;

Maple's solution as per Rouben's answer - we'll use this as the benchmark

rsol := 1/2/Pi^(1/2)*(1/4*x*Int(sin(t-zeta)/zeta^(7/2)*exp(-1/4*x^2/zeta)*(x^2-6*zeta), zeta = 0 .. t)
        + Pi^(1/2)*(erf(1/2*(2*t+x)/t^(1/2))*exp(x+t) - erf(1/2*(2*t-x)/t^(1/2))*exp(-x+t) - exp(x+t) + exp(-x+t)));

(1/2)*((1/4)*x*(Int(sin(t-zeta)*exp(-(1/4)*x^2/zeta)*(x^2-6*zeta)/zeta^(7/2), zeta = 0 .. t))+Pi^(1/2)*(erf((1/2)*(2*t+x)/t^(1/2))*exp(x+t)-erf((1/2)*(2*t-x)/t^(1/2))*exp(-x+t)-exp(x+t)+exp(-x+t)))/Pi^(1/2)

Take a sample (x,t) for testing the different approaches.

params := {x = 0.5, t = 0.1};

{t = .1, x = .5}

ans_exact := evalf(eval(rsol, params));

.6581132195

As part of the Fokas method we need to evaluate an integral - set up its integrand. Here I use expressions rather than nested functions for a little more efficiency

V := -1/2*I*exp(k*x*I - k^2*t)*(1/(k - I) + 1/(k + I) - k*(1/(k^2 + I) + 1/(k^2 - I)))/Pi:
k1 := r*exp(1/8*I*Pi):
k2 := r*exp(7/8*I*Pi):
dk1 := exp(1/8*I*Pi):
dk2 := exp(7/8*I*Pi):
integrand1 := eval(V, k = k1)*dk1 - eval(V, k = k2)*dk2;

-((1/2)*I)*exp(I*r*exp(((1/8)*I)*Pi)*x-r^2*(exp(((1/8)*I)*Pi))^2*t)*(1/(r*exp(((1/8)*I)*Pi)-I)+1/(r*exp(((1/8)*I)*Pi)+I)-r*exp(((1/8)*I)*Pi)*(1/(r^2*(exp(((1/8)*I)*Pi))^2+I)+1/(r^2*(exp(((1/8)*I)*Pi))^2-I)))*exp(((1/8)*I)*Pi)/Pi+((1/2)*I)*exp(I*r*exp(((7/8)*I)*Pi)*x-r^2*(exp(((7/8)*I)*Pi))^2*t)*(1/(r*exp(((7/8)*I)*Pi)-I)+1/(r*exp(((7/8)*I)*Pi)+I)-r*exp(((7/8)*I)*Pi)*(1/(r^2*(exp(((7/8)*I)*Pi))^2+I)+1/(r^2*(exp(((7/8)*I)*Pi))^2-I)))*exp(((7/8)*I)*Pi)/Pi

Numeric evaluation of this integral is very slow - 10 minutes!
But after adding the extra expression required by the Fokas method, it does give the correct answer

Fokas_int1 := CodeTools:-Usage(evalf(Int(eval(integrand1, params), r = 0 .. infinity)));
Fokas1 := evalf(eval(exp(-x/sqrt(2))*cos(t - x/sqrt(2)), params))+Fokas_int1;
ans_exact;

memory used=168.99GiB, alloc change=140.30MiB, cpu time=10.48m, real time=8.88m, gc time=2.50m

-0.2162433743e-1+0.*I

.6581132200+0.*I

.6581132195

The integrand is not explicitly real, but since it should be real, the OP (and the Mathematica code) evaluated the integral of the real part.
For this set of parameters (and default settings), we get an incorrect result, so this is not a good strategy.

Fokas_int2 := CodeTools:-Usage(evalf(Int(eval(Re(integrand1), params), r = 0 .. infinity)));

memory used=1.19GiB, alloc change=-5.91MiB, cpu time=8.14s, real time=7.23s, gc time=1.47s

Float(infinity)

Since the integrand is real, it makes sense to convert it into an explicitly real form

integrand2 := simplify(evalc(integrand1));

r*(((r^8-2*r^4+1)*2^(1/2)-2*r^6-2*r^2)*cos((1/2)*(r*t*2^(1/2)-2*cos((1/8)*Pi)*x)*r)+sin((1/2)*(r*t*2^(1/2)-2*cos((1/8)*Pi)*x)*r)*(r^8*2^(1/2)+2*r^6-2*r^2-2^(1/2)))*exp(-(1/2)*(r*t*2^(1/2)+2*cos((3/8)*Pi)*x)*r)/((r^8+1)*(1+r^4+r^2*2^(1/2))*Pi)

This is much faster and gives the correct result.

Fokas_int2 := CodeTools:-Usage(evalf(Int(eval(integrand2, params), r = 0 .. infinity)));
Fokas2 := evalf(eval(exp(-x/sqrt(2))*cos(t - x/sqrt(2)), params)) + Fokas_int2;
ans_exact;

memory used=184.17MiB, alloc change=0 bytes, cpu time=953.00ms, real time=837.00ms, gc time=171.88ms

-0.2162433743e-1

.6581132200

.6581132195

However, it is still slow enough that it takes a very long time to do a 3D plot at different (x,t) with grid = [40,40] or even a 2D plot vs x for a fixed t with the default number of points and adaptive plotting.

So we seek to make it faster. The integrand is oscillatory, and there is a NAG method, _d01akc, for oscillating integrands, but it is not for an infinite range.
So we make a change of variables to a finite range.
Look at the asymptotic behaviour, which is dominated by exp(-r^2*t/sqrt(2)).

asympt(integrand2, r);

O(1/r^3)*exp(-(1/2)*2^(1/2)*r^2*t)/exp(cos((3/8)*Pi)*r*x)

This suggests the following change of variables, with u ranging from 0 to 1

itr := u = 1 - exp(-r^2*t/sqrt(2));
eval(itr,r=0), eval(itr,r=infinity) assuming t>0;
tr := r = solve(itr, r)[1];

u = 1-exp(-(1/2)*2^(1/2)*r^2*t)

u = 0, u = 1

r = (-t*2^(1/2)*ln(-u+1))^(1/2)/t

int3 := PDEtools:-dchange(tr,Int(integrand2, r = 0..infinity), [u], simplify) assuming t>0;

t*(Int(((t^2+4*t*ln(-u+1)+2*ln(-u+1)^2)*cos((-ln(-u+1))^(1/2)*2^(1/4)*(-(1/2)*2^(3/4)*(-ln(-u+1))^(1/2)*t^(1/2)+cos((1/8)*Pi)*x)/t^(1/2))+sin((-ln(-u+1))^(1/2)*2^(1/4)*(-(1/2)*2^(3/4)*(-ln(-u+1))^(1/2)*t^(1/2)+cos((1/8)*Pi)*x)/t^(1/2))*(t^2-2*ln(-u+1)^2))*exp(-2^(1/4)*(-ln(-u+1))^(1/2)*cos((3/8)*Pi)*x/t^(1/2))/(4*ln(-u+1)^4+t^4), u = 0 .. 1))/Pi

integrand3 := IntegrationTools:-GetIntegrand(int3):
Fokas_int3 := t/Pi*Int(integrand3, u = 0..1, method = _d01akc):

It is indeed faster.

CodeTools:-Usage(evalf(eval(Fokas_int3, params)));

memory used=149.23KiB, alloc change=0 bytes, cpu time=16.00ms, real time=12.00ms, gc time=0ns

-0.2162433742e-1

We combine it with the rest to make a final procedure

approx_u := unapply(exp(-x/sqrt(2))*cos(t - x/sqrt(2)) + t/Pi*Int(integrand3, u = 0..1, method = _d01akc), x,t, numeric):

evalf(approx_u(0.5,0.1));

.6581132200

CodeTools:-Usage(plot3d(approx_u, 0 .. 3, 0 .. 2*Pi, grid = [40, 40], axes = boxed, labels = ["x", "t", "u(x,t)"], title = "Fokas Method of solution of Half-Line Heat Equation", shading = zhue));

memory used=272.54MiB, alloc change=0 bytes, cpu time=9.33s, real time=9.13s, gc time=437.50ms

p1 := plot(x -> approx_u(x, 0.1), 0 .. 2, color = red):
p2 := plot(x -> approx_u(x, 0.2), 0 .. 2, color = green):
p3 := plot(x -> approx_u(x, 0.3), 0 .. 2, color = blue):
p4 := plot(x -> approx_u(x, 0.4), 0 .. 2, color = magenta):
p5 := plot(x -> approx_u(x, 0.5), 0 .. 2, color = RGB(1.0, 0.5, 0.)):
p6 := plot(x -> approx_u(x, 0.6), 0 .. 2, color = cyan):
p0 := plot(exp(-x), x = 0 .. 2, linestyle = dash, color = black):
plots:-display([p0, p1, p2, p3, p4, p5, p6], legend = ["u(x,0)", "t=0.1", "t=0.2", "t=0.3", "t=0.4", "t=0.5", "t=0.6"], legendstyle = [location = right, font = [Helvetica, 12]], view = [0 .. 2, 0 .. 1.2], labels = ["x", "u(x,t)"], labelfont = [Helvetica, 12], axes = boxed, title = "Short-Time Behavior of u(x,t)");


Download Fokas.mw

@dharr "and about step one is not about appearing twice i think, is about non linear term which we chose which a-N is bigger and non linear term is about a+b-N which N is begger and have same a then we can find b like the non couple equation ."

Let me see if I understand this. Say some terms give a-3, a-4, a-5; then we take the minimum, a-5. Now suppose we also have a+b-3, a+b-4, a+b-5, so we take a+b-5. We now set these equal and find b=0. Then the value a-5 = a+0-5 appears twice (since we set them equal). If we have a different N, then we get a different b (for the same a), e.g.,

a-6 = a+b-5 leads to b=-1, and then the value a-6 = a-1-5 appears twice.
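A quick Maple check of that arithmetic (hypothetical exponent lists, just to illustrate the rule):

exps1 := [a - 3, a - 4, a - 5]:
exps2 := [a + b - 3, a + b - 4, a + b - 5]:
min(op(exps1)), min(op(exps2));              # a-5 and a+b-5
solve(min(op(exps1)) = min(op(exps2)), b);   # b = 0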

Is there a problem with this?

@salim-barzani You say "intrested about the u[0],v[0] i think they are not the same which i am not sure which is corect  i don't have issue with yours one of g[x] is extra if you watch you will see". I don't see an extra g[x]; please specify exactly where this is (there are some g's there because I am not calculating u[0] directly; do I have them incorrect?).

" the other point is about that condition how he got that condition in eq((16),  and when N=2 we have to find the u[1],u[2],..u[6] and v[1],v[2],...v[6] ". I don't see N=2 in eq 16. Please specify exactly where you mean.

You have now added the other paper here, which I didn't deal with. That paper has p = 1 and not p = -1, so it seems wrong to me.

Basically I am not understanding; please spend a bit more time specifying some details of what you think is wrong or needs to be done. 

 

@salim-barzani To get the resonances and other solutions correct, you have to evaluate with the earlier solutions (see how I did the KdV example) and set the resonant ones to zero (I'm not sure why, but otherwise their derivatives don't disappear).

all-steps-Dr.D.mw

I am working to understand the coupled case.

@salim-barzani I have updated the files here to reflect my improved understanding of the step 1 procedure. In particular see the plot in AllStepsHBnew.mw for how to interpret the lowest exponents rule. I'll work on the coupled case next.
