Carl Love


MaplePrimes Activity


These are answers submitted by Carl Love

Use explicit multiplication symbols:

PDE :=
    diff(G(a, H, phi, PI), a)*(a*H) + diff(G(a, H, phi, PI), H)*(k/a^2 - kappa^2/2*PI^2/a^6)
   + diff(G(a, H, phi, PI), phi)*(PI/a^3)
   = diff(G(a, H, phi, PI), PI)*(a^3*diff(V(phi), phi))
;

pdsolve(PDE, G);

[This Answer is similar to @Kitonum's, but I wrote it independently before seeing his. I first solve for the four angles given the constraints, then I construct the points and plot.]

The Answer by @C_R shows a self-intersecting quadrilateral. However, it's possible to satisfy all the constraints with a simple convex planar quadrilateral:

restart
:
#Divide all distances by 10. Express each in both directions:
(ab, bc, cd, da):= ((ba, cb, dc, ad):= (17, 17, 29, 18))
:
#one side of Law of Cosines:
LoC:= (a::symbol, b::symbol, c::symbol)-> 
    eval(cat(a,b)^2 + cat(b,c)^2 - 2*cat(a,b)*cat(b,c)*cos(b)):

angs:= fsolve(
    {
        #law of cosines viewing diagonal ac as a side of triangles abc and adc:
        LoC(a,b,c) = LoC(a,d,c),
        #law of cosines viewing diagonal bd as a side of triangles bcd and abd:
        LoC(b,c,d) = LoC(b,a,d),
        #law of cosines for equality of diagonals:
        LoC(a,b,c) = LoC(d,c,b),
        #sum of interior angles:
        a+b+c+d = 2*Pi
    },
    {(a,b,c,d)=~ 0..Pi}
);
  angs := {a = 1.879417129, b = 1.962935269, c = 1.228331866, d = 1.212501044}

#Construct it: Arbitrarily put A at origin and B directly to the right of A:
A:= [0,0]:  B:= A +~ [ab,0]:  
C:= B +~ bc*~(cos,sin)(eval(Pi-b, angs)); 
`&D;`:= A +~ da*~(cos,sin)(eval(a, angs));
                C := [23.49681980, 15.70959364]
                D := [-5.467408027, 17.14956120]
plots:-display(
    plot([[A,B,C,`&D;`,A], [A,C], [B,`&D;`]], thickness= 3, color= [red, blue$2]),
    plots:-textplot(
        [
            [A[], "A", align= {below, left}], 
            [B[], "B", align= {below, right}],
            [C[], "C", align= {above, right}],
            [`&D;`[], "D", align= {above, left}]
        ], font= [HELVETICA, BOLD, 16]
    ),
    axes= none, scaling= constrained
);

#Verify constraints:
dist:= (P,Q)-> sqrt(add((P-~Q)^~2)):
dist(A,B);
                               17
dist(B,C);
                          16.99999999
dist(C,`&D;`);
                          29.00000001
dist(A,`&D;`);
                          18.00000000
dist(A,C);
                          28.26467536
dist(B,`&D;`);
                          28.26467536

 

This can be applied to any equation, and is often very useful. In this case, it returns exactly what you want.

numer((lhs-rhs)(eq)) = 0

It just seems like luck or happenstance to me that using x as the 2nd argument and 1 as the 3rd did what you wanted.
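
For example, applied to a made-up equation with fractions on both sides (not your equation; just an illustration):

eq:= x/(x-1) = 1/(x+1):
numer((lhs-rhs)(eq)) = 0;
                           x^2 + 1 = 0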

This works:

factor(%, indets(%, sqrt));

The answer is awkward IMO, because a denominator is introduced, but it is correct.
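
For example, with a made-up expression containing a radical (not your expression):

ex:= x^2 - 2*sqrt(2)*x + 2:
factor(ex, indets(ex, sqrt));
                         (x - 2^(1/2))^2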

The second argument to solve should be a list of the three variables that you want to solve for. By passing a set of six variables, you're giving solve the freedom to choose to do the thing that you don't want it to do.
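
For example (a made-up system, not yours): passing the list [x, y, z] makes solve treat a, b, c purely as parameters:

eqs:= {x + y = a, y + z = b, x + z = c}:
solve(eqs, [x, y, z]);
      [[x = a/2 - b/2 + c/2, y = a/2 + b/2 - c/2, z = -a/2 + b/2 + c/2]]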

By solving each of the equations for one of the variables, you can make direct plots, which are usually better than implicit plots (especially in 3D). Also, note the compact entry method for matrices and vectors.

restart:

interface(prompt= ""):

LA:= LinearAlgebra:  SLA:= Student:-LinearAlgebra:

`&/`:= (b,A)-> LA:-LinearSolve(A,b):

(A,b):= (<1, 1; 12, 16>, <10, 136>);

A, b := Matrix(2, 2, {(1, 1) = 1, (1, 2) = 1, (2, 1) = 12, (2, 2) = 16}), Vector(2, {(1) = 10, (2) = 136})

sol:= b &/ A;

Vector(2, {(1) = 6, (2) = 4})

Eqs:= [seq](A.<x,y> -~ b);

[x+y-10, 12*x+16*y-136]

SLA:-LinearSystemPlot(Eqs, axes= normal);

solve~(Eqs, y);

[-x+10, -(3/4)*x+17/2]

plot(
    solve~(Eqs, y), x= sol[1]-1 .. sol[1]+1,
    color= [red, blue], thickness= 2, labels= ["x", "y"],
    legend= Eqs, title= "Plot of Linear System"
);

(A,b):= (<2, -1, 1; 0, 1, 3; 0, 0, 1>, <-5, 7, 2>);

A, b := Matrix(3, 3, {(1, 1) = 2, (1, 2) = -1, (1, 3) = 1, (2, 1) = 0, (2, 2) = 1, (2, 3) = 3, (3, 1) = 0, (3, 2) = 0, (3, 3) = 1}), Vector(3, {(1) = -5, (2) = 7, (3) = 2})

sol:= b &/ A;

Vector(3, {(1) = -3, (2) = 1, (3) = 2})

Eqs:= [seq](A.<x,y,w> =~ b);

[2*x-y+w = -5, y+3*w = 7, w = 2]

plots:-display(
    plot3d~(
        solve~(Eqs, w), x= sol[1]-1 .. sol[1]+1, y= sol[2]-1 .. sol[2]+1,
        color=~ [red, blue, green], style= surface,
        transparency=~ [.4, .4, 0]
    ),
    title= "#D Plot of Linear System"
);

 

Download LinearSystemPlots.mw

In the specification of the ODEs, you need to change all occurrences of the form

Theta(t)[k]  to Theta[k](t),

diff(Theta(t), t)[k]  to diff(Theta[k](t), t),

etc.

Initial conditions need to be specified like

Theta[k](0) = 7  and  D(Theta[k])(0) = 5.

In other words, use D instead of diff for initial conditions.
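
For example, a small made-up system entered with the corrected notation (not your actual ODEs):

sys:= {diff(Theta[1](t), t, t) = -Theta[1](t), diff(Theta[2](t), t) = Theta[1](t)}:
ics:= {Theta[1](0) = 7, D(Theta[1])(0) = 5, Theta[2](0) = 0}:
dsolve(sys union ics);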

Here is the Mathematica code translated to Maple. However, I am quite skeptical that this algorithm does any useful form of integration.

Translation of a Monte-Carlo integration routine from Mathematica:

restart
:

(* Mathematica code to be translated:

func[x_]:= 1/(1 + Sinh[2*x]*(Log[x])^2);

Distrib[x_, average_, var_]:=
    PDF[NormalDistribution[average, var], 1.1*x - 0.1];
n = 10;
RV = RandomVariate[TruncatedDistribution[{0.8, 3}, NormalDistribution[1, 0.399]], n];
Int = 1/n Total[func[RV]/Distrib[RV, 1, 0.399]]*Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]
*)

#Input data:
func:= x-> 1/(1+sinh(2*x)*ln(x)^2):
xr:= 4/5..3: #integration interval
(mu, sigma):= (1, 0.399): #Normal distribution parameters
n:= 10: #sample size (seems too small)
#I don't understand the purpose of the following "fudgefunc", but I implemented it anyway:
fudgefunc:= 1.1*x - 0.1:

St:= Statistics:
N:= St:-RandomVariable(Normal(mu, sigma)):
Distrib:= MakeFunction(St:-PDF(N, fudgefunc), x);

#The Mathematica command TruncatedDistribution is only used here to limit the sample values to an
#interval. So, rather than implement this command in full, I'll just implement a
#truncated sample command.
TruncatedSample:= proc(xr::range(realcons), dist::RandomVariable, n::posint)
uses St= Statistics;
local
    A:= Array(1..ceil(-n/`-`(St:-CDF~(dist, [op](xr), 'numeric')[])), datatype= hfloat),
    R:= Array(1..0, datatype= hfloat),
    RR:= RealRange(op(xr))
;
    do R,= select['flatten'](is, St:-Sample(dist, A), RR) until numelems(R) >= n;
    R[..n]
end proc
:
#Do it:
seed:= randomize();
Int(func(x), x= xr) =
    evalf(add((func/Distrib)~(TruncatedSample(xr, N, n)))*int(Distrib, xr)/n);

proc (x) options operator, arrow; .7070044906*2^(1/2)*exp(-3.140683790*(1.1*x-1.1)^2) end proc

5744216234505

Int(1/(1+sinh(2*x)*ln(x)^2), x = 4/5 .. 3) = HFloat(0.6822234310042862)

Since 0 < Distrib(x) << func(x) for x > 2, dividing by Distrib(x) doesn't seem useful; yet that's what's in the Mathematica code.

plot([func, Distrib, ln@(func/Distrib)], xr, 0..3);

#For comparison, use Maple's own numeric integrators. The last 5 methods are essentially
#Monte-Carlo methods. The raw method= _MonteCarlo is too slow.
Methods:= [_||(
    CCquad, Dexp, Gquad, Sinc, NCrule, d01ajc, #1-D deterministic
    cuhre, Cuba||(Vegas, Suave, Divonne, Cuhre)  #2-D pseudorandom
)]:
interface(rtablesize= nops(Methods)):

#Since 1-D integrals aren't allowed for any of the pseudo-Monte-Carlo methods, I add a 2nd dimension
#on 0..1.
DataSeries(
    evalf[5](int~(
        func(x), [(x= xr)$6, [x= xr, _= 0..1]$5], numeric, digits= 15, epsilon= 0.5e-5, method=~ Methods
    )),
    labels= substring~(Methods, 2..-1)
);

DataSeries(Vector[row](11, {(1) = .67684, (2) = .67684, (3) = .67684, (4) = .67684, (5) = .67684, (6) = .67684, (7) = .67684, (8) = .67684, (9) = .67684, (10) = .67684, (11) = .67684}), labels = [CCquad, Dexp, Gquad, Sinc, NCrule, d01ajc, cuhre, CubaVegas, CubaSuave, CubaDivonne, CubaCuhre], datatype = anything)

The last 4 methods (the Cuba methods) have an extensive help page and many options. See ?cuba.

 


Download MmaMonteCarlo.mw

If results is your expression sequence, you can select its 1st and 6th elements by

results[[1,6]]  or  results[[6,1]]

depending on your preferred order. In addition to being less to type, this also avoids the need to assign to a variable results, which is necessary if you use 

results[1], results[6]
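
For example (with a made-up sequence), the double-index form can even be applied directly to the previous output via %, with no assignment at all:

f(1), f(2), f(3), f(4), f(5), f(6):
%[[1, 6]];
                          f(1), f(6)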

 

The Code Edit Region is expecting Maple[*1] code (which would ordinarily include arithmetic operators such as + or * and statement separators such as ;, hence the error message "missing operator or `;`"), but what you've put in it is simply text data for Syrup. The region can easily be converted to Maple code that assigns that text to a string, the string being exactly the text data that you already have. It just requires a small modification of the first and last lines, like this:

ckt2:= "ckt2
V1 N03 0 vin
C1 N03 N04 Cp
R1 N04 N01 Rp
L1 N01 N02 Lp
L2 N05 N06 Ls
R2 N07 N05 Rs
C2 0   N07 Cs
H1 N02 0 L2 I*w*M
H2 N06 0 L1 I*w*M
.end"

Note that only one pair of double quotes is needed to create a string that contains line breaks.

I named the string ckt2, so change the 1st argument of the Syrup:-Solve command from "ec:ckt2" to simply ckt2 (without quotes).
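
The call would then look something like this (the 'ac' analysis type is just a placeholder guess; keep whatever other arguments your original Solve command used):

sol:= Syrup:-Solve(ckt2, 'ac');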


Footnote: [*1] Code Edit Regions can process Python code in addition to Maple.

Immediately after the DGsetup command, give the command 

interface(prompt= "> "):

Here is one way of many:

interface(rtablesize= [81,24]):
M:= Matrix(
    (81,24), 
    (i,j)-> local k; [seq](indice(C[i][1][k], rorder(orders[j], Sets[k])), k= 1..4)
);

The interface(rtablesize= ...) command only controls how much of any matrix is displayed on screen; it's not specific to matrix M. It isn't needed if you don't need to display the entire matrix simultaneously.

Here's another way, fairly close to what you had:

M:= Matrix(
    [for i to 81 do
        [for j to 24 do
            [for k to 4 do 
                indice(C[i][1][k], rorder(orders[j], Sets[k]))
            end do]
        end do]
    end do]
);

Matrix indexing is best done as M[i,j] rather than M[i][j]. The latter works but is less efficient. 

 

As you likely know, the operation that you're doing is called (in math, not just in Maple) the outer product of two vectors. But there's no need to use the command LinearAlgebra:-OuterProductMatrix or, indeed, any other named command. In fact, almost all matrix/vector arithmetic in Maple can be done without any named commands.

Acer has shown that making r and c Vectors and doing c.r generates an invisible call to OuterProductMatrix. And perhaps you need some transpose operators, which'll generate calls to LinearAlgebra:-Transpose. As you may realize by now, it makes more sense to make the row coefficients r a column vector and to make the column coefficients c a row vector. As well as making more sense intuitively, that also eliminates the need for any transposes.

I think that there's an even better way. As far as I can tell,[*1] by making r a 1-column Matrix and c a 1-row Matrix, the entire operation can be done without explicitly invoking any named commands and without generating calls to any library commands for the matrix-multiplying operator `.`. Furthermore, the code to do this is quite simple:

r:= <R[1]; R[2]; R[3]; R[4]>:  #4x1 Matrix, not a Vector
c:= <<C[1] | C[2] | C[3]>>:    #1x3 Matrix, not a Vector
r.c;  #4x3 Matrix


[*1] "As far as I can tell": By this I mean as far as I can tell via use of the trace and printlevel ​​​​​​​internal-code-exploration commands.

If x is the expression containing exp that you want to expand, then use

expand(x, indets(x, specfunc(exp))[]);

This is on the help page ?expand.
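
For example, with a made-up expression assigned to x:

x:= (a + b)^2*exp(2*a):
expand(x, indets(x, specfunc(exp))[]);
#The power (a+b)^2 gets expanded, but exp(2*a) is left intact rather than becoming exp(a)^2.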

It's not that Maple prefers the form exp(a)^2; it's that that's what it means to "expand" a function: Make its arguments simpler (2*a becomes a) even at the expense of making the overall expression more complicated.

There's also a way to suppress the expansion of exp (or any other function) that maintains the suppression until you turn it off; so it's suitable for use in an initialization file:

expand(expandoff()): expandoff(exp):

See ?expandoff.

The option axis= [mode= log] is not intended to be a replacement for the plots commands logplot, semilogplot, loglogplot, etc. The difference is that mode= log applies the logarithm after the points are computed, but the dedicated log-plotting commands choose the points knowing that they should be spaced logarithmically. The part of your plot that looks linear is the result of too few values of x being used at the left end of the logarithmic domain.

plots:-semilogplot(
   k, x= 10..1e6, view= [10..1e6, 0.92..0.99], 
   color= "Blue", background= "Ivory", filled= [color= "Cyan", transparency= 0.9], 
   axis= [gridlines= [color= "gray"]], size= [600, 300]
)
