MaplePrimes Commons General Technical Discussions

The primary forum for technical discussions.

Maple 2023: The colorbar option for contour plots does not work when used with the Explore command. See the example below.

No_colorbar_when_exploring_contour_plots.mw
 

Just installed Maple 2023 on a MacBook Air M1.

Maple Tasks doesn't work :-(


The inverse problem of a mathematical question is often very interesting.

I'm glad to see that Maple 2023 has added several new graph operations. The GraphTheory[ConormalProduct], GraphTheory[LexicographicProduct], GraphTheory[ModularProduct] and GraphTheory[StrongProduct] commands were introduced in Maple 2023.

In fact, we often encounter their inverse problems in graph theory as well. Fortunately, corresponding algorithms can be found for most of them, but implementations of these algorithms are almost nonexistent.

 

I once asked a question involving the inverse operation of the lexicographic product.

Today, I will introduce the inverse operation of line graph operations. (In fact, I am trying to approach these problems with a unified perspective.)

 

To obtain the line graph of a graph is easy, but what about the reverse? That is to say, how do we test whether a graph is a line graph? Moreover, if a graph g is the line graph of some other graph h, can we find h? (h may not be unique: Whitney's isomorphism theorem tells us that if the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph K_3 and the claw K_{1,3}, which have isomorphic line graphs but are not themselves isomorphic.)
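The easy forward direction can be made concrete. A minimal sketch in plain Python (not Maple): the vertices of the line graph are the edges of the original graph, adjacent when they share an endpoint. It also illustrates the Whitney exception, since K_3 and K_{1,3} produce the same line graph:

```python
from itertools import combinations

def line_graph(edges):
    """Line graph of a simple graph given as an edge list: the vertices are
    the original edges, adjacent iff they share an endpoint."""
    es = [frozenset(e) for e in edges]
    return [(tuple(sorted(a)), tuple(sorted(b)))
            for a, b in combinations(es, 2) if a & b]

triangle = [(0, 1), (0, 2), (1, 2)]   # K_3
claw     = [(0, 1), (0, 2), (0, 3)]   # K_{1,3}
# Both line graphs are triangles: 3 vertices, 3 edges.
print(len(line_graph(triangle)), len(line_graph(claw)))  # 3 3
```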

Wikipedia tells us that there are two approaches, one of which is to check if the graph contains any of the nine forbidden induced subgraphs. 

Beineke's forbidden-subgraph characterization: a graph is a line graph if and only if it does not contain any of these nine graphs as an induced subgraph.

This approach can always be implemented, but it may not be very fast. Another approach is to use the linear-time algorithms described in the following article:

  • Roussopoulos, N. D. (1973), "A max {m,n} algorithm for determining the graph H from its line graph G", Information Processing Letters, 2 (4): 108–112, doi:10.1016/0020-0190(73)90029-X, MR 0424435

Or: 

  •   Lehot, Philippe G. H. (1974), "An optimal algorithm to detect a line graph and output its root graph", Journal of the ACM, 21 (4): 569–575, doi:10.1145/321850.321853, MR 0347690, S2CID 15036484.


SageMath can do that: 

   root_graph()

Return the root graph corresponding to the given graph.

    is_line_graph()

Check whether a graph is a line graph.

For example, K_{2,2,2,2} is not the line graph of any graph.

K2222 = graphs.CompleteMultipartiteGraph([2, 2, 2, 2])
C = K2222.is_line_graph(certificate=True)[1]
C.show() # Contains the ninth forbidden induced subgraph.

 


 

Another Sage example shows that the complement of the Petersen graph is the line graph of K_5.

P = graphs.PetersenGraph()
C = P.complement()
sage.graphs.line_graph.root_graph(C, verbose=False)

 

(Graph on 5 vertices, {0: (0, 1), 1: (2, 3), 2: (0, 4), 3: (1, 3), 4: (2, 4), 5: (3, 4), 6: (1, 4), 7: (1, 2), 8: (0, 2), 9: (0, 3)})
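This fact can be sanity-checked without Sage, using only the standard library and the standard model of the Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets of {0,...,4}, adjacent iff disjoint. Its complement joins intersecting 2-subsets, which is exactly the adjacency rule of the line graph of K_5:

```python
from itertools import combinations

V = [frozenset(p) for p in combinations(range(5), 2)]  # the 10 edges of K5
petersen   = {frozenset((a, b)) for a in V for b in V if a != b and not (a & b)}
complement = {frozenset((a, b)) for a in V for b in V if a != b and (a & b)}
# The complement joins 2-subsets sharing an element, i.e. edges of K5
# sharing an endpoint -- precisely the line-graph adjacency rule for L(K5).
print(len(V), len(petersen), len(complement))  # 10 15 30
```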

 

Following this line of thought, can Maple gradually implement the inverse operations of some standard graph operations? 

Here are some examples:

  •   CartesianProduct
  •   TensorProduct
  •   ConormalProduct
  •   LexicographicProduct
  •   ModularProduct
  •   StrongProduct
  •   LineGraph
  •  GraphPower
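For context, the forward operations above are straightforward to implement; the requested inverse operations (e.g. recognizing a product graph and recovering its factors) are the hard direction. A plain-Python sketch (not Maple) of the forward Cartesian product, for concreteness:

```python
from itertools import product

def cartesian_product(edges1, n1, edges2, n2):
    """Cartesian product of simple graphs on range(n1) and range(n2):
    (u1,u2) ~ (v1,v2) iff u1 == v1 and u2 ~ v2, or u2 == v2 and u1 ~ v1."""
    E1 = {frozenset(e) for e in edges1}
    E2 = {frozenset(e) for e in edges2}
    V = list(product(range(n1), range(n2)))
    return {frozenset((u, v)) for u, v in product(V, repeat=2)
            if (u[0] == v[0] and frozenset((u[1], v[1])) in E2)
            or (u[1] == v[1] and frozenset((u[0], v[0])) in E1)}

# The Cartesian product of K2 with K2 is the 4-cycle C4: 4 vertices, 4 edges
print(len(cartesian_product([(0, 1)], 2, [(0, 1)], 2)))  # 4
```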

 

The moment we've all been waiting for has arrived: Maple 2023 is here!

With this release we continue to pursue our mission to provide powerful technology to explore, derive, capture, solve and disseminate mathematical problems and their applications, and to make math easier to learn, understand, and use. Bearing this in mind, our team of mathematicians and developers have dedicated the last year to adding new features and enhancements that not only improve the math engine but make that math engine more easily accessible within a user-friendly interface.

And if you ever wonder where our team gets inspiration, you don't need to look further than Maple Primes. Many of the improvements that went into Maple 2023 came as a direct result of feedback from users. I’ll highlight a few of those user-requested features below, and you can learn more about these, and many, many other improvements, in What’s New in Maple 2023.

  • The Plot Builder in Maple 2023 now allows you to build interactive plot explorations where parameters are controlled by sliders or dials, and to customize them as easily as you can other plots.

Plot Builder Explore

 

  • In Maple 2023, 2-D contour and density plots now feature a color bar to show the values of the gradations.


  • For those who write a lot of code: You can now open your .mpl Maple code files directly in Maple’s code editor, where you can view and edit the file from inside Maple using the editor’s syntax highlighting, command completion, and automatic indenting.

Programming Improvements

  • Integration has been improved in many ways. Here’s one of them:  The definite integration method that works via MeijerG convolutions now does a better job of checking conditions on parameters so that they are only applied under proper assumptions. It also tells you the conditions under which the method could have produced an answer, so if your problem does meet those conditions, you can add the appropriate assumptions to get your result.
  • Many people have asked that we make it easier to create more complex interactive Math Apps and applications that require programming, such as interactive clickable plots, quizzes that provide feedback, and examples that provide solution steps. And I’m pleased to announce that we’ve done that in Maple 2023 with the introduction of the Quiz Builder and the Canvas Scripting Gallery.
    • The new Quiz Builder comes loaded with sample quizzes and makes it easy to create your own custom quiz questions. Launch the Quiz Builder next time you want to author interactive quizzes with randomized questions, different response types, hints, feedback, and solution displays. It’s probably one of my favorite features in Maple 2023.

  • The Scripting Gallery in Maple 2023 provides 44 templates and modifiable examples that make it easier to create more complex Math Apps and interactive applications that require programming. The Maple code used to build each application in the Scripting Gallery can be easily viewed, copied and modified, so you can customize specific applications or use the code as a starting point for your own work.

  • Finally, here’s one that is bound to make a lot of people happy: You can finally have more than one help page open at the same time!

For more information about all the new features and enhancements in Maple 2023, check out What’s New in Maple 2023.

P.S. In case you weren’t aware - in addition to Maple, the Maplesoft Mathematics Suite includes a variety of other complementary software products, including online and mobile solutions, that help you teach and learn math and math-related courses.  Even avid Maple users may find something of interest!

I was looking at symbolically solving a second-order differential equation, and it looks like method=laplace produces a sign error when the coefficients are presented in a certain way. Below are some examples, with and without method=laplace, that should all have the same closed form. Note that the results for s6 and s8 (equations (6) and (8)) have a different sign in the exponential than they should (which is a HUGE problem):

restart

s1 := dsolve([diff(x(t), t, t)+2*a*(diff(x(t), t))+a^2*x(t)], [x(t)])

{x(t) = exp(-a*t)*(_C2*t+_C1)}

(1)

s2 := dsolve([diff(x(t), t, t)+2*a*(diff(x(t), t))+a^2*x(t)], [x(t)], method = laplace)

x(t) = exp(-a*t)*(t*(D(x))(0)+x(0)*(a*t+1))

(2)

s3 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/b+x(t)/b^2], [x(t)])

{x(t) = exp(-t/b)*(_C2*t+_C1)}

(3)

s4 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/b+x(t)/b^2], [x(t)], method = laplace)

x(t) = exp(-t/b)*(t*(D(x))(0)+x(0)*(b+t)/b)

(4)

s5 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/sqrt(L*C)+x(t)/(L*C)], [x(t)])

{x(t) = exp(-(L*C)^(1/2)*t/(L*C))*(_C2*t+_C1)}

(5)

s6 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/sqrt(L*C)+x(t)/(L*C)], [x(t)], method = laplace)

x(t) = (t*(D(x))(0)+3*C*L*x(0)*t/(L*C)^(3/2)+x(0))*exp((L*C)^(1/2)*t/(L*C))

(6)

s7 := dsolve([L*C*(diff(x(t), t, t))+2*sqrt(L*C)*(diff(x(t), t))+x(t)], [x(t)])

{x(t) = exp(-(L*C)^(1/2)*t/(L*C))*(_C2*t+_C1)}

(7)

s8 := dsolve([L*C*(diff(x(t), t, t))+2*sqrt(L*C)*(diff(x(t), t))+x(t)], [x(t)], method = laplace)

x(t) = exp(t/(L*C)^(1/2))*(t*(D(x))(0)+x(0)*(L*C+3*(L*C)^(1/2)*t)/(L*C))

(8)
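The sign claim is easy to verify outside Maple. A hand-coded Python check (not Maple; assuming L = C = 1, x(0) = 1, D(x)(0) = 0, so the ODE of s8 reduces to x'' + 2x' + x = 0) shows that result (8) fails to satisfy the ODE, while the non-Laplace solution (7) with the same initial data does:

```python
import math

# Residual of a candidate solution of x'' + 2x' + x = 0, given hand-coded
# first and second derivatives:
def residual(x, dx, ddx, t):
    return ddx(t) + 2 * dx(t) + x(t)

# Maple's method=laplace result (8) with L = C = 1, x(0) = 1, D(x)(0) = 0:
#   x(t) = exp(t)*(1 + 3*t)
x_bad, dx_bad, ddx_bad = (lambda t: math.exp(t) * (1 + 3 * t),
                          lambda t: math.exp(t) * (4 + 3 * t),
                          lambda t: math.exp(t) * (7 + 3 * t))

# Result (7) with the same initial data:  x(t) = exp(-t)*(1 + t)
x_ok, dx_ok, ddx_ok = (lambda t: math.exp(-t) * (1 + t),
                       lambda t: -t * math.exp(-t),
                       lambda t: (t - 1) * math.exp(-t))

print(residual(x_bad, dx_bad, ddx_bad, 1.0))  # large: not a solution
print(residual(x_ok, dx_ok, ddx_ok, 1.0))     # ~0: satisfies the ODE
```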

s9 := dsolve([diff(x(t), t, t)+2*z*wn*(diff(x(t), t))+wn^2*x(t)], [x(t)])

{x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

(9)

s10 := dsolve([diff(x(t), t, t)+2*z*wn*(diff(x(t), t))+wn^2*x(t)], [x(t)], method = laplace)

x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

(10)

s11 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)])

{x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

(11)

s12 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)], method = laplace)

x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

(12)

s13 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)])

{x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

(13)

s14 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)], method = laplace)

x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

(14)

NULL

Download DsolveLaplaceIssues.mw

Transfer functions are normally used without units. However, involving units when deriving transfer functions can help identify unit inconsistencies and reduce the likelihood of unit conversion errors.

Maple is already a great help in not having to do this manually. However, the final step of simplification still requires manual intervention, as shown in this example.

Given transfer function

H(s) = 60.*Unit('m'*'kg'/('s'^2*'A'))/(.70805*s^2*Unit('kg'^2*'m'^2/('s'^3*'A'^2))+144.*s*Unit('kg'^2*'m'^2/('s'^4*'A'^2))+0.3675e-4*s^3*Unit('kg'^2*'m'^2/('s'^2*'A'^2)))

H(s) = 60.*Units:-Unit(m*kg/(s^2*A))/(.70805*s^2*Units:-Unit(kg^2*m^2/(s^3*A^2))+144.*s*Units:-Unit(kg^2*m^2/(s^4*A^2))+0.3675e-4*s^3*Units:-Unit(kg^2*m^2/(s^2*A^2)))

(1)

Desired output (derived by hand), where the transfer function is separated into a dimensionless expression and a gain that can be attributed units with a physical meaning in the context of an application (here displacement per voltage):

H(s) = 60.*Unit('m'/'V')/(.70805*s^2*Unit('s'^2)+144.*s*Unit('s')+0.3675e-4*s^3*Unit('s'^3))

H(s) = 60.*Units:-Unit(m/V)/(.70805*s^2*Units:-Unit(s^2)+144.*s*Units:-Unit(s)+0.3675e-4*s^3*Units:-Unit(s^3))

(2)

is(simplify((H(s) = 60.*Units[Unit](m*kg/(s^2*A))/(.70805*s^2*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*s*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^3*Units[Unit](kg^2*m^2/(s^2*A^2))))-(H(s) = 60.*Units[Unit](m/V)/(.70805*s^2*Units[Unit](s^2)+144.*s*Units[Unit](s)+0.3675e-4*s^3*Units[Unit](s^3)))))

true

(3)

Units to factor out in the denominator are Unit('kg'^2*'m'^2/('s'^5*'A'^2)). Quick check:

Unit('m'*'kg'/('s'^2*'A'))/Unit('kg'^2*'m'^2/('s'^5*'A'^2)) = Unit('m'/'V')

Units:-Unit(m*kg/(s^2*A))/Units:-Unit(kg^2*m^2/(s^5*A^2)) = Units:-Unit(m/V)

(4)

simplify(Units[Unit](m*kg/(s^2*A))/Units[Unit](kg^2*m^2/(s^5*A^2)) = Units[Unit](m/V))

Units:-Unit(s^3*A/(m*kg)) = Units:-Unit(s^3*A/(m*kg))

(5)
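The bookkeeping behind this quick check can be sketched in plain Python (not Maple): write each denominator term as a pair (power p of the Laplace variable s, unit-exponent vector over kg, m, s, A). If dividing each term's unit by Unit(s)^p gives the same quotient for every term, that quotient is the unit to factor out, leaving s^p * Unit(s)^p in each term, exactly as in the desired output (2):

```python
# Denominator terms of H(s): (power of the variable s, unit exponents)
terms = [
    (2, {"kg": 2, "m": 2, "s": -3, "A": -2}),
    (1, {"kg": 2, "m": 2, "s": -4, "A": -2}),
    (3, {"kg": 2, "m": 2, "s": -2, "A": -2}),
]
# Divide each term's unit by Unit(s)^p (subtract p from the s exponent)
candidates = [{u: e[u] - (p if u == "s" else 0) for u in e} for p, e in terms]
assert all(c == candidates[0] for c in candidates)  # same for every term
print(candidates[0])  # {'kg': 2, 'm': 2, 's': -5, 'A': -2} = kg^2*m^2/(s^5*A^2)
```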

"Simplification" attempts with the denominator

denom(rhs(H(s) = 60.*Units[Unit](m*kg/(s^2*A))/(.70805*s^2*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*s*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^3*Units[Unit](kg^2*m^2/(s^2*A^2)))))

s*(.70805*s*Units:-Unit(kg^2*m^2/(s^3*A^2))+144.*Units:-Unit(kg^2*m^2/(s^4*A^2))+0.3675e-4*s^2*Units:-Unit(kg^2*m^2/(s^2*A^2)))

(6)

collect(s*(.70805*s*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^2*Units[Unit](kg^2*m^2/(s^2*A^2))), Unit('kg'^2*'m'^2/('s'^5*'A'^2)))

s*(.70805*s*Units:-Unit(kg^2*m^2/(s^3*A^2))+144.*Units:-Unit(kg^2*m^2/(s^4*A^2))+0.3675e-4*s^2*Units:-Unit(kg^2*m^2/(s^2*A^2)))

(7)

is not effective because all units are wrapped in Unit commands. Example:

Unit('kg'^2*'m'^2/('s'^2*'A'^2))

Units:-Unit(kg^2*m^2/(s^2*A^2))

(8)

The expand command does not expand the argument of Unit commands.

expand(Units[Unit](kg^2*m^2/(s^2*A^2))); lprint(%)

Units:-Unit(kg^2*m^2/(s^2*A^2))

 

Units:-Unit(kg^2*m^2/s^2/A^2)

 

NULL

C1: Expanding Unit command

An expand facility that expands a Unit command with combined units into a product of separate Unit commands could be a solution.

When all units are expanded in a separate Unit command, collect or factor can be used to collect units:

.70805*s*Unit('kg')^2*Unit('m')^2/(Unit('A')^2*Unit('s')^3)+144.*Unit('kg')^2*Unit('m')^2/(Unit('A')^2*Unit('s')^4)+0.3675e-4*s^2*Unit('kg')^2*Unit('m')^2/(Unit('A')^2*Unit('s')^2)

.70805*s*Units:-Unit(kg)^2*Units:-Unit(m)^2/(Units:-Unit(A)^2*Units:-Unit(s)^3)+144.*Units:-Unit(kg)^2*Units:-Unit(m)^2/(Units:-Unit(A)^2*Units:-Unit(s)^4)+0.3675e-4*s^2*Units:-Unit(kg)^2*Units:-Unit(m)^2/(Units:-Unit(A)^2*Units:-Unit(s)^2)

(9)

collect(.70805*s*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^3)+144.*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^4)+0.3675e-4*s^2*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^2), [Unit('A'), Unit('kg'), Unit('m'), Unit('s')])

(.70805*s/Units:-Unit(s)^3+144./Units:-Unit(s)^4+0.3675e-4*s^2/Units:-Unit(s)^2)*Units:-Unit(m)^2*Units:-Unit(kg)^2/Units:-Unit(A)^2

(10)

factor(.70805*s*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^3)+144.*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^4)+0.3675e-4*s^2*Units[Unit](kg)^2*Units[Unit](m)^2/(Units[Unit](A)^2*Units[Unit](s)^2))

0.3675e-4*Units:-Unit(kg)^2*Units:-Unit(m)^2*(19266.66666*s*Units:-Unit(s)+3918367.346+.9999999999*s^2*Units:-Unit(s)^2)/(Units:-Unit(A)^2*Units:-Unit(s)^4)

(11)

C2: Using the Natural Units Environment

In this environment, no Unit commands are required and the collection of units should work with Maple commands.
However, for the expressions discussed here, this would lead to a naming conflict with the complex variable s of the transfer function and the unit symbol s for seconds.

NULL

C3: A type declaration or unit assumptions on names

A type declaration as an option of commands like in

Units[TestDimensions](s*(.70805*s*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^2*Units[Unit](kg^2*m^2/(s^2*A^2))), {s::(Unit(1/s))})

true

(12)

could help Maple in simplification tasks (in the general meaning of making expressions shorter or simpler).
Alternatively, assumptions could provide information about which "unit type" a name is:

`assuming`([simplify(H(s) = 60.*Units[Unit](m*kg/(s^2*A))/(.70805*s^2*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*s*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^3*Units[Unit](kg^2*m^2/(s^2*A^2))))], [s::(Unit(1/s))]); `assuming`([combine(H(s) = 60.*Units[Unit](m*kg/(s^2*A))/(.70805*s^2*Units[Unit](kg^2*m^2/(s^3*A^2))+144.*s*Units[Unit](kg^2*m^2/(s^4*A^2))+0.3675e-4*s^3*Units[Unit](kg^2*m^2/(s^2*A^2))), 'units')], [s::(Unit(1/s))])

Error, (in assuming) when calling 'property/ConvertProperty'. Received: 'Units:-Unit(1/s) is an invalid property'

 

On various occasions (beyond transfer functions) I have looked for such functionality.

 

C4: DynamicSystems Package with units

C4.1: The complex variable s could be attributed the unit 1/s (i.e. hertz), either by default or as an option. This could enable using units within the DynamicSystems package, which is not possible in Maple 2022. An example of what the package currently provides can be found here: help(applications, amplifiergain).
The phase plot shows that the package already implicitly assumes that the unit of s is hertz. A logical extension would be magnitude plots with units (e.g. m/V, as in this example).

 

C4.2: A dedicated "gain" command that takes units into account and could potentially simplify the transfer function to an expression like (2) in SI units. In this way the transfer function is separated into a dimensionless (but frequency-dependent) term and a gain term with units.
This would make transferring transfer functions to MapleSim easy and avoid unit conversion errors.

 

Download Collecting_and_expanding_units.mw

Does everyone have a good idea of how the Draghilev method works? For example, in this answer https://www.mapleprimes.com/questions/235407-The-Second-Example-Of-Finding-All-Solutions#answer291268
( https://www.mapleprimes.com/questions/235407-The-Second-Example-Of-Finding-All-Solutions )
there was a very successful attempt by Rouben Rostamian to calculate the line of intersection of surfaces without applying the Draghilev method.
Now consider not 3D but 8D: how will the solve command work in this case? Imagine that aij (i = 1..7, j = 1..8) are partial derivatives and xj (j = 1..8) are derivatives, as in the above example. f8 is responsible for the parametrization condition.

 

restart;
 f1 := a11*x1+a12*x2+a13*x3+a14*x4+a15*x5+a16*x6+a17*x7+a18*x8; 
 f2 := a21*x1+a22*x2+a23*x3+a24*x4+a25*x5+a26*x6+a27*x7+a28*x8; 
 f3 := a31*x1+a32*x2+a33*x3+a34*x4+a35*x5+a36*x6+a37*x7+a38*x8; 
 f4 := a41*x1+a42*x2+a43*x3+a44*x4+a45*x5+a46*x6+a47*x7+a48*x8;
 f5 := a51*x1+a52*x2+a53*x3+a54*x4+a55*x5+a56*x6+a57*x7+a58*x8; 
 f6 := a61*x1+a62*x2+a63*x3+a64*x4+a65*x5+a66*x6+a67*x7+a68*x8; 
 f7 := a71*x1+a72*x2+a73*x3+a74*x4+a75*x5+a76*x6+a77*x7+a78*x8;
 f8 := x1^2+x2^2+x3^2+x4^2+x5^2+x6^2+x7^2+x8^2-1; 
allvalues(solve({f1, f2, f3, f4, f5, f6, f7, f8}, {x1, x2, x3, x4, x5, x6, x7, x8}))


And this is how the Draghilev method works in this case.
 

restart; with(LinearAlgebra):
f1 := a11*x1+a12*x2+a13*x3+a14*x4+a15*x5+a16*x6+a17*x7+a18*x8; 
f2 := a21*x1+a22*x2+a23*x3+a24*x4+a25*x5+a26*x6+a27*x7+a28*x8; 
f3 := a31*x1+a32*x2+a33*x3+a34*x4+a35*x5+a36*x6+a37*x7+a38*x8; 
f4 := a41*x1+a42*x2+a43*x3+a44*x4+a45*x5+a46*x6+a47*x7+a48*x8; 
f5 := a51*x1+a52*x2+a53*x3+a54*x4+a55*x5+a56*x6+a57*x7+a58*x8;
f6 := a61*x1+a62*x2+a63*x3+a64*x4+a65*x5+a66*x6+a67*x7+a68*x8; 
f7 := a71*x1+a72*x2+a73*x3+a74*x4+a75*x5+a76*x6+a77*x7+a78*x8;

n := 7;
x := seq(eval(cat('x', i)), i = 1 .. n+1):
F := [seq(eval(cat('f', i)), i = 1 .. n)]:
A := Matrix(nops(F), nops(F)+1):
for j to nops(F) do
    for i to nops(F)+1 do
        A[j, i] := op(1, op(i, op(j, F)))
    end do:
end do:

# b[i] and b[n+1] are solutions of a linear homogeneous system and at the 
# same time they serve as the right-hand sides of an autonomous ODE.
for i to n do
    b[i] := Determinant(DeleteColumn(ColumnOperation(A, [i, n+1]), n+1))
end do:
b[n+1] := -Determinant(DeleteColumn(A, n+1)):


Only the original seven linear homogeneous equations in eight variables are needed. We solve them according to Cramer's rule, and in order to have uniqueness when solving the ODE, we use a point on the curve (according to the theory). (This point can be found in any convenient way.)
If we want to obtain a parametrization, then additionally, directly in dsolve, we can add the following:
 

for i to n do 
b[i] := simplify(Determinant(DeleteColumn(ColumnOperation(A, [i, n+1]), n+1))) 
end do:
b[n+1] := simplify(-Determinant(DeleteColumn(A, n+1))); 
deqs := seq(diff(x[i](s), s) = b[i]/(b[1]^2+b[2]^2+b[3]^2+b[4]^2+b[5]^2+b[6]^2+b[7]^2+b[8]^2)^.5, i = 1 .. n+1):
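The minor-based recipe for the b[i] can be cross-checked outside Maple. A Fraction-exact Python sketch (not Maple) of the same column-swap construction, with a 7×8 Vandermonde matrix standing in for the coefficients a[i, j], verifies that the resulting vector b is indeed a kernel vector of the linear system, i.e. a tangent direction to the curve:

```python
from fractions import Fraction

def det(M):
    """Exact determinant by fraction-preserving Gaussian elimination."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def kernel_vector(A):
    """b[i] = det of A with column i swapped into the last position and the
    last column dropped; the last b = -det of the first n columns (as in
    the Maple code above)."""
    n = len(A)
    b = []
    for i in range(n):
        cols = list(range(n + 1))
        cols[i], cols[n] = cols[n], cols[i]
        b.append(det([[row[c] for c in cols[:n]] for row in A]))
    b.append(-det([row[:n] for row in A]))
    return b

A = [[Fraction((r + 1) ** j) for j in range(8)] for r in range(7)]
b = kernel_vector(A)
print(all(sum(a * x for a, x in zip(row, b)) == 0 for row in A))  # True
```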

 

In an old post, vv reported a bug in simpl/max, which was "fixed" in Maple 2018. However, it seems that the repair was not complete. For example, suppose it is required to find the (squared) distance between the origin and a point on the curve x^3 + y^2 - x = 1/3 which is closest to the origin. In other words, one needs to minimize x² + y² among the points on this curve, i.e.,

extrema(x^2 + y^2, {x^3 + y^2 - x = 1/3}, {x, y}, 's'); # in exact form

Unfortunately, an identical error message appears again: 

restart;

extrema(x^2+y^2, {x^3+y^2-x = -2/(3*sqrt(3))}, {x, y})

{4/3}

(1)

extrema(x^2+y^2, {x^3+y^2-x = 1/3}, {x, y})

Error, (in simpl/max) complex argument to max/min: 1/36*((36+12*I*3^(1/2))^(2/3)+12)^2/(36+12*I*3^(1/2))^(2/3)

 

`~`[`^`](extrema(sqrt(x^2+y^2), {x^3+y^2-x = 1/3}, {x, y}), 2)

{4/3, 4/27}

(2)

extrema(x^2+1/3-x^3+x, {x^3+y^2-x = 1/3}, {x, y})

{4/3, 4/27}

(3)

MTM[limit](extrema(x^2+y^2, {x^3+y^2-x = a}, {x, y}), 1/3)

{4/3, 4/27}

(4)

Download tryHarder.mws

How about changing the value of the parameter a?

for a from -3 by 3/27 to 3 do
    try
        extrema(x^2 + y^2, {x^3 + y^2 - x = a}, {x, y}); 
    catch:
        print(a); 
    end;
od;
The loop prints the values of a for which extrema throws the error:

a = -1/3, -2/9, -1/9, 1/9, 2/9, 1/3

By the way, like extrema, Student[MultivariateCalculus]:-LagrangeMultipliers also executes the Lagrange Multiplier method, but strangely, 

Student[MultivariateCalculus][LagrangeMultipliers](y^2 + x^2, [x^3 + y^2 - x - 1/3], [x, y], output = plot):

does not cause any errors.
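The expected extreme values can also be recovered by hand from the Lagrange conditions; a plain-Python cross-check (not Maple) using exact rationals:

```python
from fractions import Fraction as F

# Lagrange conditions for f = x^2 + y^2 on g = x^3 + y^2 - x - 1/3 = 0:
#   2x = lam*(3x^2 - 1),   2y = lam*2y,
# so either y = 0 or lam = 1.  On the lam = 1 branch, 3x^2 - 2x - 1 = 0,
# i.e. x = 1 or x = -1/3, and y^2 = 1/3 + x - x^3 from the constraint.
values = set()
for x in (F(1), F(-1, 3)):          # roots of 3x^2 - 2x - 1 = 0
    y2 = F(1, 3) + x - x**3         # y^2 from the constraint
    if y2 >= 0:
        values.add(x**2 + y2)
print(sorted(values))   # [Fraction(4, 27), Fraction(4, 3)]
```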

This is about functionality introduced in Maple 2022, which however is still not well known: Integral Vector Calculus and parametrization using symbolic (algebraic) vector notation. Four new commands were added to the Physics:-Vectors package, implementing the parametrization of curves, surfaces and volumes, as well as the computation of path, surface and volume vector integrals. Those are integrals where the integrand is a scalar or vector function. The computation is done from any description (algebraic, parametric, vectorial) of the region of integration - a path, surface or volume.
 
There are three kinds of line or path integrals:

NOTE Jan 1: Updated the worksheet linked below; it runs in Maple 2022.
Download Integral_Vector_Calculus_and_Parametrization.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

The search query in the new Maple Application Center is broken. There are no advanced search options, and a search for mapleflow or maple flow brings up 0 results. There should be at least one result; for example, the search should have brought up "The Liquid Volume in a Partially-Filled Horizontal Tank".

Maple, please fix.

 

Notation is one of the most important things to communicate with others in science. It is remarkable how many people use or do not use a computer algebra package just because of its notation. For those reasons, in the context of the Physics package, strong emphasis is put on using textbook notation as much as possible regarding input and output, including, for that purpose, as people here know, significant developments in Maple typesetting.

Still, for historical reasons, when using the Physics package, the label used to refer to a coordinate system had to be a single capital letter, as in X, Y, .... It was not possible to use, e.g., X' or x.

That has changed. Starting with the Maplesoft Physics Updates v.1308, any symbol can be used as a coordinate system label. The lines below demo this change.

 

Download new_coordinates_labels.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Following the previous post on The Electromagnetic Field of Moving Charges, this is another non-trivial exercise, the derivation of 4-dimensional relativistic Lorentz transformations,  a problem of a 3rd-year undergraduate course on Special Relativity whose solution requires "tensor & matrix" manipulation techniques. At the end, there is a link to the Maple document, so that the computation below can be reproduced, and a link to a corresponding PDF file with all the sections open.

Deriving 4D relativistic Lorentz transformations

Freddy Baudine(1), Edgardo S. Cheb-Terrab(2)

(1) Retired, passionate about Mathematics and Physics

(2) Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Lorentz transformations are a six-parameter family of linear transformations Lambda that relate the values of the coordinates x, y, z, t of an event in one inertial reference system to the coordinates x', y', z', t' of the same event in another inertial system that moves at constant velocity relative to the former. An explicit form of Lambda can be derived from physics principles, or in a purely algebraic mathematical manner. A derivation from physics principles is done in an upcoming post about relativistic dynamics, while in this post we derive the form of Lambda mathematically, as rotations in a (pseudo) Euclidean 4-dimensional space. Most of the presentation below follows the one found in Jackson's book on Classical Electrodynamics [1].

 

The computations below in Maple 2022 make use of the Maplesoft Physics Updates v.1283 or newer.

Formulation of the problem and the ansatz Lambda = exp(`𝕃`)

 

 

The problem is to find a group of linear transformations,

  x'^mu = Λ^mu_nu x^nu

that represent rotations in a 4D (pseudo) Euclidean spacetime, and so they leave invariant the norm of the 4D position vector x^mu; that is,

x'^mu x'_mu = x^mu x_mu

For the purpose of deriving the form of Λ^mu_nu, a relevant property can be inferred by rewriting the invariance of the norm in terms of Λ^mu_nu. In steps, from the above,

g[alpha, beta] x'^alpha x'^beta = g[mu, nu] x^mu x^nu

g[alpha, beta] Λ^alpha_mu x^mu Λ^beta_nu x^nu = g[mu, nu] x^mu x^nu

from where,

g[alpha, beta] Λ^alpha_mu Λ^beta_nu = g[mu, nu]

or in matrix (4 x 4) form, with Λ^alpha_mu ≡ Λ and g[alpha, beta] ≡ g,

Lambda^T*g*Lambda = g

where Lambda^T is the transpose of Lambda. Taking the determinant of both sides of this equation, and recalling that det(Lambda^T) = det(Lambda), we get

 

det(Lambda) = ±1

 

The determination of Lambda is analogous to the determination of the matrix R (the 3D tensor R[i, j]) representing rotations in 3D space, where the same line of reasoning leads to det(R) = ±1. To exclude reflections, which have det(Lambda) = -1 and cannot be obtained through any sequence of rotations because they do not preserve the relative orientation of the axes, the sign that represents our problem is +. To explicitly construct the transformation matrix Lambda, Jackson proposes the ansatz

  Lambda = exp(`𝕃`)   

Summarizing: the determination of Λ^mu_nu consists of determining the `𝕃`[`~mu`, nu] entering Lambda = exp(`𝕃`) such that det(Lambda) = 1, followed by computing the exponential of the matrix `𝕃`.

Determination of `𝕃`[nu]^mu

 

In order to compare results with Jackson's book, we use the same signature he uses, "(+---)", and lowercase Latin letters to represent space tensor indices, while spacetime indices are represented using Greek letters, which is already Physics' default.

 

restart; with(Physics)

Setup(signature = "+---", spaceindices = lowercaselatin)

[signature = `+ - - -`, spaceindices = lowercaselatin]

(1)

Start by defining the tensor `𝕃`[nu]^mu whose components are to be determined. For practical purposes, define a macro LM = `𝕃` to represent the tensor and use L to represent its components

macro(LM = `𝕃`, %LM = `%𝕃`); Define(Lambda, LM, quiet)

LM[`~mu`, nu] = Matrix(4, symbol = L)

`𝕃`[`~mu`, nu] = Matrix(%id = 36893488153289603060)

(2)

"Define(?)"

{Lambda, `𝕃`[`~mu`, nu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], Physics:-LeviCivita[alpha, beta, mu, nu]}

(3)

Next, from Lambda^T*g*Lambda = g (see above in Formulation of the problem) one can derive the form of `𝕃`. To work algebraically with `𝕃`, Lambda, g representing matrices, set these symbols as noncommutative

Setup(noncommutativeprefix = {LM, Lambda, g})

[noncommutativeprefix = {`𝕃`, Lambda, g}]

(4)

From

Lambda^T*g*Lambda = g

Physics:-`*`(Physics:-`^`(Lambda, T), g, Lambda) = g

(5)

it follows that

(1/g*(Physics[`*`](Physics[`^`](Lambda, T), g, Lambda) = g))/Lambda

Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(Lambda, T), g) = Physics:-`^`(Lambda, -1)

(6)

eval(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](Lambda, T), g) = Physics[`^`](Lambda, -1), Lambda = exp(LM))

Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(exp(`𝕃`), T), g) = Physics:-`^`(exp(`𝕃`), -1)

(7)

Expanding the exponential using exp(`𝕃`) = Sum(`𝕃`^k/factorial(k), k = 0 .. infinity), and taking into account that the matrix product g^(-1)*(`𝕃`^T)^k*g can be rewritten as (g^(-1)*`𝕃`^T*g)^k, the left-hand side of (7) can be written as exp(g^(-1)*`𝕃`^T*g)

exp(LM^T/g*g) = rhs(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](exp(`𝕃`), T), g) = Physics[`^`](exp(`𝕃`), -1))

exp(Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g)) = Physics:-`^`(exp(`𝕃`), -1)

(8)

Multiplying by exp(`𝕃`)

(exp(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g)) = Physics[`^`](exp(`𝕃`), -1))*exp(LM)

Physics:-`*`(exp(Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g)), exp(`𝕃`)) = 1

(9)

Recalling that g^(-1) = g[~mu, ~alpha], g = g[beta, nu], and that for any matrix `𝕃`, (`𝕃`^T)[alpha, ~beta] = `𝕃`[~beta, alpha],

g^(-1)*`𝕃`^T*g = g_[~mu, ~alpha]*`𝕃`[~beta, alpha]*g_[beta, nu]

Physics:-`*`(Physics:-`^`(g, -1), Physics:-`^`(`𝕃`, T), g) = Physics:-`*`(Physics:-g_[`~mu`, `~alpha`], `𝕃`[`~beta`, alpha], Physics:-g_[beta, nu])

(10)

subs([Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g) = Physics[`*`](Physics[g_][`~mu`, `~alpha`], `𝕃`[`~beta`, alpha], Physics[g_][beta, nu]), LM = LM[`~mu`, nu]], Physics[`*`](exp(Physics[`*`](Physics[`^`](g, -1), Physics[`^`](`𝕃`, T), g)), exp(`𝕃`)) = 1)

Physics:-`*`(exp(Physics:-g_[`~alpha`, `~mu`]*Physics:-g_[beta, nu]*`𝕃`[`~beta`, alpha]), exp(`𝕃`[`~mu`, nu])) = 1

(11)

To allow for the combination of the exponentials, now that everything is in tensor notation, remove the noncommutative character of `𝕃`

Setup(clear, noncommutativeprefix)

[noncommutativeprefix = none]

(12)

combine(Physics[`*`](exp(Physics[g_][`~alpha`, `~mu`]*Physics[g_][beta, nu]*`𝕃`[`~beta`, alpha]), exp(`𝕃`[`~mu`, nu])) = 1)

exp(`𝕃`[`~beta`, alpha]*Physics:-g_[beta, nu]*Physics:-g_[`~alpha`, `~mu`]+`𝕃`[`~mu`, nu]) = 1

(13)

Since every tensor component of this expression is real, taking the logarithm on both sides and simplifying tensor indices

`assuming`([map(ln, exp(`𝕃`[`~beta`, alpha]*Physics[g_][beta, nu]*Physics[g_][`~alpha`, `~mu`]+`𝕃`[`~mu`, nu]) = 1)], [real])

`𝕃`[`~beta`, alpha]*Physics:-g_[beta, nu]*Physics:-g_[`~alpha`, `~mu`]+`𝕃`[`~mu`, nu] = 0

(14)

Simplify(`𝕃`[`~beta`, alpha]*Physics[g_][beta, nu]*Physics[g_][`~alpha`, `~mu`]+`𝕃`[`~mu`, nu] = 0)

`𝕃`[nu, `~mu`]+`𝕃`[`~mu`, nu] = 0

(15)
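Equation (15) says that, in matrix form, g·`𝕃` is antisymmetric, and it is this condition that guarantees exp(`𝕃`) preserves the metric. This can be cross-checked numerically outside Maple; the following plain-Python sketch (the helper names matmul and expm and the sample entries are made up for illustration; the first index is time, as in the matrices of this worksheet) builds a generator satisfying (15) and verifies Lambda^T·g·Lambda = g:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def expm(M, terms=60):
    # exp(M) via its Taylor series sum(M^k/k!); accurate for the small entries used here
    R = [[float(i == j) for j in range(4)] for i in range(4)]
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in matmul(P, M)]
        R = [[R[i][j] + P[i][j] for j in range(4)] for i in range(4)]
    return R

# Minkowski metric, first index = time
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

# L = g.A with A antisymmetric, so that g.L = A is antisymmetric, i.e. (15) holds
A = [[0.0,  0.1,  0.2,  0.3],
     [-0.1, 0.0,  0.4, -0.5],
     [-0.2, -0.4, 0.0,  0.6],
     [-0.3, 0.5, -0.6,  0.0]]
L = matmul(g, A)            # g^(-1) = g for this metric

Lam = expm(L)
check = matmul(transpose(Lam), matmul(g, Lam))   # should reproduce g
err = max(abs(check[i][j] - g[i][j]) for i in range(4) for j in range(4))
```

The check works for any antisymmetric A, which is the numeric counterpart of the statement that (15) characterizes the Lie algebra of the Lorentz group.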

So the components of `𝕃`[`~mu`, nu]

LM[`~μ`, nu, matrix]

`𝕃`[`~μ`, nu] = Matrix(%id = 36893488151939882148)

(16)

satisfy (15). Using TensorArray, the components of that tensorial equation are

TensorArray(`𝕃`[nu, `~mu`]+`𝕃`[`~mu`, nu] = 0, output = setofequations)

{2*L[1, 1] = 0, 2*L[2, 2] = 0, 2*L[3, 3] = 0, 2*L[4, 4] = 0, -L[1, 2]+L[2, 1] = 0, L[1, 2]-L[2, 1] = 0, -L[1, 3]+L[3, 1] = 0, L[1, 3]-L[3, 1] = 0, -L[1, 4]+L[4, 1] = 0, L[1, 4]-L[4, 1] = 0, L[3, 2]+L[2, 3] = 0, L[4, 2]+L[2, 4] = 0, L[4, 3]+L[3, 4] = 0}

(17)

Simplifying while taking these equations into account results in the form of `𝕃`[`~mu`, nu] we were looking for

"simplify(?,{2*L[1,1] = 0, 2*L[2,2] = 0, 2*L[3,3] = 0, 2*L[4,4] = 0, -L[1,2]+L[2,1] = 0, L[1,2]-L[2,1] = 0, -L[1,3]+L[3,1] = 0, L[1,3]-L[3,1] = 0, -L[1,4]+L[4,1] = 0, L[1,4]-L[4,1] = 0, L[3,2]+L[2,3] = 0, L[4,2]+L[2,4] = 0, L[4,3]+L[3,4] = 0})"

`𝕃`[`~μ`, nu] = Matrix(%id = 36893488153606736460)

(18)

This is equation (11.90) in Jackson's book [1]. By eye we see there are only six independent parameters in `𝕃`[`~mu`, nu], or via

"indets(rhs(?), name)"

{L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}

(19)

nops({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]})

6

(20)

This number is expected: a rotation in 3D space can always be represented as the composition of three rotations, and so is characterized by 3 parameters: the rotation angles measured on each of the space planes (x, y), (y, z) and (z, x). Likewise, a rotation in 4D space is characterized by 6 parameters: rotations on each of the three space planes, parameters L[2, 3], L[2, 4] and L[3, 4], and rotations on the spacetime planes (t, x), (t, y) and (t, z), parameters L[1, j]. Define now `𝕃`[`~mu`, nu] using (18) for further computing with it in the next section

"Define(?)"

{Lambda, `𝕃`[`~mu`, nu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], Physics:-LeviCivita[alpha, beta, mu, nu]}

(21)

Determination of Lambda[`~mu`, nu]

 

From the components of `𝕃`[`~mu`, nu] in (18), the components of Lambda[`~mu`, nu] = exp(`𝕃`[`~mu`, nu]) can be computed directly using the LinearAlgebra:-MatrixExponential command. Then, following Jackson's book, in what follows we also derive a general formula for Lambda[`~mu`, nu] in terms of beta = v/c and gamma = 1/sqrt(-beta^2+1), shown in [1] as equation (11.98), finally showing the form of Lambda[`~mu`, nu] as a function of the relative velocity of the two inertial systems of reference.

 

An explicit form of Lambda[`~mu`, nu] in the case of a rotation on the (t, x) plane can be computed by taking equal to zero all the parameters in (19) but L[1, 2], and substituting in (18) ≡ `𝕃`[`~mu`, nu]

`~`[`=`](`minus`({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}, {L[1, 2]}), 0)

{L[1, 3] = 0, L[1, 4] = 0, L[2, 3] = 0, L[2, 4] = 0, L[3, 4] = 0}

(22)

"subs({L[1,3] = 0, L[1,4] = 0, L[2,3] = 0, L[2,4] = 0, L[3,4] = 0},?)"

`𝕃`[`~μ`, nu] = Matrix(%id = 36893488153606695500)

(23)

Computing the matrix exponential,

"Lambda[~mu,nu]=LinearAlgebra:-MatrixExponential(rhs(?))"

Lambda[`~μ`, nu] = Matrix(%id = 36893488151918824492)

(24)

"convert(?,trigh)"

Lambda[`~μ`, nu] = Matrix(%id = 36893488151918852684)

(25)

This is formula (4.2) in Landau & Lifshitz's book [2]. An explicit form of Lambda[`~mu`, nu] in the case of a rotation on the (x, y) plane can be computed by taking equal to zero all the parameters in (19) but L[2, 3]

`~`[`=`](`minus`({L[1, 2], L[1, 3], L[1, 4], L[2, 3], L[2, 4], L[3, 4]}, {L[2, 3]}), 0)

{L[1, 2] = 0, L[1, 3] = 0, L[1, 4] = 0, L[2, 4] = 0, L[3, 4] = 0}

(26)

"subs({L[1,2] = 0, L[1,3] = 0, L[1,4] = 0, L[2,4] = 0, L[3,4] = 0},?)"

`𝕃`[`~μ`, nu] = Matrix(%id = 36893488151918868828)

(27)

"Lambda[~mu, nu]=LinearAlgebra:-MatrixExponential(rhs(?))"

Lambda[`~μ`, nu] = Matrix(%id = 36893488153289306948)

(28)
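The structure of the two matrix exponentials just computed, the cosh/sinh boost of (25) (Landau's (4.2)) and the cos/sin rotation of (28), can be double-checked outside Maple with a few lines of plain Python. This is only an illustrative sketch: the names, the sample parameter values, and the sign chosen for the rotation generator are assumptions, not Maple code.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def expm(M, terms=60):
    # exp(M) via its Taylor series; fine for the small entries used here
    R = [[float(i == j) for j in range(4)] for i in range(4)]
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in matmul(P, M)]
        R = [[R[i][j] + P[i][j] for j in range(4)] for i in range(4)]
    return R

z = 0.7   # sample value of the boost parameter L[1, 2] (hypothetical)
w = 0.4   # sample value of the rotation parameter L[2, 3] (hypothetical)

# (t, x)-plane generator as in (23): L[1, 2] in the (1, 2) and (2, 1) entries
boost_gen = [[0, z, 0, 0], [z, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
B = expm(boost_gen)
boost_err = max(abs(B[0][0] - math.cosh(z)), abs(B[0][1] - math.sinh(z)),
                abs(B[1][1] - math.cosh(z)), abs(B[2][2] - 1.0))

# (x, y)-plane generator as in (27): antisymmetric block in rows/columns 2, 3
rot_gen = [[0, 0, 0, 0], [0, 0, -w, 0], [0, w, 0, 0], [0, 0, 0, 0]]
R = expm(rot_gen)
rot_err = max(abs(R[1][1] - math.cos(w)), abs(R[1][2] + math.sin(w)),
              abs(R[2][1] - math.sin(w)), abs(R[0][0] - 1.0))
```

The boost block exponentiates to hyperbolic functions because its generator is symmetric, while the antisymmetric rotation block exponentiates to ordinary trigonometric functions.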


Rewriting `%𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i]

 

Following Jackson's notation, for readability, redefine the 6 parameters entering `𝕃`[`~mu`, nu] as

'{LM[1, 2] = `ζ__1`, LM[1, 3] = `ζ__2`, LM[1, 4] = `ζ__3`, LM[2, 3] = `ω__3`, LM[2, 4] = -`ω__2`, LM[3, 4] = `ω__1`}'

{`𝕃`[1, 2] = zeta__1, `𝕃`[1, 3] = zeta__2, `𝕃`[1, 4] = zeta__3, `𝕃`[2, 3] = omega__3, `𝕃`[2, 4] = -omega__2, `𝕃`[3, 4] = omega__1}

(29)

(Note in the above the surrounding single quotes '...' to prevent a premature evaluation of the left-hand sides; that is necessary when using the Library:-RedefineTensorComponent command.) With this redefinition, `𝕃`[`~mu`, nu] becomes

Library:-RedefineTensorComponent({`𝕃`[1, 2] = zeta__1, `𝕃`[1, 3] = zeta__2, `𝕃`[1, 4] = zeta__3, `𝕃`[2, 3] = omega__3, `𝕃`[2, 4] = -omega__2, `𝕃`[3, 4] = omega__1})

LM[`~μ`, nu, matrix]

`𝕃`[`~μ`, nu] = Matrix(%id = 36893488151939901668)

(30)

where each parameter is related to a rotation angle on one plane. Any Lorentz transformation (rotation in 4D pseudo-Euclidean space) can be represented as the composition of these six rotations, and to each rotation corresponds the matrix that results from taking equal to zero all of the six parameters but one.

 

The set of six parameters can be split into two sets of three parameters each, one representing rotations on the (t, x__j) planes, parameters `ζ__j`, and the other representing rotations on the (x__i, x__j) planes, parameters `ω__j`. With that, following [1], (30) can be rewritten in terms of four 3D tensors, two of them with the parameters as components, the other two with matrices as components, as follows:

Zeta[i] = [`ζ__1`, `ζ__2`, `ζ__3`], omega[i] = [`ω__1`, `ω__2`, `ω__3`], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3]

Zeta[i] = [zeta__1, zeta__2, zeta__3], omega[i] = [omega__1, omega__2, omega__3], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3]

(31)

Define(Zeta[i] = [zeta__1, zeta__2, zeta__3], omega[i] = [omega__1, omega__2, omega__3], K[i] = [K__1, K__2, K__3], S[i] = [S__1, S__2, S__3])

{Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], Physics:-LeviCivita[alpha, beta, mu, nu]}

(32)

The 3D tensors K[i] and S[i] satisfy the commutation relations

Setup(noncommutativeprefix = {K, S})

[noncommutativeprefix = {K, S}]

(33)

Commutator(S[i], S[j]) = LeviCivita[i, j, k]*S[k]

Physics:-Commutator(S[i], S[j]) = Physics:-LeviCivita[i, j, k]*S[`~k`]

(34)

Commutator(S[i], K[j]) = LeviCivita[i, j, k]*K[k]

Physics:-Commutator(S[i], K[j]) = Physics:-LeviCivita[i, j, k]*K[`~k`]

(35)

Commutator(K[i], K[j]) = -LeviCivita[i, j, k]*S[k]

Physics:-Commutator(K[i], K[j]) = -Physics:-LeviCivita[i, j, k]*S[`~k`]

(36)

The matrix components of the 3D tensor K__i, related to rotations on the (t, x__j) planes, are

K__1 := matrix([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (1), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (1), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] )

(37)

K__2 := matrix([[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (1), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (1), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] )

(38)

K__3 := matrix([[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (1), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (1), ( 3, 4 ) = (0)  ] )

(39)

The matrix components of the 3D tensor S__i, related to rotations on the (x__i, x__j) 3D space planes, are

S__1 := matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (1), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (-1)  ] )

(40)

S__2 := matrix([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (-1), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (1), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] )

(41)

S__3 := matrix([[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])

array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (1), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (-1), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] )

(42)
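With the explicit matrices (37)-(42) in hand, the commutation relations (34)-(36) can also be checked numerically outside Maple. A plain-Python sketch (names are illustrative; indices i, j, k run 0..2 here, i.e. 1..3 in the text):

```python
# Explicit 4x4 matrices from (37)-(42)
K = [
    [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],   # K__1
    [[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]],   # K__2
    [[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]],   # K__3
]
S = [
    [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]],  # S__1
    [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]],  # S__2
    [[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]],  # S__3
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

def eps(i, j, k):
    # 3D Levi-Civita symbol for indices in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) // 2

def combo(T, i, j):
    # sum over k of eps(i, j, k) * T[k]
    return [[sum(eps(i, j, k) * T[k][a][b] for k in range(3))
             for b in range(4)] for a in range(4)]

# [S_i, S_j] = eps_ijk S_k ; [S_i, K_j] = eps_ijk K_k ; [K_i, K_j] = -eps_ijk S_k
ok = all(
    comm(S[i], S[j]) == combo(S, i, j) and
    comm(S[i], K[j]) == combo(K, i, j) and
    comm(K[i], K[j]) == [[-x for x in row] for row in combo(S, i, j)]
    for i in range(3) for j in range(3)
)
```

Since all entries are integers, the comparisons are exact; the minus sign in the last relation is what distinguishes the Lorentz algebra from that of 4D Euclidean rotations.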

NULL

Verifying the commutation relations between S[i] and K[j]

   

The `𝕃`[`~mu`, nu] tensor is now expressed in terms of these objects as

%LM[`~μ`, nu] = omega[i].S[i]+Zeta[i].K[i]

`%𝕃`[`~μ`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i]

(50)

where the right-hand side, without free indices, represents the matrix form of `%𝕃`[`~mu`, nu]. This notation makes explicit the fact that any Lorentz transformation can always be written as the composition of six rotations

SumOverRepeatedIndices(`%𝕃`[`~μ`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i])

`%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`]

(51)

Library:-RewriteInMatrixForm(`%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`])

`%𝕃`[`~μ`, nu] = (array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (zeta__1), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (zeta__1), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] ))+(array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (zeta__2), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (zeta__2), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] ))+(array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (zeta__3), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (zeta__3), ( 3, 4 ) = (0)  ] ))+(array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (omega__1), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (-omega__1)  ] ))+(array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (-omega__2), ( 1, 2 ) = (0), ( 3, 2 ) = (0), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (omega__2), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] ))+(array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (0), ( 4, 2 ) = (0), ( 1, 2 ) = (0), ( 3, 2 ) = (omega__3), ( 1, 3 ) = (0), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (0), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (0), ( 2, 2 ) = (0), ( 2, 3 ) = (-omega__3), ( 4, 1 ) = (0), ( 3, 4 ) = (0)  ] ))

(52)

Library:-PerformMatrixOperations(`%𝕃`[`~μ`, nu] = zeta__1*K[`~1`]+zeta__2*K[`~2`]+zeta__3*K[`~3`]+omega__1*S[`~1`]+omega__2*S[`~2`]+omega__3*S[`~3`])

`%𝕃`[`~μ`, nu] = (array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (zeta__2), ( 4, 2 ) = (-omega__2), ( 1, 2 ) = (zeta__1), ( 3, 2 ) = (omega__3), ( 1, 3 ) = (zeta__2), ( 4, 3 ) = (omega__1), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (zeta__1), ( 3, 3 ) = (0), ( 2, 4 ) = (omega__2), ( 1, 4 ) = (zeta__3), ( 2, 2 ) = (0), ( 2, 3 ) = (-omega__3), ( 4, 1 ) = (zeta__3), ( 3, 4 ) = (-omega__1)  ] ))

(53)


which is the same as the starting point (30).

The transformation Lambda[`~mu`, nu] = exp(`%𝕃`[`~mu`, nu]), where  `%𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i], as a function of the relative velocity of two inertial systems

 

 

As seen in the previous subsection, in `𝕃`[`~mu`, nu] = K[`~i`]*Zeta[i]+S[`~i`]*omega[i], the second term, S[`~i`]*omega[i], corresponds to 3D rotations embedded in the general form of 4D Lorentz transformations, and K[`~i`]*Zeta[i] is the term that relates the coordinates of two inertial systems of reference that move with respect to each other at constant velocity v. In this section, K[`~i`]*Zeta[i] is rewritten in terms of that velocity, arriving at equation (11.98) of Jackson's book [1]. The key observation is that the 3D vector Zeta[i] can be rewritten in terms of arctanh(beta), where beta = v/c and c is the velocity of light (for the rationale of that relation, see [2], sec. 4, discussion before formula (4.3)).

 

Use a macro - say ub - to represent the atomic variable `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))` (this variable can be entered as `#mover(mi("β"),mo("ˆ"))`). In general, to create atomic variables, see the section on Atomic Variables of the page 2DMathDetails.

 

macro(ub = `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`)

ub[j] = [ub[1], ub[2], ub[3]], Zeta[j] = ub[j]*arctanh(beta)

`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j] = [`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]], Zeta[j] = `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]*arctanh(beta)

(54)

Define(`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j] = [`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]], Zeta[j] = `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]*arctanh(beta))

{Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]}

(55)

With these two definitions, and excluding the rotation term S[`~i`]*omega[i], we have

%LM[`~μ`, nu] = Zeta[j]*K[j]

`%𝕃`[`~μ`, nu] = Zeta[j]*K[`~j`]

(56)

SumOverRepeatedIndices(`%𝕃`[`~μ`, nu] = Zeta[j]*K[`~j`])

`%𝕃`[`~μ`, nu] = arctanh(beta)*(K[`~1`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]+K[`~2`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]+K[`~3`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3])

(57)

Library:-PerformMatrixOperations(`%𝕃`[`~μ`, nu] = arctanh(beta)*(K[`~1`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]+K[`~2`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]+K[`~3`]*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]))

`%𝕃`[`~μ`, nu] = (array( 1 .. 4, 1 .. 4, [( 3, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 4, 2 ) = (0), ( 1, 2 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 3, 2 ) = (0), ( 1, 3 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 4, 3 ) = (0), ( 4, 4 ) = (0), ( 1, 1 ) = (0), ( 2, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 3, 3 ) = (0), ( 2, 4 ) = (0), ( 1, 4 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 2, 2 ) = (0), ( 2, 3 ) = (0), ( 4, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 3, 4 ) = (0)  ] ))

(58)

 

From this expression, the form of Lambda[`~mu`, nu] can be obtained as in (24) using LinearAlgebra:-MatrixExponential and simplifying the result taking into account that `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j] is a unit vector

SumOverRepeatedIndices(ub[j]^2) = 1

`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2 = 1

(59)

exp(lhs(`%𝕃`[`~μ`, nu] = (array( 1 .. 4, 1 .. 4, [( 3, 3 ) = (0), ( 2, 3 ) = (0), ( 4, 2 ) = (0), ( 1, 1 ) = (0), ( 1, 2 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 4, 4 ) = (0), ( 4, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 3, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 3, 4 ) = (0), ( 4, 3 ) = (0), ( 1, 4 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 3, 2 ) = (0), ( 1, 3 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 2, 4 ) = (0), ( 2, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 2, 2 ) = (0)  ] )))) = simplify(LinearAlgebra:-MatrixExponential(rhs(`%𝕃`[`~μ`, nu] = (array( 1 .. 4, 1 .. 4, [( 3, 3 ) = (0), ( 2, 3 ) = (0), ( 4, 2 ) = (0), ( 1, 1 ) = (0), ( 1, 2 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 4, 4 ) = (0), ( 4, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 3, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 3, 4 ) = (0), ( 4, 3 ) = (0), ( 1, 4 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]*arctanh(beta)), ( 3, 2 ) = (0), ( 1, 3 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]*arctanh(beta)), ( 2, 4 ) = (0), ( 2, 1 ) = (`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]*arctanh(beta)), ( 2, 2 ) = (0)  ] )))), {`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2 = 1})

exp(`%𝕃`[`~μ`, nu]) = Matrix(%id = 36893488153234621252)

(60)

It is useful at this point to analyze the dependency of this matrix on the components of `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]

"map(u -> indets(u,specindex(ub)), rhs(?))"

Matrix(%id = 36893488151918822812)

(61)

We see that the diagonal element [4, 4] depends on two instead of only one component of `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]. That is due to the simplification with respect to side relations performed in (60), which constructs an elimination Groebner basis that cannot reduce at once, using the single equation (59), the dependency of each of the elements [2, 2], [3, 3] and [4, 4] to a single component of `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]. So, to further reduce the dependency of the [4, 4] element, this component of (60) requires one more simplification step, using a different elimination strategy, explicitly requesting the elimination of {`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]}

"rhs(?)[4,4]"

((`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2)*(-beta^2+1)^(1/2)-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+1)/(-beta^2+1)^(1/2)

(62)

 

simplify(((`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2)*(-beta^2+1)^(1/2)-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+1)/(-beta^2+1)^(1/2), {`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2 = 1}, {`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]})

(-(-beta^2+1)^(1/2)*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2+(-beta^2+1)^(1/2))/(-beta^2+1)^(1/2)

(63)

This result involves only `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3], and with it the form of Lambda[`~mu`, nu] = exp(`%𝕃`[`~mu`, nu]) becomes

"subs(1/(-beta^2+1)^(1/2)*((`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2)*(-beta^2+1)^(1/2)-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1]^2-`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2]^2+1) = (-(-beta^2+1)^(1/2)*`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2+`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3]^2+(-beta^2+1)^(1/2))/(-beta^2+1)^(1/2),?)"

exp(`%𝕃`[`~μ`, nu]) = Matrix(%id = 36893488151918876660)

(64)

Replacing now the components of the unit vector `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j] by the components of the vector `#mover(mi("β",fontstyle = "normal"),mo("→"))` divided by its modulus beta

seq(ub[j] = beta[j]/beta, j = 1 .. 3)

`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1] = beta[1]/beta, `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[2] = beta[2]/beta, `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[3] = beta[3]/beta

(65)

and recalling that

exp(`%𝕃`[`~μ`, nu]) = Lambda[`~μ`, nu]

exp(`%𝕃`[`~μ`, nu]) = Lambda[`~μ`, nu]

(66)

to get equation (11.98) in Jackson's book, it suffices to introduce the customary notation

1/sqrt(-beta^2+1) = gamma

1/(-beta^2+1)^(1/2) = gamma

(67)

"simplify(subs(`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[1] = beta[1]/beta, exp(`%𝕃`[`~μ`,nu]) = Lambda[`~μ`,nu], 1/(-beta^2+1)^(1/2) = gamma,(1/(-beta^2+1)^(1/2) = gamma)^(-1),?))"

Lambda[`~μ`, nu] = Matrix(%id = 36893488151911556148)

(68)

 

This is equation (11.98) in Jackson's book.
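The boost matrix just obtained can be cross-checked numerically outside Maple: exponentiating arctanh(beta)*(`#mover(mi("β",fontstyle = "normal"),mo("ˆ"))` . K) should reproduce the closed form with entries gamma, gamma*beta_i and delta_ij + (gamma-1)*beta_i*beta_j/beta^2. The following plain-Python sketch assumes the first index is time and uses sample velocity components chosen for illustration; the overall sign of beta is a convention (it flips depending on which frame is considered the moving one).

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def expm(M, terms=60):
    # exp(M) via its Taylor series; fine for the small entries used here
    R = [[float(i == j) for j in range(4)] for i in range(4)]
    P = [row[:] for row in R]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in matmul(P, M)]
        R = [[R[i][j] + P[i][j] for j in range(4)] for i in range(4)]
    return R

K = [
    [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 0, 1, 0], [0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]],
]

b = [0.3, -0.2, 0.4]                       # sample beta_i = v_i / c
beta = math.sqrt(sum(x * x for x in b))
gamma = 1 / math.sqrt(1 - beta**2)
zeta = math.atanh(beta)                    # Zeta[j] = bhat[j] * arctanh(beta)

L = [[zeta * sum(b[n] / beta * K[n][i][j] for n in range(3))
      for j in range(4)] for i in range(4)]
Lam = expm(L)

# Closed form: gamma in the time-time entry, gamma*beta_i in the mixed
# entries, delta_ij + (gamma - 1)*beta_i*beta_j/beta^2 in the space block
closed = [[0.0] * 4 for _ in range(4)]
closed[0][0] = gamma
for i in range(3):
    closed[0][i + 1] = closed[i + 1][0] = gamma * b[i]
    for j in range(3):
        closed[i + 1][j + 1] = (i == j) + (gamma - 1) * b[i] * b[j] / beta**2
err = max(abs(Lam[i][j] - closed[i][j]) for i in range(4) for j in range(4))
```

The agreement follows from (bhat . K)^3 = bhat . K, which truncates the exponential series to I + (bhat . K) sinh(zeta) + (bhat . K)^2 (cosh(zeta) - 1).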

 

Finally, to get the form of this general Lorentz transformation excluding 3D rotations, directly expressed in terms of the relative velocity v of the two inertial systems of reference, introduce

v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c

v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c

(69)

At this point it suffices to Define (69) as tensors

Define(v[i] = [v__x, v__y, v__z], beta[i] = v[i]/c)

{Lambda, `𝕃`[mu, nu], Physics:-Dgamma[mu], K[i], Physics:-Psigma[mu], S[i], Zeta[i], beta[i], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[a, b], omega[i], v[i], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("β",fontstyle = "normal"),mo("ˆ"))`[j]}

(70)

and remove beta and gamma from the formulation using

(rhs = lhs)(1/(-beta^2+1)^(1/2) = gamma), beta = v/c

gamma = 1/(-beta^2+1)^(1/2), beta = v/c

(71)

"simplify(subs(gamma = 1/(-beta^2+1)^(1/2),simplify(?)),size) "

Lambda[`~μ`, nu] = Matrix(%id = 36893488153289646316)

(72)


References

 

[1] J.D. Jackson, "Classical Electrodynamics", third edition, 1999.

[2] L.D. Landau, E.M. Lifshitz, "The Classical Theory of Fields", Course of Theoretical Physics V.2, 4th revised English edition, 1975.


Download Deriving_the_mathematical_form_of_Lorentz_transformations.mw

Deriving_the_mathematical_form_of_Lorentz_transformations.pdf

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

According to Wikipedia

"In computing, a programming language reference or language reference manual is part of the documentation associated with most mainstream programming languages. It is written for users and developers, and describes the basic elements of the language and how to use them in a program. For a command-based language, for example, this will include details of every available command and of the syntax for using it.

The reference manual is usually separate and distinct from a more detailed programming language specification meant for implementors of the language rather than those who simply use it to accomplish some processing task."

 

And no, Maple's Programming Guide is not a language reference manual. It is a guide to how to program in Maple, but it is not a reference manual for the language itself, i.e. a description of the language.

Examples of Language reference manuals are

C

Original C++ 

Python

Ada

Fortran 77

 

and so on.

Why is there no LRM for Maple? Should there be one? I find Maple's semantics sometimes confusing, and many times I find out how things work by trial and error. Having a reference to look at would be good.

I know that for a language as large as Maple this is not easy to do, but it would be good for Maplesoft to invest in making one.

This is an interesting exercise: the computation of the Liénard-Wiechert potentials describing the classical electromagnetic field of a moving electric point charge, a problem of a 3rd year undergrad course in Electrodynamics. The calculation is nontrivial and is performed below using the Physics package, following the presentation in [1] (Landau & Lifshitz, "The Classical Theory of Fields"). I have not seen this calculation performed on a computer algebra worksheet before. Thus, this also showcases the more advanced level of symbolic problems that can currently be tackled on a Maple worksheet. At the end, the corresponding document is linked, and with it the computation below can be reproduced. There is also a link to a corresponding PDF file with all the sections open.

Moving charges:
The retarded and Liénard-Wiechert potentials, and the fields `#mover(mi("E"),mo("→"))` and `#mover(mi("H"),mo("→"))`

Freddy Baudine(1), Edgardo S. Cheb-Terrab(2)

(1) Retired, passionate about Mathematics and Physics

(2) Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Generally speaking, determining the electric and magnetic fields of a distribution of charges involves determining the potentials `ϕ` and `#mover(mi("A"),mo("→"))`, followed by determining the fields `#mover(mi("E"),mo("→"))` and `#mover(mi("H"),mo("→"))` from

`#mover(mi("E"),mo("→"))` = -(diff(`#mover(mi("A"),mo("→"))`, t))/c-%Gradient(`ϕ`(X)),        `#mover(mi("H"),mo("→"))` = `&x`(VectorCalculus[Nabla], `#mover(mi("A"),mo("→"))`)

In turn, the formulation of the equations for `ϕ` and `#mover(mi("A"),mo("→"))` is simple: they follow from the 4D second pair of Maxwell equations, in tensor notation

`∂`[k](F^(i, k)) = -(4*Pi/c)*j^i

where F^(i, k) is the electromagnetic field tensor and j^i is the 4D current. After imposing the Lorentz condition

`∂`[i](A^i) = 0,     i.e.    (diff(`ϕ`, t))/c+VectorCalculus[Nabla].`#mover(mi("A"),mo("→"))` = 0

we get

`∂`[k](`∂`[`~k`](A^i)) = 4*Pi*j^i/c

which in 3D form results in

Laplacian(`#mover(mi("A"),mo("→"))`)-(diff(`#mover(mi("A"),mo("→"))`, t, t))/c^2 = -4*Pi*`#mover(mi("j"),mo("→"))`/c

 

Laplacian(`ϕ`)-(diff(`ϕ`, t, t))/c^2 = -4*Pi*rho/c

where `#mover(mi("j"),mo("→"))` is the current and rho is the charge density.
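For reference, the solutions derived in the collapsed section below are the standard retarded potentials ([1], sec. 62; Gaussian units), shown here in conventional notation:

```latex
\varphi(\vec r, t) = \int \frac{\rho\!\left(\vec r\,',\; t - |\vec r - \vec r\,'|/c\right)}{|\vec r - \vec r\,'|}\, dV', \qquad
\vec A(\vec r, t) = \frac{1}{c}\int \frac{\vec j\!\left(\vec r\,',\; t - |\vec r - \vec r\,'|/c\right)}{|\vec r - \vec r\,'|}\, dV'
```

Each source element contributes with the value it had at the earlier time t - |r - r'|/c, which is what "retarded" refers to.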

 

Following the presentation shown in [1] (Landau and Lifshitz, "The classical theory of fields", sec. 62 and 63), below we solve these equations for `ϕ` and `#mover(mi("A"),mo("→"))` resulting in the so-called retarded potentials, then recompute these fields as produced by a charge moving along a given trajectory `#mover(mi("r"),mo("→"))` = r__0(t) - the so-called Liénard-Wiechert potentials - finally computing an explicit form for the corresponding `#mover(mi("E"),mo("→"))` and `#mover(mi("H"),mo("→"))`.

 

While the computation of the generic retarded potentials is, in principle, simple, obtaining their form for a charge moving along a given trajectory `#mover(mi("r"),mo("→"))` = r__0(t), and from there the form of the fields `#mover(mi("E"),mo("→"))` and `#mover(mi("H"),mo("→"))` shown in Landau's book, involves nontrivial algebraic manipulations. The presentation below thus also shows a technique to map onto the computer the manipulations typically done with paper and pencil for these problems. To reproduce the contents below, the Maplesoft Physics Updates v.1252 or newer is required.


with(Physics); Setup(coordinates = Cartesian); with(Vectors)

[coordinatesystems = {X}]

(1)

The retarded potentials phi and `#mover(mi("A"),mo("→"))`

 

 

The equations which determine the scalar and vector potentials of an arbitrary electromagnetic field are input as

CompactDisplay((`ϕ`, rho, A_, j_)(X))

j_(x, y, z, t)*`will now be displayed as`*j_

(2)

%Laplacian(`ϕ`(X))-(diff(`ϕ`(X), t, t))/c^2 = -4*Pi*rho(X)

%Laplacian(varphi(X))-(diff(diff(varphi(X), t), t))/c^2 = -4*Pi*rho(X)

(3)

%Laplacian(A_(X))-(diff(A_(X), t, t))/c^2 = -4*Pi*j_(X)

%Laplacian(A_(X))-(diff(diff(A_(X), t), t))/c^2 = -4*Pi*j_(X)

(4)

The solutions to these inhomogeneous equations are computed as the sum of the solutions for the equations without right-hand side plus a particular solution to the equation with right-hand side.

Computing the solution to the equations for `ϕ`(X) and  `#mover(mi("A"),mo("→"))`(X)

   

The Liénard-Wiechert potentials of a charge moving along `#mover(mi("r"),mo("→"))` = r__0_(t)

 

From (13), the potential at the point X = (x, y, z, t) is determined by the charge e(t-r/c), i.e. by the position of the charge e at the earlier time

`#msup(mi("t"),mo("'",fontweight = "bold"))` = t-LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`)/c

The quantity LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`) is the 3D distance from the position of the charge at the time `#msup(mi("t"),mo("'",fontweight = "bold"))` to the 3D point of observation x, y, z. In the previous section, the charge was located at the origin and at rest, so LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`) = r, the radial coordinate. If the charge is moving, say on a path r__0_(t), we have

`#mover(mi("R"),mo("→"))` = `#mover(mi("r"),mo("→"))`-r__0_(`#msup(mi("t"),mo("'",fontweight = "bold"))`)

From (13), `ϕ`(r, t) = e(t-r/c)/r, and the definition of `#msup(mi("t"),mo("'",fontweight = "bold"))` above, the potential `ϕ`(r, t) of a moving charge can be written as

`ϕ`(r, t) = e/LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`),   where LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`) = c*(t-`#msup(mi("t"),mo("'",fontweight = "bold"))`)

When the charge is at rest, in the Lorentz gauge we are working, the vector potential is `#mover(mi("A"),mo("→"))` = 0. When the charge is moving, the form of `#mover(mi("A"),mo("→"))` can be found searching for a solution to the wave equation for `#mover(mi("A"),mo("→"))` above that gives `#mover(mi("A"),mo("→"))` = 0 when `#mover(mi("v"),mo("→"))` = 0. Following [1], this solution can be written as

A^alpha = e*u^alpha/(R[beta]*u^beta)

where u^mu is the four velocity of the charge and R^mu = r^mu - r__0^mu = [`#mover(mi("r"),mo("→"))` - r__0_, c*(t-`#msup(mi("t"),mo("'",fontweight = "bold"))`)].

 

Without showing the intermediate steps, [1] presents the three dimensional vectorial form of these potentials `ϕ` and `#mover(mi("A"),mo("→"))` as

 

`ϕ` = e/(R-`#mover(mi("v"),mo("→"))`/c.`#mover(mi("R"),mo("→"))`),   `#mover(mi("A"),mo("→"))` = e*`#mover(mi("v"),mo("→"))`/(c*(R-`#mover(mi("v"),mo("→"))`/c.`#mover(mi("R"),mo("→"))`))
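These vectorial forms can be sanity-checked numerically outside Maple. For a charge in uniform motion along x, the retarded-time condition LinearAlgebra[Norm](`#mover(mi("R"),mo("→"))`) = c*(t - t') can be solved by bisection and the Liénard-Wiechert scalar potential compared with the known closed form for a uniformly moving charge, e/sqrt((x - v*t)^2 + (1 - beta^2)*(y^2 + z^2)) (derived in [1]). A plain-Python sketch; all names, units and sample values are illustrative:

```python
import math

c = 1.0          # speed of light in the chosen units
e = 1.0          # charge (Gaussian units)
v = 0.8          # charge speed along x, as a fraction of c (sample value)

def lw_phi(x, y, z, t):
    # Solve the retarded-time condition c*(t - tp) = |r - r0(tp)|, with
    # r0(tp) = (v*tp, 0, 0), by bisection; f is monotone, so the root is unique.
    f = lambda tp: c * (t - tp) - math.sqrt((x - v * tp)**2 + y * y + z * z)
    lo, hi = t - 1e6, t
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    tp = 0.5 * (lo + hi)
    # Liénard-Wiechert scalar potential phi = e/(R - R.v/c) at the retarded time
    Rx = x - v * tp
    R = math.sqrt(Rx * Rx + y * y + z * z)
    return e / (R - v * Rx / c)

def phi_closed(x, y, z, t):
    # Known closed form for uniform motion, in terms of the present position
    b2 = (v / c)**2
    return e / math.sqrt((x - v * t)**2 + (1 - b2) * (y * y + z * z))
```

The closed form depends only on the charge's present position: the retarded-time dependence cancels exactly for uniform motion, which is what the numeric comparison confirms.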

Computing the vectorial form of the Liénard-Wiechert potentials

   

The electric and magnetic fields `#mover(mi("E"),mo("→"))` and `#mover(mi("H"),mo("→"))` of a charge moving along `#mover(mi("r"),mo("→"))` = r__0_(t)

 

The electric and magnetic fields at a point x, y, z, t are calculated from the potentials `ϕ` and `#mover(mi("A"),mo("→"))` through the formulas

 

`#mover(mi("E"),mo("→"))`(x, y, z, t) = -(diff(`#mover(mi("A"),mo("→"))`(x, y, z, t), t))/c-(%Gradient(`ϕ`(X)))(x, y, z, t),        `#mover(mi("H"),mo("→"))`(x, y, z, t) = `&x`(VectorCalculus[Nabla], `#mover(mi("A"),mo("→"))`(x, y, z, t))

where, for the case of a charge moving on a path r__0_(t), these potentials were calculated in the previous section as (24) and (18)

`ϕ`(x, y, z, t) = e/(|R_| - R_ . v_/c)

A_(x, y, z, t) = e*v_/(c*(|R_| - R_ . v_/c))

These two expressions, however, depend on time only through the retarded time t__0. This dependence enters through R_ = r_(x, y, z) - r__0_(t__0(x, y, z, t)) and through the velocity of the charge v_(t__0(x, y, z, t)). So, before performing the differentiations, this dependence on t__0(x, y, z, t) must be taken into account.
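The derivatives of t__0 that this implicit dependence produces follow from differentiating the defining relation c*(t - t__0) = |R_|; a sketch of the two standard results (not displayed explicitly in the worksheet):

```latex
c\,(t - t_0) = R \equiv |\vec r - \vec r_0(t_0)|,
\qquad
\frac{\partial R}{\partial t_0} = -\frac{\vec R\cdot\vec v}{R}
\\[4pt]
\text{differentiating w.r.t.\ } t:\quad
c\left(1 - \frac{\partial t_0}{\partial t}\right)
  = -\frac{\vec R\cdot\vec v}{R}\,\frac{\partial t_0}{\partial t}
\;\Longrightarrow\;
\frac{\partial t_0}{\partial t} = \frac{cR}{cR - \vec R\cdot\vec v}
\\[4pt]
\text{taking the gradient:}\quad
-c\,\vec\nabla t_0 = \frac{\vec R}{R} - \frac{\vec R\cdot\vec v}{R}\,\vec\nabla t_0
\;\Longrightarrow\;
\vec\nabla t_0 = -\frac{\vec R}{cR - \vec R\cdot\vec v}
```

Both results are what make the denominators (c*|R_| - R_ . v_) appear in the fields below.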

CompactDisplay(r_(x, y, z), (E_, H_, t__0)(x, y, z, t))

t__0(x, y, z, t) will now be displayed as t__0

(29)

R_ = r_(x, y, z)-r__0_(t__0(x, y, z, t)), v_ = v_(t__0(x, y, z, t))

R_ = r_(x, y, z)-r__0_(t__0(X)), v_ = v_(t__0(X))

(30)

The electric field E_ = -(1/c)*∂A_/∂t - ∇`ϕ`

 

Computation of Gradient(`ϕ`(X)) 

Computation of ∂A_/∂t

   

 Collecting the results of the two previous subsections, we have for the electric field

E_(X) = -(1/c)*∂A_(X)/∂t - ∇`ϕ`(X)

E_(X) = -(diff(A_(X), t))/c-%Gradient(varphi(X))

(60)

subs(%Gradient(varphi(X)) = -c*e*(-Physics[Vectors][Norm](v_)^2*R_-Physics[Vectors][Norm](R_)*c*v_+R_*c^2+Physics[Vectors][`.`](R_, a_)*R_+Physics[Vectors][`.`](R_, v_)*v_)/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3, Physics[Vectors]:-diff(A_(X), t) = e*(Physics[Vectors][Norm](R_)^2*a_*c-v_*Physics[Vectors][Norm](v_)^2*Physics[Vectors][Norm](R_)-Physics[Vectors][Norm](R_)*Physics[Vectors][`.`](R_, v_)*a_+v_*Physics[Vectors][`.`](R_, a_)*Physics[Vectors][Norm](R_)+c*v_*Physics[Vectors][`.`](R_, v_))/((1-Physics[Vectors][`.`](R_, v_)/(Physics[Vectors][Norm](R_)*c))*(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^2*Physics[Vectors][Norm](R_)), E_(X) = -(diff(A_(X), t))/c-%Gradient(varphi(X)))

E_(X) = -e*(Physics:-Vectors:-Norm(R_)^2*a_*c-v_*Physics:-Vectors:-Norm(v_)^2*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-Norm(R_)*Physics:-Vectors:-`.`(R_, v_)*a_+v_*Physics:-Vectors:-`.`(R_, a_)*Physics:-Vectors:-Norm(R_)+c*v_*Physics:-Vectors:-`.`(R_, v_))/(c*(1-Physics:-Vectors:-`.`(R_, v_)/(Physics:-Vectors:-Norm(R_)*c))*(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^2*Physics:-Vectors:-Norm(R_))+c*e*(-Physics:-Vectors:-Norm(v_)^2*R_-Physics:-Vectors:-Norm(R_)*c*v_+R_*c^2+Physics:-Vectors:-`.`(R_, a_)*R_+Physics:-Vectors:-`.`(R_, v_)*v_)/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3

(61)

The book presents this result as equation (63.8):

E_ = e*(1 - v^2/c^2)*(R_ - v_*R/c)/(R - (v_ . R_)/c)^3 + e/(c^2*(R - (v_ . R_)/c)^3) * (R_ × ((R_ - v_*R/c) × a_))

where R ≡ |R_| and v ≡ |v_|. To rewrite (61) in that form, introduce the two triple vector products

`&x`(R_, `&x`(v_, a_)); expand(%) = %

v_*Physics:-Vectors:-`.`(R_, a_)-Physics:-Vectors:-`.`(R_, v_)*a_ = Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))

(62)
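The expansion in (62) is the standard BAC-CAB identity R_ × (v_ × a_) = v_*(R_ . a_) - a_*(R_ . v_); as a quick numerical spot-check outside Maple (a sketch using numpy, with random vectors standing in for R_, v_ and a_):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal(3)  # stands in for R_
v = rng.standard_normal(3)  # stands in for v_
a = rng.standard_normal(3)  # stands in for a_

# BAC-CAB expansion: R x (v x a) = v (R . a) - a (R . v)
lhs = np.cross(R, np.cross(v, a))
rhs = v * np.dot(R, a) - a * np.dot(R, v)

assert np.allclose(lhs, rhs)
```

The identity holds for any three vectors, so a single random draw already exercises it.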

simplify(E_(X) = -e*(Physics[Vectors][Norm](R_)^2*a_*c-v_*Physics[Vectors][Norm](v_)^2*Physics[Vectors][Norm](R_)-Physics[Vectors][Norm](R_)*Physics[Vectors][`.`](R_, v_)*a_+v_*Physics[Vectors][`.`](R_, a_)*Physics[Vectors][Norm](R_)+c*v_*Physics[Vectors][`.`](R_, v_))/(c*(1-Physics[Vectors][`.`](R_, v_)/(Physics[Vectors][Norm](R_)*c))*(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^2*Physics[Vectors][Norm](R_))+c*e*(-Physics[Vectors][Norm](v_)^2*R_-Physics[Vectors][Norm](R_)*c*v_+R_*c^2+Physics[Vectors][`.`](R_, a_)*R_+Physics[Vectors][`.`](R_, v_)*v_)/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3, {v_*Physics[Vectors][`.`](R_, a_)-Physics[Vectors][`.`](R_, v_)*a_ = Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))})

E_(X) = e*(-Physics:-Vectors:-Norm(R_)*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))+R_*c*Physics:-Vectors:-`.`(R_, a_)-Physics:-Vectors:-Norm(R_)^2*a_*c+(-c^2*v_+v_*Physics:-Vectors:-Norm(v_)^2)*Physics:-Vectors:-Norm(R_)+R_*c^3-R_*c*Physics:-Vectors:-Norm(v_)^2)/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3

(63)

`&x`(R_, `&x`(R_, a_)); expand(%) = %

Physics:-Vectors:-`.`(R_, a_)*R_-Physics:-Vectors:-Norm(R_)^2*a_ = Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(R_, a_))

(64)

simplify(E_(X) = e*(-Physics[Vectors][Norm](R_)*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))+R_*c*Physics[Vectors][`.`](R_, a_)-Physics[Vectors][Norm](R_)^2*a_*c+(-c^2*v_+v_*Physics[Vectors][Norm](v_)^2)*Physics[Vectors][Norm](R_)+R_*c^3-R_*c*Physics[Vectors][Norm](v_)^2)/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3, {Physics[Vectors][`.`](R_, a_)*R_-Physics[Vectors][Norm](R_)^2*a_ = Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_))})

E_(X) = (c*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(R_, a_))-Physics:-Vectors:-Norm(R_)*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))+(c-Physics:-Vectors:-Norm(v_))*(c+Physics:-Vectors:-Norm(v_))*(R_*c-Physics:-Vectors:-Norm(R_)*v_))*e/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3

(65)

Now split this result into two terms, one of them involving the acceleration a_. For that purpose, first expand the expression without expanding the cross products

lhs(E_(X) = (c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_))-Physics[Vectors][Norm](R_)*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))+(c-Physics[Vectors][Norm](v_))*(c+Physics[Vectors][Norm](v_))*(R_*c-Physics[Vectors][Norm](R_)*v_))*e/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3) = frontend(expand, [rhs(E_(X) = (c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_))-Physics[Vectors][Norm](R_)*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))+(c-Physics[Vectors][Norm](v_))*(c+Physics[Vectors][Norm](v_))*(R_*c-Physics[Vectors][Norm](R_)*v_))*e/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3)])

E_(X) = e*Physics:-Vectors:-Norm(R_)*Physics:-Vectors:-Norm(v_)^2*v_/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3-e*Physics:-Vectors:-Norm(R_)*c^2*v_/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3-e*Physics:-Vectors:-Norm(v_)^2*R_*c/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3+e*R_*c^3/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3+e*c*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(R_, a_))/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3-e*Physics:-Vectors:-Norm(R_)*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))/(c*Physics:-Vectors:-Norm(R_)-Physics:-Vectors:-`.`(R_, v_))^3

(66)

Introduce the notation used in the textbook, R ≡ |R_| and v ≡ |v_|, and proceed with the splitting

lhs(E_(X) = (c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_))-Physics[Vectors][Norm](R_)*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))+(c-Physics[Vectors][Norm](v_))*(c+Physics[Vectors][Norm](v_))*(R_*c-Physics[Vectors][Norm](R_)*v_))*e/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3) = subs(Norm(R_) = R, Norm(v_) = v, add(normal([selectremove(`not`(has), rhs(E_(X) = e*Physics[Vectors][Norm](R_)*Physics[Vectors][Norm](v_)^2*v_/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3-e*Physics[Vectors][Norm](R_)*c^2*v_/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3-e*Physics[Vectors][Norm](v_)^2*R_*c/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3+e*R_*c^3/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3+e*c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_))/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3-e*Physics[Vectors][Norm](R_)*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))/(c*Physics[Vectors][Norm](R_)-Physics[Vectors][`.`](R_, v_))^3), `#mover(mi("a"),mo("→"))`)])))

E_(X) = e*(-R*c^2*v_+R*v^2*v_+R_*c^3-R_*c*v^2)/(c*R-Physics:-Vectors:-`.`(R_, v_))^3-e*(R*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))-c*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(R_, a_)))/(c*R-Physics:-Vectors:-`.`(R_, v_))^3

(67)

Rearrange only the first term using simplify; that can be done in different ways, and perhaps the simplest is to use subsop to apply simplify to just that operand

subsop([2, 1] = simplify(op([2, 1], E_(X) = e*(-R*c^2*v_+R*v^2*v_+R_*c^3-R_*c*v^2)/(c*R-Physics[Vectors][`.`](R_, v_))^3-e*(R*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))-c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_)))/(c*R-Physics[Vectors][`.`](R_, v_))^3)), E_(X) = e*(-R*c^2*v_+R*v^2*v_+R_*c^3-R_*c*v^2)/(c*R-Physics[Vectors][`.`](R_, v_))^3-e*(R*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](v_, a_))-c*Physics[Vectors][`&x`](R_, Physics[Vectors][`&x`](R_, a_)))/(c*R-Physics[Vectors][`.`](R_, v_))^3)

E_(X) = e*(c-v)*(c+v)*(-R*v_+R_*c)/(c*R-Physics:-Vectors:-`.`(R_, v_))^3-e*(R*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(v_, a_))-c*Physics:-Vectors:-`&x`(R_, Physics:-Vectors:-`&x`(R_, a_)))/(c*R-Physics:-Vectors:-`.`(R_, v_))^3

(68)


By eye, this result is mathematically equal to equation (63.8) of the textbook, shown above before (62).
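The equality can also be spot-checked numerically; a sketch in numpy, where random vectors stand in for R_, v_ and a_, and units with c = e = 1 are an assumption of the check, not of the text:

```python
import numpy as np

rng = np.random.default_rng(1)
c, e = 1.0, 1.0
Rv = rng.standard_normal(3)        # R_ vector
a  = rng.standard_normal(3)        # acceleration a_
vv = 0.3 * rng.standard_normal(3)  # velocity v_, kept well below c
R, v = np.linalg.norm(Rv), np.linalg.norm(vv)

# Result (68) obtained in the worksheet
den68 = (c*R - Rv @ vv)**3
E68 = (e*(c - v)*(c + v)*(Rv*c - R*vv)/den68
       - e*(R*np.cross(Rv, np.cross(vv, a))
            - c*np.cross(Rv, np.cross(Rv, a)))/den68)

# Textbook equation (63.8)
den = (R - (vv @ Rv)/c)**3
E638 = (e*(1 - v**2/c**2)*(Rv - vv*R/c)/den
        + e/(c**2*den)*np.cross(Rv, np.cross(Rv - vv*R/c, a)))

assert np.allclose(E68, E638)
```

Since (68) and (63.8) differ only by the algebraic regrouping done above, the two expressions agree for any admissible R_, v_, a_.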

 

Algebraic manipulation rewriting (68) as the textbook equation (63.8)

   

The magnetic field H_ = ∇ × A_

 

 

The book does not show an explicit form for H_; it only indicates that H_ is related to the electric field by the formula

 

H_ = (R_ × E_)/|R_|

 

Thus, in this section we compute the explicit form of H_ and show that the relationship mentioned in the book holds. To compute H_ = ∇ × A_ we proceed as in the previous sections; the right-hand side should be evaluated at the retarded time t__0. For clarity, turn OFF the compact display of functions.

OFF

 

We need to calculate

H_(X) = Curl(A_(x, y, z, t__0(x, y, z, t)))

H_(X) = Physics:-Vectors:-Curl(A_(x, y, z, t__0(X)))

(75)

Deriving the chain rule ∇ × A_(t__0(x, y, z, t)) = %Curl(A_(x, y, z, t__0)) + %Gradient(t__0(X)) × ∂A_(t__0)/∂t__0
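In components, the chain rule being derived reads as follows (ε_ijk is the Levi-Civita symbol; the first term differentiates A_ at fixed t__0, the second collects the implicit dependence through t__0):

```latex
\left(\vec\nabla\times\vec A\right)_i
= \varepsilon_{ijk}\,\partial_j A_k\!\big(x,y,z,t_0(x,y,z,t)\big)
= \varepsilon_{ijk}\left(\partial_j A_k\Big|_{t_0\ \mathrm{const}}
   + (\partial_j t_0)\,\frac{\partial A_k}{\partial t_0}\right)
\\[4pt]
= \left(\vec\nabla\times\vec A\right)\Big|_{t_0\ \mathrm{const}}
  + \vec\nabla t_0 \times \frac{\partial \vec A}{\partial t_0}
```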