Carl Love

My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@testht06 

But I did compute the 1/4 power, not the 4th power! To say it explicitly, I computed Diag as the diagonal matrix of the 4th roots (also known as the 1/4 powers) of the eigenvalues of M. Here is the command that computes the 4th roots (ROOT = 4):

ROOTs:= map(lambda-> Roots(t^ROOT - lambda[1]) mod p, eigs);

The fact that R^4 = M proves that I computed the 1/4 power.
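
As a purely illustrative aside, here is the same Roots(...) mod p idiom in a toy setting that is not your GF(2^16) setting: the small prime field GF(13) and a made-up eigenvalue 3.

ROOT:= 4:  p:= 13:  lambda:= 3:     # toy values, for illustration only
Roots(t^ROOT - lambda) mod p;       # the four 4th roots of 3 in GF(13): 2, 3, 10, 11, each with multiplicity 1

The output is a list of [root, multiplicity] pairs, just like the ROOTs computed above.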

I ask my colleagues here at MaplePrimes to step in and try to explain this better for you.

What is your native language? Perhaps someone else here speaks (or writes) it and is also fluent in English.

@Rouben Rostamian  

The difference between your two experiments is due to a very different (and more superficial) phenomenon called automatic simplification. The following example explains the behavior in your experiment #1, yet it has nothing to do with Vectors or other mutable objects. Note that automatic simplification cannot be delayed with unevaluation quotes, which is why those quotes can be used to prove that automatic simplification is occurring.

''{a,a}'';   # displays '{a}': automatic simplification removed the duplicate, even inside doubled unevaluation quotes

a:= 1:  b:= 1:
''{a,b}'';   # displays '{a,b}': a and b are distinct names, so nothing is removed, even though both are assigned 1

eval(%);     # {1}: only now are a and b evaluated, and the resulting duplicates collapse

Please let me know if you understand why the example above explains your experiment 1 or if you have further questions.

 

 

@testht06 

I don't understand your problems with my implementation. Part of the problem is that I don't fully understand your English, and part is that I don't know if you understand the mathematical issues.

Regarding your point (1), "The matrix D is not take powers 1/4": I didn't use a matrix D (because D is a protected name in Maple); I used Diag instead of D. My matrix Diag is already constructed with the fourth roots of the eigenvalues; there is no need to compute Diag^(1/4). Do you want me to explicitly raise a matrix to the 1/4 power? That is superficial, but it can be achieved with operator overloading.
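
Just to illustrate the protectedness of D (this snippet is not part of your problem):

D:= Matrix(2);      # error: D is Maple's differential operator and is protected against assignment
Diag:= Matrix(2);   # fine: Diag is an ordinary, unassigned name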

Regarding your point (2): "Then the matrix R = P x D x P^(-1).": This is not even close to being a complete English sentence, and thus I can't figure it out. "Then elements of the received R will return to GF(2^8).": This is not mathematically possible! The only conceivable alternative is to present the entries of R as radical expressions over GF(2^8). This would be very messy visually, and would be an unusual presentation for a finite field. "The calculation of R^4 is performed in GF(2^8)/...": No! This is mathematically incorrect. The computation of R^4 is over GF(2^16).

@tomleslie Thanks for the research. Yes, spanning arborescence (see Wikipedia article "spanning arborescence") seems to be the correct concept in this situation. However, I don't think that Edmonds' algorithm (see Wikipedia article "Edmonds' algorithm") is the goal.

@zmq I don't understand what you're asking. Please explicitly ask a question as a complete sentence.

The syntax of GraphTheory:-Graph requires that the edges be given as a set---it's as simple as that. See ?GraphTheory,Graph. It doesn't have anything to do with edges being in pairs.
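
A minimal example of that syntax (the vertex labels are just made up):

with(GraphTheory):
G:= Graph({{1,2}, {2,3}, {3,1}});   # the edges are a set, each edge itself a set of two vertices
Edges(G);                           # returns that set of edges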

 

@acer Sure, I know that SVD is used to determine the rank of floating-point matrices. But what does that have to do with determining the equality of those matrices?
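
To spell out what I mean by the rank usage (my own sketch; the 1e-10 tolerance is an arbitrary choice): the numerical rank is the number of singular values above some tolerance.

with(LinearAlgebra):
A:= Matrix([[1., 2.], [2., 4.]]):                 # numerically rank 1
S:= SingularValues(A):                            # Vector of singular values, largest first
nops(select(s-> s > 1e-10, convert(S, list)));    # 1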

Regarding the FAQness of the mutability thing: I enjoy repeatedly rewriting certain answers to FAQs: It gives me much-needed practice writing, and I don't think those FAQs have perfectly understandable answers yet. Indeed, some have become FQAs---Frequently Questioned Answers. Perhaps the above will become Alejandro's go-to answer for the equality-versus-identity FAQ.

@mortezaem 

The Question is about MapleSim, not regular Maple. MapleSim allows for the simulation of the building of objects.

@tomleslie 

Your removal of the evals and your incorporation of frem as only the last term of the piecewises substantially change the mathematical meaning from that of the OP's expression.

Rouben: This Reply is to expand on what Acer said: "It's because Vectors are mutable". Items in a set are collapsed if they are identical, i.e., they are in fact the same entity stored at the same address. This is the same (arguably flimsy) criterion used to determine equality in the expression evalb(A=B), the statement if A=B then..., etc. (but not the profound expression is(A=B)!). Two lists containing identical elements in the same order are themselves identical; thus lists are said to be immutable. (A list can only be created or destroyed; it can't be modified.) But two tables or rtables (Arrays, Vectors, Matrices) containing identical entries in the same order are separate entities; they are said to be mutable. (An entry of an rtable can be changed without destroying and recreating the whole rtable.) To determine the equality of rtables, there are various tricks, one of which is conversion to list or listlist form. Another is the (poorly placed) command LinearAlgebra:-Equal, which has nothing to do with linear algebra.
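
Here is a minimal sketch of those points:

V1:= Vector([1, 2, 3]):  V2:= Vector([1, 2, 3]):  # equal entries, but two distinct mutable objects
nops({V1, V2});                                   # 2: the set does not collapse them
evalb(V1 = V2);                                   # false: evalb compares by identity, not by entries
LinearAlgebra:-Equal(V1, V2);                     # true: entry-by-entry comparison
evalb(convert(V1, list) = convert(V2, list));     # true: identical lists are the same object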

Acer also brought up the point of determining the equality of floating-point rtables. He mentioned using an SVD. Hmm, that seems like overkill to me, but I'm willing to be convinced otherwise by an example. Personally, I'd compute pairwise (via ListTools:-Categorize) the LinearAlgebra:-Norm (or LinearAlgebra:-MatrixNorm) of the differences.
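
By that I mean something like this sketch (the tolerance 1e-8 is an arbitrary choice for illustration):

A:= Vector([1.0, 2.0]):  B:= Vector([1.0, 2.0 + 1e-9]):
evalb(LinearAlgebra:-Norm(A - B) < 1e-8);   # true: the Vectors agree to within the tolerance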

@Alejandro Jakubi Thanks for the PURRS.

Can you show an example of using the ansatz? Can it be applied to the linear homogeneous constant-coefficient recurrence

B(n,k) = B(n-1,k) + B(n-1,k-1), B(n,0)=1, B(n,n)=1, for all n,k >= 0,

whose solution is very well known (binomial(n,k))?
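
As a sanity check (not a solver), Maple easily verifies that binomial(n,k) satisfies the recurrence and the boundary conditions for a range of values:

seq(seq(binomial(n,k) - binomial(n-1,k) - binomial(n-1,k-1), k= 1..n-1), n= 2..6);   # all 0 (Pascal's rule)
seq([binomial(n,0), binomial(n,n)], n= 0..4);                                        # all [1, 1]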

@testht06 

Okay, I understand the project now, and I'll work on it. Please post any followups in the new Question thread.

Okay, now I'm sure that you mean that D is diagonal, not that it's diagonalizable.

 

 

@testht06 

You say D is a diagonalizable matrix of the eigenvalues of M. That doesn't quite make sense to me. Do you mean that D is a diagonal matrix of the eigenvalues of M?

What does D^(1/4) mean? Do you mean the fourth roots? Since we know that the eigenvalues of M are in GF(2^16), will we need to go to GF((2^16)^4) to get the fourth roots? Which of the four fourth roots do you want to use?

Would you post your code so far?

@testht06 

Are you saying that the matrices P and D are given, and that you simply need to compute P.D^(1/4).P^(-1)? Or are you talking about the same matrix as in the previous problem? Can you show your attempt so far? Mightn't we need to go to GF((2^8)^4) to get the fourth roots of the elements of D?

Shouldn't you be asking this as a new Question, a separate thread?

@testht06 

I added many comments to the Answer worksheet above, and I added code to represent the irreducible factor in the new field, extract both its roots, and verify the relationship (1st root)^256 = (2nd root).
