acer


MaplePrimes Activity


These are replies submitted by acer

The LDL^T decomposition is suitable for symmetric indefinite matrices, and uses pivoting to handle that.

Cholesky LL^T decomposition is for symmetric positive definite.

LDL^T implementations are usually considerably slower than Cholesky LL^T, which is not unexpected.

The OP wrote that his Matrix is "s.p.d". So why the need for LDL^T? Just because some instructor demands it? Or is there some numeric justification? (e.g. close to indefinite, or...?) If there is in fact a numeric justification for preferring LDL^T over LL^T for the given Matrix, then performing LL^T and then adjusting would defeat that goal.
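For what it's worth, here is a rough sketch of that LL^T-then-adjust route, on a made-up 2x2 s.p.d. example (and note the caveat above: this rescaling involves no pivoting, so it wouldn't confer any of LDL^T's robustness):

with( LinearAlgebra ):
A := Matrix( [[4, 2], [2, 3]] ):
L := LUDecomposition( A, method = 'Cholesky' ):   # A = L . L^%T
d := Diagonal( L ):
Lu := L . DiagonalMatrix( 1 /~ d ):               # unit lower-triangular factor
DD := DiagonalMatrix( d ^~ 2 ):
Norm( A - Lu . DD . Transpose( Lu ) );            # check: A = Lu . DD . Lu^%T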

I don't recall seeing LDL^T called "Cholesky" by an expert. It seems like an unnecessary and confusing equivocation.

@samen You have incorrectly changed the code I showed above. Your Reply shows,

plots:-animate(K->plots:-matrixplot(p(k)-p(K)),[k],k=0.0001..0.001);

while my code above showed,

plots:-animate(K->plots:-matrixplot(p(0.0001)-p(K)),[k],k=0.0001..0.001);

With your incorrect change it won't work properly for any N.

So first I reverted that change.

Next I replaced the call to solve inside procedure p with a call to fsolve. That sped it up enough that N=19 was bearable, but N=49 was still too slow.

Then I replaced the use of fsolve (or solve) by use of LinearSolve, after generating the Matrix-form of the equations in sys, which are all linear in the variables in vars. That made the case N=49 run reasonably quickly. See the attachment.

crnk_ac.mw

The call to GenerateEquations is now the slowest part. At least it's outside of any procedure like pmat or p, and so only gets called once. But you could avoid that time cost as well if you generated the Matrix-form directly instead of bothering to form the eqs.
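In case it helps, the Matrix-form conversion looks roughly like this, with a made-up linear system standing in for the actual sys and vars in the attachment:

sys := [ 2*x + 3*y = 5, x - y = 0 ]:
vars := [ x, y ]:
M, v := LinearAlgebra:-GenerateMatrix( sys, vars ):
LinearAlgebra:-LinearSolve( evalf( M ), evalf( v ) );

Building M and v directly, entry by entry, would skip the equation-generation cost altogether.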

I don't know whether your discretization scheme is overall correct. I've focused only on the aspect of solving the equations you've targeted.

If your eventual goal is to numerically evaluate the eigenvectors at floating-point values for all the parameters then you would likely be better off with a procedure (function) that admitted those float values and did the eigenvector calculations in hardware double precision.

That is likely true whether you intend to operate purely within Maple, or in C or Fortran. In the latter cases I mean a purely hardware float LAPACK eigen-solver like from the Intel MKL, say, as opposed to exporting the symbolic formulas for the eigenvectors.

Even just within Maple, an efficiently coded procedure that acted inplace on float[8] Matrices should be able to do millions of such eigenvector computations per second. (In a recent bit of work I was able to crank out several million 5x5 eigenvector computations per second in a highly optimized set of procedures in just Maple, and in C/Fortran there should be even less function-call overhead.)
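A bare-bones sketch of that hardware-precision approach, with random 5x5 float data standing in for actual values:

A5 := LinearAlgebra:-RandomMatrix( 5, 5, generator = 0.0 .. 1.0,
                                   datatype = float[8] ):
evals, evecs := LinearAlgebra:-Eigenvectors( A5 ):   # dispatches to compiled LAPACK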

But there is also numerical stability to consider. The eigen-solvers in LAPACK are quite good, but there's no guarantee that the floating-point instantiation of lengthy symbolic eigenvector formulas would be so numerically robust.

And one more.

map( `^`, L1, 2 );

That should be faster than using the user-defined operator u->u^2 for very long lists, since it only uses the kernel builtin `^`.

[edited] And unlike the elementwise operator ^~ it will work in much older versions of Maple, back to Maple V R5 (released January 1998).
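For example (the elementwise syntax needing a moderately recent Maple, 13 or later if I recall correctly):

L1 := [ seq( i, i = 1 .. 10 ) ]:
map( `^`, L1, 2 );   # kernel builtin, works back to Maple V R5
L1 ^~ 2;             # elementwise equivalent in recent versions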

@jthress1 The only difference I see is in the order in which the 3 distinct eigenvalues might appear in the Vector, and in whether the numerator and denominator entries in the eigenvectors are factored. But scalar multiples of the eigenvectors would also be acceptable.

1D_Euler_Eigenvectors_ac.mw

I don't see anything to object to here. Just be careful to associate the eigenvectors with the right eigenvalues, and account for the order in which they appear.

And if any eigenvalue were associated with more than one independent eigenvector (not the case in this followup) then various linear combinations would also be acceptable.

@GoBills3 The key thing is your use of OS X.

I have seen this reported before, and alternatives suggested if I'm not mistaken. Searching this site might find the older Questions I recall seeing.

@jthress1 Perhaps you are having trouble expressing your idea mathematically, but this paragraph you wrote above contains logical contradictions and some of the sentences do not make sense.

"The eigenvalues are exactly what I expected. In fact, I know that the eigenvalues are correct. What I expected was for each Eigenvalue to have only one eigen vector associated with it. Since the last four eigenvalues are identical, they should have the same eigenvector(s) as mentioned earlier. What I did not expect is for each eigenvalue to have four eigenvectors associated with it. What i expected was to have six total eigenvectors (one eigenvector for each eigenvalue). Does this clear things up? Sorry for the confusion"

It does not make sense to say that the same eigenvalue (of multiplicity four) can be considered as four eigenvalues each with only one eigenvector, as in this sentence: "What I expected was for each Eigenvalue to have only one eigen vector associated with it".

Generically, the Matrix A as given can be shown to have six linearly independent eigenvectors. But there are only three distinct eigenvalues. One of the eigenvalues is associated with an eigen-space having four linearly independent eigenvectors as its basis.

It makes no sense to say that you expect one of the eigenvalues to have multiplicity greater than one while simultaneously objecting to that eigenvalue's associated eigen-space having dimension greater than one and yet expecting a full linearly independent set of eigenvectors for the Matrix altogether.


@jthress1 

Leaving eigenvectors aside for the moment, your symbolic Matrix A appears to have an exact eigenvalue with multiplicity of four. That is, there are only three distinct eigenvalues. Are you saying that is not what you expect? If that contradicts your expectations then perhaps you should resolve that issue before proceeding to eigenvector computation.

For many numeric values of the parameters there can still be six linearly independent eigenvectors, however. It's not clear whether that alone satisfies all your requirements, irrespective of how many distinct eigenvalues your Matrix might have.

@dharr Please see the edit in my followup Comment to my Answer.

It seems to me that the manual solution of the Nullspace (of the CharacteristicMatrix instantiated at each eigenvalue) provides six independent eigenvectors, as computed in my Answer.

It may be only the computation using Eigenvectors -- with forced Normalizer -- that returns the two zero-vectors. Given the presence of radicals in two of the eigenvalues, forcing Normalizer=normal may have been a mistake of mine in the followup.

It seems as if the slow behavior of the LinearAlgebra:-Eigenvectors command (for this example) can be improved by assigning a (straightforward) particular simplifier to Normalizer -- and thus bypassing any "smart" deduction of the normalizer.

Some commands use Normalizer and Testzero to keep down expression swell during row reduction, as well as to guard against inadvertent division by hidden zeros while pivoting.

For this example plain old normal is adequate to obtain reasonably short expressions for the eigenvectors.
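In other words, something along these lines, before calling the Eigenvectors command:

Normalizer := normal:                            # bypass the "smart" deduction
Testzero := u -> evalb( Normalizer( u ) = 0 ):
evals, evecs := LinearAlgebra:-Eigenvectors( A ):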

[edited]
Member dharr points out that two columns of evecsA as computed in the attachment below -- by the Eigenvectors command with Normalizer=u->normal(u) -- are all-zero, which seems to indicate a dearth of independent eigenvectors and a defective Matrix. Yet in the earlier Answer I gave above there appear to be 6 independent non-zero Vectors computed, which each satisfy the definition appropriately. This suggests that the zero-vectors provided by Eigenvectors in the attachment below are incorrect, and those in the Answer above are correct.
Download eigs_LA.mw
[end of edit]

@jthress1 No, the eigenvectors associated with each of the evalsA[ii] are in the sets assigned to evecsA[ii].

There are 4 eigenvectors associated with one of the eigenvalues; that is the common eigenspace for evalsA[ii], ii=3..6, which are all equal. On the other hand, there is just one eigenvector associated with evalsA[1], and also just one associated with evalsA[2].

That last part of the worksheet is just a verification that the (associated pairs of) eigenvalues and eigenvectors satisfy the definition
   A . evec = lambda*evec
in the sense that A . evec - lambda*evec produces the expected zero Vector in each case.
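In code the check amounts to something like this (names as in the attachment, with each evecsA[ii] a set of Vectors):

for ii from 1 to 6 do
    for v in evecsA[ii] do
        simplify( A . v - evalsA[ii] * v );   # expect the zero Vector
    end do;
end do;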

I computed the eigenvectors evecsA[ii] associated with each eigenvalue evalsA[ii] by computing the nullspace (kernel) of the characteristic Matrix (i.e. A - evalsA[ii]*IdentityMatrix).
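That is, roughly:

for ii from 1 to 6 do
    evecsA[ii] := LinearAlgebra:-NullSpace(
                      A - evalsA[ii] * LinearAlgebra:-IdentityMatrix( 6 ) );
end do:

(NullSpace returns a set of Vectors, which is why each evecsA[ii] is a set.)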

@maple2015 It's taking a long time to compute but I'm really not sure that I entered all the many values correctly. It's onerous to re-enter them.

Could you simply upload a worksheet that contains an explicit definition of KFF? (You might lprint it, or lprint it to a text file.)

Or could you remove the Maplets and hard-code the supplied values?

@basha 666 I don't see any explanation of your use of the term "contour", or any actual description of what precisely you want.

If you have additional details, then please add them as a Comment/Reply to your Question above.

But please don't spam this forum with more duplicates of the question. I'll flag them as duplicates, for deletion.

Are you trying to find the (smallest positive, real) eigenvalues of the Matrix, for each particular numeric value of the parameter?

If so then could you not write a procedure that admitted the Matrix and a value for the parameter, instantiated the Matrix at that value, and computed the floating-point eigenvalues? (If it's symmetric then you might even avoid production of small imaginary artefacts.)

That could be fast, and is also (generally) more robust numerically than computing the float roots of the instantiated determinant of the original Matrix.
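A rough sketch of such a procedure, for a hypothetical symbolic Matrix M depending on a single parameter a:

SmallestPosEig := proc( M::Matrix, aval::numeric )
    local Mf, ev;
    # instantiate at the float value; symmetric float[8] storage keeps the
    # computed eigenvalues real (assuming M really is symmetric)
    Mf := Matrix( evalf( eval( M, 'a' = aval ) ),
                  shape = symmetric, datatype = float[8] );
    ev := LinearAlgebra:-Eigenvalues( Mf );
    min( op( select( `>`, convert( ev, 'list' ), 0 ) ) );
end proc: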

There are some options which can speed up the execution of the procedures produced and returned by the call to dsolve(numeric), sometimes at the cost of a slightly more expensive call to dsolve itself. You might try the compile option, but whether you can utilize that will depend on the particular example. Or you might try the option optimize=true, but again the applicability will depend upon the particular example.
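For instance, with a made-up ODE just to show the option syntax:

sol := dsolve( { diff( y(t), t ) = -t * y(t), y(0) = 1 },
               numeric, 'compile' = true, 'optimize' = true ):
sol( 1.0 );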

Why not tell us more about your intended examples?
