Carl Love

28100 Reputation

25 Badges

13 years, 104 days
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@acer Thank you for the improvements to my hasty code. They certainly expand its scope of applicability.

@Thomas Richard I emphasize that those FAQs apply only if the error occurs when starting Maple. The vast majority of kernel-connection errors are caused by kernel bugs, not by firewall or configuration issues. Indeed, I've never seen a case of the latter, nor a firsthand report of one in any public forum.

@Joe Riel Pausing the output with a simple readstat could be useful because it lets you read the output generated so far. That's especially handy when the program has several traces and infolevels set (my primary debugging tools), generating reams of output.
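For instance, here is a minimal sketch of such a pause (the computation and the infolevel setting are just hypothetical stand-ins):

infolevel[dsolve]:= 5:    # make dsolve print verbose progress information
dsolve(diff(y(x), x, x) + y(x) = x);
readstat("Done reading the output above? Enter any statement (e.g., 1;) to continue: "):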

There are a great many possible reasons for that; you'll need to post your code. Usually it's caused by an improperly trapped error in Maple's internal (kernel) code, which kills the kernel.

You asked:

When we use the relative and absolute errors (C[1] and C[2]) simultaneously it leads to the more accuracy or using absolute error is sufficient?

There's no easy answer to that. There is a correspondence between the relative error and the number of significant digits. Specifically, a relative error of .5*10^(-d) corresponds to d significant digits. If the magnitude of a result is less than one, then the relative error tolerance is more constraining than the absolute error tolerance; if the magnitude is greater than one, then that is reversed. I'd recommend never setting the relative error tolerance greater than .5e-4. The setting of the absolute error tolerance must depend on the scale (magnitude) of the expected results. If you were going to use only one of the two, I'd recommend using the relative. (I welcome any arguments to the contrary.) Note, however, that when the result is 0, achieving a certain relative error can be difficult or impossible.
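To make the digits correspondence concrete, here is a minimal sketch with made-up numbers:

exact:= evalf(Pi):                   # 3.141592654
approx:= 3.1416:                     # agrees with Pi to about 5 significant digits
relerr:= abs(approx - exact)/abs(exact);
evalf(-log[10](2*relerr));           # roughly the number of correct significant digits (about 5.3 here)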

Why after solution C[12] to C[21]  (the components of array corresponding to output data) are zero?

I see what you mean, but I don't have any answer. I think that it's a bug. I think that lsode is not frequently used, so not much maintenance is done on it. If it were being maintained, then Arrays (instead of arrays) would've been allowed many, many years ago.

@spreka After the iteration, the eigenvalues are not "randomly placed around the diagonal." Their placement is highly structured, as this shows:

 

restart:

C:= Matrix(8, [[6,14,7], [1, 13, 7, 3]], scan= band[5,5], datatype= float[8]):

C:= C + C^+:

evalf[5]~(simplify(LinearAlgebra:-Eigenvalues(C), zero))^+;

Vector[row]([-20.296, 20.296, -9.0162, 9.0162, -3.9546, 3.9546, -.37725, .37725])


for i to 100 do
     (Q,R):= LinearAlgebra:-QRDecomposition(C);
     C:= R.Q
end do:

evalf[5]~(fnormal~(C));

Matrix(8, 8, [
    [-0.,      -20.296,  -0.,       0.,       -0.,      -0.,       -0.,      -0.     ],
    [-20.296,   0.,      -0.,       0.,        0.,       0.,       -0.,       0.     ],
    [ 0.,      -0.,      -0.,       9.0162,   -0.,      -0.,        0.,      -0.     ],
    [ 0.,       0.,       9.0162,   0.,        0.,      -0.,        0.,      -0.     ],
    [-0.,      -0.,      -0.,      -0.,       -0.,       3.9546,    0.,       0.     ],
    [-0.,      -0.,      -0.,       0.,        3.9546,   0.,        0.,       0.     ],
    [ 0.,      -0.,      -0.,       0.,        0.,       0.,        0.,      -.37725 ],
    [ 0.,       0.,      -0.,      -0.,       -0.,       0.,       -.37725,  -0.     ]
])


 

 

Download QR.mw

@spreka Again, please provide a small example of such a matrix C.

@acer (I'm sure that you know this already; this is just for other readers.) The referenced code from Stack Exchange has a severe bug. There, the iteration is equivalent to

while not stopping criterion do    
     (Q,R):= LinearAlgebra:-QRDecomposition(C);
     C:= Q^+.R.Q
end do;

However, in the OP's code the iteration is correct: since C = Q.R, the correct step C:= R.Q equals Q^+.C.Q, a similarity transformation that preserves the eigenvalues, whereas the Stack Exchange version applies an extra factor of Q^+ and destroys that property.

 

Please provide a small example of what you consider to be a "bidiagonal" matrix. I've heard of tridiagonal, but I don't think that bidiagonal is well defined. If a matrix has only one nonzero diagonal other than the main diagonal, then it is triangular and its eigenvalues are obvious: they are just the entries of the main diagonal.
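Here's a minimal sketch of that last point (the matrix is a hypothetical example): a matrix whose only nonzero diagonals are the main diagonal and one adjacent diagonal is triangular, so its eigenvalues are exactly its main-diagonal entries.

B:= Matrix(4, 4, [[2, 5, 0, 0], [0, 3, 7, 0], [0, 0, -1, 4], [0, 0, 0, 6]]):
LinearAlgebra:-Eigenvalues(B);    # returns 2, 3, -1, 6 (possibly in another order)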

@Kitonum I believe that the OP is referring to something like this:

restart:
M1:=146996733613391:
M2:=1348471408813:
teks:= CodeTools:-Usage(numtheory[cfrac](M1/M2,  'quotients'));

memory used=23.62MiB, alloc change=22.90MiB, cpu time=31.00ms, real time=28.00ms

31 ms seems to me to be an outrageously long time for such a trivial computation.

@marc sancandi wrote:

PS : For Carl Love :  at x=L1 the OP has not written boundary conditions (as you said) but just Temperature and heat flux continuity conditions

Although it may not be a "boundary condition" in the sense of a true physical boundary, or in the textbook sense, Maple counts any condition that uses a specific value of x as a boundary condition. Three different values of x are used, and that is the one and only reason for the error message. The lack of a Dirichlet boundary condition is irrelevant to Maple at this point in its analysis of the problem. That's all I was saying.

Maple's numeric PDE solver is quite limited. You've specified boundary conditions at three values of x: x=0, x=L1, and x=L1+L2. The solver can only handle boundary conditions at no more than two values of x.
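Here's a minimal sketch of that limitation (the PDEs, constants, and conditions below are hypothetical stand-ins for the OP's two-layer problem, not the actual equations): conditions are imposed at the three values x = 0, x = L1, and x = L1+L2, which pdsolve(..., numeric) rejects.

restart:
L1:= 1.:  L2:= 1.:
pde:= {diff(u1(x,t),t) = diff(u1(x,t),x,x), diff(u2(x,t),t) = diff(u2(x,t),x,x)}:
ibc:= {
    u1(x,0) = 0, u2(x,0) = 0,            # initial conditions
    u1(0,t) = 1,                         # condition at x = 0
    u1(L1,t) = u2(L1,t),                 # temperature continuity at x = L1
    D[1](u1)(L1,t) = D[1](u2)(L1,t),     # heat-flux continuity at x = L1
    D[1](u2)(L1+L2,t) = 0                # condition at x = L1+L2
}:
pdsolve(pde, ibc, numeric, time= t, range= 0..L1+L2);
# This errors out because conditions are given at three distinct values of x.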

@one man I guess that you're referring to Draghilev's method. It would be great to have a general procedure for that in Maple.

@vv It's feasible to do the 2x2 case for moduli q up to about 64.

I'd forgotten that Iterator was now available in out-of-the-box Maple. It's great that there's finally a library alternative to combinat:-cartprod.
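Here's a minimal sketch (with made-up lists) of using Iterator:-CartesianProduct where one might otherwise reach for combinat:-cartprod:

for v in Iterator:-CartesianProduct([1, 2, 3], [a, b]) do
    print(convert(v, list))    # each v is one tuple from the product, returned as an Array
end do: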

As far as I know, RootFinding:-Isolate is the only sure way to get (approximations of) all of the solutions for a polynomial system.
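For instance, a minimal sketch on a made-up polynomial system:

RootFinding:-Isolate({x^2 + y^2 - 4, x*y - 1}, [x, y]);
# returns a list of all real solutions, each as a list of equations [x = <float>, y = <float>]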
