Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@melika mokhtari Your z contains the derivative to the 6th power.  Then c contains a term with z in the denominator. Then ode contains separate terms with c and the derivative squared.

I don't know how to fix it.

@tomleslie Yes, I agree that if the OP is working with mathematical structures that are represented symbolically in Maple (rather than working with strings), then it is far better to use the built-in type system than regular expressions. 

If the OP gives more details of their project, I'd be happy to show some examples of using types.
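
For example, supposing that the data is a symbolic polynomial (a made-up one here, since we haven't seen the actual project), the checks work directly on the expression, with no string conversion or regular expressions needed:

expr:= 3*x^2 - 5*x + 1:
type(expr, polynom(integer, x));   #true: a polynomial in x with integer coefficients
type(expr, linear(x));             #false: it's quadratic, not linear
indets(expr, name);                #{x}: the names appearing in the expression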

@torabi You need to change F[n] to F[.., n]. The former is a row vector, and the latter is a column vector.
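
For example, with a throwaway 2x2 Matrix:

F:= <1, 2; 3, 4>:
F[2];      #the 2nd row, a row Vector: [3, 4]
F[.., 2];  #the 2nd column, a column Vector: <2, 4>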

@tomleslie As I said last night, our relative memory and cputime numbers were not surprising.

I just wrote a version that uses a full-size Array, as yours does. My memory usage is still much lower than yours, which I find surprising, and also my cputime is much lower. 

CodeTools:-Usage(sieve(2^24)): #Tom's procedure, not copied here.
memory used=1.37GiB, alloc change=-329.75MiB, cpu time=9.72s, real time=6.10s, gc time=4.28s

restart:

EratosthenesSieve:= proc(N::posint)
description "Returns list of primes <= N";
   if N < 5 then return remove(`>`, [2, 3], N) fi;
   local p, P:= rtable(2..N, k-> k);
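   # Blank out (assign the empty sequence to) the even entries >= 4 and, for each odd prime p <= sqrt(N)
   # (obtained recursively via thisproc), its odd multiples p^2, p^2 + 2*p, ...; seq(P) then yields only the primes.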
   P[[seq(4..N, 2), seq(seq(p^2..N, 2*p), p= thisproc(isqrt(N))[2..])]]:= ();
   [seq(P)]
end proc:
CodeTools:-Usage(EratosthenesSieve(2^24)):
memory used=0.90GiB, alloc change=102.38MiB, cpu time=4.94s, real time=3.16s, gc time=2.05s

I post this not out of any desire to criticize, but simply in the spirit of education and good-natured sportsmanship.

@torabi So, do you not understand the meaning of the error message? Are you incapable of tracking it down yourself? You've been posting on MaplePrimes for nearly 4 years. I can understand your need for some help with the details of a Matlab-to-Maple translation, but I can't hold your hand for every step.

Second degree? Ha! Just eyeballing it, I see that it's at least degree 8 in the derivative.

@torabi You need to replace exp with LinearAlgebra:-MatrixExponential.
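
For example, with a throwaway 2x2 Matrix (note that applying exp to the entries elementwise is not the same thing):

A:= <0, 1; -1, 0>:
LinearAlgebra:-MatrixExponential(A);      #the Matrix exponential: <cos(1), sin(1); -sin(1), cos(1)>
LinearAlgebra:-MatrixExponential(A, t);   #exp(t*A), if you need a scalar parameter t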

@brian bovril Reading through that thread to find the code of mine that you refer to is brutal, so I've reposted it here instead:

EratosthenesSieve:= proc(N::posint)
# Returns list of primes <= N 
   local
      # Only record data for the odds >= 3
      M:= iquo(N-1,2)
     ,prime:= rtable(1..M, fill= true)
     ,p, P, k
   ;
   for p to trunc(evalf(sqrt(N)/2)) do
      if prime[p] then
         P:= 2*p+1;
         for k from (P+1)*p by P to M do  prime[k]:= false  end do
      end if
   end do;
   [`if`(N=1,[][],2), seq(`if`(prime[p], 2*p+1, [][]), p= 1..M)]
end proc:
     
CodeTools:-Usage(EratosthenesSieve(2^14)):
memory used=0.55MiB, alloc change=0 bytes, cpu time=16.00ms, real time=8.00ms, gc time=0ns

 

You say, essentially, that you want to plot P(t) vs. E. That's 3 dimensions: E = 0..4 (on horizontal axis), t = 0..T (on which axis?), and P = P(0)..P(T) (on vertical axis, I guess). Do you understand why this can't work?

If you want to plot P(T) vs. E, that can be done, because T is not a dimension; it's just the number 20.

If you want a 3D plot of P(t) vs. (E, t), that can be done.

If you want a sequence of 2D plots of P(t) vs. t for various specific values of E, that can be done, but E won't be an axis.

If you want a sequence of 2D plots of P(t) vs. t with E varying continuously through 0..4 with its value shown by color gradation, that can be done. In this case, color is the 3rd dimension.

If you want the same thing as an animation with E varying with time (which is thus the third dimension), that can be done.
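
For instance, the animated version could be sketched like below. I'm inventing a logistic ODE and its initial condition just to have something concrete (and using T = 20); substitute your actual system for it.

sol:= dsolve({diff(P(t), t) = E*P(t)*(1 - P(t)), P(0) = 0.1}, numeric, parameters= [E]):
plots:-animate(
   proc(e)
      sol(parameters= [E= e]);               #set the parameter E for this frame
      plots:-odeplot(sol, [t, P(t)], t= 0..20)
   end proc,
   [E], E= 0..4, frames= 41
);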

Your example of what you expect shows multiple distinct plots. Why is that?

@torabi That's just a small excerpt from a math paper with the formulas that you want to implement. I can't debug code in a language that I don't know well (Matlab) and then translate it to Maple. Please use explicit, nontrivial matrices instead of scalars for everything that should be a matrix. When you've done that, I'll take another look at it.

@torabi Sorry then, I don't know Matlab well enough to figure it out. From my reading of Matlab's online help, size(M,1) refers to the extent of the 1st dimension of matrix M. I didn't see any meaning for size(M,1) when M is a scalar. Hopefully someone else can answer. I do have a vague memory that in Matlab scalars are actually 1x1 matrices. Could that be the key to the solution?

In my translation table above, I added a translation for T= S:h:F because it looks like you need that for this code.
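
Concretely, this is the sort of thing I mean, with made-up sizes and endpoints (and assuming that M really is a Matrix):

M:= LinearAlgebra:-RandomMatrix(3, 5):
LinearAlgebra:-RowDimension(M);       #3 -- the analog of Matlab's size(M,1)
T:= Vector[row]([seq(0..2, 0.5)]);    #[0, 0.5, 1.0, 1.5, 2.0] -- the analog of Matlab's T = 0:0.5:2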

@Rouben Rostamian  wrote:

  • [T]he system A.U=B is solved m times within a for-loop with the same A, while B varies. In my mind I justified doing the inversion once and then applying the inverse m times as being not worse than solving the system m times, but I may be wrong on that.

Using theoretical considerations alone, i.e., order asymptotics, I'll explain why the O(n) solver is better even in this case where you repeatedly use the same coefficient matrix A for different right-hand-side vectors B. Let's say that A1:= A^(-1). Then simply doing the multiplication U:= A1.B even once requires O(n^2) steps. This is not even counting the time to compute the inverse.
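
For instance, with n = 4096 (the size used in the example below), each multiplication A1.B costs about n^2, roughly 16.8 million, scalar multiplications, whereas one tridiagonal elimination-and-back-substitution solve costs on the order of 8*n, roughly 33 thousand, arithmetic operations: about 500 times fewer.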

@acer wrote:

  • Maple doesn't offer precomputing the P,LU for a band storage, in part because it needs a structure larger than input AT and things get a bit awkward functionality-wise.

I just wrote a tridiagonal solver that stores prefactors and thus beats that NAG band solver from your example by a factor of 5 timewise. The prefactors could be stored in place in the tridiagonal Matrix (without altering the fact that it is tridiagonal), but I didn't implement that aspect. Indeed, they could simply replace the main diagonal and 1st subdiagonal.

In my example, rather than solving repeatedly with the same right-hand-side vector, I use all different right-hand-side vectors.
 

restart:

N:= 2^12: #space dimension size
maxiter:= 2^7: #time dimension size

AT:= LinearAlgebra:-RandomMatrix(
   N$2, generator= 0.0..100.0, datatype= hfloat, shape= band[1,1]
):

V:= LinearAlgebra:-RandomMatrix((N,maxiter), generator= -100.0..100.0, datatype= hfloat):
XT:= Matrix((N,maxiter), datatype= hfloat):

#Time and accuracy test of Maple's/NAG's TriD solver:

st:= time():
for i to maxiter do
   XT[..,i]:= LinearAlgebra:-LinearSolve(AT,V[..,i])
end do:
time()-st;
LinearAlgebra:-Norm(AT.XT - V);

.172

0.210472710371334415e-7

#Carl's 2-stage TriD solver:
TriD_prefactor_inner:= proc(
   A::Array(datatype= float[8]),
   B::Array(datatype= float[8]),
   C::Array(datatype= float[8]),
   n::integer[4]
)::integer[4];
local k::integer[4];
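   # In-place Thomas-algorithm factorization: A is overwritten with the elimination multipliers,
   # B with the updated pivots; C (the superdiagonal) is left unchanged.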
   for k to n do A[k]:= A[k]/B[k]; B[k+1]:= B[k+1] - A[k]*C[k] od;
   0 #Just to return something
end proc:

TriD_prefactor_comp:= Compiler:-Compile(TriD_prefactor_inner):

TriD_prefactor:= proc(M::Matrix(square, storage= band[1,1], datatype= hfloat))
local   
   n:= rhs(rtable_dims(M)[1])-1,
   A:= rtable(1..n, k-> M[k+1, k], datatype= hfloat),
   B:= rtable(1..n+1, k-> M[k, k], datatype= hfloat),
   C:= rtable(1..n, k-> M[k, k+1], datatype= hfloat)
;
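   # A, B, C are copies of the subdiagonal, main diagonal, and superdiagonal of M, respectively.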
   TriD_prefactor_comp(A, B, C, n);
   (A,B,C)
end proc:

TriD_solve_inner:= proc(
   A::Array(datatype= float[8]),
   B::Array(datatype= float[8]),
   C::Array(datatype= float[8]),
   X::Array(datatype= float[8]),
   n::integer[4]
)::integer[4];
local k::integer[4];
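   # Forward elimination using the stored multipliers in A, then back substitution with the
   # pivots in B and superdiagonal C; the right-hand side X is overwritten with the solution.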
   for k to n do X[k+1]:= X[k+1] - A[k]*X[k] od;
   X[n+1]:= X[n+1]/B[n+1];
   for k from n by -1 to 1 do X[k]:= (X[k]-C[k]*X[k+1])/B[k] od;
   0 #Just to return something
end proc:

TriD_solve_comp:= Compiler:-Compile(TriD_solve_inner):

TriD_solve:= proc(A, B, C, D)
local n:= rtable_num_elems(A), X:= rtable(D);
   TriD_solve_comp(A, B, C, X, n);
   X
end proc:
   

XT:= Matrix((N,maxiter), datatype= hfloat): #Clear old solutions.

#Time and accuracy test of Carl's 2-stage solver
st:= time():
AT_pf:= TriD_prefactor(AT):
for i to maxiter do
   XT[..,i]:= TriD_solve(AT_pf, V[..,i])
end do:
time()-st;
LinearAlgebra:-Norm(AT.XT - V);

0.32e-1

0.468513432494432891e-7

Download TriD_prefactored.mw

@torabi GIGO[*1]: Translations can only work if the original is correct. My Matlab knowledge is quite rusty, but in this case, it looks like M must be a Matrix for your code to make any sense in either language. Yet you have M:= 2, making M a scalar. So I must ask: did you ever run this code in Matlab, and did it work?

[*1]GIGO stands for "Garbage In, Garbage Out", a tongue-in-cheek law of computer science.

Vote up for the derivation and exposition.

Regarding the iterative solver: For an n x n tridiagonal A, the system A.U = B can be solved in O(n) steps and O(1) auxiliary memory, which is much faster than even multiplying by A^(-1), let alone computing A^(-1) in the first place. And often the memory alone needed for A^(-1) is prohibitive. See the Wikipedia article "Tridiagonal matrix algorithm".
