
These are answers submitted by acer

Is this an attempt to phish for HD SNs associated with valid Maple licenses? (Isn't/wasn't HD SN an alternative to NIC SN for authentication?) Sorry if that sounds callous, but we live in troubled times.

acer

@turloughmack I apologize if I misinterpreted. The option-pricing nomenclature is technical jargon, and it seems to obscure the fact that your difficulties are really with the coding aspects (which are not specific to that discipline).

It might be better for legibility if you could use notation that matches Maple's 1D notation when posting the code as plain text. For example, if your names are indexed then an entry of f might be f[i+1,j] and not f_i+1,j (which is what one might physically type, as 2D input). This is key, since even that interpretation might not be what you want. It seems that you might want the (i+1)th instance of a Vector f. In that case you could have a table of N Vectors f[i], with each member Vector having entries f[i][j], j=1..m. Or you might prefer to create an N by m Matrix f where each row represents a Vector and the indexing would be f[i,j].
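To make that concrete, here is a small sketch of the two layouts (the names and sizes here are placeholders, not taken from your code):

for i to N do f[i] := Vector(m) end do:  # implicit table of N Vectors
f[2][3] := 7.0:                          # jth entry of the ith Vector

F := Matrix(N, m):                       # N by m Matrix
F[2,3] := 7.0:                           # same datum, as entry F[i,j]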

It was pointed out in response to your earlier Question that square brackets are for list creation, and that you'd probably need round brackets as expression delimiters. But square brackets are still used above in the assignments of a[j], b[j], and c[j]. Is there a reason for that?

You've used `m` mostly, but in your setup of f[N] it appears that you really mean `M` (which is 10, as is N). Is that right? Is m supposed to be M, or vice versa?

Is this getting there?

restart:

a := j -> 1/2*(r-q)*j*Deltat - 1/2*sigma^2*j^2*Deltat:
b := j -> 1 + sigma^2*j^2*Deltat + r*Deltat:
c := j -> -1/2*(r-q)*j*Deltat - 1/2*sigma^2*j^2*Deltat:

r:= 0.05: Deltat:= T/N: DeltaS:=S/M: T:=0.4: q:=0: K:=1.1: S:=2: M:=10: N:=10:

thematrix := Matrix(M,M,scan=band[1,1],
            [[seq(a(j),j=1..M-1)],
             [seq(b(j),j=1..M)],
             [seq(c(j),j=1..M-1)]]):

f[N]:=Vector(M,(j)->max(K-j*DeltaS,0)):

thismatrix:=eval(thematrix,sigma=0.9):

for i from N to 2 by -1 do
 f[i-1] := LinearAlgebra:-LinearSolve(thismatrix,f[i]);
end do:

f[1];

invthismatrix:=thismatrix^(-1):
invthismatrix^9 . f[N];

It might be slightly more numerically stable to do the repeated LinearSolve than to use MatrixInverse. But at this problem size any efficiency concerns and tweaks (such as re-using the LU decomposition, done just once) would make the code less clear for what I suspect is a negligible benefit.
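For the record, a sketch of re-using the factorization might look as follows. If I recall the calling sequences correctly, LinearSolve accepts the list of factors returned by LUDecomposition:

p,l,u := LinearAlgebra:-LUDecomposition(thismatrix):
for i from N to 2 by -1 do
   f[i-1] := LinearAlgebra:-LinearSolve([p,l,u], f[i]);  # factor once, solve many times
end do: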

Maybe the brute force way would do: find the x,y (err, Q1,Q2) values at the intersections. Then just draw eight curves instead of four, and have some of the eight get the dashed line style. See ?plot,options on changing the line style.

Instead of creating all curves in a single plot structure right away, you could assign each to a name. I.e., P1:=plot3d(....,Q1=-1..X, Q2=-1..1) where X is the Q1-coordinate of the intersection point. Do analogously for the other curves, but with some having the line style specified as an option. Then use plots:-display to show them all superimposed.
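A minimal sketch of that display approach (here F is a stand-in for one of your expressions, and X for the computed intersection coordinate):

P1 := plot3d(F, Q1 = -1 .. X, Q2 = -1 .. 1):
P2 := plot3d(F, Q1 = X .. 1, Q2 = -1 .. 1, linestyle = dash):
plots:-display(P1, P2);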

acer

What precisely do you mean by "prints itself"? Do you mean that calling it will produce as output the same result as would evaluating (or printing) it?

I have set interface(prettyprint=1) for these examples below.

> f:=proc() print(eval(procname)); end proc;
                 f := proc() print(eval(procname)) end proc;

> f();
proc() print(eval(procname)) end proc;

Note that the above does not work when the procedure is anonymous. But for that case one can use `thisproc`, new as of Maple 14,

> proc() print(eval(procname)); end proc:

> %(); # NULL output, ie. it didn't succeed

> proc() print(eval(thisproc)); end proc;
proc() print(eval(thisproc)) end proc;

> %();
proc() print(eval(thisproc)) end proc;

It occurs to me that perhaps you instead mean that you want the result of calling `f` to simply be the name of f.

> f:=proc(x)
>    debugopts('callstack')[2];
> end proc:

> f();
                               f

> g:=proc() f(); end proc:
> g();
                               f

I found that callstack useful when coding a redefined version of (the protected) `userinfo` which would display results to a Text Component. This worked even when the code was run in a Maplet or other "hidden" facility, and it worked without having access to change or edit the original routine sources. The task was harder still -- to find out how the current parent procedure had been called. I'd meant to blog it...

Another way to get this simpler effect of printing only the current procedure's own name might be,

> f:=proc() op(1,''procname''); end proc:

> f();
                               f

Do either of those two interpretations of "prints itself" come close to your intended meaning?

acer

Hey Axel, I see you're still musing over that usenet post. One might easily get Maple to convert the following roots of that quintic to radicals, by just calling `solve` with its Explicit option.

> seq(cos(i*Pi/11),i in [1,3,5,7,9]);

           /1    \     /3    \     /5    \      /4    \      /2    \
        cos|-- Pi|, cos|-- Pi|, cos|-- Pi|, -cos|-- Pi|, -cos|-- Pi|
           \11   /     \11   /     \11   /      \11   /      \11   /

But these roots of that sextic look like they are going to be a little tougher... ;)

seq(cos(i*Pi/13),i in [1,3,5,7,9,11]);

    /1    \     /3    \     /5    \      /6    \      /4    \  
 cos|-- Pi|, cos|-- Pi|, cos|-- Pi|, -cos|-- Pi|, -cos|-- Pi|, 
    \13   /     \13   /     \13   /      \13   /      \13   /  

       /2    \
   -cos|-- Pi|
       \13   /

acer

As 1D Maple notation input, your expression would need a star or a dot (* or .) between the bracketed terms in order to denote multiplication. For example, (a+b)*(a-b).

As 2D Math input, it would need either a star or a dot (typed in as * or .) for explicitly denoted multiplication, or a space for implicitly denoted multiplication. For example, (a+b)*(a-b) or (a+b) (a-b).

Without any of these, the first set of bracketed terms gets interpreted as a sum of functions, and the entire expression as one big function application. The 1D and 2D parsers support a distributed function-call syntax. Without the multiplication sign (or space, for 2D input) the following are all parsed as function application:

> (sin+cos)(x);
                        sin(x) + cos(x)

> (sin+f)(x);
                         sin(x) + f(x)

> (f+g)(x-y);
                      f(x - y) + g(x - y)

> (a+b)(x-y);
                      a(x - y) + b(x - y)

> (a+b)(a-b);
                      a(a - b) + b(a - b)

The first term of even that last result is not the same as a*(a-b). No, it is `a` applied as a function, with argument a-b. There's no reason to prevent the parser from recognizing that an as-yet undefined operator `a` might be applied to the sum of names of other operators. Maple is so much more than just mere math.

Some people have expressed the opinion that Maple's relatively new implicit multiplication syntax (denoted by a space between terms, in 2D Math input) makes for visually confusing documents which obscure these syntax distinctions and lead inexperienced users to make this mistake more often.

acer

You should not be using the (officially) deprecated linalg package. You should use the newer LinearAlgebra package.

For transposition either use the LinearAlgebra:-Transpose command, or raise the Matrix or Vector to the %T power (eg. V^%T).

Remember that Maple is case-sensitive.

Also, use Matrix and Vector, not the deprecated matrix and vector commands.

You can extract a column using the shorter syntax E[1..-1,j]. So E[1..-1,5] would extract the 5th column of a Matrix E.

See ?rtable_indexing
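For example (E here is just a random Matrix, for illustration):

E := LinearAlgebra:-RandomMatrix(4, 5):
E^%T;                          # the transpose of E
LinearAlgebra:-Transpose(E);   # the same, by command
E[1 .. -1, 5];                 # the 5th column of E, as a column Vector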

acer

If this is an assignment for which you are supposed to explicitly demonstrate the solving method in action, then see dskoog's Answer.

If, on the other hand, you just want to apply a known method to solve your moderately sized numeric problem then you could use the LinearSolve command with the method=LU option. (Linear solving via LU decomposition is pretty much just Gaussian elimination in disguise, since the L factor provides a way to store the pivoting info. Generally, G.E. of an augmented system would get you half-way there, and you'd back-substitute for the second stage. By doing LU and then both forward- and backward-substituting the whole task is done. The LinearSolve command would just do all those steps for you, internally.)

The quoted size of your problem makes me think that maybe you just want the system solved. Hence the method=LU and LinearSolve suggestion.

 X := LinearAlgebra:-LinearSolve(A, B, method=LU);
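And if you do want the two-stage route spelled out, a sketch using the augmented Matrix might look like this (with A and B as in your system):

with(LinearAlgebra):
Aug := GaussianElimination(<A | B>):   # forward elimination: half-way there
X := BackwardSubstitute(Aug);          # back-substitution finishes the job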

acer

Yes, use Re() and Im(), and not op().

And use I*Im(expr) if you want the imaginary component of expr, as opposed to the "imaginary part". The term "imaginary part" is being used to denote the real number b in a complex number a+b*I, say.

> Xi := sqrt(I);

                               1  (1/2)   1    (1/2)
                         Xi := - 2      + - I 2     
                               2          2         

> Bs := Matrix([[Re(Xi), I*Im(Xi)], [I*Im(Xi), Re(Xi)]]);

                             [ 1  (1/2)   1    (1/2)]
                             [ - 2        - I 2     ]
                             [ 2          2         ]
                       Bs := [                      ]
                             [1    (1/2)   1  (1/2) ]
                             [- I 2        - 2      ]
                             [2            2        ]

Use capitalized Matrix, not matrix.

acer

[deleted]

I misunderstood the request as being a question on how to expand the denominator. (I ought to read the Questions more carefully...)

acer

The ith entry of the column Vector B=A.x will be equal to add(A[i,j]*x[j], j=1..numcolsA). But for any fixed i the terms A[i,j], j=1..numcolsA are the entries of the ith row of A. So the product A.x can also be considered as a weighted linear combination of the columns of A, where the weight applied to the jth column is x[j].

> A:=Matrix(2,3,symbol=a);

                           [a[1, 1]  a[1, 2]  a[1, 3]]
                      A := [                         ]
                           [a[2, 1]  a[2, 2]  a[2, 3]]

> x:=Vector(3, symbol=X);

                                      [X[1]]
                                      [    ]
                                 x := [X[2]]
                                      [    ]
                                      [X[3]]

> Vector(2, (i)->add(A[i,j]*x[j],j=1..3)); # Vector of weighted sums along rows

                [a[1, 1] X[1] + a[1, 2] X[2] + a[1, 3] X[3]]
                [                                          ]
                [a[2, 1] X[1] + a[2, 2] X[2] + a[2, 3] X[3]]

> add(A[1..2,j]*x[j], j=1..3); # weighted sum of the column Vectors

                [a[1, 1] X[1] + a[1, 2] X[2] + a[1, 3] X[3]]
                [                                          ]
                [a[2, 1] X[1] + a[2, 2] X[2] + a[2, 3] X[3]]

> A.x;

                [a[1, 1] X[1] + a[1, 2] X[2] + a[1, 3] X[3]]
                [                                          ]
                [a[2, 1] X[1] + a[2, 2] X[2] + a[2, 3] X[3]]

Looking at the last two results above: the Vector of weighted elementwise sums along rows is equal to the sum of weighted column Vectors.

acer

That's right. fsolve is very likely not thread-safe. Most of the complicated system Library (interpreted) commands like solve, limit, fsolve, dsolve, int, etc., are not. (I mean in Maple 14, and it's true for Maple 12 as well.)

acer

Extrapolating is somewhat dubious, especially back so far. But are you trying to do something like this?

numer_extrap.mws

acer

How about this, where `ee` is your big expression with the arctan call in it.

stuff := seq([op(t)],t in indets(ee,specfunc(anything,arctan)));

stuff[1];
stuff[2];
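For instance, with a small made-up ee (yours will differ):

ee := arctan(x, y) + 3*arctan(u*v):
stuff := seq([op(t)], t in indets(ee, specfunc(anything, arctan)));
stuff[1];   # the list of operands of one of the arctan calls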

acer

You wrote "at least 11K nonzero entries". That's not very large, as a dense 110x110 Matrix has about that many entries.

I'd want to hear what you'd answer to my earlier query about how you plan to judge acceptable roundoff (ie. according to whether it is unlikely, or...?).

But the following code shows that 5000 individual sums over all entries of dense 110x110 float[8] Arrays can be done on a fast PC in less than a second.

Do you have some special reason to think that roundoff is going to be a problem for your data?

The procedure `P` below is a bit like what's under the hood of the 2D AddAlongDimension routine (it calls the same external BLAS function), except it operates on a single Vector. And its first argument must be a float[8] Vector. The procedure `f2` simply aliases a 2D Array to a 1D Vector and then calls P on that.

P := proc(V::Vector(datatype=float[8]),n::posint)
local extlib, ExtCall, x;
   extlib := ExternalCalling:-ExternalLibraryName("linalg", 'HWFloat');
   ExtCall := ExternalCalling:-DefineExternal('hw_f06ecf', extlib);
   x:=Vector(1,'datatype'='float[8]');
   ExtCall(n,1.0,V,0,1,x,0,0);
   x[1];
end proc:

f2:=proc(A::Array(datatype=float[8]))
local B, m, n;
  #(m,n):=op(map(rhs,[rtable_dims(A)]));
  m:=rhs(rtable_dims(A)[1]);
  n:=rhs(rtable_dims(A)[2]);
  B:=ArrayTools:-Alias(A,[m*n]);
  P(B, m*n);
end proc:

M:=LinearAlgebra:-RandomMatrix(110,110,generator=-1.0..1.0,
                               outputoptions=[datatype=float[8]]):
rtable_options(M,subtype=Array);

CodeTools:-Usage( f2(M) );
memory used=64.82KiB, alloc change=0 bytes, cpu time=0.00s, real time=0.02s

                    -11.847884729671568

CodeTools:-Usage( add(t, t in M) );
memory used=0.60MiB, alloc change=0.62MiB, cpu time=0.02s, real time=0.02s

                    -11.847884729671568

str,st,ba,bu:=time[real](),time(),kernelopts(bytesalloc),kernelopts(bytesused):
for i from 1 to 5000 do
   f2(M);
end do:
time[real]()-str,time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

                0.640, 0.640, 22540256, 26030924


str,st,ba,bu:=time[real](),time(),kernelopts(bytesalloc),kernelopts(bytesused):
for i from 1 to 5000 do
   add(t, t in M);
end do:
time[real]()-str,time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

              33.462, 33.462, 18477768, 3146484956

acer
