acer


MaplePrimes Activity


These are replies submitted by acer

Actually, DJKeenan suggested something like Optimization:-Maximize(integral, 0..1, method='branchandbound'); which also returns [0.829292352740012, [0.]]. The purely local solver, invoked as Optimization:-Maximize(integral, 0..1), also gave that result for me, even though the function may not be monotonic (if the apparent behaviour between 0.99999 and 1.0 is accurate). acer
Once x is assigned, it can no longer be used as the dummy variable in the plot() call. You could instead use t, or xx, etc. A Vector may only be indexed starting from 1; use an Array if you wish to have the index start at 0. If the last allocated entry in the structure x (Vector or Array) is the 100th, then the loop which assigns to x[i+1] can only go as far as i=99. Otherwise it would try to access the 101st entry, which is out of bounds. You had k=1 (which is an equation and does not perform an assignment), while I presume that you intended the assignment k:=1 . It looks to me as if you are trying to figure out where a crossing takes place, or something like that. You might wish to do an inequality rather than a strict equality comparison. (I'm guessing as to your exact intention.) For example,
restart;
with(plots):

# Use an Array so that the index can start at 0.
x := Array(0..100):
f := x -> a*x - x^2:
a := 2.8:

# Use t (not x) as the plotting variable, since x is already assigned.
p := plot({f(t), t}, t = 0..2.5):

x[0] := 1.5:
for i from 0 to 99 do
  x[i+1] := f(x[i]):
  # An inequality test (rather than strict equality) to detect the crossing.
  if i > 0 and x[i] <= x[i+1] then
    k := i;
    g := x[k]:
    break;
  end if;
end do:
k;
and then the rest of your code. acer
Nice. Now I feel silly for not bothering to think first about the nature of the equations. acer
Well, solve() should be able to handle {eq1, G=0}. So perhaps there's no need for optimizing in that case. I rather got the impression that the OP had some specific values of the parameters in mind, and that those might not be free to range. I thought that one might use Optimization:-Minimize, being careful with the abs, and either choosing a method that does not require derivatives or supplying them by hand. While GlobalOptimization might require only Lipschitz continuity, Optimization is pickier, but sometimes one can coerce it. acer
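For what it's worth, here is a minimal, untested sketch of that derivative-free approach, using made-up equations eq1 and G, and going through Optimization:-NLPSolve (which accepts the method option directly). The nonlinearsimplex method tolerates the nonsmooth abs, for unconstrained problems:
# hypothetical equations, just for illustration
eq1 := x^2 + y - 3:
G := x - y^2 + 1:
# minimize the sum of absolute residuals, without derivatives
Optimization:-NLPSolve(abs(eq1)+abs(G), initialpoint={x=1, y=1}, method=nonlinearsimplex);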
The difference between linalg and LinearAlgebra might be explained by their using differing criteria for choosing between valid candidate pivots. The routine linalg:-ffgausselim selects according to length, i.e., length(B[p[j],k]) < length(B[p[i],k]). The routine LinearAlgebra:-LA_Main:-LUDecomposition appears to select the first nonzero entry it can find (in the current leading column). There are reasons to prefer the selection by shortest length.

As for the method itself, you can find it implemented in several places in the Maple library. In some places it is implemented as a means to compute the inverse or determinant, e.g., LinearAlgebra:-LA_Main:-`MatrixInverse/polynom`. In other places it is implemented to compute a result with a modular technique, or over some particular field, e.g., LinearAlgebra:-LA_Main:-`Determinant/modular` or LinearAlgebra:-Generic:-BareissAlgorithm.

You can view the source of the Maple library routines named above by calling eval() around their names. You may first need to issue these two commands,
kernelopts(opaquemodules=false):
interface(verboseproc=3):
These sources are mostly straightforward and the core algorithmic implementations within them are not too difficult to understand. They each usually differ in some way, according to their purpose, domain, etc. acer
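For instance, with those two commands issued first, something like the following should display two of the implementations named above:
kernelopts(opaquemodules=false):
interface(verboseproc=3):
eval(LinearAlgebra:-LA_Main:-`MatrixInverse/polynom`);
eval(LinearAlgebra:-Generic:-BareissAlgorithm);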
Here's a stab at the idea posted above. If it were altered so that FFFF_3 accepts A as a second argument (or uses it as a global), then A could be created outside of FFFF_3. This allows one to check that the compiled and uncompiled instances produce the same Y results at a small size, like g=10 say.
FFFF_3 := proc(g)
global adder;
local i, Y, A;
  # Generate A with enough entries so that it is unlikely that
  # adder() attempts to access more than that number of values.
  A:=Statistics:-Sample(Statistics:-RandomVariable(':-Poisson'(1)),g*4);
  Y := Vector[row](g,datatype=integer[4]);
  adder(A,Y,g);
  [seq(Y[i], i = 1 .. g)];
end proc:

addproc := proc(A::Vector(datatype=float[8]), Y::Vector(datatype=integer[4]), g::integer)
local count::integer, j::integer, x::integer, i::integer;
  count := 1:
  for j from 1 to g do
    # The leading value of each group gives the number of subsequent values to sum.
    x := trunc(A[count]);
    count := count+1;
    if x=0 then
      Y[j] := 0;
    else
      Y[j] := trunc(add(A[i], i = count..count+x-1));
    end if;
    count := count+x;
  end do;
end proc:

adder := addproc:
T := time(): FFFF_3(2000): time()-T;

adder := Compiler:-Compile(addproc):
gc():

T := time(): FFFF_3(2000): time()-T;
I don't know what the Compiler does with calls to the Maple built-in function add(). Maybe it makes the compiled procedure call back to the Maple kernel for such. But I did not get a big further improvement above by replacing the add() call in addproc() with, say,
      # (with temp declared as an additional local, e.g. temp::float)
      temp := 0.0;
      for i from count to count+x-1 do
        temp := temp + A[i];
      end do;
      Y[j] := trunc(temp);
acer
The (Maple kernel) built-in routine trunc() may be faster than floor() for you, where you know that all your values will be greater than zero. But the improvement due to using trunc() instead of floor() may be small.

There may also be some other, faster, ways to approach the whole problem. But these too might have their own bottlenecks to figure out. Here's an example idea, which I haven't tested. My guess is that, done well, it might perform faster, even if a first crack at trying it doesn't seem better.

Suppose that, right up front and just the once, you created a very long 1-D Array or Vector from the distribution, say with g * 10 entries. This is by calling Sample just once, like,
A := Statistics:-Sample(Statistics:-RandomVariable((':-Poisson')(1)), g*10);
Then you could create some new, inner, proc which can walk A. As it walks A, it keeps a counter. It takes a value from A and increments the counter by 1. Take the floor of that value, and let that number be assigned to C. Then walk and add up the next C entries, take the floor, and assign it to y[i] or do what you will with it. Increment the counter by C. And repeat.

The point is that, each time the routine S returned by Sample() is called, a new Vector for the results is created. These Vectors become Maple garbage, and garbage collection is part of what's taking up time. Such time might be avoided by creating just a single Vector and incurring the Vector creation overhead just once. You can choose the length of Array A so that it is very, very likely to contain enough values from the distribution for your full purpose.

You might be able to use add() to add up the C entries from current-counter to current-counter+C. Or you might be able to use for-loops and perhaps even use Maple's Compiler on this new inner procedure. acer
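As a rough, hypothetical way to see the garbage-collection effect described above (timings will vary by machine), one could compare many small Sample calls against a single large one:
X := Statistics:-RandomVariable(':-Poisson'(1)):
S := Statistics:-Sample(X):
# 1000 separate calls, each creating a fresh 100-entry Vector (more garbage):
T := time(): for k from 1 to 1000 do S(100) end do: time()-T;
# one call producing a single 100000-entry Vector:
T := time(): S(100000): time()-T;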
For a Matrix whose entries are all explicitly stored, one practical limit is the amount of available memory. If the amount of available memory is the effective limit, then using a hardware datatype (integer[1], float[8], etc) can allow one to create and use larger Matrices and Vectors.

If, on the other hand, not all entries must be stored, then a hard limit in each dimension may be something like 2^(kernelopts(wordsize)-1)-1. This can be relevant when the Matrix is set up to have either so-called empty storage, with unstored entries specified only by an indexing function, or sparse storage, with only a specific number of entries' storage allocated up front. For example, on 64bit Linux,
M:=Matrix(2^63-1,2^63-1,storage=sparse[1]);
and on 32bit Windows,
M:=Matrix(2^31-1,2^31-1,storage=sparse[1]);

One way to conceptually extend even that limitation could be to use multi-dimensional Arrays instead of Matrices or Vectors. An Array can have from 0 to 63 dimensions (as opposed to Matrices, which have just 2, and Vectors, which have just 1). Another possible extension might be to have Matrices whose entries are themselves Matrices. Eg,
M:=Matrix(2^31-1,2^31-1,storage=sparse[1]);
M[1,1]:=Matrix(2^31-1,2^31-1,storage=sparse[1]);
acer
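To give a rough sense of the memory point (figures are approximate and machine-dependent): an all-stored 20000 x 20000 Matrix needs about 0.4 GB with datatype=integer[1] (one byte per entry) and about 3.2 GB with datatype=float[8] (eight bytes per entry), while storing full software floats for every entry would need considerably more. For example,
M := Matrix(20000, 20000, datatype=integer[1]):  # roughly 0.4 GB of contiguous storage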
Suppose that the variable `sol` were assigned something like this,
sol := fsolve({Eq16, Eq15}, {beta = 4 .. 5, N_bar = 0 .. 1000});
Then 2-argument eval() should do the trick, to get at the separate values.
sol := {N_bar = 18.04394737 + 0.*I, beta = 4.474743382};
eval(N_bar, sol);
eval(beta, sol);
acer