acer

32632 Reputation

29 Badges

20 years, 45 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

It wasn't clear which way you wanted the entries laid out, by column or row.

`Reshape` can be a bit quicker, even with an explicit transposition (to get entries laid out in rows, presuming that you didn't just want order=C_order as a faster way to get that).

restart:
m,n:=33,22:
iter:=1009:

V:=Vector[row](m*n,(i)->i):
with(ArrayTools):

st:=time():
for i from 1 to iter do
   m1:=Reshape(V,[n,m])^%T;
end do:
time()-st;
                                    0.249

st:=time():
for i from 1 to iter do
   L:=convert(V, list):
   m2:=Matrix(m, n, L);
end do:
time()-st;
                                    0.655

st:=time():
for i from 1 to iter do
   m3:=Matrix(m, n,(i,j)->V[(i-1)*n+j]);
end do:
time()-st;
                                    1.186

LinearAlgebra:-Norm(m1-m2),LinearAlgebra:-Norm(m2-m3);
                                    0, 0
#m1,m2,m3;

restart:
m,n:=33,22:
iter:=1000:

V:=Vector[row](m*n,(i)->i):
with(ArrayTools):

st:=time():
for i from 1 to iter do
   m1:=Reshape(V,[m,n])^%T;
end do:
time()-st;
                                    0.203

st:=time():
for i from 1 to iter do
   L:=convert(V, list):
   m2:=Matrix(n, m, L);
end do:
time()-st;
                                    0.640

st:=time():
for i from 1 to iter do
   m3:=Matrix(n,m,(i,j)->V[(i-1)*m+j]);
end do:
time()-st;
                                    1.170

LinearAlgebra:-Norm(m1-m2),LinearAlgebra:-Norm(m2-m3);
                                    0, 0
#m1,m2,m3;

acer

Sorry, I forgot to add this,

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..100, b=0..100, c=0..100,
   iterationlimit=400,feasibilitytolerance=1e-5);

   [3.99999999999999467, [a = 32.1004819041634, b = 42.3076441354364, 

     c = 53.1072282340437, t = 0.604446569149606, u = 0.796645683487556]]

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..100, b=0..100, c=0..100,
   iterationlimit=4000,feasibilitytolerance=1e-7);

   [4.00000000000000089, [a = 32.8320116158925, b = 40.1608997908390, 

     c = 51.8732962008298, t = 0.632927036071215, u = 0.774211448509924]]

But here is my objection to these results: we may believe, on various grounds, that the exact global minimum is exactly 4. But without that knowledge, how can we trust the numerical optimizers? All of the above results, from both GlobalSolve and Minimize, have constraint violations. And here is the kicker: allowing those very same constraint violations, we can get these solvers to produce "false" results with much lower objective values. E.g.,

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..0.001, b=0..0.001, c=0..0.001,
   iterationlimit=400,feasibilitytolerance=1e-5);

       [0., [a = 0., b = 0., c = 0.00100000000000000, t = 0., u = 0.]]

The problem is that the last result above, with objective value 0.0 (or other cases very close to 0.0), seems to have "essential" constraint violations which are due not just to numerical (floating-point) effects. If you first choose some given feasibility tolerance, then it may be possible to find a value of `c` such that a=0 and b=0 give an objective result less than 4 (and perhaps close to zero). How shall we discriminate between the "ok" kind of constraint violation and the "not ok" kind?
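One quick way to see this concretely (not part of the solver's own output) is to substitute the returned point back into the constraints and look at the signed residuals. A small sketch, with each constraint rewritten so that a feasible point gives expr <= 0, and using the point from the a,b,c=0..0.001 run above:

```maple
# Constraints rewritten so that each entry should be <= 0 at a feasible point.
viol := [ b^2 + c^2 - (a^2 + 2*b*c*u),
          a*t + b*u - c,
          b^2*(t^2 - u^2)/(t^2 - 1) + c^2 - 2*b*c*u ]:

pt := [a = 0., b = 0., c = 0.001, t = 0., u = 0.]:

evalf(eval(viol, pt));   # positive entries are violations
```

Here the first and third entries come out as c^2 = 1.0e-6, i.e. genuine violations, but they sit inside the 1.0e-5 feasibility tolerance, which is precisely how the "false" objective value of 0.0 gets accepted.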

I believe that the globally minimal value may be 4, and that the constraint violations for a=0 and b=0 (say) are essential and exact violations. But without analytical examination, how can we distinguish situations involving constraint violations?

acer

@STHence Well, then I must apologize for not understanding just what you were after. I guess I got confused and thought that you primarily wanted to use SmithForm (for some reason of your own). And I was thinking about making a comment -- but did not -- about floating-point.

You may possibly be interested to know that Maple will use (comparison amongst) entries of the diagonal Matrix `S` produced by SingularValues when computing the rank of a floating-point Matrix. This ties into earlier stuff about Rank and Digits.
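For example (a small sketch; I am not stating here the exact size cutoff that gets used internally, only that singular values which are tiny relative to the largest get treated as zero):

```maple
with(LinearAlgebra):

# A 3x3 float Matrix whose first two rows are dependent, so its
# mathematical rank is 2.
M := evalf(Matrix([[1, 2, 3], [2, 4, 6], [1, 0, 1]])):

Rank(M);               # computed, for float data, via the singular values
SingularValues(M);     # one of the three is tiny rather than exactly 0
```

At lower Digits settings the tiny singular value (and hence the computed rank) can come out differently, which is the tie-in to the earlier Rank and Digits discussion.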

Best regards.

@Rowlinginthedeep I wrote a short piece of code that did it a bit like that, but it's at home and I'm in the office right now. Instead of "removing" points I used a weight vector, with zero as the entries for points to be ignored. I compared each residual (and/or y-difference w.r.t. the line) against the maximum of such, as the basis for rejection.

Also, I did it as an iterated process, on the following grounds. At the first iteration, the residuals are computed against what is taken to be the initial "wrong" approximating curve (line). After a "corrected" new line is computed (using weights to sieve out the noise), all the points have new residuals. It may well be that, with these new residuals based on the new line, the sets of points that should be included/excluded are slightly different than they were the first time. So a new weight vector can be constructed, and the process repeated. And so on. For the example at hand, after about 5 iterations of this process the weight vector stopped needing to be changed, the set of rejected points stabilized, and the approximating line stopped looking as if it was being influenced by the "noise" (i.e. rotated too far clockwise).
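Very roughly, and from memory (the data X and Y, the 0.5 rejection factor, and the fixed count of 5 iterations below are placeholder choices for illustration, not the code I actually ran), the loop looked something like:

```maple
with(Statistics):
N := 50:
X := Vector(N, i -> evalf(i/10)):
# A line y = 2x+1, with a few large upward "noise" spikes mixed in.
Y := Vector(N, i -> 2.0*X[i] + 1.0 + `if`(irem(i, 7) = 0, 5.0, 0.0)):

W := Vector(N, fill = 1.0):    # weight vector; 0 means "ignore this point"
for k to 5 do
   f := LinearFit([1, x], X, Y, x, weights = W):
   R := Vector(N, i -> Y[i] - eval(f, x = X[i])):   # residuals w.r.t. current line
   mx := max(seq(abs(R[i]), i = 1..N)):
   W := Vector(N, i -> `if`(abs(R[i]) > 0.5*mx, 0.0, 1.0)):
end do:
f;   # final fitted line, with the spiked points weighted out
```

The point of the outer loop is just what the paragraph above describes: each new line induces new residuals, which can change the accept/reject decisions, so the weight vector is rebuilt and the fit redone until it stabilizes.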

I'm not a statistician, and I just cooked this up, so anyone who knows better please speak up.

Could you add, in a Comment, the Maple code (procedure) that you used, and also let us know the operating system and 32/64bit-ness of your Maple 16?

acer

This also seems to work (and provides some functional `ssystem` as a bonus),

restart:

unprotect(ssystem):
ssystem:=eval(`dsolve/numeric/ssystem`):
protect(ssystem):

cp:=Compiler:-Compile( proc(x) sin(x); end proc ):

cp(2.1);

                         0.863209366648873710

Who knew!

acer

@Joe Riel Thanks for the comment, Joe. I wanted an A such that failure would happen at roughly the half-way mark (walking through). Some schemes' performance advantage won't demonstrate "properly" if the first failure occurs very late through the walk, and might be over-emphasized if very early. And I wanted the same number of failures, while running the three methods in distinct sessions. It sure can be tough to characterize advantage accurately for these kinds of tasks -- in part because some aspect of a user's particular task might make early hits common, or rare.

Is there something much better than ormap(evalb,`<`~(A,B))? I can't help but feel that a function call is being done at least once more than necessary for each element (of both).
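For concreteness, on small lists the pattern in question does this:

```maple
A := [1, 5, 3]:  B := [2, 4, 9]:

rels := `<`~(A, B);          # elementwise relations: [1 < 2, 5 < 4, 3 < 9]
ormap(evalb, rels);          # true, since at least one comparison holds
```

As I read it, the elementwise `<`~ step builds the full list of relations up front (no early exit), and then ormap calls evalb on each entry, which is where the extra per-element function calls come from.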

@jscottelder Is it possible for you to upload some example worksheet that demonstrates the issues?

Have you tried the Classic GUI (using 32bit Maple on MS-Windows, say)?

This link may or may not be of interest. (Common subexpression labelling can sometimes greatly reduce very long output, and it works in the Classic GUI as well as the CLI. But it's not yet clear whether you have very long output expressions or just very many output expressions.)

If you are finished with viewing of the outputs of previous calculations, then in the Standard GUI you might get some relief from Edit-Remove Output from the main menubar. This does not undo computations or clear previous assignments to variable names.
