acer

32400 Reputation

29 Badges

19 years, 345 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@spm71 A Matrix has its indices start from 1.

An Array can have its indices start from whatever integers you want (the default lower bound is 1). Note that an Array itself will not display with pretty-printing (if that matters to you...). So you could do, say,

Array(0..5,0..5,(i,j)->i+j);

or

Array(0..4,1..5,(i,j)->i+j);

or what have you.

Note also that for a Matrix M the operation M^(-1) computes the Matrix inverse. But for an Array A the operation A^(-1) performs elementwise (scalar) multiplicative inversion.
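For instance, a minimal sketch of that distinction (using a small invertible example):

```maple
M := Matrix([[2,1],[1,1]]):
M^(-1);      # the true Matrix inverse, here Matrix([[1,-1],[-1,2]])

A := Array([[2,1],[1,1]]):
A^(-1);      # elementwise reciprocals, here Array([[1/2,1],[1,1]])
```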

This sounds like a good time to start reading the manuals and the Help. (But you may find it difficult to find good material on Matrix; i.e., you won't find that here, or here, or here. I suggest you go here and here.)

Several years ago I posted an answer to another question, where I wanted something faster than Student:-Calculus1:-Roots for finding multiple roots of a univariate nonpolynomial expression over a finite range. I called the quick one-off that I posted `findroots`. (Carl Love posted a very similar thing in yet another thread here just last year.) Here is one version of what I wrote. It may not be the latest and most robust version I've implemented, but it is the first one I found.

findroots:=proc(expr,a,b,{guard::posint:=5,maxtries::posint:=50})
local F,x,sols,i,res,start,t;
   x:=indets(expr,name) minus {constants};
   if nops(x)>1 then error "too many indeterminates"; end if;
   F:=subs(__F=unapply(expr,x[1]),__G=guard,proc(t)
      Digits:=Digits+__G;
      __F(t);
   end proc);
   sols,i,start:=table([]),0,a;
   to maxtries do
      i:=i+1;
      res:=RootFinding:-NextZero(F,start,
                                 'maxdistance'=b-start);
      if type(res,numeric) then
         sols[i]:=fnormal(res);
         if sols[i]=sols[i-1] then
            start:=sols[i]+1.0*10^(-Digits);
            i:=i-1;
         else
            start:=sols[i];
         end if;
      else
         break;
      end if;
   end do;
   op({entries(sols,'nolist')});
end proc:

Using that for the 2nd stage (reclaiming roots in terms of t, t1, t2, t3 from the certified roots of the polynomial system obtained in the 1st stage using RootFinding:-Isolate), the computation to find 16 roots of the original problem in this thread goes down from about 30 seconds to about 1.2 seconds on my 5 year-old i7 machine. And that's over 200 times faster than what DirectSearch:-SolveEquation might take on my machine. Your mileage may vary.

f1 := -8100+(-30+70*cos(t1)-40*cos(t2))^2
      +(-70*sin(t1)+40*sin(t2))^2:
f2 := (-20-80*cos(t3))^2+(-15+70*cos(t1)+10*cos(t1+t))^2
      +(-70*sin(t1)-10*sin(t1+t)+80*sin(t3))^2-5625:
f3 := (-20-80*cos(t3))^2+(15+40*cos(t2)+10*cos(t1+t))^2
      +(-40*sin(t2)-10*sin(t1+t)+80*sin(t3))^2-5625:
f4 := 10*cos(t1+t)*(30-70*cos(t1)+40*cos(t2))
      -10*sin(t1+t)*(70*sin(t1)-40*sin(t2)):
origsys := {f1,f2,f3,f4}:
st,str := time(),time[real]():
sys := map(expand,origsys):
T := [indets(sys,function)[]]:
Tsub := [seq(x=freeze(x),x=T)]:
extra := {seq(sin(r)^2+cos(r)^2-1,r=[t,t1,t2,t3])}:
Digits := 20:
raw := thaw(RootFinding:-Isolate(eval(sys union extra,Tsub),eval(T,Tsub))):
gen := proc(cand)
  local c, v;
  c[t]:=select(has,cand,t); c[t1]:=select(has,cand,t1);
  c[t2]:=select(has,cand,t2); c[t3]:=select(has,cand,t3);
  [seq(seq(seq(seq([e,f,g,h],e=c[t3]),f=c[t2]),g=c[t1]),h=c[t])];
end proc:
Digits := 15:
sols := {seq(seq(
      `if`(evalf(max(map(abs,eval(origsys,eq))))<10^(-5),
           eq, NULL),
      eq=gen(
map(eq->indets(eq,name)[]
                     =~findroots((lhs-rhs)(eq),
                                 0,evalf(2*Pi))[],raw[i]))
                ),i=1..nops(raw))}:
sols := evalf[15](sols):
time()-st,time[real]()-str;
                                1.170, 1.167
nops(sols);
                                     16
Digits := 50:
max(seq(max(abs~(evalf(eval([f1,f2,f3,f4],s)))),s=sols)): evalf[5](%);
Digits := 10:
                                5.7233*10^(-11)

I mentioned above that the initial stage of finding all roots of the polynomial system (with rational coefficients) gets a so-called certified result from RootFinding:-Isolate.

But speed and accuracy may matter to some. And there is also the question of how automatic the methodology is.

The approach I've taken with RootFinding:-Isolate involves more lines of code, but that's just because I haven't bothered to make it a user-friendly procedure. I don't see why in principle it couldn't be made more convenient to call. Perhaps most awkward is the fiddling with the working precision (Digits) to get adequate guard digits at the various stages, so as to ensure that spurious duplicates are avoided.

I consider it a disadvantage if the DirectSearch:-SolveEquations command has to be invoked with options such as solutions=17 or number=250, since those appear to be arbitrary, or ad hoc, or to require special knowledge beforehand of the number of members in the solution.

So to my eye both these methodologies have their disadvantages, as described in the two paragraphs above. Of course the very big advantage of DirectSearch is that it is general and not just a method tailored to a very specific kind of problem as considered here.

I used the syntax suggested by Markiyan for SolveEquations and compared it with the code I posted, for speed and accuracy, on 64bit and 32bit Maple 18.02 for Windows 7, on 64bit Maple 2015.1 for Windows 7, and on 64bit Maple 18.02 for Linux.

The common trend was that the DirectSearch (DS) code took about 8-10 times as long to run as the hybrid RootFinding (RF) code, and the final DS results had a larger maximal forward error (approx. 2e-7) than that of the RF results (approx. 5e-11). Naturally I am sure that one can obtain just as accurate results with DS, but that would likely make the performance slower still. In all cases the same 16 solutions were found.

I used the code portions below (and if either fails, just use the latest code in my comments above, or that from Markiyan's post). The output shown is from 64bit Maple 2015.1 on Windows.

restart:
f1 := -8100+(-30+70*cos(t1)-40*cos(t2))^2
      +(-70*sin(t1)+40*sin(t2))^2:
f2 := (-20-80*cos(t3))^2+(-15+70*cos(t1)+10*cos(t1+t))^2
      +(-70*sin(t1)-10*sin(t1+t)+80*sin(t3))^2-5625:
f3 := (-20-80*cos(t3))^2+(15+40*cos(t2)+10*cos(t1+t))^2
      +(-40*sin(t2)-10*sin(t1+t)+80*sin(t3))^2-5625:
f4 := 10*cos(t1+t)*(30-70*cos(t1)+40*cos(t2))
      -10*sin(t1+t)*(70*sin(t1)-40*sin(t2)):
origsys := {f1,f2,f3,f4}:
st,str := time(),time[real]():
sys := map(expand,origsys):
T := [indets(sys,function)[]]:
Tsub := [seq(x=freeze(x),x=T)]:
extra := {seq(sin(r)^2+cos(r)^2-1,r=[t,t1,t2,t3])}:
Digits := 20:
raw := thaw(RootFinding:-Isolate(eval(sys union extra,Tsub),eval(T,Tsub))):
gen := proc(cand)
  local c, v;
  c[t]:=select(has,cand,t); c[t1]:=select(has,cand,t1);
  c[t2]:=select(has,cand,t2); c[t3]:=select(has,cand,t3);
  [seq(seq(seq(seq([e,f,g,h],e=c[t3]),f=c[t2]),g=c[t1]),h=c[t])];
end proc:
Digits := 15:
sols := {seq(seq(
      `if`(evalf(max(map(abs,eval(origsys,eq))))<10^(-5),
           eq, NULL),
      eq=gen(map(eq->indets(eq,name)[]
                     =~Student:-Calculus1:-Roots(eq,indets(eq,name)[]
                                                 =0..2*Pi)[],raw[i]))
                ),i=1..nops(raw))}:
sols := evalf[15](sols):
time()-st,time[real]()-str;
                               29.047, 29.064
nops(sols);
                                     16
Digits := 50:
max(seq(max(abs~(evalf(eval([f1,f2,f3,f4],s)))),s=sols)): evalf[5](%);
Digits := 10:
                                5.7233*10^(-11)

And here is the other.

restart:

f1 := -8100+(-30+70*cos(t1)-40*cos(t2))^2
      +(-70*sin(t1)+40*sin(t2))^2:
f2 := (-20-80*cos(t3))^2+(-15+70*cos(t1)+10*cos(t1+t))^2
      +(-70*sin(t1)-10*sin(t1+t)+80*sin(t3))^2-5625:
f3 := (-20-80*cos(t3))^2+(15+40*cos(t2)+10*cos(t1+t))^2
      +(-40*sin(t2)-10*sin(t1+t)+80*sin(t3))^2-5625:
f4 := 10*cos(t1+t)*(30-70*cos(t1)+40*cos(t2))
      -10*sin(t1+t)*(70*sin(t1)-40*sin(t2)):

interface(rtablesize=20):

st,str := time(),time[real]():
rawDS := DirectSearch:-SolveEquations([f1,f2,f3,f4],
                         {t>=0,t<=2*Pi,t1>=0,t1<=2*Pi,
                          t2>=0,t2<=2*Pi,t3>=0,t3<=2*Pi},
                         AllSolutions,solutions=17,
                         number=250):
time()-st,time[real]()-str;
                        250.101, 241.201
solsDS := [seq(`if`(max(abs~(evalf[20](eval([f1,f2,f3,f4],rawDS[i,3]))))<10^(-5),
                    rawDS[i,3],NULL),i=1..op([1,1],rawDS))]:
nops(solsDS);
                                     16
Digits := 50:
max(seq(max(abs~(evalf(eval([f1,f2,f3,f4],s)))),s=solsDS)): evalf[5](%);
Digits := 10:
                                 1.8819*10^(-7)

Restricting to just [0..2*Pi] for each of t,t1,t2,t3 and pulling out multiple solutions from each individual trig fsolve,

restart:
f1 := -8100+(-30+70*cos(t1)-40*cos(t2))^2
      +(-70*sin(t1)+40*sin(t2))^2:
f2 := (-20-80*cos(t3))^2+(-15+70*cos(t1)+10*cos(t1+t))^2
      +(-70*sin(t1)-10*sin(t1+t)+80*sin(t3))^2-5625:
f3 := (-20-80*cos(t3))^2+(15+40*cos(t2)+10*cos(t1+t))^2
      +(-40*sin(t2)-10*sin(t1+t)+80*sin(t3))^2-5625:
f4 := 10*cos(t1+t)*(30-70*cos(t1)+40*cos(t2))
      -10*sin(t1+t)*(70*sin(t1)-40*sin(t2)):
origsys := {f1,f2,f3,f4}:
sys := map(expand,origsys):
T := [indets(sys,function)[]]:
Tsub := [seq(x=freeze(x),x=T)]:
extra := {seq(sin(r)^2+cos(r)^2-1,r=[t,t1,t2,t3])}:
Digits,oldDigits := 20*Digits,Digits:
raw := thaw(RootFinding:-Isolate(eval(sys union extra,Tsub),eval(T,Tsub))):
Digits := oldDigits:
gen := proc(cand)
  local c, v;
  c[t]:=select(has,cand,t);
  c[t1]:=select(has,cand,t1);
  c[t2]:=select(has,cand,t2);
  c[t3]:=select(has,cand,t3);
  [seq(seq(seq(seq([e,f,g,h],e=c[t3]),f=c[t2]),g=c[t1]),h=c[t])];
end proc:
sols := {seq(seq(
      `if`(evalf[2*Digits](max(map(abs,eval(origsys,eq))))<10^(-Digits+1),
           eq, NULL),
      eq=evalf[2*Digits](gen(
map(eq->select(u->is(rhs(u)<=2*Pi and rhs(u)>=0),
    [seq(lhs(eq)=rhs(eq)+k*Pi,k=-2..2)])[],
    (map(eq->indets(eq,name)[]
             =~Student:-Calculus1:-Roots(eq,indets(eq,name)[]
                                            =0..2*Pi)[],raw[i])))
))),
    i=1..nops(raw))}:

sols := evalf[15](sols):

nops(sols);
                                     16

max(seq(max(map(abs,evalf[2*Digits](eval(origsys,s)))), s in sols));

                               5.723270*10^(-11)

# Now rounding down to 10 places only so as to display more narrowly
seq(print(evalf[10](s)), s=sort(sols));

   [t3 = 1.070715258, t2 = 1.019516096, t1 = 2.095286009, t = 5.459630968]
   [t3 = 1.257840449, t2 = 1.050604655, t1 = 2.120744075, t = 2.310515606]
   [t3 = 2.721397680, t2 = 3.776592798, t1 = 2.139834459, t = 4.548991504]
  [t3 = 2.740714045, t2 = 0.6259476126, t1 = 1.796185808, t = 5.536826747]
   [t3 = 2.864015594, t2 = 3.646956314, t1 = 1.314690469, t = 1.571558536]
  [t3 = 2.884047829, t2 = 0.3612427293, t1 = 1.616589671, t = 2.427210452]
  [t3 = 2.896744038, t2 = 3.733377105, t1 = 0.8793268881, t = 4.843491309]
  [t3 = 2.973006347, t2 = 4.289550414, t1 = 0.6708585845, t = 1.994687498]
   [t3 = 3.310178960, t2 = 1.993634893, t1 = 5.612326723, t = 4.288497809]
   [t3 = 3.386441270, t2 = 2.549808202, t1 = 5.403858419, t = 1.439693998]
   [t3 = 3.399137478, t2 = 5.921942578, t1 = 4.666595636, t = 3.855974855]
   [t3 = 3.419169713, t2 = 2.636228993, t1 = 4.968494838, t = 4.711626771]
  [t3 = 3.542471262, t2 = 5.657237695, t1 = 4.486999499, t = 0.7463585603]
   [t3 = 3.561787627, t2 = 2.506592510, t1 = 4.143350849, t = 1.734193803]
   [t3 = 5.025344858, t2 = 5.232580652, t1 = 4.162441232, t = 3.972669701]
  [t3 = 5.212470049, t2 = 5.263669211, t1 = 4.187899299, t = 0.8235543392]

One benefit of such an approach is that the RootFinding:-Isolate step provides a certified set of solutions of the first stage, to obtain all real roots of the polynomial system with rational coefficients.

@Markiyan Hirnyk Of course I know what "periodic" means. And anyone who is not asleep can read that I deliberately used the range -2*Pi..2*Pi; I clearly wrote that range in the second sentence of my Answer! The OP did not state a domain. I now think that you're just stating something I felt was too obvious to need spelling out any more overtly.

BTW, using that wider range -2*Pi..2*Pi more than "doubles" (your word) the number of generated solutions found in [0..2*Pi]^4.

@Markiyan Hirnyk I don't understand your sentences, but that may be my failing rather than yours. How many distinct solutions do you think there are, with each of t,t1,t2, and t3 in the closed real interval [-2*Pi..2*Pi]?

@Preben Alsholm We used a similar approach (for an initial stage), I suspect, replacing the instances of sin and cos with names, to obtain a system of polynomials with rational coefficients.

But perhaps where you took the real solutions after applying evalf to allvalues([solve(eqs union eqs2)])  instead I did something like,

RootFinding:-Isolate(eqs union map(lhs-rhs,eqs2),[c1,c2,c3,c4,s1,s2,s3,s4])

to obtain all the real roots of that polynomial system.

@lt Did you remember to load the Statistics package, before calling Histogram? (Or you can call it as Statistics:-Histogram.)
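For instance, a minimal sketch (the sample data here is just an illustrative stand-in):

```maple
with(Statistics):                 # load the package, so Histogram is directly available
S := Sample(Normal(0, 1), 500):   # 500 random draws from a standard normal
Histogram(S);                     # or, without with(Statistics), call Statistics:-Histogram(S)
```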

Now Nasser Abbasi has updated his page and the numbers indicate a key difference for Matrix size > 2500. He posits that Mathematica 10.1 may be switching algorithms at that point.

As I mentioned above, there are rank-revealing algorithms other than computing all the singular values (and usually comparing magnitudes against the largest, or otherwise estimating how many are nonzero or sufficiently small). Most of them are based on the QR decomposition.

I notice in my Mathematica 10.1 installation that the help page for its MatrixRank command still describes its methodology as being equivalent to computing how many nonzero values are returned by its SingularValueList command. So it would seem to me that either Wolfram staff have found a faster way to compute singular values (which might be remarkable) or they are using some other rank revealing algorithm. If the latter is the case then I can imagine potential (and possibly rare) issues, as I'm not aware of another algorithm which is known to always produce the same results as might be obtained by singular values examination.  I don't see any new option to control the choice of method (without simply electing to use SingularValueList instead of MatrixRank), or a documented description of any possible change in behaviour.

A lot of very smart people have examined this topic, and there is quite a bit of published material on it, because knowing the effective matrix rank can potentially reduce computation time for several popular computational tasks. And there have been improvements to rank estimation based on QR. As I said, I'm not aware of any alternate method which is completely equivalent at both low and high rank estimation. Here is just one link of interest, not so very new but still interesting. But many other references can be found (using google, which naturally may use page rank computations with related math underneath!).

It should be possible to implement an alternate algorithm for Maple too. The easiest would be to use DGEQP3 from LAPACK, but that might just be a starting point. I would hope that any such implementation were not opaque, and that LinearAlgebra:-Rank would offer any such alternates by choice via a method option.
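As a rough sketch of the singular-values style of rank estimation under discussion (the Matrix and the tolerance here are ad hoc):

```maple
with(LinearAlgebra):
# a 3x3 example whose third column is the sum of the first two, so its rank is 2
M := Matrix([[1., 2., 3.], [4., 5., 9.], [7., 8., 15.]]):
Rank(M);
# count the singular values which are non-negligible relative to the largest
sv := convert(SingularValues(M), list):
nops(select(s -> s > 1e-10 * max(sv), sv));   # should agree with Rank(M)
```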

By coincidence, a few days ago I was reading an old 2009 article about finding John Francis the developer of the QR algorithm. (Frank Uhlig's article on p.19 here)

 

@tomleslie In Maple 13 there was no easy "empty plot". So to avoid an execution error for your code, the blank corner could be done as,

plot([[0,0]],axes=none)

rather than as your,

plot(axes=none)

Which version of Maple?

You can set up a PlotComponent from the Embedded Components palette, and then use SetProperty(...,refresh) to update it while your looping code is running.

There are even easier ways, in Maple 2015, which is why I ask about the version.
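A rough sketch of that scheme (the component name "Plot0" is hypothetical; use whatever name your embedded Plot component actually has):

```maple
with(DocumentTools):     # provides SetProperty
for k from 1 to 10 do
    # push a new plot into the component; 'refresh'=true forces the GUI
    # to redraw immediately, while the loop is still running
    SetProperty("Plot0", ':-value', plot(sin(k*x), x = 0 .. 2*Pi),
                ':-refresh' = true);
end do:
```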

acer

An ellipse can also look nicer than a rectangle, for longer labels.

restart:

quad2ellipse := proc(P::specfunc(anything,:-PLOT))
  subsindets(P,
             And(specfunc(':-POLYGONS'),
                 satisfies(p->nops(p)>0
                           and op(1,p)::[[numeric,numeric],[numeric,numeric],
                                         [numeric,numeric],[numeric,numeric]])),
             proc(p) local t,r;
               (t,r):=selectremove(type,[op(p)],list(list(numeric)));
               ':-POLYGONS'(seq(op(plottools:-ellipse([(op([1,1],pp)+op([3,1],pp))/2,
                                    (op([1,2],pp)+op([2,2],pp))/2],
                                   1.3*abs(op([1,1],pp)-op([3,1],pp))/2,
                                   1.2*abs(op([2,2],pp)-op([1,2],pp))/2)),
                                pp in t), op(r)); end proc);
end proc:

with(GraphTheory):

G := Graph({{"blah","FOO"},{"bleh","FOO"},{"FOO","blah"},
            {"FOO",4444},{4444,5555},{5555,"Z"},{"Z",4444}}):
HighlightVertex(G, Vertices(G), gold):
HighlightEdges(G, G, COLOR(RGB,0.2,0.2,0.7)):

P:=DrawGraph(G): P;
quad2ellipse(P);

This hack is only as good as the original rectangle at figuring out the width. And the original goes wrong for very long labels.

@Alejandro Jakubi The purpose of the comment in my code is to describe somewhat what the code is doing, as it might differ from what others do in such cases. It's not an instruction for the world, or a description of what I expect of other people.

I am not angry at all. Indeed I am amused by your long-standing tendency to see conspiracy and ulterior motive where none exists. Which leads me to ask, in the words of Jack Burton,

https://www.youtube.com/watch?v=WLSSh6G3Bx0 

@Alejandro Jakubi It may be OK if simplify (or another command) utilizes length while measuring the size of an expression. That depends on the structure of the expression. But I do not believe that length alone is a sensible measure of expression size for all purposes.

Suppose that one wishes to simplify a non-constant expression in order to reduce the time needed to evaluate it at a point. In that case it would not be sensible to rely on raw length, as the length of the names ought not to matter. Consider the following example, for which the cost of evaluation ought not (within reasonable bounds) to depend on the length of the names,

length(aaaaaa+bbbbbbbbbb);
                                      23

length(a+b);
                                       9

But `simplify/size/length` does a more sensible job here, returning the same value (9, if it matters) for both of those two expressions. And MmaTranslator:-Mma:-LeafCount also returns the same value (3, if it matters) for both those expressions.

It's not difficult to come up with other problematic examples too. For example, consider a multivariable expression which might factor in two different ways (according to the choice of variable). It doesn't seem very sensible or useful, for the purpose of evaluation cost at least, if one factorization is chosen by virtue of being the one with the fewest instances of the much longer name. More sensible and useful for the purpose of evaluation-cost reduction would be the choice which minimizes operations regardless of name lengths.

Also, the worksheet I posted made use of simplify(...,size) and so it makes sense to use the same metric to gauge the relative success of that operation. I often see people use raw length calls in order to gauge the success of their calls to simplify(...,size), and that is not as sensible as would be using the metric that simplify(...,size) itself uses. The person who posted this Question did that very thing, which is partly why I added the comment.

You often read too much into my comments, or misinterpret them.

Now, if the purpose is to simplify so that the expression takes less space on disk, or as .m format in a .mla archive, then it could well be a whole other ballgame. I wrote my response worksheet with an eye toward evaluation cost, and if that is not what the OP intended then I hope it'll be clarified in a followup.

By the way, another interesting metric can be had using the codegen[cost] command, although unfortunately it is difficult to make programmatic use of the result. What's appealing there is that its result includes the separate detail of the number of function costs. It might be nice to have some similar command which could return a list of the various counts, to which a custom weighting could be applied after the fact.

 

Without specifying the dimensions as arguments the calls Matrix([[]]) and Matrix([]) each produce a Matrix with 1 row and 0 columns.

The calls Matrix() , Matrix(0,0,[[]]) , Matrix(0,0,[]) and Matrix(0,0) each produce a Matrix with 0 rows and 0 columns.

You can build a Matrix with 0 rows and 1 column with the call Matrix(0,1) but I don't see how that could be produced by the Matrix command without specifying the dimensions explicitly in the call. (It can be done with the angle-bracket constructors, though.)
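Those cases can be checked directly with LinearAlgebra:-Dimension (a quick sketch, echoing the dimensions described above):

```maple
with(LinearAlgebra):          # for Dimension
Dimension(Matrix([[]]));      # 1, 0
Dimension(Matrix([]));        # 1, 0
Dimension(Matrix());          # 0, 0
Dimension(Matrix(0, 1));      # 0, 1
```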

I suspect that your intended meaning for your first sentence is wrong. It's hard to be sure since the grammar is poor.

Why don't you tell us what you're trying to accomplish?

acer
