acer

MaplePrimes Activity


These are replies submitted by acer

It's not true that "when the number of equations equals the number of unknowns there was a unique solution."

There are three possibilities when there are n linear equations in n unknowns.

The first possibility is that there are no solutions. The system of equations is usually called inconsistent in this situation. An example could be something like this,

x + y = 2;
2*x + 2*y = 11;

A second possibility is that there are infinitely many solutions. This is often called an underdetermined system. It can arise when one of the equations is a multiple of another (or a linear combination of several others). As a result, there is not enough information to pin the variables down to unique values. A simple example is this,

x + y = 2;
2*x + 2*y = 4;

The third possibility is that there is a unique solution.

The discipline of mathematics that formalizes all of the above, and provides ways to gain insight into it, is Linear Algebra. Representing the linear multivariate equations as a Matrix, and manipulating that object, gives neat ways to determine which of the three situations above holds for given data.
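For instance, one way to see which case applies (a sketch using the LinearAlgebra package, with the toy inconsistent system from above) is to compare the rank of the coefficient Matrix against the rank of the augmented Matrix,

with(LinearAlgebra):
A, b := GenerateMatrix( [x + y = 2, 2*x + 2*y = 11], [x, y] ):
Rank(A), Rank( < A | b > );   # 1, 2 here, so the system is inconsistent

If the two ranks agree and equal the number of unknowns then the solution is unique; if they agree but are smaller then there are infinitely many solutions; and if the rank of the augmented Matrix is larger then there are no solutions.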

acer

Some years ago, I was faced with a similar problem. And using resultant() is not always a possible solution, since the two surfaces or curves might not be represented as polynomials. In my situation, there were a pair of surfaces given as (blackbox, say) procedures of x- and y-parameters.

My solution, if I recall correctly, was to use implicitplot to produce a 2-dimensional PLOT structure (which represented how the desired 3-d spacecurve would look if projected onto the x-y plane). Then the data within it could be lifted up to the height of either surface (either one, since it is where they intersect) and placed into a PLOT3D structure.

Of course, creating the implicitplot result as that first step was possible because the pair of surfaces were explicit forms for height z.

Maybe someone else (without Maple 11 say) might be able to use that approach too. I should peek into plots[intersectplot] and see if it works the same way for similar cases.
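The lifting can also be done with plottools[transform], which (if I recall correctly) is willing to send 2-dimensional plot data to 3 dimensions. A rough sketch of the whole idea, using a made-up pair of explicit surfaces z=f(x,y) and z=g(x,y) rather than the original blackbox procedures,

f := (x,y) -> x^2 + y^2:
g := (x,y) -> 2 - x - y:
P2 := plots[implicitplot]( f(x,y) - g(x,y), x = -2..2, y = -2..2 ):   # projection of the intersection onto the x-y plane
P3 := plottools[transform]( (x,y) -> [x, y, f(x,y)] )( P2 ):          # lift each point up to the common height
plots[display]( P3, plot3d( {f(x,y), g(x,y)}, x = -2..2, y = -2..2 ) );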

acer

There are several possible variants on the following. (Which you get might well depend on the style of whoever answered.)

EQ1 := unapply(eq1, lambda);                                           # operator in lambda; alpha remains a free name in its body
plot( t -> fsolve( subs(alpha=t, eval(EQ1)), Pi/2..3*Pi/2 ), 1..2 );   # for each alpha=t in 1..2, find the root in lambda

acer

I believe that, in the original blog entry above, I mentioned that the help-pages claim that all built-ins allow for that "global namespace" extension mechanism. I'm not sure that that is still, strictly speaking, true (if it ever was). It's not obvious how all built-ins would work with an extension. And only the more obvious ones (obvious also in the sense that it's easier to guess how an extension would work) are documented by example.

As far as TypeTools goes, I did specifically mention it as the modern replacement for `type/XXX` global namespace extensions in the original blog article (and referenced two other posts by you on using that). Yes, the older mechanism would have to continue to work if (by definition) backwards compatibility were to be adhered to on this issue.
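For anyone following along, the two mechanisms look roughly like this (the type names are invented for illustration),

`type/evenposint` := x -> evalb( type(x, posint) and type(x, even) ):    # older global-namespace extension
type(6, evenposint), type(7, evenposint);                                # true, false

TypeTools:-AddType( 'even_posint', x -> evalb( type(x, posint) and type(x, even) ) ):    # the TypeTools route
type(8, 'even_posint');                                                  # true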

It's true, I did not mention a few newer packages (like Units, or VerifyTools), partly because they already handle extension management in more flexible or modern ways. I would rather see a Print package and a Verification package which exported their own extension-management routines than a whole slew of "Tools" packages. That term isn't even in vogue any more, and already looks dated. But I would also like to see the extension mechanism sit closer to the functionality it extends, and not in separate packages unless necessary, or unless there is already a top-level routine in use such as `latex` or `verify`.

The jumble of extension mechanisms in Maple is incredible. It doesn't help much to get stuck on why or how it became so. The thing is, what can best be done about it now? I see two possibilities that are workable.

The first is to have a really ultra-consistent mechanism that works across all areas (from typesetting to types to everything else) and which is not impossible to explain. The second is to have a set of documentation which explains in minute detail all the nuances of a highly variegated (and possibly inconsistent) set of extension mechanisms. Unfortunately, Maple currently has all the diversity and variety, but without the crystal-clear documentation. Either the consistency or the documentation has to be greatly improved in order for the extension system to be good.

acer

Isn't that Vector c, when using LPSolve with the "Matrix form" calling sequence, the coefficient Vector of the linear objective function?

If that were so, then the reported error message about the constraints would be misleading or mistaken.
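For reference, the Matrix form of the calling sequence that I mean is along these lines (with made-up data): the Vector c holds the objective coefficients, and the list [A, b] supplies the inequality constraints A.x <= b,

c := Vector([-1, -2]):                 # objective coefficients: minimize -x1 - 2*x2
A := Matrix([[1, 1], [3, 1]]):         # A.x <= b constraints
b := Vector([4, 6]):
Optimization:-LPSolve( c, [A, b], assume = nonnegative );

LPSolve then returns the minimum value together with the point that attains it.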

acer

Thanks for that, about mint. I don't know what I could have been thinking.

Yes, there are some gotchas when using code based on Maple 11's parameter-processing within a Maple 10 session.

acer

It looks like the result of your last transform() call is, in Maple 8, a PLOT3D structure rather than a PLOT structure. But the data within it looks like 2-dimensional plotting data.

So try this..

zzz := transform((x,y,z)->[x,y])(FG):       # plottools transform, projecting the data onto the x-y plane
type(zzz, specfunc(anything, PLOT3D));      # confirm it is still wrapped as a PLOT3D structure
PLOT(op(zzz));                              # rewrap the same (2-dimensional) data as a PLOT structure

acer

Since freeof is not protected, and may have been assigned a value by the user or may be used alongside a local of the same name from within a procedure, that call would be better as,

select(type,expr,':-freeof'(vars)):

It bothers me a little that help-pages like ?type,freeof don't show it being used that way. That's also a general complaint I have about many help-pages -- the lack of uneval quotes in examples where they may be necessary in a general context. I also wonder why mint doesn't complain about this example.
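Here is a small illustration of the pitfall (the local name and the toy expression are invented for the example),

p := proc(expr, vars)
  local freeof;                              # an unrelated local that happens to reuse the name
  freeof := 13;
  select( type, expr, ':-freeof'(vars) );    # the quotes and :- keep this the global structured type
end proc:

p( a*x + b*y + c, {x} );                     # returns b*y + c

Without the uneval quotes and the :- prefix, the third argument to select would evaluate using the local, and the call would not do what was intended.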

 

acer

I've never been able to decide whether I like codegen[JACOBIAN]. On the one hand, it often produces something which is easier to evalhf (or to use Compiler:-Compile on, after perhaps transcribing and editing). But on the other hand, it's awkward if it fails on some of the procs, or if the input/output of data doesn't match. And I don't like how it can create workspaces internally, as I like to be able to cut out that sort of collectible garbage production.

So these days I tend to use `D`, and the newer `fdiff` routine, and the nice way that evalf(D(...)) can call fdiff.
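For concreteness, with a simple stand-in operator (invented here just to show the two routes),

p := (x, y) -> sin(x*y) + x^2:
fdiff( p, [1], [1.2, 0.7] );          # numerical partial derivative w.r.t. the 1st argument, at (1.2, 0.7)
evalf( D[1](p)(1.2, 0.7) );           # essentially the same value; for a blackbox proc that D cannot
                                      # differentiate, this is where evalf hands off to fdiff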

Sometimes I use those together, evalf and D, but in two steps so that D can actually produce an explicit result for the easier procedures. Something like,

J:=unapply(Matrix(nops(funlist),numvars,(i,j)->D[j](funlist[i])(a,b)),[a,b]);

followed shortly afterwards by something like,

evalf(J(seq(X[count-1][jj],jj=1..numvars)))

So the above has the nice aspect that D can actually produce a new proc whose body is the explicit derivative of the original proc. (I mention this for others here, not for you of course, Robert.) And then whichever entries of J come back as unevaluated `D` calls will get hit by evalf, and the evalf(D(..)) calls will become fdiff calls. It's almost the best of both worlds. The only thing that I don't like about it is that it creates quite a few new Matrices each time the Jacobian is updated with a new point X, which can be inefficient even if those Matrices are collectible garbage.

So sometimes I do it instead like this,

funlist:=[f,g]:
numvars:=2: # or use nops([op(1,eval(f))])
dummyseq:=seq(dummy[i],i=1..numvars);
J:=Matrix(nops(funlist),numvars,
  (i,j)->unapply(fdiff(funlist[i],[j],[dummyseq]),[dummyseq]));
thisJ := Matrix(nops(funlist),numvars,datatype=complex(float)):

and then on each iteration I have it assign,

thisJ[i,j] := J[i,j](currXseq);

in a double-loop where,

currXseq:=seq(X[count-1][jj],jj=1..numvars);

I have a somewhat decent routine for Newton's method for procs that works this way. Maybe I can dust it off and post it as a blog item.
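In the meantime, the update step in such a routine might look roughly like this (just a sketch using the names above, not that actual routine),

Fval := Vector( nops(funlist), i -> evalf( funlist[i](currXseq) ), datatype=complex(float) ):
dX := LinearAlgebra:-LinearSolve( thisJ, Fval ):                      # solve  J . dX = F  at the current point
X[count] := Vector( [currXseq], datatype=complex(float) ) - dX:       # Newton step: X[count] = X[count-1] - dX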

acer
