Robert's suggestion to simply remove the square [] list brackets from equ1 and equ2 is better than my suggestion to do equ11:=w_1=op(rhs(...)).
But you might still want to use the ideas about reducing the number of equations. Multivariate Newton's method has an easier time of it, and a better chance of converging, on smaller systems with fewer variables and equations. That might make it faster and more likely to get a result, if you decide to go with a variety of values for delta, p, and T, say.
acer
The first problem is that some of the "equations" you were passing in the first set argument to fsolve were of the form,
name = list
It looks a lot like what you wanted instead was the inside of that right-hand-side list (rhs). So, you could get it like so,
equ11 := w_1 = op(rhs(eval(equ1, [p = .3, T = .1, delta = .9])))
and,
equ22 := w_2 = op(rhs(eval(equ2, [p = .3, T = .1, delta = .9])))
Now, moving on. You can re-substitute quite a few of the later equations back into the earlier ones, to get down to a 2- or 3-variable system that fsolve handles OK.
Here's what I added, at the end of your document,
mw1w2 := isolate(equ44, m);
A1 := eval(eval(equ11, [equ66, equ77, equ55]), mw1w2);
A2 := eval(eval(equ22, [equ66, equ77, equ55]), mw1w2);
A3 := eval(eval(equ33, equ55), mw1w2);
Note that now, I could isolate w_1 from equation A1, and re-substitute that into A2 and A3 to get a 2-variable system of 2 equations. I didn't bother.
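For what it's worth, that further reduction might be sketched like so (a hypothetical sketch, assuming isolate succeeds in solving A1 for the indexed name w[1] as used in the fsolve call, and that the smaller system still converges -- I haven't verified this):

```maple
# Hypothetical sketch: eliminate w[1] to get down to 2 variables
w1sol := isolate(A1, w[1]);
B2 := eval(A2, w1sol):
B3 := eval(A3, w1sol):
fsolve({B2, B3}, {l, w[2]}, w[2] = -10000 .. 10, l = 0.1e-3 .. 1.999);
```

The value of w[1] itself could then be recovered by evaluating w1sol at that result.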
solution := fsolve({A1, A2, A3}, {l, w[1], w[2]}, w[1] = -10000 .. 10, w[2] = -10000 .. 10, l = 0.1e-3 .. 1.999);
That gave the result,
{l = 0.4787635963e-2, w[1] = -1.900353594, w[2] = -1.901837431}
The equations equ66 and equ77 give q and o directly.
eval(mw1w2,solution); # gives value for m
eval(eval(equ55, mw1w2), solution); # gives j
I can re-substitute that solution into the equations, and check that doing so produces a very small float. E.g.,
eval(A2,solution); # gives close to 0.0, same for A1,A3
acer
Since the evalf[..](...) action is only an approximate floating-point computation, the results would likely never be identically equal.
If you increase the working precision during the floating-point evaluation in Idea 3 (or 2) then you might see those scalar floating-point numbers get closer to each other. If they don't get closer, as the precision is increased, then they are likely not equal.
The precision in the incantations that I gave is specified by the number in the indexing [..] brackets of the evalf call. So, evalf[50](...) uses a working precision of 50 decimal base-ten digits, while evalf[100](...) would use a working precision of 100 digits. Calling evalf in this indexed way is an alternative to setting the Maple environment variable Digits. If, as you raise the precision, the results for sol1 and either sol2 or sol3 keep getting closer, then it may be that the obtained formula, which you are instantiating at specific a, b, and c values, does actually work.
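As a sketch of that precision escalation for Idea 3 (using the same sample values for a, b, and c as before; untested):

```maple
# Watch whether the discrepancy shrinks as the working precision rises
vals := [a = 0.05, b = 0.03, c = 0.07]:
for d in [20, 50, 100] do
    print(d, evalf[d](eval(sol1, vals))
             - LinearAlgebra:-Determinant(evalf[d](eval(m, vals))));
end do;
```

If the printed differences keep shrinking in magnitude as d grows, that supports sol1 being correct.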
During the (Gaussian) elimination steps that happen while Maple computes the so-called LU decomposition of a Matrix, a "pivot" is found for each row reduction. The pivot is chosen as a leading non-zero entry in the current column. Sometimes a complicated entry might appear as a huge, involved formula while in truth it is identically zero (upon simplification, or conversion to so-called "normal" form). The candidates for pivot selection are hit with the Normalizer function, in an attempt to recognize elements that are invalid as pivots on account of being actually zero. One can override Normalizer. The function x->x is a "no operation" action which, while fast, won't do any simplification and is thus a poor zero-recognizer. The danger, then, is that a hidden zero might get used as a pivot. The default Normalizer, when not overridden, may have to work very hard, which is why there is such a performance difference.
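A toy illustration of the hazard (my own made-up expression; anything which is zero only after normalization would do):

```maple
expr := (x^2 - 1)/(x - 1) - x - 1;  # identically zero, but not visibly so
normal(expr);                       # the default Normalizer reduces it to 0
(x -> x)(expr);                     # the "no-op" Normalizer leaves it as is,
                                    # so it could be accepted as a nonzero pivot
```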
I ought to read Jacques' paper(s) on these subjects. In one (other?) thread he talked about doing the Gaussian elimination step by step, and taking account of the pivots one by one. I'm wondering now whether fraction-free elimination would be a more useful approach. After all, determinant calculation doesn't necessitate divisions...
Now, back to your problem. If you evaluate the original Matrix at given numerical points for a and b, then the determinant will be a univariate expression in c, yes? You could solve or fsolve that expression=0 for c. So, if it is fast enough to compute the determinant of that univariate Matrix (after instantiating a and b), then you might be able to repeat this many times. If you were lucky, you might be able to repeat it enough times to get a mesh of values for c, based on all the different particular a and b pairs that you used. You could then plot those. You might even be able to get a smooth interpolating approximation function, if you had enough numeric data points.
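A rough sketch of that scheme, assuming the original symbolic Matrix is m in the variables a, b, and c, and that fsolve finds a real root of the instantiated determinant each time (untested; the sampling ranges are just for illustration):

```maple
pts := NULL:
r := rand(1 .. 100):
for i to 30 do
    av := evalf(r()/100):  bv := evalf(r()/100):
    dc := LinearAlgebra:-Determinant(eval(m, [a = av, b = bv])):
    cv := fsolve(dc = 0, c);
    if cv :: numeric then pts := pts, [av, bv, cv]; end if;
end do:
plots:-pointplot3d([pts], axes = box);  # c plotted against the (a, b) mesh
```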
acer
Thanks, Paulina.
We figured out the method of converting to Atomic Identifier, for non-1D-parsable 2D objects. But ?plots,typesetting also mentions using left-name-quotes to do this (in the same paragraph where it mentions atomic identifiers for such problematic 2D objects). I couldn't get the left-quotes to do it. Could you give an example?
acer
From the Maple program's top menubar, choose,
View -> Palettes -> Arrange Palettes
That should pop up a new window, which lets you configure which palettes will show in the side pane.
Remember, after you create the 2D object, toggle it as Atomic Identifier before you Convert it to 1D notation.
acer
I also tried to enter the 2D Math expression directly into the session, surround it with left-name-quotes, and use that directly within the textplot() call, much as the help-page suggests.
At first I used 2D Math left-quotes, i.e., inside the object. That ended up producing something with Typesetting errors embedded within it, which is weird.
When trying to use the left-quotes, I also produced a plot whose structure was,
PLOT(TEXT([3., 4.], _TYPESET()))
despite there being some nice 2D typeset input in the actual call.
Then I tried with 1D notation left-quotes, just outside but around the 2D Math input object. That is, the quotes were red instead of black. That resulted in a textplot which contained, literally,
`LinearAlgebra:-HermitianTranspose(p[gt]);`
The thing inside the textplot call was all nicely typeset, though. This was the weirdest.
I think that precise instructions would help the user most here. Exact sequences of keystrokes, spelled out in the help-pages, seem almost necessary to get this right.
Also (and this may not be true of this situation; I don't know yet, about the left-quotes), anything which can only be done with the mouse is not ideally implemented.
acer
I also tried to cut 'n paste the 2D nicely typeset input object, surrounded by name-quotes (ie. left-single quotes, not uneval right-single quotes), for use in the typeset() call within textplot(). That did not work for me. The help-page ?plot,typesetting indicated that those name quotes should have worked, though, if I understood it correctly.
acer
The original poster asked about the determinant, given that the entries were multivariate expressions involving floats. It wasn't my suggestion to compute a determinant for that, but rather it was the original poster's.
Now, it may well be that the poster wanted to know because the underlying motivation was a wish to gain insight into whether the system was full rank (in some sense), or some such similar thing. In that case, yes, of course it would be much more appropriate to compute singular values than it would be to try to compute the determinant, *if* the entries were purely numeric floats. But the entries are stated as *not* being purely numeric -- there are multiple symbols present. For a multivariable system, trying to compute the singular values is even harder than trying to compute the determinant. So that's not a very good suggestion, purely in itself.
Maybe the original poster can tell us why she wants to compute this determinant. Perhaps there's another way to get at her end goal.
acer
If you are feeling brave, you might try the following:
Suppose that your Matrix has been assigned to m.
M := map(convert,m,rational):
Normalizer := x->x; # the brave bit
(p,u):=LinearAlgebra:-LUDecomposition(M,'output'=['P','U']):
sol1 := LinearAlgebra:-Determinant(p) * mul(u[i,i],i=1..13):
What I've done above is similar to Gaussian elimination. The determinant is then the product of the diagonal entries of u (the result of that process), further corrected by the sign (+/-1) obtained from the determinant of the permutation Matrix p.
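As a sanity check of the P-and-U trick itself, here it is on a small random integer Matrix (fresh names, so as not to clobber p and u above), where Determinant is cheap enough to compare against directly:

```maple
with(LinearAlgebra):
B := RandomMatrix(5, 5):
(pp, uu) := LUDecomposition(B, 'output' = ['P', 'U']):
# the difference should be exactly 0 for this exact (rational) computation
Determinant(pp) * mul(uu[i, i], i = 1 .. 5) - Determinant(B);
```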
You can play around with it a bit, to try to check correctness. Some possibilities for that are below.
Idea 1
------
Take M := M[1..7, 1..7], say, and repeat the above calculation. You may be able to compute the determinant of that smaller M using LinearAlgebra:-Determinant. If you're lucky, that won't be a result of zero.
sol2 := LinearAlgebra:-Determinant(M):
radnormal(sol1-sol2); # trying to test sol1=sol2
If sol1 and sol2 agree, that might give you confidence in the sol1 obtained for the full sized M.
Idea 2
------
Alternatively, you could try to instantiate sol1 and sol2, computed for the smaller sized M, at specific (but randomly chosen by you) floating-point values for a, b, and c. I.e.,
evalf[50]( eval(sol1,[a=0.05,b=0.03,c=0.07]) );
evalf[50]( eval(sol2,[a=0.05,b=0.03,c=0.07]) );
Idea 3
------
Or, you could take the sol1 computed for the full sized M, instantiate it at some values as in Idea 2, and compare with this:
LinearAlgebra:-Determinant(evalf[50]( eval(m,[a=0.05,b=0.03,c=0.07]) ));
That is the determinant of the Matrix obtained by first instantiating Matrix m with those same values. In other words, you are here comparing these two things,
A) the formula sol1 for the determinant of m, then instantiated at certain values.
B) the Matrix m, instantiated at those same values, which you then take the determinant of.
Again, if those two things agree, for a variety of triples of floating-point values for a,b, and c, then you may gain some confidence in the answer sol1.
I think that I'd prefer Idea 3, over Ideas 1 and 2 which are a bit shaky as they use only an upper-left portion of Matrix m.
The danger of setting Normalizer to x->x is that a pivot might be selected during the (Gaussian elimination) LU computation that is actually zero even though it doesn't immediately appear to be zero. If that were to happen, then the result would be invalid, and hidden divisions by zero would be contained in the results. Another potential problem is that sol1 might be a very much larger expression than its most simplified form, when computed with Normalizer as x->x.
It's not so uncommon for multivariate problems with floating-point coefficients to present Maple with difficulties. Converting those floats to rationals is one way to try to handle such problems.
acer
That article talks about the possible need for security when running software. It's discussing older software, without the most recent security patches. But that just makes me think about a piece of software's current security state of affairs.
I've recently considered setting up a separate unix/linux account just for the purpose of running Maple. This should add security, as Maple sessions run under that new account shouldn't be able to affect my own true user account and its files. I worry most about .mw worksheets that I pick up off the web. What do you think -- unnecessary overkill?
acer