If you use t as the dummy variable of integration alongside indexed names like t[0] or t[o], then t isn't really free. This is because a name like t can refer to a table in Maple, and t[0] references the entry of t at index 0. Hence mixing the plain name t with indexed references t[...] is a muddle best avoided, with semantic baggage that confuses Maple.
acer

On reflection, it may not be at all obvious to the new user how to store the data and the final iterates, so that they can be easily plotted.
restart:
N := proc(f)
local Xnew, X0, incX, A, B, k, df, count, Lreal, Lcplx, R;
R := Array(1..101*101,1..3);
df := diff(f, x);
# These two outermost loops are used to create the X0 initial points.
count := 0;
for A from 0 by 0.02 to 2 do
for B from -1 by 0.02 to 1 do
X0 := evalf(A + B*I);
# Now do ten Newton iterations, using X0
Xnew := X0;
for k to 10 do
incX := evalf(eval(f/df, x = Xnew));
Xnew := Xnew - incX;
end do;
count := count + 1;
R[count,1],R[count,2],R[count,3]:=evalf(A),evalf(B),Xnew;
end do;
end do;
# Return the results in lists suitable for pointplot3d.
Lreal := [seq([R[k,1],R[k,2],Re(R[k,3])],k=1..101*101)];
Lcplx := [seq([R[k,1],R[k,2],Im(R[k,3])],k=1..101*101)];
return Lreal,Lcplx;
end proc:
Lreal,Lcplx := N(sin(x)):
plots[pointplot3d](Lreal,axes=boxed);
plots[pointplot3d](Lcplx,axes=boxed);
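For comparison, here is a rough Python/NumPy translation of the same scheme (my own sketch, not part of the original Maple answer; the name newton_grid is made up): ten Newton steps for f(x) = sin(x), started from every point X0 = A + B*I on the same 101x101 grid, keeping the final iterate for each start point.

```python
# A rough Python/NumPy sketch of the Maple procedure above (newton_grid is
# a made-up name): run Newton's method n_iter times, vectorized over the
# whole grid of complex starting points at once.
import numpy as np

def newton_grid(f, df, n_iter=10):
    A = np.arange(0.0, 2.0 + 1e-9, 0.02)       # 101 real parts
    B = np.arange(-1.0, 1.0 + 1e-9, 0.02)      # 101 imaginary parts
    AA, BB = np.meshgrid(A, B, indexing="ij")
    X = AA + 1j * BB                           # grid of initial points X0 = A + B*i
    for _ in range(n_iter):
        X = X - f(X) / df(X)                   # one Newton step on the whole grid
    return AA, BB, X

AA, BB, X = newton_grid(np.sin, np.cos)
```

Plotting Re(X) and Im(X) against the (A, B) grid then corresponds to the Lreal and Lcplx point lists returned by the Maple procedure.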
acer

So, you had to use Newton's method to get from X0 to X1, X2, etc.? If so, then what was the function? You must know it, since you already solved question a). Someone else here asked for this as well. No doubt it was laid out earlier, perhaps in Part I or Part II.
Question b) seems to ask that you repeat the task from a) for each X0=A+B*I, but a double loop doing exactly that was already illustrated here. You might try studying it some more.
acer

So, you want to save all the final (10th) iterates, for each A and B?
If that's true, then simply create an Array, outside the double loops. (Is it 101x101 in size?) Then after each k-loop finishes, save the final Xnew to the A,B coordinate of the Array. Then return that Array at the end of the procedure.
This Array would then contain the 10th and final Newton iterates, for all the X0=A+B*I initial points.
You could then do some sort of complex plot. Perhaps the idea was to show you how the various subregions of the A+B*I complex domain get mapped to final iterates.
ps. For anyone who was shocked that the derivative of f is computed each time through the triple loop, sorry! Of course, it would be so much more efficient to get diff(f,x) just once, outside all the loops. For shame!
acer

I'm guessing that the idea is to start Newton's method from each of the points X0=A+B*I in the complex plane. That way, the chances are greater of finding a starting point that converges to a root. And perhaps several roots may be found.
N := proc(f)
local Xnew, X0, incX, A, B, k, results;
results := {};
# These two outermost loops are used to create the X0 initial points.
for A from 0 by 0.02 to 2 do
for B from -1 by 0.02 to 1 do
X0 := evalf(A + B*I);
# Now do ten Newton iterations, using X0
for k to 10 do
incX := evalf(eval(f, x = X0))/evalf(eval(diff(f, x), x = X0));
Xnew := X0 - incX;
X0 := Xnew;
end do;
# If it's "good enough", and not found already,
# then put the final iterate in the set of results.
if abs(incX) < 1e-7 and not member(evalf[7](Xnew), results) then
results := results union {evalf[7](Xnew)};
# Print which initial point converged to which new result.
printf("initial point %Ze converged to %Ze\n", evalf(A + B*I), Xnew);
end if;
end do;
end do;
# Return the results
return results;
end proc:
# try it out
N(sin(x));
Look, this may not be what you were assigned to do. It was unclear. If it's close, then study it, then when you understand it you should be able to alter it appropriately.
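If it helps to see the logic outside Maple, here is a rough Python translation of the same root-collecting scheme (my own sketch; newton_roots is a made-up name): ten Newton steps from each grid point, recording any final iterate whose last increment was tiny, rounded to 7 digits to weed out duplicates, just as the Maple procedure uses evalf[7].

```python
# A rough Python sketch of the Maple procedure above (newton_roots is a
# made-up name): collect distinct converged final iterates over the grid
# of complex starting points X0 = A + B*i.
import cmath

def newton_roots(f, df, n_iter=10, tol=1e-7):
    roots = set()
    for i in range(101):                      # A = 0.00, 0.02, ..., 2.00
        for j in range(101):                  # B = -1.00, ..., 1.00
            x = complex(0.02 * i, -1.0 + 0.02 * j)
            for _ in range(n_iter):
                inc = f(x) / df(x)
                x = x - inc                   # Newton step
            # keep it only if "good enough", rounded to weed out duplicates
            if abs(inc) < tol:
                roots.add((round(x.real, 7), round(x.imag, 7)))
    return roots

roots = newton_roots(cmath.sin, cmath.cos)
```

For f = sin, the recovered roots are real multiples of Pi, just as the Maple version prints.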
acer

The difference between the exact and floating-point eigenvector results, for say your 5x5 example, is not a condition number issue.
The reason for the differences you see in the exact vs floating-point results is simply that, for any given eigenvalue, the eigenvectors live in a linear subspace. Multiples of single eigenvectors, or linear combinations of eigenvectors in that relevant subspace (eigenspace) also happen to be valid eigenvectors for that given eigenvalue.
Take your 5x5 exact example. The exact eigenvalues are 3/4, 0, 23/20, 1, 1. The eigenvalues 3/4, 0, and 23/20 each have a single eigenvector associated with them, in the corresponding column of the result Matrix e. You can scale each of those, and the result will still be an eigenvector for the same eigenvalue. The proof is easy, and falls right out of the linearity of the definition,
M * x = lambda * x
Similarly for the double eigenvalue 1. It has two linearly independent eigenvectors associated with it (not always true for a repeated eigenvalue, but true in this example). Any linear combination of such basis eigenvectors will produce another eigenvector in the same eigenspace associated with that same eigenvalue (i.e., 1). This too falls right out of the linearity of the definition above.
I don't think that numerical conditioning has much, if anything, to do with your 5x5 example when computed in floating-point.
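Here is a quick numeric check of that point in Python/NumPy (my own sketch, using a small made-up 3x3 matrix rather than your 5x5 L): if M.x = lambda*x then any scalar multiple c*x is also an eigenvector for the same lambda, and for a repeated eigenvalue any combination of its basis eigenvectors works too.

```python
# Numeric check that scaling and in-eigenspace combinations preserve
# the eigenvector property. The 3x3 matrix is a made-up example.
import numpy as np

M = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])     # eigenvalue 3 has multiplicity two

vals, vecs = np.linalg.eig(M)

# Scaling a single eigenvector keeps it an eigenvector for the same eigenvalue.
x, lam = vecs[:, 0], vals[0]
assert np.allclose(M @ (5.0 * x), lam * (5.0 * x))

# Any linear combination within the eigenspace of the double eigenvalue 3
# is again an eigenvector for 3.
idx = [i for i in range(3) if np.isclose(vals[i], 3.0)]
y = 0.3 * vecs[:, idx[0]] - 1.7 * vecs[:, idx[1]]
assert np.allclose(M @ y, 3.0 * y)
```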
I use Linux, and the particular scaling or linear combinations done on Windows might differ. But here is a sequence of elementary column operations with which I was able to transform the eigenvectors of evalf(L) into the floating-point equivalent of the results of the exact Matrix L.
Lf := evalf(L):
vf, ef := Eigenvectors(Lf):
Digits:=15: QQ:=ColumnOperation(simplify(ef),1,1/0.4472135955):
ColumnOperation(QQ,3,1/0.5920027785,inplace):
ColumnOperation(QQ,4,-1/(.7071067812),inplace):
ColumnOperation(QQ,2,-1/0.8164965809,inplace):
ColumnOperation(QQ,[5,2],0.03887187786,inplace):
ColumnOperation(QQ,5,(0.5/0.706304985706392),inplace):
ColumnOperation(QQ,[2,5],-1,inplace):
ColumnOperation(QQ,5,1/0.5,inplace):
ColumnOperation(QQ,[5,2],1,inplace):Digits:=10:map(fnormal,QQ);
Note that I only had to scale the 1st, 3rd, and 4th columns, relating to the eigenvalues of multiplicity one, to get agreement. To handle the eigenvalue 1, which has multiplicity two, I took linear combinations of just the 2nd and 5th columns.
You may also enjoy playing with crude forward error estimates based on the definition, i.e.,
Norm( Lf . ef - ef . DiagonalMatrix(vf) );
You may also enjoy playing with the EigenConditionNumbers routine. These might help, if you doubt the floating-point calculations for your 50x50 case.
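For what it's worth, that same crude residual check is easy to sketch in Python/NumPy (my own example, using a random 5x5 matrix rather than your L): the residual A.V - V.diag(w) should be tiny when (w, V) are good floating-point eigenpairs of A.

```python
# Forward error estimate for a floating-point eigen-decomposition: the
# norm of A.V - V.diag(w). The random 5x5 matrix is a made-up example.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
w, V = np.linalg.eig(A)

residual = np.linalg.norm(A @ V - V @ np.diag(w))   # should be near machine epsilon
```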
acer

An article about Dell's offering WinXP for new purchases is here.
acer

After I run your worksheet, the data in L is exact numeric, not floating-point.
If floating-point results are acceptable to you, then just do something like Eigenvectors(evalf(L)) .
If you really need exact results, then consider that the characteristic polynomial of L is of degree 50. This polynomial, whose roots are the eigenvalues of L, does not factor quickly, if at all. So an attempt on exact L may take a very long time. Suppose that the characteristic polynomial doesn't factor; would you then be satisfied with a nullspace (basis of an eigenspace) rife with unindexed implicit RootOfs? What could one do usefully with such a beast (other than evalf it, in which case evalf(L) could have been used instead)?
acer

Is your data symbolic or exact rational, or numeric and floating-point, or...? It can make a difference with regard to what sort of suggestions might be useful. You might also consider posting or uploading some example.
acer

y' is not specific to Documents; it is specific to 2D Math Input (aka 2Dinput).
In a Worksheet, hitting the F5 key is one way to toggle between 1D and 2D input modes. I believe that Ctrl-R is another way to switch to 2D input mode. There's also a drop-down menu on the menubar for selecting an input mode. This is all in a Worksheet.
When toggled to 2D mode, even in a Worksheet, the cursor becomes slanted.
After entering y'+y=sin(y) and hitting Enter, I get the DE containing dy/dx, and so on.
acer

There are many ways in which you could automate it. Perhaps this will do.
m := Matrix(2,6,[[1,2,3,4,10,11],[21,22,23,24,30,31]]):
v := Vector(6-2,(i)->x^(i-1)):
mv := m[1..-1,1..-3].v:
p := 'piecewise'(X>mslast and X

I believe that `read` uses the current working directory, which within a Maple session may be queried or set with the currentdir() command. You can check that it is set to the location of your Maple source file.
As far as the error message about the reserved word `read` goes, one possibility is that you tried it in 2D-input mode. Try entering it as 1D Maple input.
Another possibility is that you typed the first `read` example literally as you showed it here, without a colon or semicolon, and then tried it again with the full explicit location. In that case the error message means that the second `read` was unexpected, since Maple tries to process both `read` calls as one statement.
The behaviour of filling in "missing" colons or semicolons is rather new, and specific to the Standard GUI. The command-line interface doesn't fill them in for you (since it cannot know when you don't want them).
acer

You could use the `read` command, in any of the Maple interfaces, to get the BesselJ.mpl source file into a new session.
Or you could use march() to create a new personal .mla Maple library archive, and then store the routine within it using savelib(). You could also look at the ?march, ?savelib, and ?LibraryTools help-pages. This is a nice way to go if you are going to write a lot of procedures that you want available in many distinct Maple sessions.
You may also be able to add an option to the Standard GUI icon on your desktop, so that it adds some location of your new .mla libraries to libname upon startup. (For Unix, that is the -b maple option.) Or you could amend libname within your maple.ini file.
acer

I use Linux, for which the default initialization file is ~/.mapleinit, but I believe that it is mostly treated the same way as maple.ini on Windows.
I have seen variants on these first two, to coerce Maple into using a .mapleinit file local to a particular working directory.
try proc() read(`./.mapleinit`) end() catch: end try;
try read "./.mapleinit" catch: end try;
Sometimes I use this line,
interface(rtablesize=50):
When I'm doing a lot of module development and debugging I sometimes insert this line,
kernelopts(opaquemodules=false):
Way back before LinearAlgebra, I recall seeing this,
eye := n -> array(1..n,1..n,identity):
acer

What Jacques says is mostly true. One class of development for which I'd disagree is the so-called application worksheet or document. By that I mean a document that contains not only Maple code and procedures but also presentation graphics. With embedded components, it is possible to construct a document which actively illustrates some ideas in a scientific or educational field. But the more sophisticated of these do have programs buried within them, usually as collapsed blocks. It's convenient to be able to pass these documents on to other people without having to pass along a Maple .mla Library file as well. And sometimes developing the programmatic parts of such documents is rather easily done within the document itself.
Having said that, I'll add another reason why the command-line (tty) interface for Maple is great for developing large programming projects. It's the availability of the "include" directives (see ?include). Being able to store the source for each module export in a separate file, each one of which is $include'd in the parent module source, is nice.
George, you could try this sort of thing, in the command-line interface.
interface(verboseproc=2);
writeto("BesselJ.mpl"):
eval(BesselJ);
writeto(terminal):
Following that, you would just need a few edits at the front and back of the code. Done that way, you shouldn't need to add any colon or semicolon except at the very end, after the `end proc`.
acer