MaplePrimes Activity


These are replies submitted by acer

Maple can sort objects (in sets, or sums) by memory address, or by the order in which they first appear in the session. Consider,

restart:
seq( addressof(t), t in op(expand((x+y+z)^6)) );

restart:
60*x*y^2*z^3: 30*x*y*z^4: 6*y^5*z:
expand((x+y+z)^6);   # notice now which terms appear first

acer
What was the method that you used? It might be possible to speed up a high precision "software" floating-point Matrix calculation by increasing garbage collection frequency. See ?gc . acer
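A minimal sketch of that kind of experiment (this assumes a version in which the kernelopts(gcfreq) setting is available; the size, precision, and frequency below are purely illustrative):

kernelopts(gcfreq = 500000):   # collect garbage more often than the default
Digits := 100:                 # forces "software" floating-point arithmetic
M := LinearAlgebra:-RandomMatrix(100, 100, generator = 0.0 .. 1.0,
                                 outputoptions = [datatype = sfloat]):
time( M . M );                 # compare this timing against the default gcfreq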
For the original poster's benefit: Values which approximate exact zero, but which have small nonzero components due to floating-point evaluation under fixed precision, can also be handled by judicious use of Maple's `fnormal` routine. For example,

`+`( seq(`if`(Im(x)=0.0, signum(Re(x)), NULL), x in map(fnormal,evals)) );

Additional optional arguments to fnormal() allow one to fine-tune what is taken to be "close enough" to zero. acer
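A short self-contained illustration (the little symmetric Matrix is made up here; `evals` would come from the original poster's own computation):

evals := LinearAlgebra:-Eigenvalues( Matrix([[0.,1.],[1.,0.]]) );
map(fnormal, evals);   # any tiny imaginary artefacts get mapped to 0.
`+`( seq(`if`(Im(x)=0.0, signum(Re(x)), NULL), x in map(fnormal,evals)) );   # sum of the signs of the (near-)real eigenvalues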
That's very nice. It makes me wonder about usability and nice defaults for students. The default value of the discont option to plot() could be reconsidered. An option to show asymptotes easily, in Maple's own plotting routines, would be nice. acer
That does sound more likely. I'm not so good a guesser. I wonder whether in future it might be possible to get jump discontinuities shown by dashed lines, as appear in so many texts. Is there already an easy way to get that, does anyone know? It might look nice if jumps from a curve to a finite point, or vertical asymptotes, could be shown as dashed lines through some nice option such as 'discont'='dashed'. acer
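In the meantime, a crude manual sketch of the dashed-line effect (the 'discont'='dashed' option above is only a wish; here one simply overlays a dashed segment by hand, using the cot example from the reply below, at its asymptote x=0):

plots:-display(
   plot( cot(x), x = -Pi-0.001 .. Pi+0.001, view = [-Pi..Pi, -3..3] ),
   plot( [[0,-3],[0,3]], linestyle = 3, color = grey )   # 3 = dashed
);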
We might only guess what you think is wrong with it. Is it because there are no vertical bars indicating the jumps at -Pi and Pi? If so, you might try extending the range a tiny amount on each side, so as to help Maple realize that there are jumps there.

plot( cot(x), x=-Pi-0.001..Pi+0.001, view=[-Pi..Pi,-3..3] );

acer
It might be useful to know how bad the conditioning can be. Does it get worse as the size of the problems in your class grows? (For example, the condition number for solving linear systems with the so-called Hilbert Matrix grows with the size N.) I ask because, if you needed 128 decimal digits at smaller sizes, and if the conditioning gets worse as the size grows, then at size 7000 the conditioning might be very much worse. If you can generate problems from your "class" in sizes of multiples of 10, say, then you could set Digits high and look for a pattern. Something like this,
Digits := 500:
kernelopts(printbytes=false):
with(LinearAlgebra):
for k from 1 to 20 do
   # M := ... however you construct the example,
   # with size k*10 by k*10
   ConditionNumber(M): print(evalf[10](%));
od:
If the above shows that the condition number grows with the size, then you might need a very high working precision to deal with the 7000x7000 case. The required working precision might be prohibitively high.

There are iterative sparse solvers for high-precision floating-point linear systems, available in LinearSolve. The method='SparseIterative' option forces use of (only) such methods. Specifying the method means that it won't fall back to any other (much slower) method if it encounters difficulties. You might experiment with using that on a small problem in the same class. You could try a smaller system at both hardware and software precision, to compare the performance effect of switching to software precision. Be prepared to see a 15-20 times slowdown just by switching between Digits=14 and Digits=16, which straddles the hardware double-precision cutoff. Further increases to Digits will result in a further (gradual but steady) slowdown. See the help-page ?IterativeSolver for more details on using this method. That page describes symmetric problems only, but experimentation reveals that there is also a nonsymmetric iterative solver.

If you go that route, be sure to create your Matrix directly with datatype=sfloat. You may need to use ImportMatrix to get the data into an sfloat Matrix effectively in the first place (it depends on how sparse it is). Do not(!) try to do general computations with such a Matrix; Maple might try to copy it to a dense Matrix and get bogged down. That includes calling ConditionNumber(). I'd suggest setting infolevel[LinearAlgebra]:=2 and breaking any computation that didn't show a NAG f11 function in the printed progress output. Any attempt to form the dense "rectangular" storage version of your large sparse Matrix might end up exceeding your memory resources.

The few things that you might reasonably do with such a huge sparse high-precision floating-point system (see the sketch after this list) include,
  • linear solving
  • matrix-vector multiplication
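A minimal sketch of the first of those, on a small made-up example (the tridiagonal, diagonally dominant Matrix below is just an illustration so that the iterative method converges; your own problem class goes in its place):

with(LinearAlgebra):
Digits := 30:
n := 100:
A := Matrix(n, n, (i,j) -> `if`(i=j, 4.0, `if`(abs(i-j)=1, -1.0, 0)),
            datatype=sfloat, storage=sparse):
b := Vector(n, fill=1.0, datatype=sfloat):
infolevel[LinearAlgebra] := 2:
x := LinearSolve(A, b, method='SparseIterative'):
Norm(A . x - b);   # residual check; matrix-vector multiplication is one of the safe operations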
You might also need to know the answers to questions like these. What sort of accuracy are you after? What would characterize a valid solution for you (forward error? backward error?)? How long are you prepared to wait for the result? Do you need to solve for multiple right-hand sides (Matrix b instead of Vector b, in A.x=b)? acer
Hi Georgios, I can't seem to reproduce the problem with Browse, on my 11.02 in 64-bit Linux, sorry. But I can answer your second question. The setting interface(rtablesize) controls the cutoff size above which one only sees that "summary" detail of the Vector/Matrix/Array. acer
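For example (the value 20 here is just illustrative; the default cutoff is 10):

interface(rtablesize = 20):   # Vectors/Matrices up to size 20 now display in full
Matrix(15, 15);               # shown elementwise rather than as a summary placeholder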
I just realized that the commands in the string could all be written out more nicely, in one execution group, as say,

H := "
sin(x);
int(cos(x),x);
plot(x^2,x=1..2);
x^3;
":

If the string is then split at the symbol "\n", then it should still work. That way the code is easier to test and debug, since the first and last lines could be temporarily commented out. There are of course simpler ways to accumulate results, for later output without shown input. But this way should preserve aspects of the flow like warnings and userinfo, etc. acer
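A sketch of how the splitting might then go (this assumes a procedure along the lines of the Q shown in the other reply below, changed to split at newlines and to skip the empty pieces at the ends):

Q := proc(s::string)
   local i;
   for i in StringTools:-Split(s, "\n") do
      if length(i) > 0 then print(eval(parse(i))); end if;
   end do;
end proc:
Q(H);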
It's crude, but... Could you put all the commands in a string, inside a single collapsed Section? Then, also inside the collapsed Section, define a procedure to parse and evaluate the pieces of that string as Maple commands. Then, outside the Section, in the open air, run that procedure against that string. Then there's very little to delete. E.g.,
H:="sin(x);|int(cos(x),x);|plot(x^2,x=1..2);|3*x^2-cot(x);|h;"; Q:=proc(s::string) local i; for i in StringTools:-Split(s,"|") do print(eval(parse(i))); od; end proc; Q(H); Then collapse the Section. There's just the Q(H) as visible input. acer
It's a good idea to check the final results of any computation, regardless of whether it comes from a CAS or some other program. My emphasis would be on the word "final", there. A particular definite integral result might well not be the final result. If the software double-checks all intermediate results, then it'll be far slower. Why not check only the final result, if it is a compound and more involved computation? If problems are found with the final result, then go ahead and attempt a forced test of intermediate results.

There's another important issue with numerical floating-point tests of symbolic results. If you don't know the numerical conditioning of the problem, how are you going to know the fixed precision at which to do the computation? How will you decide, for example, that small imaginary floating-point components are not merely artefacts? Letting the program try to figure this out has at least two major snags. The first is that it may be very difficult to compute the conditioning (and hence also the working precision needed to get a desired accuracy). The second is that if the program is allowed to automatically adjust the precision then it might try to use an absurd number of working digits (arbitrary, since the example is unspecified).

A check of final results does not necessarily have to be a quantitative test, such as a floating-point numeric comparison. It could be a qualitative test, according to some characteristic that is deduced.

All software has bugs. Almost all computational software has a history of producing some incorrect results. Even an exact definite integral result, corroborated by a floating-point approximation, should be checked somehow if it's going to be used for some important purpose. Adding an automatic floating-point verification attempt, with its resulting performance penalty, should be up to the individual to select. It would be a judgment, weighing the likelihood of error against the performance cost. acer
The problem with numerical verification, done automatically, is that it makes the system much slower. It's pretty straightforward for the end-user to get a floating-point verification out of Maple, for exact definite integration examples. But if int() always did it, then it'd be measurably slower. Many people might be unhappy about that. Leaving it as a choice for the user, after the computation, is nicer.

The idea is very good for one related purpose, however. It is very good for assuring that int()'s exact definite integration schemes are behaving and performing well. Produce a scheme for automatically generating "random" exact definite integration test problems, and it can become a strong test engine. By "random" I mean selections from a true miscellany. acer
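The kind of end-user check meant above can be as simple as comparing the exact result against purely numerical quadrature of the inert form (the integrand here is an arbitrary illustration):

exact := int( exp(-x^2), x = 0 .. 1 );
evalf( exact ) - evalf( Int( exp(-x^2), x = 0 .. 1 ) );   # should be negligibly small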
I wonder, how many of the issues mentioned in Davenport's 2003 Calculemus paper are still relevant today? (It mentions some other methods used by Maple, Alex, such as table-lookup and convolution of MeijerG functions.) Also, one year after an earlier MaplePrimes post, how much of its content is still to the point? acer
DJ, I use ArrayTools:-Alias quite often, but I might never have thought of what you did in IR4. That is very neat and very, very efficient. Since ArrayTools:-Alias produces a new rtable without actually copying the data, it costs very little. It's not even clear that it'd be of much benefit to "scatter-point index" directly on M (even if one could thus access the antidiagonal all at once, without a loop).

It's possible to offload most of the indexing to the time of element access, using an indexing function. Of course, this induces a huge penalty at access time (which I don't measure because it depends on the final usage, which includes printing!). Really, I just show this for fun.
`index/antident` := proc(idx, M, val)
   # a third argument signals an attempted store; ignore those
   if nargs > 2 then return NULL; end if;
   # op([1,2],M) is the column dimension: -1 on the antidiagonal, 0 elsewhere
   if idx[1] = op([1,2],M) - idx[2] + 1 then -1 else 0 end if;
end proc:

IR5 := proc(n,{compact::truefalse:=true},$)
   local M;
   # empty storage: the indexing function supplies every element on access
   M:=Matrix(n,n,shape='antident',storage=empty);
   if not compact then
      Matrix(M);   # force an ordinary (dense) copy
   else
      M;
   end if;
end proc:

st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):
IR5(100):
#IR5(100,compact=false): # slow
#IR4(100):
time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
Something that puzzled me: if I debug the indexing function, and call IR5(1) with a colon to suppress printing, then why is the indexing function called twice!? acer