MaplePrimes Activity


These are replies submitted by acer

@Kr1stin I don't see much more than what you already had. The content that seemed valid (to me) appears to end at a corrupted image (and possibly all nulls after that...)

See attached.

mat1noter_2.mw

And here are results after parallelizing the computation using the Grid package.

Using both 32bit and 64bit Maple 2016.2, on an 8-core AMD @3.5GHz machine running Windows 7 Pro, I obtained the following performance:

J(3,Pi/6) to ten digits in 3.3sec when utilizing all 8 cores, and 5.4sec when restricting to just 4 cores.

a*b*(-A*J(3,Pi/6)+B*J(6,Pi/6)) to ten digits in 6.6sec, using all 8 cores.

For some reason I had to use the initializing command Grid:-Set("local",numnodes=n) on both those Windows OSes, while for Linux the parallelism speedup was attained without that unexpected extra command.

shortersum_win32_b.mw

shortersum_win64_b.mw

It required little effort to parallelize the computation.

@kuwait1 I don't understand why you are still 1) using Digits=50 and nterms=100, when you haven't shown examples of n and phi that require that much computational expense, and 2) using unapply in conjunction with evalf/Sum.

It seems that the upper value of the summation index for the innermost of the nested sums may not need to be nearly so high. That is, the innermost summation may be converging more quickly than previously considered. I can obtain the same decimal results as before, taking the innermost summation from 0 to only 25. Naturally, this is much faster.
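
For instance, a quick way to convince yourself that the inner truncation has converged is to re-do the final evaluation for a few values of the inner limit NNN and compare the rounded results. A rough, untested sketch, using the inert `raw` expression and `nterms` assigned further below:

# Rough sketch: vary the inner limit NNN and compare the 10-digit results.
for tt in [20, 25, 30] do
    print(tt, evalf[10](evalf(subs([NNN = tt, NN = nterms], raw))));
end do;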

I am not sure why you are still getting inaccurate answers, while it works ok for me using either 32bit or 64bit Maple 2016.2 as shown below. But if you use the suboptimal approaches then I can imagine that you might get into memory allocation problems with the 32bit version.

Here it is running on an AMD FX-8320 @3.5GHz machine, with 32bit Maple 2016.2 on 64bit Windows 7.

The computation of J(3,Pi/6) is done in as little as 7.8 sec, and the computation of a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi))  is done in as little as 18.0 sec. Total memory allocation for that last computation is 156MB.

I show two approaches for each of those two examples. One approach just uses evalf/Sum (without unapply!), while the other tries some symbolic simplification of the inert Sums before utilizing evalf. Pick and use whichever is the faster or more robust one for yourself. On my 64bit Linux the symbolic simplification helps more, while it doesn't help much (if at all) on Windows 7.

kernelopts(version)

`Maple 2016.2, IBM INTEL NT, Jan 13 2017, Build ID 1194701`

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(J(3, (1/6)*Pi))

memory used=263.19KiB, alloc change=0 bytes, cpu time=0ns, real time=296.00ms, gc time=0ns

CodeTools:-Usage(evalf(subs([NNN = 20, NN = nterms], raw))); evalf[10](%)

memory used=487.53MiB, alloc change=36.00MiB, cpu time=7.52s, real time=7.80s, gc time=546.00ms

.3995579519

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(J(ii, (1/6)*Pi))

memory used=265.86KiB, alloc change=0 bytes, cpu time=15.00ms, real time=16.00ms, gc time=0ns

new := `assuming`([CodeTools:-Usage(simplify(simplify(combine(convert(combine(raw), elementary))), size))], [i::nonnegint, j::nonnegint, l::nonnegint, j <= i, ii::posint, NN::nonnegint, NNN::posint])

memory used=90.24MiB, alloc change=38.01MiB, cpu time=2.26s, real time=6.35s, gc time=78.00ms

CodeTools:-Usage(evalf(subs([NNN = 20, NN = nterms, ii = 3], new))); evalf[10](%)

memory used=0.52GiB, alloc change=32.00MiB, cpu time=7.13s, real time=6.97s, gc time=717.60ms

.3995579519

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi)))

memory used=301.39KiB, alloc change=0 bytes, cpu time=0ns, real time=0ns, gc time=0ns

CodeTools:-Usage(evalf(subs([NNN = 25, NN = nterms, ii = 3, iii = 6], raw))); evalf[10](%)

memory used=1.19GiB, alloc change=36.00MiB, cpu time=18.70s, real time=18.78s, gc time=1.26s

.6541253936

restart

Digits := 30:

nterms := 70

r := 2.8749:

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

raw := CodeTools:-Usage(a*b*(-A*J(ii, (1/6)*Pi)+B*J(iii, (1/6)*Pi))):

memory used=307.61KiB, alloc change=0 bytes, cpu time=0ns, real time=31.00ms, gc time=0ns

new := `assuming`([CodeTools:-Usage(simplify(simplify(combine(convert(combine(raw), StandardFunctions))), size))], [i::nonnegint, j::nonnegint, l::nonnegint, j <= i, ii::posint, iii::posint, NN::nonnegint, NNN::posint]):

memory used=115.30MiB, alloc change=68.07MiB, cpu time=2.51s, real time=3.09s, gc time=156.00ms

CodeTools:-Usage(evalf(subs([NNN = 25, NN = nterms, ii = 3, iii = 6], new))):

memory used=1.07GiB, alloc change=8.00MiB, cpu time=15.34s, real time=14.94s, gc time=1.39s

.6541253936

evalf[5](convert(kernelopts(bytesalloc)*Unit(byte), 'units', Unit(megabyte)));

153.56*Units:-Unit('megabyte')

 

Download shortersum_win32.mw

Here it is running on an AMD FX-8320 @3.5GHz machine, with 64bit Maple 2016.2 on 64bit Windows 7.

The computation of J(3,Pi/6) is done in as little as 9.3 sec, and the computation of a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi))  is done in as little as 18.1 sec. Total memory allocation for that last computation is 122MB.

kernelopts(version)

`Maple 2016.2, X86 64 WINDOWS, Jan 13 2017, Build ID 1194701`

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(J(3, (1/6)*Pi))

memory used=410.60KiB, alloc change=0 bytes, cpu time=15.00ms, real time=313.00ms, gc time=0ns

CodeTools:-Usage(evalf(subs([NNN = 20, NN = nterms], raw))); evalf[10](%)

memory used=0.77GiB, alloc change=4.00MiB, cpu time=8.69s, real time=9.02s, gc time=530.40ms

.3995579519

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(J(ii, (1/6)*Pi))

memory used=415.02KiB, alloc change=0 bytes, cpu time=15.00ms, real time=15.00ms, gc time=0ns

new := `assuming`([CodeTools:-Usage(simplify(simplify(combine(convert(combine(raw), elementary))), size))], [i::nonnegint, j::nonnegint, l::nonnegint, j <= i, ii::posint, NN::nonnegint, NNN::posint])

memory used=142.48MiB, alloc change=36.11MiB, cpu time=2.29s, real time=6.07s, gc time=140.40ms

CodeTools:-Usage(evalf(subs([NNN = 20, NN = nterms, ii = 3], new))); evalf[10](%)

memory used=0.68GiB, alloc change=8.00MiB, cpu time=7.13s, real time=6.99s, gc time=1.01s

.3995579519

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi)))

memory used=463.62KiB, alloc change=0 bytes, cpu time=0ns, real time=0ns, gc time=0ns

CodeTools:-Usage(evalf(subs([NNN = 25, NN = nterms, ii = 3, iii = 6], raw))); evalf[10](%)

memory used=1.99GiB, alloc change=4.00MiB, cpu time=22.20s, real time=22.21s, gc time=1.37s

.6541253936

restart

Digits := 30

nterms := 70

r := 2.8749; a := .7747; b := .3812; B := 29000; R := 5.4813; Z := 2; A := 17.4

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NNN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc

raw := CodeTools:-Usage(a*b*(-A*J(ii, (1/6)*Pi)+B*J(iii, (1/6)*Pi)))

memory used=472.79KiB, alloc change=0 bytes, cpu time=0ns, real time=31.00ms, gc time=0ns

new := `assuming`([CodeTools:-Usage(simplify(simplify(combine(convert(combine(raw), StandardFunctions))), size))], [i::nonnegint, j::nonnegint, l::nonnegint, j <= i, ii::posint, iii::posint, NN::nonnegint, NNN::posint])

memory used=170.48MiB, alloc change=68.03MiB, cpu time=2.51s, real time=3.12s, gc time=202.80ms

CodeTools:-Usage(evalf(subs([NNN = 25, NN = nterms, ii = 3, iii = 6], new))); evalf[10](%)

memory used=1.46GiB, alloc change=8.00MiB, cpu time=15.05s, real time=14.96s, gc time=1.34s

.6541253936

evalf[5](convert(kernelopts(bytesalloc)*Unit(byte), 'units', Unit(megabyte)))

122.06*Units:-Unit('megabyte')

 

Download shortersum_win64.mw

It opens without any such message for me, using Maple 2016.2.

Did you Save the document after it gave you that message? If so then you may have wiped out any of the problematic data. Is your upload the document that gave the message, without further saving?

@Carl Love I was planning on updating my suggestion today, to warn about that inheritance of Digits and the special-evaluation-rules of evalf, but you've beaten me to it, thanks.

In another Mapleprimes thread in which I've been responding this week I took care to make the second evalf call (intended just for final rounding of purely float results) a separate, subsequent call. I did that because it was affected by the effects you've mentioned, and with which I was familiar. It's ironic that I was not careful about mentioning it here, and I've been awaiting a little free time to get in a correction since almost right after I Answered here. (My second shoddy remark this week. I am distracted, sorry all.)
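
To illustrate what I mean by keeping that rounding as a separate, subsequent call, here is a minimal generic sketch (not the actual computation from that other thread):

Digits := 30:
val := evalf(Sum(1/k^3, k = 1 .. infinity)):   # all the numeric work is done at 30 digits
evalf[10](val);                                # separate call: merely rounds the stored float

Had I instead wrapped the whole thing directly in evalf[10](...), the inner summation would have been performed at the lower working precision.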

Here are my best results so far, using 64bit Maple 2016.2 for Linux on a quad-core Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz machine.

I've used Digits=30 and nterms=70 throughout.

Some of the timings vary across multiple re-runs (and I could be wrong but this seems to correspond to how much time is spent in garbage-collecting).

For computing J(3,Pi/6) my results are:
- Using add (with unapply as a first of two steps)
     21.07 sec
- Using unevaluated add in first step
     17.38 sec
- Using evalf/Sum and "simplification"
     13.81 sec
- Using codegen[optimize] and "simplification"
     12.46 sec

For computing a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi)) my results are:
- Using add (with unapply as a first of two steps)
      26.53 sec
- Using unevaluated add in first step
      35.65 sec
- Using evalf/Sum and "simplification"
      26.60 sec
- Using codegen[optimize] and "simplification"
      22.70 sec

moresummation.mw

By "unevaluated add" I mean that the expression passed to unapply has uneval quotes around the calls to add. Naturally that makes that first step be near instantaneous. In the case of computing a J for just one value of n that reduces the overall time. Not surprisingly the overall time for computing J for two values of n take about twice as long in this approach, but the overall time is then more than it was for the simpler approach of letting those add calls evaluate up front.

@tomleslie Yes, that was a central point in my previous Comment.

I had wrongly concluded that evalf/Sum was faster than add, because of that. And I believe that you had wrongly concluded the opposite, because of the same thing. (Thanks for pointing out that I'd misremembered how evalf/Sum worked, BTW.) Now I'll put together what I hope are better ways for both, in a sheet. As I mentioned, on my slower Windows 7 machine that led to the same timing for both.

One nice thing about using Sum is that the expression can be manipulated, and it seems that a faster implementation may be possible by doing so. It might affect the accuracy, and the settings needed to attain the same number of correct digits in the final result, though. That is, I'll repeat that part of my original Answer. I may be able to use that with add as well.

I really don't know beforehand, but it might also be interesting to see how evalf/Sum behaves here if the upper index of the inner summation (or the outer, or both) is put as infinity. I'm not sure whether the Levin u-transform would exit using fewer terms than, say, the value of 70 used before. Probably not...

One thing not discussed so far is parallelism. Offhand I doubt that the subexpressions are all thread-safe, so using the Threads:-Task package may not be viable. (Nor Threads:-Add, even if it could be coerced into parallelizing when the outermost summation has such a small range for its index.) But with such an expensive computation it might be worthwhile to parallelize using the Grid package. I have in mind that the lower and upper index values of the outermost summation could be made symbolic parameters, so that the whole job could be split according to numcpus and dispatched via Grid. How effective this would be remains to be seen. If the unparallelized full job was already utilizing many cores for garbage collection then the benefit might be reduced, though. However the garbage-collection time seemed small, even though the amount of garbage produced (bytes used) was high. So it might be effective.
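
For example, the splitting might look roughly like this (an untested sketch; outerchunk is a hypothetical procedure that would evaluate the nested expression with the outermost summation index restricted to the range lo..hi, and the node count would be initialized as mentioned earlier):

n := 4:                                    # e.g. kernelopts(numcpus)
step := ceil((nterms + 1)/n):              # nterms = 70 is the outer upper limit
pieces := [Grid:-Seq(outerchunk(k*step, min((k+1)*step - 1, nterms)), k = 0 .. n-1)]:
total := add(p, p in pieces);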

@tomleslie The OP originally gave code with a procedure defined by an explicit call to proc(..) .. end proc. I'll call that the "direct" approach. John's suggestion to use add instead of sum used an unapply call. It turns out that this affects the performance significantly.

Using the unapply approach improves the performance with add, in that it does better than the direct approach with add.

Using the unapply approach with evalf/Sum hurts the performance, making it do worse than the direct approach with evalf/Sum.

I'll show details later, as I have to run. But the essence is that you used the unapply approach with both, and I used the direct approach with both.

I did in fact compare apples to apples, in that my sheet compared Digits=30 and nterms=70 for both add and evalf/Sum. Using the direct approach for both, I got about 28sec for add and 20sec for evalf/Sum.

You obtained the reverse conclusion, using the unapply approach for both.

I just tested the unapply approach on add with the direct approach on evalf/Sum, on a slow 64bit Windows machine. It took about 190sec for both.

The form of the expression matters. That's why in my sheet I also included an improvement where I massaged the expression before using evalf/Sum. That got me from 20sec down to 15sec on a fast 64bit Linux machine.

@tomleslie First, just for clarity: I did not know that John May had suggested something like Digits=30; I happened to find out for myself that Digits=50 was unnecessary yet Digits=20 was inadequate. I found through experimentation that Digits=30 was adequate for the given parameter set.

Tom, your explanation is mostly fine, as regards `sum` versus `add`. One important additional note on `sum` though: using symbolic `sum` can sometimes be exceptionally inefficient *in the case where it tries hard to find a closed form symbolically, yet fails, before reverting to explicit summation of the finite number of terms.* For the given code that slow, doomed failure is repeated (at great, wasted expense) all of the tens of thousands of times through the loops. So the most important gain comes from avoiding that highly duplicated, unsuccessful symbolic summation -- and instead forcing a scheme of finite, term-by-term numeric summation.

Also, you miss the key point about using `Sum` here, which ends up being considerably faster than `add`. Inert `Sum` is NOT just for programmatic manipulation or for evaluating via `value` (which turns it into active `sum`). Wrapping `Sum` with `evalf` causes the special-purpose routine `evalf/Sum` to be invoked. That uses a so-called Levin transformation to detect convergence of the summation from a finite number of terms, and so can exit early (ie. accelerated) while staying accurate. That is all demonstrated in my Answer's attachment.
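
As a tiny generic illustration of that acceleration (nothing to do with the particular sums in this thread):

# This series converges slowly -- adding its terms directly would require an enormous
# number of them for 10 correct digits -- yet evalf/Sum returns almost immediately,
# by extrapolating from relatively few terms.
evalf[10](Sum(1/k^2, k = 1 .. infinity));    # 1.644934067, ie. Pi^2/6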

Using evalf(Sum(...)) is handily faster than using `add` when both are done with 70 as the upper index limit, and also when both are done with 100 as the upper index limit.

@kuwait1 Yes, I know quite well that there are numerical difficulties here when the working precision is not high enough (ie. the results "differ" as you say). I just wanted to find out the level at which I ought to couch explanation and details. I may not get the opportunity to look deeply at the precise cause for a while.

@kuwait1 I can respond with details at several possible levels, but it would help to know your current level of understanding.

It might be that you are unfamiliar with basic  concepts of numerical (floating point) computation, and in particular numerical roundoff error. For this a web search or Wikipedia should suffice.

For this level you might also benefit from a proper explanation of the difference between the Maple commands: symbolic `sum`, explicit `add` for summing a finite number of terms, and inert `Sum` which here can be used for accelerated floating-point summation. (It's a terrible shame that Maple's side palette contains `sum`, which is often the wrong choice altogether.)

Or you may be familiar with those concepts in general but not with how they affect Maple. In particular I include Maple's behavior of striving for accuracy to within 0.6 ulps (based on the current Digits setting) on atomic operations (calls to elementary functions, arithmetic, etc) but NOT for compound operations. For compound expressions/operations small roundoff noise or small numeric error can be magnified by the outer operations. For this a close reading of key sections of Maple's Programming Manual might suffice.
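
As a crude generic illustration of that kind of magnification (unrelated to your actual computation):

Digits := 10: ( (1.0 + 1.0e-12) - 1.0 ) * 1.0e12;   # the tiny term is lost; result is 0., not 1.
Digits := 20: ( (1.0 + 1.0e-12) - 1.0 ) * 1.0e12;   # enough working precision; recovers 1.

Each atomic operation above is accurate within its own precision, yet at the lower setting the subtraction and scaling magnify the roundoff from the addition into a completely wrong final value.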

Or you might be familiar with all the above, and want detailed surgical analysis of just where and how your computations get into which specific numerical difficulties.

@Axel Vogt Your worksheet contains a function call to b, rather than a multiplication. Ie,

b(-A*K(3,Pi/6)+B*K(6,Pi/6))

instead of,

b*(-A*K(3,Pi/6)+B*K(6,Pi/6))
Hence your result is just this:
evalf(eval(a*b, Params));

             0.295315640000000
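
(The underlying behaviour, in case it isn't obvious: a numeric constant applied as if it were an operator simply returns itself, so the whole bracketed factor silently disappears. A quick generic check:)

b := .3812:
b( x + y );    # returns .3812 -- a constant applied to arguments acts as a constant operator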

@w-Teilchen You can enforce a view.

plots:-contourplot( 1/(x^2+y^2), x=-2 .. 2, y=-2 .. 2,
                    method="alternate",
                    contours=[seq(i,i=0.5..4,0.5)],
                    view=[-2..2, -2..2] );

(The plots:-implicitplot command has an option rangeasview which I imagine was originally intended as a convenience to avoid having to enter the ranges twice. That might make it relate to what I mentioned about my wanting to pass along options to the underlying implicitplot command.)

@vv The assuming facility does not place assumptions on variables present only within a procedure body (here, f).

That is documented in what is currently the last example on the assuming help page.
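
A minimal illustration of that documented behaviour (using a throwaway procedure g, not vv's f):

g := proc() simplify(sqrt(x^2)) end proc:
g() assuming x > 0;                      # still csgn(x)*x: the name x occurs only inside g's body
simplify(sqrt(x^2)) assuming x > 0;      # x: here the name x is visible to assuming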

@vv I was able to get similar results using just simplify and radnormal under the positive assumption.

I was able to squeeze it a little more, but not by much. I don't see a really nice factorization, offhand.

 

u:=f(3): u:=simplify(radnormal(u)) assuming positive:

lprint(u);
(A*L^4*c^2*dz^4*s^2+2*A*L^4*c*dz^4*s^3+A*L^4*dz^4*s^4+B*L^3*c^2*dz^3*s+2*B*L^3*c*dz^3*s^2+B*L^3*dz^3*s^3+3*A*L^2*a^2*c*dz^2
*s+3*A*L^2*a^2*dz^2*s^2+2*B*L*a^2*c*dz+2*B*L*a^2*dz*s+A*a^4)/a^4

u:=f(3): u:=simplify(simplify(radnormal(u)),size) assuming positive:

lprint(u);
(A*L^4*s^2*(s+c)^2*dz^4+B*L^3*s*(s+c)^2*dz^3+3*A*L^2*a^2*s*(s+c)*dz^2+2*B*L*a^2*(s+c)*dz+A*a^4)/a^4

U:=proc(e)
  local u;
  u:=simplify(simplify(radnormal(e)),size) assuming positive;
  # freeze s+c so that collect can treat it as a single name, collect the numerator
  # in powers of it while simplifying each coefficient, then thaw to restore s+c
  thaw(collect(numer(subs(s+c=freeze(s+c),%)),freeze(s+c),
               ff->simplify(simplify(radnormal(ff)),size))
       /denom(subs(s+c=freeze(s+c),%)));
end proc:

u:=f(3): u:=simplify(simplify(radnormal(u)),size) assuming positive: u:=U(u):

lprint(u);
(L^3*dz^3*s*(A*L*dz*s+B)*(s+c)^2+L*a^2*dz*(3*A*L*dz*s+2*B)*(s+c)+A*a^4)/a^4