acer

32373 Reputation

29 Badges

19 years, 334 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I may be missing something, but is there some significant way (performance, or other) in which this differs from splitting up the work and divvying it out with the Threads:-Task model, to parallelize? (See the attachment in my worksheet in the comment to your earlier response above, where I did that.) It seemed to me that both get a small bit of speedup, but not by a large factor.

...Almost as if some Maple-level overhead was dealt with better (parallelized, perhaps) while a significant portion of the computation (the externally called bit, perhaps) was done in the same manner.

And CodeTools:-Usage shows a similar value for both the wallclock time [real] and the cpu time (summed over all threads), which suggests that not all of the code run under the Maple threads is actually executing concurrently. See my comments about possible mutual blocking by call_externals -- which is not confirmed.
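As a hedged illustration of that diagnostic (the workload below is made up, purely to have something to time): if the cpu time summed over threads is close to the real time even though several threads ran, the work was likely serialized; genuine concurrency shows cpu time a multiple of real time.

```
# Made-up serial workload, just to demonstrate the Usage comparison
f := proc(n) local s, k;
    s := 0.0;
    for k to n do s := s + sin(k) end do;
    s;
end proc:
CodeTools:-Usage(f(10^5), output = ['realtime', 'cputime']);
```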

It is also possible that multiple cores (not maple threads) may already have been doing some of the external work in parallel, via BLAS.

acer

@Preben Alsholm Indeed. One of the important reasons that the `parameters` option exists is that it allows one to avoid a considerable amount of repeated preliminary overhead involved in code such as in 3) above.

In 3), dsolve/numeric is called each time the procedure is invoked to solve for a different `p` value. This entails all the necessary determination of the nature of the IVP, the setup of internal structures for storing results, and so on.
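A minimal sketch of the `parameters` approach, with a made-up IVP and parameter name: dsolve/numeric is called only once, and the returned procedure is reused for each parameter value.

```
# The IVP is analyzed once; only the parameter value changes per iteration
ode := diff(x(t), t) = -p*x(t):
sol := dsolve({ode, x(0) = 1}, numeric, parameters = [p]):
for pv in [0.5, 1.0, 2.0] do
    sol(parameters = [p = pv]);   # cheap: no re-analysis of the IVP
    sol(1.0);                     # value of x(1.0) for this p
end do:
```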

For various values of t1 there will be more than a single real value for t2 as roots of mrdot0. The Asker may wish to consider whether it matters which value of t2 is taken.

If it matters which value of t2 is accepted then either fsolve's `avoid` option could be used alongside repeated fsolve calls, or RootFinding:-NextZero might be used with judicious choice of the starting value (based on the previous t1's accepted t2 value?) and the `maxdistance` option (based on t1?).

Or you could take the `min`, say, of multiple results from `solve`. (...which does what? Calls evalf/RootOf repeatedly, using fsolve & avoid? It might be faster to use NextZero, using unapply just once, of course.)

Also (as implied in Carl's Answer) the first call to `solve` for mrdot0 would need to be brought inside the t1-loop for it to be changed to a numeric fsolve (or NextZero) call, so that t1 has a numeric value each time it is computed.
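A hedged sketch of both schemes, using a stand-in expression in place of mrdot0 (at some fixed t1):

```
# Stand-in for mrdot0 at a fixed t1 value
g := t2 -> sin(t2) - 0.3:

# Scheme 1: repeated fsolve calls, skipping roots already found
r1 := fsolve(g(t2), t2 = 0 .. 10);
r2 := fsolve(g(t2), t2 = 0 .. 10, avoid = {t2 = r1});

# Scheme 2: walk the real roots in increasing order with NextZero,
# starting each search just past the previously accepted root
s1 := RootFinding:-NextZero(g, 0);
s2 := RootFinding:-NextZero(g, s1);
```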

acer

Carl, you wrote, "It is necessary to assume that they are positive."

What if `A` and `B` are of opposite sign, or one of them is zero, or both of them are negative with `a` an integer?

acer

Have you considered upgrading to Maple 7 (2001) or later?

acer

@Doug Meade I'm not sure if my earlier message was clear (and apologies if you understood this already), but you followed up with, "An alert that alerts users to possibly unexpected consequences to what should be protected names should also be flagged." However this example with the imaginary unit has nothing to do with protected names.

If `Calculator` were a protected name then the imaginary unit issue that you've observed would still occur.

I know of a somewhat similar situation from a few years back. The module for the Maple-Nag Connector is in a .mla archive which was license-locked (by Maple) back when that was a toolbox add-on sold separately from Maple. Here's the kind of thing that would happen: call `LUDecomposition` with its output=NAG option, trigger the scan and read of the name `NAG` from .mla archives in libname merely on account of that reference, fail the license check in the ModuleLoad of the NAG module for whatever reason, get an error message from that ModuleLoad, and then the ModuleLoad would clobber (unassign) the module name via NAG:='NAG', after which any subsequent similar calls to LUDecomposition & friends would work fine.

The same mechanism was in the ModuleLoad of some other add-on modules (BlockBuilder, maybe?). But those names weren't already in use as keyword options to popular LinearAlgebra commands.

Back to the main line: there are a few other schemes that could ensure that the Calculator switched the imaginary unit for its own use while avoiding doing so via its ModuleLoad. They're just less slick. For example the routines in the rest of the Calculator could instead check a semaphore (module local, as a flag) and, if not yet set, adjust the imaginary unit and set the local. That is, adjust state when the module is first used rather than when it is first loaded/referenced.
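A sketch of that semaphore scheme (the module, export, and local names here are all made up): the state change is deferred until the module is first used, instead of firing in ModuleLoad when the name is merely referenced.

```
# Hypothetical module: global state is adjusted on first *use*, not at load
Calc := module()
    export Eval;
    local initialized;
    initialized := false;
    Eval := proc(expr)
        if not initialized then
            interface(imaginaryunit = 'i');  # deferred state change
            initialized := true;             # set the semaphore
        end if;
        evalf(expr);
    end proc;
end module:
```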

Some ModuleLoad actions seem OK. The packages which use ModuleLoad to do define_external calls (and reassign some of their dummy exports to be call_externals) appear reasonable. But changing global state and behaviour seems dodgy.

Here's another kind of use that I find awkward,

showstat(ImageTools::ModuleLoad);

ImageTools:-ModuleLoad := proc()
   1   TypeTools:-AddType('Image',op(ImageTools:-IsImage));
   2   TypeTools:-AddType('GrayImage',op(ImageTools:-IsGrayImage));
   3   TypeTools:-AddType('ColorImage',op(ImageTools:-IsColorImage));
   4   TypeTools:-AddType('ColorAImage',op(ImageTools:-IsColorAImage));
   5   NULL
end proc

eval(ImageTools:-IsImage);
Error, IsImage is not a command in the ImageTools package

How can one utilize those type-checks without reading the ImageTools module from archive? I mean loading the module name from .mla archive, not rebinding exports' names via `with`. The `IsImage` routine is not an export of the package. Suppose one wished to test in a procedure that an Array would be recognized as an "image". The procedure is not the right place to call `with`. So how to get the types added, in a noninteractive session? It's awkward.
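If merely referencing the module name is what triggers its ModuleLoad (as with the Calculator example discussed elsewhere in this thread), then one workaround sketch is a bare reference inside the noninteractive session, which should register the types without rebinding any exports via `with`:

```
# Bare reference: reads the module from the .mla and fires ModuleLoad,
# whose AddType calls register 'Image' and friends
ImageTools:
# Now a procedure can use the type-check without calling with()
p := proc(A) type(A, ':-Image') end proc:
```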

In recent Maple there is less need for `AddType` used in that way, since objects might carry around with them the necessary means to check their own type or to dispatch. But I haven't seen a great deal of such revising so far, where the structures of older packages get re-coded as objects.

 

I believe that the ModuleLoad of the module will be called when the name is first read from the Library archive, which should happen when it is first referenced. I.e., it's not relevant whether one attempts to assign to the name `Calculator` -- just using it is enough.

A quick test in Maple 17.00 confirms this.

restart:

interface(imaginaryunit);

                               I

Calculator:

interface(imaginaryunit);

                               i

This is an example of how using ModuleLoad in a way that is not super careful can run amok with the global namespace (which modules were designed to help with!).

acer

I suspect that a significant part of the performance may relate to whether the rhs b[i] are only ever stored in a Matrix, as opposed to being all separate (Vectors, say).

I would not expect a huge amount of performance improvement because the float[8] machinery for LinearAlgebra:-Modular should already use fast simd/threaded/cache-tuned BLAS where possible, even when run in regular (serial programming) mode from Maple.

I didn't check... but it's even possible that the define_external calls from the LinearAlgebra:-Modular Library routines do not use the THREAD_SAFE option -- in which case the ensuing call_externals would be mutually blocking.

There may be a bit of overhead cost to save, and a tiny bit more threading to eke out. Some of that may just come from being consistent with a storage order (column-major vs. row-major), or other seemingly minor aspects.

Here's a version of Carl's sheet, trying also to use the Task threading model to parallelize.
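The Task splitting in that attachment follows roughly this pattern (the work function `dowork` and the cutoff of 8 are stand-ins): a range of problem indices is recursively halved into child Tasks, with small leaves doing the actual work.

```
# Recursively split an index range into Tasks; leaves do the real work
dorange := proc(lo, hi)
    local mid, k;
    if hi - lo < 8 then
        for k from lo to hi do dowork(k) end do;   # hypothetical worker
    else
        mid := iquo(lo + hi, 2);
        Threads:-Task:-Continue(proc() NULL end proc,
            'Task' = [dorange, lo, mid],
            'Task' = [dorange, mid + 1, hi]);
    end if;
end proc:
# Threads:-Task:-Start(dorange, 1, N);
```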

acer

@Alejandro Jakubi Yes, I knew of that earlier thread here.

There's a difference between something announced and something appearing only in a member's Question. The Maple Player fills an important gap in functionality and could help make Maple more competitive. But it didn't get an announcement of its own. There was a big announcement for Mobius, at which point an earlier version of the Player could be downloaded from links available to those who signed up for the Mobius pilot.

But now the Player has a "product" page all its own, and is immediately downloadable.

FWIW, in addition to,

d := conjugate(u)*conjugate(y)+conjugate(v)*conjugate(x):
z := conjugate(f)/d+G*w*conjugate(w)*conjugate(y)/d:

simplify(z,size);                                        
                                      _ _   _
                                  G w w y + f
                                  -----------
                                   _ _   _ _
                                   u y + v x

there is also, for this example,

normal(z);
                                      _ _   _
                                  G w w y + f
                                  -----------
                                   _ _   _ _
                                   u y + v x

combine(%);
                                 _____       _
                                 (w y) G w + f
                                 -------------
                                  ___________
                                  (u y + v x)

I'm certainly not saying that these will handle all your examples. But for simplification it often serves to try other avenues, depending on how much the final form is important to you.

acer

@Markiyan Hirnyk My answer here is different in that it uses a single call to `surfdata` which passes its supported optional arguments to specify the axes ranges. I believe that it addresses the question asked.

The Answer by marc005 does not use that simple functionality of `surfdata`, but rather produces a `surfdata` call and a `plot3d` call (only the latter of which uses the specified axes ranges). Try it.

Robert Israel's suggestion in that same thread to which you've referred -- to use `subs` to replace ranges in the structure that results from calling `matrixplot` -- is more complicated. And it would run into trouble if the number of rows equalled the number of columns in the Matrix while the two desired target ranges differed from each other.
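For concreteness, a minimal sketch of that single-call usage, with made-up data and target ranges:

```
# Made-up 5x7 grid of heights; both target ranges go straight to surfdata
M := Matrix(5, 7, (i, j) -> evalf(sin(i) * cos(j)), datatype = float[8]):
plots:-surfdata(M, 0 .. 1, 0 .. 2, axes = boxed);
```

Note that the two ranges work regardless of whether the Matrix is square, which is where the subs-into-matrixplot approach can run into trouble.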
