MaplePrimes Activity


These are replies submitted by acer

This is a bug in Maple 17, illustrated by the behaviour of `subs`.

It worked properly in Maple 16.02 and at least 15.01, 14.01, and 11.02.

It works for 2-by-1 Matrices in Maple 17.01, but it is not working properly for the Vector case.

There is nothing unusual about the following way of constructing Vector V, which is similar in essence to one of your ways. It is a natural way to stack Vectors, and here it produces a (column) Vector. The `subs` action in question is broken in Maple 17.01. The problem also occurs for `eval`, and it's also broken in the row-Vector analogue.

restart:
V0:=Vector([a]):
V1:=Vector([b]):

V:=Vector([V0,V1]);
                                       [a]
                                  V := [ ]
                                       [b]

P:=subs(a=5,V); # produced a Vector containing 5 in Maple 16, but is broken in 17.01

                                       [a]
                                  P := [ ]
                                       [b]

rtable_eval(P); # this at least should work, but doesn't in Maple 17.01

                                     [a]
                                     [ ]
                                     [b]

The following constructs a 2-by-1 Matrix rather than a Vector, so it is not a usual way of constructing a Vector. For Matrix (or Array), the `subs` action in question is not broken in Maple 17.01.

restart:
V0:=Vector[row]([a]):
V1:=Vector[row]([b]):

M:=<V0,V1>;

                                       [a]
                                  M := [ ]
                                       [b]

P:=subs(a=5,M);
                                       [5]
                                  P := [ ]
                                       [b]

acer

Do you have the typesetting level set to `extended`? If so, then I suppose that you are seeing an enhanced prettyprinting of a call to hypergeom.

acer

@Alejandro Jakubi Thanks, but ScientificConstants has several times that much information on isotopes. For example,

restart:
with(ScientificConstants):
select(t->evalb(op(0,t)=H),convert([GetIsotopes()],`global`));
map(GetElement,%);

I have previously found several such partial data sets at NIST and related sites. But what is needed, I suspect, is a much fuller collection which is also named (so that it can be cited and referenced for comparison at a later date).

For example, there is some mention of a 2001 published data set here, with some later update here in 2005. The 2001 data might be accessible only by subscription, and the 2005 update might have its central numbers available in the linked abstract. I'd like to hear an expert's opinion.

 

@Carl Love Hi Carl. Darin might have a good answer for you, but I'll chip in with some anecdotal evidence if that's OK.

I was using the Task model to split (halve) some of my embarrassingly parallelizable numeric escape-time fractal code. At first I imagined that I'd get optimal performance by just using numcpus to figure out the best base case, i.e. the code could split if the "current" size were not less than 1/numcpus times the original total size.

But in practice I found that the OS (64-bit Windows and 64-bit Linux) could ramp up more quickly if I instead used a value higher than numcpus. Both Linux `top` and Windows' Task Manager showed all cores reaching a higher load more quickly when the Maple Task mechanism was instructed to split more times than just the value of numcpus. E.g., on an 8-core Intel i7 or a 4-core i5 I got a measurably better total real time for the entire computation if I made the code split until the size was, say, 1/15th to 1/20th of the original.

I'd be interested if anyone else had seen behaviour that was similar (or radically different).
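
A minimal sketch of the kind of splitting I mean (this is not my actual fractal code; the work done in the base case is just a placeholder, and the 1/16 threshold stands in for the "finer than 1/numcpus" split factor):

```maple
# Recursively split a column range with Threads:-Task, using a base-case
# threshold finer than 1/numcpus (here 1/16 of the total width).
doCols := proc(lo, hi, total)
    if hi - lo + 1 <= ceil(total/16) then
        # base case: compute escape times for columns lo..hi (placeholder)
        NULL;
    else
        Threads:-Task:-Continue( proc() NULL end proc,
            'Task' = [doCols, lo, iquo(lo+hi,2), total],
            'Task' = [doCols, iquo(lo+hi,2)+1, hi, total] );
    end if;
end proc:

N := 1024:
Threads:-Task:-Start( doCols, 1, N, N );
```

The point of the finer threshold is simply that the scheduler then has more tasks than cores to hand out, which (in my anecdotal experience) lets all cores ramp up to full load sooner.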

In my experience the 2010 release of the CODATA collection of values for the fundamental physical constants was easily found on the web as a single plaintext file.

I once wrote a Maple routine which processed the CODATA 2010 .txt data file and saved the data into Maple using the ScientificConstants package. This was a quite straightforward task, given that the data came as a single text file.

But finding the latest data for isotopes (or nuclides) in a single collection that is recognized by NIST seems more difficult. Does anyone know the location of such a data set, as plaintext or XML?
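
As a rough indication of what that routine did, here is a simplified, hypothetical version of one step (the name and numbers shown are just for illustration; the real CODATA file is fixed-width and needs its internal grouping spaces stripped from the numeric fields first):

```maple
with(ScientificConstants):
# Register one parsed constant; name and symbol here are made up.
AddConstant( 'electron_mass_2010',
             'symbol' = 'me2010',
             'value' = 9.10938291e-31,
             'uncertainty' = 0.00000040e-31,
             'units' = 'kg' );
```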

acer

@Mac Dude It seems like a bug in ScientificConstants, where it doesn't properly accommodate the new system.

This next looks ok,

restart:

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -31                   
              9.109381882 10   , Units:-Unit('kg')

But now, with a system with energy and action to be simplified in terms of MeV and MeV*s respectively,

restart:

Units:-AddSystem('Accelerator',Units:-GetSystem('SI'),MeV,MeV*s); 
Units:-UseSystem('Accelerator');

with(ScientificConstants):
em:=Constant(electron_mass):

GetValue(em),GetUnit(em);

                            -18                   
              5.685626500 10   , Units:-Unit('kg')

Answers about how to do this best (or perhaps just better) may depend on the particular nature of `a1`. Could you provide some details in the form of a fully functioning, explicit example?

acer

@Markiyan Hirnyk I interpreted the question as being about two things: the holes in the plot, and the long computation time.

Maple can get quirky in strange ways when Digits is set below 5, so having it that low is not a great idea.

The holes in the plot are because the individual quadrature attempts failed (for input pairs in the plane). evalf/Int infers a value for epsilon from Digits when that option is not supplied explicitly. But if Digits is set too low then the inferred looser tolerance may not help, as there might not be enough working precision even to satisfy that looser tolerance. Hence it quite often helps to keep a higher working precision (default Digits may do) while forcing a looser tolerance separately.

And evalf/Int might converge quickly enough to satisfy a looser tolerance sooner. Hence I suggested leaving Digits at its default value (10) while supplying a looser tolerance (larger epsilon). That appears to help with both the failing values and the speed.

Having Digits as high as 8 (or 10, the default) might fix the holes. But reducing Digits from the default 10 down to 8 doesn't speed things up as much as supplying a looser tolerance does. So typing in Digits:=8, or what have you, just seems to be unnecessary typing that might also obscure what matters more. I did not test whether Digits=10 is needed; it might be, but even if not, there seems little reason to reduce it from its default, since doing so does not by itself cure the speed issues.

And the same may go for other tweaks such as using the non-iterated quadrature method and reducing the plot points.
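
For illustration, this is the kind of call I mean (the integrand here is just a made-up example, not the original problem; the point is that Digits stays at its default while only the quadrature tolerance is loosened):

```maple
restart:
# Digits remains at its default of 10; only epsilon is loosened.
F := (a, b) -> evalf( Int( exp(-a*t^2)*cos(b*t), t = 0 .. infinity,
                           'epsilon' = 1.0e-4 ) ):
F(1.0, 2.0);
```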

 


I googled "Kovacic algorithm" and the first hit was this, and most of the first page of hits seemed relevant. I mention that first hit because its references indicate a preprint (1979?) by Kovacic as well as an implementation by D. Saunders given at ACM in 1981.

Another hit that stood out was this Maple help page (even if it may differ in implementation), which cites Kovacic in its References section:

Kovacic, J. "An algorithm for solving second order linear homogeneous equations". J. Symb. Comp. Vol. 2. (1986): 3-43.

acer

@Carl Love I was not claiming that it is a bug in 2-argument eval. I stated that `eval/if` relies on the behaviour of 2-argument eval.

I'm not sure that `eval/if` is quite right. There are other corners, too.

The routine `eval/if` is affected by the following behaviour of 2-argument `eval`,

> eval('sin(r)', r=0);

                                       0

> eval('sin(0)', r=0);

                                    sin(0)

Note that `seq` does not behave like that,

> seq('sin(r)', r=0); 

                                       0

> seq('sin(0)', r=0);

                                       0

acer

@Preben Alsholm Yes, thanks, that's why I included my second example. It's an oddity amongst oddities.

> restart:

> eval(`if`(r,sin(Pi),p),r=true); # hmm

                                    sin(Pi)

> f:=x->x:

> eval(`if`(r,f(r),p),r=true);

                                     true

> eval(`if`(r,f(2),p),r=true); # hmm

                                     f(2)

> seq(`if`(r,f(2),p),r=true);

                                       2

It looks like a bug.

acer
