acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Your example has infinitely many solutions, and your result is a compact way of representing them all. Maple's convention for such answers is that _Bxx is Boolean-valued (0 or 1) and _Zyy stands for an arbitrary integer. So your result indicates that any value for x obtained by evaluating that formula at _B15 = <0 or 1> and _Z15 = <some integer> is a solution; you get to choose which values. The tilde (~) is Maple's way of marking a name that has assumptions on it. The about() command prints the current known assumptions on a name. For example,
> _EnvAllSolutions:=true:

> sol := solve(cos(x) = -1/2);

                    sol := 2/3 Pi - 4/3 Pi _B1~ + 2 Pi _Z1~
 
> seq(about(i),i in indets(sol));
Originally _Z1, renamed _Z1~:
  is assumed to be: integer
 
Originally _B1, renamed _B1~:
  is assumed to be: OrProp(0,1)
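To pick out one particular solution, substitute chosen values for those names. A sketch of one way to do that (the assumed names can be collected with indets; I filter out Pi and friends, and I'm assuming lexicographic sorting puts the _B name first):

> vars := indets(sol, name) minus {constants}:
> Bvar, Zvar := op(sort([vars[]])):
> eval(sol, {Bvar = 1, Zvar = 0});

                                   -2/3 Pi

And cos(-2*Pi/3) = -1/2, as expected.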
The numbers xx and yy in _Bxx and _Zyy are generated automatically by Maple so that the resulting new names don't already have assumptions or assigned values. acer
> q:=y[2]:

> op(1,q);
                                       2

> op(0,q);
                                       y

acer
Your suggestion would make using parts of the OrthogonalSeries package problematic. You may have meant the more special case when n is of type nonnegint. It's still not a great idea, though. With the polynomials expanded, their useful properties (orthogonality, recurrence relation, faster simplification, what have you) are lost. Those properties can be useful even when expansion is possible, and they might be very difficult and expensive to ascertain or recognize with the polynomials expanded. No, it is good to have unexpanded HermiteH, so that SumTools, invlaplace, DEtools, and many more bits of Maple can also make good use of them.

I don't think that your sin() example is much good, either. It's not even a true characterization. Consider sin(2*Pi/15). Maple could compute that as being -1/8*2^(1/2)*(5-5^(1/2))^(1/2)+1/8*3^(1/2)+1/8*5^(1/2)*3^(1/2). But it doesn't do that by default. The trig theory that Maple knows can often be used to good effect with it in the sin form. And that's one reason why Maple has `convert`, so that one may have choice. acer
There are routines in Maple which have knowledge of how to manipulate and work with certain classes of polynomial. By that I mean specialized knowledge, which uses theory and is more clever or efficient than what one would get with just the explicit expanded form. In the explicit form, these routines would have no way to immediately recognize the polynomial's special nature. The form HermiteH(n,x) can be regarded as a placeholder for the polynomial, even when n is already given a particular value.

Some purposes can use -- or even require -- the expanded, explicit form. And so the help page for HermiteH gives an example of how to obtain it from the placeholder form. An alternative syntax that might appeal more could be for the HermiteH routine to have an optional 'expanded'=truefalse parameter. It's important to keep in mind that there are usually many more reasons for Maple's behaviour than just the functioning of one's own example at hand. acer
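As a sketch of moving between the two forms (this is the same simplify trick that the HermiteH help page shows, and that the R procedure elsewhere in this thread uses):

> HermiteH(3, x);

                                HermiteH(3, x)

> simplify(HermiteH(3, x), 'HermiteH');

                                     3
                                  8 x  - 12 x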
To enter a new line of code without causing the whole "execution group" to execute, use Shift-Enter instead of Enter.

I suspect that what you've been referred to, for Units entry, is one of the Units palettes. The palettes are in a collapsible left-pane. You can customize which palettes are shown using the top-menu View -> Palettes -> Arrange Palettes. The "Units (SI)" palette contains various SI system units as well as a generic Unit() call, and is an easy way to enter units with 2D Math input typesetting. You can also type in calls like Unit('m') and use the context menu to convert to 2D Input. Or you can type the word Unit while in 2D Input mode, invoke command-completion, and select the generic call, which should be the first item in the drop-down list that appears.

In Maple, units have a multiplicative quality. To attach a unit to an expression pretty much means multiplying it by the appropriate Unit() call. acer
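The multiplicative quality can be seen directly in 1D input (a sketch; this assumes the Units:-Standard environment is loaded, which makes arithmetic combine compatible units automatically):

> with(Units:-Standard):
> 3*Unit('m') + 200*Unit('cm');

which should combine to 5 metres. Attaching a unit really is just multiplication by a Unit() call.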
To enter a new line of code, without causing the whole "execution group" to execute, use Shift-Enter instead of Enter. I suspect that what you've been referred to, for Units entry, is one of the Units palettes. The palettes are in a collapsible left-pane. You can customize which palettes are seen using the top-menu View -> Palettes -> Arrange Palettes. The "Units (SI)" palette contains various SI system units as well as a generic Unit() call. This is an easy way to enter units with 2D Math input typesetting. You can also type in calls like Unit('m') and with the context menu convert to 2D Input. Or you can do command-completion on the word Unit typed while in 2D Input mode and select the generic call which should be the first item in the drop-down list that appears. In Maple, units have a multiplicative quality. To attach a unit to an expression pretty much means multiplying it by the appropriate Unit() call. acer
Sorry, I inadvertently posted a reply to your message as a reply to the thread parent. Please see below. acer
R:=(n,x)->rem(simplify(HermiteH(n,x),'HermiteH'), x^2-1, x);

R(5,x);
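If I've expanded correctly, HermiteH(5,x) = 32*x^5 - 160*x^3 + 120*x, so reducing modulo x^2-1 (i.e. replacing each x^2 by 1) should give (32-160+120)*x = -8*x for the call above.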
acer
I can't foresee such a basic and longstanding aspect of Maple's evaluation behaviour changing. It doesn't seem like a bug to me, but then I'm used to it. I appreciate the fine level of control it gives. Note that a "full eval" like eval(x) is not always necessary. E.g.,
> f:=proc()
> local x,y,z;
> x:=y^2; y:=z; z:=3;
> eval(x,1),eval(x,2),eval(x,3);
> end proc:
> f();

                                    2   2
                                   y , z , 9
A partial eval can allow you to return a value such as an unevaluated function call. If a full eval were always done then we wouldn't be able to force less than full evaluation. The only locals that I can think of offhand which get evaluated fully within a proc are table members, in the case that they have been passed (or supplied via lexical scoping) to another inner procedure. For example,
> f:=proc()
> local x,y,z;
> proc()
> x[1]:=y[1]^2; y[1]:=z[1]; z[1]:=3;
> x[1];
> end proc();
> end proc:
> f();

                                       9
Presumably that's because tables have last_name_eval and otherwise usual 1-level eval of locals used as parameters would only give the table name and not even the assigned value. It's as if this exception was coded precisely so that programmers wouldn't have to keep using eval(). (It's a bit weird though, because wouldn't an automatic 2-level eval have also solved the problem without having to wreck the fine control with a full-eval?) Note that globals declared in a procedure always get full evaluation.
> f:=proc()
> global x,y,z;
> x:=y^2; y:=z; z:=3;
> x;
> end proc:
> f();

                                       9
Using globals instead of locals can thus make a procedure less efficient. acer
The existing `convert/base` code in Maple 11, without edits, can also be used to do some conversions to binary significantly faster than is done using Maple 11's `convert/binary`.
Q := proc(n)
parse(cat("11.",
          StringTools:-Reverse(
             StringTools:-Select(
                StringTools:-IsBinaryDigit,
                convert(convert(trunc(evalf[trunc(n/log[2](10.0))+2](
                                         Pi*2^(n-2))),
                                base,2)[1..-3],
                        string))))):
end proc:
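As a quick sanity check of Q: the binary expansion of Pi begins 11.001001000011111101..., so a small call should reproduce exactly those digits (shown here assuming the default pretty-printing):

> Q(20);

                          11.001001000011111101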
On my 64bit Linux machine, the performance crossover between Doug's modified `convert/binary` and `Q` above looks to be about N=25000. Repeat the examples below out of order, or run them separately, so that remembered results don't skew the timings. Use the `convert/binary/fraction` routine and its friends from the worksheet. As N gets to 400000 the routine Q gets about 10 times faster.
N := 25000:

gc():
st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):
sol2:=convert(evalf[trunc(N/log[2](10.0))+2](Pi),binary,N):
time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                                                                                
gc():
st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):
sol1 := Q(N):
time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
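The working precision trunc(n/log[2](10.0))+2 used inside Q comes from the fact that each decimal digit carries log[2](10), or about 3.32, binary digits. A quick check of the count, for the N above:

> trunc(25000/log[2](10.0)) + 2;

                                  7527

So about 7527 decimal digits should suffice for 25000 binary digits.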
If my sums are right, then one doesn't need more than trunc(N/log[2](10.0))+2 decimal digits in order to be able to obtain N binary digits. The crossover point between Q and the regular system `convert/binary` routines in Maple 11 is about N=200. I didn't look at improving the routines in the worksheet. Perhaps if one could get the address of the DAG of the floating-point number in Maple, offset it past the DAG header, and copy it byte for byte into a hardware datatype Array, then that could be used with hard-coded lookup tables. The lookup tables might contain hardware datatype Arrays as well. acer
I couldn't get the Document that you referenced above (which does performance comparisons) to run as quickly as its printed timings suggest. I also saw what might be a mistake in that Document. You claim that it's OK to replace evalf[n](Pi) with evalf(Pi) in key spots. But there are no special evaluation rules in action here, and the `convert/binary` (and related) procedures would just see 10 digits of Pi in that case, since Digits was not set at the top-level. Are you sure that the methods you compare are really all doing the same thing? (DJ Keenan made some comments about his routines not needing Digits to be set at a higher level in order that a variant of his modifications respect the optional precision parameter of `convert/binary`. But it must surely still matter to what accuracy the inbound float approximation of Pi is made.) Perhaps as much accuracy as is provided by evalf[n](Pi) would not be required. In order to have `convert/binary` produce n binary digits, maybe something like evalf[trunc(n/log[2](10.0))+2](Pi) would suffice.

Also, it's not reasonable to compare methods which might compute evalf of some of the same numbers (as they work internally, say) in the same session like that. More accurate would be to clear evalf's remember tables between tests. Better still is to also measure bytes-alloc increases and to place each method in its own testing file.

The following seems to be a big improvement on the first method that I suggested. And it doesn't require edits to any of the existing `convert` routines.
Q := proc(n)
op(ListTools:-Reverse(convert(trunc(evalf[trunc(n/log[2](10.0))+2](Pi*2^(n-2))),base,2))):
end proc:
If it really is a big improvement, then one might wonder why, exactly. One reason is that Maple can scale a large floating-point number up by a power of 2 pretty quickly, presumably helped by its use of GMP. Another reason is that the irem techniques already used in `convert/base` aren't so bad. It seems to me that they act in the same general way as DJ Keenan's code (but used above with the smallest possible base and hence no lookup table benefits). Consider the number
        trunc(evalf[trunc(n/log[2](10.0))+2](Pi*2^(n-2)))
Maybe one could get the address of its DAG, offset by enough to get the address of its data portion (the number itself), and copy it into a hardware datatype Vector. Entries of that Vector might then be taken and used in conjunction with a hard-coded lookup table. The copying might be done using an external call to a BLAS routine like scopy or dcopy.

To compute comparison timings, I grabbed the modified `convert/binary` routines directly from the original worksheet. Those do indeed provide a big speedup over the regular Maple 11 `convert/binary` routines. But for your particular task, it looks to me as if the simple routine Q above (which uses different, unchanged internal routines like `convert/base`) is still about 10 times faster and allocates about 10 times less memory when n=1000000. I'm just wondering whether I compared Q against the most efficient alternative provided by any incarnation of `convert/binary` and friends. acer
Ah, yes, sorry about that.
split := proc(z::polynom(anything,[x,y]))
  local C,t;
  C:=[coeffs(z,[x,y],'t')];
  add(C[i]*INT(degree(t[i],x),degree(t[i],y)),i=1..nops(C));
end proc:
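For instance (assuming INT is the intended inert placeholder, and noting that the term order returned by coeffs is not guaranteed), split tags each coefficient with the x- and y-degrees of its monomial:

split(3*x^2*y + 5*x - 7);

which should produce 3*INT(2,1) + 5*INT(1,0) - 7*INT(0,0), up to the order of terms.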
acer
Just replace the seq() with add(). acer