I think that is something that should be changed in `is`. In any case, though, monadic `signum` should generally be avoided, because `signum(0)` is commonly unpredictable; use `signum(0,x,y)`, perhaps.
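A quick Maple illustration of the distinction (as I understand the calling sequence, the third argument of `signum(0, x, v)` specifies the value to assume for `signum(0)`):

```
signum(0);        # may stay unevaluated, or follow the environment variable _Envsignum0
signum(0, x, 1);  # signum of x, with signum(0) taken to be 1
signum(0, x, y);  # signum of x, with signum(0) taken to be y
```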

Er, yup, I meant any elementary function that has an elementary antiderivative.
As for Mathematica, I just tried the example you gave on Mathematica's online Integrator, and that too returned something involving elliptic functions. I do not see this as proof that Mathematica does not have the full Risch algorithm though. Couldn't it just mean that the heuristic approaches took priority, and since they got an answer, nothing further was tried? Mathematica's Integrator can integrate exp(arcsin(x)).
As for other systems, I asked Joel Moses a long time ago, and he said that Macsyma did have the algorithm fully implemented. (Moses was responsible for Macsyma's symbolic integration routines.)

`int(exp(arcsin(x)),x);`

returns `int(exp(arcsin(x)),x)` unevaluated. So Maple does not integrate it.
I would be interested in an explanation. My (admittedly very poor) understanding is that (i) Maple uses the Risch algorithm for integrating, (ii) this algorithm can integrate any elementary function, and (iii) exp(arcsin(x)) is an elementary function—and thus exp(arcsin(x)) should be integrated by Maple.

Joe, thank you for the detailed analysis. For me, the conclusion is clear: do not use `overload`. The semantics will eventually lead to hard-to-track bugs. (Maybe someday Maple could get a replacement, say `Overload`, that worked with cleaner semantics.)

Wikipedia has a good description of algorithms for pi(n).
From a quick reading, it appears that Mathematica's approach might be suboptimal. It would be nice if Maplesoft would implement a viable general `pi`—perhaps following Lehmer, rather than Legendre.

I agree with what gulliet said: each of Maple and Mathematica has strengths and weaknesses, and each will best the other on specific computations.
What JacquesC said might be true in general (and is certainly true for ODEs), but it is the opposite of the truth in the example you give of `ithprime`. From the code, `ithprime` uses a table for numbers up to 22300000; after that, it simply tests every second integer to see if it is prime. Such a simple test is obviously too slow for practical use; so Maple does not have a viable ithprime procedure for numbers much outside the table.
Mathematica, in contrast, uses a general procedure based on finding roots and `numtheory:-pi`. This requires a fast version of `pi`, which exists (see, for example, Bressoud & Wagon: it is due to Legendre) and is implemented in Mathematica, but not in Maple. So this is all an embarrassment for Maple.
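The root-finding idea can be sketched in Maple by bisecting on `numtheory:-pi`: the n-th prime is the smallest m with pi(m) = n. This is only an illustration (`nthprime_via_pi` is a made-up name, and it inherits the speed of `numtheory:-pi`, which is the whole problem here), not Lehmer's method:

```
# Sketch: the n-th prime is the smallest m with numtheory:-pi(m) >= n,
# found by bisection under a crude (Rosser-style) upper bound.
nthprime_via_pi := proc(n::posint)
  local lo, hi, mid;
  lo := 2;
  hi := max(13, ceil(evalf(n*(ln(n) + ln(ln(n+2)) + 1))));
  while lo < hi do
    mid := iquo(lo + hi, 2);
    if numtheory:-pi(mid) < n then lo := mid + 1 else hi := mid end if;
  end do;
  lo
end proc:
nthprime_via_pi(10);  # the 10th prime, 29
```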
Returning to the general issue of comparison, I do not know of anyone who has done this for Maple 11 and Mathematica 6. It would take a large effort. For earlier versions of the two, the strong general consensus seemed to be that Maple was more reliable. I do not know about speed, but reliability is obviously more important.

I had assumed that the set was needed for something. If only the count is needed, one approach is the following.
`count:= add(piecewise(nops({'roll()'$8})=6, 1, 0), i=1..k):`

```
roll:= rand(1..6):
k:=100000:
s:= select(x->nops(x)=6, ['{ 'roll()'$8 }'$k]):
count:= nops(s):
printf("count=%d, prob= %d/%d = %f\n",count,count, k, evalf(count/k));
```

```
roll:= rand(1..6):
k:=100000:
f:= x-> `if`(nops(x)=6, x, NULL):
s:= seq(f({'roll()'$8}), i=1..k):
count:= nops([s]):
printf("count=%d, prob= %d/%d = %f\n",count,count, k, evalf(count/k));
```
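As a sanity check on the simulated frequency, the exact probability that 8 rolls show all 6 faces follows from inclusion-exclusion (counting surjections from the 8 rolls onto the 6 faces):

```
# exact P(all six faces appear in 8 rolls of a fair die)
p_exact := add((-1)^j*binomial(6,j)*(6-j)^8, j=0..6)/6^8;
evalf(p_exact);  # about 0.114
```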

Thanks for the pointer to the Chaitin article; I hadn't seen it before. The example of Ramanujan is particularly important, and has been given by many people. Ramanujan was arguably one of the greatest mathematicians who ever lived, but it seems that even the concept of proof was beyond his grasp. His intuition (or the Indian goddess with whom he conferred) virtually never led him astray. Yet nowadays, Ramanujan would probably have that intuition essentially beaten out of him, in the pursuit of rigor.
But rigor is usually critical to get reliable results. As I recall, physicists wasted ~15 years in the study of superconductivity, because they erroneously thought that every function was equal to its Taylor series. More generally, as complexity increases, rigor is important to keep error at bay. There is also a need to lessen fraud, which is vastly easier without rigor. The rise of the modern journal system, with its publish-or-perish impetus (enticing those who lack the morals), makes this even more important.
The tension, though, is not really between mathematics and physics. The tension is between rigor and intuition. This naturally tends to arise between math and physics, but not always. Ramanujan is an example wholly within math. An example wholly within physics is arguably Kantor's “Information Mechanics”. Kantor's equations agree with experiment to within a few ppm, but are about as well-formalized as Ramanujan's works, and largely ignored for that reason (though Wheeler later somewhat accepted Kantor's central “it from bit” premise).
The big question is how to have both intuition and rigor—both are clearly needed. I suspect that intuitionists and rigorists need to work more in teams. Would mathematicians have brought rigor to Dirac delta functions without (implicit) pressure from physicists?

Thanks for this! I used to trade bond options (in the City of London); so it is especially interesting for me.

Scott, the question is asking about `(a*b)/c+((a-d)/(b*e))*I)`, which has unmatched parentheses. Hence there is a typo in the book. William Fish was just taking a reasonable guess at where the typo was.
William, to find the inverse in standard form, try this.
`(a*b)/(c+((a-d)/(b*e)*I));`

`evalc(1/%);`
