Maple 2015 Questions and Posts

These are Posts and Questions associated with the product, Maple 2015


I must thank @Scot Gould for having asked this question more than a year ago and thus, without meaning to, having been the driving force behind this post.


There is an enormous literature about Monte-Carlo integration (MCI for short) and you might legitimately ask "Why another one?".



A personal experience.
Maybe if I tell you about my experience you will better understand why I believe that something is missing in the traditional courses and textbooks, even the most renowned ones.

For several years, I led training seminars in statistics for engineers working in the field of numerical simulation.
At some point I always came to speak about MCI and (as anyone does today) I introduced the subject by presenting the estimation of the area of a disk by randomly picking points in its circumscribed square and estimating its area from the proportion of points it contained.




Once done I switched (still as anybody does) to the Monte-Carlo summation formula (see Wikipedia for instance).

One day an attendee asked me this question: "Why do you say that this [1D] summation formula is the same thing as the [2D] counting of points in the [circle within a box] example you have just presented?"
I have to say I was surprised by this question for it seemed to me quite evident that these two ways of assessing the area were nothing but two different points of view of, roughly, the same thing.
So I gave a quick, mostly informal, explanation (that I am not proud of) and, because the clock was running, I kept teaching the class.

But this question really puzzled me and I looked for a simple but rigorous way to prove that these two approaches were (were they?) equivalent, at least in some reasonable sense.
The thing is that simple explanations based on counting are not enough: you have to resort to certain probabilistic arguments to get out of it. Indeed, sticking to the counting approach leads to the more reasonable position that these two approaches are not equivalent.

The end of the story is that I spent more time on these two approaches to MCI during the trainings that followed,
saying that, yes, the summation formula seems to be the reference today, but that the old counting strategy still has some advantages and can even give access to information the summation formula cannot.



About this post.
This post focuses mainly on what I call the Historical viewpoint (counting points) and aims, in its first part, to answer the question "Is this point of view equivalent or not to the Modern (summation formula) one?" (And if it is, in what sense is it so?).

Let me illustrate this with the example @Scot Gould presented in his question. The brown bold curve on the left figure is the graph of the function func(x) (whose expression has no interest here) and the brown area represents the area we want to assess using MCI.
In the Historical approach I picked uniformly at random N = 100 points within the gray box (of area 2.42), found 26 of them were in the brown region, and said the area of the latter is 2.42 x 26/100 = 0.6292. The Modern approach consists in picking uniformly N random points in the range x = [0.8, 3] and using the blue formula to get an estimation of this same area (Lbox is the x-length of the gray box, here equal to 2.2).
The question is: am I assessing the same thing when I apply either method? And, perhaps more importantly, do my estimators have the same properties?
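To make the two recipes concrete, here is a minimal Maple sketch of both estimators. The integrand func below is a placeholder of mine (the real one is in the worksheet); only the box dimensions match the figure:

restart: with(Statistics):
func := x -> sin(x)^2/x:          # hypothetical integrand, bounded by H on [a, b]
a, b, H := 0.8, 3.0, 1.1:         # gray box: Lbox = b - a = 2.2, area = 2.42
N := 100:
X := Sample(Uniform(a, b), N):
Y := Sample(Uniform(0, H), N):
# Historical estimator: box area times the proportion of points under the curve
K := add(`if`(Y[i] <= func(X[i]), 1, 0), i = 1 .. N):
area_historical := (b - a)*H*K/N;
# Modern estimator: Lbox times the average of func over the same abscissas
area_modern := (b - a)*add(func(X[i]), i = 1 .. N)/N;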




And here appears a first problem:

  • Whatever the number of times you repeat the Historical sampling, even with different points, you will always get a number of points in the brown region between 0 and N inclusive, meaning that if S is the area of the gray box, the estimation of the brown area is always one of the numbers {0, S/N, 2S/N, ..., S}.
  • By contrast, repetitions of the Modern approach lead to a continuum of values for this brown area.
  • So, saying the two approaches might be equivalent amounts to saying that a discrete set is equivalent to an uncountable one.

If we remain at the elementary counting level, the Historical and Modern viewpoints are therefore not equivalent.



Towards a probabilistic model of the Historical Process:
This goes against everything you may have heard or read: so, are the authors of these statements all wrong?
Yes, from a strict Historical point of view; but fortunately not if we interpret the Historical approach in a looser, probabilistic manner (although this still needs to be considered carefully, as shown in the main worksheet).

This probabilistic manner relies upon a probabilistic model of the Historical process, where the event "K points out of N belong to the brown area" is to be interpreted as the realization of a very special random variable named Poisson-Binomial (do not worry if you have never heard of it: many statisticians have not either).
In a few words, whereas a Binomial random variable is the sum of several independent and identically distributed Bernoulli random variables, a Poisson-Binomial random variable is the sum of several independent but not necessarily identically distributed Bernoulli random variables. Thus the Poisson-Binomial distribution generalizes the Binomial one.
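For concreteness, here is a minimal sketch of how one could draw from a Poisson-Binomial distribution (the probability list is a hypothetical example):

restart: with(Statistics):
# one Poisson-Binomial draw: the sum of independent Bernoulli draws whose
# success probabilities p[i] are allowed to differ from one another
PBdraw := proc(p::list)
  local U, i;
  U := Sample(Uniform(0, 1), nops(p));
  add(`if`(U[i] <= p[i], 1, 0), i = 1 .. nops(p));
end proc:
PBdraw([0.1, 0.5, 0.9, 0.3]);   # an integer between 0 and 4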


Using the properties of Poisson-Binomial random variables one can prove in a rigorous way that the expectations of the area estimators for the Historical and Modern approaches are identical.
So, given this "trick", the two methods are thus equivalent, are they not? And that settles it.
In fact no: the matter of equivalence still remains.



When uncertainty enters the picture.
Generally we cannot satisfy ourselves with the estimation of the area alone: we would like information about the reliability of this estimation. For instance, if I find this value is 0.6292, am I ready to bet my salary that I am right? Of course not, unless I am insane; but things would change if I were able to say, for instance, "I am 95% sure that the true value of the area is between 0.6 and 0.67".

For the Historical viewpoint the Poisson-Binomial model makes it possible to assess an uncertainty (not the uncertainty!) of the area estimation. But things are subtle, because there are different ways to compute an uncertainty:

  • At the elementary level the height of the gray box is an essential parameter, but it does not necessarily give a good estimation of this uncertainty (one can easily make the latter arbitrarily close to 0!).
  • To get a reliable uncertainty estimation, an appeal to a branch of probability theory related to Extreme Value Theory (EVT for short) is necessary (all of this is explained in the attached worksheet).


For the Modern point of view it is enough to observe that there is no concept of "box height", and that it is therefore impossible to assess any uncertainty. Question: "If it is so, how can (all the) MCI procedures return an uncertainty value?"
The answer is simple: they consider a virtual encapsulating box whose height is the maximum of the func(xi). This trick enables providing an uncertainty, but it is a non-conservative estimation (an over-optimistic one if you prefer; in other terms, an estimation we must regard very carefully).

So, in the end, the Historical and Modern approaches are equivalent only if we restrict ourselves to the estimation of the area, but no longer as soon as we are interested in the quality of this estimation.



What does the attached file contain?
The attached file deals at length with the estimation of the estimator uncertainty.
The core theory is named (Right) EndPoint Theory (I found nothing on Wikipedia, nor any easy-to-read papers about this theory, so I more or less arbitrarily decided to refer to it this way). Basically it enables assessing the (usually right) end-point of a distribution known only through (right) censored data.
The simplest example is that of a New York pedestrian who looks at taxi numbers and asks himself how to assess the highest number a taxi can have. Here we know this number exists (meaning that some related distribution is bounded), but the situation can be more complex if one does not even know whether this distribution is bounded or not (in which case one seeks a right end-point whose probability of being exceeded is less than some small value).
A conservative, and thus reliable, uncertainty on the area estimator can only be derived in the framework of the end-point theory.
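As an aside, here is a minimal sketch of the taxi example, using the classical point estimate m*(1 + 1/k) - 1 of the right end-point from k observations with maximum m (the fleet size 1000 is a made-up truth; the worksheet's EndPoint machinery is far more general):

restart:
r := rand(1 .. 1000):                     # hypothetical fleet of 1000 taxis
obs := [seq(r(), i = 1 .. 20)]:           # 20 observed taxi numbers
m := max(obs):
Nhat := evalf(m*(1 + 1/nops(obs)) - 1);   # estimate of the highest number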

Once the basics of this theory are understood, it becomes relatively simple to enhance the Historical approach to get estimators with smaller uncertainties.
I present different ways to do this: one (even if derived otherwise) is named Importance Sampling, and the other leads in a straightforward way to algorithms which are quite close to some used in the CUBA library (partially accessible through evalf/Int).
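To fix ideas, here is a minimal importance-sampling sketch; the integrand and the proposal density are hypothetical choices of mine, not the worksheet's:

restart: with(Statistics):
f := x -> exp(-x^2):                 # hypothetical integrand on [0, 3]
b := 3.0:
g    := x -> 2*(b - x)/b^2:          # proposal density favoring small x
Ginv := u -> b*(1 - sqrt(1 - u)):    # inverse of the CDF of g
N := 10^4:
U := Sample(Uniform(0, 1), N):
# importance-sampling estimate of Int(f(x), x = 0 .. b): average of f/g
# over draws from g (obtained by applying Ginv to uniform draws)
est := add(f(Ginv(U[i]))/g(Ginv(U[i])), i = 1 .. N)/N;
evalf(Int(exp(-x^2), x = 0 .. 3));   # reference value, about 0.886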

The last important, if not fundamental, concept discussed in this article concerns the distinction between dispersion interval and confidence interval, concepts that are unfortunately not properly distinguished due to the imprecision of the English language (I apologize to native English speakers for these somewhat harsh words, but this is the reality here).
Some references are provided in attached (main) worksheet, but please, if you don't want to end up even more confused than you were before, avoid Wikipedia.



To sum up.
This note is a non-orthodox presentation of MCI centered around the Historical viewpoint which, I am convinced, deserves a little more attention than the disk-in-the-square picture commonly displayed in MCI courses and textbooks.
And I am all the more convinced of it as this old-fashioned (antiquated?) approach is an open door to some high-level probability theories such as the EndPoint and EVT ones.

Of course this post is not an advocacy against the Modern approach, and it does not mean that you have to ignore classical texts, or that the Law of Large Numbers (LLN) and the Central Limit Theorem are useless in MCI.




Maple, but not just Maple.
Part of the attached worksheet presents results I got with R (a programming language for statistical computing and data visualization), simply because Maple 2015 (and this is still true for Maple 2025) does not contain the functions I needed.
For instance, R implements the Cuba library in a far more complete way than Maple does (I give a critical discussion of the way Maple does it), enabling, for instance, changing the random seed.

Main worksheet (I apologize in advance for typos that could remain in the texts)
A_note_on_Monte-Carlo_Integration.mw

The main worksheet refers to this one
How_does_the_variance_of_f_impact_the_estimator_dispersion.mw


Extra worksheet: An introduction to Importance Sampling
Importance_Sampling.mw


Under the name of mmcdara (an account unfortunately inaccessible since the major July 2025 MaplePrimes outage, and probably lost forever, God rest his soul) I published, two years ago, a post about the Multivariate Normal Distribution.

The current post continues in the same vein and presents the construction of a few new Multivariate Random Variables (MRV for short) named Multinomial (see for instance this recent question), Dirichlet, Categorical and related compound distributions.
I advise the interested readers to take a quick look at these names on Wikipedia (more specific references are given at the top of the worksheet).

As I explained (in fact as my alter ego did) in Multivariate Normal Distribution, the Statistics package is limited to univariate random variables, and thus implementing MRVs requires a little cunning.
Here is a list of a few problems you face:

  • Whereas the expectation (sometimes named "mean") of a univariate random variable is a number or an expression, the expectation of an MRV is a vector (or a list, an n-tuple, ...) of numbers or expressions.

So far, so good, except that the Mean attribute of Distribution can only be a scalar quantity. So if you want to assign a vector to Mean you have to encode it some way and do something like Decode(Mean(My_MRV)) to get the expectation in vector form (a sketch of this trick follows this list).
 

  • The Variance case is even more tricky because MRV variances are matrices.
     
  • Beyond this, some very useful attributes like ParentName and Parameters cannot be instantiated in the definition of user random variables (whether they are MRVs or not), implying here again a bit of gymnastics to, if not really instantiate these attributes, at least be able to retrieve them when needed.
     
  • Finally, last but not least, RandomSample is not appropriate for sampling MRVs, for reasons which are explained in the attached worksheet.
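As an illustration of the first bullet, here is a sketch of one possible encoding of a vector-valued Mean; the names MRV_MEAN and Decode are mine, and the worksheet's actual implementation differs in its details:

restart: with(Statistics):
# store the mean vector as an inert function call in the scalar Mean slot
T := Distribution(PDF = (t -> piecewise(0 <= t and t <= 1, 1, 0)),
                  Mean = 'MRV_MEAN'(mu[1], mu[2])):
My_MRV := RandomVariable(T):
Decode := m -> [op(m)]:     # unpack the coded attribute into a list
Decode(Mean(My_MRV));       # [mu[1], mu[2]]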


The file below contains more than 20 procedures enabling the definition of the studied MRVs, the decoding of the coded attributes, visualization (which is not that immediate because the supports of the MRVs I focus on are simplexes), parameter estimation against empirical observations (frequentist and Bayesian points of view), and so on.

Multinomial_Dirichlet_and_so_on.mw

Nevertheless, there is still a lot missing, but at some point I believe we need to decide that the work is over.

 


I'm struggling to construct a statistical Distribution involving Product.
This is likely a question of delayed evaluation, but I'm not able to fix it.
Can you please look at this Product_error.mw worksheet and help me fix the issue?

Thanks in advance

Hello everyone,
I hope this message finds you well. I am trying to plot a function f(x, y) and overlay its contour on a quarter ellipse using Maple 2015. However, I’ve encountered some difficulties and have not been successful so far. I would greatly appreciate any assistance in resolving this issue. Thank you!

Plotting in 2D

restart: with(plots):
aa := 4: bb := 2:
f := -(x^2/aa^2 + y^2/bb^2 - 1)*(aa^2*bb^2/(aa^2 + bb^2)):
plot3d(f, x = 0 .. aa/2, y = 0 .. bb/2,
       region = (x, y) -> (2*x/aa)^2 + (2*y/bb)^2 <= 1,
       axes = boxed, style = patchcontour, grid = [50, 50],
       orientation = [-120, 45], shading = zhue,
       title = "f(x,y) over quarter ellipse domain");

Contour plotting
xrange := 0 .. aa/2: yrange := 0 .. bb/2:
nx := 100: ny := 100:
dx := (rhs(xrange) - lhs(xrange))/(nx - 1): dy := (rhs(yrange) - lhs(yrange))/(ny - 1):
Z := Matrix(nx, ny, (i, j) -> local x, y, inside;
       x := lhs(xrange) + (i - 1)*dx;
       y := lhs(yrange) + (j - 1)*dy;
       inside := (2*x/aa)^2 + (2*y/bb)^2 <= 1;
       if inside then f(x, y) else NULL end if):
contourplot(Z, xrange, yrange, contours = 15, filled = true,
            coloring = [blue, green, yellow, red], axes = boxed,
            title = "Contour plot over quarter ellipse", grid = [nx, ny]);
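For what it's worth, here is a minimal sketch of one possible approach: mask the points outside the quarter ellipse with undefined, so that plot3d and contourplot simply skip them (a sketch under that assumption, not a guaranteed fix for Maple 2015's filled contours):

restart: with(plots):
aa := 4: bb := 2:
f := -(x^2/aa^2 + y^2/bb^2 - 1)*(aa^2*bb^2/(aa^2 + bb^2)):
# replace f by undefined outside the quarter ellipse
g := piecewise((2*x/aa)^2 + (2*y/bb)^2 <= 1, f, undefined):
plot3d(g, x = 0 .. aa/2, y = 0 .. bb/2, style = patchcontour,
       grid = [50, 50], axes = boxed, shading = zhue);
contourplot(g, x = 0 .. aa/2, y = 0 .. bb/2, contours = 15,
            filled = true, axes = boxed);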

Hi

I hope everyone is fine and doing well. I want to construct the set of monomials {p[0], p[1], ..., p[m-1]} for any value of m. For example, for m = 6 the monomials are defined as:

p[0]:=1;

p[1]:=x;

p[2]:=y;

p[3]:=x^2;

p[4]:=x*y;

p[5]:=y^2;

and similarly for m=10 the monomials should be given as:

p[0]:=1;

p[1]:=x;

p[2]:=y;

p[3]:=x^2;

p[4]:=x*y;

p[5]:=y^2;

p[6]:=x^3;

p[7]:=x^2*y;

p[8]:=x*y^2;

p[9]:=y^3;
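A minimal sketch of one way to build such a list for any m, assuming the graded order shown above (degree 0, then degree 1, degree 2, ...):

monomials := proc(m::posint)
  local L, d, k;
  L := [];
  d := 0;
  while nops(L) < m do     # append the degree-d monomials x^(d-k)*y^k
    L := [op(L), seq(x^(d - k)*y^k, k = 0 .. d)];
    d := d + 1;
  end do;
  L[1 .. m];               # keep exactly the first m of them
end proc:
p := monomials(6);   # [1, x, y, x^2, x*y, y^2]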

I am waiting for your positive response. Please take care and thanks


A classical probability result says that if G_1 and G_2 are two independent Gamma random variables with the same scale parameter (let's say 1 to simplify) and shape parameters a_1 and a_2 respectively, then G_k / (G_1 + G_2) is a Beta random variable with parameters (a_k, a_{3-k}) (k = 1..2).

In the attached file it is shown that the (Maple 2015) function Statistics:-PDF fails in computing the PDF of G_k / (G_1 + G_2).
Nothing strange here if you observe that even in the extremely simple case Z = X / (X+Y), where X and Y are independent Uniform random variables with support [0, 1), Maple 2015 already fails in computing PDF(Z).

An alternative to Statistics:-PDF is to write explicitly the double integration which defines CDF(Z) (to begin with, and later PDF(G_k / (G_1 + G_2))) and ask Maple to do the integrations.
This approach works for Z but requires helping Maple when X and Y are still independent Uniform random variables but with respective non-instantiated supports [0, a_1) and [0, a_2).
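For the record, here is a minimal sketch of this double-integration route in the Uniform(0, 1) case; the Heaviside factor encodes the event X/(X+Y) <= z, i.e. (1-z)*X <= z*Y (as said above, Maple 2015 may need extra help to evaluate it):

restart:
# CDF of Z = X/(X+Y) for independent X, Y ~ Uniform(0, 1)
F := int(int(Heaviside(z*y - (1 - z)*x), x = 0 .. 1), y = 0 .. 1)
     assuming 0 < z, z < 1:
F := simplify(F);
fZ := simplify(diff(F, z));   # candidate PDF of Z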

Applying to the Gamma-Gamma case the recipes I introduced in the Uniform-Uniform case does not give any result, except in the very particular case where the shape parameters a_1 and a_2 are (strictly) positive integers.

All the details are in X_over_(X_plus_Y).mw

Do you have any idea how to prove with Maple the probability result mentioned at the head of this question?

PS: The "classical method" to compute PDF(Z) consists in changing the integration variables < x_1, x_2 > into < x_1 = v_1*v_2, x_2 = v_2*(1 - v_1) > (see for instance Stack Exchange)... but even after having done so I still cannot get the desired result.

Thanks in advance.

 

As I was visually comparing the first terms of a priori identical sums produced by add, I was surprised to find them different.
So I suspected some error in what I had done, until I realized that add had randomly permuted the terms.
Each term is of the form (R + P)^2 where R is a random number and P a polynomial.

This behaviour is illustrated in the worksheet add_changes_ranks.mw and appears only when random numbers are used (provided the seed is not forced to some constant value).
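Here is a reduced, hypothetical version of the experiment (not the worksheet's code). One possible explanation, offered with due caution: a Maple sum is displayed in an internal, session-dependent order rather than in the order in which add created the terms, and random float coefficients differ from one session to the next.

restart:
r := rand(1 .. 10^6):
# four terms of the form (R + P)^2, with random R and polynomial P
S := add((r()/10.^6 + x^k)^2, k = 1 .. 4);
# rerunning the whole worksheet gives a mathematically identical S whose
# terms may nevertheless be printed in a different order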

Has anyone ever observed that, or does anyone have an idea of what happens here (maybe this behaviour no longer occurs in recent versions)?

Thanks in advance

I use  Maple 2015 and I try to understand how the simplification rules apply in the case of the expression 

f := n -> (ln(x)^n)^(1/n)

Here n is assumed to be strictly positive, and I consider only the cases "n is an integer" or "1/n is an integer".

All the questions are written in orange in the attached file and summarized below, followed by a small reproduction sketch:

  1. Why does simplify(f(2)) simplify f(2) whereas simplify(f(n)) doesn't simplify f(n) for any integer n > 2?
     
  2. Why does simplify(f(1/n)) simplify f(1/n)?
     
  3. Why does simplify(f(3)) with ad hoc assumptions return a simplified expression of some form whereas, for any integer n > 3, simplify(f(n)) with (the same corresponding) ad hoc assumptions returns a simplified expression of a completely different form than with n = 3?
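For reference, a minimal reproduction sketch (the assumptions below are my guesses; the exact ad hoc assumptions are those written in the worksheet):

restart:
f := n -> (ln(x)^n)^(1/n):
simplify(f(2));                        # question 1: this one simplifies...
simplify(f(n)) assuming n::posint;     # ...whereas this one doesn't
simplify(f(1/n)) assuming n::posint;   # question 2: this one simplifies
simplify(f(3)) assuming x > 1;         # question 3: compare the form obtained here...
simplify(f(5)) assuming x > 1;         # ...with the one obtained for larger n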

Can you please have a look at it and give me some clarifications?
Simplification_rules.mw

Thanks in advance

As I was numerically investigating this recent question, I incidentally discovered a strange behaviour of Maple 2015 (which maybe still exists in more recent versions?).

The attached worksheet presents an erratic behaviour (plus a remanence issue, because saving it and opening it again changes the displays).
Note that this strange behaviour seems to occur only when tickmarks use the atomic name `#mo("2")`.

display_issue.mw

Here is a pdf print of this same worksheet: as I hope you will see (for I don't know what you will get when opening the attached worksheet), its content differs from the worksheet's.

display_issue.pdf

Here are 3 screen captures which show what MY worksheet looks like

PAGE 1


PAGE 2
There is a typo in the comment below: read "void" instead of "coid", sorry for the mistake.


PAGE 3


Is this a Maple 2015 issue which has been fixed in later versions?
Is there a way to fix these issues?

(squircle is the humorous name for the 2D open ball of center 0 and radius 1 in the L^n norm).
The equation of the squircle in the L^n norm writes |x|^n + |y|^n = 1.

The attached file gives the exact values of the areas of squircles in the norms L^2, L^4, L^100, L^1.
Except for n = 2, the results are dramatically poor (evalf/Int gives the same wrong results).

The function a(n) gives the exact expression of the squircle area in the L^n norm.
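For comparison, here is a known closed form for the area enclosed by |x|^n + |y|^n = 1, which may or may not coincide with the worksheet's a(n):

restart:
a := n -> 4*GAMMA(1 + 1/n)^2/GAMMA(1 + 2/n):
simplify(a(2));   # Pi, as expected for the disk
evalf(a(4));      # about 3.708
evalf(a(1));      # 2, the area of the square |x| + |y| = 1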

squircle.mw

I have some cubic and quartic equations with complex coefficients. Maple 2015 is able to solve these and returns the roots as labelled sets, so I can do things like "plot S[1]". I want to vary some parameters in the coefficients and see what happens to the roots.
My problem is that when I log out and then rerun the code, the labels 1, 2, 3, (4) are frequently attached to different roots than they were the first time. This is both unexpected and inconvenient. Is there any way to ensure that the same roots are always given the same labels?

[moderator: see also this Question from 2023]
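A minimal workaround sketch (the cubic below is a made-up example): impose your own reproducible ordering on the roots instead of relying on the order in which they are returned.

restart:
p := z^3 + (1 + I)*z + 2:              # hypothetical cubic with complex coefficients
S := [fsolve(p, z, complex)]:          # all roots, as a list
# sort by real part, then by imaginary part, so the labels are reproducible
less := (u, v) -> evalf(Re(u)) < evalf(Re(v))
        or (evalf(Re(u)) = evalf(Re(v)) and evalf(Im(u)) <= evalf(Im(v))):
S := sort(S, less):
S[1]; S[2]; S[3];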

Can anyone explain to me the reason for the last result?
Thanks in advance

restart:
kernelopts(version);
        Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895

a/n^b;
den := denom(%);      # returns n^b
print(cat(`_`$50));

3/n^2;
den := denom(%);      # returns n^2
print(cat(`_`$50));

1.23/n^1.65;
den := denom(%);      # returns 1
num := numer(%%);     # returns 1.23/n^1.65  <-- the surprising last result
Download What-does-happen-here.mw

Hi!

I am using a procedure to compute the integral of a function by Simpson's rule. My function is defined from a function and a procedure, but I am getting the error "Error, (in w) invalid input: hfun2 expects its 1st argument, t, to be of type numeric, but received (1/10)*i+1/20".

As you can see in the attached file, I have tried several ways to compute the integral, but it always returns the above error. Please, can you help me?

Thanks

forum.mw
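Without the worksheet one can only guess, but this error message usually means that a procedure declared with a numeric first argument is being called with a symbolic argument; the standard fix is to make it return unevaluated on non-numeric input. A minimal sketch (hfun2 is only the name taken from the error message; its body here is hypothetical):

restart:
hfun2 := proc(t)
  # return unevaluated when t is not numeric, so symbolic calls pass through
  if not type(t, numeric) then return 'procname'(t) end if;
  evalf(sin(t)^2);                     # hypothetical numeric body
end proc:
# composite Simpson's rule with n (even) subintervals
Simpson := proc(g, a, b, n::even)
  local h, k;
  h := evalf((b - a)/n);
  evalf(h/3*(g(a) + g(b)
        + 4*add(g(a + (2*k - 1)*h), k = 1 .. n/2)
        + 2*add(g(a + 2*k*h), k = 1 .. n/2 - 1)));
end proc:
Simpson(hfun2, 0, evalf(Pi), 10);      # Int(sin(t)^2, t = 0 .. Pi) is Pi/2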

Hi Dear,

I hope everyone is fine here. In the attached file, I have generated a square matrix "Q" using two-dimensional polynomials. The dimension of the square matrix "Q" depends on the parameters M1 and M2. In my simulation, I sometimes need this matrix with dimensions 1000 by 1000. Using the attached method, it takes a lot of time to compute the two-dimensional polynomials and then the general square matrix "Q". I wanted to write this matrix using proc (procedures); maybe this way I would not need to precompute the polynomials, and it would take less time to compute the square matrix "Q". I know how to generate a matrix using proc when its dimension depends on one parameter. However, here the dimension of matrix "Q" depends on two parameters, M1 and M2, so I am a little bit confused about how to handle them in proc. Please see the attached file and share your useful ideas.

help.mw
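Without the worksheet's polynomial rule, here is only a skeleton: the dimension law n = M1*M2 and the toy entry rule are placeholders to be replaced by the real ones.

restart:
# build Q from two size parameters; 'entry' computes Q[i, j] on demand,
# so no list of polynomials needs to be precomputed and stored
BuildQ := proc(M1::posint, M2::posint, entry::procedure)
  local n;
  n := M1*M2;                          # hypothetical dimension law, adapt as needed
  Matrix(n, n, (i, j) -> entry(i, j, M1, M2));
end proc:
# usage with a toy entry rule
Q := BuildQ(2, 3, (i, j, M1, M2) -> x^(irem(i - 1, M1))*y^(irem(j - 1, M2)));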

Thanks in advance

Hi
I hope you are doing well. I have plotted (in the attached file) the contour plot of a function and its density plot; both should have the same behavior but they have different appearances (the error looks like one plot is rotated with respect to the other, or a rotation needs to be applied). I don't know why this happens, because this code works well for other solutions. Kindly have a look and fix the issue. I shall be waiting for your positive response. Please take care.
Help.mw
