MaplePrimes Commons General Technical Discussions

The primary forum for technical discussions.


We've reached quite a rhythm with Maple Flow - we update frequently, we add lots of improvements and we move fast.

What does this mean for you? It means that the feedback loop between development, the user experience and course correction has a fast time constant.

Without you being loud and vociferous, the feedback loop breaks. So don't be shy - tell us what you want!

The new 2025.2 update builds on the theme of connectivity with two popular tools - Excel and Python. On top of that, we also have many other features and fixes that you've asked for.

Earlier versions of Maple Flow let you 

With the 2025.2 update, you can now copy and paste data from Excel into a Flow worksheet.

To be blunt, this type of cross-application copy-paste behaviour is a no-brainer. It's such a natural workflow.

We've increasingly found that Python is now being used to script the interaction and data flow between different engineering tools. With Maple Flow 2025.2, you can now execute Maple Flow worksheets from a Python script.

From Python, you can change and export any parameters and results defined in the worksheet

This gives me the dopamine hit of watching CPU utilization spike in the Task Manager (hey..I get my kicks where I can)

You can now do your parameter sweeps more quickly by executing the same worksheet in parallel, changing parameters for every run.
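As a sketch of how such a sweep might be scripted: `run_worksheet` below is a hypothetical stand-in for the actual Maple Flow call (which is not shown in this post), so it just evaluates a placeholder formula to make the example self-contained.

```python
# Parallel parameter sweep: run the same "worksheet" once per parameter set.
from concurrent.futures import ThreadPoolExecutor

def run_worksheet(params):
    # Hypothetical stand-in for executing a Maple Flow worksheet with
    # the given parameters; real code would call the Maple Flow API here.
    load, length = params["load"], params["length"]
    return load * length**3 / 3.0   # placeholder cantilever-style formula

sweep = [{"load": 10.0, "length": L} for L in (1.0, 2.0, 3.0, 4.0)]

# Threads suffice when each call hands off to an external process;
# map() returns results in the same order as the inputs.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_worksheet, sweep))

print(results)
```

The same pattern scales to any list of parameter dictionaries; only `run_worksheet` would change.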

This is easy to set up - no special programming is needed.

  • Print Extents can now be set globally for all sessions, or just for the current session.
  • Any user-installed fonts used in the worksheet are now respected in the PDF export.
  • Worksheets execute faster.
  • The update includes fixes to many user-reported issues.

You can install the Flow 2025.2 update via Help > Check for Updates (or if you're not already in the race, then grab a trial here and take Flow for a spin).

We're not pulling back on this aggressive development velocity, but we need you to point us in the right direction. Let's keep the feedback time constant small!

Hi Maplesoft Support / Community,

I've encountered a critical and bizarre bug involving Bits:-And correctness on large integers (~30 digits) derived from repeated integerdivq2exp operations. I can reproduce it on:

  • Maple 2023 (Linux x86_64)
  • Maple 2025 (Linux x86_64)
  • Maple 2025 (Windows x86_64)

The correctness of Bits:-And depends on the order of execution.

(See attached common.mpl, bug_test2.mpl, bug_test3.mpl logic).

Case "Fail" (bug_test2.mpl):

  1. Run operation (loops `integerdivq2exp`).
  2. Print result num1 (semicolon).
  3. Define num1_clean (hardcoded same value).
  4. Bits:-And(num1) -> INCORRECT.
  5. Bits:-And(num1_clean) -> INCORRECT.

Case "Pass" (bug_test3.mpl):

  1. Define num1_clean.
  2. Run operation (loops integerdivq2exp).
  3. Bits:-And(num1) -> CORRECT.
  4. Bits:-And(num1_clean) -> CORRECT.

The same behaviour can be observed in Worksheet mode using read.  (See worksheet_driver.mw)

But the result cannot be reproduced if not using read. (See worksheet_version.mw and worksheet_version2.mw)

Code below:

# common.mpl (shared setup)
N := 2100:
n := 1000:
num := rand(0 .. 2^N)():
operation := proc(num, n)
    local q, k;
    q := num;
    for k from 1 to 2 do
        q := integerdivq2exp(q, n); 
    end do;
    q;
end proc:
# bug_test2.mpl (the "Fail" case)
read "common.mpl";

num1 := operation(num, n);
num1_clean := 1083029963437854242395921050992;

num1_clean_And_result := Bits:-And(num1_clean, integermul2exp(1, n) - 1);
num1_And_result := Bits:-And(num1, integermul2exp(1, n) - 1);

##################################

expected_result := irem(num1_clean, integermul2exp(1, n));

if num1 <> num1_clean then
    error "num1 does not match num1_clean";
end if;
print("num1 matches num1_clean");

if num1_And_result <> num1_clean_And_result then
    error "num1_And_result does not match num1_clean_And_result";
end if;
print("num1_And_result matches num1_clean_And_result");

if num1_And_result <> expected_result then
    error "num1_And_result does not match expected_result";
end if;
print("num1_And_result matches expected_result");
# bug_test3.mpl (the "Pass" case)
read "common.mpl";

num1_clean := 1083029963437854242395921050992:
num1 := operation(num, n):

num1_clean_And_result := Bits:-And(num1_clean, integermul2exp(1, n) - 1):
num1_And_result := Bits:-And(num1, integermul2exp(1, n) - 1);

##################################

expected_result := irem(num1_clean, integermul2exp(1, n));

if num1 <> num1_clean then
    error "num1 does not match num1_clean";
end if;
print("num1 matches num1_clean");

if num1_And_result <> num1_clean_And_result then
    error "num1_And_result does not match num1_clean_And_result";
end if;
print("num1_And_result matches num1_clean_And_result");

if num1_And_result <> expected_result then
    error "num1_And_result does not match expected_result";
end if;
print("num1_And_result matches expected_result");
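For reference, the same arithmetic can be checked in Python, whose arbitrary-precision integers make the expected answer unambiguous; `>>`, `&` and `%` play the roles of integerdivq2exp, Bits:-And and irem for nonnegative integers.

```python
import random

N, n = 2100, 1000
random.seed(0)                      # any seed; the identity below always holds
num = random.randrange(2**N)

# operation(num, n): two successive quotients by 2^n
q = num
for _ in range(2):
    q >>= n                         # integerdivq2exp(q, n)
num1 = q

mask = (1 << n) - 1                 # integermul2exp(1, n) - 1
and_result = num1 & mask            # Bits:-And(num1, mask)
expected = num1 % (1 << n)          # irem(num1, 2^n)

# For nonnegative integers, masking the low n bits and reducing mod 2^n
# must agree -- this is the identity that the failing Maple runs violate.
assert and_result == expected
```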

The Autumn Issue is now up, at mapletransactions.org

This issue contains two Featured Contributions: a short but very interesting one by Gilbert Labelle on a topic very dear to my own heart, and a longer and also very interesting one by Wadim Zudilin.  I asked Doron Zeilberger about Wadim's paper, and he said "this is a true gem with lots of insight and making connections between different approaches."

The "Editor's Corner" paper is a little different, this time.  This paper is largely the work of my co-author, Michelle Hatzel, extracted and revised from her Master's thesis, which she defended successfully this past August.  I hope that you find it as interesting as I did.

 

We have three refereed contributions, a contribution on the use of Maple Learn in teaching, and a little note on my design of the 2026 Calendar for my upcoming SIAM book with Nic Fillion, as well.  All the images for the calendar were generated in Maple (as were most of the images in the book).

It's been fun to put this issue together (with an enormous amount of help from Michelle) and I hope that you enjoy reading it.

I would also like to thank the Associate Editors who handled the refereeing: Dhavide Aruliah, David Jeffrey, and Viktor Levandovskyy.

The recordings from Maple Conference presentations, including the workshops, are now available on the conference website.

Thank you to all those who attended or presented, you made the conference a great success!
We hope to see you all again next year.

 

Kaska Kowalska
Contributed Program Co-Chair

The Schatz mechanism should move like this:

However, with the default solver settings it froze after a few seconds in a planar link configuration. To make it run, I played around with advanced solver settings. Here is one attempt that went nuts:

(More solver settings for strange behavior can be found here: Schatz_Linkage.msim)

Some people might find this amusing. Of course, it is less fun when the initial plan was to spend just an hour of fun with a simple model (an hour is a fair estimate for similarly simple-looking models in MapleSim). The immediate reaction to such simulation results is to blame the software for being either buggy or incapable. Here, however, the software was not at fault, and identifying the root cause was not obvious.

The Schatz mechanism is a so-called closed-loop mechanism: the links of the mechanism form a loop (the ground in the model closes the loop). In general, building and modeling mechanisms with loops is less straightforward than one might think. Without a priori knowledge or help (either from documentation or software hints), users can quickly find themselves in a situation of desperate trial and error. What was easy with other models can become a frustrating experience with an unsatisfactory outcome. This has happened to me on various occasions.

What makes closed-loop mechanisms more challenging? After resting for a long time on my virtual pile of unanswered questions, it turned out that the model, on top of being a closed-loop mechanism, is ill-conditioned: the Schatz mechanism is an over-constrained mechanism that is only mobile for certain geometric parameters. MapleSim can simulate such over-constrained mechanisms, but this can be a balancing act for the solver.

Who could have known this? A knowledgeable expert might say that users who do not know what they are doing should not use the software. But how can one become aware of over-constrained assemblies when building and running a model in MapleSim does not require expert knowledge? In this case the geometry was taken from a reference that sets the length of the ground link to √3. Model build, assembly and simulation instantly worked … but not for an extended time span.

In retrospect, everything is clear. Models that do not assemble do not fit together. Models that freeze in motion "jam numerically". Linkages and joints of closed-loop mechanisms made of infinitely stiff components may not fit together in all geometric configurations. During runtime, after successful assembly, a stiff model can make a simulation sensitive to numerical errors. This does not mean that the user is dealing with a so-called numerically stiff problem that can be addressed with stiff solvers; in this case, stiff solvers could not prevent the sudden freeze or inversion of movements.

The only remedies that work for infinitely stiff and over-constrained mechanisms are the ones that also work in real life: introducing mechanical play or elasticity in supports, joints and links makes the simulation robust. Numerically, for this case where none of the many advanced solver options made a difference, a simple increase of the relative error tolerance in the standard simulation settings worked. This remedy could be described as introducing more numerical play. Interestingly, in a completely different approach to animating a Schatz mechanism, @one man also needed to introduce "deformations" in his simulation to make it work.

The Schatz mechanism is of little commercial interest and can therefore be shared. Is it a rare case of successful assembly followed by a freeze during runtime, or do users run into similar problems more frequently? Only Maplesoft can tell, but in the latter case it could make sense for MapleSim to support the user. I see several possibilities for that:

  • A more prominent mention in the documentation that kinematic loops require caution could raise awareness.  
  • Algorithmically detecting kinematic loops and informing the user that closed loops can be potentially over-constrained in certain geometric configurations.
  • (If possible, analyzing the Jacobian in the frozen configuration might give better hints than solver messages during runtime can provide. The attached model gives the hint with MapleSim 2025.1 that the error tolerance might be too tight, but no indication why.)
  • Implementing the mobility formula, analyzing closed loops, and issuing a warning when the mobility M is less than 1 (meaning no degree of freedom).

The latter option sounds appealing. However, the degree of freedom calculated by the mobility formula provides only a necessary, but unfortunately not a sufficient, condition for mobility. For example, connecting a prismatic joint coaxially to another increases the computed mobility by one but does not add to the mobility of the mechanism. This means that an advanced algorithm must take the orientation of joints into account to determine the effective degrees of freedom. On the other hand, the Schatz mechanism and some other mechanisms have a mobility of M = 0 but can be mobile for certain geometries.
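For readers who want to try the mobility formula mentioned above, here is a minimal sketch of the spatial Grübler-Kutzbach count (written in Python for illustration); it reproduces M = 0 for the Schatz linkage (6 links including ground, 6 revolute joints):

```python
# Spatial Gruebler-Kutzbach mobility count:
#   M = 6*(n_links - 1 - n_joints) + sum of joint freedoms f_i
def mobility(n_links, joint_freedoms):
    n_joints = len(joint_freedoms)
    return 6 * (n_links - 1 - n_joints) + sum(joint_freedoms)

# Schatz linkage: 6 links (including ground), 6 revolute joints (1 DOF each)
print(mobility(6, [1] * 6))    # 0 -- formally immobile, yet mobile for the sqrt(3) geometry

# Sanity check: a single hinge between two links has one degree of freedom
print(mobility(2, [1]))        # 1
```

As the paragraph above notes, this count is only a necessary condition: M = 0 here does not rule out mobility for special geometries.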

Should Maplesoft implement mobility analysis or are CAD tools that offer some sort of mobility analysis more suitable? In my opinion, from a conceptual point of view, it would be beneficial and faster to have this support already in MapleSim before going into details.

Should the user refrain from modeling infinitely stiff mechanisms? I do not think so, because they are useful in the context of deriving (analytical) forward and inverse kinematics. Furthermore, there are more mechanisms out there that are mathematically immobile according to the mobility formula but useful in daily life. The telescopic fork is a prominent example.

Final note for math enthusiasts:

The Schatz mechanism (invented by Paul Schatz) is a byproduct of the inversion of a cube. Recalling that the diagonal of the unit cube is √3 gives a hint of why the Schatz mechanism becomes mobile for this parameter. Also related to the inversion of a cube is the oloid: a solid with a developable surface that touches a flat surface with its entire surface when rolling. The oloid and the Schatz mechanism are closely related, which can be appreciated from this video.

There is still time to register for Maple Conference 2025, which takes place November 5-7, 2025.

The free registration includes access to three full days of presentations from Maplesoft product directors and developers, two distinguished keynote speakers, contributed talks by Maple users, and opportunities to network with fellow users, researchers, and Maplesoft staff.

The final day of the conference will feature three in-depth workshops presented by the R&D team. You'll get hands-on experience with creating professional documents in Maple, learn how to solve various differential equations more effectively using Maple's numerical solvers, and explore the power of the Maple programming language while solving interesting puzzles.

Access to the workshops is included with the free conference registration.

We hope to see you there!

Kaska Kowalska
Contributed Program Co-chair

The full program for Maple Conference 2025 is now available. 

The agenda includes two full days of keynote speakers, presentations from Maplesoft product directors and developers, and contributed talks by Maple users all around the world. There will be opportunities to network with fellow users, researchers, and Maplesoft staff.

The final day of the conference will include three in-depth workshops presented by the R&D team.
The workshops will explore how to:

  • Create papers and reports in Maple
  • Solve various differential equations more effectively using Maple's numerical solvers
  • Solve Advent of Code challenges using Maple

Access to the workshops is included with the free conference registration.

We hope to see you there!

Kaska Kowalska
Program Co-chair

I must thank @Scot Gould for having asked this question more than a year ago and thus, without meaning to, having been the driving force behind this post.

There is an enormous literature about Monte-Carlo integration (MCI for short) and you might legitimately ask "Why another one?".

A personal experience.
Maybe if I tell you about my experience you will better understand why I believe that something is missing in the traditional courses and textbooks, even the most renowned ones.

For several years, I led training seminars in statistics for engineers working in the field of numerical simulation.

At some point I always came to speak about MCI and (as anyone does today) I introduced the subject by presenting the estimation of the area of a disk by randomly picking points in its circumscribed square and assessing its area from the proportion of points it contained.



Once done I switched (still as anybody does) to the Monte-Carlo summation formula (see Wikipedia for instance).

One day an attendee asked me this question: "Why do you say that this [1D] summation formula is the same thing as the [2D] counting of points in the [circle within a box] example you have just presented?"

I have to say I was surprised by this question for it seemed to me quite evident that these two ways of assessing the area were nothing but two different points of view of, roughly, the same thing.

So I gave a quick, mostly informal, explanation (that I am not proud of) and, because the clock was running, I kept teaching the class.

But this question really puzzled me, and I searched for a simple but rigorous way to prove that these two approaches were (were they?) equivalent, at least in some reasonable sense.

The thing is that trying to derive simple explanations based on counting is not enough: you have to resort to certain probabilistic arguments to get out of it. Indeed, sticking to the counting approach leads to the more reasonable position that these two approaches are not equivalent.

The end of the story is that I spent more time on these two approaches of MCI during the trainings that followed.

Explaining that, yes, the summation formula seems to be the reference today, but that the old counting strategy still has some advantages and can even give access to information that the summation formula cannot.

About this post.
This post focuses mainly on what I call the Historical viewpoint (counting points), and aims, in its first part, to answer the question "Is this point of view equivalent or not to the Modern (summation formula) one?" (And if it is, in what sense is it so?)

Let me illustrate this with the example @Scot Gould presented in his question. The brown bold curve in the left figure is the graph of the function func(x) (whose expression is of no interest here) and the brown area represents the area we want to assess using MCI.

In the Historical approach I picked uniformly at random N = 100 points within the gray box (of area 2.42), found 26 of them in the brown region, and said the area of the latter is 2.42 × 26/100 = 0.6292. The Modern approach consists in picking uniformly N random points in the range x = [0.8, 3] and using the blue formula to get an estimation of this same area (Lbox is the x-length of the gray box, here equal to 2.2).
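To make the two viewpoints concrete with a self-contained example (a simpler integrand than the worksheet's func: a quarter disk, of known area π/4), here is a Python sketch of both estimators:

```python
import math, random

def f(x):
    return math.sqrt(1.0 - x * x)      # quarter circle; area under it is pi/4

random.seed(1)
N = 100_000
S = 1.0                                # area of the enclosing box [0,1] x [0,1]

# Historical estimator: proportion of random box points falling under the curve
hits = 0
for _ in range(N):
    x, y = random.uniform(0, 1), random.uniform(0, 1)
    if y < f(x):
        hits += 1
area_hist = S * hits / N               # always a multiple of S/N

# Modern estimator: (b - a) times the mean of f at random abscissae
area_mod = (1.0 - 0.0) * sum(f(random.uniform(0, 1)) for _ in range(N)) / N

print(area_hist, area_mod, math.pi / 4)
```

Both estimates land close to π/4, yet the first can only take values on the grid {0, S/N, ..., S} while the second varies continuously - exactly the distinction discussed next.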

The question is: Am I assessing the same thing when I apply either method? And, perhaps more importantly, do my estimators have the same properties?


And here appears a first problem:

  • Whatever the number of times you repeat the Historical sampling method, even with different points, you will always get a number of points in the brown region between 0 and N inclusive, meaning that if S is the area of the gray box, the estimation of the brown area is always one of the numbers {0, S/N, 2S/N, ..., S}.
  • By contrast, repetitions of the Modern approach will lead to a continuum of values for this brown area.
  • So, saying the two approaches might be equivalent amounts to saying that a discrete set is equivalent to an uncountable one.

If we remain at the elementary counting level, the Historical and Modern viewpoints are therefore not equivalent.

Towards a probabilistic model of the Historical Process:
This goes against everything you may have heard or read: so, are the authors of these statements all wrong?

Yes, from a strict Historical point of view, but happily not if we interpret the Historical approach in a looser, probabilistic manner (although this still needs to be considered carefully, as shown in the main worksheet).

This probabilistic manner relies upon a probabilistic model of the Historical process, where the event "K points out of N belong to the brown area" is to be interpreted as the realization of a very special random variable named Poisson-Binomial (do not worry if you have never heard of it: many statisticians have not either).

In a few words, whereas a Binomial random variable is the sum of several independent and identically distributed Bernoulli random variables, a Poisson-Binomial random variable is the sum of several independent but not necessarily identically distributed Bernoulli random variables. Thus the Poisson-Binomial distribution generalizes the Binomial one.
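A quick numerical illustration (in Python, purely for the sake of a self-contained example): drawing a Poisson-Binomial variable as a sum of non-identical Bernoullis and checking that its empirical mean matches the theoretical mean, sum(p_i).

```python
# A Poisson-Binomial draw: a sum of independent Bernoulli variables with
# *different* success probabilities. Its mean is simply sum(p_i).
import random

def poisson_binomial_sample(ps, rng):
    return sum(1 for p in ps if rng.random() < p)

ps = [0.1, 0.5, 0.9, 0.3]                     # non-identical probabilities
rng = random.Random(42)
n_draws = 200_000
empirical_mean = sum(poisson_binomial_sample(ps, rng)
                     for _ in range(n_draws)) / n_draws

print(empirical_mean, sum(ps))                # both close to 1.8
```

With equal p_i this reduces to the ordinary Binomial case, which is exactly the sense in which the Poisson-Binomial distribution generalizes it.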

Using the properties of Poisson-Binomial random variables, one can prove in a rigorous way that the expectations of the area estimators for both the Historical and Modern approaches are identical.

So, given this "trick" the two methods are thus equivalent, are they not? And that settles it.

In fact, no, the matter of equivalence still remains.

When uncertainty enters the picture.
Generally one cannot be satisfied with the sole estimation of the area; we would also like information about the reliability of this estimation. For instance, if I find this value is 0.6292, am I ready to bet my salary that I am right? Of course not, unless I am insane, but things would change if I were capable of saying, for instance, "I am 95% sure that the true value of the area is between 0.6 and 0.67".

For the Historical viewpoint the Poisson-Binomial model makes it possible to assess an uncertainty (not the uncertainty!) of the area estimation. But things are subtle, because there are different ways to compute an uncertainty:

  • At the elementary level the height of the gray box is an essential parameter, but it does not necessarily give a good estimation of this uncertainty (one can easily reduce the latter arbitrarily close to 0!).
  • To get a reliable uncertainty estimation, a probability theory related to Extreme Value Theory (EVT for short) is necessary (all of this is explained in the attached worksheet).


For the Modern point of view it is enough to observe that there is no concept of "box height", and that it is then impossible to assess any uncertainty. Question: "If it is so, how can (all the) MCI procedures return an uncertainty value?"
The answer is simple: they consider a virtual encapsulating box whose height is the maximum of the func(xi). This trick enables providing an uncertainty, but it is a non-conservative estimation (an over-optimistic one if you prefer; in other words, an estimation we must regard very carefully).

So, in the end, the Historical and Modern approaches are equivalent only if we restrict ourselves to the estimation of the area, but no longer as soon as we are interested in the quality of this estimation.

What does the attached file contain?
The attached file is largely devoted to the estimation of the estimator uncertainty.
The core theory is named (Right) EndPoint Theory (I found nothing on Wikipedia nor any easy-to-read papers about this theory, so I more or less arbitrarily decided to refer to this one). Basically it enables assessing the (usually right) end-point of a distribution known only through (right-)censored data.
The simplest example is that of a New York pedestrian who looks at taxi numbers and asks himself how to assess the highest number a taxi can have. Here we know this number exists (meaning that the related distribution is bounded), but the situation can be more complex if one does not even know whether this distribution is bounded or not (in which case one seeks a right end-point whose probability of being exceeded is less than some small value).
A conservative, and thus reliable, uncertainty on the area estimator can only be derived in the framework of the end-point theory.
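The taxi example can be made concrete with one classical end-point estimator for a discrete uniform distribution, m(1 + 1/k) - 1, where m is the largest number observed and k the sample size; note this is just one textbook estimator used for illustration, not necessarily the one developed in the worksheet.

```python
# Classical end-point estimator for a discrete uniform distribution on 1..N:
#   N_hat = m * (1 + 1/k) - 1
# where m is the largest number observed and k the number of observations.
import random

def endpoint_estimate(sample):
    m, k = max(sample), len(sample)
    return m * (1 + 1 / k) - 1

# Deterministic check: observing {200, 400, 600, 800, 1000} gives 1199
print(endpoint_estimate([200, 400, 600, 800, 1000]))

# The pedestrian: 20 taxi numbers seen out of an unknown fleet of 1000
rng = random.Random(7)
sample = [rng.randint(1, 1000) for _ in range(20)]
print(max(sample), endpoint_estimate(sample))
```

The estimator pushes the sample maximum upward by roughly one average gap, which is exactly the "right end-point from censored observations" idea described above.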

Once the basics of this theory are understood, it becomes relatively simple to enhance the Historical approach to get estimators with smaller uncertainties.
I present different ways to do this: one (even if derived otherwise) is named Importance Sampling, and another leads in a straightforward way to algorithms quite close to some used in the CUBA library (partially accessible through evalf/Int).
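As a self-contained illustration of the Importance Sampling idea (not the worksheet's own derivation; the integrand and proposal are chosen here purely for the example), a minimal Python sketch:

```python
# Importance sampling: estimate I = integral_0^1 exp(-5x) dx by drawing
# from a proposal density g shaped like the integrand and averaging f/g.
import math, random

def f(x):
    return math.exp(-5.0 * x)

I_true = (1.0 - math.exp(-5.0)) / 5.0          # exact value of the integral

Z = 1.0 - math.exp(-4.0)                       # normalizer of the proposal
def g(x):
    return 4.0 * math.exp(-4.0 * x) / Z        # proposal density on [0, 1]

def g_sample(rng):
    return -math.log(1.0 - Z * rng.random()) / 4.0   # inverse-CDF draw from g

rng = random.Random(3)
N = 20_000
est = sum(f(x) / g(x) for x in (g_sample(rng) for _ in range(N))) / N

print(est, I_true)
```

Because g concentrates draws where f is large, the weights f/g vary little and the estimator's variance is far smaller than that of uniform sampling with the same N.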

The last important, if not fundamental, concept discussed in this article concerns the distinction between a dispersion interval and a confidence interval, concepts that are unfortunately not properly distinguished due to the imprecision of the English language (I apologize to native English speakers for these somewhat harsh words, but this is the reality here).

Some references are provided in the attached (main) worksheet, but please, if you don't want to end up even more confused than you were before, avoid Wikipedia.

To sum up.
This note is a non-orthodox presentation of MCI centered around the Historical viewpoint which, I am convinced, deserves a little more attention than the disk-in-the-square picture commonly displayed in MCI courses and textbooks.
And I am all the more convinced of that since this old-fashioned (antiquated?) approach is an open door to some high-level probability theories such as the EndPoint and EVT ones.

Of course this post is not an advocacy against the Modern approach, and it does not mean that you have to ignore classical texts or that the Law of Large Numbers (LLN) or the Central Limit Theorem are useless in MCI.

Maple, but not just Maple.
Part of the attached worksheet presents results I got with R (a programming language for statistical computing and data visualization), simply because Maple 2015 (and it is still true for Maple 2025) did not contain the functions I needed.

For instance, R implements the CUBA library in a far more complete way than Maple (I give a critical discussion of the way Maple does it), enabling for instance the change of the random seed.

Main worksheet (I apologize in advance for typos that could remain in the texts)
A_note_on_Monte-Carlo_Integration.mw

The main worksheet refers to this one
How_does_the_variance_of_f_impact_the_estimator_dispersion.mw

Extra worksheet: An introduction to Importance Sampling
Importance_Sampling.mw

We are pleased to announce that the registration for the Maple Conference 2025 is now open!

Like the last few years, this year’s conference will be a free virtual event. Please visit the conference page for more information on how to register.

This year we are offering a number of new sessions, including more product training options, and an Audience Choice session.
Also included in this year's registration is access to an in-depth Maple workshop day presented by Maplesoft's R&D members following the conference.  You can find an overview of the program on the Sessions page. Those who register before September 14th, 2025 will have a chance to vote for the topics they want to learn more about during the Audience Choice session.

We hope to see you there!

We are a week away from the submission deadline for the Maple Conference!  
Presentation proposal applications are due July 25, 2025.

We are inviting submissions of presentation proposals on a range of topics related to Maple, including Maple in education, algorithms and software, and applications. We also encourage submission of proposals related to Maple Learn. You can find more information about the themes of the conference and how to submit a presentation proposal at the Call for Participation page.

We hope to see you there.

Thank you for your patience and understanding during the recent outage of MaplePrimes. The outage was caused by a server issue. We have obtained and configured a replacement to prevent disruptions moving forward. We are sorry for any inconvenience this may have caused.
