
## It can appear on the left side ......

@Carl Love This is a very interesting feature! Many thanks for the trick.

## I knew this more concise writing (X__||(...

@Carl Love ... but I must confess that I rarely use it, for readability reasons.
The same holds for the tilde (elementwise) operator, for which I prefer the "map" function.

## Huge thanks...

Generally I use the "||" constructor but, in the present case, I wanted a nice rendering of the equations, hence the "__" constructor.
I never imagined combining the two as you do at the end of your answer.
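If I understand the trick correctly, a minimal sketch (the name X__ and the range are purely illustrative):

```maple
# Plain concatenation: produces the sequence of names x1, x2, x3
x || (1 .. 3);

# A double-underscore name such as a__1 is a single literal name that is
# rendered as "a" with the subscript 1 in 2-D output
a__1;

# Combining the two generates X__1, X__2, X__3 in one statement,
# each rendered with a literal subscript
X__ || (1 .. 3);
```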

If it is not too much to ask, I would like to know the best way to proceed in the following situation:

Suppose you define the pressure p of a gas by the EOS p = K*v^(-n), where n is the polytropic index and K some suitable constant.
In some situations n is given by the ratio cp/cv of the heat capacities at constant pressure (cp) and constant volume (cv).

In physics textbooks it is common to write relations such as
p = K*v^(-n)
n = cp/cv

but if I do this in Maple, c[p] is evaluated as c with a subscript equal to K*v^(-n) (which is perfectly normal).
To preserve the physical notation I used to write

p := K*v^(-n);
n := c__p/c__v;  # double-underscore names avoid the evaluation of p in the subscript

Is this a safe way to proceed?
Does a better alternative exist?
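A minimal sketch of the behaviour I am describing (the names are of course illustrative):

```maple
restart;
p := K*v^(-n):   # polytropic EOS

# With an indexed name, the subscript p evaluates to K*v^(-n):
c[p];            # displays c indexed by K*v^(-n) -- not the intended c_p

# A double-underscore name is a literal subscript, so no evaluation occurs:
c__p;            # displays as c with the subscript p

n := c__p/c__v:  # safe: the subscripts survive as written
```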

## The theory and the practice...

Mathematically speaking:

(FR) la fonction "sécante" est définie comme étant la fonction réciproque de la fonction "cosinus" ...

(FR -> EN) the "secant" function is defined as the reciprocal function of the "cosine" function ...

So there is absolutely no doubt that the correct translation of "reciprocal" is "réciproque".

But, in day-to-day language, even among people who share a mathematical background and are used to using mathematics in their work (excluding teachers and professors), it is very common to use the French word "inverse" ("inverse" in English) to refer to the reciprocal function.

It is very likely that this abuse of terminology is related to the notation F^(-1) for the reciprocal of the function F.
Thus it is not unusual to hear that "the secant is the inverse function of the cosine function" (if not "the secant is the inverse of the cosine").
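To keep the two notions apart, here is how they look in Maple (purely illustrative):

```maple
# The multiplicative inverse (the reciprocal, in the 1/x sense) of cos is sec:
sec(x) = 1/cos(x);

# whereas the functional inverse of cos is arccos:
cos(arccos(x)) = x;   # valid for x in -1 .. 1
```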

## Thanks a lot...

@acer

I always thought it was a pity that NameToRGB24 does not accept a "palette =" option ...
... but it was just an undocumented feature!

(I do understand Carl's disappointment)

Great thanks

## Trapped by a deceptive cognate ......

@Carl Love

Your redefinition of RGB24toName is a very astute stopgap.

About the ColorTools package: I believe it is fairly powerful, though not very easy to work with.
One of the main criticisms I would make is that it is quite difficult to pick "the" right color (or to build one's own palette) from the existing ones, because GetPalette( NameOfThePalette ) displays the colors in a confusing order (see "Resene" for example).

Thanks for the answer ... and for having pointed out a translation mistake:

(FR) fonction réciproque <--> ~~reciprocal~~ inverse function (EN) ... I will remember this

## I think that you have put your finger o...

You write

1. Your XP machine seems to have 4 physical cores without hyperthreading capability.
TRUE: this is a capability that is disabled by default (here again, a company policy).

2. Your new Windows 7 machine seems to have 4 physical cores with hyperthreading capability.
TRUE again: I asked for hyperthreading to be enabled on my new "Windows 7" machine.

3. On the Windows 7 machine it's quite possible that the OS distributes the load ...
Very likely indeed.

Here is a table that summarizes the performance figures I have just obtained (new machine / Windows 7)
(10000 runs, distributed over N nodes ... or the integer nearest to 10000 that N divides):
| N | Approx. mean load (%) | Execution time (s) | Observation from the task monitor (Performance tab) |
|---|---|---|---|
| 2 | 25 | 881 | 4 active cores |
| 3 | 38 | 557 | 6 active cores |
| 4 | 50 | 409 | 8 active cores |
| 5 | 60 | 381 | 8 active cores |
| 6 | 73 | 370 | 8 active cores |
| 7 | 90 | 355 | 8 active cores |
| 8 | 95 | 343 | 8 active cores, all "flat" |

One can notice that the execution time with N nodes, T(N), varies more or less linearly between N = 2, 3, 4.
For N larger than 4 the improvement is slighter.
The rightmost column refers to a visual observation of the task monitor (Ctrl+Alt+Del, "Performance" tab). For N >= 4 all 8 cores exhibit significant activity, while for N = 3 two of them have no load at all, and for N = 2 four cores are inactive (odd nodes are active and even ones inactive).
The approximate mean load (from the task monitor) increases as the execution time decreases (which seems normal).

It seems to me that the table above corroborates what you write in your last paragraph (at least as far as I understand it ...).

Great thanks to you, Acer, for this fruitful answer.

## How an answer can be valuable even if it...

@Carl Love
I agree: the ratio 3.2/3.5 is anything but significant.

But the 4:8 ratio of the number of nodes should be significant:

given that all the cores receive the same number of runs to execute (2500 each on the 4-core PC and 1250 each on the 8-core one, respectively) and that all the cores are active (I do not use Grid:-Launch plus a Send/Receive protocol), the expected execution time should (?) be divided by 2 on the 8-core machine ... all other things being equal, and more specifically with the same OS.

Now, I agree with your suggestion "so you should run your test using the same number of nodes on each machine"
But two difficulties arise :

1. Considering the performance figures I reported beforehand, I would have liked to carry out some extended comparisons. But (company policy obliges) an operating-system migration is generally an opportunity to upgrade the workstation, if not to replace it. This is what was done for me, and I am no longer able to test my code on my previous machine.
Accordingly, my comparisons are probably biased.
2. I have observed the following behaviour while using Grid:-Run as described in my initial post.
Let us suppose I am working on a 2x2-core machine and that (1) I distribute 10000 runs over 4 cores and then (2) I distribute these same 10000 runs over 2 cores (on the same processor or not???).
Let T(4) and T(2) be the corresponding execution times. I would expect T(2) to be twice T(4) ... but, for a reason I do not know, this is not the case (I am not a specialist in parallel computing or processor architecture).
A quick look at the Performance tab of the task monitor shows, in case (1), that the 4 cores are loaded the same way (say 95% during the whole computation) ... whereas in case (2) two cores are loaded up to a level of 75% (with large deviations) while the two others remain between 10% and 30%.

Furthermore, the performance history is very chaotic in case (2) but quite flat in case (1) ... something I (mis)interpreted as better task control by the operating system in case (1).

On "my" new Windows 7 machine (4 dual-core processors) I have obtained the following results:

• Distribution over 8 cores (nodes): 343 s
• Distribution over 4 cores (nodes): 409 s (?!)

These results seem to corroborate your claim that "using more nodes will incur a higher percentage of administrative costs" (???)

On "my" old Windows XP machine (2 dual-core processors) I had obtained these results:

• Distribution over 4 cores (nodes): 504 s
• Distribution over 2 cores (nodes): 983 s

The expected 1:2 ratio is realized here, suggesting a higher efficiency in task control (???)

So I keep thinking that something "is not going well" (most likely with Windows 7).

Another point: now that I have 8 nodes available to me, is it perhaps better to use Grid:-Launch with a "master" node and to distribute the computation over the remaining 7???
There are a lot of questions and posts here I need to look at: even the distribution of similar computations is not as simple as one might think.
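For the record, the kind of distribution I am talking about looks roughly like this (the procedure Task and the counts are hypothetical, and I am not certain the no-argument form of Grid:-Wait is the best way to synchronize):

```maple
# Hypothetical sketch: distribute R independent runs over the available
# Grid nodes, each node receiving (about) the same share.
N := Grid:-NumNodes():          # number of available compute nodes
R := 10000:                     # total number of runs
runs_per_node := round(R/N):    # share handed to each node

Task := proc(runs)
    local i;
    for i to runs do
        # ... one independent simulation run goes here ...
    end do;
end proc:

for k from 0 to N-1 do
    Grid:-Run(k, Task, [runs_per_node]);   # launch Task on node k
end do:
Grid:-Wait();                   # block until the launched nodes have finished
```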

Even if your answer is far from the brilliant solution I was hoping for, it leads me to ask myself a lot of questions.

I thank you for that

## Thank you ......

@Stephen Forrest
for reminding me of the "What's New" pages.
I am pleased to contribute to the improvement of Maple, but I am not a bug-hunter and my "discovery" was completely fortuitous.

Have a good day

## Great and fruitful explanation !...

Huge thanks to you, Stephen, for this comprehensive explanation which, I am sure, will be of particular interest to me.

PS: Sorry for the rather long response time, due to vacations.

## it will be quite difficult but ......

@Thomas Richard ... I believe I have discovered the source of the problem.

My function f depends on x and on many other parameters (their number is not known a priori) named `P.1`, `P.2`, ...
Here is a simple example that mimics what happens:

f := (x, `P.1`) -> x*`P.1` ;

CodeGeneration[Matlab](f, output=string);

If I use more "conventional" names (here f := (x, a) -> x*a) everything goes perfectly well and Maple does no replacement at all.
As I am not familiar with Matlab, I simply guess that `P.1` is not a valid name for a Matlab variable ... hence the need for a translation.
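For what it is worth, a MATLAB identifier must start with a letter and contain only letters, digits, and underscores, so a name containing a dot cannot survive the translation untouched. A minimal comparison (the name P1 is an illustrative choice, not a recommendation):

```maple
# A backquoted name containing a dot is not a legal MATLAB identifier,
# so the translator has to rename it:
f := (x, `P.1`) -> x*`P.1`:
CodeGeneration[Matlab](f, output = string);

# With MATLAB-legal names (e.g. P1) no renaming should be needed:
g := (x, P1) -> x*P1:
CodeGeneration[Matlab](g, output = string);
```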

If I am right, please consider the question closed.

PS: A thumbs-up for the disagreement.

## Clear !...

@Joe Riel I have received your message loud and clear, and it will be very useful to me.
Thanks a lot

## Thanks ......

@Kitonum acer has provided an answer syntactically simpler than yours (no offence intended) which suits me just fine.

Your answer probably corresponds to the code I had expected ... but, as often happens in Maple, a simple workaround (see acer's reply) may exist.

Thanks again for the work you did.

## To end a question...

@tomleslie

Dear Tom,

no offence, but I think we should end this fruitless exchange before unnecessarily hurtful words are written.

I appreciate reading your many contributions on MaplePrimes, and that is what matters.

As far as I am concerned, I have turned the page, and all is fine.

Looking forward to hearing from you about other topics,

Respectfully.

PS : I have deleted my initial question
