MaplePrimes Activity


These are replies submitted by C_R

@sursumCorda 

That works if I paste from your answer or from https://en.wikipedia.org/wiki/Greek_and_Coptic

It's a bit complicated. Do I get Unicode within Maple?
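For what it's worth, pasted Greek characters seem to survive in Maple strings; a minimal sketch (assuming a Maple version that accepts UTF-8 input):

s := "α":             # pasted, e.g., from the Wikipedia page above
convert(s, 'bytes');  # the UTF-8 byte encoding, here [206, 177] for U+03B1
`α` := 42;            # Unicode inside a back-quoted name (version-dependent)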

@Rouben Rostamian  

I thought I had excluded this possibility by adding a mass of 0.001 kg at the center of the spherical contact element that does the rolling.

What you assumed was no mass at the center:

Disk_pendulum-_strip_down.msim

Without friction it should now behave like a pendulum with a horizontal prismatic joint at the pivot point.

Kinematically it does: the mass is falling straight down.

Energetically, energy is lost after the first swing. This should not happen for a conservative system.

However, in the elastic contact I added a bit of damping.

One could argue that this system is unphysical because no lateral forces act.

Effectively we see only a mass bouncing vertically. The erratic swing of the massless rigid body frame to the center of the contact element is only the solver searching for a solution that satisfies a constraint condition. In a vertical position there are two possibilities, left and right of the vertical.

However, increasing the mass at the center to 0.01 kg, i.e. 1%, is not sufficient. Only with 0.1 kg does the effect vanish. This is 10% of the eccentric mass and unexpectedly high.

At the moment I am a little surprised that the problem is still ill-conditioned with a mass ratio of 1%.

Adding to @Rouben Rostamian's suggestion: Once you have computed the normal force of the contact, you could investigate for slippage. In this question about some numerical artefacts I have uploaded a MapleSim model that still shows slippage for mu=1.

To implement slippage, the numerical integration has to switch between two configurations of ODEs: one for slippage and one for rolling without slippage (the latter you have implemented already).

Maybe someone knows whether dsolve's event handling can be/has been used for this.
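For illustration, a minimal sketch of dsolve/numeric event handling, using the classic bouncing ball rather than the pendulum model itself; a slip/roll switch would need a trigger on the friction force instead:

# When y(t) crosses zero, the action reverses and damps the velocity;
# the same trigger/action mechanism could toggle between ODE regimes
sol := dsolve({diff(y(t), t, t) = -9.81, y(0) = 1, D(y)(0) = 0},
              numeric,
              events = [[y(t) = 0, diff(y(t), t) = -0.9*diff(y(t), t)]]):
sol(2.0);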

P.S.:

The red curve below (with reduced slippage) shows good agreement with your implementation 👍

 

@Gabriel Barcellos

Your interpretation is correct: based on one attempt using the procedure p_O2 in Médio_spin_7_2_-_Forum_optimize_03_b.mw, all results for varying d close to d=3 are the same.

Before zooming in I would now create a coarse d-T map. Finding solutions far away from d=3 seems to be difficult. So far I have not found anything.
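A hypothetical sketch of such a coarse scan (the equation set eqs and the start value for T are assumptions, not the actual setup):

# Scan d on a grid and record the points where fsolve succeeds
results := table():
for dv from 2.0 by 0.2 to 4.0 do
    res := fsolve(eval(eqs, d = dv), {T = 1.0});
    if type(res, set) then results[dv] := res; end if;
end do: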

Everything else next year.

@dharr @Gabriel Barcellos

To wrap up, I have added my findings to the document from dharr

Campo_Médio_spin_7_2_new5MP_C_R_2.mw

in case more computing resources or smart simplifications are available in the future.

I could run the parameter x up to a point where the problem becomes numerically ill-conditioned, which requires higher precision and therefore increases computation time.

I have also noticed that in my original worksheet all the fsolve calls below produce the same result.

It could be that numerically there is nothing more to gain but the result is stable with the given Digits, whereas in the xyz approach there is more to gain at the cost of computational effort.

Happy holidays

@acer 

I could not find such settings. I also tried Edge. What seems to work is selecting the text with the pasted font, changing it to another font, and then changing it back to default, whatever font default is.

@acer 

Your worksheet gives

is(abs(x) = max(x, -x)) assuming positive;
                              true

is(abs(x) = -min(-x, x)) assuming positive;
                              true

is(abs(x) = -min(-x, x)) assuming negative;
                              true

@dharr 

I have used the above worksheet to compute solutions for x, y and y2 for a given z with fsolve, similar to the original call

A first result is obtained about 10 times faster

Substituting these results into the original expression for eqm does not really give equality

I attribute this to the large values for y and y2. So maybe that is not a real solution to the original problem with exp().

On the other hand, if I give a solution from my worksheet as initial values, the agreement is better

All this without procedures. I think that a further gain with codegen[makeproc] and codegen[optimize] is possible. We might see a gain in speed of about 25x.
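A minimal sketch of that route, on a hypothetical expression rather than eqm itself:

# Turn an expression into a procedure and let codegen[optimize]
# compute the repeated subexpression exp(x) only once
with(codegen):
f := makeproc(exp(x)^2 + exp(x)*y + exp(x), [x, y]):
f_opt := optimize(f):
f_opt(1.0, 2.0);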

Before going further in this direction, the following should be discussed:

Technically we could run x as a parameter (which means varying T) and with that solve for y, y2 and z.
z will give us d. I have tried this with initial values starting from here:

This can only be done in tiny increments of x. At each increment the former solution is used as start values. Very soon the solution jumps to negative values for z. So either there is a bug in the xyz approach, which I do not believe, or we are dealing with a singularity.
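A minimal sketch of this continuation loop, with hypothetical equation names and start values:

# Step x in small increments, reusing the previous solution as
# fsolve start values for the next increment
sol := {y = 1.0, y2 = 1.0, z = 1.0}:
for xv from 1.0 by 0.001 to 1.1 do
    res := fsolve(eval({eq1, eq2, eq3}, x = xv), sol);
    if type(res, set) then
        sol := res;
    else
        printf("no convergence at x = %a\n", xv);
        break;
    end if;
end do: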

@dharr: Thank you for the background. I was really thinking about something else.

@Gabriel Barcellos

I have tried modified equations with a reduced number of arithmetic operations (removing 1/T in the sums, see eqm_eqm2_eqTPO.mw). I did this by canceling beta in m0. To my surprise, fsolve took longer with the simplified equations. Now there is only one simplification left, which is removing redundancies in the 3 equations (see update 3). This is in itself a new question for which I have no good answer, and my time to work on it is limited. However, worksheets with a reduced number of terms in the sums are a big help for working on this problem.

Concerning physics: It is the first time that I have seen such expressions (big sums which are not series) in a physical context. Since this is new to me, I share @one man's doubts and I cannot interpret the results. Any background would be of help. I was surprised to see that only S0 has to be modified for a lower spin number; I expected Z0 to also have a reduced number of additions. Mathematically, we can always plug results into the set of equations that fsolve solves to verify Maple's numerical results. This is less of a worry to me.

Concerning solving the problem: You have to tell us what you are finally looking for. Is it a T-d diagram, as we see at the end of your last attachment for spin 3/2? Depending on that answer, a grid of data points might be enough for you, or a restructured problem with fewer redundancies (where we do not know how much faster fsolve will get) may be needed.

@Gabriel Barcellos

As I see it at the moment:

  • The equations eqm, eqm2 and eqTPO can be simplified to the equations below. But that's pretty much it. As @dharr said, the problem is the large number of exponentials, and this has not changed.

 

  • I could not optimize your original code further than the procedure p_O2 (see my answer to @acer). A gain in speed by a factor of 2 to 3 is possible, and on my fast machine I could compute solutions. On my old PC it simply took too long. The code should be faster with the above simplifications. I can try if you can provide simplified equations.
  • I do not think that approximating the exponentials by series is a good idea. I cannot see how you would get exact results.
  • I also think that using computer algebra is less prone to errors.
  • I still think that a worksheet for spin 3/2 would be helpful so that @one man can have a look at the equations and apply a different root-finding method. If that works, it will be quite easy to scale up to 7/2. A worksheet with the big equations is simply not manageable because of long computation delays and the responsiveness of the GUI; I can hardly edit code on my machine. What is possible is replacing S0 and Z0 in a worksheet for spin 3/2 and then executing the worksheet for 7/2.

Update:

I forgot to upload

eqm_eqm2_eqTPO.mw (again corrected)

Update 2:

I had to correct eqTPO in the above.

Update 3:

I missed adding m2 in the argument of d__i in eqm2 and eqTPO. Now corrected.

Redundancies:
2 of the 6 sums in the above equations are duplicated.

There is also redundancy in the sums of eqm and eqm2. Computing all 50238 exponentials in, for example, the denominator of

covers all the exponentials in the numerator. Here is code to verify this

exp_n := indets(numer(lhs(eqm)), function): nops(%);  # exp calls in the numerator
exp_d := indets(denom(lhs(eqm)), function): nops(%);  # exp calls in the denominator
exp_d minus exp_n; nops(%);                           # calls found only in the denominator

(note: the difference is the exponentials that do not depend on m; this is the reason why I had to correct a second time)

The number of non-redundant exp calls is less than 50% of the exp calls in the above equations. So far I have not found a smart way to remove the redundancies using Maple's strengths. It could be that fsolve already recognizes some of the redundancies and there is not much to gain. Maybe someone can tell.
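One hypothetical direction, building on exp_n and exp_d from above, would be to name each distinct exponential once so that shared calls are evaluated a single time:

# Replace each distinct exp call by a placeholder name; the placeholders
# can then be computed once and substituted numerically
E := [op(exp_n union exp_d)]:
S := {seq(E[i] = cat('e_', i), i = 1 .. nops(E))}:
eqm_shared := subs(S, eqm):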

@acer 

I could make hfloat work together with the fsolve call; evalhf does not work when calling fsolve.
Making an optimized procedure out of the expression showed the biggest gain in computation time (real time).

It seems to me that for this particular problem there is not much to gain with hardware floats. I am surprised.
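For reference, a minimal sketch of the hfloat route on a hypothetical univariate function:

# option hfloat makes the procedure compute with hardware floats;
# evalhf, in contrast, cannot wrap the fsolve call itself
f := proc(x) option hfloat; exp(x) + x^2 - 4.0 end proc:
fsolve(f, 0 .. 2);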

Download Campo_Médio_spin_7_2_-_Forum_optimize_03_b.mw

@Carl Love

How does one map what you demonstrated above to a list of procedures? Here is an example:

P := [proc(x) sin(x) end proc, proc(x) sin(x)^2 end proc, proc(x) sin(x)^3 end proc]:

P(3.);

         [0.1411200081, 0.01991485669, 0.002810384737]

For the time being, I work separately, list element by list element.
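For comparison, the same elementwise application can also be written with map:

map(p -> p(3.), P);

         [0.1411200081, 0.01991485669, 0.002810384737]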

@acer 

There was an example in my question that has disappeared. I didn't have time to rewrite it.
I've progressed enough (with the support I got from this question) in the other question with the 100k exp calls that I feel confident in giving an overview of some ways to improve computing power. From what I can see so far, the gain from hardware floats is limited.
I will post it there once presentable.

@acer 

dharr answered one part. His answer can be used for the second part of my question.
