
## @acer You're right, I edited by...

@acer

You're right, I edited my answer accordingly.

## @Traruh Synred I know how to use Hi...

I know how to use Histogram, but I want to plot binned data!

Can you please give an example of those binned data and of the original data instead of speaking in the void?

For one thing, Histogram requires the full list of individual data points and is thus a memory hog. Moreover, each time you make a histogram, Histogram loops through the whole data set for that quantity, and if you then make one for a related quantity you have to loop through the data again.

That is totally unclear, can you provide an example?

## @AHSAN Your claim that "each c...

@AHSAN

Your claim that "each curve has its maximum point at some value of x" is obviously wrong (see the last figure in my last file, or the first figure in For_reference.mw ... zoom around the peaks),
and it is not consistent with your stated need "to observe that difference in percentage at the maximum value of x" (if x is a constant, as you claimed earlier, then its difference in percentage is 0).

I hope you'll find someone better able than me to understand the subtleties of your question.
For my part, I'm done with you.

## @AHSAN  I still don't underst...

@AHSAN

I still don't understand what your criterion is to assess this relative increase/decrease:

• Is this criterion the height of the peak?

• You spoke earlier of "enlargement"; is your criterion the peak width at some conventional height?

• Is this criterion one of those I use in my first file?

Whatever the case, here is an example based on the peak height:
Could_It_Be_This_2.mw

## @AHSAN My mistake, this phrase abou...

My mistake, this phrase about "a loop" is indeed an error.

It's likely I didn't understand your question correctly.
I understood you wanted to compute the decrease/increase of the heights of the peaks.

What do you mean by "the decrease or increase in percentage between curves"?
If you do this

```
A_ref  := eval(A, lambda = 1.3015):
data_1 := [beta = 0.1, Q = 1.3015, lambda = 0.9986];
B_1    := eval(B, data_1):
plot((B_1 - A_ref)/A_ref*100, x = -4 .. 0)
```

you get a curve which represents the relative variation of B_1 wrt A_ref, but this is a function of x.
So do you want a number, an algebraic expression, or a curve?

Could_It_Be_This.mw

## @dharr Right, I didn't pay atte...

Right, I didn't pay attention to that.

## In the Search window...

type "SIR model" and click on Search

## 2023 vs 2015...

Using fsolve(..., complex) as you suggested gives a solution close to yours to within 10^(-11) (or less), which satisfies rel to within 10^(-42).
Given that diff(Re(rel[1]), A) is infinite at this point, the slightest variation in A makes "huge" differences in the value of rel[1].

2023_vs_2015.mw

So you're right, there is indeed a second real solution.

## @Carl Love Thanks Carl, sorry for t...

@Carl Love

Thanks Carl... but this is not the result I get with Maple 2015:

```
eval(rel, {
A = -2.7553365135418814642586082436429575890825402826031,
B = -0.70285804987973303586180028708027467941012949957141
})

[0.00004788232651393033381187767465396229938775 - 2*10^(-43)*I,

 1.8649856419668477410903757547200476549949700479584*10^(-40)
   - 1.2309622539151182340327668587211822233603338843467*10^(-41)*I]
```

A version issue?

## Are there other roots?...

@NIMA112

Are there other real roots than the one @Rouben Rostamian got?
I don't think so (the couple (A, B) @Carl Love found doesn't satisfy the equations with a small enough error to be considered, IMO, a solution).

Some details are given here Fung_sand15.mw

## If the target starts from the left foc...

If the target starts from the left focus with velocity VT and the predator from the left vertex with velocity VP, then the predator will catch the prey only if VP > VT, and it will catch it at the center of the ellipse iff VP/VT = 1/e (e = eccentricity, assumed > 0 and < 1).
This is a particular situation where capture at the center of the ellipse is not the rule.

So I doubt that, without extra conditions, the capture always takes place at the center of the ellipse.

## It's quite simple...

Suppose you have two functions f[1](x) and f[2](x).
Heaviside(f[1](x)) (let's put aside the x = 0 case) has two values: 0 if f[1](x) < 0 and 1 if f[1](x) > 0.
Then h(x) = Heaviside(f[1](x)) + 2*Heaviside(f[2](x)) takes 4 values:

• 0 if f[1](x) < 0 and f[2](x) < 0
• 1 if f[1](x) > 0 and f[2](x) < 0
• 2 if f[1](x) < 0 and f[2](x) > 0
• 3 if f[1](x) > 0 and f[2](x) > 0

Then contourplotting h(x) with contours = [0, 1, 2, 3] (provided you use a dense enough grid) will display the 4 domains corresponding to each of the conditions above.

I replaced the Heaviside function by a smooth tanh approximation to ease the computation of the contour levels
(the smoothing depends on a parameter, set to 1e6, in the tanh).

The generalization of the "Heaviside trick" to N functions is

`h(x) = add(2^(n-1)*Heaviside(f[n](x)), n=1..N)`
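
As a minimal, self-contained sketch of this recipe (the functions f1 and f2 below are purely illustrative, not the ones from your worksheet, and the contour levels are my choice of half-integers, which fall on the steep transitions between the four plateaus):

```
# Two illustrative sign-changing functions
f1 := (x, y) -> x^2 + y^2 - 1:   # changes sign across the unit circle
f2 := (x, y) -> y - x:           # changes sign across the line y = x

# Smooth tanh approximation of Heaviside; k controls the sharpness of the step
k := 1e6:
H := z -> (1 + tanh(k*z))/2:

# h is close to 0, 1, 2 or 3 on each of the four domains
h := (x, y) -> H(f1(x, y)) + 2*H(f2(x, y)):

# Half-integer levels separate the plateaus, so the plot shows the 4 domains
plots:-contourplot(h(x, y), x = -2 .. 2, y = -2 .. 2,
                   contours = [1/2, 3/2, 5/2], grid = [200, 200]);
```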

## I believe your "proof" is not correct...

The values

`[E__1 = 0.991324553918355, E__2 = 0.972412189223068, h__1 = 0.999999468863441, nu = 0.473159649082875]`

satisfy eqE.
To get them, do

```
J := (lhs - rhs)(eqE)^2:
opt := Optimization:-Minimize(J, {0 <= nu, nu <= 0.5}, assume = nonnegative)
```

But I agree that there is probably no solution:

```
J := add(`~`[lhs - rhs]([eqA, eqC, eqD, eqE]) ^~ 2);
opt := Optimization:-Minimize(J, {0 <= nu, nu <= 0.5}, assume = nonnegative, iterationlimit = 10000);

opt := [0.206149604449184010, [E__1 = 3.48734878853157*10^7,
        E__2 = 13328.4876967435, h__1 = 6.85943362585648*10^5, nu = 0.]]

eval([eqA, eqC, eqD, eqE], opt[2]);

[0.4017000000 = 4.03508624851090,
 0.1745000000 = -1.93898396374958,
 0.1517000000 = -1.41034232626533,
 0.1332000000 = -1.93899197963935]
```

## Are you sure of your relations?...

A simple observation: nu is likely Poisson's ratio, and E1 and E2 are likely Young's moduli.

Thus h__eq (a length, I guess?) has the same dimension as h__1.
The relations which define R__C, R__D and R__E do not seem consistent from a dimensional point of view: shouldn't they contain h__1^2 instead of h__1?

It might help if you told us the units you use.

Moreover, the ranges in the fsolve command seem (at least to me) quite weird (while I agree with the nu range, the ranges for E are strange).

## @sursumCorda Thank's a lot.I vo...

Thanks a lot.
