acer

32343 Reputation

29 Badges

19 years, 328 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Rariusz OK, thanks. I am hoping that by also utilizing a smoothing of the diff(theta(t),t) data I can get an even better fit to the theta(t) data. By this I mean using a weighted objective that fits the raw theta(t) data alongside smoothed diff(theta(t),t) data.

I'll let you know when I get to report something. Perhaps tomorrow.

I still think that it might help significantly to know a ballpark for the frequency of the simulated i(t). In the absence of such knowledge I may try using an additional, moderate penalty against a higher frequency i(t).
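
For concreteness, the kind of weighted objective I have in mind could be sketched as follows. Every name here (tdata, thetadata, dthetadata, Theta, dTheta, w, N) is a hypothetical placeholder for the actual data columns and dsolve/numeric procedures, and the smoothing is just a crude 5-point moving average rather than anything sophisticated.

```
# tdata, thetadata, dthetadata : sample Vectors (placeholder names)
# Theta, dTheta : dsolve/numeric procedures for theta(t) and theta'(t)
# w : relative weight given to the smoothed-derivative residuals
N := numelems(tdata):
sm := Vector(N, i -> add(dthetadata[j], j = max(1, i-2) .. min(N, i+2))
                     / (min(N, i+2) - max(1, i-2) + 1)):
obj := add((Theta(tdata[i]) - thetadata[i])^2, i = 1 .. N)
       + w*add((dTheta(tdata[i]) - sm[i])^2, i = 1 .. N):
```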

@Rariusz Thank you. I hope that you can understand why your latest response contains the details which ought to have been provided originally.

As is not uncommon for this kind of nonlinear optimization problem with an oscillatory aspect, there are many local minima which are "close" to globally optimal for some reasonable objective metrics. That might be a problem.

In just a few minutes of running time, a worksheet of mine produces a similar plot for ode_x2 which compares quite closely to your simulation, for a fit to the theta(t) data from the third column. It uses a continuous approximation of V(t) built from the data in the second column. The worksheet took me about 15 minutes to write.

But... depending on the run, and depending on the ranges I supply for the parameters, I can get several sets of results for the parameters. All produce a reasonable fit (plot) for the simulation of theta(t) and diff(theta(t),t). But the frequency of oscillation of the simulated i(t) varies a great deal, from slow to very fast.

If there is some physical reason (constraints on the motor design, etc) why a low/medium/high frequency of oscillation is needed for the simulated i(t) , or some tighter ranges for the parameters, then now would be a great time to mention it.

Additionally, it's not clear whether you need to match the rapidly varying data for diff(theta(t),t) with an equally rapidly varying simulation of that dependent quantity, or whether you would be satisfied with a simulation that approximates a smoothed curve of that data (fourth column).

What kinds of entries does the Matrix have? (eg. floats, integers, rationals, symbolic expressions?)

For floating-point entries, done right, it can take about a second (depending on your machine, of course) at hardware double precision.

How fast do you need it to be? How accurate? Is the Matrix dense?

P.S. A 600-by-600 Matrix is 2-dimensional, not "high-dimension". It is not small, but it has only two dimensions.
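
In case it helps, here is the kind of quick check I mean. The choice of an eigenvalue computation is my assumption, made just for illustration since the intended computation wasn't stated, but it shows the datatype=float[8] setup and a timing.

```
restart;
# A dense 600-by-600 Matrix with hardware double-precision entries.
M := Matrix(600, 600, (i, j) -> evalf(sin(i*j)), datatype = float[8]):
# Time a representative dense floating-point computation.
CodeTools:-Usage( LinearAlgebra:-Eigenvalues(M) ):
```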

@Kitonum This person has not yet figured out how to tell us which old version he's using, despite posting many questions on this site where the version affects the answer.

I'm trying to get this straight, and sorry if I'm being dense.

Is this right:
 - The second column represents values for V(t) at the corresponding data values
    in the first column (ie, t). An interpolating function formed from these data values
    could be used as a known function in the dsolve call (where the OP originally has just V).
 - The goal is to find values for the parameters J,b,K,R,L such that the numeric
    dsolve solution procedure assigned to ode_x2 will fit the data values in the third
    column (ie, theta(t)).

If that is correct, then what does the i(t) (for which a procedure in the dsolve return is assigned by the OP to ode_x1) represent, and does it correspond to anything in the supplied data? Is there some quantitative or qualitative goal for ode_x1 at the optimal parameter values?

Is the fourth column of data supposed to come into the objective function (eg, fit to the diff(theta(t),t) from the dsolve procedure, by contributing to the objective)?

What are permissible ranges for the five parameters?
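
If my reading above is right, then one way to set up the interpolating function for V(t) is sketched below. The names data, sys, and ics are placeholders (the actual worksheet isn't reproduced here); the point is only the general technique of a procedure that returns unevaluated for non-numeric arguments, passed via dsolve's known option.

```
# data : placeholder Matrix, with t in column 1 and V(t) in column 2
Vfun := proc(v)
  if not v::numeric then
    return 'procname'(v);   # remain unevaluated for symbolic arguments
  end if;
  CurveFitting:-ArrayInterpolation(data[.., 1], data[.., 2], v);
end proc:
# Substitute Vfun(t) for the bare name V, and declare it as known.
sol := dsolve(eval(sys, V = Vfun(t)) union ics, numeric,
              known = Vfun, parameters = [J, b, K, R, L]):
```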

@radaar If you change the assignment to y:=copy(x) and add return y as a new last statement then you don't get the result you showed. sigma_ac2.mw

If you simply add a new last statement return copy(y) , and leave the assignment y:=x as it was originally, then you get the results you showed. sigma_ac1.mw

The distinction is in whether you want the ensuing min/max examination and replacement to take place on the copy of x, or to be done in-place on x itself.
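
A minimal illustration of that distinction, using a small Array:

```
restart;
x := Array([3, 1, 2]):
y := x:          # y and x are the very same mutable Array object
y[1] := 99:
x[1];            # 99, since the change happened in-place on x too
x := Array([3, 1, 2]):
y := copy(x):    # y is now an independent copy
y[1] := 99:
x[1];            # still 3, since x is unaffected
```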

@Mac Dude The case of a literal float multiplied by Pi, under an evalf call, seems like the odd man out to me.

I am not a fan of kernelopts(floatPi=true) , but I can understand how/why the other cases below behave as they do. It's a tricky business, though, and if there are additional subtleties to its behavior then that ought to be fully documented.

restart;

kernelopts(version);

`Maple 2018.0, X86 64 LINUX, Mar 9 2018, Build ID 1298750`

#
# This is the "new" floatPi behavior.
# It is (at least) comprehensible, even if
# not to everyone's taste.
#
restart;
kernelopts(floatPi=true): # default
Pi*704.0;

2211.681228

restart;
kernelopts(floatPi=false):
Pi*704.0;

704.0*Pi

#
# This is "new" floatPi behavior, together
# with normal evaluation rules for procedure
# calls (arguments evaluated up front).
#
restart;
kernelopts(floatPi=true):
trace(sin):
sin(Pi*704.0);

execute sin, args = 2211.681228
{--> enter sin, args = 2211.681228
<-- exit sin (now at top level) = -0.1272144399e-6}

-0.1272144399e-6

restart;
kernelopts(floatPi=false):
trace(sin):
sin(Pi*704.0);

execute sin, args = 704.0*Pi
{--> enter sin, args = 704.0*Pi

704.0

0.

0

0

<-- exit sin (now at top level) = 0}

0

#
# Since `evalf` has special-evaluation rules it can delay
# the evaluation of Pi*Q and do what it wants before passing
# as argument to `sin`.
#
restart;
kernelopts(floatPi=true):
trace(sin):
Q := 704.0:
evalf(sin(Pi*Q));

execute sin, args = 2211.681228
{--> enter sin, args = 2211.681228
<-- exit sin (now at top level) = -0.1272144399e-6}

-0.1272144399e-6

restart;
kernelopts(floatPi=false):
trace(sin):
Q := 704.0:
evalf(sin(Pi*Q));

execute sin, args = 704.0*Pi
{--> enter sin, args = 704.0*Pi

704.0

0.

0

0

<-- exit sin (now at top level) = 0}

0.

#
# Using % for the float gets behavior like with an
# assigned name.
#
restart;
kernelopts(floatPi=true):
trace(sin):
704.0:
evalf(sin(Pi*%));

execute sin, args = 2211.681228
{--> enter sin, args = 2211.681228
<-- exit sin (now at top level) = -0.1272144399e-6}

-0.1272144399e-6

#
# But now, with a literal float instead of the assigned
# `Q`, there seems to be no floatPi effect.
#
# This is the interesting case.
#
restart;
kernelopts(floatPi=true):
trace(sin):
evalf(sin(Pi*704.0));

execute sin, args = 704.0*Pi
{--> enter sin, args = 704.0*Pi

704.0

0.

0

0

<-- exit sin (now at top level) = 0}

0.

 

Download floatPi_anomaly.mw

So using an Equation Label seems to give the behavior of a literal float (differing from that of an assigned name or %).

Let's try to narrow down the interesting case, which seems to be that of a function call made under evalf. The other cases show behavior similar to automatic simplification.

kernelopts(version);

`Maple 2018.0, X86 64 LINUX, Mar 9 2018, Build ID 1298750`

restart;
kernelopts(floatPi=true):
kernelopts(floatPi);
704.0*Pi;

true

2211.681228

#
# Like automatic simplification
#
restart;
kernelopts(floatPi=true):
kernelopts(floatPi);
'704.0*Pi';

true

2211.681228

restart;
kernelopts(floatPi=true):
kernelopts(floatPi);
f('704.0*3');
f('704.0*Pi');

true

f(2112.0)

f(2211.681228)

restart;
kernelopts(floatPi=true):
kernelopts(floatPi);
evalf(f('704.0*3'));
evalf(f('704.0*Pi'));  # ???

true

f(2112.0)

f(704.0*Pi)

restart;
kernelopts(floatPi=true):
kernelopts(floatPi);
f:=proc() return eval(args[1], 1); end proc:
trace(f);
evalf(f('704.0*3'));
evalf(f('704.0*Pi')); # ???

true

f

{--> enter f, args = 2112.0
<-- exit f (now at top level) = 2112.0}

2112.0

{--> enter f, args = 704.0*Pi
<-- exit f (now at top level) = 704.0*Pi}

2211.681228

 

Download more_floatPi.mw

I have been careful to put a forcing, explicit call of kernelopts(floatPi=true) or kernelopts(floatPi=false) after each restart above, to ensure that the setting was established explicitly each time. The default is supposed to be true. But I found the following, which I wanted to side-step:


 

restart;

kernelopts(floatPi=true):
kernelopts(floatPi);

true

kernelopts(inline=true):
kernelopts(inline);

true

kernelopts(assertlevel=0):
kernelopts(assertlevel);

0

restart;

kernelopts(floatPi=false):
kernelopts(floatPi);

false

kernelopts(inline=false):
kernelopts(inline);

false

kernelopts(assertlevel=1):
kernelopts(assertlevel);

1

restart;

kernelopts(floatPi); # not reset to default!?

false

kernelopts(inline); # reset to default, true

true

kernelopts(assertlevel); # reset to default, 0

0

 

Download kernelopts_anomaly.mw

@Joe Riel Not all readers will know the difference between loading from a Library Archive (scanning the archives for the name, adjusting the permanent store, and executing its ModuleLoad if it is a module) and executing a call to with (thus rebinding the names of exports, if a module).

So the term "load" is somewhat ambiguous. And the subtle distinctions might be relevant.

Loading a module package, eg. by referencing the name of an export (triggering loading from the Archive, and thence execution of its ModuleLoad), can occur even without calling with.

It'd be extra effort, but it might help clarify the issues if you used terms with strict definitions, or a glossary.

Just a thought.

@Mac Dude Given the subtleties in play, it is not prudent for me to comment on changes you made to an example without the complete code to reproduce them. An upload would do. For example, did you set kernelopts(floatPi=false) earlier?

I suspect that the discrepancy in the floating-point roundoff effect is due to some interplay between the use of an Equation Label and kernelopts(floatPi).

Notice how the argument to sin differs in the last three statements below.

restart;

kernelopts(version);

`Maple 2015.2, X86 64 LINUX, Dec 20 2015, Build ID 1097895`

(1)

kernelopts(floatPi=true): # default

Digits:=15;

15

(2)

trace(sin):

x1:=Int(3.52*10^8, ti = 0 .. 1):

R:=value(x1);

352000000.

(3)

sin(2*Pi*(3));

execute sin, args = 2211681228.12721
{--> enter sin, args = 2211681228.12721

<-- exit sin (now at top level) = -0.443987770092724e-5}

 

-0.443987770092724e-5

(4)

sin(2*Pi*R);

execute sin, args = 2211681228.12722

{--> enter sin, args = 2211681228.12722
<-- exit sin (now at top level) = 0.556012229902952e-5}

 

0.556012229902952e-5

(5)

sin('2*Pi'*R);

execute sin, args = 2211681228.12721

{--> enter sin, args = 2211681228.12721
<-- exit sin (now at top level) = -0.443987770092724e-5}

 

-0.443987770092724e-5

(6)

 

Download floatPiEqnLabel.mw

@Earl The frontend command is not necessary here, and I apologize if its inclusion was confusing. (I reach for it as a partial knee-jerk reaction when I see the derivatives, and I was trying some other things first.)

Here is the code, using eliminate without frontend. It is straightforward.

restart;

eq1 := -T*sin(theta(t)) = m*(diff(X(t), t, t)
       +L*(diff(theta(t), t, t))*cos(theta(t))
       -L*(diff(theta(t), t))^2*sin(theta(t))):

eq2 := T*cos(theta(t))-m*g = m*(diff(Y(t), t, t)
       +L*(diff(theta(t), t, t))*sin(theta(t))
       +L*(diff(theta(t), t))^2*cos(theta(t))):

K := eliminate({eq1,eq2},{m}):

conds := simplify(map(`=`,K[2],0)):

conds[1]/T;

L*(diff(diff(theta(t), t), t))+(diff(diff(X(t), t), t))*cos(theta(t))+sin(theta(t))*(diff(diff(Y(t), t), t)+g) = 0

(1)

 

Download eliminate_examp2.mw

There are a variety of ways to eliminate m from this particular pair of equations. Taking the difference of two solve(...,m) calls is one alternative, but I find that somewhat ad hoc in the sense that it is not so generally applicable (in general, both equations might not contain m, or be solvable for it, etc).
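
For completeness, that ad hoc alternative can be sketched as below, re-entering the same eq1 and eq2 as in the worksheet above.

```
restart;
eq1 := -T*sin(theta(t)) = m*(diff(X(t), t, t)
       + L*diff(theta(t), t, t)*cos(theta(t))
       - L*diff(theta(t), t)^2*sin(theta(t))):
eq2 := T*cos(theta(t)) - m*g = m*(diff(Y(t), t, t)
       + L*diff(theta(t), t, t)*sin(theta(t))
       + L*diff(theta(t), t)^2*cos(theta(t))):
# Solve each equation separately for m, then set the difference to zero.
s1 := solve(eq1, m):
s2 := solve(eq2, m):
simplify(s1 - s2) = 0;
```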

To me this seems reasonable as a first approach: If you want to eliminate m then try the eliminate command.  Just my $0.02.

(I like Kitonum's ingenuity, hence my upvote for his answer.)

@vanzzy Post your follow-ups on this topic/thread here. Most people cannot access the very old Maple 12 version. I won't have such access until later in the week at earliest.

I will delete more duplicates of this topic thread, even if you're just running into problems with the code people posted here.

@Mariusz Iwaniuk Calling simplify up front on the integrand is not a whole lot easier than calling convert, IMO.

But sure, it works for the second example above, even in Maple 2016 which is the version with which the OP saved his upload.

Using Maple 2016.2 for 64-bit Linux, the first example sometimes produces an error "too many levels of recursion", but more often loses the kernel connection, if simplify is first applied to the integrand as Mariusz has shown.

restart

kernelopts(version)

`Maple 2016.2, X86 64 LINUX, Jan 13 2017, Build ID 1194701`

(1)

`assuming`([simplify(ln(r)*sin(k1*Pi*ln(r/Rs)/ln(r4/Rs))/r)], [Rs > 0, r4 > 0, r4 > Rs, k1::posint])

ln(r)*sin(k1*Pi*(ln(Rs)-ln(r))/(-ln(r4)+ln(Rs)))/r

(2)

`assuming`([int(ln(r)*sin(k1*Pi*(ln(Rs)-ln(r))/(-ln(r4)+ln(Rs)))/r, r = Rs .. r4)], [Rs > 0, r4 > 0, r4 > Rs, k1::posint])

Error, (in sprintf) too many levels of recursion

 


Download 2_integrals_ac.mw

@vanzzy The `size` option was introduced in Maple 18.

Your Maple 12 came ten major releases ago, and is ten years old.

It is unproductive to post questions about Maple 12 without telling us that aspect. There is a "Product" drop-menu available when you submit a Question, where you can specify your version.

The good news is that the `size` option is unrelated to your label issue. I used it only to make the plot show a little more clearly.

@vanzzy What version of Maple are you using?

In really old versions you can set _EnvExplicit to true , if the explicit option is not valid when calling solve.

Don't forget the complications about what the "first" root might be, or what it evaluates to in floating-point (numeric root-finding and convergence), or that the curves may cross, and that there are discontinuities.

restart;

gm := V -> 1/sqrt(1-V^2):
T := w-k*V:
S := w*V-k:

f := unapply(I*B*gm(V)^3*T^3+gm(V)^2*T^2
             -I*gm(V)^3*T*S^2-1/3*I*B*gm(V)^3*S^3*T-1/3*gm(V)^2*S^2,
             w,B,V,k):

Hgen := simplify(rationalize(f(w,B,V,k)));

(-V^2+1)^(1/2)*(((3*k^2-w^2)*V^2-4*V*k*w-k^2+3*w^2)*(-V^2+1)^(1/2)+I*(B*V^3*w^3+(-3*B*k*w^2-3*B*k^2+3*w^2)*V^2+3*k*w*(B*k+2*B-2)*V-B*k^3-3*B*w^2+3*k^2)*(V*k-w))/(3*V^4-6*V^2+3)

altcols := ["Violet","Orange","Navy"]:

Vlist :=  [ 0.8, 0.9, 0.99 ];

[.8, .9, .99]

_EnvExplicit := true;

true

HSols:=[solve(numer(Hgen),w)]:
indets(%,name);

{B, V, k}

#
#
#   HSols[1] is the "first" root
#
#

k := 1:
R1 := simplify(HSols[1], size):
Pk1 := plots:-display(
  seq(plot([Im](eval(R1, V=Vlist[i])), B=0.0 .. 1.0,
           linestyle=[solid], gridlines=false,
           adaptive=false, numpoints=100,
           color=altcols[i], legend=[typeset(V=Vlist[i])]),
      i=1 .. nops(Vlist)),
  title=typeset('k'=k),
  gridlines=false
);
k := 'k':

k := 5:
R1 := simplify(HSols[1], size):
Pk5 := plots:-display(
  seq(plot([Im](eval(R1, V=Vlist[i])), B=0.0 .. 1.0,
           linestyle=[solid], gridlines=false,
           adaptive=false, numpoints=100,
           color=altcols[i], legend=[typeset(V=Vlist[i])]),
      i=1 .. nops(Vlist)),
  title=typeset('k'=k),
  gridlines=false
);
k := 'k':

 

Download rootplot_alt_EnvExplicit.mw
