
MaplePrimes Activity


These are replies submitted by acer

[Silly clarification question by me deleted.]

Obtaining a specific number of reduced colors is an interesting problem.

@Adam Ledger Thanks for that. The situation seems similar to my conjecture. I will submit a bug report.

Do you have any conditions on i and j (i.e. integers, positive, odd or even, etc.)? Have you considered that, for your particular restrictions on i and j, the equation might always hold? If so, might your unstated goal be to verify that?

FWIW, here is a (somewhat) related problem arising from `solve`,

restart;

ee:=(-sin((2*Pi*i-Pi)/(2*i-j))-sin((Pi*j-Pi)/(2*i-j))
    -sin(2*Pi*i/(2*i-j))-sin(Pi*j/(2*i-j)))/(2+2*cos(Pi/(2*i-j)));

(-sin((2*Pi*i-Pi)/(2*i-j))-sin((Pi*j-Pi)/(2*i-j))-sin(2*Pi*i/(2*i-j))-sin(Pi*j/(2*i-j)))/(2+2*cos(Pi/(2*i-j)))

solve(ee,i);

Error, (in RootOf) expression independent of, _Z

solve(ee,{i,j});

Error, (in RootOf) expression independent of, _Z

 

Download AL.mw

 

 

@Adam Ledger Names like _S000100 can be generated by the procedures called internally by the solve command. (It uses such names as placeholders for expressions with certain conditions upon them.) Offhand it sounds like solve (or its internals, such as within SolveTools) may have encountered an unexpected situation which it was unable to handle.

If you can provide sample code that reproduces the error message then we might be able to offer some concrete suggestion, or confirm it as a bug.

@Jaqr Offhand I might consider raising the working precision within Dens, while specifying a coarser tolerance to evalf(Int(...)) through its epsilon option. And then perhaps the result within Dens could be examined and allowed through if its imaginary part were small enough.

Note that if you only increase the working precision in evalf(Int(...)) then it will default to striving for even more accuracy (which may be partly unnecessary extra cost) to match Digits. If you're fortunate then you might be able to increase the working precision (to get more accurate numeric evaluations of the integrand, with better control of roundoff error) while still keeping the target accuracy (the epsilon option) coarser.

Consider this call,

  res := evalf(Int(...., digits=d, epsilon=eps ));

You may be able to increase that d while keeping eps only fine enough to ensure that the imaginary part of res is acceptably small. Let's suppose that you'd be OK with allowing abs(Im(res)) < 1e-6, say, in which case you might be OK with Dens returning just Re(res). You can still increase d while using eps=1e-6, and if you're fortunate that increased d will control roundoff error in the integrand well enough. This approach might be less expensive than only increasing Digits (or a higher d) within Dens while letting evalf/Int use epsilon=10^(-d) by default; you might not really need such a fine default epsilon. Hope that makes sense.
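Here is a minimal sketch of that shape. The integrand is just a stand-in for whatever Dens actually computes, and the 1e-6 cutoff is an assumed tolerance:

Dens := proc(a)
  local res;
  # higher working precision for evaluating the integrand, but only
  # about 6 digits of requested accuracy from the quadrature
  res := evalf(Int(exp(-a*t)*cos(t)/(1 + 1e-8*I + t^2), t = 0 .. 10,
                   digits = 20, epsilon = 1e-6));
  if abs(Im(res)) < 1e-6 then
    Re(res);
  else
    error "imaginary part too large: %1", Im(res);
  end if;
end proc:

Dens(1.5);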

However, as seen above I suspect the epsilon used for the evalf/Int call might need to be fine enough to let the optimization routine correctly estimate the gradients to its own satisfaction. I didn't try it, but it might even be possible to use UseHardwareFloats=false and, say, Digits=K=10 at the top level (so that NonlinearFit didn't demand overly accurate results from Dens while estimating gradients) while still allowing Dens to set UseHardwareFloats=deduced, Digits higher than K, and epsilon=10^(-K) to match the higher level's working precision.

In my experience getting the optimal performance out of a nested numeric problem like this requires fiddling with all the options. If the working precision is not enough for the target accuracy at any stage then the whole thing can just appear to stall completely, which is often even more awkward than an outright error message.

Alternatively, it might be possible to use evalc (and maybe Re, or simplify under assumptions) to symbolically manipulate the integrand so that the imaginary part of res vanishes or at least comes out smaller. We'd probably need to see your more involved examples to make decent suggestions about that...

@Jaqr If there is only one independent variable in the model function then it can be supplied to the NonlinearFit command as either a list or as just the single name. Both work, in that code.

This is documented in the Parameters section of the help page for NonlinearFit (italics mine):

   v -- name or list(names); name(s) of independent variables in the model function
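For example, with made-up model and data standing in for yours, both of these calls are accepted:

with(Statistics):
X := [1, 2, 3, 4, 5]:  Y := [2.1, 3.9, 6.2, 7.8, 10.1]:
NonlinearFit(a*x + b, X, Y, x);     # independent variable as a single name
NonlinearFit(a*x + b, X, Y, [x]);   # independent variable as a list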

 

@Adam Ledger The help-page for the series command specifically mentions that the special "series" data structure is discussed on the help-page for type,series. And that second reference does discuss using the op command on the result returned by the series command.

Additionally, the underlying SERIES internal data structure (printed by the dismantle command) is among those laid out in the Appendix of the Programming Manual.

I've made the above links as URLs, but you can also read those pages directly within Maple's Help system.

 

Others have answered that you can test for linearity and supply the optimalitytolerance option to LSSolve only where relevant.

But similar to your earlier Question, I'd also suggest (as a Comment) using simple bounds rather than setting up their equivalent as supplied constraints.

Also, you have a few choices about the command to use.

restart;

with(Optimization);

[ImportMPS, Interactive, LPSolve, LSSolve, Maximize, Minimize, NLPSolve, QPSolve]

list1 := [0.127345646906885e-1-0.186555124779203e-2*D32-0.282637107183903e-3*D33, -0.427981372479296e-2+0.184372031092059e-1*D32+0.366060535331614e-2*D33, -0.279056870350439e-1+0.497068050546578e-1*D32+0.300683751452398e-1*D33, -0.159123153512316e-1-0.200310190531632e-2*D32+0.110642730744851e-1*D33, -0.358677392345135e-2-0.477282036776905e-2*D32+0.279495051520868e-2*D33, -.158025406913808+.301050727553470*D32+0.991309483578555e-1*D33, -0.767170565747362e-1+0.287589092672543e-1*D32+0.380554240544922e-1*D33, 0.134025593814442e-1-0.163134747085529e-1*D32-0.978424817965354e-2*D33, 0.177936771272063e-1-0.193555892719151e-1*D32-0.117324484775754e-1*D33, .136323651819599-.101383912457110*D32-0.800923073293239e-1*D33, 0.658540765374620e-1-.134530865070270*D32-0.449966493124888e-1*D33, 0.366589441985546e-1-0.923517762126252e-1*D32-0.313964041159186e-1*D33, 0.200320004853408e-2-0.454710553314498e-2*D32-0.121523285055995e-2*D33, 0.362766049610844e-2-0.103494064252009e-1*D32-0.347855768021822e-2*D33, 0.431461474510905e-2-0.122762710681104e-1*D32+0.305664301894285e-3*D33]:

LSSolve(list1, [0 <= D32, 0 <= D33]);

[0.568341870143581306e-3, [D32 = HFloat(7.07504600183494e-310), D33 = HFloat(1.5919542520404282)]]

bnds:=seq(var=0.0 .. infinity, var=[D32, D33]);

D32 = 0. .. infinity, D33 = 0. .. infinity

LSSolve(list1, bnds);

[0.568341870143581197e-3, [D32 = HFloat(0.0), D33 = HFloat(1.5919542520404282)]]

obj:=1/2*add(ee^2,ee=list1):
obj:=simplify(%); # optionally

0.6606247418e-1*D32^2+(-0.7794935628e-1+0.4996249667e-1*D33)*D32-0.3501541583e-1*D33+0.1099762000e-1*D33^2+0.2843981192e-1

# Building the objective that LSSolve documents as using,
# and passing it to the general purpose Optimization:-Minimize command,
# which dispatches to a suitable solver (here, QPSolve).
infolevel[Optimization]:=1:
Minimize(obj, bnds);
infolevel[Optimization]:=0:

QPSolve: calling QP solver
QPSolve: number of problem variables 2
QPSolve: number of general linear constraints 0

[0.5683418713028e-3, [D32 = HFloat(0.0), D33 = HFloat(1.591954251465317)]]

# Building the same objective that LSSolve documents as using,
# and forcing dispatch to NLPSolve along with `optimalitytolerance`.
#infolevel[Optimization]:=1:
NLPSolve(obj, bnds, optimalitytolerance=0.001);
infolevel[Optimization]:=0:

[0.568341871302769502e-3, [D32 = HFloat(0.0), D33 = HFloat(1.591954235723754)]]

 

Download worksheet_help_modif.mw

@Adam Ledger You might find that the results from the dismantle command help in understanding how the op results come about.

You could run these in separate Execution Groups, to see what I mean.
restart;

op( series(exp(z),z,3) );

dismantle( series(exp(z),z,3) );

op( 1+z+(1/2)*z^2+O(z^3) );

dismantle( 1+z+(1/2)*z^2+O(z^3) );

op( MultiSeries:-series(exp(z),z,3) );

dismantle( MultiSeries:-series(exp(z),z,3) );

# True here, as resulting order happens to agree.
MultiSeries:-series(exp(z),z,3) = series(exp(z),z,3);
evalb( % );

# For this particular example the structure is also identical.
evalb( addressof(MultiSeries:-series(exp(z),z,3))
       = addressof(series(exp(z),z,3)) );

# But not here, as their concepts of the "requested" order differ.
MultiSeries:-series(sin(z),z,3) = series(sin(z),z,3);
evalb( % );

MultiSeries:-series(sin(z),z,4) = series(sin(z),z,4);
evalb( % );

# Make of the O(z^4) term what you will...
MultiSeries:-series(sin(z),z,3);

# Make of this what you will...
series(sin(z),z,3);

Another way to avoid any cancellation of factors (if you prefer the extra spacing as an effect of the multiplication) is to handle the terms separately,

labels=[z,Typesetting:-Typeset(Delta*z)/Typesetting:-Typeset(Delta*t)]
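For example, in a plot call (the plotted expression here is just an arbitrary stand-in), that option could be used as:

plot(sin(z), z = 0 .. 2*Pi,
     labels = [z, Typesetting:-Typeset(Delta*z)/Typesetting:-Typeset(Delta*t)]);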

@leafgreen 

There are two ways to enter subscripted names: as indexed names, or with a double underscore. The subscripting is just a visual effect, i.e. their 2D-Math pretty-printing.

Your problematic example contained the wrong kind of subscripted name, the indexed name, which is not valid as the parameter of an operator/procedure. As mentioned above, the double-underscore variant is valid as an operator parameter.
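A small illustration of the difference, using hypothetical names:

f := x__1 -> x__1^2;    # double underscore: x__1 is a plain symbol, so this is accepted
f(3);
g := x[1] -> x[1]^2;    # indexed name: not a valid operator parameter, so this errors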

Now, you say that there are other places in your worksheet that looked similar... and worked. It seems unlikely that you entered the wrong variant by keyboard typing in the problematic example (since you seem previously unaware that both were possible). One scenario is that you did a copy and paste of the double-underscore variant. In at least some older versions of Maple there was a GUI bug where a mouse-copied double-underscore name got pasted into 2D Input as the indexed name.

What version are you using?

 

@quo When using an option name or value such as none, you may have to ensure that it doesn't collide with some other bound version of that name (say, the export of a loaded package, or the name of a local at the current level).

So you can try using it as ':-none' instead of just none.

The colon-minus makes it mean the global version of the name, to guard against such collision. And those unevaluation quotes (single right-quotes) guard against the case that the unprotected global name might have been assigned.
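Here is a hypothetical illustration of such a collision, using plot's axes option and deliberately assigning the global name:

none := 3.14:                                # simulate an accidental assignment
plot(sin(x), x = 0 .. Pi, axes = none);      # the option now receives 3.14 and errors
plot(sin(x), x = 0 .. Pi, axes = ':-none');  # the quoted global name still works
none := 'none':                              # undo the assignment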

 

@Iza Do you get Tom's result if you replace that line by,

Force[i,..]:=convert(map(rhs,[op(fsolve({r1, r2}, {H,V_A}, H=0..1))]),Array);
or,
Force[i,..]:=Array(map(rhs,[op(fsolve({r1, r2}, {H,V_A}, H=0..1))]));

@gaurav_rs Another way would be to scale and translate the portion, bound it with a rectangle, and display it all as true plots.

I once thought that would be a burden, especially making the "axes" and tickmarks in the inset (textplot). But given that the inset curve looks somewhat jagged when done as an Image, it might be worth the effort. A good reusable procedure to do it with true inset plots would need to handle the scaling and offset details nicely.
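A rough sketch of that true-plots approach (the curves, scaling factors, and placement are all arbitrary stand-ins):

with(plots): with(plottools):
main := plot(sin(x), x = 0 .. 10):
# shrink a copy of the interesting portion and move it to a corner
sub := plot(sin(x), x = 0 .. 2):
inset := translate(scale(sub, 0.2, 0.2), 7.0, 0.5):
frame := rectangle([7.0, 0.7], [7.4, 0.5], style = line, color = black):
display(main, inset, frame);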

@phil2 Indeed, there are IMO several better/safer ways to install DirectSearch than to place additional files in the "lib" folder of one's Maple installation.

In Maple 2017 one can install the DirectSearch package from the Maple Cloud (a new feature of Maple 2017). This unpacks the files into a "toolbox" folder, whose "lib" subdirectory gets recognized by Maple automatically (without setting libname).

Or one could follow the instructions in the .zip file, if installing by hand from the Application Center (including in older Maple versions). The last time I looked there was a README in v.2 of the DirectSearch package available at the App Center. It described three other ways: 1) use the InstallerBuilder .mla, which self-unpacks to a toolbox folder, 2) create a toolbox folder in the appropriate place and copy the files by hand into its "lib" subfolder, or 3) place the files in any convenient folder and then set libname in a Maple initialization file.
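For example, approach 3) might amount to a single line like this in the Maple initialization file (.mapleinit on Linux/OSX, maple.ini on Windows), where the folder path is whatever location you actually chose for the DirectSearch files:

libname := "/home/user/maple/DirectSearch", libname: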

Could you not use CodeGeneration to produce the actual syntax of your target language?
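For example (assuming, say, C as the target language, with a toy procedure standing in for your actual code):

with(CodeGeneration):
p := proc(x, y) local s; s := x^2 + sin(y); return s; end proc:
C(p);    # emits C source for the procedure
# Other supported targets include Fortran, Matlab, and Python.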
