acer


MaplePrimes Activity


These are replies submitted by acer

@Rouben Rostamian  The difficulty you cite is indeed related to the colored replacement (here) not being a name, that's true. But more precisely it's due to its being a function call, rather than a string.

Typesetting:-mo (or mi, mn, etc) can be useful for purely display purposes. But so-called TypeMK atomic identifiers (as names proper) can sometimes provide a closer match in type to a given name.

It's a shame that after more than a decade these mechanisms of 2D typesetting are still under-documented.
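For instance, here's a minimal sketch of the distinction (the particular name and color are just for illustration):

```
# A call to Typesetting:-mo produces a function call -- useful for
# purely display purposes, but it is not a name and cannot be assigned to.
Typesetting:-mo("x", mathcolor = "Red");

# The corresponding TypeMK atomic identifier (note the name quotes)
# is a genuine name, so it can take an assignment.
`#mo("x",mathcolor = "Red")` := 5;
`#mo("x",mathcolor = "Red")` + 1;
```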

@Kitonum Thanks. An improvement would be to handle differently any name which was already an atomic identifier in TypeMK form (so as to inject a mathcolor into it, or replace such).

But that case is rarer. It may be adequate for the OP to handle double-underscore "atomic" subscripted names, which the above code demonstrates.

Of course, other marked up atomic identifiers (e.g. names with overdot, circumflex, etc, as from the Layout palette) can also be colored via the main menubar. And then assignment can be made, as above. Such colored names could be made with the mouse rather than programmatically, which I suspect is what Robert Lopez's Answer alludes to.

The mechanism by which NumericEventHandler works is supposed to be local to the execution of a procedure, thereby providing local context analogous to environment variables.

Can you show a complete example (and result) where that is inadequate or works otherwise?
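For example, here is a minimal sketch of that local behavior (the trivial handler is just for illustration):

```
f := proc(x)
  # This handler is in effect only while f executes, analogous
  # to an environment variable's scoping.
  NumericEventHandler(division_by_zero = proc() 0 end proc);
  1.0/x;
end proc:

f(0);    # the event is handled locally, per the handler above
1.0/0;   # outside f, the default handling applies
```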

Did you try Student:-Precalculus:-CompleteSquare ?

e:= x^2+y^2-8*x-12*y-92:

Student:-Precalculus:-CompleteSquare(e, [x,y]);

           (y - 6)^2 + (x - 4)^2 - 144

@Markiyan Hirnyk How did you come up with the number 9? It's used as an input to Mma's DominantColors command, in your screenshot. What led to the choice of 9?

@kuwait1 No, yy is not a pair (system) of differential equations. That's the whole point of the earlier replies.

Change the colon to a semicolon and you'll see how.

Don't forget that f(x,y) has been assigned a value of x*y .

[Silly clarification question by me deleted.]

Obtaining a specific number of reduced colors is an interesting problem.

@Adam Ledger Thanks for that. The situation seems similar to my conjecture. I will submit a bug report.

Do you have any conditions on i and j (i.e. integers, positive, odd or even, etc)? Have you considered that, for your particular restrictions on i and j, the equation might always hold? If so, might it be that your unstated goal is to verify that?

FWIW, here is a (somewhat) related problem arising from `solve`,

restart;

ee:=(-sin((2*Pi*i-Pi)/(2*i-j))-sin((Pi*j-Pi)/(2*i-j))
    -sin(2*Pi*i/(2*i-j))-sin(Pi*j/(2*i-j)))/(2+2*cos(Pi/(2*i-j)));

(-sin((2*Pi*i-Pi)/(2*i-j))-sin((Pi*j-Pi)/(2*i-j))-sin(2*Pi*i/(2*i-j))-sin(Pi*j/(2*i-j)))/(2+2*cos(Pi/(2*i-j)))

solve(ee,i);

Error, (in RootOf) expression independent of, _Z

solve(ee,{i,j});

Error, (in RootOf) expression independent of, _Z

 

Download AL.mw

 

 

@Adam Ledger Names like _S000100 can be generated by the procedures called internally by the solve command. (It uses such names as placeholders for expressions with certain conditions upon them.) Offhand it sounds like solve (or its internals, such as within SolveTools) may have encountered an unexpected situation which it was unable to handle.

If you can provide sample code that reproduces the error message then we might be able to offer some concrete suggestion, or confirm it as a bug.

@Jaqr Offhand I might consider raising the working precision within Dens, while specifying a coarser tolerance to evalf(Int(...)) through its epsilon option. And then perhaps the result within Dens could be examined and allowed through if its imaginary part were small enough.

Note that if you only increase the working precision in evalf(Int(...)) then it will default to striving for even more accuracy (which may be partly unnecessary extra cost) to match Digits. If you're fortunate then you might be able to increase the working precision (to get more accurate numeric evaluations of the integrand, with better roundoff error) while still keeping the target accuracy (epsilon option) coarser.

Consider this call,

  res := evalf(Int(...., digits=d, epsilon=eps ));

You may be able to increase that d while keeping eps just fine enough to ensure that res is accurate enough for its imaginary part to be acceptably small. Suppose you'd be OK with allowing abs(Im(res)) < 1e-6, say, in which case you might possibly be OK with Dens returning just Re(res). You can still increase d while using eps=1e-6, and if you're fortunate that increased d will control roundoff error in the integrand well enough. This approach might be less expensive than only increasing Digits (or d) within Dens while letting evalf/Int default to epsilon=10^(-d). You might not really need such a fine epsilon. Hope that makes sense.

However, as seen above I suspect, the epsilon used for the evalf/Int call might need to be fine enough to let the optimization routine correctly estimate the gradients to its own satisfaction. I didn't try it, but it might even be possible to use UseHardwareFloats=false and, say, Digits=K=10 at the top-level (so that NonlinearFit didn't demand too accurate results from Dens while estimating gradients) while still allowing Dens to set UseHardwareFloats=deduced and Digits=higher_than_K and epsilon=10^(-K) to match the higher level's working precision.

In my experience getting the optimal performance out of a nested numeric problem like this requires fiddling with all the options. If the working precision is not enough for the target accuracy at any stage then the whole thing can just appear to stall completely, which is often even more awkward than an outright error message.

Alternatively, it might be possible to use evalc (and maybe Re or simplify under assumptions) to symbolically manipulate the integrand so that the imaginary part of res vanishes, or at least comes out smaller. We'd probably need to see your more involved examples to make decent suggestions about that...
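A rough sketch of the first idea (here `integrand` and the range are hypothetical stand-ins for your actual problem):

```
Dens := proc(s)
  local res;
  # Higher working precision for evaluating the integrand (digits=30),
  # but only a coarse target accuracy (epsilon=1e-6) for the quadrature.
  res := evalf(Int(integrand(s, t), t = 0 .. 1,
                   'digits' = 30, 'epsilon' = 1e-6));
  # Let the result through only if its imaginary part is small enough.
  if abs(Im(res)) < 1e-6 then
    Re(res);
  else
    error "imaginary part of the integral is too large";
  end if;
end proc:
```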

@Jaqr If there is only one independent variable in the model function then it can be supplied to the NonlinearFit command as either a list or as just the single name. Both work, in that code.

In the Parameters section of the help page for NonlinearFit this is documented. (italics mine)

   v -- name or list(names); name(s) of independent variables in the model function
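For example (with made-up data, just to show that both calling forms are accepted):

```
with(Statistics):
X := Vector([1, 2, 3, 4], datatype = float):
Y := Vector([2.1, 3.9, 6.2, 8.1], datatype = float):

# Single independent variable: a bare name or a single-entry list both work.
NonlinearFit(a*x + b, X, Y, x);
NonlinearFit(a*x + b, X, Y, [x]);
```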

 

@Adam Ledger The help-page for the series command specifically mentions that the special "series" data structure is discussed on the help-page for type,series . And that second reference does discuss using the op command on the result returned by the series command.

Additionally, the underlying SERIES internal data structure (printed by the dismantle command) is among those laid out in the Appendix of the Programming Manual.

I've made the above links as URLs, but you can also read those pages directly within Maple's Help system.

 

Others have answered that you can test for linearity and supply the optimalitytolerance option to LSSolve only where relevant.

But similar to your earlier Question, I'd also suggest (as a Comment) using simple bounds rather than setting up their equivalent as supplied constraints.

Also, you have a few choices about the command to use.

restart;

with(Optimization);

[ImportMPS, Interactive, LPSolve, LSSolve, Maximize, Minimize, NLPSolve, QPSolve]

list1 := [0.127345646906885e-1-0.186555124779203e-2*D32-0.282637107183903e-3*D33, -0.427981372479296e-2+0.184372031092059e-1*D32+0.366060535331614e-2*D33, -0.279056870350439e-1+0.497068050546578e-1*D32+0.300683751452398e-1*D33, -0.159123153512316e-1-0.200310190531632e-2*D32+0.110642730744851e-1*D33, -0.358677392345135e-2-0.477282036776905e-2*D32+0.279495051520868e-2*D33, -.158025406913808+.301050727553470*D32+0.991309483578555e-1*D33, -0.767170565747362e-1+0.287589092672543e-1*D32+0.380554240544922e-1*D33, 0.134025593814442e-1-0.163134747085529e-1*D32-0.978424817965354e-2*D33, 0.177936771272063e-1-0.193555892719151e-1*D32-0.117324484775754e-1*D33, .136323651819599-.101383912457110*D32-0.800923073293239e-1*D33, 0.658540765374620e-1-.134530865070270*D32-0.449966493124888e-1*D33, 0.366589441985546e-1-0.923517762126252e-1*D32-0.313964041159186e-1*D33, 0.200320004853408e-2-0.454710553314498e-2*D32-0.121523285055995e-2*D33, 0.362766049610844e-2-0.103494064252009e-1*D32-0.347855768021822e-2*D33, 0.431461474510905e-2-0.122762710681104e-1*D32+0.305664301894285e-3*D33]:

LSSolve(list1, [0 <= D32, 0 <= D33]);

[0.568341870143581306e-3, [D32 = HFloat(7.07504600183494e-310), D33 = HFloat(1.5919542520404282)]]

bnds:=seq(var=0.0 .. infinity, var=[D32, D33]);

D32 = 0. .. infinity, D33 = 0. .. infinity

LSSolve(list1, bnds);

[0.568341870143581197e-3, [D32 = HFloat(0.0), D33 = HFloat(1.5919542520404282)]]

obj:=1/2*add(ee^2,ee=list1):
obj:=simplify(%); # optionally

0.6606247418e-1*D32^2+(-0.7794935628e-1+0.4996249667e-1*D33)*D32-0.3501541583e-1*D33+0.1099762000e-1*D33^2+0.2843981192e-1

# Building the objective that LSSolve documents as using, and
# passing it to the general purpose Optimization:-Minimize command,
# which dispatches to a suitable solver (here, QPSolve).
infolevel[Optimization]:=1:
Minimize(obj, bnds);
infolevel[Optimization]:=0:

QPSolve: calling QP solver
QPSolve: number of problem variables 2
QPSolve: number of general linear constraints 0

[0.5683418713028e-3, [D32 = HFloat(0.0), D33 = HFloat(1.591954251465317)]]

# Building the same objective that LSSolve documents as using,
# and forcing dispatch to NLPSolve along with `optimalitytolerance`.
#infolevel[Optimization]:=1:
NLPSolve(obj, bnds, optimalitytolerance=0.001);
infolevel[Optimization]:=0:

[0.568341871302769502e-3, [D32 = HFloat(0.0), D33 = HFloat(1.591954235723754)]]

 

Download worksheet_help_modif.mw

@Adam Ledger You might find that the results from the dismantle command help in understanding how the op results come about.

You could run these in separate Execution Groups, to see what I mean.

restart;

op( series(exp(z),z,3) );

dismantle( series(exp(z),z,3) );

op( 1+z+(1/2)*z^2+O(z^3) );

dismantle( 1+z+(1/2)*z^2+O(z^3) );

op( MultiSeries:-series(exp(z),z,3) );

dismantle( MultiSeries:-series(exp(z),z,3) );

# True here, as resulting order happens to agree.
MultiSeries:-series(exp(z),z,3) = series(exp(z),z,3);
evalb( % );

# For this particular example the structure is also identical.
evalb( addressof(MultiSeries:-series(exp(z),z,3))
       = addressof(series(exp(z),z,3)) );

# But not here, as their concept of the "requested" order differ.
MultiSeries:-series(sin(z),z,3) = series(sin(z),z,3);
evalb( % );

MultiSeries:-series(sin(z),z,4) = series(sin(z),z,4);
evalb( % );

# Make of the O(z^4) term what you will...
MultiSeries:-series(sin(z),z,3);

# Make of this what you will...
series(sin(z),z,3);

Another way to avoid any cancellation of factors (if you prefer the extra spacing as an effect of the multiplication) is to handle the terms separately,

labels=[z,Typesetting:-Typeset(Delta*z)/Typesetting:-Typeset(Delta*t)]
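In context, that option would be supplied to a plotting command, e.g. (the plotted expression here is just a made-up example):

```
plot(sin(t), t = 0 .. 1,
     labels = [z, Typesetting:-Typeset(Delta*z)/Typesetting:-Typeset(Delta*t)]);
```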