> f:=proc() return args; end proc:
> stopat(f):
> f(x-x=0);
f:
1* return args
DBG> args
Execution stopped: Stack limit reached.
acer
It may be that, for procedural form, there can be problems internal to the solvers when they come to compute numerical estimates of the gradient of the objective. Supplying the means to compute the gradient numerically is a possible workaround for such an issue.
The above did not manifest itself in D.J.Keenan's suggested usage, since in that case the first argument immediately evaluates to an expression (for which Maple can compute a formulaic gradient, symbolically).
So, one might wonder, how hard is it to supply the objective gradient oneself, given that the objective is taken as a "black box"? It's not so hard. Matrix form input, as the Optimization package defines that concept, is probably the easiest way to do it. I realize that the original posting mentioned procedural form input, but it should be possible to set up Matrix form input programmatically.
Below I'll show my attempt at solving with a supplied objective gradient, for the original example. I'm doing this despite the fact that it's clear that the `objective` procedure is equivalent to a simple expression. If the internal numerical computation of derivatives is really what's at fault underneath, then this technique may be necessary to avoid seeing *any* procedural-form multivariable problem stall (by mistakenly computing that the gradients are all zero).
Here's the example, in full, using this approach. (I made a minor edit to the `objective` procedure, where it looks as if the assignment N:=nargs was omitted.)
objective := proc(x1, x2, x3, x4, x5, x6, x7, x8, x9)
local phi, i, p, N;
N := nargs:
phi[0] := Pi/2;
for i from 1 to nargs do
phi[i] := args[i];
end do;
p := seq((sin(phi[i-1])*mul(cos(phi[j]), j=i..N-1))^2, i=1..N);
return p[7];
end proc:
some_point := [seq(RandomTools[Generate](float(range=0..evalf(2*Pi), method=uniform)), i=1..9)]:
p := proc(V)
global objective;
objective(V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]);
end proc:
objgrd := proc(V, W)
local i;
for i from 1 to 9 do
W[i] := fdiff(objective, [i], convert(V, list));
end do;
NULL;
end proc:
bl:=Vector(9,fill=0.0,datatype=float):
bu:=Vector(9,fill=evalhf(2*Pi),datatype=float):
Optimization[NLPSolve](9, p, [], [bl,bu],
maximize=true,
objectivegradient=objgrd, initialpoint=Vector(some_point),
method=sqp ); # or method=modifiednewton
acer
You could first try units of days (if you hope to integrate out as far as a year). Then if you have some success you could try something finer. Trying to integrate out as far as a year, in seconds, with a requested tolerance of 10^(-8) or so, is asking for a lot.
Creating one system out of all three that you had originally seems to me to be a good idea, possibly even necessary.
As for method, I would start out with rkf45, and then if you find that you really need a stiff solver then perhaps move to rosenbrock (or gear?).
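For reference, here is a minimal sketch of how selecting a method looks (the ODE and initial condition here are stand-ins for your own system, not the poster's actual equations):

sys := diff(x(t), t) = -x(t):   # stand-in ODE; substitute your combined system
ics := x(0) = 1:
sol := dsolve({sys, ics}, numeric, method=rkf45):
sol(10.0);                      # query the integrator at t = 10

Swapping method=rkf45 for method=rosenbrock is all it takes to try the stiff solver, once the system is set up as a single combined system.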
I seemed to be able to integrate out as far as a few thousand seconds, using your original setup (extremely fine scale, high requested accuracy, three separate systems) but with method=rkf45.
Using the three-system setup that you had, the stiff solver method=rosenbrock will probably require procedures like `diff/X1` (where that is formed by requesting more from the listprocedure output). I suspect that this can be accomplished with something like,
dX1 := subs(sol2, diff(x(t), t)):
`diff/X1` := proc () global dX1; dX1(args[1]) end proc:
But maybe that's not right. By doing that for all of dX,dY,dZ,dZ1,dY1,dX1 I was able to get dsolve/numeric to produce sol3. But I wasn't able to get it to produce numbers, without first running out of resources. I concluded this was due to the three-system approach and the very high accuracy requirements.
acer
There's nothing wrong with your syntax, quantum.
He did not place uneval quotes around the argument,
objective(x1, x2, x3, x4, x5, x6, x7, x8, x9)
and the result was that this evaluated immediately to,
sin(x6)^2*cos(x7)^2*cos(x8)^2
which is in fact what got used as the objective for him.
If the uneval quotes are added, so as to instead pass,
'objective'(x1, x2, x3, x4, x5, x6, x7, x8, x9)
then the same warning about "no iterations performed" shows up, along with an inferior answer, I believe.
So, what to do? I would suggest supplying the derivatives (in procedural form) and specifying a non-default method (as an option).
acer
I tried to watch maple do its work here, but...
> restart:
> stopat(`PD/PD`,22): # Maple 11.00
> D(x->sin(x));
[]
`PD/PD`:
22* body := pointto(disassemble(addressof(p))[6]);
DBG> next
Error, (in D/procedure) invalid keyword expression
OK, I can imagine that pointto() might not work properly within the debugger.
So I trace `PD/PD` instead. It returns x->cos(x) to `D`. OK.
So, continuing to watch D at work, it looks like it's the call reduce_opr(eval(d,1)) which turns x->cos(x) into cos.
How dangerous would it be to replace one's D with a version which did not call reduce_opr ?
acer
Here's what I thought, when I first read this. (It seems similar to Axel's thoughts, too...)
The given curve describes a circle, not a line. Any line radiating from the center of the circle will intersect that circle perpendicularly. This is true, in particular, of the line which emanates from the circle's center and passes through the point in question. So, find the line which passes through the circle's center and the given point.
Finding the equation of the line which passes through two distinct points is basic.
Calculating the tangent line, taking its slope, forming the negative reciprocal, etc., seems needlessly difficult. I would guess, from the simplicity of the question, that such a complicated approach is beyond the poster's current knowledge.
On the other hand, perhaps the fact that every radius intersects the circle perpendicularly is not supposed to be known by the original poster. Maybe that's unclear.
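As a sketch in Maple (with a made-up concrete instance, since the original circle and point aren't restated here): for a circle with center (1,2) and a point (4,6) on it, the sought perpendicular line just passes through those two points.

circ := (x-1)^2 + (y-2)^2 = 25:   # hypothetical circle, center (1,2), radius 5
pt := [4, 6]:                     # a point on it: (4-1)^2 + (6-2)^2 = 25
m := (pt[2]-2)/(pt[1]-1);         # slope through center and point: 4/3
line := y = 2 + m*(x - 1);        # the line normal to the circle at pt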
acer
If you suspect that your problem contains expressions or procedures whose floating-point evaluation shows numerical issues when computed at Maple's default working precision, then consider raising that setting.
The same goes for an optimization problem which, just by eye-balling its set-up, looks badly scaled. Or you could try to improve the scaling of the formulation.
The numerical issues might be due to round-off error accumulated during the floating-point evaluation of Qmess at numeric values of (lam,a). For "non-atomic" expressions, such as the formula in the body of Qmess, Maple's numeric computation model does not promise a particular level of accuracy. Rather, Maple uses a fixed working precision (possibly with guard digits for some subcalculations).
So, try raising the value of the Maple environment variable Digits. You might try 14 first, then move on to a value of 20, and then perhaps double it after that. You might also try to conceive of a means of corroborating your computed results within Maple itself (usually a good idea with most numerical computations, in most any program).
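For instance (a sketch, where Qmess stands in for your own procedure): since Digits is an environment variable, raising it inside a procedure affects only that call and whatever it invokes, with the old value restored automatically on return.

Digits := 14:              # raise the global working precision
# or, only within one procedure:
Qsafe := proc(lam, a)
  Digits := Digits + 10;   # environment variable: restored on procedure exit
  evalf(Qmess(lam, a));
end proc: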
If you find that higher working precision makes all your calculations unbearably slow, then you could try to find out just which parts of the calculation require the higher setting in order to attain satisfactory accuracy. For example, it may be that only Qmess requires it. You could try reformulating Qmess for numeric evaluation, or bump up Digits only within Qmess. You can also look at `evalf` as a means to do floating-point evaluation at an alternate fixed precision.
You might also wish to read about the options for NLPSolve, in particular the 'optimalitytolerance' option, whose default depends on the value of Digits.
acer
Maple is (mostly) interpreted, rather than compiled. The Maple kernel interprets the source of the routines which are stored in its system archives (libraries). There are some very nice benefits for us all, partially brought about by this aspect of Maple.
One such benefit is the runtime debugger, see the help-page ?stopat for some introduction. Another such benefit is that the contents of Maple's own library functions can be viewed by the user.
For example,
showstat(discont); # show with line numbers
interface(verboseproc=3):
eval(discont); # lineprint the routine
One can even view routines that are local to (but not exported from) a module, which are normally hidden. E.g.,
restart:
showstat(Statistics:-Visualization:-BuildPlotStruct);
kernelopts(opaquemodules=false):
showstat(Statistics:-Visualization:-BuildPlotStruct);
Some compiled routines, built into the kernel, are not accessible. E.g.,
showstat(print);
showstat(irem);
While there is some variation in quality, due to the long span over which Maple has evolved, the library code is rather good, and studying it can help one learn some math as well as the Maple language.
acer
One might call it isolating for an expression.
Now, it may matter to you whether it is allowed to introduce any subexpressions of the expression you are trying to isolate.
Consider this example, in which Vo/Vs does not appear explicitly.
eq := (Vs - Vo) = R;
If both the lhs and rhs of eq are divided by either Vs or Vo then the subexpression Vo/Vs (or Vs/Vo its reciprocal) will manifest itself. I can imagine that you might say yes or no to allowing that, depending on what you want to do with the answers.
So, would you say that either of these next two is a valid example of the sort of solution that you're after, given the example eq above?
try1 := Vo/Vs = Vo/(R+Vo);
try2 := Vo/Vs = (-R+Vs)/Vs;
Alternatively, you might not only disallow the above manipulations, but also discount recognition of the reciprocal Vs/Vo.
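As a sketch of coaxing the second sort of answer out of Maple, one can name the ratio explicitly (r here is a name I've introduced just for illustration):

eq := Vs - Vo = R:
isolate(eval(eq, Vo = r*Vs), r);   # substitute Vo = r*Vs, then solve for r

The result can then be re-expressed in terms of Vo/Vs by evaluating it at r = Vo/Vs, which recovers something like try2 above.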
acer
A := Matrix(7,7,[[1,-5,0,0,0,0,0],
[0,1,-5/4,0,0,0,0],[0,0,1,-5/4,0,0,0],
[0,0,0,1,-5/4,0,0],[0,0,0,0,1,-5/4,0],
[0,0,0,0,0,1,-5/4],[0,0,0,0,0,0,1]]):
B := Vector(7,[1,1/4,1/4,1/4,1/4,0,204]):
with(LinearAlgebra):
# Here's what happened at some of the steps.
B[7]:=B7: # generalization, instead of a particular B[7]
sol:=LinearSolve(A,B); # solve that system
eqs:=seq(sol[i],i=1..7); # not really necessary; same as sol[i], but shows them
# You want to know which values of B7 give integer
# values to all the members of that sequence.
# Each eqs[i] must be equal to a (very possibly
# different) integer. Integer solutions to the
# equation x=eqs[i] are sought. The isolve routine
# is for computing such things, and it allows one
# to specify a name in the event that the solutions
# are parametrized.
# Take the equation x=eqs[3] for example.
#> isolve(x=eqs[3],Z||3);
# {x = 499 + 625 Z3, B7 = 204 + 256 Z3}
# The equation in the above result which specifies
# x = 499 + 625*Z3 is of no immediate interest. If
# you know B7 then you can find all the x's using
# the eqs sequence, or using sol.
# And yes, the key is to extract the equation of the
# form B7=.... from the above result. Fortunately,
# Maple's `type` routine can check for such patterns
# easily. One of those two equations will be found
# to be of the right type, and one will not.
#> type( x = 499 + 625*Z3, identical(B7)=anything );
# false
#> type( B7 = 204 + 256*Z3, identical(B7)=anything );
# true
# Generate all those equations, including those with
# B7 given by parametrized equations, at once.
seq(isolve(x=eqs[i],Z||i), i=1..7);
# Extract from that only the equations of the form
# B7 = ...., and do it all at once. The `map`
# routine lets it all be done at once. The % token
# is maple's programmatic way to allow access to the
# previous result.
# I put {} around % so that there would be something
# for `map` to get a grip on. Otherwise, `map` would
# see all the sequence at once and wouldn't know
# which arguments were which.
# The above is a common trick in Maple. Suppose
# that you have a routine, `foo`, which can take
# two or more arguments. How do you know whether
# the call foo(a,b,c) was meant to be foo( a, (b,c) ),
# a two-argument call with a sequence of two things
# b and c as the second argument, *or* foo(a,b,c),
# a three-argument call? How convenient if you can
# instead do it as foo(a,{b,c}), when it's clear
# what the second argument is. So, passing sequences
# is generally ambiguous. Better to somehow
# encapsulate any sequence that you want passed as
# a single argument.
# Encapsulating a sequence in a list with [] or
# in a set with {} is a pretty common Maple trick.
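# A tiny illustration of the difference (a sketch;
# foo is an unassigned name here, so the calls stay
# unevaluated and just display their arguments).
#> s := a, b, c:
#> nops([s]); # the list wraps the sequence into one object
# 3
#> foo(s), foo([s]); # three arguments vs one
# foo(a, b, c), foo([a, b, c])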
# In my actual case, `map` is going to
# return a set if I pass it {%} instead of the
# sequence which is what % actually was. I used `op`
# on the result because I want the contents of the
# result set and not that actual set.
map(y->op(select(x->type(x,identical(B7)=anything),y)),{%}):
# So now there are seven equations of the form
# B7 = ...., for your particular Matrix A and Vector B.
# There are other examples of A and B for which the
# seven equations B7=... would contain an x. In such
# a case, this crude approach would break down and
# not work without some extra efforts. As it is,
# the example is simple enough.
# All seven of these equations B7=.... must have
# integer solutions, for each solution to your
# problem. So I called isolve on them all at once
# because they must *all* hold at once for the B7
# to be a valid solution value.
isolve(%);
# The above call gives a solution in which all
# the variables are parameterized. Only the B7 or Z7
# part of the result is of interest. So extract it,
# once again using `map` and % to denote the previous
# result.
select(x->type(x,identical(Z7)=anything),%);
# So, that might explain some of my thinking at the
# time.
# As for your other questions, you could consider
# these results.
> S1 := {u = v, a = b, f = g, a = g}:
> select(x->type(x,identical(a)=anything),S1);
{a = b, a = g}
> op(select(x->type(x,identical(a)=anything),S1));
a = b, a = g
> S := {u=v,a=b,a=g,f=g}, {q=r,a=d,a=p,j=k}:
> map( y->op(select(x->type(x,identical(a)=anything),y)), {S});
{a = b, a = g, a = d, a = p}
# At an even lower level,
S1;
op(S1);
# I hope that helps some.
acer
Your Matrix and Vector are given by,
A := Matrix(7,7,[
[1,-5,0,0,0,0,0],
[0,1,-5/4,0,0,0,0],
[0,0,1,-5/4,0,0,0],
[0,0,0,1,-5/4,0,0],
[0,0,0,0,1,-5/4,0],
[0,0,0,0,0,1,-5/4],
[0,0,0,0,0,0,1]]);
B := Vector(7,[1,1/4,1/4,1/4,1/4,0,204]);
You may be able to get rid of some overhead, and make the thing faster, by doing the most expensive part of the linear solving just once. The first and more expensive part is the LU decomposition of the Matrix. If you compute that part just once, up front, then what's left within the loop is the less expensive forward- and back-substitution done for each different B.
with(LinearAlgebra):
(p,lu):=LUDecomposition(A,output='NAG'):
for i from 4 by 4 to 2000 do
B[7] := i; X := LinearSolve([p,lu], B);
if not type(X,'Vector'(integer)) then
next; else print(X^%T); end if;
end do:
Faster still, one could compute the symbolic answer, and then within the loop merely instantiate it at values for the parameter. I.e.,
B[7]:=B7:
X:=LinearSolve(A,B):
for i from 4 by 4 to 2000 do
X_inst := eval(X,B7=i):
if not type(X_inst,'Vector'(integer)) then
next; else print(X_inst^%T); end if;
end do:
On the other hand, the example seems simple enough to grind out an answer. Maybe there are things in the numtheory package to do this, but that's not my field. So I take a crude approach below. (If anyone wants to teach me some number theory here, that could be fun.)
> B[7]:=B7:
> sol:=LinearSolve(A,B):
> eqs:=seq(sol[i],i=1..7):
> seq(isolve(x=eqs[i],Z||i), i=1..7):
> map(y->op(select(x->type(x,identical(B7)=anything),y)),{%}):
> isolve(%);
{Z3 = 4 _Z1, Z7 = 204 + 1024 _Z1, Z4 = 3 + 16 _Z1, Z6 = 51 + 256 _Z1, Z1 = _Z1,
Z5 = 12 + 64 _Z1, Z2 = _Z1, B7 = 204 + 1024 _Z1}
> select(x->type(x,identical(Z7)=anything),%); # the answer
{Z7 = 204 + 1024 _Z1}
So it looks like the solution is just 204 + 1024*n .
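As a quick sanity check (a sketch, reusing sol from above with B[7]=B7 still symbolic): instantiating B7 at the first few values 204 + 1024*n should produce all-integer solution Vectors.

for n from 0 to 2 do
  X := eval(sol, B7 = 204 + 1024*n):
  print( type(X, 'Vector'(integer)), X^%T );
end do: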
acer