acer

32343 Reputation

29 Badges

19 years, 328 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@vanzzy Why on earth did you not tell us what you actually want in the first place? Up until the last Comment you did not specify that you wanted only the imaginary part of one "root" (in fact you conveyed something else, IMO). And you did not bother to give the V values before.

You didn't even bother to edit the original Question and supply the equations in copy'able text (or upload in a worksheet). You said you would, but you didn't. Someone else had to do it for you.

restart;

gm := V -> 1/sqrt(1-V^2):
T := w-k*V:
S := w*V-k:

f := unapply(I*B*gm(V)^3*T^3+gm(V)^2*T^2
             -I*gm(V)^3*T*S^2-1/3*I*B*gm(V)^3*S^3*T-1/3*gm(V)^2*S^2,
             w,B,V,k):

Hgen := simplify(rationalize(f(w,B,V,k)));

(((3*k^2-w^2)*V^2-4*V*k*w-k^2+3*w^2)*(-V^2+1)^(1/2)+I*(V*k-w)*(B*V^3*w^3+(-3*B*k*w^2-3*B*k^2+3*w^2)*V^2+3*k*w*(B*k+2*B-2)*V-B*k^3-3*B*w^2+3*k^2))*(-V^2+1)^(1/2)/(3*V^4-6*V^2+3)

altcols := ["Violet","Orange","Navy"]:

Vlist :=  [ 0.8, 0.9, 0.99 ];

[.8, .9, .99]

HSols:=[solve(numer(Hgen),w,explicit)]:
indets(%,name);

{B, V, k}

#
#
#   HSols[1] is the "first" root
#
#

k := 1:
Pk1 := plots:-display(
  seq(plot([Im](eval(HSols[1], V=Vlist[i])), B=0.0 .. 1.0,
           linestyle=[solid], gridlines=false,
           adaptive=false, numpoints=100,
           color=altcols[i], legend=[typeset(V=Vlist[i])]),
      i=1 .. nops(Vlist)),
  title=typeset('k'=k),
  gridlines=false
);
k := 'k':

k := 5:
Pk5 := plots:-display(
  seq(plot([Im](eval(HSols[1], V=Vlist[i])), B=0.0 .. 1.0,
           linestyle=[solid], gridlines=false,
           adaptive=false, numpoints=100,
           color=altcols[i], legend=[typeset(V=Vlist[i])]),
      i=1 .. nops(Vlist)),
  title=typeset('k'=k),
  gridlines=false
);
k := 'k':

 

 

Download rootplot_alt.mw

@erik10 By default dsolve/numeric will return a procedure which takes a numeric value t and returns a list containing t, x(t), and y(t), all estimated for the given numeric value of the argument. You can call this procedurelist output, as it's a single procedure which returns the results in a list. For any given numeric t this single procedure will return all the dependent results computed at that numeric value. If you wanted to use this with, say, fsolve then you'd have to pass fsolve some wrapping procedure which took in numeric t and extracted just a single dependent value from the list. That is doable, but a little awkward.

The alternative suggested below is to specify output=listprocedure, which returns a list of procedures, each of which can take a numeric argument and return the estimated scalar result (respectively, for x(t), or y(t), etc). You can extract these procedures from the list, individually, and work with them individually. This is easy to use, and can also bring flexibility, as shown in the Answers to this thread which utilize fsolve.
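For instance, here is a minimal sketch of the listprocedure approach, using a simple stand-in system (not your actual DE; the names sys, X, and Y are just for illustration):

restart;
sys := { diff(x(t),t) = y(t), diff(y(t),t) = -x(t), x(0) = 0, y(0) = 1 }:

sol := dsolve(sys, numeric, output = listprocedure):
X := eval(x(t), sol):   # a procedure returning just the scalar x-value
Y := eval(y(t), sol):   # likewise for y

X(1.0);                 # estimate of x at t=1.0
fsolve(X, 2.0 .. 4.0);  # a root of x(t), found by passing X straight to fsolve

Each extracted procedure takes a numeric t and returns a scalar, so it can be handed directly to fsolve, plot, and so on, with no wrapping procedure needed.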

Both forms of output involve procedures which use the same kinds of mechanisms to numerically solve the DE. The basic mechanism is to construct polynomial interpolants (or a piecewise of such) with coefficients that guarantee the accuracy specified when dsolve was called (at computed/requested data points).

Sometimes (eg, a system with two dependent variables, say x(t) and y(t) and a coupled system) it can be more efficient to use the single procedurelist output, rather than extract two procedures from listprocedure output. It can be beneficial to have the mechanism not duplicate results. But verifying such claims generally, via timing all that, is tricky (as garbage collection and memoization, etc, can come into play).

Now, let's talk about plotting. The plots:-odeplot call you made in a prior Comment computes 201 data points for each of x(t) and y(t). The plot driver will use piecewise low order interpolation of those data points for the purpose of rendering the curve. It does not make sense to expect that the zoomed plot will be relatively accurate to more than a few decimal places for independent t values lying between those 201 computed data points. When you zoom in on a plot all that happens is that the plot driver recomputes from the previously computed data points; it doesn't request additional data points or additional accuracy. Zooming into a plot just uses the lower order interpolation of the plotter, which is completely separate from any accurate, high order interpolation within the dsolve/numeric engine or its returned procedures.

Now let's talk briefly about using dsolve/events versus using fsolve with the procedures extracted from output=listprocedure. The former has the potential to be quite a bit more efficient, as it allows the same mechanism that computes the dependent results to track itself and, say, detect a zero. It also knows how accurately it is computing the results. In contrast, the fsolve approach can resort to a numeric root-finding method like Newton's or secant, which can bounce all over the domain. And it has to find a suitable initial point that converges. And it has less idea about how accurate the function evaluations are (or could be requested) than does dsolve itself. So here we have a classic dichotomy of performance versus ease-of-use.
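As a rough sketch of the events approach, again with a toy system chosen only for illustration:

restart;
sys := { diff(x(t),t) = y(t), diff(y(t),t) = -x(t), x(0) = 1, y(0) = 0 }:

# Ask the integrator itself to detect the zero of x(t) and stop there.
sol := dsolve(sys, numeric, events = [[x(t) = 0, halt]]):
sol(10.0);  # integration halts near the zero (with a warning), rather than running to t = 10

Here the zero is detected by the same mechanism that computes the solution, at the accuracy of the integration itself.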

By coincidence, there was a recent, related question on stackexchange.com .

@gaurav_rs The numeric quadrature of the summation can be done pretty quickly at Digits=16 too, or at Digits=35. And using the closed form for the sum then for Digits=16 is faster still.

So the slowness you claim may be much more related to your original approach than it is to Digits.

restart;

Digits := 16:

f := (r,t,n) -> add(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1))
);

memory used=172.66MiB, alloc change=38.01MiB, cpu time=1.05s, real time=1.05s, gc time=85.43ms

0.2274316631710838e-1

restart;

Digits := 16:

f := (r,t,n) -> Sum(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1))
);

memory used=156.61MiB, alloc change=126.02MiB, cpu time=900.00ms, real time=871.00ms, gc time=76.16ms

0.2274316631710838e-1

restart;

Digits := 16:

f := unapply(sum(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n), [r,t,n]):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1, method=_Dexp))
);

memory used=52.02MiB, alloc change=4.00MiB, cpu time=354.00ms, real time=355.00ms, gc time=41.19ms

0.2274316631710838e-1

restart;

Digits := 35:

f := (r,t,n) -> add(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1))
);

memory used=227.46MiB, alloc change=38.01MiB, cpu time=1.28s, real time=1.28s, gc time=103.39ms

0.22743166317108384811188537753590361e-1

restart;

Digits := 35:

f := (r,t,n) -> Sum(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1))
);

memory used=211.26MiB, alloc change=126.02MiB, cpu time=1.20s, real time=1.12s, gc time=158.08ms

0.22743166317108384811188537753590361e-1

restart;

Digits := 35:

f := unapply(sum(BesselJ(0, t*r)*r^(2*(i-1)), i=1..n), [r,t,n]):

CodeTools:-Usage(
  evalf(Int('f'(r,151.5793716314014,100), r=0..1))
);

memory used=143.57MiB, alloc change=44.01MiB, cpu time=934.00ms, real time=892.00ms, gc time=118.77ms

0.22743166317108384811188537753590361e-1

 

Download evalfInt_example_16_35.mw

Perhaps only grainy images of solutions or suggestions could be offered in response.

@bliengme I think that it's still unclear what you want, and what is wrong with the following.

Do you accept what Maple's plots:-semilogplot gives you, for the case of a semilog plot with logarithmic scaling on the horizontal axis and linear scaling on the vertical axis? If so, then what's wrong with the reverse of that, as in the plots:-logplot behavior?

(Naturally, the labels could be anything you want.)

Or, perhaps it is the spacing of the subticks (the minor gridlines) that you want different?

Please state exactly what you want, not what some others think of the definitions.

restart;

plots:-logplot(10^y, y=0..1,
               gridlines=true, labels=["y","x"]);

plots:-semilogplot(log10(x), x=1..10,
                   gridlines=true, labels=["x","y"]);

 

Download logplot.mw

 

@bliengme In a semilogarithmic plot, one axis has a logarithmic scale and the other axis has a linear scale.

From the description of the logplot command, "semi-logarithmic plot of functions where the vertical axis has a logarithmic scale." And the example shows that the horizontal axis has the usual linear scale.

So, how is the logplot command not supplying what you're asking for?

Anyway, you can also control the mode of either axis, separately. As described in the other link I gave.
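For example, the axis option lets you set the mode of each axis independently (the expressions and ranges here are arbitrary):

plot(exp(x), x = 0 .. 5, axis[2] = [mode = log]);    # log-scaled vertical axis
plot(exp(x), x = 1 .. 100, axis[1] = [mode = log]);  # log-scaled horizontal axis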

Using dualaxisplot for this seems silly.

 

@eslamelidy You could try a modestly high-order series solution, for various values of eq(z).

eslam_acc_complex.mw

@xinwmath What special form of generally invalid result from combine(..,symbolic) or simplify(...,symbolic) are you looking for?

restart;

expr3 := 3*(8*L__a+4*lambda+(4*I)*sqrt(3)*lambda)^(2/3)
         *(8*L__a-(4*I)*sqrt(3)*lambda+4*lambda)^(2/3)*lambda^2*(L__a-lambda);

3*(8*L__a+4*lambda+(4*I)*3^(1/2)*lambda)^(2/3)*(8*L__a-(4*I)*3^(1/2)*lambda+4*lambda)^(2/3)*lambda^2*(L__a-lambda)

ans3 := simplify(expand(combine(expr3,symbolic)),symbolic);

48*(L__a^2+L__a*lambda+lambda^2)^(2/3)*lambda^2*(L__a-lambda)

simplify(expand(combine(ans3 - expr3,symbolic)));

48*lambda^2*((L__a^2+L__a*lambda+lambda^2)^(2/3)-((L__a^2+L__a*lambda+lambda^2)^2)^(1/3))*(L__a-lambda)

simplify(expand(combine(ans3 - expr3,symbolic)),symbolic);

0

 

Download symbolic_sick.mw

@vv That is good and simple. Vote up.

But if the "polynomial entries in a single variable" have floating-point coefficients then it does not treat the concept of rank in the same manner as the Rank command (or Matlab's, etc., via singular values, which is pretty standard). I mean, following substitution of numeric values for the unknown b, of course.

floatRank.mw
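A small illustration of that distinction (the matrix and the size of the perturbation are just for demonstration):

restart;
with(LinearAlgebra):

M := Matrix([[1.0, 2.0], [2.0, 4.0 + 1e-13]]);  # rank-1 plus a tiny float perturbation

Rank(M);                            # float case: judged via singular values and a tolerance
Rank(convert(M, rational, exact));  # exact case: the perturbation makes this rank 2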

@Kitonum Another kick at the can, for Joe's query. (If `solve` returns an inquality, or non-equality, then it'll be more difficult still.)

Of course, this is the complement.

restart;
with(LinearAlgebra):

A := <b,1,3 | 4,b,6>:
B := <b,1,3 | 4,b+1,6>:

GE1 := GaussianElimination(A):
S1 := [solve({`*`(seq(GE1[i,i], i=1..2))=0}, b)]:
select(u->Rank(eval(A, u))<2, S1);
                           [{b = 2}]

GE2 := GaussianElimination(B):
S2 := [solve({`*`(seq(GE2[i,i], i=1..2))=0}, b)]:
select(u->Rank(eval(A, u))<2, S2);

                               []

@Kitonum Thanks for that observation. How about this amendment?

restart;
A := <b,1,3 | 4,2,6>:
GE := LinearAlgebra:-GaussianElimination(A):

solve({`*`(seq(GE[i,i], i=1..2))<>0}, b);

                            {b <> 2}

@Joe Riel Another way would be to perform Gaussian elimination, and solve for the set of restrictions that all diagonal elements be nonzero.

For example,

restart;

A := <1, 1, -2, -3|-1, -9, b, 11|-1, 3, 2, -1|b, -10, -4, 6>:

cols := [1,2,3]:
GE := LinearAlgebra:-GaussianElimination(A[..,cols]):
solve({seq(GE[i,i]<>0, i=1..nops(cols))}, b);

                            {b <> 2}

solve({seq(Or(GE[i,i]>0, GE[i,i]<0), i=1..nops(cols))}, b);

                        {2 < b}, {b < 2}

cols := [1,2]:
GE := LinearAlgebra:-GaussianElimination(A[..,cols]):
solve({seq(GE[i,i]<>0, i=1..nops(cols))}, b);

                            {b = b}

The help page for the topic DocumentTools/Components/Plot has, as its very last example, a 3D animation that is embedded with a custom scaling (zoom). That's the only way I currently know to get a 3D plot shown automatically with a non-square viewing window as well as a nice zoom. The point is that there is no irritating white space, which one often gets at top and bottom.

I have a procedure (somewhere, on some hard-drive) that takes a constrained 3D plot and applies a transformation to get custom aspect ratios of the axes (eg, x vs z, and y vs z). The point here is that currently Maple either displays an unconstrained 3D plot with its axes as a cube, or matches the axes to the numeric ranges of the data. But there's nothing to allow arbitrary, pleasing axes-ratios in the case that the x- or y-data are on a completely different scale than the z-data.
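The rough idea of that procedure is along these lines (the surface and the stretch factor are arbitrary, and a polished version would also repair the z-axis tickmarks, which this sketch leaves scaled):

P := plot3d(sin(x)*cos(y), x = 0 .. 2*Pi, y = 0 .. 2*Pi):

# Stretch z by a factor of 3, then display constrained, giving a
# custom z-to-x aspect ratio instead of a cube.
plots:-display(plottools:-transform((x, y, z) -> [x, y, 3*z])(P),
               scaling = constrained);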

That is not too hard to set up with numeric integration. You can even make a "black box" procedure which accepts a numeric value for c and returns the float approximation of the integral. That could be plotted as a function of c, etc.

(If taking a numeric approach then I'd recommend trying either one of the Monte-Carlo or the _CubaCuhre methods for evalf(Int(...)) since the integrand will be discontinuous at the boundary, because outside the region it will be zero. Most other methods rely on smoothness, and the cost of splitting at the implicit boundary will not be nice, especially if nesting a 1D integrator.)
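A sketch of such a black-box procedure, with a made-up integrand and region purely for illustration:

restart;

# F(c) returns a float estimate of the integral of exp(-x*y) over the
# part of the unit square where x^2 + y^2 <= c; zero outside that region.
F := proc(c)
  if not type(c, numeric) then
    return 'procname'(c);  # stay unevaluated for symbolic input, so plot can call it
  end if;
  evalf(Int((x, y) -> `if`(x^2 + y^2 <= c, exp(-x*y), 0),
            [0 .. 1, 0 .. 1], method = _CubaCuhre));
end proc:

F(0.5);
plot(F, 0 .. 2);

The type check makes F safe to use in contexts like plot or fsolve that may first call it with a symbolic argument.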

Or are you really hoping for an explicit formula? Whether that can be accomplished will depend on the example. In general it won't be possible.

@Mac Dude Yes, UseHardwareFloats=true will cause some computations to be done faster. But preventing many instances of software float computation will break far more computations.

The UseHardwareFloats environment variable was introduced in Maple 6, and for several major releases almost the only effect it had was on whether datatype=float acted like datatype=sfloat or datatype=float[8] for rtables.

A few years later the scalar HFloat was devised, and a few people sought out ways to make fast scalar floating-point computations easier to accomplish. (There will always be some people whose wish for a silver bullet defies cold logic. HFloats are not immediate and still need memory management, and do not bring the same degree of performance benefits as evalhf, let alone the Compiler. The option hfloat for procedures arose around the same time, and allowed more flexibility than evalhf even if not as much performance benefit.) Then UseHardwareFloats was used to also control default HFloat creation upon extraction of scalars from float[8] rtables, and in modern Maple it can play a role similar to option hfloat, but at the top level. Alas, UseHardwareFloats documentation is thin.

The reason I'm describing some history is that it's important to realize that a very large portion of the Maple Library (many 100s of thousands of lines of code) was designed and existed for decades under the scheme that increasing Digits would normally allow more accurate floating-point computation. This aspect is still relied upon in many places in the Library.

But if you set UseHardwareFloats to true then in modern Maple that will strictly prevent higher-precision software computation, and thus prevent more accurate results from being attained, in quite a few routines, some of them key.

And there are additional, important nuances, aside from just high working precision. The hardware float environment has restricted range (roughly +-10^308 down to about +-10^(-308), as I reckon you know). But with software floats much larger or smaller exponents can be used, even with default Digits=10. There are Library commands which rely on that in order to function as designed.

Consider the expression exp(750.1 - x) where x is in the range 760 .. 765. This produces values which are not implausible for an underlying physical setting or model. But if one happens to expand that symbolically, then under forced hardware floating-point the result becomes Float(infinity)/exp(x), which will bring no joy for x in the stated range. So, here, with UseHardwareFloats=true a reasonable problem has suddenly become intractable and requires considerably more care and effort to handle. That is just one example; note that many more problematic cases can arise. Float computations can be problematic under all settings, but this hardware float setting introduces a lot of issues which Maple's software floating-point arena handles nicely.
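A quick sketch of that failure mode, using the same values as above (nothing else is assumed):

restart;
UseHardwareFloats := false:
evalf(eval(expand(exp(750.1 - x)), x = 762.0));  # software floats carry the huge intermediate exp(750.1)

restart;
UseHardwareFloats := true:
evalf(eval(expand(exp(750.1 - x)), x = 762.0));  # exp(750.1) overflows the hardware exponent range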

You wrote, "If it isn't working or useful, then why is the option even there?" Now, I most certainly did not say or imply that UseHardwareFloats=true "isn't working or useful"! I don't know how you managed to arrive at that non sequitur. My opinion that UseHardwareFloats=true is not a good top-level, initialized setting does not at all imply that it is never useful.

Just as you can set option hfloat on a procedure of your own devising, you can also set UseHardwareFloats=true. Within a procedure, or for a limited kind of top-level calculation (pure float linear-algebra, say) it can indeed work and be useful. As Joe, mentioned, as an environment variable its value is inherited by child procedure calls, but setting it does not affect the parent.

So it can be useful, in a targeted, specific computational subprocess like a custom procedure for some task you might have. But setting it at the top-level, as a blanket setting, is going to break stuff.
