Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

I don't trust solve in any situation where the number of equations is greater than the number of variables being solved for. In those  cases, I use eliminate instead:

eliminate({x^2= 2, x^3=sqrt(2)^3}, {x});

The return value is always a list of two sets: the first contains the solutions, and the second the residual information (possibly empty). The residual information consists of expressions which, when equated to 0, represent what's left after the solved-for variables are eliminated from the equations.

Now here's my unsubstantiated guess as to why solve gets this wrong sometimes, as in the present example: In situations where the residual information is not the empty set, solve correctly and intentionally returns NULL. Here are two such situations, shown here from the point of view of eliminate:

eliminate({x=2, y=3}, {x});
                       [{x = 2}, {y - 3}]
eliminate({x=2, x=3}, {x});
                        [{x = 2}, {-1}]

Note that in the second case, the residual information is self-contradictory, yet eliminate intentionally returns it anyway, as it may be useful for some purposes (that there's no need to go into here). My guess about solve is that it performs a preliminary and superficial analysis of the system of equations, and if it thus guesses that this is one of those situations where the residual information will be non-empty, it returns NULL.

Because my guess about what solve is doing in these situations is something that I've thought about for years, I'd appreciate it if someone could provide some evidence for or against it. The code underneath solve is very long, and has been patched many times over the years, so direct evidence may be difficult to obtain; but perhaps an experiment could be devised to collect some indirect evidence.
 

The issue that you're describing is merely a feature of Maple's GUI's pretty printing of Vectors and Matrices. It has absolutely nothing to do with the form by which a Matrix is imported, where it's imported from, or whether it's imported at all; it has nothing to do with how the Matrix is stored in Maple; there is no distinction between a "standard" form and some other form(s). So, there's no need (and no way) to "recast" a Matrix from one form to another because those other forms simply don't exist. The complete Matrix exists in Maple's memory even if it is displayed in the summarized form that you described.

Do:

interface(rtablesize);

If you have a default setup, this will return 10. A Vector or Matrix with a number of rows or columns greater than this number will be displayed in the summarized form (when it's prettyprinted). If you want to change that, do something like

interface(rtablesize= 30);

This will affect the prettyprinted display of all Vectors and Matrices until you change it back, start a new session, or do restart.

[This solution is similar to MMcDara's/Sand15's "Exact solution", although I derived it independently before his was posted. I explicitly use the BetaDistribution. He uses it also, but not explicitly by name. In my (limited) experience, Statistics works better the closer one sticks to the pre-defined distributions.]

It may surprise you that your problem can be solved completely symbolically (and even using fewer lines of code than you used), such that your QStar can be expressed as a fairly simple parameterized random variable that Maple's Statistics package can work with directly. After that, it's trivial to compute the mean, standard deviation, etc., as symbolic expressions and to generate samples in the millions (rather than the 1000 that you asked for), all without a single loop.

Indeed, QStar can be expressed as an affine (i.e., linear plus a constant) combination of two Beta random variables. There are five parameters to the combination:

  • a: the lower bound (not the sample min) of the underlying Uniform distribution
  • b: the upper bound (not the sample max) of the underlying Uniform distribution
  • n: the sample size (the 10 that you used)
  • co: exactly as you used
  • cs: exactly as you used

Here is a worksheet with the analysis:
 

Analyzing a random variable derived from the minimum and maximum of a uniform sample

Author: Carl Love <carl.j.love@gmail.com> 23-March-2019

restart
:

St:= Statistics
:

Set lower and upper limits of uniform distribution (a and b) and sample size n. Then generate sample and find its min A and max B.

Note that this code is commented out because I want these five variables to remain symbolic, at least for the time being.

(*
(a,b,n):= (10, 20, 10):
(A,B):= (min,max)(St:-Sample('Uniform'(a,b), n)):
*)

Your computation, with symbolic variables:

Ecost:= co*int((Q-x)*f(x), x= A..Q) + cs*int((x-Q)*f(x), x= Q..B):

DCost:= diff(Ecost, Q);

co*(int(f(x), x = A .. Q))+cs*(int(-f(x), x = Q .. B))

QStar:= solve(eval(DCost, [f(x)= 1/(B-A)]), Q);

(A*co+B*cs)/(co+cs)

A and B are derived by applying the operations min and max to a random sample from a particular distribution (Uniform(a,b)), so they are themselves random variables; QStar, being a simple linear combination of them, is therefore also a random variable. Surprisingly, we can express and evaluate QStar as a random variable that Maple can easily work with.

 

The k-th order statistic of a sample of size n drawn from Uniform(0,1) has distribution BetaDistribution(k, n-k+1) (see the Wikipedia article "Order statistic" https://en.wikipedia.org/wiki/Order_statistic#Probability_distributions_of_order_statistics). The minimum corresponds to k=1 and the maximum to k=n. So, if we translate from the interval [0,1] to [a,b], the min and max have distributions A:= (b-a)*BetaDistribution(1,n) + a and B:= (b-a)*BetaDistribution(n,1) + a.
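For readers outside Maple, that order-statistic fact is easy to sanity-check with a stdlib-Python Monte Carlo (a sketch; the function name is mine). Beta(1, n) has mean 1/(n+1) and Beta(n, 1) has mean n/(n+1), so the empirical means of the sample min and max should land near those values:

```python
import random

def min_max_means(n, trials, seed=0):
    """Empirical means of the min and max of a Uniform(0,1) sample of size n."""
    rng = random.Random(seed)
    tot_min = tot_max = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        tot_min += min(xs)
        tot_max += max(xs)
    return tot_min / trials, tot_max / trials

# For n = 10: Beta(1, 10) has mean 1/11 ~ 0.0909 and Beta(10, 1) has mean 10/11 ~ 0.9091.
```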

Q:= eval(
   QStar,
   [
      A= a + (b-a)*St:-RandomVariable(BetaDistribution(1,n)),
      B= a + (b-a)*St:-RandomVariable(BetaDistribution(n,1))
   ]
)
:

We can get Maple to derive symbolic expressions for the parametric statistics of Q:

Stats:= seq(St[p], p= (Mean, StandardDeviation, Skewness, Kurtosis)):
QStats:= <cat~([mu, sigma, gamma, kappa], __Q)> =~ <Stats(Q)>;

After simplification (and assuming b > a), these are:

   mu__Q    = (n*(a*co + b*cs) + a*cs + b*co)/((co + cs)*(n + 1))

   sigma__Q = (b - a)*sqrt(n*(co^2 + cs^2))/((co + cs)*(n + 1)*sqrt(n + 2))

   gamma__Q = -2*n*(n - 1)*(b - a)^3*(cs^3 - co^3)
              /((co + cs)^3*(n + 1)^3*(n + 2)*(n + 3)*sigma__Q^3)

   kappa__Q = (3*(3*n^3 + 5*n^2 + 4)*(co^4 + cs^4) + 6*n*(n^2 + 7*n + 12)*co^2*cs^2)
              /(n*(n + 3)*(n + 4)*(co^2 + cs^2)^2)

Now use the particular numeric values that you used in your Question:

NumVals:= [a= 10, b= 20, n= 10, co= 20, cs= 25]
:

We can generate huge samples from Q instantaneously:

n:= eval(n, NumVals):  Sam:= St:-Sample(eval(Q, NumVals), 10^6):  n:= 'n'
:

Compare the theoretical and sample statistics:

<Stats(Sam)>, evalf(eval(QStats, NumVals));

Vector(4, {(1) = 15.4545135702626, (2) = .589764236722842, (3) = -.351366162924583, (4) = 4.44997814866995}), Vector(4, {(1) = `&mu;__Q` = 15.45454545, (2) = `&sigma;__Q` = .5904268660, (3) = `&gamma;__Q` = -.3524307757, (4) = `&kappa;__Q` = 4.454789470})
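The same theoretical-versus-sample comparison of the mean can be reproduced outside Maple with a short stdlib-Python Monte Carlo (a sketch using the numeric values from NumVals; the function name is mine):

```python
import random

def qstar_mc_mean(trials=20000, seed=1):
    """Monte Carlo mean of QStar = (A*co + B*cs)/(co + cs), where A and B
    are the min and max of a size-n sample from Uniform(a, b)."""
    a, b, n, co, cs = 10, 20, 10, 20, 25
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.uniform(a, b) for _ in range(n)]
        total += (min(xs)*co + max(xs)*cs) / (co + cs)
    return total / trials

# Symbolic mean from above, at these values:
# (a*co*n + b*cs*n + a*cs + b*co)/(co*n + cs*n + co + cs) = 7650/495 ~ 15.4545
```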

St:-Histogram(Sam, gridlines= false);

 


 

Download OrderStats.mw

You just need to add sin(x+t) and sin(x-t), letting x be the space parameter and t the time parameter:

plots:-animate(
   plot,
   [
      [sin(x+t), sin(x-t), sin(x+t)+sin(x-t)], x= -2*Pi..2*Pi, 
      color= [red,blue,black], thickness= [0,0,2]
   ],
   t= 0..2*Pi, size= [967,610], frames= 100
);

 

That's all that's needed mathematically. You can add various aesthetic doo-dads like the red dots, of course.
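Mathematically, the black curve is a standing wave: by the sum-to-product identity, sin(x+t) + sin(x-t) = 2*sin(x)*cos(t), so the spatial profile sin(x) is simply scaled by cos(t) as t advances. A quick numerical check of that identity in Python:

```python
import math

def superpose(x, t):
    """Sum of the left- and right-travelling waves."""
    return math.sin(x + t) + math.sin(x - t)

# The sum equals 2*sin(x)*cos(t) at every point and time (a standing wave).
for x in (0.3, 1.7, -2.2):
    for t in (0.0, 0.9, 2.5):
        assert abs(superpose(x, t) - 2*math.sin(x)*math.cos(t)) < 1e-12
```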

Here's a procedure for a linear-time[*1], linear-memory algorithm to generate permutations:

RandPerm:= proc(n::nonnegint)
description "Fisher/Yates/Durstenfeld shuffling algorithm";
option `Reference: https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle`;
local P:= rtable([$1..n]), k, j;
   for k from n by -1 to 2 do
      j:= 1+irem(rand(), k);
      (P[k],P[j]):= (P[j],P[k])
   od;
   [seq(k, k= P)]
end proc
:

Usage:
RandPerm(9);
      [4, 6, 8, 5, 9, 3, 2, 1, 7]

A compiled-Maple version of the above procedure is possible, although I'd be skeptical about relying on a compiler-supplied version of rand. The cycle length of the generator needs to be significantly larger than n! to achieve uniformly random selection from all possible permutations. As of Maple 2018, the Maple-library version of rand has a cycle length slightly larger than 2080!, which is probably adequate for most purposes; but since 21! > 2^64, a generator based on 64-bit hardware arithmetic can't effectively shuffle a deck of playing cards, or even a half deck.

[*1]By linear-time, I mean that the computation time is proportional to the length of the output---n in this case. Obviously, this is the best possible asymptotic time complexity.
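For comparison, here's the same Fisher-Yates/Durstenfeld algorithm sketched in Python (0-based indexing; Python's randrange stands in for Maple's rand; the function name is mine):

```python
import random

def rand_perm(n, seed=None):
    """Durstenfeld's in-place Fisher-Yates shuffle: O(n) time and memory,
    and each of the n! permutations is equally likely (given a good RNG)."""
    rng = random.Random(seed)
    p = list(range(1, n + 1))
    for k in range(n - 1, 0, -1):   # k runs n-1 .. 1 (0-based positions)
        j = rng.randrange(k + 1)    # uniform index in 0..k
        p[k], p[j] = p[j], p[k]
    return p
```

Python's random.shuffle implements the same algorithm, so in practice you'd use that directly.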

The only practical reason I can see for doing such a computation is if it's done in modular arithmetic. Is that your overall goal? Here's an example of how it's done. The first two lines are to choose a random modulus m such as might be used in a cryptographic application.

r:= rand(2^64..2^65):
m:= (nextprime(r())-1) * (nextprime(r())-1);

          m := 868409667625403659344120587678635445760

815427339135043 &^ 5579408896058 mod m;
             28610635084817154115285869033088238569

This is returned instantaneously.

This is an example for educational purposes only. My m is still not large enough or random enough for a secure cryptographic application.
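The reason the &^ ... mod computation is instantaneous is binary (square-and-multiply) exponentiation, which needs only O(log exp) multiplications and reduces mod m at every step so intermediates never exceed m^2. A minimal Python sketch of the idea (Python's built-in three-argument pow does exactly this natively):

```python
def mod_pow(base, exp, m):
    """Right-to-left binary exponentiation: O(log exp) multiplications,
    with every intermediate reduced mod m."""
    result, b = 1, base % m
    while exp:
        if exp & 1:               # this bit of the exponent is set
            result = result * b % m
        b = b * b % m             # square for the next bit
        exp >>= 1
    return result
```

mod_pow(base, exp, m) agrees with Python's built-in pow(base, exp, m) for nonnegative exponents.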

The problem with Kitonum's solution is that the root plots are discontinuous. That will almost always happen if you try to directly plot parameterized roots returned by solve (or other algebraic solution techniques). I get around that by turning each parameterized root into an IVP and solving with dsolve(..., numeric).

restart:
Digits:= trunc(evalhf(Digits)):
gm:= V -> 1/sqrt(1-V^2):
T:= w-k*V:
S:= w*V-k:
H:= unapply(
   numer(
      T*S^2*gm(V)^3*3/2+I*S^2(1+I*27/4*T*gm(V))*gm(V)^2-
      (1+I*27/4*T*gm(V))*(1+I*5*T*gm(V))*T*gm(V)
   ),
   w,V,k
):
Vlist:= [
   Record("V"= 0.8, "color"= "Black", "plot"= ()),
   Record("V"= 0.9, "color"= "Red", "plot"= ()),
   Record("V"= 0.99, "color"= "Blue", "plot"= ())
]:
R:= [solve(H(w,V,k), w, explicit)]:
d:= nops(R): #number of roots
for v in Vlist do
   S:= dsolve(
      {   #odes:
          seq(eval(diff(r[i](k),k) = diff(R[i],k), V= v:-V), i= 1..d),
          #initial conditions:
          ([seq(r[i](0), i= 1..d)] =~ evalf(eval(R, [V= v:-V, k= 0])))[]
      },
      numeric
   );
   v:-plot:= plots:-odeplot(
      S, 
      [seq([[k, Re(r[i](k))], [k, Im(r[i](k))]][], i= 1..d)], 
      k= 0..1,
      color= v:-color,
      linestyle= ['[solid,dash][]'$d]
   )
od:   
plots:-display([seq(v:-plot, v= Vlist)], size= [987,610]);

First, change all < to <=. The optimizer can't handle <. Then do:

Optimization:-NLPSolve(lhs(obj), [const[], obj], 'maximize');

Perhaps you meant sqrt(x-6) rather than sqrt(x-5)? Otherwise, there's no reason why they should "combine".

@a_simsim If you change the command copy(nodeprop) to copy(nodeprop, 'deep'), then you'll get your expected results. This is because not merely the Records themselves but also the Vectors in the Records are mutable structures, which means that a direct assignment merely creates a new pointer to the original structure rather than a new structure. The keyword deep means that copy is recursively applied to all substructures.
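The same pointer-versus-structure distinction exists in most languages with mutable containers; here is the idea sketched in Python with its copy module (the record contents are hypothetical stand-ins for a Record holding a Vector):

```python
import copy

record = {"name": "node1", "weights": [1.0, 2.0]}

shallow = copy.copy(record)      # new dict, but it SHARES the inner list
deep = copy.deepcopy(record)     # recursively copies all substructures

record["weights"][0] = 99.0      # mutate the shared inner structure
# shallow["weights"][0] is now 99.0, but deep["weights"][0] is still 1.0
```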

There is a blatant bug that can be seen in lines 5-6 of showstat(Statistics:-DataSummary): The X data themselves are being used as the weights, while the weights that you passed (as Y) are totally ignored. It's exactly as if it were behaving correctly and you had specified DataSummary(X, weights= X).

The command is GraphTheory:-SetVertexPositions.

X:= Matrix(9, 9, [[0, 0, 1, 0, 1, 1, 1, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0]]):

#Coordinates
P:= [[40, 40], [22, 22], [36, 26], [21, 45], [45, 35], [55, 20], [55, 45], [26, 59], [55, 65]]:

GT := GraphTheory;
G := GT:-Graph(X);
GT:-SetVertexPositions(G, P);
GT:-DrawGraph(G);

It is interesting that the position information is stored in the graph itself rather than simply in the plot. Also, note that SetVertexPositions modifies its first argument, the graph, rather than giving a return value.

You need to use all vertices, in order, in a list; so, tweaking is best done by alternating calls to GetVertexPositions and SetVertexPositions. Several examples are shown at help page ?GraphTheory,GetVertexPositions.

Is this what you're looking for?

restart:
A:= <
   1.364, 0.64765768e-1  ; 
   2.05,  -.182176113    ; 
   2.664, -0.13914542e-1 ; 
   2.728, 0.2193938e-1   ; 
   4.092, -0.18349139e-1 ; 
   4.1,   -.312968801    ; 
   5.328, -0.1819837e-2  ; 
   5.456, -.28840961     ; 
   6.15,  -.57076866     ; 
   7.992, .175022254
>: 
F:= unapply(CurveFitting:-LeastSquares(A, x), x):
P:= plot(
   [F, A], 0..8, color = [red, blue], style = [line, point],
   legend= ["Метод наименьших квадратов", "Экспериментальные данные"], 
   legendstyle= [font = ["Roman", 15]],
   labels= [typeset(d, ", ", `мм`), typeset(ln(`I__ `/I__0))], labelfont = ["Roman", 15],
   axesfont= ["ROMAN", "ROMAN", 15], linestyle = [solid], symbolsize = 20, 
   title= "Определение линейного коэффициента поглощения", titlefont = [Roman, bold, 20]
);
X:= A[..,1]:  Y:= A[..,2]:  `Y^`:= F~(X):
plots:-display(
   [
      P,
      Statistics:-ErrorPlot(
         `Y^`, xcoords= X, yerrors= abs~(Y-`Y^`), markers= false, thickness= 0
      )
   ],
   axes= normal, gridlines= false
);

Rather than putting an upper bound on k in the for loop, you could put an upper limit on the time you're willing to spend on the computation. Here's my version:

Wrapped_prime:= proc(p::prime, {base::posint:= 10, filter::anything:= (()-> true)})
global lastK;
local pow:= base^trunc(ln(p)/ln(base)), b2:= base^2, k, m:= p;
   lastK:= 0;
   for k do
      #Recursive computation of repunit-wrapped number:
      pow:= b2*pow;  m:= pow + base*m + 1;
      if filter(k) then 
         if isprime(m) then return k fi;
         lastK:= k
      fi
   od
end proc
:     

Since repunits exist in any base, not just base 10 (Mersenne numbers being the most famous example), the procedure works in any base, defaulting to 10. The keyword parameter filter lets you restrict the k values tested; for your example, it's filter= (k-> irem(k,6) = 0). The global lastK lets you check the last value of k for which the wrapped number was verified composite by isprime.
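The recursive wrapping step (pow:= b2*pow; m:= pow + base*m + 1) just adds one 1-digit on each side of m. Here's a Python sketch of that recursion, which can be checked against literal digit concatenation (the function name is mine):

```python
def wrap(p, k, base=10):
    """Surround p with k 1-digits (in the given base) on each side,
    using the same recursion as the Maple procedure above."""
    pow_ = 1
    while pow_ * base <= p:       # pow_ = base^(number of digits of p, minus 1)
        pow_ *= base
    m = p
    for _ in range(k):
        pow_ *= base * base       # two more digits: one per side
        m = pow_ + base * m + 1
    return m

# In base 10 this matches concatenation, e.g. wrap(397, 2) == 1139711.
```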

The first argument to timelimit is the number of seconds of CPU time to expend before giving up.

timelimit(5, Wrapped_prime(397, filter= (k-> irem(k,6)=0)));
Error, (in isprime) time expired
lastK;
                              1656
timelimit(5, Wrapped_prime(397, base= 5));
                               2
timelimit(5, Wrapped_prime(397, base= 2));
                               24

 

Your system of equations contains numerous terms of the form

sum(FyjR, j = 1 .. m),

which makes it seem as if you believe that FyjR can be indexed by j; it can't be. That sum evaluates to simply m*FyjR.
