Carl Love

28055 Reputation

25 Badges

12 years, 360 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@Adam Ledger Yes, the PSLQ algorithm, which is used to derive the summation formulas, is pretty deep.

@Adam Ledger I don't yet understand it either, and I'm not embarrassed to admit it.

Do not put off learning complex analysis just because you haven't yet mastered real analysis. Complex "makes more sense" overall. It has far fewer pathological counter-examples than does real analysis.

I often bemoan the fact that complex numbers are totally excluded from elementary calculus education. This leads to abominations such as int(1/x, x) = ln(abs(x)) (rather than just ln(x)), and it leaves unexplained why the Maclaurin series for arctan has radius of convergence 1 even though arctan has derivatives of all orders at every real number.

@vv Yes, the unstated pedagogical point of this Question is to develop custom quadrature rules to handle classes of integrands with specific types of singularities by using weight functions. Student:-NumericalAnalysis:-Quadrature(..., method= gaussian...) doesn't look for singularities. As is shown even in elementary calculus classes, applying standard quadrature rules to Int(sqrt(x), x= 0..1) gives poor results because the integrand's derivative becomes infinite at x= 0.
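To illustrate that sqrt(x) claim outside Maple, here is a small Python sketch (the names trap and err are mine) showing the trapezoid rule's degraded convergence on Int(sqrt(x), x= 0..1) = 2/3:

```python
import math

def trap(f, n):
    # composite trapezoid rule for f on [0, 1] with n panels
    h = 1.0 / n
    return h * ((f(0.0) + f(1.0)) / 2 + sum(f(i * h) for i in range(1, n)))

# the error shrinks like n^(-3/2), not the O(n^(-2)) rate seen for smooth
# integrands, because the derivative of sqrt(x) blows up at x = 0
err = [abs(trap(math.sqrt, n) - 2 / 3) for n in (10, 100, 1000)]
```

Each tenfold refinement cuts the error by only about 10^1.5 ≈ 32, versus the factor of 100 that a smooth integrand would give.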

@Mariusz Iwaniuk As I mentioned above, this behavior is explained on the help page ?evalf,sum. Since there's an option to override the behavior (formal= false), it must be by design. I guess it's an open question whether it's a bad design.

@Adam Ledger The code above is much simpler than many things that you've posted! What don't you understand? I'll explain it. I know that you're interested in number theory, so you need to understand the ModExp algorithm from the above code: it's a fundamental algorithm of computational number theory.
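The heart of ModExp is binary (square-and-multiply) exponentiation. Here is a minimal Python sketch (the name mod_exp is mine; Python's built-in pow(b, e, m) implements the same idea):

```python
def mod_exp(b, e, m):
    # right-to-left binary (square-and-multiply) modular exponentiation:
    # computes b^e mod m using O(log e) multiplications
    r = 1
    b %= m
    while e > 0:
        if e & 1:          # the current bit of the exponent is 1
            r = r * b % m
        b = b * b % m      # square the base for the next bit
        e >>= 1
    return r
```

Usage: mod_exp(3, 1000003, 101) agrees with pow(3, 1000003, 101).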

Isogenies of elliptic curves over finite fields are directly implemented in SageMath (a free and open-source alternative to Maple). See the Wikipedia article SageMath. That's not to say that someone hasn't implemented them in Maple; I'm just not aware of it if they have.

@tomleslie I have no doubt that the problem is the implied multiplication, as you guessed.

@Jjjones98 The first step is to show that if f is any polynomial of degree 2*n-1 or less (n is 4 in your example), then the approximation is in fact exact equality. For further details, see the Wikipedia article.
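To see that degree-(2*n-1) exactness concretely outside Maple, here is a minimal Python sketch using the classical 2-point Gauss-Legendre rule (n = 2, nodes ±1/sqrt(3), both weights 1), which must therefore integrate every cubic exactly on [-1, 1] (the names gauss2 and p are mine):

```python
import math

# 2-point Gauss-Legendre rule on [-1, 1]: nodes ±1/sqrt(3), both weights 1
nodes = (-1 / math.sqrt(3), 1 / math.sqrt(3))

def gauss2(f):
    return sum(f(x) for x in nodes)

# exact (up to rounding) for every polynomial of degree <= 2*2 - 1 = 3
p = lambda x: 7 * x**3 - 4 * x**2 + 2 * x + 5
exact = -4 * (2 / 3) + 5 * 2   # odd-degree terms integrate to 0 on [-1, 1]
```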

@Jjjones98 Yes, the approximation relationship that you state holds for all suitably smooth functions f (but not for just any f). However, I don't know how you'd use Maple to show that in general. If you want to use Maple to show it for specific examples of f, that's easy. For a discussion of the general case, I suggest that you read the Wikipedia article "Gaussian quadrature".

Here's an example. I also compute the coefficients c_k, the formula for which I got from the Wikipedia article.

GramSchmidt:= proc(B::{Vector, list}, IP)
local n:= numelems(B), R:= Vector(n), M:= Vector(n), i, j;
   for i to n do
      R[i]:= B[i] - add(IP(R[j],B[i],args[3..])/M[j]*R[j], j= 1..i-1);
      if R[i] = 0 then error "Linear dependence detected at vector", i end if;
      M[i]:= IP(R[i]$2,args[3..])
   end do;
   (R,M)
end proc:
      
(Phi,H):= GramSchmidt(
   [seq(x^k, k= 0..5)],
   ((f,g,x::name)-> int((1-x)^(3/2)*f*g, x= 0..1)),
   x
);

Phi, H :=
   [1,
    x - 2/7,
    x^2 - (8/11)*x + 8/99,
    x^3 - (6/5)*x^2 + (24/65)*x - 16/715,
    x^4 - (32/19)*x^3 + (288/323)*x^2 - (256/1615)*x + 128/20995,
    x^5 - (50/23)*x^4 + (800/483)*x^3 - (1600/3059)*x^2 + (3200/52003)*x - 256/156009],
   [2/5, 8/441, 128/127413, 512/8690825, 32768/9256590525, 131072/608470202025]

(1)
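As a cross-check in exact rational arithmetic, the same orthogonalization can be sketched in Python with the standard-library fractions module (polynomials as ascending coefficient lists). The moments of the weight are Beta integrals: Int((1-x)^(3/2)*x^k, x= 0..1) = k!/((5/2)*(7/2)*...*(k+5/2)). The names moment, ip, and gram_schmidt are mine:

```python
from fractions import Fraction as F

def moment(k):
    # Beta integral: Int((1-x)^(3/2)*x^k, x = 0..1) = k!/((5/2)(7/2)...(k+5/2))
    num, den = 1, F(1)
    for j in range(1, k + 1):
        num *= j
    for i in range(k + 1):
        den *= F(5, 2) + i
    return num / den

def ip(p, q):
    # weighted inner product of polynomials given as coefficient lists
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt(n):
    # modified Gram-Schmidt on the monomials 1, x, ..., x^(n-1)
    phi, h = [], []
    for i in range(n):
        r = [F(0)] * i + [F(1)]              # start from x^i
        for j in range(i):
            c = ip(phi[j], r) / h[j]
            for k in range(len(phi[j])):
                r[k] -= c * phi[j][k]
        phi.append(r)
        h.append(ip(r, r))                   # squared norm of phi_i
    return phi, h
```

gram_schmidt(6) reproduces the Phi and H shown in (1), e.g. phi[1] = x - 2/7 and h[1] = 8/441.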

NodesAndWeights:= proc(n::posint, phi::Vector, h::Vector)
local p:= phi[n+1], x:= indets(p, name)[], X:= Vector([fsolve(p)]);
   (X,
    unapply(lcoeff(p)*h[n]/lcoeff(phi[n])/diff(p,x)/phi[n], x)~(X)
   )
end proc:    
 

(X,C):= NodesAndWeights(4, Phi, H);

X, C :=
   [0.05245119224, 0.2562853614, 0.5482993220, 0.8271746506],
   [0.1219793965, 0.1688857691, 0.09204388701, 0.01709094716]

(2)

Example integrand:

f:= x-> exp(sin(x)):

GaussQ:= add(C[k]*f(X[k]), k= 1..4);

.5368478052

(3)

Compare with integral:

int((1-x)^(3/2)*f(x), x= 0. .. 1.);

.5368477922

(4)
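Outside Maple, the nodes and weights in (2) can also be sanity-checked in a few lines. This Python sketch recomputes the weighted Gauss sum and compares it with a brute-force midpoint-rule value of Int((1-x)^(3/2)*exp(sin(x)), x= 0..1):

```python
import math

# nodes X and weights C copied from output (2) above
X = [0.05245119224, 0.2562853614, 0.5482993220, 0.8271746506]
C = [0.1219793965, 0.1688857691, 0.09204388701, 0.01709094716]

gauss = sum(c * math.exp(math.sin(x)) for c, x in zip(C, X))

# crude reference: midpoint rule for Int((1-x)^(3/2)*exp(sin(x)), x = 0..1)
N = 20000
ref = sum((1 - t) ** 1.5 * math.exp(math.sin(t))
          for t in ((i + 0.5) / N for i in range(N))) / N
```

Both values agree with the .5368478 result shown above to well within the midpoint rule's error.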

 

Download Guassian_Quadrature.mw

 

 

@ebrahimina You changed x[j] from my code to x[i]! The x[j] comes directly from the formula in your Question, so how could you make this mistake?

Kitonum's code is essentially the same as mine. How is it that you transcribed his correctly, yet you've transcribed mine incorrectly twice? Are you trying to make me look like a fool?

@assma You have

local x, y; z,

which should be

local x, y, z;

 

@torabi Your equation has diff(u(r),r) raised to the powers n-1 and n-2 with n < 1. Your boundary conditions say that this derivative can be 0. So, it's dividing by 0.

@Jjjones98 What do you mean by the "given" integral? You didn't "give" any integral in your question.

@gaurav_rs If you're talking about solving multiple independent systems (A1.X1 = B1, A2.X2 = B2, etc.), that can be done over the 24 processors by distributed processing, a form of multiprocessing that is more loosely structured than parallel processing: the concurrent processes don't share memory. In Maple, it is done with the Grid package. It is much easier to implement than parallel processing because you don't need to control multiple processes trying to write to the same areas of memory; but likewise, it is slower than parallel processing and uses much more memory.
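This is not Maple's Grid API, but the pattern is easy to sketch in Python with the multiprocessing module: each independent system goes to its own worker process, and no memory is shared. The solve helper (plain Gaussian elimination) and the example systems are mine:

```python
from multiprocessing import Pool

def solve(system):
    # Gaussian elimination with partial pivoting on one system (A, b)
    A, b = system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]                   # pivot for stability
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):                  # back-substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

if __name__ == "__main__":
    # the systems are independent, so no shared memory is needed:
    # each one can be farmed out to a separate worker process
    systems = [([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]),
               ([[4.0, 0.0], [0.0, 2.0]], [8.0, 6.0])]
    with Pool(2) as pool:
        print(pool.map(solve, systems))  # [[0.8, 1.4], [2.0, 3.0]], up to rounding
```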

@gaurav_rs Two hours seems like way more than what should be required, unless there's something important that you're not telling me, such as that you want thousands of digits of accuracy. Here are my results:

Digits:= 30:
n:= 400:
A:= LinearAlgebra:-RandomMatrix(n$2, datatype= float):
B:= LinearAlgebra:-RandomVector(n, datatype= float):
X:= CodeTools:-Usage(LinearAlgebra:-LinearSolve(A, B, method= hybrid)):

memory used=0.66GiB, alloc change=2.45MiB, cpu time=4.28s, real time=4.89s, gc time=1.92s
LinearAlgebra:-Norm(A.X - B, infinity)*n/LinearAlgebra:-Norm(B,1);

1.69177744953225012309207287051*10^(-27)

So, about 27 digits of accuracy in under 5 seconds.

As far as I know, there is no pre-packaged parallel method for LinearSolve; you'd need to write your own.

 
