Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@Arif Ullah khan To quote directly from the paper, here are the algorithm steps that I object to:

  • Step 1: "Select a value of B."
  • Steps 2-4: [I have no problem with these steps.]
  • Step 5: "Check to see if Eq. (2.19) is satisfied; if not iterate on B until satisfied."

I don't think that should be acceptable in a scientific journal such as Physics of Fluids.

@Arif Ullah khan The author's algorithm is not well specified. The last step is, essentially, "If the results are not sufficiently close, pick different initial values and try again." There's no indication of what's "sufficiently close" or how to converge to correct initial values. 

If it's your job to referee this paper, I would reject it until this last step is specified more precisely. To me, this algorithm is the only significant contribution that the paper makes. 

The majority of papers that I've read on boundary-layer ODE BVPs (several of which I've discussed here on MaplePrimes) have similar flaws. It's a very shoddy area of mathematics.

Also, don't assume infinity = 6. Continue integrating until the results are sufficiently stable. The fact that this can be easily done is a significant improvement over other BVP methods for systems with a boundary at infinity.
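As an illustration of that check (using the classic Blasius problem f''' + f*f''/2 = 0 as a stand-in, since I don't have your system in front of me), increase the artificial infinity L until a reported quantity such as f''(0) stabilizes:

Blasius:= L-> dsolve(
    {diff(f(x), x$3) + f(x)*diff(f(x), x$2)/2 = 0,
     f(0) = 0, D(f)(0) = 0, D(f)(L) = 1},
    numeric, output= listprocedure
):
for L in [6, 8, 10, 12] do
    printf("L = %2d:  f''(0) = %.8f\n", L, eval(diff(f(x), x$2), Blasius(L))(0))
od;

When consecutive lines agree to the accuracy that you need, the result is stable with respect to the truncation of infinity.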

@Carl Love It is somewhat easier to count partial orders (relations that are reflexive, transitive, and antisymmetric) because there are far fewer of them for any given n. There is a recursive formula that gives the number of transitive relations in terms of the numbers of partial orders. For n= 0..7, the numbers of partial orders are 1, 1, 3, 19, 219, 4231, 130023, 6129859. The details can be found by following the links that I gave above.

@tomleslie The union of two intervals is not necessarily an interval, yet your procedure can only return an interval for the union.

Your title "Plotting in Maple" sidesteps the main issue. Once any functional equation is solved, plotting it is usually trivial. Any list of numeric [x,y] pairs is sufficient. The means of generating the pairs depends on the form of the solution.

But, AFAIK there's no stock method in Maple for solving fractional differential equations. You need to use an ad hoc method, i.e., one that's tailored to the particular equation. So, to get further help, I think that you'll need to post the equation itself.

So, I changed the title.

@Mo_Jalal From the code that I see in your subsequent Question, I think that you've already discovered that fnormal is the solution for that "nearly zero" issue.
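For other readers finding this later, here is a minimal illustration of that usage (my own example, not the OP's code):

# fnormal rounds float subexpressions that are negligible at the current
# setting of Digits to float zeros; simplify(..., zero) removes those zeros.
fnormal(2.0 + 1.0e-15);
fnormal(1.0 + 1.0e-12*I);
simplify(%, zero);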

@janhardo That's not at all surprising to me. I've never seen a system of mathematical typesetting where the double quote character was a derivative symbol.

It's a tough problem for which no general solution is known. There is some relatively recent research in this area. Brute-force counting is out of the question, since there are 2^(7^2) = 2^49 (about 5.6x10^14) relations on 7 elements. According to The On-Line Encyclopedia of Integer Sequences https://oeis.org/A006905 the counts for n= 0..7 are 1, 2, 13, 171, 3994, 154303, 9415189, 878222530.
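For tiny n, though, brute force is feasible, and it reproduces the first few of those counts. Here is a quick sketch of my own (the representation of relations as sets of ordered pairs is for brevity, not efficiency):

CountTransitive:= proc(n::nonnegint)
local P:= [seq(seq([i,j], j= 1..n), i= 1..n)], S, R, count:= 0;
    # Each subset S of the n^2 ordered pairs is a relation; R is its
    # 0-1 indicator table.
    for S in combinat:-powerset(P) do
        R:= table(sparse, map(`=`, S, 1));
        # Transitive iff for every [a,b] in S and every k: R[b,k] implies R[a,k].
        if andmap(p-> andmap(k-> R[[p[2],k]] = 0 or R[[p[1],k]] = 1, [$1..n]), S)
        then count:= count+1
        fi
    od;
    count
end proc:
seq(CountTransitive(n), n= 0..3);

This should return 1, 2, 13, 171, agreeing with the OEIS values above; but the doubly exponential growth stops this approach cold well before n = 7.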

I suggest that you start researching with the Wikipedia article "Transitive relation", and follow the research links in the section "Counting transitive relations".

I changed the title of your Question from "Discrete mathematics in Maple" to "Counting transitive relations".

@jum The last command given in my code above, unapply, converts the matrix-vector expression into an algebraic function with no matrices or vectors. If you end this command with a semicolon rather than a colon, you will see this algebraic representation.
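For other readers, here is a generic illustration of that conversion (hypothetical matrices, not the ones from my code above):

A:= <<1,2>|<3,4>>:  v:= <x,y>:
e:= v^%T . A . v;                # a scalar expression built from a Matrix and Vectors
f:= unapply(expand(e), x, y);    # purely algebraic: (x,y)-> x^2 + 5*x*y + 4*y^2
f(1, 2);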

@Carl Love Multiple assignment can be done like this:

k||(1..2):= $1..2;

@Arif Ullah khan I have read the paper that you sent me. In section 2.2 of the paper, the author describes an ad hoc method for the solution of these equations via a closely related system of two uncoupled single-equation IVPs. The method is very simple, almost too simple to be believed. I haven't implemented it in Maple, but you should try it, and I'll help you if you get stuck.

Later on, the author describes a problem using the shooting method: For some parameter values, there are 2 or 4 solutions to the BVP. I have encountered this problem for the parameter values gamma=1, rho=nu=20. I suspect that the author's ad hoc method also suffers this problem, and it's just a matter of choosing different initial values until something works.

@dharr The OP uses a recent version of Maple (2020, I think) but is attempting to learn from some very old worksheets.

@Ali Hassani Here are some minor corrections and/or refinements of your summary.

You wrote:

  • 1) A process is an "independently running" with no sharing the memory and, consequently, needs more memory than the "Threads" package which fundamentally works based on the shared memory.

Using the Grid package for parallel programming involves starting multiple processes, each of which is like a complete Maple session (but without a separate user interface), so each one uses as much memory as a single Maple session, plus whatever memory is used for your computations: all the globals and locals.

When using Threads, everything happens in a single process. It uses only slightly more memory than a serial computation; only the local variables of procedures running in parallel need duplication; none of the "overhead" of setting up a Maple kernel needs to be duplicated.

  • 2) The word "task" is not essentially limited to "Threads". "Task" is the smallest work unit.

Whereas process has a fairly widely accepted specific meaning (which you can look up as "Process (computing)" in Wikipedia), task does not (also look up "Task (computing)"). But, as for the Threads and Grid packages, I think that it's okay to consider a task to be the smallest unit into which the work is divided by those packages. Certainly, this is the meaning of task meant by the option tasksize, which is used by both packages.

The unit of computation in Threads that is analogous to process in Grid is the thread. Look up "Thread (computing)" in Wikipedia. Actually, this is the Wikipedia article that I think best explains the difference between shared-memory (like Threads) and independent-memory (like Grid) parallel computing.

  • 3) Too large "tasksize" may be resulted in being idle some processors, and too small "tasksize" could lead to more large CPU-time.

A large tasksize may lead to some idle processors and thus a longer real time (and maybe also a shorter CPU time). A small tasksize will certainly lead to more CPU time, and, if it's too small, more real time. One needs to find the "sweet spot" in the middle that minimizes real time.
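A hypothetical way to hunt for that sweet spot is to time the same threadsafe job at several tasksizes; the best value is machine- and problem-dependent:

# f is plain threadsafe integer arithmetic; only the relative timings matter.
f:= x-> add((x + i)^2, i= 1..10^4):
L:= [$1..1000]:
for ts in [1, 10, 100, 500] do
    printf("tasksize = %3d: real time = %.3f s\n", ts,
        time[real](Threads:-Map[tasksize= ts](f, L)))
od;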

  • 4) If code is threadsafe, "Threads:-Map" command is preceded Grid:-Map.

It can be stated more generally: If the code is threadsafe, then using the Threads package will be more efficient than using Grid, regardless of the specific commands used (Map, Seq, or something else).

  • Unfortunately, because of uncontrolled sharing in writing in memory, many commands in Maple are not threadsafe.

Yes, that's 100% correct.

Your test function has a discontinuous noise component: a normal random variable of fixed variance. I don't think that that's what Calvin's algorithm was intended for. Here is an example with a continuous noise component, a Wiener process:

Fi:= Finance:  CF:= CurveFitting:
npts:= 10^4:
WS:= Fi:-SamplePath(
    Fi:-WienerProcess(<<.1>>)(t), t= 0..1, timesteps= npts-1
)[1,1]:
T:= rtable(1..npts, k-> (k-1)/(npts-1), datatype= float[8]):
WF:= t-> CF:-ArrayInterpolation(T, WS, t, uniform):
W:= t-> (t-0.5)^2 + WF(t):
CodeTools:-Usage(Calvin(W, 350));
memory used=457.37MiB, alloc change=0 bytes, cpu time=7.61s, 
real time=7.47s, gc time=437.50ms

            [0.702660743159389, -0.527027281446182]

plot(W, 0..1);

The computations take much longer because I discretized the Wiener process to a very large number of points, 10,000.

ArrayTools:-Insert was introduced in Maple 2016. Here's a retrofit for the above code that doesn't use it:

Calvin:= proc(
    X::procedure,
    n::posint, 
    gamma::procedure:= (n-> sqrt(n+1)/ln(n+2))
)
description "Minimize continuous function with random noise on 0..1";
option
    `Reference: `,
        `https://www.sciencedirect.com/science/article/pii/S0885064X01905746 `,
        `James M. Calvin, "A One-Dimensional Optimization Algorithm and Its `,
        `Convergence Rate under the Wiener Measure", _Journal of Complexity_ `,
        `17, 306-344 (2001)`,
    `Maple author: Carl Love <carl.j.love@gmail.com> 2020-Aug-27`
;     
local 
    T:= rtable(1..n+2, [0, 1], datatype= float[8], subtype= Vector[row]),
    XT:= X~(T), Y:= copy(XT),
    t, tmin, M, z:= evalf(gamma(0)), tau:= 1, k, i,
    Insert:= proc(V::Vector, i::posint, k::posint, x)
        V[i+1..k+2]:= V[i..k+1];
        V[i]:= x
    end proc    
;
    (tmin,M):= `if`(XT[1] < XT[2], [T[1],XT[1]], [T[2],XT[2]])[];
    for k to n do
        Y[..k+1]:= XT[..k+1] -~ (M - z);
        i:= 1 + max[index]( (T[2..k+1] - T[..k]) /~ Y[2..k+1] /~ Y[..k]);
        t:= T[i-1] + (T[i]-T[i-1])*Y[i-1]/(Y[i-1]+Y[i]);
        tau:= min(tau, t - T[i-1], T[i] - t);
        z:= min(z, evalf(gamma(k))*sqrt(tau));
        Insert~([T, XT], i, k, [t, evalf(X(t))]); 
        if XT[i] < M then (tmin,M):= (t, XT[i]) fi;
    od;
    [tmin, M]
end proc
:
N:= Statistics:-Sample(Statistics:-RandomVariable(Normal(0, 0.1))):
S:= Array(1..1, datatype= float[8]):
W:= t-> 1 + (t - 0.5)^2 + N(S)[1]:
CodeTools:-Usage(Calvin(W, 350)); 
memory used=20.76MiB, alloc change=0 bytes, cpu time=141.00ms, 
real time=146.00ms, gc time=0ns

             [0.495821549079493, 0.710534027290144]

Regarding the precision, or Digits setting: It's ridiculous to use any precision higher than hardware floats for this algorithm or for an objective function with this much noise.
