Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@Thomas Dean 

Both my solution using ArrayTools:-AddAlongDimension and the one using ListTools:-LengthSplit implicitly rely on the fact that the pattern of indices in sumidx is extremely simple and thus sumidx isn't needed at all: We just need to add the 1st 10 elements of V, the 2nd 10 elements, etc. That's why they are so fast. I assume that in your actual application, there wouldn't be such a simple pattern, and you'd actually need the list sumidx. So, I don't think that it's valid to compare the method that uses LengthSplit with those using sumidx and for or seq for the summations.
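
If your real application does need sumidx, a direct summation can still be written without for loops. Here's a minimal sketch, assuming (since I haven't seen it) that sumidx is a list of lists of indices into V:

S:= [seq](add(V[i], i= idx), idx= sumidx);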

Note that I made corrections to the Answer above; so, if you've tried that code already, please try it again.

Here's another method that I like much better than that. It also uses no indices; additionally, it uses no matrices or any other containers other than lists. The only difference is in the last 2 lines, which construct V and S.

restart;
roll:= (n::posint)-> RandomTools:-Generate(list(posint(range= 9), n)):
nN:= (Jx:= 5)*(Ix:= 10): 
N:= sort(10*['seq'(3..2+Jx)$Ix] + roll(nN));
V:= 10*(roll(nN) + 10*iquo~(N, 10));
S:= add~([ListTools:-LengthSplit](V, Ix));

The output for S is the same as before.

@Carl Love The exact cause of your error is that you executed the code once, and then, without doing the restart, you executed it again. When a for loop terminates because its upper bound has been reached, its index variable is incremented past the upper bound. Thus, n=3 after your for loop. This is then re-used as the index to sum. The index variable of sum can't have a numeric value.
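
Here's a minimal sketch of that effect (the exact error text may vary by Maple version):

restart;
for n to 2 do n^2 od:
n;   # n is now 3, one past the loop's upper bound
sum(1/n^2, n= 1..10);   # error: the summation index must be an unassigned name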

If you execute your posted code straight through (including the restart), then you'll get a different error. This error is caused by your use of single quotes in the expression 'int'('R(x)' * ......). No quotes are needed. Just use int(R(x) * ......).

@rlopez Thanks for clarifying that. I was well aware of all the mathematical issues involved. I asked because I didn't know whether the OP's Question was simply ill-posed or whether "level curve f(x,y) = c" was some sort of "acceptable" or "known" textbook-problem abbreviation for "level curve f(x, y, 0) = c" that I simply had never seen before.

@Thomas Dean For each value of N, take the average of the 10 CPU times and 10 elapsed times. Set Tcpu and Telapsed to those averages. The raw data is not needed after that. For the larger values of N, just use whatever measurements you have; it needn't be 10.
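
For example, here's a sketch with made-up measurements (substitute your raw data for the two lists):

cpu_times:= [0.91, 0.88, 0.93, 0.90, 0.92, 0.89, 0.94, 0.90, 0.91, 0.92]:
elapsed_times:= [1.02, 0.99, 1.05, 1.01, 1.03, 1.00, 1.06, 1.01, 1.02, 1.04]:
avg:= L-> add(t, t= L)/nops(L):
Tcpu:= avg(cpu_times);
Telapsed:= avg(elapsed_times);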

@Thomas Dean I figured that you were doing integer factoring. Several points:

[Edit: In section 1 below, I originally incorrectly said that the n in the formula represented the number of bits of the input number. Actually it's the input number itself. This has now been corrected.] 

1. There's no known polynomial-time algorithm for it (unless you include quantum computation on a scale that's not known to have been achieved yet). NFS is known to be "sub-exponential, super-polynomial". Specifically, the best implementations are known to have complexity

O(e^(4*(ln(n)*ln(ln(n))^2/9)^(1/3)))

where n is the number being factored (rather than the number of its bits or digits). To put this in terms of your computation, we let d = ceil(log[10](n)), so d is the number of base-10 digits. Using this in the formula above, simplifying, and discarding lower-order terms in the exponent, we get

O(A^((d*ln(d)^2)^(1/3)))

where A = exp(4*(ln(10)/9)^(1/3)), which is approximately 12.67.

2. For any algorithm, using parallelism (multi-processing, hyper-threading, distributed processing, etc.) will neither change the "big-O" nor help you to determine it. At best, parallelism gives you a constant factor A (0 < A < 1) reduction in elapsed time, and you "pay" for it with a constant factor B (B > 1) increase in total CPU time (and hence electricity usage), where necessarily A*B > 1 (often much greater and increasing with the number of processors used). Since "big-O" (formally known as part of Landau order notation) ignores constant factors, parallelism won't lower the order. (Although it's possible that poorly implemented parallelism could increase the order.) So, for this purpose, you should only use sequential (i.e., single-processor) computation.

3. The "closeness" (as measured by the sum of the squared residuals or by any other metric) of a "fit" will always get better as the number of parameters being fitted approaches the number of data points. For example, a degree-d general polynomial (so, d+1 coefficients are being fitted) can always be found that's an exact fit (meaning the residuals are 0) for any set of d+1 data points with distinct independent-variable values. The statistical predictive value of such a fit is worthless. So, criteria in addition to small residuals are needed to assess the quality of a fit. When doing a big-O computation with a single independent variable (the n or N in this case), I don't think that you should ever use a model function with more than one term.

4. So, is there a method better than trial and error? Yes, start with the theoretical big-O known for the algorithm, which I gave above.
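
For example, you could reduce the model to a single scale parameter and fit that with Statistics:-Fit. This is only a sketch: the names C, x, d, and T are mine, and the data below are made up; substitute your averaged single-processor measurements.

A:= evalf(exp(4*(ln(10)/9)^(1/3)));   # the constant from above, approximately 12.67
d:= <10, 20, 30, 40>:                 # digit counts (made up)
T:= <0.02, 0.9, 40., 1.2e3>:          # average CPU times in seconds (made up)
Statistics:-Fit(C*A^((x*ln(x)^2)^(1/3)), d, T, x, output= parametervalues);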

This Reply is "tangential"[*1] to your main Question.

Your Question shows the plaintext input

series(tan^(-1)(x), x, 9)

and the output is the series for arctan(x), which I suspect was your intended function. In Maple's 1D input, tan^(-1)(x) is not arctan(x). (I'm not criticizing your post; I'm just curious how this expression came to be in this form and whether Maple, MaplePrimes, or your OS or web browser had anything to do with that.) So, I guess that you entered the command using 2D Input. My first question is: How exactly did you create that 2D Input (my curiosity is only about the first argument)? My second question is: How did you transcribe that to MaplePrimes (e.g., simple typing, copy/paste, something else)?

In 1D input

(tan@@(-1))(x) = arctan(x)  # i.e., the functional inverse of tangent
(tan^(-1))(x) = cot(x)  # i.e., cotangent, the multiplicative inverse of tangent
tan^(-1)(x) = 1/tan

In the last of those, tan is being treated as just a name, and (-1) is seen as a constant function whose argument is x. Since function application has higher operator precedence than ^, the expression is the same as tan^((-1)(x)) = tan^(-1) = 1/tan.
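
A quick way to check all three parses is lprint (outputs shown as comments):

lprint((tan@@(-1))(x));   # arctan(x)
lprint((tan^(-1))(x));    # 1/tan(x)
lprint(tan^(-1)(x));      # 1/tan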

[*1] I realize of course that the actual function used is irrelevant to your Question, and all that matters is that the series have an O(...) term. 

Does "level curve f(x,y) = c" mean the intersection of the level surface f(x,y,z) = c with the plane z=0? I'm asking for a response from anyone who knows; I think this OP is unlikely to respond.

@dharr Thank you for spotting my error. I corrected the code and plot in my Reply above.

@dharr Here is Maple code to construct @Tokoro 's  1-ohm and 2-ohm graphs:

GT:= GraphTheory:
G__1ohm:= (GT:-Graph@`union`@op@(`{}`~)~)(
    [$1..12],
    [   #Edges forward from vertex...
        #  1       2     3     4       5      6
        {2,3,4}, {3,5}, {8}, {7,10}, {6,8}, {7,8}, 
        # 7     8        9       10      11    12
        {9}, {11,13}, {11,12}, {12,13}, {13}, {13}
    ]
);
  G__1ohm := Graph 6: an undirected unweighted graph with 13 
     vertices and 21 edge(s)

G__2ohm:= (GT:-Graph@`union`@op@(`{}`~)~)(
    [$1..14],
    [   #Edges forward from vertex...
        # 1      2       3      4     5      6       7
        {2,3}, {4,5}, {4,5,6}, {5}, {6,7}, {9,13}, {8,11},
        #  8        9       10      11    12     13      14
        {10,11}, {10,12}, {11,12}, {14}, {14}, {14,15}, {15}
    ]
); 
  G__2ohm := Graph 7: an undirected unweighted graph with 15 
     vertices and 25 edge(s)

Surprisingly, both are planar:

(print~@subs)(
    FONT("HELVETICA","DEFAULT",6)= FONT("TIMES","BOLD",8),
    GT:-DrawGraph~([G__1ohm, G__2ohm], style= planar, size= [500$2])
):

@mehdi jafari Letting m:= M-1, I factored out m^(m-s) from beta(j,s) and then combined that factor with t^(m-s) to get (m*t)^(m-s). But it's no problem to put that m^(m-s) in beta itself, like this:

Lpoly:= proc(t::algebraic, M::And(posint, Not(1)))
uses It= Iterator;
local
    beta:= proc(j,s)
    local k;
        J[]:= <seq(0..j-1), seq(j+1..m)>;
        mm/(-m)^s/mul(j-~J)*add(mul(J[[seq](k)]), k= It:-Combination(m,s))
    end proc,
    m:= M-1, mm:= m^m, J:= rtable(0..m-1, datatype= integer[4]), j, s
;
    [seq](add(beta(j,m-s)*t^s, s= 0..m), j= 0..m)
end proc
:    

 

@Guimzo 

Types versus properties as Boolean predicates in Maple:

To answer your question carefully, I need to tell you about the distinction that Maple makes between types and properties. Whether this distinction matters in this particular case depends on how complicated the values returned by your f can be, which I can't know without having seen f. Also, you may already be well aware of this distinction.

A single example expression should suffice to explain this. Consider

ex:= (1 + sqrt(2))^2 + (1 - sqrt(2))^2;
      ex := (1 + 2^(1/2))^2 + (1 - 2^(1/2))^2

It's trivial to see by hand computation that ex = 6. On the other hand, we can see from the prettyprinted output that Maple didn't automatically make this simplification. (The fact that we could give Maple a command (e.g., expand) with which it could very easily make that simplification to 6 is irrelevant to the point that I'm making; what matters here is that it didn't do it automatically.) We could say that ex = 6 in the mathematical sense of =, but that ex <> 6 in the sense of raw data structures in a computer's memory. Since 6 is an integer, we'd say in Maple's terminology that ex has the property integer but that it's not of type integer. The command for checking properties is is:

is(ex, integer);
      true

type(ex, integer);
      false

As you can probably imagine, checking properties is much more computationally expensive than checking types.

So, with that in mind, note that the code below checks that the type of f(i) is integer, rather than whether f(i) is an integer in the mathematical sense. If there are cases where those two things differ, post the code of f and I will adjust it.

Let A be a set or list of the desired arguments to f. Each command below will form a sequence whose element order logically corresponds to the order in A.

To form the sequence of all i in A such that f(i) is of type integer:

select(type, A, integer &under f)[]

To form the sequence of all integers f(i) for i in A:

This code that you had will work, as you've mentioned, but it's unnecessarily elaborate:

select(is@ `type`, f~([{seq}(k, k = a)[ ]]), integer)[ ]

It can be simplified to

select(type, f~([A[]]), integer)[]
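
For instance, with a made-up f just to illustrate (outputs shown as comments):

f:= x-> sqrt(x):
A:= [$1..10]:
select(type, A, integer &under f)[];    # 1, 4, 9
select(type, f~([A[]]), integer)[];     # 1, 2, 3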

@evanhowington The interpretation of a <= b as not being exactly the same as (a < b or a = b) is perhaps idiosyncratic to Maple, but one doesn't need to use sqrt or any other possibly complex-valued operation to see its effect. Compare

is(x <= x);
      false

is(x <= x) assuming x::real;
      true

The default assumption on symbolic variables in relations with algebraic operands is that they're complex; any other assumptions need to be explicitly stated.

@Guimzo Just change is@`<` to is@`>`.

Note that select(is@`<`, ..., b) is equivalent to 

select((x,b)-> is(x < b), ...)

but I think that the former is a bit more efficient (not sure about that). More generally, f@g is equivalent to (x,y)-> f(g(x,y)) (if g takes two arguments), and select(f@g, ..., y) is equivalent to 

select((x,y)-> f(g(x,y)), ..., y)  

Putting an infix operator in backquotes usually allows it to be used as a function name; for example `<`(a,b) is equivalent to a<b and `+`(a,b) is equivalent to a+b. To use an operator without explicit operands (such as using is@`<` or simply `<` as the first argument to select) requires that the operator be used in its function-name form (which is almost always the operator in backquotes).

Using is(x < b) is a bit more accurate than evalf(x) < b for those cases where higher-than-default decimal precision is needed to decide the inequality.
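
A quick illustration with a made-up list (outputs shown as comments):

L:= [1, 3/2, sqrt(2), 2, exp(1)]:
select(is@`<`, L, 2);   # [1, 3/2, 2^(1/2)]
select(is@`>`, L, 2);   # [exp(1)]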

@acer You wrote:

  • It seems just as easy to just make real assumptions on t, x, lambda (as I had shown in my Answer) than it does to call evalc and make complex assumptions on epsilon1, epsilon2, lambda1.

Yes, certainly that's true in this case. I use evalc by habit because I think that, in general, it does some simplifications that simplify doesn't, even when simplify is given equivalent assumptions. As a simple example, none of the following do anything for me (using Maple 2019 at the moment):

simplify(exp(I*x)) assuming x::real;
simplify(exp(I*x)) assuming real;
simplify(exp(I*x), assume= real);
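
By contrast, evalc handles it directly (it treats symbolic variables as real by default):

evalc(exp(I*x));
      cos(x) + I*sin(x)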
