Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@mmcdara But you do have the foreknowledge that I sought: For any given matrix, K is fixed, and every list or vector used during the construction of the matrix can have length K. That's much more efficient than zero-padding each list separately.

The base being prime has nothing to do with the construction of these matrices using my method. Whether you choose to only construct them when the base is prime is up to you.

So, here's my optimized code:

NimMatrix:= module()
local
   sav_tablesize,
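   #ModuleLoad saves the user's rtablesize interface setting (so that ModuleApply can raise it to display large Matrices); ModuleUnload restores it.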
   ModuleLoad:= proc() sav_tablesize:= interface('rtablesize'); return end proc,
   ModuleUnload:= proc() interface('rtablesize'= sav_tablesize); return end proc,

   ModuleApply:= proc(N::And(posint, Non(1)), K::posint)
   uses AA= ArrayTools:-Alias;
   local 
      P:= AA(Array(0..K-1, k-> N^k), 0, [K]),
      NK:= P[-1]*N #NK = N^K
   ;
      interface('rtablesize'= max(interface('rtablesize'), NK));
      #The procedures baseN and `&Nim` are defined here so that 
      #they get fresh remember tables for each Matrix.
      baseN:= 
         proc(n::nonnegint)
         option remember;
         local q:= n, k;
            <seq(irem(q, N, 'q'), k= 1..K)>            
         end proc
      ;     
      `&Nim`:= 
         proc(a::nonnegint, b::nonnegint)
         option remember;
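            #Digit-wise sum of the two base-N digit Vectors, reduced mod N, then reconverted to an integer via the powers of N stored in P: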
            local r:= add(irem~(add(baseN~([a,b])), N)*~P);
            #Implement symmetry:
            if a=b then r else procname(b,a):= r fi
         end proc
      ;         
      AA(Array((0..NK-1)$2, `&Nim`), 0, [NK$2])
   end proc
;
export
   baseN, `&Nim`
;
   ModuleLoad()
end module:
 

Usage:

M:= NimMatrix(3,3);

@tomleslie The reason for the different answers between our procedures is that my procedure Nim does the list addition "with carry". I did this because the OP stated incorrectly in the Question that the addition of the digits was to be done in base 10. It's actually not supposed to be done in any base, just natural integer arithmetic. The base-10 part seemed so weird to me that I got a clarification about "carry" from the OP. With that clarification, I now know that there's no carry and that this algorithm has absolutely nothing to do with base 10! Some people confuse natural, unbased integer arithmetic with base 10. It only superficially seems that way because the end results are converted to base 10 merely for display. Otherwise, n is not the same thing as convert(n, base, 10), obviously.

This issue only matters, and our procedures will only differ, when the base is greater than 4.

I have a new batch of more-efficient procedures that I'm about to put up.

 

@mmcdara If you want to go the polynomial route, use PolynomialTools:-FromCoefficientList rather than series.
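For example, a minimal sketch of that route (the digit list [5, 1] and the base 9 are just for illustration, with the low-order digit first):

L:= [5, 1]:
p:= PolynomialTools:-FromCoefficientList(L, b);   # 5 + b
eval(p, b= 9);                                    # 14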

My B10 would be faster as a for loop because then I'd not need to compute each power of b separately. In my Answer, I was going for brevity of code (without sacrificing any generality).
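A rough sketch of what I mean (this hypothetical loop version isn't the B10 from my Answer; it uses Horner's rule, so no power of b is ever formed, and it assumes that the low-order digit comes first):

B10loop:= proc(L::list(nonnegint), b::posint)
local r:= 0, d;
   for d in ListTools:-Reverse(L) do r:= b*r + d od;
   r
end proc:
B10loop([5, 1], 9);   # 14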

If you have foreknowledge of the overall maximum list length needed, then ZPad can be made much faster by using Vectors of that length. It appears that for your work there is such foreknowledge.
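A quick sketch of the fixed-length idea (K = 4 here is just an example; the Vector constructor pads with zeros automatically when its initializer list is shorter than the stated length):

K:= 4:
Vector(K, [1, 2]) + Vector(K, [3, 4, 5]);   # <4, 6, 5, 0>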

I need for you to clarify if a "carry" operation for the addition is needed when the base is greater than 4. Suppose that the base is 9 and the original operands are 7 and 8. Do we do this:
[[7], [8]]   #operands converted to base 9
[15]         #sum of the above
[6]          #above mod 9
6            #above reconverted to base 10

or this:
[[7,0], [8,0]]   #operands converted to base 9, padded for carry
[5,1]            #sum of the above
[5,1]            #above mod 9
14               #above reconverted to base 10
?

As soon as you can clarify that, I'll be able to give you an efficient procedure to create a whole Matrix.

Hmm. I just realized that I could figure it out directly from your code, presuming that you think that your code works for the higher-base cases. And it appears that you use the first of my two options. In that case, my Nim procedure won't work for bases greater than 4, and is generally inefficient.

@tomleslie The operation a +~ b will fail if the two lists have different lengths. See procedure ZPad in my Answer.
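A quick illustration (the exact error message varies with the Maple version):

[1, 2] +~ [3, 4, 5];      # raises an error because the lengths differ
[1, 2, 0] +~ [3, 4, 5];   # after zero-padding the shorter list: [4, 6, 5]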

@tomleslie The OP is developing an algorithm for a BVP solution method that's not currently implemented in Maple. Of course, that algorithm is ultimately intended to be used for problems for which there's no easy symbolic solution. Obviously this work is in its early stages. A good way to test it is to compare its results against an exact result that's known to be correct.

This is all me guessing based on the OP's questions over the past few days.

@acer I mean that the execution of the OP's error statement prevents my procedure's return value from being displayed because now the OP's wrapping procedure has no return value.

Is that clear now? Sorry, I often convey the context by using boldface, which may not be apparent in a literal or out-loud reading. When I use boldface here on MaplePrimes, it's always meant to represent literal Maple code. If that Maple code is also an English word that's meaningful in the same context, I can see how that can cause confusion. So, I should've said "blocked by the execution of an error statement" rather than "blocked by error."

@VEIND You've quickly come to a situation requiring unapply. It's due to the order in which things are evaluated. In this code:

f:= x-> diff(g(x), x);
f(0);

the order is not as it appears on screen. The order is

  1. Make the unevaluated expression diff(g(x), x) into a function of x, and assign it to f
  2. Replace x with 0
  3. Evaluate the resulting expression diff(g(0), 0), which is nonsense because you can't differentiate with respect to a number.

Changing that to

f:= unapply(diff(g(x), x), x);
f(0);

makes the order

  1. Evaluate diff(g(x), x), and make the result a function of x assigned to f
  2. Replace x with 0, and do the arithmetic.
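Here's a concrete run of both versions, using g:= x-> x^2 purely as an example:

g:= x-> x^2:
f:= x-> diff(g(x), x):
f(0);                            # error: can't differentiate with respect to 0
f:= unapply(diff(g(x), x), x);   # f:= x-> 2*x
f(0);                            # 0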

@waseem The solution is still to use rhs(exact) rather than exact. This is needed to strip off the "y(x) =" part, which is meaningless to plot.
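A minimal sketch of what I mean (the ODE and initial condition here are just stand-ins for yours):

exact:= dsolve({diff(y(x), x) = y(x), y(0) = 1});   # y(x) = exp(x)
plot(rhs(exact), x= 0..1);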

@acer 

There are some issues in play in this particular situation that can confound all but expert users:

  • It's not immediately clear whether Typesetting:-mrow causes display to the screen via return value or via side effect.
  • Indeed, it's neither a procedure nor an appliable module; it's just a mysteriously exported name of Typesetting---so, there's no code to view.
  • Indeed, it isn't even documented.
  • There's no associated `print/...` procedure; indeed, using it with a high printlevel reveals nothing.

I wouldn't be surprised if you, like me, have learned everything that you know about using Typesetting:-mrow by reverse engineering its expressions with lprint.

To the OP, nm: If you're faced with deciding between "It's displayed via return value" and "It's displayed via side effect", guess the former first and investigate that possibility: assign the result to a variable and apply lprint.
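For example (Typesetting:-Typeset is used here only to manufacture a typeset structure to inspect; your result would come from your own code):

r:= Typesetting:-Typeset(x + 1):
lprint(r);   # reveals the underlying mrow/mi/mo/mn structure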

You don't even have two equations. You have two expressions.

The only sensible problem akin to NonlinearFit (*footnote) that I can see to pose for these two expressions is

  • Find values of a[1] and b[1] that minimize the "difference" between the two expressions for all pairs (x, t) in some set S.

It remains for you to specify the set S and how "difference" is to be measured. There are some standard ways of measuring it. Let AD be the algebraic difference. Then the measured difference could be

  1. the (double) integral of |AD|^2 over S (or sum, if S is countable)
  2. the (double) integral of |AD| over S (or sum, if S is countable)
  3. the maximum of |AD| over S
  4. there are several other variations available in the package DirectSearch.

Also, it'd make things much easier if the set S were topologically compact, such as the Cartesian product of two closed and bounded intervals, although this requirement is not absolutely essential.

So, is this what you had in mind?

(*footnote) The actual command NonlinearFit could be used if S were finite or we decided to approximate S by some finite subset of it. Otherwise, a command akin to NonlinearFit (such as Optimization:-Minimize) would be used.
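For example, here's a rough sketch of option 1 (the two expressions and the square S = [0,1] x [0,1] are toy stand-ins, with a1 and b1 playing the roles of a[1] and b[1]):

e1:= a1*x^2 + b1*t:  e2:= x*t:
obj:= int(int((e1 - e2)^2, x= 0..1), t= 0..1):   # the (double) integral of |AD|^2 over S
Optimization:-Minimize(obj, a1= -10..10, b1= -10..10);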

 

@Gabriel samaila I got the ranges from the axes of the 3D plots: Nt = 0.01..0.4, Nb = 0.06..0.4, Br = 0.1..2.0, Sc = 0.1..2.0.

I haven't gotten fsolve to converge on a solution yet. I am going to try some other technique. One potential problem would be if different values of those 4 hidden parameters were used for each row of Table 1. Do you think that that's possible? If that's the case, then it's hopeless to compute those values.

The paper should provide those values. It violates scientific protocol to omit them, and the paper should fail peer review for that reason alone.

@Gabriel samaila I know the default values; they've been encoded into my procedure Solve all along. I need ranges. For each of those 4 parameters, what is the minimum and maximum possible value? Or, if you can't answer that, what are the minimum and maximum reasonable, realistic, or practical values?

@Gabriel samaila Yours is a very interesting problem; I'm not getting tired of it or you at all. I don't know what you mean by "editing" sols(1): I don't see any legitimate reason to edit Maple output other than changing the formatting for display or publication purposes.

Regarding guessing the parameter values needed to construct Table 1: It's possible to do this systematically. We have the very lucky situation that the number of unknown parameters, 4, exactly equals the number of known values of C__f. So the situation is akin to numerically solving a system of 4 equations for 4 unknowns. This can be done with fsolve (that's fsolve, not dsolve) with procedure-form input (aka operator-form) where each procedure calls my Solve procedure to invoke dsolve, varying the parameters. I'm working on it now. This use of fsolve is very similar to what you'd need to do if you were backsolving to satisfy initial conditions for a shooting method.

It would help if you could tell me valid and/or reasonable ranges for the parameters Nb, Nt, Br, and Sc.

@Preben Alsholm Thank you, Preben. Since the categories DAE and IVP aren't mutually exclusive, I had some skepticism. I don't have much experience with DAEs.

Fabio, I suggest that you read the help pages ?dsolve,numeric,DAE and ?dsolve,numeric,DAE_extension. Solving DAE IVPs apparently has numerous pitfalls that solving ODE IVPs doesn't. In particular---if I've interpreted the help pages correctly---whether initial conditions are required for the purely algebraic dependent variables, and which of those variables they're required for, is highly specific to both the problem and the solution method. If that's true, then I can see that a small change in any method's code between two versions of Maple could easily change the required initial conditions.

Try removing all options other than numeric, the goal being to get through the input processor without error rather than getting an accurate or efficient solution. Once we get through that, then we'll go for accuracy, and after accuracy, we'll go for efficiency.
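In other words, start from the minimal form of the call (sys and ics stand for your sets of equations and initial conditions):

sol:= dsolve(sys union ics, numeric);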

@waseem If you can supply the algorithm or a pointer to an online reference that I can understand, then I can code it in Maple. The material that I've been able to find has been either too broad or too engineering oriented. Please narrow it down to about five journal-length pages written with the clarity and good English grammar of a typical Wikipedia article.
