
acer

26612 Reputation

29 Badges

17 years, 6 days

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Thomas Richard I do not know of any launching option or init file setting that would change that limit.

I tried editing my init file, adding a concocted line for FileMRU11. When I launched the GUI that entry did not appear in the drop-down list, and it vanished from the file upon fully closing the GUI.

@nm The (mapped) operator,

   z->assign(z)

is a procedure in its own right. It declares its own z as its (only) procedural parameter. That z is not a parameter or local of any procedure within which that inner procedure is defined.

Consider this example:

  f := proc(x)
    local p, r;
    p := proc(z) z^2; end proc;
    r := z -> sin(z);
    (z->sqrt(z))(p(r(x)));
  end proc;

The "z" that is a parameter of
   proc(z) z^2; end proc
is not a local or parameter of f.

The "z" that is a parameter of
   z -> sin(z)
is not a local or parameter of f.

The "z" that is a parameter of
   z->sqrt(z)
is not a local or parameter of f.

Please add any closely related followup example here, instead of in a separate Question thread.

@C_R You wrote, "What I just tried: When this equation is inserted into an evalf call, "t54" and "la" are defined as double in the C code (without requiring your solution).

codegen neither requires your solution nor evalf workarounds (which make the code less clean)."

You just described something as involving an "evalf call", and then characterized that as not being an "evalf workaround".

By the way, I deliberately didn't mention the approach of wrapping in an evalf call, since that could possibly alter the structure (prematurely). That's one reason why I went for the type approach. Another reason is that roundoff error might produce inferior float approximations -- you didn't mention using evalf[15] or (possibly necessarily) even higher working precision. Yet another reason is that some targeted variable (which one hopes to be declared float) might only appear in equations that also involve variables that one wants declared integer. Forcibly specifying the types of the (Maple language) procedure's parameters seems like a much tighter and generally better approach.

CodeGeneration[C] tries to deduce the types of variables during its pass through the code. Yes, the absence of floats in formulas can lead it to figure an integer rather than a float type declaration; that aspect is by design. Yes, it is known to be quirky with regard to the defaulttype option. That's not really news. That's all part of why I'd instead suggest putting type specifications on the procedure's parameters.
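As a minimal sketch of that suggested approach (the procedure p and its body here are concocted purely for illustration): specifying a parameter's type as float[8] induces CodeGeneration[C] to declare it as double in the translation, even though no float literals appear in the body.

   restart;
   p := proc(x::float[8], n::integer)
     # no float constants appear here, yet x should translate as double
     x^n + x;
   end proc:
   CodeGeneration:-C(p);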

@C_R I don't understand what you're now trying to say or ask, sorry. In particular, I don't understand what "required" means in your Reply.

Using "float[8]" as type-specification of the Maple language procedure's parameters induced CodeGeneration[C] to translate to a C language declaration with "double". And the Maple procedure can be generated with that effect specified programmatically, which is what you originally seemed to be asking. It's unclear to me how you might now be trying to follow that up.

It's true that CodeGeneration[C] has some warts. It might not always do as good a job as does Compiler:-Compile. And so on.

I cannot recall seeing anyone use "double" as a type in Maple in a highly useful and practical scenario. It happens that CodeGeneration, Matrix/Array/Vector & LinearAlgebra/Statistics, and Compiler:-Compile, etc, know about the Maple type float[8]. I personally would not expect any of those to understand "double" to mean the same thing as "float[8]".

It may be that someone added a Maple type "double" in Maple 2015. I'd be surprised if they had also made the effort to make all the Maple contexts that recognized "float[8]" also accept "double" in the very same ways. I think that doing so would be misdirected effort.

You are missing a multiplication between one instance of Pi and the bracket that immediately follows it.

You'll need to correct that, either with an explicit multiplication symbol or (if still using 2D Input) an extra space.

(In an Answer below this was corrected in the 1D plaintext input.)
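To illustrate in 1D plaintext input (the names and expression here are just made up for illustration): Pi followed immediately by a bracket is parsed as a function application, not a product.

   wrong := sin(2*Pi(x + 1));    # Pi(x+1) is Pi applied as an operator
   right := sin(2*Pi*(x + 1));   # explicit multiplication, as intended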

@mmcdara Perhaps this will explain a little.

The elementwise tilde syntax is less flexible here, and goes for speed. (Less flexibility and terseness often go together...) The map command provides flexibility here.

restart;

with(Statistics):

S := Sample(Binomial(10, 1/3), 3);

S := Vector[row](3, {(1) = 5.0, (2) = 5.7, (3) = 2.0}, datatype = float[8])

S[2] := 5.7: S;

Vector[row]([5., 5.70000000000000, 2.])

op(0,S), rtable_options(S, datatype);

Vector[row], float[8]

 

The row Vector S has float[8] datatype. That means that any rational or integer placed within it gets
stored as a floating-point number.

The next variant preserves that float[8] datatype. The command round does indeed get applied and
produces integers, but once the results get stored in this rtable the datatype stricture causes them
to once again become floating-point values (64bit floats, stored contiguously in memory).

 

S2 := map[evalhf](round, S);

S2 := Vector[row](3, {(1) = 5.0, (2) = 6.0, (3) = 2.0}, datatype = float[8])

op(0,S2), rtable_options(S2, datatype);

Vector[row], float[8]

 

The next variant does not preserve the datatype, so the generated integers can remain. The result
here is also a row Vector.

 

S3 := map(round, S);

S3 := Vector[row](3, {(1) = 5, (2) = 6, (3) = 2})

op(0,S3), rtable_options(S3, datatype);

Vector[row], anything

 

For reasons of efficiency (and syntactic terseness), elementwise operations (using tilde) attempt to
use kernel builtins or evalhf'able operations if available, and in that way preserve the datatype.

 

The terse elementwise syntax F~(...) doesn't have a convenient way to add extra options and be
flexible. (Extra options within the bracket modify the individual calls to F, and thus can't modify
how the elementwise mechanism itself works.)

In the absence of flexibility, this variant goes for speed when possible, and in this example acts
like the S2 variant above: round does indeed get applied and produces integers, but the float[8]
datatype is preserved, so once the results get stored in this rtable they again become
floating-point values.

 

S4 := round~(S);

S4 := Vector[row](3, {(1) = 5.0, (2) = 6.0, (3) = 2.0}, datatype = float[8])

op(0,S4), rtable_options(S4, datatype);

Vector[row], float[8]

Download float8_map_tilde.mw

Btw, I didn't do any conversion to list, since it's not needed here and seems more complicated.

ps. Sometimes I switch to Source mode in the MaplePrimes editor, and paste in output from Maple's Command-line Interface (or from the GUI with prettyprint=1) between <pre></pre> tags. That can be related to whether my rtable outputs appear explicitly instead of as some strange handle.

@mmcdara The following seems "simple" to me.

Applying straight map to a datatype=float[8] rtable gets rid of that datatype. (It's elementwise operations like round~ that you might wish/need to avoid for this goal. This is one of several important ways that map and elementwise ~ are different in behaviour.)

restart;                             
kernelopts(version);                 

    Maple 2015.2, X86 64 LINUX, Dec 20 2015, Build ID 1097895

with(Statistics):                    
S := Sample(Binomial(10, 1/3), 3);

     S := [5., 5., 2.]

map(round,S);

          [5, 5, 2]

This now seems quite tangential to the original Question.

In my Answer I just wanted to show using evalf[16] to get several places more accuracy than with evalf[15], for the Quantile of the left tail of ChiSquare. I didn't mean to start a thread on UseHardwareFloats or various ways of mapping over hardware datatype rtables, etc. Sorry for any confusion. Please branch it off into a separate thread if you'd like to discuss that in detail.

@mmcdara It is interesting that the floating-point Quantile implementation uses hardware precision calculations that are not so accurate for the left-tail of the ChiSquare random variable.

That can be alleviated by using either:
- an exact symbolic formula for the inverse of the CDF (or exact rational input), then evalf'ing
- forced "software" floating-point calculations, by either using Digits>=16 or setting UseHardwareFloats to false.

restart:

kernelopts(version);

`Maple 2022.1, X86 64 LINUX, May 26 2022, Build ID 1619613`

with(Statistics):

Y := RandomVariable(ChiSquare(2)):

CDF(Y, y);

piecewise(y < 0, 0, 1-exp(-(1/2)*y))

symb := Quantile(Y, y);

-2*ln(1-y)

evalf[15](eval(symb, y=0.95));

5.99146454710798

# evidence that previous result was accurate
evalf[15](evalf[100](eval(symb, y=0.95)));

5.99146454710798

Quantile(Y, 0.95); # good

HFloat(5.991464547107979)

evalf[15](eval(symb, y=0.05));

.102586588775101

# evidence that previous result was accurate
evalf[15](evalf[100](eval(symb, y=0.05)));

.102586588775101

Quantile(Y, 0.05); # not so good

HFloat(0.10258658882606375)

evalf[15](Quantile(Y, 0.05)); # not so good

HFloat(0.10258658882606375)

evalf[15](evalf[16](Quantile(Y, 0.05))); # better

.102586588775101

restart:

with(Statistics):

UseHardwareFloats:=false: Digits:=15:

Y := RandomVariable(ChiSquare(2)):

Quantile(Y, 0.05);

.102586588775102

Download Quantile_ChiSquare.mw

@The function Here is another look at Tom's idea, with 2D Input.

The essence of Tom's idea is that a Matrix of formulas can actually be obtained for your example, which provides a way to get A^n without actually powering the Matrix A by repeated multiplication.

restart

A := `<,>`(`<|>`(0, 1, 0, 0), `<|>`(0, 0, 1, 0), `<|>`(1, 0, 0, 0), `<|>`(0, 0, 0, -1))

Matrix([[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, -1]])

An := LinearAlgebra:-MatrixPower(A, n)

(4x4 symbolic Matrix output not reproduced here)

A^3, eval(An, n = 3)

(two equal 4x4 Matrix outputs not reproduced here)

E := Equate(An, LinearAlgebra:-IdentityMatrix(4))

[(2/3)*cos((2/3)*Pi*n)+1/3 = 1, -(1/3)*cos((2/3)*Pi*n)+(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, -(1/3)*cos((2/3)*Pi*n)-(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, 0 = 0, -(1/3)*cos((2/3)*Pi*n)-(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, (2/3)*cos((2/3)*Pi*n)+1/3 = 1, -(1/3)*cos((2/3)*Pi*n)+(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, 0 = 0, -(1/3)*cos((2/3)*Pi*n)+(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, -(1/3)*cos((2/3)*Pi*n)-(1/3)*3^(1/2)*sin((2/3)*Pi*n)+1/3 = 0, (2/3)*cos((2/3)*Pi*n)+1/3 = 1, 0 = 0, 0 = 0, 0 = 0, 0 = 0, cos(Pi*n)+I*sin(Pi*n) = 1]

solve(E, allsolutions)

{n = 6*_Z1}

about(_Z1)

Originally _Z1, renamed _Z1~:
  is assumed to be: integer
 

Student:-Calculus1:-Roots(add(t^2, t = `~`[lhs-rhs](E)), n = 1 .. 30)
min(%)

[6, 12, 18, 24, 30]

6

 

Download mat_pow_example.mw

@wswain That is quite often done, and doesn't normally cause issues.

In the context of linear algebra, texts usually use I to mean the identity matrix.

(You seem to have confused that with Maple's use of I to denote the imaginary unit.)
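If you really do want to use the letter I for an identity matrix, one approach (sketched here; the replacement name _i is an arbitrary choice) is to first reassign which symbol denotes the imaginary unit:

   interface(imaginaryunit = _i):        # the imaginary unit is now _i
   I := LinearAlgebra:-IdentityMatrix(4);

After that, I is an ordinary assignable name, while complex arithmetic uses _i.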

@ecterrab One of the points is that there is no call to any diff or conjugate before the forget.

So what is being forgotten that affects the (sole, only) call to diff that happens afterwards?

(I already knew what the subfunctions option to forget is supposed to do, thanks. I just don't see how that is relevant when the problematic example call to diff was not also made before the forget&unwith. What is being forgotten?)

The difference does not seem to lie in Physics:-ModuleUnload (which gets called by issuing restart but not by issuing unwith(Physics), btw).

I also tested forcing a call to that, and it doesn't seem to be the key.  diff_forget_hmm2.mw

@Preben Alsholm That is an interesting observation.

It's still not clear to me what the difference is, since :-diff doesn't have a remember table just before this call to forget. Some other action may be key, induced by the forget(:-diff) call, related to whether option subfunctions is utilized.

(I have not yet traced through that call to forget, say in the debugger, to try and pinpoint the difference. Clearly the "forgetting" doesn't relate specifically to f(x) or its conjugate per se, since that was not previously used. But the forgetting does something, and option subfunctions=false apparently disables whatever that is...)

restart;

kernelopts(version); # No Physics update applied

`Maple 2022.1, X86 64 LINUX, May 26 2022, Build ID 1619613`

restart;

with(Physics):

unwith(Physics);

op(4,eval(:-diff)); # no remember table

forget(:-diff,subfunctions=false);

#forget(Physics:-diff); # doesn't seem to matter here

:-diff(:-conjugate(f(x)), x);

diff(conjugate(f(x)), x)

restart;

with(Physics):

unwith(Physics);

op(4,eval(:-diff)); # no remember table

forget(:-diff,subfunctions=true); # default

:-diff(:-conjugate(f(x)), x);

(diff(f(x), x))*(-conjugate(f(x))/f(x)+2*abs(1, f(x))/signum(f(x)))

Download diff_forget_hmm.mw
