sand15

MaplePrimes Activity


These are replies submitted by sand15

@Anthrazit 
Good luck

@Anthrazit 

I see...
I can't do more for you and I hope someone else will fix your problem.
As a last resort, and even though I know it would be a lot of work, could using only Maplets be an option for you?

@dharr 

For information:
Concerning the square root of symmetric positive definite matrices, I submitted a question here:
https://www.mapleprimes.com/questions/235834-MatrixPower-Doesnt--Give-The-Right-Answer


Extremely useful!
I agree with @C_R: a post seems more appropriate to me and would give it more exposure.

@Preben Alsholm 

Given the definition of integration over an infinite domain (simply "the definition" in what follows):

int(C, x=0..+infinity) = limit(int(C, x=0..a), a=+infinity);

one gets

limit(int(C, x=0..a), a=+infinity);
eval(%, C=0)
                       signum(C) infinity
                           undefined

On the other hand:

limit(int(0, x=0..a), a=+infinity);
                               0

Does this difference come from the way eval acts?
---------------------------------------------------------------------------------------------------
Now consider this

int(0, y=0..1, x=0..+infinity);
                           undefined

But, applying the definition:

int(limit(int(0, x=0..a), a=+infinity), y=0..1);
limit(int(int(0, y=0..1), x=0..a), a=+infinity);
limit(int(0, y=0..1, x=0..a), a=+infinity);
                               0
                               0
                               0

So, do we have to consider that Maple doesn't apply the definition when it directly computes the integral?

@vv @Mahardika Mathematics

Even if the result is very pretty, it would have been nice to know how this (extremely complex) equation has been derived.

To @vv: take a look at this site
https://www.researchgate.net/publication/368438992_CREATING_3D_GRAPH_EQUATION_by_DHIMAS_MAHARDIKA
it contains a few other amazing images but the full article is not free.

To @Mahardika Mathematics : do you have a free paper about your work?

@Thiago_Rangel7 

If you prefer the way MMA simplifies expressions, why don't you use MMA instead of Maple?

I wonder if I shouldn't go to some MaplePrimes-like forum about MMA and ask
"Why doesn't MMA simplify it to (e + r)/cot(alpha/2)?", just to get an idea of what the answers would be.

The only"properties one can define for a NewDistribution those ones  List_of_properties.mw

It took me a lot of time to find where this information, and as some others about random variables, where hidden in the Statistics package.
I guess that the properties Parameters and ParentName, that any KnownDistribution has, are used elsewhere for some internal purpose.
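As an illustration, here is a minimal, hypothetical sketch of my own (not taken from the attached worksheet): a distribution defined through its PDF property alone, from which Maple derives the other quantities by integration.

restart:
with(Statistics):

# A custom triangular distribution on [0, 1], defined only through its PDF property.
T := Distribution(PDF = (t -> piecewise(t < 0, 0, t < 1/2, 4*t, t < 1, 4*(1 - t), 0))):
X := RandomVariable(T):

Mean(X), Variance(X);    # 1/2, 1/24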

@MaPal93 

Sorry for this late reply but I won't have much time to spend with you from now on as I have just returned to work.

Concerning your not-that-naive question: "by hand" simplification often involves implicit knowledge (for instance, a variance is a strictly positive quantity, a correlation coefficient rho satisfies abs(rho) <= 1, and so on).
If you want Maple to simplify an expression, you must set explicitly all the assumptions which seem implicit to you (a small illustration follows).
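A tiny illustration of this point (my own example, not taken from your worksheet):

restart:
Z := sqrt(sigma__u^2):                 # stands for any sub-expression of your lambdas
simplify(Z);                           # csgn(sigma__u)*sigma__u
simplify(Z) assuming sigma__u > 0;     # sigma__u, once sigma__u is declared positive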

Here is your file with simplified expressions for the lambdas.
(NOTE that I didn't run your code, for I don't have time: I just copy-pasted some expressions into a variable Z and simplified Z.)

Analytical_SOL.mw

@RezaZanjirani 

restart

Z := (beta__c*y__c^2 + beta__m*y__c*y__n + beta__n*y__n^2)/(2*h*(-r__c*y__c - r__n*y__n + h)) + (beta__c*(1 - y__c)^2 + beta__m*(1 - y__c)*(1 - y__n) + beta__n*(1 - y__n)^2)/(2*h*(h - (1 - y__c)*r__c - (1 - y__n)*r__n)):

sys := {diff(Z, y__c) = 0, diff(Z, y__n) = 0}:

solve(sys, [y__c, y__n]);

[[y__c = 1/2, y__n = 1/2]]


infolevel[solve]:=100:
solve(sys, [y__c, y__n])

Main: Entering solver with 2 equations in 2 variables

Main: attempting to solve as a linear system
Main: attempting to solve as a polynomial system
Main: Polynomial solver successful. Exiting solver returning 1 solution

 

[[y__c = 1/2, y__n = 1/2]]


SolveTools:-PolynomialSystem(sys, {y__c, y__n});

Main: polynomial system split into 1 parts under preprocessing

Main: trying resultant methods
PseudoResultant: normalize equations, length= 4401
PseudoResultant: 15047473 [1 7000392304 y__c] 2 2 4401 2 107 0
PseudoResultant: factoring all equations length = 4401
PseudoResultant: factoring done length = 4401
PseudoResultant: normalize equations, length= 5467
PseudoResultant: 3917948 [1 4033335347 y__n 1] 2 2 5467 2 107 0
PseudoResultant: normalize equations, length= 4898
PseudoResultant: normalize equations, length= 5452
PseudoResultant: 592024 [1 100005376 y__n] 2 2 4898 2 107 1
PseudoResultant: normalize equations, length= 1300
PseudoResultant: 408832 [1 200169041 y__c] 1 1 1276 2 78 1
PseudoResultant: normalize equations, length= 3
PseudoResultant: -10 [] 0 0 3 0 3 1
PseudoResultant: 4229639 [1 4233342435 y__n 2] 2 2 5452 3 123 0
PseudoResultant: normalize equations, length= 5060
PseudoResultant: normalize equations, length= 5271
PseudoResultant: 1672501 [1 700025440 y__n] 2 2 5060 3 123 1
PseudoResultant: normalize equations, length= 8883
PseudoResultant: 1953068 [1 700028504 y__n] 2 2 5271 4 301 1
PseudoResultant: normalize equations, length= 15387
PseudoResultant: 2065476 [1 700026140 y__n] 2 2 8883 3 123 1
PseudoResultant: normalize equations, length= 8228
PseudoResultant: 2918733 [1 700042247 y__n] 2 2 15387 4 301 0
PseudoResultant: normalize equations, length= 14354

PseudoResultant: 1 solutions found, now doing backsubstitution
PseudoResultant: backsubstitution of y__c
PseudoResultant: backsubstitution of y__n

 

{y__c = 1/2, y__n = 1/2}


Download solving.mw

@Scot Gould 

Thank you Scot

@ecterrab 

My apologies for being so late in thanking you for your reply.
It lifts the veil on this mysterious (for me at least) package.

Great thanks

@tjxkdcl 

There are no such things in Maple (Ridge, LASSO, LARS, ...).
I've written my own code for Ridge regression (which is the simplest method, just look at the formula in my first reply).

Simple Ridge regression algorithms generally proceed this way (a sketch of step 2.1 is given right after this list):

  1. Define a range D of prior values for lambda.
  2. Take a value of lambda from D.
    1. For this value of lambda, compute the minimizer w(lambda) according to the formula I sent you.
    2. Assess the "quality" of this lambda (see below).
  3. Return to point 2.
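A minimal sketch of step 2.1 (my own illustration with hypothetical names X, y, lambda, assuming the usual closed-form Ridge estimator w(lambda) = (X^T.X + lambda*Id)^(-1).X^T.y):

restart:

# Ridge minimizer of ||y - X.w||^2 + lambda*||w||^2 (closed form).
# X: (n x p) design matrix, y: length-n response vector, lambda >= 0.
RidgeFit := proc(X::Matrix, y::Vector, lambda::nonnegative)
  local p, XT;
  p  := LinearAlgebra:-ColumnDimension(X);
  XT := LinearAlgebra:-Transpose(X);
  # LinearSolve avoids forming an explicit matrix inverse
  return LinearAlgebra:-LinearSolve(XT . X + lambda * LinearAlgebra:-IdentityMatrix(p), XT . y);
end proc:

# purely illustrative toy data
X := Matrix([[1., 0.], [1., 1.], [1., 2.]]):  y := Vector([1., 2., 2.9]):
RidgeFit(X, y, 0.1);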

Assessing the "quality" of lambda:
This is usually done with some resampling method (often cross-validation or leave-k-out); they all take the following form (a sketch of one such split is given right after this description):

  1. Split, randomly or not, your database into two disjoint sub-bases (let's say L and T).
  2. Using sub-base L, compute w(lambda) for the given value of lambda (previous algorithm).
  3. Compute the prediction error w(lambda) gives on sub-base T.

This pseudo-code is to be executed a large number of times to reduce the splitting effect
(note that the w's you get at step 2 are all different, and thus so are the prediction errors you get: but they are all realizations of the prediction error associated with the particular value of lambda you used).

So, to be clear, if you take 100 lambda values within the domain D and assess the prediction error through 100 (L, T)-splittings, you have to run 10^4 computations.
The best lambda is the one which minimizes some criterion, for instance the mean of all the replicates of the prediction error.
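A minimal sketch of one (L, T) split (again my own illustration, reusing the hypothetical RidgeFit procedure above); looping it over many random splits and over the lambda values in D gives the full assessment:

# One random (L, T) split for a given lambda; nL is the size of the learning sub-base L.
# Returns the mean squared prediction error measured on the test sub-base T.
OneSplitError := proc(X::Matrix, y::Vector, lambda::nonnegative, nL::posint)
  local n, perm, iL, iT, w, r;
  n    := LinearAlgebra:-RowDimension(X);
  perm := combinat:-randperm(n);
  iL   := perm[1 .. nL];            # row indices of the learning sub-base L
  iT   := perm[nL+1 .. n];          # row indices of the test sub-base T
  w    := RidgeFit(X[iL, ..], y[iL], lambda);
  r    := y[iT] - X[iT, ..] . w;    # prediction residuals on T
  return add(r[i]^2, i = 1 .. numelems(iT)) / numelems(iT);
end proc:

The best lambda then minimizes, for instance, the average of OneSplitError over all the splits.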

Let me know if you need more help.
(I'm going to sleep right now).

@ecterrab 

Thank you Edgardo for this detailed answer.

I really feel that Physics is a world within Maple.
For instance, why are there so many updates and why are they not synchronized with the version changes of Maple?
Why is Physics such a special package?
