acer

32587 Reputation

29 Badges

20 years, 38 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I have been noticing this effect recently. Several of the worksheets and Documents that MaplePrimes members have uploaded are written in 2D Math and often contain a lot of top-level code. Re-executing these documents often involves a surprising delay.

It seems worse on 64-bit Linux, where I can almost watch the 2D Math parser and Typesetting system crawl through each individual command. It's like using a 1200 baud modem all over again.

Actually, even using procedures doesn't avoid it completely, as the system also pauses while digesting the definitions of procs written in 2D Math. Of course, subsequent execution of the procedures is OK.

acer

I'm not sure whether the "information jumps" to which Jacques alluded are better measured as (the ratio of) absolute deltas or actual entries.

E.g.,

plots[pointplot]([seq([i, (S[i]-S[i+1])/(S[i+1]-S[i+2])], i = 1 .. 50)]);

That seems to bring out jump points that match up reasonably well with the reciprocal log plot in another comment below.

Like so much else with plots, it's sometimes far easier to see a quality than to measure and detect it with code in a reliable way.
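The ratio-of-deltas idea above can be sketched outside Maple as well. Here is a minimal Python illustration; the sequence S below is made-up data standing in for the actual sequence from the thread, chosen so that one step is much larger than its neighbours:

```python
# Hypothetical stand-in data: a decreasing sequence with one sharp drop.
# (These values are illustrative, not the sequence discussed in the thread.)
S = [100.0, 90.0, 50.0, 45.0, 20.0, 18.0, 17.0, 8.0]

# Ratio of successive deltas, as in the pointplot above: a large value at
# index i flags that the step S[i]-S[i+1] dwarfs the following step.
ratios = [(S[i] - S[i + 1]) / (S[i + 1] - S[i + 2])
          for i in range(len(S) - 2)]

# The largest ratio marks the most pronounced jump in the sequence.
jump_at = ratios.index(max(ratios))
```

With this toy data the peak ratio lands at the 25-to-2 step, which is the kind of "jump point" the plot is meant to expose.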

acer

Thanks for clarifying that, Doug.

Apart from this Table element mutability issue, what would you want to see improved in Maplets? The way in which only a child maplet can have the current focus, but not the parent (until the child is closed), is another issue.

And how would you rank the importance of elements missing from Embedded Components? What about the inability to programmatically control or define those components?

You've obviously had a great deal of experience with Maplets. Where do you think efforts would be best spent?

acer

It does seem like there are jumps, at elements 2, 4, 8, 15, 18, 21, 26, 28, ...

plots[pointplot]([seq([i, 1/abs(ln(S[i]))], i = 1 .. 30)]);

acer

Is this illustrating how it might be done in Matlab?

acer

This looks like a job for simulation, to me. The sort of thing where one starts off with a set of rank correlation coefficients, and possibly a joint distribution or copula. I am no expert in this area. If you are lucky, Axel Vogt might have some good advice for you. I too am curious as to how to do this with Maple -- whether it can be done reasonably easily with the Statistics package or whether some key subtask would be facilitated by the Financial Modeling Toolbox.

acer

I don't understand what exactly it is that you want.

Do you want to generate samples of 10000 elements from each of 20 logistic random variables, so that their correlation matrix fits a 20x20 matrix which you know and supply in advance? Are you asking how to produce the 20 sets of random deviates, given the 20x20 matrix of desired correlation values? (I believe that is how Jacques, and consequently Robert, have responded.)

Or do you simply want (somehow) to generate 20 logistic random variables (correlated in some way, or not) and then compute their sample correlations as product-moment coefficients? If so, do you care how it's done, so that the samples are deliberately made not to be mutually uncorrelated?
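For the first reading of the question, one standard technique (sometimes called a Gaussian copula or NORTA approach -- not necessarily what Maple's Statistics package does internally) is: draw correlated normals, push them through the normal CDF to get correlated uniforms, then apply the logistic quantile function. A two-variable Python sketch, with the target correlation rho chosen arbitrarily for illustration:

```python
import math
import random

random.seed(0)

rho = 0.8      # assumed target correlation of the underlying normals
n = 20000      # sample size (illustrative)

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def logistic_quantile(u):
    # inverse CDF of the standard logistic distribution
    return math.log(u / (1.0 - u))

xs, ys = [], []
for _ in range(n):
    z1 = random.gauss(0, 1)
    # 2x2 Cholesky factor applied by hand to correlate the normals
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    xs.append(logistic_quantile(norm_cdf(z1)))
    ys.append(logistic_quantile(norm_cdf(z2)))

# sample product-moment correlation of the two logistic samples
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
r = cov / (sx * sy)
```

Note that the copula transform attenuates the correlation slightly, so the sample r of the logistic deviates comes out a little below the rho fed to the normals; matching a target matrix exactly requires adjusting the input correlations to compensate.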

acer

Just look here.

acer

Just some observations:

> I1:=int( 1/x^p, x=1..infinity ):

> value( I1 ) assuming p<0;

                                   infinity

> value( I1 ) assuming p<1, p>0;
                                   infinity

> gorp:=value( I1 ) assuming p<1;
                                 infinity p - infinity
                         gorp := ---------------------
                                         p - 1

> eval(gorp,p=1/2);
                                   undefined

> limit(gorp,p=1/2);
                                   infinity

> limit(gorp,p=a) assuming a<1;
                                   infinity

If I had to guess, I'd wonder whether int() is not taking some limit in some subcase. Or perhaps it goes to unnecessary trouble simply because it does not notice some common behaviour over more than a single subregion (e.g. p<0 and 0<p<1). One can see it take longer, and collect garbage, when using only the assumption p<1.

Ok, so I peeked a little. The unsimplified result is obtained by the FTOC integrator. So one could look harder at why IntegrationTools:-Definite:-Integrators:-Asymptotic:-EstimateInternal and friends are not getting the simpler result of `infinity` prior to the FTOC integrator's being used. It is the "Asymptotic" integrator that gets the simple `infinity` result for the assumption p<0. So without looking harder I'll stick with the guess for now, that the Asymptotic integrator gets confused, possibly through not noticing commonality of two regions of interest...
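The subcase analysis itself is easy to confirm numerically: for p > 1 the integral int(1/x^p, x=1..infinity) converges to 1/(p-1), while for p <= 1 the partial integrals grow without bound, so `infinity` is indeed the right answer under `assuming p<1`. A small Python check using a midpoint rule (the cutoffs and step count are arbitrary choices for illustration):

```python
def partial_integral(p, upper, steps=200000):
    # midpoint-rule approximation of int(1/x^p, x=1..upper)
    h = (upper - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) ** p for i in range(steps))

# p = 2 > 1: the tail converges, and the value approaches 1/(p-1) = 1
approx = partial_integral(2.0, 10000.0)

# p = 1/2 < 1: the partial integrals keep growing with the cutoff
# (exact value is 2*(sqrt(U) - 1), unbounded in U)
grow_small = partial_integral(0.5, 100.0)
grow_big = partial_integral(0.5, 10000.0)
```

So the `infinity*p - infinity` form returned under `assuming p<1` is only a failure to simplify, not a wrong branch: its limit is infinity everywhere on that region, as the limit() calls in the session show.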

acer

John,

I like your idea, and I hope that you feel encouraged rather than discouraged. You saw a shortcoming or weakness in Maple, and proposed a way to a solution. That it may benefit from discussion is natural, I think. But it's very good for such activity and proposals to go on here. Good for users, not just for "maple". I hope that your attitude to this idea, or others you have, stays positive.

I have one or two other similar proposals (which I keep meaning to blog about). But I feel that these things might be better hosted on SourceForge or a similar place. For one thing, there may be a need for collaboration and source revision, and that's the sort of thing that should be done right -- in a tried and tested way. Otherwise, the gatekeeping could be very onerous.

On the subject of LaTeX export support, it seems to me that conversion of worksheets and Documents is in greater need of heavy work. I'm comparing that to the `latex`() command in the Library. Of course, that increases the amount of effort needed by a great deal. I wonder who feels that an "expression or object" LaTeX converter should be distinct from the GUI's LaTeX exporter.

acer
