Carl Love


MaplePrimes Activity


These are replies submitted by Carl Love

@maple2015 You said that the Z values were experimentally determined, yet when the fitted model function deviates from those Z values, you call it an "error". Those two things are not consistent with each other. If you're going to use regression, then you must accept that the Z values themselves have error and that the best model function may not pass through any of them. To do otherwise is bad science and often a morally corrupt business / engineering practice (part of "lying with statistics").

The difference between the z-value obtained from the model function and the corresponding experimental z is called a residual.

The difference between an experimental z-value and its true (but unknown) value is called an error. By true value, I mean the mean of all possible experimental values at the given x and y.

If you want to give more weight to certain z-values, it can be done with the weights option to LinearFit. I wouldn't consider it corrupt if certain values were given more weight because they had been experimentally verified more times. Indeed, if W were the weights vector, you could just let W[i] be the number of times that Z[i] had been verified.
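
As a rough sketch (the basis [1, x, y] and the counts in W below are only placeholders, not a recommendation):

W:= <2, 1, 3, 1, 2>:   # hypothetical: W[i] = number of times Z[i] was verified
Statistics:-LinearFit([1, x, y], < X | Y | Z >, [x, y], weights= W, summarize);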

By the way, I'm not trying to argue that any of the models that I've proposed are correct. I wouldn't claim that any model is correct without there being some scientific basis for the inclusion of each of its terms. Rather, I'm arguing that your reason for rejecting models is wrong.

@maple2015 Ah, now that you've said that X and Y are both ratios, and, even better, that they have a common factor (the amount of cement), it's clear to me that a good model to try is multiplicative rather than additive: Z = C*X^p*Y^q, where C, p, and q are values to be determined. It's done with logarithms like this:

lnZ:= Statistics:-LinearFit([1, ln(x), ln(y)], < X | Y | ln~(Z) >, [x,y], summarize);
Summary:
----------------
Model: -12.751403+5.0633641*ln(x)-8.0127107*ln(y)
----------------
Coefficients:
              Estimate   Std. Error  t-value   P(>|t|)
Parameter 1   -12.7514    1.6204     -7.8693    0.0000
Parameter 2    5.0634     1.3071      3.8739    0.0017
Parameter 3   -8.0127     0.7647     -10.4783   0.0000
----------------
R-squared: 0.8927, Adjusted R-squared: 0.8774

Model:= simplify(exp(lnZ));
     Model := 0.289825157651345e-5*x^5.063364078/y^8.012710653


You must learn to solve pairs of simultaneous linear equations by hand, using (Gaussian) elimination on 2x3 matrices of simple real fractions to put those matrices into row-echelon form and reduced row-echelon form (RREF). There's no point in asking questions about higher-level topics such as QR until you understand RREF. Nearly every problem in finite-dimensional linear algebra can be answered (at least in principle) by finding the RREF of some matrix. RREF is not always the most computationally efficient method, but it's almost always the best way to get the exact answer by hand computation.
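
If you want to check your hand computations afterwards, Maple can produce both forms; here's a small sketch with an arbitrarily chosen pair of equations:

M:= <<1, 3> | <2, 4> | <5, 6>>:            # augmented matrix of x + 2*y = 5, 3*x + 4*y = 6
LinearAlgebra:-GaussianElimination(M);      # row-echelon form
LinearAlgebra:-ReducedRowEchelonForm(M);    # RREF; the last column is the solution x = -4, y = 9/2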

@shakuntala Rouben did put a link to the software in the second line of his Answer. See the blue word Imagemagick? Words in blue amongst otherwise black text are almost always links.

@asa12 I'm sorry, but you just ask too many questions all at once, and I'm tired. Maybe someone else can answer. Please learn some basic undergraduate-level (sophomore-level) linear algebra. Learn how to find the eigenvalues and eigenvectors of a 2x2 matrix exactly by hand computation. Learn what symmetric and Hermitian matrices are. Learn what the characteristic polynomial of a matrix is. All of these things can be easily learned by Googling. Personally, I think that Wikipedia is the best place to start learning a math topic (or just about any academic topic). There are also a lot of math videos on YouTube, but I personally find their pace much too slow.
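
Maple itself can be used to check that hand work; a small sketch:

A:= <<2, 1> | <1, 2>>:                               # a 2x2 symmetric matrix
LinearAlgebra:-CharacteristicPolynomial(A, lambda);  # lambda^2 - 4*lambda + 3
solve(%, lambda);                                    # the eigenvalues 1 and 3
LinearAlgebra:-Eigenvectors(A);                      # eigenvalues with their eigenvectors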

By the way, you are by far the number one question asker of all time on MaplePrimes. And I'd guess that 1/3 to 1/2 of your Questions somehow involve matrices and the LinearAlgebra package. So it's quite odd that you don't know what a symmetric matrix is. A little bit of academic study of linear algebra would save you a huge amount of time posting questions and going in logical circles with Maple.

I may be wrong about this, but I don't recall you ever acknowledging that you understood an answer that you've got here, that you learned something. Nor do I recall you ever giving an Answer a Vote Up. Instead, it seems that most answers are just met with a barrage of new questions. All of these things make it tiring and frustrating for the people who answer your questions.

@asa12 I don't know specifically about physics, but what size of matrices are you generally interested in? I'd use a floating-point matrix for anything larger than a "toy" size, because the algorithms for rational matrices are based on finding the roots of a high-degree polynomial, and those computations are usually unstable.

If the matrix is symmetric, then by declaring it so (with the option shape= symmetric to the Matrix constructor), Maple will use a better algorithm.
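
For example (the size and entries below are placeholders):

A:= Matrix(100, 100, (i,j)-> evalf(1/(i+j-1)), shape= symmetric, datatype= float[8]):
LinearAlgebra:-Eigenvalues(A);   # a symmetric-specific routine is selected automatically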

@Muhammad Usman If (and only if) the polynomial has a double root, then the antiderivative can be expressed in terms of the hyperbolic arctangent. Indeed, the expression is relatively simple. The required conditions could be expressed as conditions on your coefficients d[i], but they don't depend on whether the roots or the coefficients are real.

If the polynomial has three distinct roots, I think that the answer can only be expressed in terms of Elliptic functions.
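
Here's an illustrative sketch; I'm assuming an integrand of the form 1/sqrt(cubic), which may not match your exact form:

int(1/sqrt(x^2*(x+1)), x);       # double root at x=0: the result involves arctanh (or an equivalent ln)
int(1/sqrt(x*(x+1)*(x+2)), x);   # three distinct roots: the result involves EllipticF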

@spalinowy From a quick scan with the "naked eye" (i.e., no computation), I see that you can eliminate some subset of {Z1(s), Z2(s), Phi1(s), Psi2(s)}, but certainly not Psi1(s). If you want to proceed along those lines, the command is eliminate.
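
Its usage is like this (a generic sketch; your system and the variables to eliminate will of course differ):

eqs:= {x + y = 3, x - y = 1}:
eliminate(eqs, {x});   # returns the solved-for variable(s) and the relations remaining after elimination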

@mew26 Ah, I see the problem then. This is a bit more complicated than what subs can do. There is a related command subsindets that can handle this. But before I can solve it fully, I need to know whether in each target integral, the only exponent used on x is n for some specific value of n. (I don't consider the lone x as having an exponent, although, of course, in some contexts we would say that its exponent is 1.)

Would you please enter here in MaplePrimes the exact subs command that you're trying to use? Please type it here directly rather than copy-and-pasting it. Or you can upload a worksheet using the green up-arrow on the toolbar in the MaplePrimes editor.

There's no reason why one shouldn't be able to substitute one unassigned indexed name for another with the same index. For example:

J:= Int(a[n]*x^n*u(x), x= -infinity..infinity):
subs(a[n]= b[n], J);

However, note that in a name such as a_n or a__n (rather than a[n]), one should consider the "index" strictly as the literal letter n rather than as an index that can take numeric values. In these cases, it'd be better to refer to n as a subscript rather than as an index.
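
In the meantime, here's a rough sketch of the kind of subsindets call that I have in mind (the final form will depend on your answers above):

J:= Int(a[n]*x^n*u(x), x= -infinity..infinity):   # as above
subsindets(J, specindex(a), e-> b[op(e)]);        # replaces a[k] by b[k] for every index k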


@Muhammad Usman If you enter it into Maple, you get a result (in terms of Elliptic functions). Is that result from Maple not useful to you?

@maple2015 Your 6-term model is worthless. That's obvious from the p-values (the last column of the summary table). Once you see that, there's no point in pursuing any further analysis. Don't be fooled by the relatively high R-squared of 86%: adding terms will always increase R-squared, regardless of whether the terms are significant.

Why are you ignoring the model that I proposed? I don't claim to have any great insight that led me to it. But, having stumbled upon it, I see that its p-values are really, really good, so good that it looks like a textbook problem that was designed to lead you to that model.

@Muhammad Usman In that case, the answer is as I said. Just "freeze" u(eta) to a simple name such as u__eta, and enter the integral into Maple. The results do not depend on the discriminant. Also, without loss of generality you can make a[3]=1 and d[2]=0 because this can be accomplished with a trivial linear substitution.

@acer I hadn't found an exact simplification of the residuals to 0 at the time that I posted. My first attempt tried evala, but it balked, I think because of the abs. Since it was well past when I wanted to go to sleep, I was for the moment satisfied with a floating-point verification, which I did but didn't post because it's trivial. I totally agree with you about the mathematical relevance of an exact simplification, and I'm impressed with the one that you found.

@Kitonum Sorry, you're right. I misinterpreted your assuming as an attempt to tell solve that the variables were real. And I interpreted it that way because that would be a natural and useful thing to do if it were possible.
