Re: On errors calculated by curve-fitting routines [message #59201 is a reply to message #59078]
Thu, 06 March 2008 21:08
Gernot Hassenpflug
Craig Markwardt <craigmnet@REMOVEcow.physics.wisc.edu> writes:
> Gernot Hassenpflug <gernot@nict.go.jp> writes:
>> I find that in IDL the routines POLY_FIT, LMFIT and CURVEFIT can all
>> calculate the parameter covariance matrix and it is documented that
>> LMFIT uses the method of Burrell and Numerical Recipes. I cannot tell
>> what method the other two routines use.
>
> Anthony mentioned MPFIT, which is a non-linear fitting engine
> translated from MINPACK. As far as I understand, the covariance
> matrix is equivalent to that from Numerical recipes.
>
>> I am hoping that contributors to this list could give their comments
>> and opinions on what method of parameter variance and covariance is
>> most sound, and which routines are therefore preferred for a
>> polynomial fitting case (possibly over-determined).
>
> For linear least squares, I think the covariance matrix is reasonably
> useful. In my field, it's common to use the delta-chi-square method
> described in Numerical Recipes, which usually involves making a
> confidence grid for pairs of parameters that are of interest.
Craig, thank you very much for your response. I will probably spend
the entire weekend comparing test cases between the various IDL
routines, including your really impressive MPFIT set of packages,
Maple, R, Numeric Python, and my own IDL code (based on the
Numerical Recipes approach).
I understand that multivariate "error" has no simple answer even for
purely real quantities, and that the covariance matrix is not the
only, or even the best, estimator in many cases, as you point out.
I'd just like to ask, since I cannot quite tell whether I have grasped
the ideas from Numerical Recipes correctly (and so my own IDL code for
comparison with the others may be incorrect): the covariance-matrix
calculation uses the basis functions (e.g., 1, x, x^2) and the
variances of the dependent (y) variable, but *not* the dependent
variable itself, nor any quantitative measure of the goodness of the
fit (presumably the variances of the dependent variable are supposed
to contain all such information in theory).
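If I've read Numerical Recipes right, that understanding can be checked in a few lines. A minimal NumPy sketch (not IDL, and not the internal code of any of the routines named above; the function name poly_covariance and the sample x and sigma values are mine): the covariance is C = (A^T W A)^-1, where A holds the basis functions evaluated at each x and W = diag(1/sigma_y^2), so the measured y values never enter.

```python
import numpy as np

def poly_covariance(x, sigma_y, degree):
    """Parameter covariance for a polynomial least-squares fit
    with basis functions 1, x, x^2, ..., x^degree.

    Depends only on the abscissae x and the y-variances, not on y."""
    # Design matrix: one column per basis function, one row per point.
    A = np.vander(x, degree + 1, increasing=True)
    # Weight matrix from the y-variances.
    W = np.diag(1.0 / sigma_y**2)
    # Numerical Recipes-style covariance: inverse of the normal matrix.
    return np.linalg.inv(A.T @ W @ A)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sigma_y = np.full_like(x, 0.5)
C = poly_covariance(x, sigma_y, degree=2)
errors = np.sqrt(np.diag(C))  # 1-sigma parameter uncertainties
```

Two data sets with the same x and sigma_y would give identical C here, which is exactly the behaviour I am asking about.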
I ask this because other methods, such as the one Maple uses, seem to
scale their result by the residual sum of squares, for example. I am
still awaiting the book by Bevington (I can only get the 1st edition
from library services, so I need to purchase the 2nd edition) and the
one by Himmelblau from 1970, which is the basis of the Maple method.
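For contrast, here is a sketch of the residual scaling I believe Maple-style routines apply (again NumPy rather than IDL, with illustrative data of my own; I have not verified this against Himmelblau): when the y-variances are unknown or only relative, the unscaled covariance (A^T A)^-1 is multiplied by the residual variance s^2 = RSS / (N - M), so in this convention the y values *do* enter, through the residuals.

```python
import numpy as np

# Illustrative data: roughly quadratic with small scatter.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1, 30.9])

degree = 2
A = np.vander(x, degree + 1, increasing=True)  # basis 1, x, x^2
coeffs, rss, rank, sv = np.linalg.lstsq(A, y, rcond=None)

n, m = len(x), degree + 1
s2 = rss[0] / (n - m)                    # residual variance, RSS/(N - M)
C_scaled = s2 * np.linalg.inv(A.T @ A)   # residual-scaled covariance
```

The weighted form in the previous sketch and this scaled form coincide only when the reduced chi-square happens to be 1, which may explain the differing numbers I see between routines.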
Best regards,
Gernot Hassenpflug
--
BOFH excuse #100:
IRQ dropout