I have a set of data with errors, how do I compute the error for the fit $f(x)=a$?
I remember there are formulas for the errors in parameters for fit $f(x)=ax+b$, i.e. $\delta a$ and $\delta b$. Where can I find these kind of formulas?
Since you know the errors $\sigma_i$ on the measurements $x_i$, one method is to write a $\chi^2(a) = \sum \frac{(x_i-a)^2}{\sigma_i^2}$, and minimize it with respect to $a$, providing an estimator $\hat{a}$, and a minimum $\hat{\chi^2}$. 68% confidence intervals on $a$ are then obtained by determining the values of $a$ for which $\chi^2 = \hat{\chi^2}+1$. In the case of your constant model, this can be done analytically.
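For the constant model the minimization works out in closed form: $\hat{a}$ is the weighted mean, and the $\chi^2=\hat{\chi}^2+1$ condition gives $\delta a = 1/\sqrt{\sum 1/\sigma_i^2}$. A minimal sketch with made-up data (the values of `x` and `sigma` are just illustrative):

```python
import numpy as np

# Hypothetical measurements x_i with known errors sigma_i
x = np.array([4.8, 5.1, 5.3, 4.9, 5.0])
sigma = np.array([0.2, 0.1, 0.3, 0.2, 0.1])

w = 1.0 / sigma**2  # chi^2 weights

# Minimizing chi^2(a) = sum_i w_i (x_i - a)^2 gives the weighted mean
a_hat = np.sum(w * x) / np.sum(w)

# chi^2(a) = chi^2_min + (sum_i w_i) * (a - a_hat)^2, so chi^2 rises
# by 1 at |a - a_hat| = 1/sqrt(sum_i w_i): the 68% confidence interval
delta_a = 1.0 / np.sqrt(np.sum(w))

print(a_hat, delta_a)
```

Note that $\delta a$ depends only on the $\sigma_i$, not on the scatter of the $x_i$ themselves, which is a quick sanity check on whether your quoted errors are realistic.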
There's no general formula for all functions, because small-signal noise propagation requires that you know the sensitivity (a derivative), and... some functions aren't differentiable.
A general technique that works is to add a dither function (a small random addition to every data point) scaled to be comparable to the experimental error, and do a few dozen or a few hundred trial curve-fits, observing the scatter of the fitted parameters. This is a kind of Monte Carlo calculation.
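A rough sketch of the dither approach for a straight-line fit, with made-up data (the true slope/intercept and the error bars here are arbitrary; `np.polyfit` with `w=1/sigma` stands in for whatever weighted fitter you actually use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y_i measured with errors sigma_i at points x_i
x = np.linspace(0.0, 4.0, 9)
sigma = np.full_like(x, 0.3)
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)  # "measured" values

n_trials = 500
params = np.empty((n_trials, 2))
for k in range(n_trials):
    # Dither: perturb each point by noise comparable to its error bar
    y_trial = y + rng.normal(0.0, sigma)
    # Refit the dithered data (weighted least squares)
    params[k] = np.polyfit(x, y_trial, 1, w=1.0 / sigma)

# Scatter of the fitted slope and intercept estimates the parameter errors
delta_a, delta_b = params.std(axis=0)
print(delta_a, delta_b)
```

The spread of the refitted parameters over the trials is the error estimate; for this linear case it should roughly reproduce the standard textbook formulas for $\delta a$ and $\delta b$.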
A few operations (like a discrete Fourier transform) have known noise-propagation properties that are simple. Others (like determining a phase by taking the arctangent of a ratio) have singularities (they are nondifferentiable at some values).
It is always possible, in a multivariate fit, to add one more free parameter and find that the data cannot constrain that particular unknown. Just as you may need five data points to find five unknowns, you may have five data points yet no unique solution for all five unknowns. If two solutions are possible, the error sensitivity is formally infinite.