13. Data Analysis

The experimentalist always has to contend with noise on data; some of its causes, and methods that can be used to remove it, were discussed in Chapter 9. However, once the mean value of a series of experimental measurements has been obtained, some measure of its precision and its accuracy is always required. This can mean making a comparison with data for the same quantity obtained by others and published in the literature, or with an accepted standard. Errors caused by poor experimental method or carelessness, i.e. blunders, can be significant, but they are not accounted for by any statistical analysis because they are entirely unpredictable; they are not considered further. The errors described in this chapter are predictable, but only in a statistical sense: if the distribution that the errors follow is known, then the chance of any given value occurring can be predicted.

It is to be expected that repeating a measurement will improve its precision; however, the accuracy may not improve. This is because accuracy is a measure of how far a result differs from the true value of that measurement, the freezing or boiling point of water for example. If an average result is not accurate, then something must be systematically wrong in the instrument or experimental method. Precision, on the other hand, is a measure of the spread or dispersion of the data about its mean value and can be improved by careful experimentation and by repeating the measurements. Clearly, the best result is both precise and accurate, and the worst is imprecise and inaccurate. The precision increases as more measurements are made, whereas the accuracy may not.
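This distinction can be illustrated with a small simulation. The sketch below (all numbers are invented for illustration) generates repeated measurements that carry both a fixed systematic error, `bias`, and random scatter, `noise_sd`. Increasing the number of measurements shrinks the standard error of the mean, i.e. improves the precision, but the mean still settles on the biased value, so the accuracy does not improve.

```python
import random
import statistics

random.seed(0)
true_value = 100.0   # hypothetical "true" value of the quantity being measured
bias = 2.0           # assumed systematic error: limits the accuracy
noise_sd = 5.0       # assumed random scatter: limits the precision

def measure(n):
    """Simulate n repeated measurements with a fixed bias and random noise."""
    data = [true_value + bias + random.gauss(0.0, noise_sd) for _ in range(n)]
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / n ** 0.5   # standard error of the mean
    return mean, sem

for n in (10, 1000):
    mean, sem = measure(n)
    print(f"n = {n:4d}: mean = {mean:.2f}, standard error = {sem:.3f}")
```

With many measurements the standard error becomes small, yet the mean remains close to `true_value + bias` rather than `true_value`: repetition cannot remove a systematic error.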

Because of the random nature of the noise associated with all measurements, it is never possible to measure an exact value, i.e. one with no error; in fact, if someone were to report such a measurement, this would be very suspicious. Similarly, reporting results that are better than could be obtained given the known random nature of experimental data, as described by the normal (Gaussian) or other distributions, should also be a cause for suspicion. Although errors on a measurement are to be expected as the natural consequence of any experiment, this does not mean that they cannot be reduced by good experimental practice and by repeating measurements; but remember that they can never be reduced to zero.

Clearly, what is needed is a quantity that indicates what an experimental result is, and a second quantity that indicates the chance of this being the correct result. The average is obviously a good measure of an experimental result, and the standard deviation a measure of the dispersion, or range, that a value can reasonably be expected to have on either side of the average. Confidence limits are then constructed from the standard deviation so that it is possible to state, with a certain level of confidence, usually 95 %, that a measurement will lie within that distance of the mean. How these calculations are done is described next.
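As a preview of the calculations to come, the sketch below computes the mean, the sample standard deviation, and approximate 95 % confidence limits for a set of invented repeated measurements, using the normal-distribution multiplier for a two-sided 95 % level. (The data values and the choice of a normal approximation are assumptions for illustration only.)

```python
import statistics
from statistics import NormalDist

# Hypothetical repeated measurements of a boiling point (degrees C); invented values.
data = [99.8, 100.2, 99.9, 100.1, 100.0, 99.7, 100.3, 100.0]

mean = statistics.mean(data)
sd = statistics.stdev(data)             # sample standard deviation
sem = sd / len(data) ** 0.5             # standard error of the mean
z = NormalDist().inv_cdf(0.975)         # two-sided 95 % multiplier, approx. 1.96
lower, upper = mean - z * sem, mean + z * sem

print(f"mean = {mean:.3f}, standard deviation = {sd:.3f}")
print(f"95 % confidence limits: ({lower:.3f}, {upper:.3f})")
```

For a small number of measurements such as this, the Student t multiplier (which is larger than 1.96) would strictly be more appropriate than the normal one; that refinement is part of the calculations described in what follows.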