To physicists the term ``error" is interchangeable with ``uncertainty" and does not have the same meaning as ``mistake". Mistakes, such as ``errors" in calculations, should be corrected before estimating the experimental error. In estimating the reliability of a single quantity (such as the diameter of a cylinder) we recognize several different kinds and sources of error:
FIRST are actual variations of the quantity being measured, e.g. the diameter of a cylinder may actually be different in different places. One must then specify where the measurement was made; or, if one wants the diameter in order to calculate the volume, first find the average diameter by means of a number of measurements at carefully selected places. The scatter of the measurements will then give a first estimate of the reliability of the average diameter.
SECOND, the micrometer caliper used may itself be in error. The errors thus introduced will of course not lie equally on both sides of the true value so that averaging a large number of readings is no help. To eliminate (or at least reduce) such errors, we calibrate the measuring instrument: in the case of the micrometer caliper by taking the zero error (the reading when the jaws are closed) and the readings on selected precision gauges of dimensions approximately equal to those of the cylinder to be measured. We call such errors systematic.
THIRD, another type of systematic error can occur in the measurement of a cylinder: the micrometer will always measure the largest diameter between its jaws; hence if there are small bumps or depressions on the cylinder, the average of a large number of measurements will not give the true average diameter but a somewhat larger quantity. (This error can of course be reduced by making the jaws of the caliper smaller in cross section.)
FINALLY, if one measures something of definite size with a calibrated instrument, errors of measurement still exist which (one hopes) are as often positive as negative and hence will average out in a large number of trials. For example, the reading of the micrometer caliper may vary because one can't close it with the same force every time. Also the observer's estimate of the fraction of the smallest division varies from trial to trial. Hence the average of a number of these measurements should be closer to the true value than any one measurement. Also the deviations of the individual measurements from the average give an indication of the reliability of that average value.
Average Deviation:
If one finds the average of the absolute values of the deviations, this ``average deviation from the mean" may serve as a measure of reliability. For example, let column 1 represent 10 readings of the diameter of a cylinder taken at one place (so that variations in the cylinder itself do not come into consideration); column 2 then gives the absolute value of each reading's deviation from the mean.
Measurements | Deviation from Ave.
9.943 mm | 0.000 mm
9.942 | 0.001
9.944 | 0.001
9.941 | 0.002
9.943 | 0.000
9.943 | 0.000
9.945 | 0.002
9.943 | 0.000
9.941 | 0.002
… | …
Ave = 9.943 mm | Ave = 0.0009 mm

Diameter = 9.943 ± 0.0009 mm
Expressed algebraically, the average deviation from the mean is

$$\bar{D} = \frac{1}{n}\sum_{i=1}^{n}\left|x_i - \bar{x}\right|,$$

where $x_i$ is the $i$th measurement of $n$ taken, and $\bar{x}$ is the mean or arithmetic average of the readings.
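As a quick illustration, here is how one might compute the mean and average deviation in Python, using nine of the readings from the table above (a sketch, not part of the original exercise):

```python
# Nine of the diameter readings (mm) from the table above.
readings = [9.943, 9.942, 9.944, 9.941, 9.943, 9.943, 9.945, 9.943, 9.941]

n = len(readings)
mean = sum(readings) / n

# Average of the absolute deviations from the mean.
avg_dev = sum(abs(x - mean) for x in readings) / n

print(f"mean = {mean:.3f} mm")            # 9.943 mm
print(f"avg. deviation = {avg_dev:.4f} mm")
```

The computed values agree with the table to the precision quoted there.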
Standard Deviation:
A more useful measure of the spread in a set of measurements is the standard deviation $S$ (or root mean square deviation). One defines $S$ as

$$S = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}.$$

The standard deviation $S$ clearly weights large deviations more heavily than the average deviation does, and thus gives a less optimistic estimate of the reliability. Careful analysis shows that an even better estimator (and an appreciably less optimistic one for small sets of measurements) is

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}.$$
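For the same sample readings used earlier, both forms of the standard deviation can be computed directly; note that the $n-1$ estimator always comes out a little larger (a sketch using only the standard library):

```python
import math

# Nine sample diameter readings (mm), as in the table above.
readings = [9.943, 9.942, 9.944, 9.941, 9.943, 9.943, 9.945, 9.943, 9.941]
n = len(readings)
mean = sum(readings) / n
ss = sum((x - mean) ** 2 for x in readings)  # sum of squared deviations

S = math.sqrt(ss / n)        # root-mean-square deviation (divide by n)
s = math.sqrt(ss / (n - 1))  # less optimistic small-sample estimator

print(f"S = {S:.4f} mm, s = {s:.4f} mm")
```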
If the error distribution is ``normal" (i.e. the errors $\epsilon$ have a Gaussian distribution

$$f(\epsilon) \propto e^{-\epsilon^2/2\sigma^2}$$

about zero), then on average 68% of a large number of measurements will lie closer than $\sigma$ to the true value. While few measurement sets have precisely a ``normal" distribution, the main differences tend to be in the tails of the distributions. If the set of trial measurements is generally bell shaped in the central regions, the ``normal" approximation generally suffices.
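The 68% figure is easy to check numerically: draw many samples from a Gaussian distribution and count the fraction falling within one standard deviation of the mean. A small sketch using Python's standard library:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
sigma = 1.0
samples = [random.gauss(0.0, sigma) for _ in range(100_000)]

within = sum(1 for x in samples if abs(x) < sigma)
frac = within / len(samples)
print(f"fraction within one sigma: {frac:.3f}")  # close to 0.683
```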
Relative error and percentage error:
Let $\Delta a$ be the error in a measurement whose value is $a$. Then $\Delta a / a$ is the relative error of the measurement, and $100\% \times \Delta a / a$ is the percentage error. These terms are useful in laboratory work.
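In code these two quantities are one-line helpers; the function names below are merely illustrative:

```python
def relative_error(delta_a, a):
    """Relative error of a measurement a with absolute error delta_a."""
    return delta_a / abs(a)

def percentage_error(delta_a, a):
    """Relative error expressed as a percentage."""
    return 100.0 * relative_error(delta_a, a)

# e.g. an error of 0.5 mm on a 50.0 mm measurement:
print(relative_error(0.5, 50.0))    # 0.01
print(percentage_error(0.5, 50.0))  # 1.0 (percent)
```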
A.) If the desired result is the sum or difference of two
measurements, the ABSOLUTE uncertainties ADD:

Let $\Delta x$ and $\Delta y$ be the errors in $x$ and $y$ respectively. For the sum $z = x + y$ we have

$$\Delta z = \Delta x + \Delta y,$$

and the relative error is

$$\frac{\Delta z}{z} = \frac{\Delta x + \Delta y}{x + y}.$$

Since the signs of $\Delta x$ and $\Delta y$ can be opposite, adding the absolute values gives a pessimistic estimate of the uncertainty. If the errors have a normal or Gaussian distribution and are independent, they combine in quadrature, i.e. as the square root of the sum of the squares:

$$\Delta z = \sqrt{(\Delta x)^2 + (\Delta y)^2}.$$
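A small numerical comparison of the two rules (the helper names are illustrative):

```python
import math

def sum_err_worst(dx, dy):
    """Worst case: absolute uncertainties simply add."""
    return dx + dy

def sum_err_quad(dx, dy):
    """Independent Gaussian errors combine in quadrature."""
    return math.sqrt(dx ** 2 + dy ** 2)

dx, dy = 0.3, 0.4
print(sum_err_worst(dx, dy))
print(sum_err_quad(dx, dy))
```

For $\Delta x = 0.3$ and $\Delta y = 0.4$ the worst case gives 0.7 while quadrature gives 0.5 (a 3-4-5 triangle), noticeably smaller.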
For the difference of two measurements, $z = x - y$, we obtain a relative error of

$$\frac{\Delta z}{z} = \frac{\Delta x + \Delta y}{x - y},$$

which becomes very large if $x$ is nearly equal to $y$.
Hence avoid, if possible, designing an experiment where one
measures two large quantities and takes their difference to obtain the
desired quantity.
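The blow-up in the difference of two nearly equal quantities is easy to see with made-up numbers (a sketch, not experimental data):

```python
# Two large, nearly equal measurements with modest absolute errors.
x, dx = 100.0, 0.5
y, dy = 98.0, 0.5

diff = x - y
rel_diff = (dx + dy) / diff  # relative error of the difference
rel_x = dx / x               # relative error of x itself, for comparison

print(f"relative error of x:     {rel_x:.1%}")    # 0.5%
print(f"relative error of x - y: {rel_diff:.1%}") # 50.0%
```

Each measurement is good to half a percent, yet their difference is only good to fifty percent.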
B.) If the desired result involves multiplying (or dividing) measured quantities, then the RELATIVE uncertainty of the result is the SUM of the RELATIVE errors in each of the measured quantities.

Proof: For $z = xy$,

$$z + \Delta z = (x + \Delta x)(y + \Delta y) = xy + y\,\Delta x + x\,\Delta y + \Delta x\,\Delta y,$$

so, neglecting the small second-order term $\Delta x\,\Delta y$,

$$\frac{\Delta z}{z} = \frac{y\,\Delta x + x\,\Delta y}{xy} = \frac{\Delta x}{x} + \frac{\Delta y}{y}.$$
C.) Corollary: If the desired result is a POWER of the measured quantity, the RELATIVE ERROR in the result is the relative error in the measured quantity MULTIPLIED by the POWER: thus for $z = x^n$,

$$\frac{\Delta z}{z} = n\,\frac{\Delta x}{x}.$$
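The corollary is easy to check numerically: propagate the error with the power rule, then compare against directly shifting the measurement by its uncertainty (illustrative values):

```python
x, dx = 2.0, 0.01   # a measurement and its absolute error
n = 3               # the power

z = x ** n
rel_z = n * dx / x                    # power rule: n times the relative error
rel_direct = ((x + dx) ** n - z) / z  # brute-force first-order check

print(rel_z)       # 0.015
print(rel_direct)  # agrees to first order
```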
The above results also follow in more general form: let $R = f(x, y, z)$ be the functional relationship between three measurements and the desired result. Differentiating $R$ then gives

$$\Delta R = \frac{\partial f}{\partial x}\,\Delta x + \frac{\partial f}{\partial y}\,\Delta y + \frac{\partial f}{\partial z}\,\Delta z.$$
For example, consider the density of a solid cylinder (Exp. M1). The relation is

$$\rho = \frac{m}{V} = \frac{m}{\pi r^2 L},$$

so the relative error is

$$\frac{\Delta \rho}{\rho} = \frac{\Delta m}{m} + 2\,\frac{\Delta r}{r} + \frac{\Delta L}{L}.$$
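A sketch of the density calculation in Python; the masses and dimensions below are made-up illustrative values, not data from the experiment:

```python
import math

# Hypothetical measurements of a metal cylinder with absolute errors.
m, dm = 47.36, 0.01   # mass in g
r, dr = 0.635, 0.002  # radius in cm
L, dL = 5.08, 0.01    # length in cm

rho = m / (math.pi * r ** 2 * L)  # density, g/cm^3
# Relative errors add; r counts twice because it appears squared.
rel_rho = dm / m + 2 * dr / r + dL / L

print(f"rho = {rho:.2f} g/cm^3, relative error = {rel_rho:.2%}")
```

Note that the radius term dominates the uncertainty here, which is typical: the quantity raised to a power deserves the most careful measurement.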
Suppose you have measured the diameter of a circular disc and wish to compute its area $A = \pi r^2$. Let the average value of the diameter be $\bar{d} = 24.326$ mm; dividing $d$ by 2 to get $r$ we obtain $\bar{r} = 12.163$ mm with the same relative error as $d$ (dividing by the exact factor 2 halves the absolute error but leaves the relative error unchanged). Squaring $\bar{r}$ by long multiplication,

$$12.163 \times 12.163 = 121.63 + 24.326 + 1.2163 + 0.72978 + 0.036489 = 147.938569,$$

so $\bar{r}^2 = 147.938569$ mm$^2$ -- a ten-digit result, far more digits than the five-significant-figure data justify.
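The disc-area arithmetic can be reproduced in a few lines; the 0.002 mm uncertainty below is an assumed value for illustration only:

```python
import math

d = 24.326       # mm, average measured diameter (from the text)
delta_d = 0.002  # mm, an assumed uncertainty for illustration

r = d / 2                  # dividing by the exact factor 2
rel_r = (delta_d / 2) / r  # same relative error as d itself

area = math.pi * r ** 2
rel_area = 2 * rel_r       # power rule for the square

print(f"r^2 = {r ** 2:.6f} mm^2")  # 147.938569
print(f"area = {area:.2f} mm^2, relative error = {rel_area:.5f}")
```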
A rule of rather general applicability is to use one more digit in
constants than is available in your measurements, and to save not more than
one more digit in computations than the number
of significant figures in the data.
When using a calculator, one can include many more digits in the calculations.
However, at the end be sure to round off the final answer to display the
correct number of significant figures.
Suggestions on Form for Lab Notebooks:
Date performed:____________
Partner:_________________________
Subdivisions:
If appropriate, name and number each section as in the manual.
DATA:
Label numbers and give units. If appropriate, record the data in
tabular form. Label the tables and give units.
CALCULATIONS:
State the equations used and present a sample
calculation. (Inclusion of the arithmetic is not necessary.)
CONCLUSIONS:
If any important conclusions follow from the
experiments, state them and show by a brief statement
how they follow. Compare your results with accepted values if the
experiment has involved the measurements of a physical constant.
Errors:
Some of your experiments will be qualitative while
others will involve quantitative measurements of physical constants. In
experiments where it is appropriate, estimate the uncertainty of
each measurement used in a calculation and compute the uncertainty of
the result. Does your estimate of uncertainty indicate satisfactory
agreement between your result and the accepted one (or between your several
values if you have several)? Intelligent discussion is welcome, but don't
make this section a burden.