Errors of Measurement

ACCURACY

The ability of an instrument to measure a value close to the true value is known as accuracy. In other words, it is the closeness of the measured value to a standard or true value. Accuracy is improved by taking readings with the smallest possible scale divisions, since finer readings reduce the error in the calculation. The accuracy of a system is classified into three types, as follows:

      • Point Accuracy

The accuracy of an instrument at only one particular point on its scale is known as point accuracy. Note that this accuracy gives no information about the general accuracy of the instrument.

      • Accuracy as Percentage of Scale Range

Here, accuracy is expressed as a percentage of the instrument's full scale range. This can be better understood with the help of the following example:
Consider a thermometer with a scale range of up to 500 °C and an accuracy of ±0.5 percent of the scale range, i.e. 0.005 x 500 = ±2.5 °C. Therefore, any reading may carry a maximum error of ±2.5 °C.
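
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation; the function name and values are ours, chosen to mirror the thermometer example:

```python
def max_scale_error(scale_range, accuracy_percent):
    """Maximum absolute error when accuracy is quoted as a percentage of scale range."""
    return scale_range * accuracy_percent / 100

# Thermometer example: 500 °C scale range, accuracy ±0.5 % of scale range
print(max_scale_error(500, 0.5))  # 2.5, i.e. readings carry up to ±2.5 °C of error
```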

      • Accuracy as Percentage of True Value

This type of accuracy is specified with respect to the true value of the quantity being measured. For example, an instrument rated in this way is considered acceptable if its reading deviates by no more than ±0.5 percent from the true value.

PRECISION

The closeness of two or more measurements to each other is known as precision. If you weigh a given substance five times and get 3.2 kg each time, your measurement is very precise but not necessarily accurate. Precision is independent of accuracy: the examples below show how a measurement can be precise but not accurate, and vice versa. Precision is sometimes separated into:

      • Repeatability

The variation arising when the conditions are kept identical and repeated measurements are taken during a short time period.

      • Reproducibility

The variation arising when the same measurement process is used with different instruments and operators, and over longer time periods.
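
Both kinds of precision are commonly quantified by the standard deviation of repeated readings. The following minimal Python sketch uses made-up readings to show how the spread under identical conditions (repeatability) is typically smaller than the spread across instruments and operators (reproducibility):

```python
from statistics import stdev

# Hypothetical readings of the same quantity (true value 20.0)
same_setup       = [20.1, 20.0, 20.1, 19.9, 20.0]  # one operator, one instrument, short session
different_setups = [20.1, 19.7, 20.4, 19.8, 20.3]  # several operators/instruments, longer period

print(f"repeatability spread:   {stdev(same_setup):.2f}")        # ~0.08
print(f"reproducibility spread: {stdev(different_setups):.2f}")  # ~0.30, typically larger
```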

Conclusion

Accuracy is the degree of closeness between a measurement and its true value. Precision is the degree to which repeated measurements under the same conditions show the same results.

Accuracy and Precision Examples

A good analogy for understanding accuracy and precision is a football player shooting at the goal. A player who scores is said to be accurate. A player who keeps striking the same goalpost is precise but not accurate. A player can be accurate without being precise if he hits the ball all over the place but still scores, whereas a precise player hits the same spot repeatedly, irrespective of whether he scores or not. A player who is both precise and accurate will not only aim at a single spot but also score the goal.

[Figure: four targets illustrating the combinations of accuracy and precision. The top left target shows hits with high accuracy and high precision; the top right shows high accuracy but low precision; the bottom left shows high precision but low accuracy; the bottom right shows low accuracy and low precision.]

More Examples

  • If a thermometer reads 28 °C and it is actually 28 °C outside, the measurement is accurate. If the thermometer registers the same temperature day after day under the same conditions, the measurement is also precise.
  • If you measure the mass of a 20 kg body and get readings of 17.4 kg, 17.0 kg, 17.3 kg, and 17.1 kg, your weighing scale is precise but not very accurate. If the scale instead gives 19.8 kg, 20.5 kg, 21.0 kg, and 19.6 kg, it is more accurate than the first scale but not very precise, as the sketch below illustrates.
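
To quantify the second example: the mean of the readings reflects accuracy (closeness to the true 20 kg), while the standard deviation reflects precision (scatter among the readings). A minimal Python sketch, using the readings above:

```python
from statistics import mean, stdev

TRUE_MASS = 20.0  # kg

scale_a = [17.4, 17.0, 17.3, 17.1]  # precise but not accurate
scale_b = [19.8, 20.5, 21.0, 19.6]  # more accurate but less precise

for name, readings in [("scale A", scale_a), ("scale B", scale_b)]:
    bias = mean(readings) - TRUE_MASS  # accuracy: offset from the true value
    spread = stdev(readings)           # precision: scatter among the readings
    print(f"{name}: bias = {bias:+.2f} kg, spread = {spread:.2f} kg")
```

Scale A shows a large bias but a small spread; scale B shows a small bias but a larger spread, matching the verbal description above.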

Difference between Accuracy and Precision

Having discussed what each term means in the previous sections, let us now look at their differences.

  • Accuracy refers to the level of agreement between the actual measurement and the true value, whereas precision refers to the level of variation among several measurements of the same quantity.
  • Accuracy represents how closely a result agrees with the standard value, whereas precision represents how closely individual results agree with one another.
  • A single measurement is enough to comment on accuracy, whereas multiple measurements are needed to comment on precision.
  • A measurement can be accurate on occasion as a fluke; for a measurement to be consistently accurate, it must also be precise. Results can be precise without being accurate, or both precise and accurate.

ERRORS IN MEASUREMENT

Every measurement carries a level of uncertainty, which is known as error. An error may arise in the measurement process or from a mistake in the experiment, so no method of measurement can be 100% accurate.

An error may be defined as the difference between the measured value and the actual value. For example, if two operators use the same instrument to measure the same quantity, it is not necessary that both get the same result; the difference between their measurements is referred to as an error.

To understand measurement errors, you should know the two terms that define them: true value and measured value. The true value is impossible to find by experimental means; it may be defined as the average of an infinite number of measured values. The measured value is a single reading of the quantity, taken as accurately as possible.

Types of Errors

Errors are classified into three types based on the source they arise from:

  • Gross Errors
  • Random Errors
  • Systematic Errors

Gross Errors

This category covers human oversight and other mistakes made while reading, recording, and calculating measurement results. The most common human errors in measurement fall under this category. For example, the person taking a reading from the meter of an instrument may read 23 as 28. Gross errors can be minimized by two measures:

  • Proper care should be taken in reading and recording the data, and the calculation of error should be done accurately.
  • Increasing the number of experimenters can reduce gross errors: if each experimenter takes readings at different points, then averaging the larger set of readings dilutes any single misreading, as the sketch after this list illustrates.
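
As a rough illustration of the second point, here is a minimal Python sketch with made-up readings, in which one observer misreads 23 as 28; averaging over many careful readings dilutes the single gross error:

```python
from statistics import mean

# Nine careful readings plus one gross misreading (23 read as 28)
readings = [23.0, 23.1, 22.9, 23.0, 23.2, 22.8, 23.1, 23.0, 23.1, 28.0]

print(f"worst single reading: {max(readings)}")       # 28.0, the gross error
print(f"average of all:       {mean(readings):.1f}")  # stays close to the true ~23
```

A robust statistic such as the median suppresses such a single outlier even more strongly than the mean.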

Random Errors

Random errors occur irregularly and are hence random. They can arise from unpredictable fluctuations in experimental conditions (for example, fluctuations in temperature, voltage supply, or mechanical vibrations of the experimental set-up) or from variations in how the observer takes readings. For example, when the same person repeats the same observation, he is likely to get slightly different readings every time.
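
The following minimal Python sketch models such fluctuations as Gaussian noise around a true value (the noise level is an arbitrary assumption); each simulated reading differs slightly, while their average stays close to the true value:

```python
import random

random.seed(1)  # fixed seed so the example is reproducible
TRUE_VALUE = 50.0

# Model each reading as the true value plus an unpredictable fluctuation
readings = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(10)]

print([round(r, 2) for r in readings])                  # every reading differs slightly
print(f"average: {sum(readings) / len(readings):.2f}")  # clusters near the true value 50.0
```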


Systematic Errors

Systematic errors are reproducible errors that shift every reading in the same direction. They can be better understood if we divide them into subgroups; they are:

  • Environmental Errors
  • Observational Errors
  • Instrumental Errors

Environmental Errors: These errors arise due to the effect of external conditions on the measurement. External conditions include temperature, pressure, and humidity, and can also include an external magnetic field. For example, if you are measuring your body temperature under the armpit and, midway through the measurement, the electricity goes out and the room gets hot, the changed surroundings will affect your body temperature and hence the reading.

Observational Errors: These errors arise due to an individual's bias, improper setting of the apparatus, or carelessness in taking observations. They also include wrong readings due to parallax error.

Instrumental Errors: These errors arise due to faulty construction or calibration of the measuring instrument, for example from hysteresis in the equipment or from friction. Often the equipment is faulty due to misuse or neglect, which changes its readings. A very common type is the zero error, found in devices such as Vernier callipers and screw gauges; it can be either positive or negative, and it can be corrected for, as illustrated after the list below. Sometimes the scale markings are worn off, which can also lead to bad readings.

Instrumental errors take place due to:

  • Inherent constraints of the device
  • Misuse of the apparatus
  • Loading effects
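
A common correction for one such instrumental error, the zero error, is to subtract it from the observed reading. The sketch below is a minimal illustration; the sign convention (positive zero error when the closed instrument reads above zero) and the numbers are assumptions for the example:

```python
def correct_reading(observed, zero_error):
    """Corrected reading = observed reading - zero error.

    A positive zero error (instrument reads above zero when closed) inflates
    readings; a negative zero error deflates them.
    """
    return observed - zero_error

# Vernier calliper that reads +0.02 cm with its jaws fully closed
print(f"{correct_reading(2.35, +0.02):.2f} cm")  # 2.33 cm
# Screw gauge that reads -0.01 mm when closed
print(f"{correct_reading(5.47, -0.01):.2f} mm")  # 5.48 mm
```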

Error Calculation

Different measures of errors include:

Absolute Error

The difference between the measured value of a quantity and its actual value gives the absolute error. It is given by

Absolute error = |VA − VE|

where VA is the actual value and VE is the measured (expected) value.

Percent Error

It is another way of expressing the error in measurement, and it allows us to gauge how accurate a measured value is with respect to the true value. Percent error is the relative error (defined below) expressed as a percentage:

Percentage error (%) = (|VA − VE| / VA) x 100

Relative Error

The ratio of the absolute error to the actual value gives the relative error. It is given by the formula:

Relative error = Absolute error / Actual value = |VA − VE| / VA
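
The three measures can be computed together, as in this minimal Python sketch; the 20 kg example values are ours:

```python
def absolute_error(actual, measured):
    return abs(actual - measured)

def relative_error(actual, measured):
    return absolute_error(actual, measured) / actual

def percent_error(actual, measured):
    return relative_error(actual, measured) * 100

# A body of true mass 20.0 kg measured as 19.8 kg
actual, measured = 20.0, 19.8
print(f"absolute error: {absolute_error(actual, measured):.2f} kg")  # 0.20 kg
print(f"relative error: {relative_error(actual, measured):.3f}")     # 0.010
print(f"percent error:  {percent_error(actual, measured):.1f} %")    # 1.0 %
```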

How to Reduce Errors in Measurement

Keeping an eye on the procedure and following the points listed below can help reduce errors.

  • Make sure the formulas used for measurement are correct.
  • Cross-check the measured value of a quantity for improved accuracy.
  • Use the instrument that has the highest precision.
  • Pilot test measuring instruments for better accuracy.
  • Use multiple measurements of the same quantity.
  • Record the measurements under controlled conditions.