Analytical Chemistry

Absolute and Relative Error 

Absolute error

The absolute error of a measurement is the difference between the measured value and the true (accepted) value.

E = Xi – Xt

where E = absolute error, Xi = measured value, and Xt = true or accepted value.

The sign of the absolute error tells whether the result is lower or higher than the true value: a negative sign indicates the measured value is low, while a positive sign indicates it is high.

Relative error

Relative error is often a more useful quantity than absolute error because it expresses the error in proportion to the size of the quantity being measured, and so indicates how significant the error is for that measurement. Relative error is usually expressed as a percentage or in parts per thousand (ppt).

If the relative error is expressed as a percentage, the expression is

Er = (Xi – Xt) / Xt × 100 %

If the relative error is expressed in parts per thousand (ppt), the expression is

Er = (Xi – Xt) / Xt × 1000 ppt
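
As a quick illustration of these definitions, here is a minimal Python sketch; the accepted and measured values are invented for the example.

```python
# Hypothetical illustration: absolute and relative error.
# Xt is the accepted (true) value, Xi a measured value.

def absolute_error(xi: float, xt: float) -> float:
    """E = Xi - Xt (negative: result is low; positive: result is high)."""
    return xi - xt

def relative_error_percent(xi: float, xt: float) -> float:
    """Er = (Xi - Xt) / Xt * 100 (%)."""
    return (xi - xt) / xt * 100

def relative_error_ppt(xi: float, xt: float) -> float:
    """Er = (Xi - Xt) / Xt * 1000 (parts per thousand)."""
    return (xi - xt) / xt * 1000

xt = 20.00   # accepted value, e.g. 20.00 mg of analyte (illustrative)
xi = 19.78   # one measured value (illustrative)

print(absolute_error(xi, xt))          # -0.22  -> result is low
print(relative_error_percent(xi, xt))  # -1.1 %
print(relative_error_ppt(xi, xt))      # -11 ppt
```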

Accuracy

Accuracy indicates the closeness of a measurement to the true or accepted value and is expressed in terms of absolute or relative error. It measures the agreement between a result and the accepted value. Accuracy is often difficult to determine because the true value is never known exactly; the accepted value must be used in its place.

Comparisons of accuracy are therefore made using error, an inverse measure of accuracy: the smaller the measured error, the greater the accuracy.

Accuracy has been classified into three types:

  • Point Accuracy: The instrument’s accuracy applies only at the specific point on its scale where it was checked; it says nothing about the instrument’s overall accuracy. Mass measurement is a good illustration of point accuracy.
  • Accuracy as Percentage of True Value: The instrument’s accuracy is evaluated by comparing the measured value with the true value, and the deviation is expressed as a percentage of the true value; deviations of up to about 0.5 percent of the true value are typically neglected.
  • Accuracy as Percentage of Scale Range: The accuracy of a measurement is expressed as a percentage of the instrument’s full scale range.

Precision

Precision describes the reproducibility of measurements: the closeness of results that have been obtained in exactly the same way. The precision of a measurement is easily assessed by simply repeating the measurement on replicate samples.

Precision is generally quantified by the standard deviation, the variance, and the coefficient of variation, all of which are functions of the deviations from the mean. It can also be expressed in terms of the range of the data. Precision says nothing about closeness to the true value.
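
The following short Python sketch shows how these precision measures are computed from replicate results; the numbers are illustrative only.

```python
import statistics

# Hypothetical replicate results (e.g., % analyte) -- illustrative numbers only.
replicates = [20.10, 20.05, 20.20, 20.08, 20.12]

mean = statistics.mean(replicates)
s = statistics.stdev(replicates)         # sample standard deviation
variance = statistics.variance(replicates)
cv = s / mean * 100                      # coefficient of variation, %
data_range = max(replicates) - min(replicates)

print(f"mean = {mean:.3f}")
print(f"standard deviation = {s:.4f}")
print(f"variance = {variance:.6f}")
print(f"coefficient of variation = {cv:.2f} %")
print(f"range = {data_range:.2f}")
```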

Precision includes two important terms:

  • Repeatability: The fluctuation that results from taking multiple measurements under the same circumstances in a short period of time.
  • Reproducibility: The variation that appears when the same measurement technique is employed with different instruments and operators, and over extended periods of time.

A highly precise value may nevertheless be inaccurate, and an accurate value may not be precise. A determinate error, which leads to inaccuracy, may or may not affect precision, depending on how nearly it remains constant throughout a series of experiments. Accuracy and precision of data are therefore not indicative of each other.

Accuracy and Precision Examples

To get a feel for accuracy and precision, imagine a football player aiming at the goal. The player is accurate if his shot finds the net. A player who repeatedly strikes the same goalpost is precise but not accurate. A player who strikes the ball all over the place but still scores is accurate without being precise. A precise player will consistently strike the ball toward the same spot, whether or not he or she scores. A player who is both precise and accurate will aim at a specific spot and score the goal.

Difference between Accuracy and Precision

| Accuracy | Precision |
| --- | --- |
| Accuracy defines the closeness of a measurement to the true or accepted value. | Precision defines the reproducibility of measurements. |
| Measurement can be accurate but not necessarily precise. | Measurement can be precise but not necessarily accurate. |
| Indicates how closely the results match the reference value. | Indicates how closely results agree with one another. |
| Can be calculated using just one measurement. | Requires a number of measurements to be calculated. |
| May be affected by determinate error. | May be affected by indeterminate error. |

Atomic Absorption Spectroscopy

Atomic absorption spectroscopy (AAS) is an absorption spectroscopic method that uses the absorption of light by free atoms in a gaseous state to determine the quantitative composition of chemical components. It is used to determine the concentration of metals present in a sample to be analyzed.

Principle of atomic absorption spectroscopy

If a solution containing a metal salt (M+X-) is aspirated into a flame, a vapor containing free metal atoms is formed. A large number of these gaseous metal atoms remain in the ground state and are capable of absorbing radiant energy at their own specific wavelength. If light of the resonance wavelength is passed through the flame containing the analyte atoms, part of the light will be absorbed, and the extent of absorption will be directly proportional to the number of ground-state atoms present in the flame.

The process by which gaseous metal atoms are produced in the flame can be summarized as:

M+X- (solution) → MX (solid, after nebulization and desolvation) → MX (gas, after vaporization) → M + X (free atoms, after atomization)

When the metal atoms are in the gas phase and light from the source is passed through them, ground-state atoms are excited by absorbing radiation of a particular wavelength. The absorbance is given by Beer-Lambert’s law: A = log (I0/I), the logarithm of the ratio of the intensity of the incident light to that of the transmitted light, which is proportional to the concentration of the absorbing species.
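
A minimal numerical sketch of this relationship, with invented intensities and an assumed calibration slope (in practice, AAS quantitation relies on calibration against standards):

```python
import math

# Beer-Lambert's law as used in AAS: A = log10(I0 / I).
# All numbers below are illustrative, not from the text.
I0 = 100.0   # intensity of incident light (arbitrary units)
I = 45.0     # intensity of transmitted light

A = math.log10(I0 / I)
print(f"absorbance A = {A:.3f}")   # ~0.347

# With an assumed linear calibration A = m * c (slope m from standards),
# the concentration follows directly:
m = 0.070    # assumed calibration slope, absorbance per (mg/L)
c = A / m
print(f"estimated concentration = {c:.2f} mg/L")
```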

Atomic absorption spectroscopy instrumentation

In block-diagram form, the instrumentation of atomic absorption spectroscopy can be summarized as:

radiation source → atomizer → monochromator → detector → amplifier → read-out device

Atomizers: In order to analyze a sample for its atomic constituents, the element has to be atomized. The atomizers most commonly used nowadays are flames and electrothermal (graphite tube) atomizers. Depending on the atomizer, AAS is of two types: flame AAS and electrothermal (graphite furnace) AAS.

  • Flame-AAS: In a flame atomizer, a solution of the sample is nebulized (sprayed) by a gaseous mixture of an oxidant and a fuel and carried into a flame where atomization occurs.
  • Electrothermal or graphite furnace atomizer: In graphite-furnace AAS, atomization takes place in an open-ended graphite tube. The sample is atomized as the tube’s temperature rises. Radiation enters the tube from one end and is partially absorbed by the analyte atoms; the detector at the other end measures the absorbed fraction.

Radiation source: A hollow cathode lamp is the most widely used radiation source. Such a lamp consists of a cylindrical metal cathode containing (or coated with) the element of interest and an anode, sealed in a tube filled with Ar or Ne gas at low pressure. A high voltage applied across the cathode and anode ionizes the fill gas. The gas ions are propelled toward the cathode, and the sputtered cathode material is excited in the glow discharge, emitting the characteristic radiation of the target element. Other commonly used radiation sources are electrodeless discharge lamps (EDL) and deuterium lamps (DL).

Monochromator: To isolate the radiation characteristic of the element of interest from the other radiation emitted by the source, the radiation is passed through a monochromator, which uses either a prism or a diffraction grating.

Detector: The monochromator directs the chosen light onto the detector, commonly a photomultiplier tube, which transforms the light signal into an electrical signal proportional to the light intensity.

Amplifier: The amplifier boosts the electrical signal from the detector so that it can be processed and displayed.

Read-out device: The output from the detector is displayed on a read-out device such as a meter or digital display, which can show the absorbance at a particular wavelength as well as the absorption spectrum.

Application of atomic absorption spectroscopy

Some of the major applications of atomic absorption spectroscopy are:

  • Analysis of magnesium and calcium in tap water (water analysis).
  • Determination of amount of trace elements in contaminated soil (soil analysis).
  • Clinical analysis: Estimation of metals in biological fluids such as blood, urine, serum, etc.
  • Environmental analysis
  • Trace elements analysis in foods, cosmetics, hair, etc.
  • Mining: The amount of metal such as gold can be determined in rocks.
  • Pharmaceuticals: Catalysts used during manufacture may remain in trace amounts in the final product, and AAS is used to determine the amount of catalyst present.

Advantages of atomic absorption spectroscopy

  • Easy to use
  • Cheap
  • Rapid
  • Greater sensitivity
  • Efficient atomization
  • Even small quantities of the sample can be analyzed (5-50μL).
  • Samples in solid, slurry, or solution form can be analyzed

Disadvantages of atomic absorption spectroscopy

  • Fails to detect non-metals
  • Simultaneous analysis of elements is not possible
  • Able to detect only about 70 elements, excluding the rare earth metals

Complexometric titration

Complexometric titration is a volumetric analysis that involves the formation of a soluble but slightly dissociated complex or complex ion. The formation of a colored complex determines the endpoint of the titration. Metal ions in solution are estimated by complexometric titration: a titration between the metal ion and a complexing agent in the presence of a suitable indicator. The indicator is itself a complexing agent, but it should form a less stable complex with the metal ion and should impart distinct colors in its complexed and free forms. Examples of complexometric indicators include calcein, Eriochrome Black T, curcumin, hematoxylin, and fast sulphon black.

The reaction should reach equilibrium rapidly after each drop of titrant is added, and there should be no interfering side reactions. Under these conditions, the equivalence point of a complexometric titration can be located with high accuracy.

EDTA is the most commonly used chelating agent in complexometric titrations. These titrations are very sensitive to pH and metal ion indicators are used to detect the endpoint. Most of the EDTA titrations belong to complexometric titrations. 

EDTA Complexometric Titration

EDTA is a common organic reagent used in complexometric titration. It is a hexadentate chelating ligand (a powerful complexing agent) that coordinates to metal ions through its two nitrogen atoms and four carboxylate groups.

In solution, its reacting form is commonly written H2Y2-. It reacts with most metals in a 1:1 ratio: one mole of H2Y2- reacts in every case with one mole of the metal ion, and in each case two moles of hydrogen ions are also liberated:

Mn+  +  H2Y2-  ↔  (MY)(n-4)+  +  2H+

M-EDTA complexes with metal ions of charge number +2 are stable in alkaline solution (e.g., pH 8-10: Ca2+, Sr2+, Ba2+, Mg2+) or in slightly acidic solution (pH 4-6: Pb2+, Zn2+, Co2+, Ni2+, Mn2+, Fe2+, Cd2+, Sn2+), while complexes with metal ions of charge +3 or +4 (e.g., pH 1-3: Zr4+, Hf4+, Th4+, Bi3+, Fe3+) may exist in solutions of much higher acidity.


The stability of the complex is characterized by the stability constant (or formation constant), K:

Mn+  +  Y4- ↔  (MY)(n-4)+

K  =  [(MY)(n-4)+] / ([Mn+] [Y4-])    (usually expressed as log K)
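
As a hedged numerical illustration of how the magnitude of K governs the completeness of complexation at the equivalence point; the log K and concentration below are illustrative values, not taken from the text:

```python
import math

# At the equivalence point of M + Y -> MY, the free metal and free ligand
# concentrations are equal: [M] = [Y] = x, and [MY] ~= C - x, so
#     K = (C - x) / x**2   =>   K*x**2 + x - C = 0.
# Solve the quadratic for x (the positive root).
logK = 10.7          # illustrative order of magnitude (conditional values vary with pH)
K = 10 ** logK
C = 0.010            # analytical concentration of the complex at equivalence, mol/L

x = (-1 + math.sqrt(1 + 4 * K * C)) / (2 * K)
print(f"free [M] = {x:.2e} mol/L")
print(f"fraction complexed = {(C - x) / C:.6f}")   # ~0.99996 -> essentially complete
```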

Types of complexometric titration

  1. Direct Titration: The solution containing the metal ion is titrated directly with standard EDTA solution, much like an acid-base titration. The standard EDTA solution is added from the burette to the metal-containing sample until the endpoint is reached. At the equivalence point, the concentration of the metal ion being determined decreases abruptly, which is detected by the color change of a metal ion indicator. Metals such as copper, zinc, barium, mercury, aluminum, lead, chromium, and bismuth can be determined by direct complexometric titration.
  2. Back Titration: Many metals cannot be titrated directly for various reasons: the color change may not be distinguishable, a suitable indicator may be lacking, or the reaction between the metal and EDTA may be slow. In such cases, an excess of standard EDTA is added and the resulting solution is buffered to the desired pH; the excess EDTA is then back-titrated with a standard metal-ion solution (e.g., Mg2+ solution).
  3. Substitution or Replacement Titration: This type of titration is used for metal ions that do not react with a metal ion indicator, or that form EDTA complexes more stable than those of other metals such as Mg and Ca. A solution containing the Mg-EDTA complex is added, and the metal ion displaces Mg from the complex. For example, in the direct titration of calcium ions, solochrome black T gives a poor endpoint; if Mg is present, it is displaced from its EDTA complex by calcium and a sharp endpoint is obtained.
  4. Indirect Titration: Some anions form precipitates with metal cations but do not react with EDTA. Such anions can be analyzed by indirect titration with EDTA.

Metal ion indicators

A metal ion indicator is used for the precise determination of the endpoint in complexometric titration. For visual detection of the endpoint, a metal ion indicator should satisfy the following criteria:

  • To ensure that the color change happens as close to the equivalence point as feasible, the indicators must be extremely sensitive to metal ions.
  • The color reaction should be specific or at least selective.
  • Before the endpoint, when nearly all of the metal ion has complexed with EDTA, the color response must be such that the solution is intensely colored.
  • The metal-indicator complex must possess sufficient stability, but the metal-indicator complex must be less stable than the metal-EDTA complex to ensure that, at the endpoint, EDTA removes metal ions from the metal-indicator complex. The change in equilibrium from the metal indicator complex to the metal-EDTA complex should be sharp and rapid.
  • It should be simple to see the color contrast between the metal indicator complex and the free indicator.
  • All these criteria must be fulfilled within the pH range at which titration is performed.

Examples of Metal ions indicators

Some examples of metal ions indicators are:

  • Patton & Reeder’s Indicator (12-14 pH) (Ca)
  • Murexide (10 – 11 pH) (Cu, Ni, Co, Ca)
  • Solochrome black (10 pH) (Mg, Mn, Zn, Cd…etc)
  • Xylenol orange (4-6 pH) (Ca, Pb, Ni…)
  • Xylenol orange (1-2 pH) (Bi, Zn, Co…)
  • Methyl thymol blue (0-2 pH) (Th, Zr, Hf, Zn, Co…etc)

Uses of EDTA in inorganic analysis

  • Determination of water hardness (a worked numerical sketch follows this list).
  • For the analysis of more than 40 cations like Al, Fe, Cu, Zn, Cr, Mg, Bi, Pb, rare earth, etc
  • Determination of anions: eg. arsenate, chromate, fluoride, pyrophosphate, sulfate, etc.
  • It can be used in the analysis of technological materials such as alloys, cement, coins, mineral oils, petrol, many ores, and rocks.
  • EDTA as masking or sequestering reagent
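
Below is a hedged sketch of the water-hardness determination mentioned above. The titration data (sample volume, EDTA molarity, titrant volume) are invented for illustration; hardness is conventionally reported as mg/L CaCO3, and EDTA reacts 1:1 with Ca2+ and Mg2+.

```python
# Hypothetical water-hardness calculation from an EDTA titration.
# Assumed (illustrative) data: 50.00 mL water sample titrated at pH 10
# (Eriochrome Black T indicator) requires 12.40 mL of 0.01000 M EDTA.
M_EDTA = 0.01000          # mol/L, standard EDTA
V_EDTA = 12.40 / 1000     # L of titrant used at the endpoint
V_sample = 50.00 / 1000   # L of water titrated
MM_CACO3 = 100.09         # g/mol, molar mass of CaCO3

# EDTA reacts 1:1 with Ca2+ and Mg2+, so moles EDTA = moles of hardness ions.
mol_hardness = M_EDTA * V_EDTA
hardness_mg_per_L = mol_hardness * MM_CACO3 * 1000 / V_sample  # mg CaCO3 per L

print(f"total hardness = {hardness_mg_per_L:.0f} mg/L as CaCO3")  # ~248 mg/L
```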

Application of Complexometric Titration

  • The hardness of water is estimated by complexometric titration.
  • To determine the metal content in medicines, it is commonly employed in the pharmaceutical industry.
  • Numerous cosmetic products contain titanium dioxide. By using complexometric titration, this can be analyzed.
  • Analysis of urine samples.
  • It has many applications in analytical chemistry.

Errors in Chemical Analysis: Determinate and Indeterminate Errors

Errors in chemical analysis are simply defined as the difference between a measured value and the true value. The error denotes the estimated uncertainty in a measurement or experiment.

Types of Errors in chemical analysis

Errors are mainly of three types in chemical analysis:

  • Random error (Indeterminate error)
  • Systematic error (Determinate error)
  • Gross error

1. Determinate errors

Determinate or systematic errors are those that have a definite value and an assignable cause. For repeated measurements carried out in the same manner, these errors are consistently the same. Systematic errors introduce bias into the outcome of a measurement and affect the accuracy of the results. Because they are reproducible, they can be identified and corrected.

Bias measures the systematic error associated with an analysis. It carries a negative sign if it causes the results to be low and a positive sign if the results are high.

Types of determinate error (systematic error)

Personal error: Analytical measurements often call for personal judgment, for example, estimating the position of a pointer between two scale divisions, judging the color of a solution at the endpoint, or reading the level of liquid against the graduation mark of a pipette or burette. The main source of personal error is prejudice or bias: humans tend to estimate scale readings in a way that improves the precision of a set of results, or to cause the results to fall closer to a preconceived idea of the true value. Number bias is another important source of personal error. Color blindness can also cause personal error in volumetric analysis.

Operational error: Errors that arise from the analyst’s manner of performing the operations rather than from the method or instrument, such as loss of precipitate during transfer or incomplete drying of a sample.

Instrumental or reagent error: Measuring tools also carry a certain amount of determinate error. Burettes, pipettes, and volumetric flasks, for instance, always deliver slightly different volumes from what the scale indicates. These inconsistencies result mainly from using the glassware at a temperature different from the calibration temperature, from distortion of the container walls caused by heating the glassware to dry it, and from contamination of the interior surface. Electronic devices may also develop errors from heavy use and low battery voltage, from failure to calibrate the instrument accurately and regularly, and from temperature fluctuations that affect electronic components.

Methodic error: The non-ideal chemical and physical behavior of the reagents and reactions on which an analysis is based can introduce methodic errors. Slowness of the reaction, incompleteness of the reaction, instability of chemical species, and the possible occurrence of side reactions can all interfere with the measurement process. For example, in volumetric analysis a small excess of reagent is needed to change the color of the indicator and signal the completion of the reaction. Errors associated with the method are often very difficult to detect and are the most serious of all types of systematic error.

Constant or proportional error: Constant determinate errors are independent of the size of the sample analyzed. When a constant error is present, the relative error changes as the sample size is altered, but the absolute error remains constant. As the size of the quantity being measured shrinks, the effect of a constant error becomes more pronounced. Proportional errors, in contrast, decrease or increase in proportion to the size of the sample. The presence of interfering impurities in the sample is the main cause of proportional error. For instance, iodine is released while estimating Cu2+ with potassium iodide; if the sample is contaminated with Fe3+ ions, I2 is also produced from KI by the Fe3+, causing an error in the estimation of Cu2+. If the sample size is doubled, the amount of Fe3+ also doubles, and the error doubles with it. Constant errors are also referred to as additive errors because their absolute value is fixed, whereas proportional errors depend on the sample size.
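
A small numerical sketch (made-up values) of how constant and proportional errors behave as the sample size changes:

```python
# Constant vs. proportional error -- illustrative numbers only.
# A constant error (e.g., 0.50 mg lost in every determination) has a fixed
# absolute value, so its *relative* effect grows as the sample shrinks.
# A proportional error (e.g., a 1% interferent) scales with sample size,
# so its relative effect stays constant.

constant_loss_mg = 0.50      # fixed absolute error per determination
interferent_fraction = 0.01  # proportional error: 1% of the sample

for sample_mg in (500.0, 250.0, 50.0):
    rel_constant = constant_loss_mg / sample_mg * 100
    rel_proportional = interferent_fraction * 100  # always 1%
    print(f"sample {sample_mg:6.1f} mg: constant error = {rel_constant:.2f} %, "
          f"proportional error = {rel_proportional:.2f} %")
```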

2. Indeterminate errors (Random error)

Data are distributed more or less symmetrically around the mean value as a result of the indeterminate error. The precision of measurement reflects the random error. Hence, measurement precision is impacted by random or indeterminate errors.

Indeterminate (random) errors arise from unavoidable or unknown fluctuations that affect the results of experiments. Uncertainties in measurements lead to random or indeterminate errors.

Errors of this kind, random or indeterminate, always exist in measurements and can never be completely eliminated. They are a significant contributor to the uncertainty of an analytical result. Most causes of random error are impossible to pinpoint, and because the individual contributions are so small, they cannot be quantified even when their sources are known. The combined effect of all of them, however, produces random scatter in the measured values.

3. Gross errors

This kind of error produces results that are either much too high or much too low; gross errors are the result of human mistakes. They frequently produce outliers: results that appear to differ significantly from all the other data in a set of replicate measurements.

Minimization of errors

Indeterminate errors are beyond the control of the analyst; determinate errors, however, can be minimized. Some of the common methods that can be employed for the minimization of errors are:

Calibration of apparatus: Calibrating the instrument is one method of reducing error. Instruments should be calibrated regularly because their response can drift as a result of wear, corrosion, or overuse. Calibration can be done by (i) comparison with a standard, and (ii) external standard calibration.

Blank determination: To reduce errors caused by reagent impurities, a determination is performed under identical conditions but without the sample.

Independent method of analysis: In this approach, the method under evaluation is used in parallel with a reliable, independent analytical method. The independent method should differ as much as possible from the method under study; this minimizes the possibility that some common factor in the sample affects both methods in the same way. A statistical test can then be used to determine whether a difference between the two results is due to bias in the method under study or merely to indeterminate errors in the two methods.

Control determination: To reduce errors, a standard material is employed in experiments under identical experimental conditions.

Dilution method: The dilution method is a vital technique for lowering interference errors. In this procedure, dilution is carried out in a way that interferent species have little to no impact below a specific concentration. This approach requires extreme caution because the sample is diluted, and the dilution may change how the sample’s analyte is measured.

Standard addition: A known amount of a standard solution of the analyte is added to one portion of the sample. The responses before and after the addition are measured and used to obtain the analyte concentration, as in the sketch below. The standard addition method assumes a linear relationship between response and analyte concentration.
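
A minimal sketch of the single-addition calculation, assuming a linear response and neglecting dilution by the spike; all values are illustrative:

```python
# Hypothetical single standard addition (illustrative numbers).
# Measure the sample response, then the response after spiking a known
# concentration of analyte; assume response = k * c and negligible dilution.
R_sample = 0.220      # instrument response of the sample alone
R_spiked = 0.380      # response after adding the standard
c_added = 5.00        # concentration added by the spike, mg/L

# R_sample = k * c_x  and  R_spiked = k * (c_x + c_added), so:
c_x = R_sample * c_added / (R_spiked - R_sample)
print(f"analyte concentration = {c_x:.2f} mg/L")   # ~6.88 mg/L
```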

Internal standard method: A known amount of reference species is added to all the samples, standard, and blank. The response signal is obtained as the ratio of the analyte signal to the reference species signal. It is utilized in chromatographic and spectroscopic analysis.

Parallel determination: To reduce the likelihood of accidental errors, duplicate or triplicate determinations are performed instead of a single determination.