Near Infrared (NIR) Spectroscopy - The rapid analysis technique of the future
Published: January 6, 2006
By: Tokkie Groenewald (Managing Director, Labworld (Pty) Ltd) and Dr Hinner Köster (Managing Director, Scinetic)
A new buzzword is rapidly spreading through the manufacturing and analytical environment. Discussions on the application of Near Infrared (NIR) Spectroscopy are often found on the agenda of production, project, technical and even board meetings. Engineers and chemists are confronted ever more frequently with the challenge of deciding whether a NIR instrument will be the solution to their production quality control and other analytical problems. Are there really amazing machines that can determine protein, fat, moisture, fibre, starch, nicotine, alcohol, sugar, amino acids and whatever other analyses you require, on any product from foodstuffs to fertilizer and animal feeds, in less than a minute, and that are so simple to operate that even an unskilled labourer can handle them? In this article we endeavour to answer the questions that often crop up in the evaluation of NIR as an analytical technique.
Theory of NIR Spectroscopy
The word “spectroscopy” is derived from the Latin root spectrum (appearance, image) and the Greek word skopia (to view). This definition is rather descriptive of the spectroscopic measurement itself, i.e. viewing a light image coming from a specimen (Miller, 2001).
In essence, NIR technology involves light interacting with matter, where electromagnetic radiation occurs in the form of waves. The wavelength of a wave is the distance between two successive peaks or high points and is indicated by the symbol λ (Shadow, 2000; Figure 1). A wavelength in the NIR spectrum is normally measured in nanometres (nm), where
1 nm = 10⁻⁹ m, or 1 000 nm = 0.001 mm.
That part of the spectrum visible to the human eye extends from about 400 nm to 800 nm, while the infrared spectrum extends from about 2 500 nm to 25 000 nm. Near infrared is considered to be that part of the spectrum lying between the visible region and the infrared region. The range of wavelengths covered by NIR is from 750 nm to 2 600 nm (Figure 2).
Figure 1: A typical wave propagating through space.
Figure 2: The electromagnetic spectrum.
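Using the approximate wavelength ranges quoted above (the region boundaries overlap slightly in the literature), the regions can be sketched as a simple classifier; the cut-off values below follow this article and are indicative only:

```python
def spectral_region(wavelength_nm):
    """Classify a wavelength (in nm) using the approximate ranges
    quoted in the text; boundary values are indicative only."""
    if wavelength_nm < 400:
        return "ultraviolet"
    if wavelength_nm < 750:
        return "visible"
    if wavelength_nm <= 2600:
        return "near infrared"
    if wavelength_nm <= 25000:
        return "infrared"
    return "beyond the infrared"

# 1 nm = 1e-9 m, so 1000 nm = 0.001 mm
print(spectral_region(550))    # → visible (green light)
print(spectral_region(1100))   # → near infrared
```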
Molecules are groups of atoms that have combined to form chemical compounds. For example, methane contains one carbon (C) atom and four hydrogen (H) atoms. Specific bonds between the atoms vibrate at a certain frequency, and each type of chemical bond within a sample will absorb NIR rays of a specific wavelength while all other wavelengths are reflected (Figure 3).
Figure 3: The vibrating bond between carbon (C) and hydrogen (H) atoms absorbs NIR waves of a particular wavelength and reflects all other waves.
NIR Reflectance vs NIR Transmission
In practice, the sample to be analysed is bombarded with NIR rays of different wavelengths, as illustrated in Figure 4. At each wavelength some of the rays are absorbed by specific chemical bonds, while other rays are scattered and reflected by other chemical bonds. This process is commonly described as NIR Reflectance. In contrast, some of the rays may pass through the sample, which is then described as NIR Transmission (often referred to as NIT).
Figure 4: NIR rays absorbed by some bonds, reflected by other bonds (NIR Reflectance) or transmitted through a sample (NIR Transmission generally termed NIT).
The scattered, reflected and/or transmitted rays of each wavelength are concentrated onto a measuring cell. Reflections at a number of different wavelengths are measured and then converted to analytical results by a microprocessor.
The term NIR Reflectance is often misunderstood. The rays are not merely reflected from the outside surface, but actually penetrate the sample. Each time the rays encounter a chemical bond that does not absorb the particular wavelength, they are scattered and reflected in all directions. These scattered beams may then be absorbed or reflected by other chemical bonds, until a portion of the rays eventually exits the sample in all directions (Figure 5). The depth to which the beam penetrates the sample is determined not by the position of the detector, but by the strength of the light source.
Figure 5: The penetration of NIR rays in NIR Reflectance and NIR Transmission mode.
NIR Transmission, where the detector is placed behind the sample, is ideal for transparent liquids and some products that are not too optically dense. Products such as sunflower, canola and soil samples are optically so dense that only a very small portion of the rays passes through the sample, making reliable NIT measurements very difficult to obtain. Consequently, for NIT measurement of these products the sample has to be poured into sample cells with a width of only 6 mm. Vastenhoudt (1995) reported that analysis of intact grains by NIR Reflectance is generally accepted to be more accurate than analysis by NIT. Williams and Norris (2001) further indicated that whole-grain NIR analysis by reflectance eliminates path length as a source of error, which suggests that NIR Reflectance rather than NIT should be the recommended measurement principle for such dense products and for mixtures containing them.
In practice, when an analysis is done on a NIR instrument, the operator only has to place the sample in a sample cup and press a button to start the analysis. The full analysis is normally printed in less than a minute. If it is this simple, why then is there a difference of opinion about the applicability and accuracy of NIR results? The instrument is certainly not a magic box that can analyse anything poured into it, but with the correct development and maintenance of calibrations, this exciting technique is rapidly being established as the major rapid analysis method of the future.
Calibrations
The NIR instrument is not “calibrated” like a balance where the readings are merely adjusted up or down to a standard value.
The instrument has to be trained to recognise different products and constituents. This process of “training” is called the calibration procedure, and herein lies the secret of the success of this revolutionary technology.
For the training, a number of samples are analysed by traditional chemical methods to determine their actual composition. Each of these samples is then placed in the NIR instrument and the reflectance values at the different wavelengths are recorded. With the aid of a microcomputer and powerful chemometric software, the combination of analytical results and reflectance values is transformed into the calibration constants. This software is so powerful that great care must be taken that it does not merely present a statistical solution, but actually supplies a scientific solution that can be verified.
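As a highly simplified illustration of this step, the sketch below fits calibration constants by ordinary least squares on synthetic data (all numbers are hypothetical); commercial chemometric packages use techniques such as PLS or PCR, which cope far better with hundreds of collinear wavelengths:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training set: 40 samples, absorbance (log 1/R) at
# 4 NIR wavelengths, plus reference protein values (%) obtained by
# traditional chemical analysis.
X = rng.uniform(0.1, 1.0, size=(40, 4))
true_b = np.array([5.0, 12.0, -3.0, 8.0, 2.0])           # intercept + 4 weights
y = true_b[0] + X @ true_b[1:] + rng.normal(0, 0.1, 40)  # + simulated lab error

# Fit the calibration constants by least squares.
A = np.column_stack([np.ones(len(X)), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_protein(spectrum):
    """Apply the fitted calibration constants to a new spectrum."""
    return b[0] + spectrum @ b[1:]

print(predict_protein(X[0]), y[0])
```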
To develop any new calibration, or even to maintain existing ones, it is important first to physically source an ideal set of samples. For each product, the sourced sample set must include samples representing as much as possible of the variation in analytical and nutrient components that can be expected. Ideally this set should also contain samples representing the natural variation that can occur, including variation in cultivars, growing areas, growing conditions and growing seasons. Dersjant-Li and Peisker (2005) recently emphasised the large variation in nutritional composition between soya samples collected from different countries, or even from different areas within the same country. Once a set of samples covering most of the variation has been sourced, most calibration software programs provide a tool that aids in selecting a subsample set from which to prepare the calibration.
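One widely used approach for that selection step is the Kennard-Stone algorithm, which picks the most spectrally diverse samples from the sourced set; a minimal sketch (the "spectra" here are reduced to illustrative two-dimensional points):

```python
import numpy as np

def kennard_stone(X, k):
    """Select k maximally diverse rows of X (Kennard-Stone): start
    with the two samples farthest apart, then repeatedly add the
    sample whose nearest already-selected neighbour is farthest."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        remaining = [n for n in range(len(X)) if n not in chosen]
        min_d = d[np.ix_(remaining, chosen)].min(axis=1)
        chosen.append(remaining[int(np.argmax(min_d))])
    return chosen

# Five samples; two pairs lie close together, so three picks should
# cover the three distinct "corners" of the data.
points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                   [1.0, 1.0], [9.0, 1.0]])
print(kennard_stone(points, 3))   # → [1, 2, 0]
```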
Furthermore, the universal calibrations often supplied with the purchase of NIR instruments rarely represent a true reflection of samples from local areas and usually need quite a bit of adjustment. This can be done by adding a number of carefully selected samples of a specific local product to the existing calibration data.
Factors that may affect accuracy of analytical results
The reliability of NIR results is normally determined by comparing the results with traditional chemical results. However, most of the factors affecting the accuracy of NIR results also affect the results of traditional chemical analyses. A careful study of all factors affecting accurate and reliable analytical results is therefore essential, both to ensure that realistic values are used for calibration and comparison, and to ensure that the user understands the true value of the results. To demonstrate the effect of the various factors influencing the accuracy of results obtained for specific samples, an extensive study was performed by Mr Gerrie Scholtz and Professor Hentie van der Merwe at the University of the Free State, using more than 600 lucerne samples, as part of a project to develop a new quality grading system for lucerne in South Africa (Scholtz, 2005; unpublished). Some of the preliminary data from this study, together with trials done by Groenewald (2005; unpublished), will be used to illustrate the variation in analytical results that can be caused by a few of the major factors. In the discussion below, the term “variation” expresses the difference between the maximum and the minimum value obtained for each result.
Laboratory and sample variation
A major problem in preparing good calibrations and verifying the effectiveness of NIR applications is obtaining reliable chemical analyses. One misconception is that chemical results from laboratories are absolute and always accurate. Laboratories often supply results expressed to two decimal places, while the variation that occurs within the sample, and during sample preparation and the analytical procedures, may be much larger.
The following analytical results are typical examples received for a lucerne sample that was analysed at a specific laboratory:
The first question that comes up when evaluating these results is what they mean and how they should be interpreted. Furthermore, it is important to know the accuracy of such results when using the absolute value obtained from a specific laboratory. If the overall effect of sample variation, handling variation, sample-preparation variation and analytical variation were understood by the laboratory, it would be more correct to express the results as:
By determining the expected variation and providing it with the analyses, the user will be supplied with a much better understanding of the real value of the result and will be able to take better informed decisions based on such results.
It is surprising how often the variation within samples is not well understood and considered when making decisions based on analytical results. For example, a single sample of a kilogram or even less is usually taken from a large batch of product and sent for analysis. The result of that sample is then accepted as absolute and used to evaluate the whole batch and to adjust production processes or matrix values accordingly.
To demonstrate the effect of both laboratory and sample variation, tests were done on lucerne samples (Groenewald, 2005; unpublished) to determine the type and size of variation that may occur in analytical results. In this study:
• The stem and leaf fractions of the same lucerne sample were analysed separately to compare the normal quality variation between these two fractions.
• Four separate well-mixed unground samples (a – d, Figure 6) were each analysed four times on a NIR instrument (PERTEN DA7200, Perten Instruments AB, Sweden) to determine the repeatability that can be expected with this technology under well controlled conditions.
• Five separate well-mixed unground samples (e – i, Figure 6) were each carefully split into two portions. Each of the ten portions was then sent to the same laboratory at different times for analyses. The variation was then determined between the results of the two duplicate portions of each of the five samples.
• A separate well-mixed sample was carefully split into two portions. Each of the two portions was then sent to a different laboratory for analyses. This exercise was repeated nine times with different samples (j – r, Figure 6), using a number of different recognised laboratories to compare the typical variation experienced between such laboratories.
Analyses were performed on a range of nutrients. The conclusion reached for the different nutrients was basically the same. For the purpose of this article, only the variation in protein values will be illustrated (Figure 6).
Figure 6: Variation experienced in protein analytical results of various lucerne samples.
From this study it was evident that the repeatability of NIR results was at least as good as that of blind duplicates of the same product chemically analysed by the same laboratory. This confirms the statement by Williams and Norris (2001) that the precision of modern NIR instruments is often superior to that of the reference method with which they are compared. In contrast, the variation in the results obtained for the same sample between different laboratories illustrates the problem which often arises if a single reference laboratory is not used. Such variations may be due to different analytical methods, different handling procedures, possible contamination during sample handling and grinding, human error and a host of other factors. It is therefore not a question of which laboratory or method is more correct, but rather of ensuring that the same reference laboratory is used for a particular application, especially when resolving disputes.
Most laboratories pride themselves on their performance in inter-laboratory control schemes, where carefully prepared homogeneous samples are sent to various laboratories. Such control-scheme test samples often receive more careful attention than run-of-the-mill routine samples, and the schemes do not necessarily include the very important aspect of sample preparation. For analyses where repeatability between different laboratories can easily be controlled, a small pool of carefully selected and controlled laboratories could possibly be used. In selecting the correct pool of laboratories, actual samples in their natural state (ideally as variable as can be experienced under normal conditions) should, when distributed, also be exposed to the full sample preparation and grinding procedures. In this way the selected laboratories may be better equipped to report the actual variation that the customer can expect in typical samples, and not only the theoretical variation that the analytical instrument can achieve. However, for more complex analytical applications (e.g. in vitro digestibility, amino acids, NDF, ADF, etc.) where significant variation is known to exist between laboratories, using a single reference laboratory may well be the best solution.
Barton and Keys (2001) did an independent study to further illustrate the large between laboratory errors when sending samples to four well-recognised laboratories. In this example the variation was expressed as the standard deviation (SD), which is calculated by the formula:
SD = {[Σx² − (Σx)²/N] / (N − 1)}½, where N is the number of samples and x is the analytical result of a particular sample.
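In code, the formula reads as follows for a list of replicate results; it is algebraically identical to the ordinary sample standard deviation (the input values below are hypothetical):

```python
import math

def sd(results):
    """Standard deviation per the formula
    SD = {[Σx² - (Σx)²/N] / (N - 1)}^½,
    i.e. the ordinary sample standard deviation."""
    n = len(results)
    sum_x = sum(results)
    sum_x2 = sum(x * x for x in results)
    return math.sqrt((sum_x2 - sum_x ** 2 / n) / (n - 1))

# Hypothetical protein results (%) for one sample from four labs:
print(round(sd([17.1, 16.4, 18.0, 17.5]), 2))   # → 0.68
```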
The result of the study was as follows:
From the discussion above it is further clear that when two parties enter into a supply agreement, or when quality grading regulations (e.g. for lucerne) are formulated, a reference laboratory should be agreed on in advance by all parties concerned. For the feed industry, this should for example be the same laboratory that is used to develop and maintain the information used in computer simulation models and linear programming for the formulation of animal feeds. The precision of the reference method should further be determined by analysing blind duplicate samples at different times. This precision should be noted in the agreement and reported with each analysis.
Sampling variation
Sampling is the single most important source of error in any chemical or physicochemical analysis of agricultural commodities and most food products and ingredients. Ideally the sample must be truly representative of the total population in every sense, including chemical composition, physical constitution and the presence of foreign material (Williams and Norris, 2001). Representative sampling of forages is even more complicated, as care has to be taken to preserve the natural ratio of leaves and stems. In dried lucerne, for instance, the stem/leaf ratio can easily be altered by careless sampling and sample preparation.
As part of his project, Scholtz (2005; unpublished) did an experiment to determine the variation that can occur within the same consignment of lucerne (Figure 7). The consignment consisted of thirty bales and a sample was extracted from each bale with a borer and analysed for protein content.
Figure 7: Variation in protein results obtained from 30 individual lucerne bales and by averaging different numbers of samples from the same consignment.
When only one bale was sampled and used to evaluate the whole consignment, the result could have been anything between 12,8% and 21,3%, depending on which bale was selected. If five bales were selected randomly and their average used to represent the consignment, the result could have been anything between 15,2% and 18,6%. For an average of ten bales the variation was between 16,2% and 18,2%, and if twenty bales were selected as representative of the consignment, the variation would have been between 17,2% and 17,8%, depending on which twenty bales were selected. The samples with the lowest and highest values were re-analysed and the accuracy of the analyses was confirmed. From this experiment it can be concluded that for a 30-bale consignment, at least 20 bales had to be analysed to get an average protein value of acceptable accuracy. Whether this is feasible in practice remains to be determined; however, this sampling variation is a reality and has to be clearly understood when an analysis is performed to determine the nutrient value of a large batch of any specific product. It must also be emphasised that the result obtained for a particular bale does not necessarily represent the average value of that bale: during sampling, a portion consisting mainly of stems or of leaves may have been taken, depending on the bore position and technique. In a further experiment, various samples will be taken from a single bale to determine the variation due to the position and angle of sample boring. The results of such an experiment may well indicate that fewer bales can be sampled, with more samples taken from each bale, to give better representative results from fewer total samples.
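The arithmetic behind those narrowing ranges can be illustrated with a short sketch. The bale values below are hypothetical (the individual results of the 30 bales are not reproduced here), but the principle is the same: the worst-case range of the average shrinks as more bales are sampled.

```python
def mean_range(values, k):
    """Worst-case range of the consignment average when only k
    bales are sampled: the mean of the k lowest values versus the
    mean of the k highest values."""
    s = sorted(values)
    return sum(s[:k]) / k, sum(s[-k:]) / k

# Hypothetical protein values (%) for a 30-bale consignment:
bales = [12.8, 13.5, 14.1, 14.6, 15.0, 15.3, 15.7, 16.0, 16.2, 16.5,
         16.7, 16.9, 17.0, 17.1, 17.2, 17.3, 17.4, 17.5, 17.6, 17.8,
         18.0, 18.2, 18.4, 18.7, 19.0, 19.4, 19.8, 20.2, 20.7, 21.3]

for k in (1, 5, 10, 20):
    lo, hi = mean_range(bales, k)
    print(f"{k:2d} bale(s): average between {lo:.1f}% and {hi:.1f}%")
```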
Variation caused by sample grinding
Grinding is another major cause of variation in analytical (chemical and NIR) results. It is probably the largest cause of contamination between samples, especially in routine operations where large numbers of samples are processed. The mean particle size, the particle size distribution and therefore the diffuse reflectance signal can also be markedly affected by the condition of the grinder used in sample preparation. Grinding is one of the most important phases of analytical work and should only be assigned to competent and conscientious workers (Williams and Norris, 2001). One of the major breakthroughs in modern NIR technology is that instruments are now available that can accurately and effectively analyse samples without prior grinding.
Operating environment
It is important to carefully consider the environmental conditions required by various instruments. Some NIR instruments can operate under factory conditions with vibrations and temperature variations. Other instruments are suited only to laboratory conditions and may require air-conditioned, temperature-controlled and vibration-free operational areas. This may not always be easy to achieve in a manufacturing environment and should therefore be taken into account when instruments are purchased for such environments.
Calibration maintenance
A good working calibration must be checked and updated continually. The main reason for calibration verification and updating is not instrument related, but rather related to sample composition. Calibrations developed with samples from one season only may not be applicable to the crops of following seasons. Changes in environmental conditions, cultivars or a host of other external factors, such as pesticides used, weed control, degree of maturity and moisture variation, do occur (Dersjant-Li and Peisker, 2005) and may affect the reliability of NIR results if they are not included in the training data set. It is therefore necessary to establish proper calibration verification and updating procedures. Samples should regularly be sent for normal chemical analyses to confirm that the calibration is still applicable for that particular product.
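One simple way to formalise that check is to compare NIR predictions against the reference chemistry for the monitoring samples and compute the bias and the standard error of prediction (SEP), two statistics commonly used in NIR validation; a minimal sketch with hypothetical numbers:

```python
import math

def bias_and_sep(nir, reference):
    """Bias (mean NIR-minus-reference difference) and SEP (standard
    deviation of the residuals about that bias). A drifting
    calibration shows up as a growing bias and/or SEP."""
    n = len(nir)
    residuals = [p - r for p, r in zip(nir, reference)]
    bias = sum(residuals) / n
    sep = math.sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))
    return bias, sep

# Hypothetical monitoring set: NIR protein predictions vs chemistry
nir_results = [17.2, 16.8, 18.1, 15.9, 17.5]
lab_results = [17.0, 16.9, 17.8, 16.0, 17.3]
b, s = bias_and_sep(nir_results, lab_results)
print(f"bias = {b:+.2f}, SEP = {s:.2f}")
```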
Conclusion
If managed properly, NIR Spectroscopy is the ideal technology for rapid analyses. In the feed industry it may be used for raw-material quality grading (e.g. lucerne), analysis of ingredients for feed formulation and verification of final product quality. No special skills or sample preparation are required; the instruments are easy to operate and results are immediately available. Calibration preparation, and especially proper calibration maintenance programmes, are however essential to ensure continued reliable results. Finally, to make informed decisions based on any analytical results (NIR as well as traditional chemical analyses), it is important to take the normal variation within a particular product into account, and thereby to know the expected accuracy of the results.
We express our deepest gratitude to AFMA (Animal Feed Manufacturers Association of South Africa) for its kind consent to the publication of this article.