ICUMSA News n°23 – 1994

Message from the President

Havana Session and USNC Meeting in New Orleans

I think that members who attended our 21st Session in Havana from 15 to 20 May 1994, and the meeting of the United States National Committee for Sugar Analysis in New Orleans on 25 and 26 May, would agree that these were very successful meetings. The achievements of the last four years were reviewed, and plans were made for the coming 22nd Session.

One of the important decisions taken was to contribute US$ 50,000 towards the establishment of the 100 °Z point at near infrared wavelengths. The outcome of this work will permit the use of polarimeters employing light sources at these wavelengths for Official measurements. The Braunschweig Sugar Institute and the PTB have advised that plans for this work are well in hand, and a report on progress can be expected in the May 1995 issue of ICUMSA News.

Referee Appointments for 22nd Session

General Subjects – Referees
1 Raw sugar (cane and beet) – R.M. Urquhart
2 White sugar (cane and beet) – G. Parkin
3 Specialty sugars – M. Burge
4 Molasses – C.J. Shelton
5 Cane – M.A. Brokensha
6 Beet – M. Kunz
7 Cane sugar processing – O.L. Crees
8 Beet sugar processing – J.-P. Lescure
9 Starch-derived sweeteners – G. Mitchell

Subjects – Referees
1 Constitution and by-laws – M.R. Player
2 Oligosaccharides and polysaccharides – K. Thielecke
3 Method format, collaborative testing and statistical treatment of data – M.A. Godshall
4 Polarimetry and quartz plates – A. Emmerich
5 Dry substance – G. Vaccari
6 Spectrophotometry – G. Mantovani
7 Colour, turbidity and reflectance measurement – R. Riffer
8 Chromatographic techniques for sugars – K.J. Schäffler
9 Chromatographic techniques for non-sugars – P. Bourlet
10 Enzymatic and immunological methods – S.J. Clarke
11 Density – F. Spieweck
12 Rheology – R. Broadfoot
13 Refractive index – I. Weingärtner
14 Microbiological tests – R. Strauss
15 Reducing sugars – L.B. Jørgensen
16 Ash – J.-P. Ducatillon

Members interested in participating in the work programmes of the various Refereeships who have not already been appointed Associate Referees should contact the Chairman of their National Committees without delay.


Proceedings of the 21st Session

The Proceedings of the 21st Session are being prepared by Mr. J.V. Dutton as Editor/Manager of the Publications Department, in association with CPL Scientific Limited, which will typeset and supervise the printing. Copies will be available by April 1995, and details of how to place orders are expected to be advertised in some sugar journals in March 1995.

Some observations on conducting collaborative tests

Mary An Godshall, Sugar Processing Research Institute, Inc., USA

Referee, ICUMSA Subject 3

Now that the 21st Session has been completed and ICUMSA looks forward to the 22nd Session, it is a good time to review some of the factors that go into the planning and conduct of collaborative analytical studies. In its last two Sessions, at Colorado Springs in 1990 and again in Havana in 1994, ICUMSA and its Referees demonstrated a strong commitment to the standardisation and validation of analytical methods through collaborative testing.

The importance of collaborative testing

Numerous global factors have contributed to the increased importance of the standardisation of methods. These include the harmonisation of trade and tariffs, more stringent quality control standards, the worldwide spread of ISO 9000, and increased global competition and international trade. Most standardising bodies will no longer accept methods that have not undergone validation.

What does a collaborative test accomplish?

It is a procedure by which several laboratories, working with a set of common samples, test a method whose steps have been standardised. These tests are also sometimes called “round robin tests.”

In an ideal situation, the method being tested will have been developed with sufficient “ruggedness” testing that it can successfully accommodate some degree of deviation from the written procedure. If the method has been investigated thoroughly before being subjected to collaborative testing, the test coordinator(s) will be able to answer questions about whether a deviation from protocol is significant or not. Even better, allowed deviations will be written into the method. The final goal of collaborative testing is to produce numerical values for the repeatability and reproducibility of the method. Once a method has successfully undergone “round robin” testing, it is considered a “validated” method: it can be used in contracts and stands a chance of being accepted by international organisations such as ISO and the Codex Committee. Within ICUMSA, a validated method is given the designation “Official”.

When does a collaborative test become necessary?

Methods are typically developed over several years in several different laboratories. As time goes on, the method may enter into commerce. Once a method is in use among buyers and sellers to determine quality and therefore the price of a product, the need for standardisation and validation becomes apparent. It is at this point that a collaborative test is usually deemed necessary.

Unless a method is in commercial use, either for buying or selling, or for other regulatory purposes, such as for safety, toxicology or labelling, there is usually no need to do a collaborative test.

Brief history of collaborative testing in ICUMSA

Even in the earliest days, ICUMSA employed comparative testing. However, it was not systematised and was often limited to only one or two laboratories. In 1971, Dr. Frank Carpenter of the United States National Committee initiated discussion via a letter to the President, Dr. A. Carruthers, in which he stated that “ICUMSA should have a system for all Subjects regarding the status of methods.” The discussion led to the formation of Subject 1a, “Method specification”, which produced its first Report at the 1978 meeting in Montreal.

During the 20th Session, between 1986 and 1990, a great deal of activity took place. ICUMSA formed a Working Group on Collaborative Studies, which made recommendations for systematising the way methods are examined. During this period IUPAC (International Union of Pure and Applied Chemistry) and AOAC (Association of Official Analytical Chemists) together produced a document that standardised the way a collaborative study should be carried out and recommended statistical procedures for evaluating the data to produce repeatability and reproducibility figures (Protocol for the design, conduct and interpretation of collaborative studies, William Horwitz, Pure and Applied Chemistry, 1988, vol. 60, pp. 855–864). This paper is reproduced in toto in the Proceedings of the 1990 ICUMSA Session, pp. 146–158. ICUMSA joined the international movement towards harmonisation by accepting these proposals as guidelines for future collaborative testing. In 1990, Subject 1a was replaced by a new Subject 3, “Method format, collaborative testing and statistical treatment of data”.

The basic criteria for a collaborative test

The protocol for the design of a collaborative test is simple in its basic criteria, which are:

– Number of laboratories. A minimum of 8 laboratories should participate; up to 10 is most desirable, but not required. The test should include international participation. Only when the instrumentation is very expensive and rare can as few as 5 laboratories be accepted, and even this is becoming less acceptable in actual practice. It is preferable that all laboratories be familiar with the method.

– Number of samples and replications. At least 5 samples should be used. Duplication may be done as known, blind or split level. Blind duplication is most desirable. This means, in practice, that 10 samples are distributed. Samples should encompass the expected range of analyte concentration and be representative. To prevent identification of duplicates, the samples should not be mailed to the participants from different locations, but should be collected and prepared by one person, packed similarly, randomly coded and mailed from one location.

– Statistical analysis. Statistical analysis is done as a one-way analysis of variance, applied separately to each material, to estimate the repeatability and reproducibility parameters. The report should contain all the raw data, and results are preferably shown both with and without outlier removal. The text of the harmonised protocol referred to above goes into more detail on each of these points and is recommended reading for anyone planning to conduct a collaborative test.
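As an illustration only, and not part of the protocol text, the calculation for a single material analysed in blind duplicate can be sketched as follows. The laboratory results are hypothetical; the within-laboratory (repeatability) variance is estimated from the duplicate differences, and the between-laboratory component from the variance of the laboratory means, as in the one-way analysis of variance described above:

```python
# Sketch: estimating repeatability (s_r) and reproducibility (s_R)
# for one material from blind duplicate results in 8 laboratories.
# The data are hypothetical, e.g. % sucrose on a common sample pair.
import statistics

results = [
    (99.61, 99.58), (99.55, 99.60), (99.70, 99.66), (99.59, 99.57),
    (99.63, 99.65), (99.52, 99.50), (99.68, 99.71), (99.60, 99.62),
]

p = len(results)                                  # number of laboratories
lab_means = [(a + b) / 2 for a, b in results]

# Within-laboratory (repeatability) variance from duplicate differences
s_r2 = sum((a - b) ** 2 for a, b in results) / (2 * p)

# Between-laboratory variance component: variance of the lab means,
# less the share of within-lab variance it contains (n = 2 replicates)
s_L2 = max(statistics.variance(lab_means) - s_r2 / 2, 0.0)

s_r = s_r2 ** 0.5                                 # repeatability std dev
s_R = (s_r2 + s_L2) ** 0.5                        # reproducibility std dev

print(f"s_r = {s_r:.4f}, s_R = {s_R:.4f}")
```

In a real study this calculation is repeated for each material, after the outlier tests prescribed by the harmonised protocol, and the reproducibility standard deviation can never be smaller than the repeatability standard deviation.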

Although the above criteria can be simply stated, complexity does arise in implementation of the details. As with any activity as complex as the design and conduct of a collaborative test, understanding of the many issues and ramifications continues to evolve. Each test has unique features, and questions continue to arise about such issues as testing unstable samples, microbiological methods, test kits, nonparametric methods, and others.

Common pitfalls

Some of the common pitfalls in collaborative testing include the distribution of unstable or nonrepresentative samples; the use of laboratories unfamiliar with the method; poorly written instructions; and a poorly designed test that is either too involved or expected to answer too many different questions. In the worst case, the method is simply not ready for collaborative testing.

– Unstable samples. Fortunately, unstable samples are not a big problem for ICUMSA, where most samples are either stable solids or syrups. If a sample does deteriorate, the results for that sample should be discarded; applying correction factors to deteriorated samples is not valid. Further research needs to be done on stabilising cane juice so that tests can be conducted on it. The choice of samples is perhaps the single most important step in a collaborative study, upon which all the results depend. If the samples are not good – that is, homogeneous, representative and properly preserved and packaged – there is no hope for the success of the study.

– Laboratories unfamiliar with the method. If a laboratory is unfamiliar with the method, several practice samples of known concentration can be distributed and extra time allowed for the learning process. The statistical analysis will usually show if an individual laboratory is having a problem with the method.

– Complicated tests. The test should be kept as simple as possible. It is not necessary to include a larger than recommended number of laboratories, samples or replicates on the assumption that “more is better”. These practices usually only confuse the statistical analysis and overwork the collaborators. Sometimes a collaborative test is also designed to compare methods, and more than one method is tested at the same time. Care should be taken when this is done, and the design should be discussed with a statistician before proceeding.

– Method write-up. Finally, it cannot be emphasised enough how important the method write-up is. It should be clear and unambiguous, with as much detail as possible. Many problems arise simply from misinterpretation of vague instructions, and method changes may occur if steps, equipment or specifications are left out. Even simple and seemingly obvious procedures need to be detailed.

One last item involves the distribution of materials other than the samples. Sometimes a test may call for a reagent or supply that is not commonly available, such as a certain type of filter aid, filter paper, resin, etc. A natural inclination would be to distribute this material along with the test. This practice is to be discouraged because it removes some of the natural variation that the results would show in routine use. Part of the method development in a situation like this should include identifying alternative sources of supply, or a range of acceptable materials, to which the cooperating laboratories may have access.

Revisions to the ICUMSA Methods Book

J.V. Dutton (UK)

Consequent upon the Recommendations adopted at the 21st Session (ICUMSA News No. 22, September 1994), as well as a postal vote on three Methods, there have been changes in the status of many Methods contained in the Methods Book, which was published in April 1994, just prior to the Havana meeting.

In addition to the details given here, all holders of the book will be supplied with a new page listing the changes. Principally, but not exclusively, the changes affect 35 Methods and Specification & Standard SPS-5, all of which carried asterisks indicating ‘subject to ratification at the 21st Session, 1994’. The details are as follows:

Molasses Methods
GS4/7-1 Confirmed as Accepted (General Subject 4, Rec. 1)
GS4-5 Confirmed as Official (General Subject 4, Rec. 2)
GS4/3-9 Confirmed as Official (General Subject 4, Rec. 6)
GS4/7-11 Confirmed as Official (General Subject 4, Rec. 7)
GS4-13 Confirmed as Accepted (General Subject 4, Rec. 3)
GS4-15 Confirmed as Accepted (General Subject 4, Rec. 3)
Cane Processing Methods
GS7-3, -5, -7, -9, -11, -13, -15, -19 and -21 All confirmed as Accepted (General Subject 7, Rec. 2)
GS7-25 Adopted as Official (General Subject 7, Rec. 1)
GS7-27 Adopted as Official (General Subject 7, Rec. 1)
Beet Processing Methods
GS8-5 Confirmed as Accepted (General Subject 8, Rec. 3)
GS8-7 Confirmed as Accepted (General Subject 8, Rec. 4)
GS8/2/3/4-9 Confirmed as Accepted (Postal Vote, December 1994)
GS8/4/7-11 Confirmed as Accepted (Postal Vote, December 1994)
GS8/4/6-13 Confirmed as Accepted (General Subject 8, Rec. 7)
GS8/4-15 Confirmed as Accepted (Postal Vote, December 1994)
Specifications & Standards
SPS-4 New polynomials to be added in the 1996 update (Subject 11, Rec. 1)
SPS-5 Table 1 of Part 1 confirmed as Official (Subject 12, Rec. 1); Pidoux Formula in Part 2 confirmed as Accepted (Subject 12, Rec. 2); Breitung Diagram in Part 2 confirmed as Accepted (Subject 12, Rec. 3); Part 3 confirmed as Official (Subject 12, Rec. 4)

Editor: Rud Frik Madsen, Strandpromenaden 38, DK-4900 Nakskov, Denmark – Tel: +45 53921675, Fax: +45 53928559