ICUMSA News n°7 – 1989

Message from the President

Preparation for the 20th Session

This is probably the busiest time of the session with collaborative test programmes nearing completion and thoughts being turned towards writing referees’ reports. CSR’s Central Laboratory has participated in the test programmes of four referees and we trust that our data will be used to good effect. Referees are reminded that participants in collaborative tests have a natural interest in the data and time should not be lost in working up test results and communicating the outcome to all who contributed to the work.

Referees are also reminded that, because we would like to contain the length of the proceedings, the detail of collaborative test programmes should not be placed in an appendix to the referee's report. Instead, a paper should be written for submission to a refereed journal, where it will enjoy the widest possible exposure.

Participation in Conference Proceedings

We have had a number of enquiries from people contemplating attendance at next year's Colorado Springs meeting, both as members and as observers. For those who have not previously been involved in ICUMSA affairs the format of our meetings might not be familiar. The conference comprises the presentation of the twenty-nine subject reports by their respective referees. For the most part it is assumed that attendees will have had the opportunity to study the reports beforehand, so the referees' conference presentations will deal mainly with the highlights and with detailed discussion of important issues raised in those reports. It is therefore important that intending participants notify their National Committee chairmen so that advance copies of reports can be distributed when they become available.

Anyone with views they want discussed at the conference is advised to take them up with the referees for the subjects involved so that they can be dealt with in the referees’ reports. The conference is not normally the place to introduce new material unless of course it is of such importance that it can hardly be ignored.

Work of 21st Session

The Colorado Springs meeting should fulfil two important purposes: to review what was done in the last four years and to plan what should be undertaken in the next four. I believe we should place more emphasis on planning the future so that our intentions better match the resources available for their execution.

On the assumption that the Commission endorses the goals which have been set before National Committees, we should try to use the conference to make plans for carrying out the work of the recommendations to be adopted. It should be more efficient to resolve issues in face-to-face discussion than to have such matters discussed by letter after the conference. Anyone who has ideas about the conference agenda that they wish to have considered by the administration is invited to submit them within the next few months.

Survey on Sugar Scale and Polarisation Method

A survey is being conducted to establish the extent of adoption of the new sugar scale (°Z) and of the removal of the option of deducting 0.1 degrees from the polarisation of raw sugars. At this time only twelve National Committees have returned their survey forms. It would be appreciated if those countries that have not yet responded could attend to this matter as soon as possible.

Report on Second IUPAC Harmonization Workshop

by Mary An Godshall, Sugar Processing Research, New Orleans, LA

The Second IUPAC Harmonization Conference was held on April 17-18, 1989, in Washington, D.C. The purpose of the meeting was to agree upon and adopt a protocol for the presentation of analytical methods, that is, essentially, how the method write-up should present the reliability characteristics of the method, which have already been determined by collaborative study.

The two stated objectives were: (1) to ensure that all methods published as standards meet minimum requirements of specific performance characteristics, thereby providing guidelines for committees writing the standards; and (2) to ensure that the text of the standardized method incorporates practical information regarding its precision and other performance characteristics.

While the recommendations have not yet been adopted, pending in most cases only minor modifications, I thought I would take the opportunity to discuss various points that will be of interest to referees and others involved in conducting and interpreting collaborative tests.

To put it in very simple terms, what do all those numbers mean and of what use are they? The following is a compendium of bits of wisdom and personal observations gleaned from the meeting which may be helpful to referees in preparing their reports for the 1990 meeting. It was also instructive to note how much controversy there was on almost every single point discussed in the proposed recommendations, and yet agreement was reached on almost every matter.

Throughout the conference, the many experts stressed the need for flexibility and common sense. Sometimes, when using statistics, we can be so dazzled by the numbers that we forget we have a very practical purpose in mind: to have methods that are practical and applicable to the industry, and whose performance is known.

Review of method write-up. While many ICUMSA referees already do this, it is strongly recommended that the method write-up be (1) reviewed by someone who is intimately familiar with its details and/or (2) circulated for review to several potential participants who are familiar with the method. This helps to ensure that the method is clearly written, unambiguous and correct. Outliers can often result from a poorly written method that contains elements open to different interpretations. (I am sure that referees who have conducted collaborative studies are often surprised at the number of interpretations given to even the most clearly presented instructions.)

The report. It is necessary to distinguish between the two types of written presentations that deal with the data obtained from a collaborative study: The first is the Report of the Collaborative Study, which will be presented at the 1990 session and should subsequently be published in the scientific literature. The second is the Method Write-up, which will be in the methods book, and which will contain, in summary form, the performance characteristics of the method, along with the detailed instructions for carrying out the method.

Raw Data. The raw data from the collaborative study should always be published in the report of the collaborative study. The failure of ICUMSA to require this in the past has greatly hampered the review of methods, making it impossible to evaluate such important statistics as repeatability and reproducibility. Presentation of the raw data also allows others to perform their own statistics on the data, if they wish. On the other hand, the raw data are not published in the Method Write-up, which contains only the summary statistics of the collaborative test results.

The presentation of the table of statistical data. The format of the table that presents the summary statistics of the collaborative test will be standardized as a result of this Conference. The data are to be presented, one sample at a time, by matrix/analyte/concentration level, and not as an overall result. The format for the presentation of the test results is shown in Table 1. AOAC already uses this format in its journal when results of collaborative studies are published.

Table 1: Draft example of the format for a table of statistical data for inclusion in a standard method.

Results of interlaboratory test
Results for (analyte) in (material) expressed in (concentration units)

Sample (by concentration range)                              A      B      C      D
Number of laboratories retained
Number of laboratories rejected
Number of accepted results
Mean value
True or accepted value (optional, only include if known)
Repeatability standard deviation (Sr)
Repeatability relative standard deviation (RSDr) (%)
Reproducibility standard deviation (SR)
Reproducibility relative standard deviation (RSDR) (%)
Repeatability limit (or value) r = 2.8 x Sr
Reproducibility limit (or value) R = 2.8 x SR
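
By way of illustration, the short sketch below (in Python, with invented duplicate results and laboratory labels) computes the entries of Table 1 for a single sample, assuming the usual analysis in which each retained laboratory reports duplicates: Sr is pooled from the within-laboratory differences and SR combines the between- and within-laboratory components. It is a minimal sketch of the standard calculation, not part of the adopted protocol.

    import math

    # Duplicate results per retained laboratory (hypothetical values).
    results = {
        "Lab 1": (10.2, 10.4),
        "Lab 2": (10.1, 10.1),
        "Lab 3": (10.6, 10.3),
        "Lab 4": (9.9, 10.2),
        "Lab 5": (10.3, 10.5),
    }

    p = len(results)                     # number of laboratories retained
    n = 2                                # replicates per laboratory
    lab_means = [sum(v) / n for v in results.values()]
    grand_mean = sum(lab_means) / p

    # Repeatability variance: pooled within-laboratory variance of duplicates.
    s_r2 = sum((a - b) ** 2 for a, b in results.values()) / (2 * p)

    # Between-laboratory variance component from the scatter of lab means.
    var_means = sum((m - grand_mean) ** 2 for m in lab_means) / (p - 1)
    s_L2 = max(0.0, var_means - s_r2 / n)

    s_r = math.sqrt(s_r2)                # repeatability std. dev. (Sr)
    s_R = math.sqrt(s_L2 + s_r2)         # reproducibility std. dev. (SR)

    print(f"Mean value        {grand_mean:.3f}")
    print(f"Sr / RSDr (%)     {s_r:.3f} / {100 * s_r / grand_mean:.2f}")
    print(f"SR / RSDR (%)     {s_R:.3f} / {100 * s_R / grand_mean:.2f}")
    print(f"r = 2.8 x Sr      {2.8 * s_r:.3f}")
    print(f"R = 2.8 x SR      {2.8 * s_R:.3f}")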

Outliers. The area that engendered the greatest amount of disagreement was the treatment of outliers, especially what constitutes too many outliers. There are two types of outliers: individual values and laboratories. In the past, in general, it appears that the entire set of data from a laboratory was thrown out if it contained outliers; now it is permissible to discard individual values and retain the bulk of the laboratory's results. The proposal still has to be finally agreed upon by the working group. The wording of the proposal regarding outliers is the following:

“Not more than about 1 in 5 sets of data (obtained from the analysis of samples with different matrices and levels of analyte concentration) should contain more than 20-25 % of unexplained statistically outlying results.”

In one of the original proposals, there was a suggestion that the data should be presented both with and without outliers removed. This seems to have been discarded as it is not mentioned in the final draft.
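
The recommendations quoted above do not name particular outlier tests. As a hedged illustration only: harmonized protocols of this kind commonly screen laboratory means with Grubbs' test, and the sketch below applies a single-value Grubbs screen to invented laboratory means. The critical value 2.020 is the two-sided 5 % point for seven laboratories and must be looked up for other numbers of laboratories.

    import statistics

    lab_means = [10.1, 10.2, 10.3, 10.2, 10.4, 10.1, 11.6]   # hypothetical

    mean = statistics.mean(lab_means)
    s = statistics.stdev(lab_means)           # sample standard deviation
    suspect = max(lab_means, key=lambda x: abs(x - mean))
    G = abs(suspect - mean) / s               # Grubbs statistic

    G_CRIT = 2.020                            # n = 7, alpha = 0.05, two-sided
    if G > G_CRIT:
        print(f"{suspect} is a statistical outlier (G = {G:.3f} > {G_CRIT})")
    else:
        print(f"No outlier detected (G = {G:.3f} <= {G_CRIT})")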

Valid data. The criteria say that only “valid” data must be used. It is useful to consider what valid data are, and only the coordinator of the study can decide this. If outlying data are the result of not following the method, then the data cannot, a priori, be considered valid and should be thrown out before any outlier tests are done. Invalid data, thus, are the result of a laboratory misapplying the method; such data should be discarded before doing any statistics; they are not even considered outlying data; they are simply not part of the data set. Outlier tests should be done only on those data for which there is no known factor to explain the result. Such data have to be considered valid and should be included in the raw data of the study that is subjected to statistical treatment.

Acceptable variability in a method. Perhaps one of the most useful guidelines to come out of this meeting is the use of the reproducibility relative standard deviation (RSDR). In the past, ICUMSA has not had clear guidance as to what degree of variability is acceptable in deciding to adopt a method, and this has led to the rejection of several useful methods. The sugar industry has been greatly influenced by the very close reproducibility that can be achieved in polarization measurements and has, as a result, set an unrealistically high standard for other tests.

Dr William Horwitz [1] observed that the reproducibility relative standard deviation (reproducibility coefficient of variation) for chemical analyses varies inversely with concentration, and that this relationship has held over thousands of analyses of many different constituents. The RSDR of any given test will not usually exceed twice the value given below:

Analyte concentration        RSDR (%)
1 ppb                        45
0.01 ppm                     32
0.1 ppm                      23
1 ppm                        16
10 ppm (0.001 %)             11
100 ppm (0.01 %)             8
1000 ppm (0.1 %)             5.6
1 %                          4.0
10 %                         2.8
100 %                        2.0

It should be noted that physical parameters do not necessarily follow the same pattern, usually falling well below these levels; this guideline is therefore to be used only in the interpretation of data from chemical measurements.
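
The table above gives the values without the underlying formula. The widely cited Horwitz relationship, RSDR (%) = 2^(1 − 0.5 log10 C), with C the analyte concentration expressed as a mass fraction, reproduces the figures, as the following sketch shows.

    import math

    def horwitz_rsdr(mass_fraction):
        """Predicted reproducibility relative standard deviation in percent."""
        return 2 ** (1 - 0.5 * math.log10(mass_fraction))

    # Spot-check against the tabulated values (1 ppm = 1e-6 mass fraction).
    for label, c in [("1 ppb", 1e-9), ("1 ppm", 1e-6), ("0.1 %", 1e-3),
                     ("1 %", 1e-2), ("100 %", 1.0)]:
        print(f"{label:>7}: RSDR = {horwitz_rsdr(c):4.1f} %")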

Repeatability and reproducibility values. These are also called the repeatability (r) and reproducibility (R) limits and represent the maximum tolerable difference, at 95 % probability, between two independent determinations. The value is obtained by multiplying the repeatability or reproducibility standard deviation by 2.8 (see Table 1). Slightly different multipliers, such as 2.79, are sometimes seen, but 2.8 is the accepted value (the factor derives from 1.96 x √2 ≈ 2.77 for the difference of two results at the 95 % level, rounded to 2.8).

These are useful statistics for comparing single test results from two laboratories (use R) or when an analyst is checking his own precision (use r).
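
As a brief usage illustration (all numbers invented): two single results from different laboratories are taken to be consistent if they differ by no more than R.

    # Comparing two single results against the reproducibility limit R.
    s_R = 0.25                 # reproducibility standard deviation (hypothetical)
    R = 2.8 * s_R              # reproducibility limit

    x_lab1, x_lab2 = 10.2, 10.8
    diff = abs(x_lab1 - x_lab2)
    print(f"|difference| = {diff:.2f}, R = {R:.2f} ->",
          "consistent" if diff <= R else "difference exceeds R at 95 % probability")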

The problem of trying to make the collaborative study do too much. There has been a tendency within ICUMSA to make the collaborative study be all things to all people. Many times, the things evaluated in a collaborative study are variables that determine the robustness of the method and should already have been worked out beforehand: for instance, using different reagents or temperature regimes, applying the method to matrices without first determining whether it works on them, and sundry other offences. In a well-characterized method, these elements will be presented as ranges within which the analyst can operate. Such practices also greatly confuse the statistical analyses and conclusions.

Samples. In conclusion, a few words about the choice of samples used in the collaborative study. The IUPAC recommendations require that repeatability and reproducibility be determined individually for each sample. This is, of course, because these statistics vary with concentration. The sample is really the combination of matrix, analyte and concentration; the choice of sample is therefore critical. One should choose samples that represent a range of the analyte (low, intermediate and high concentration levels). One should also choose different matrices if the test is to be used for different types of samples, for example raw cane sugar, white sugar, raw beet sugar and molasses. If we were to devise a test for an analyte in these four types of matrices, then all four should be included in the test. One should not assume that the method is applicable to another, quite different matrix or concentration level without actually determining it. This may require additional coordination among the referees for different commodities but is certainly worth the effort in the final result.

Reference

[1] Horwitz, W.: Evaluation of analytical methods used for regulation of foods and drugs. Anal. Chem. 54 (1982) No. 1, 67A–76A

Editor: R. Pieck, Klein Spanuit 9, B-3300 Tienen, September 1989 – Tel. +32 16/81 24 36 – Telex 222 51 – Telefax +32 16/82 03 17