Triangle NoTES - May 2012

This newsletter provides more information on the audits performed for the SSAS audit committee's use in determining the pass/fail points for the new privatized Method 25 audit program.

As it stands now, there is still only one provider for audits, so there will be at least a couple of months' delay before anyone is required to perform an audit for Method 25. In reality, it may take even longer, depending on how long it takes to get the second provider into the system and the requirements set.

Part of the problem is the EPA requirements as stated (emphasis mine):

"The final rule requires that acceptance limits must be set so that 90 percent of qualified laboratories would produce results within the acceptance limits for 95 percent of all future audits."

"This procedure must use well established statistical methods to analyze historical results from well qualified testers. The acceptance limits shall be set so that there is 95 percent confidence that 90 percent of well qualified labs will produce future results that are within the acceptance limit range."

One of the problems is that there is no definition of "qualified" or "well qualified" to use in the determination, so the default would seem to be to set the requirements so that 90% of labs will pass with a 95% confidence level. The same holds true for all testers. The question is whether the two are cumulative in the case of Method 25. If they are not cumulative, the definition of "well qualified" might include "perfect," since the tester/lab combination would be held to the same criteria as the lab itself, thus assuming no failures due to sampling.
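
As a rough sketch of that arithmetic (the 90% figure comes from the rule quoted above, but the independence assumption and the code itself are my own illustration, not anything in the rule):

    # Hypothetical sketch: if the lab alone passes 90% of audits and field
    # sampling independently passes 90%, the tester/lab combination passes
    # only 0.90 * 0.90 = 81% of the time. Holding the combination to the
    # same 90% criterion as the lab alone leaves no room for sampling error.
    lab_pass = 0.90
    combined_target = 0.90
    sampling_pass_needed = combined_target / lab_pass
    print(f"Sampling pass rate required: {sampling_pass_needed:.2f}")  # 1.00, i.e. perfect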

The concept of 90% of all audits of any type, collected in the field, recovered in the lab, and analyzed, passing at the 95% confidence level requires the pass/fail line to be set so high that the audits tell the agencies nothing about the quality of the testing.

I am not a statistician by any means, so it took a few hours of discussion for me to understand how one could take the historical audit data from EPA and the recent private audit data and arrive at similar pass/fail criteria. The historical data on the low end had four audits reported as zero and four reported in excess of 1000% high, with the worst 3784% high. While most of the ~66-audit private data set passed the +20% criterion, some of the lowest sample concentrations were reported about 30% high. As it was explained to me, and now repeated from memory, the problem with these sets was the level of "noise" inherent in them.

That problem is expanded by the required 95% confidence level, which takes the allowable range to twice the standard deviation from the mean. This means normal laboratory audits, where everything is consistent, would have only slightly expanded pass criteria, but where there is any real variation the percentages expand dramatically. So even though the private audits might have passed at +30%, statistically the requirement would be more like +60% to meet the confidence criterion. This seems more like the old grading on the curve some of us remember from school, which is supposedly no longer done. In any case, it is not the type of tool previously used by the regulatory community for these criteria.
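
To make that expansion concrete, here is a minimal sketch of limits set at roughly two standard deviations around the mean, as it was explained to me. The procedure and the recovery percentages are my own invented illustration, not the committee's actual data or method:

    # Illustrative only: acceptance limits at mean +/- k standard deviations.
    from statistics import mean, stdev

    def acceptance_limits(recoveries_pct, k=2.0):
        """Return (low, high) pass/fail limits as a percent of the true value."""
        m = mean(recoveries_pct)
        s = stdev(recoveries_pct)
        return m - k * s, m + k * s

    consistent_lab = [98, 101, 99, 102, 100, 97, 103, 100]  # tight results
    noisy_history = [70, 130, 95, 160, 40, 110, 90, 105]    # scattered results
    print(acceptance_limits(consistent_lab))  # about 96 to 104
    print(acceptance_limits(noisy_history))   # roughly 27 to 173

With the consistent set the limits stay within a few percent of the mean, while the same formula pushes the noisy set out past the +60% figure described above.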

This seems similar to issues we have seen in the past concerning the philosophy of audits. My QA officer always had a problem with using one of our internal audits to confirm the level of contamination of a recovery system, which also served to help remove said contamination. The concept was that all audits had to pass our criteria. Our criteria were not based on any statistical formula using our data, but on the +20% of the EPA audit program at the time. Of course, our QA Manual required a certain number of audits to be passed every month, but we allocated about twice that number for use each month. I saw the benefit of using some audits to check the systems rather than trying to make sure all of the audits passed. The result was better quality sample analysis, because an audit was sacrificed when there was a questionable recovery system rather than a sample. We met our requirement for passing audits by cycling them through the systems in rotation when there was no concern with the systems. Thus, those audits let us know the people and procedures were also working correctly, in addition to the audits used to confirm a system was clean and functioning properly. If the system was not clean and functioning correctly, that audit was not expected to pass and was not a problem for anyone other than the QA officer.

The accreditation auditor also had issues with this approach, which is why we finally had to drop the internal audit program. In his opinion, any known-value sample collected for recovery was essentially an audit no matter what it was called. Since these other known samples could be treated as true audits and vice versa, there was a concern over potential manipulation of audits. The only solution was to remove the audit program and use equipment blanks and direct injections of liquids into the recovery systems for any internal checks.

Just as the information gained from failed internal audits was useful, so is the regulatory use of failed audits. Like a passing audit, an audit that fails high on the low end or fails low on the high end is generally not a concern as long as the facility also shows compliance. A failure high on the low end indicates a high bias for the outlet, which would not affect compliance. A failure low on the high end indicates a low bias on the inlet, which again would not affect compliance. Both situations are audit failures, but they still give a measure of confidence to the administrator. Changing the pass/fail criteria so the pass rate equally covers both high and low failures at the same level does not provide that type of confidence at all.
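
A hypothetical worked example of why those failure directions are conservative (the numbers are mine, and I am assuming a control device judged on destruction efficiency):

    # Invented numbers: true inlet 100 ppm, true outlet 2 ppm, so the true
    # destruction efficiency is 98%. A high-biased outlet or a low-biased
    # inlet can only make the reported efficiency look worse, so showing
    # compliance with the biased readings is a conservative result.
    def efficiency(inlet_ppm, outlet_ppm):
        return 100.0 * (inlet_ppm - outlet_ppm) / inlet_ppm

    print(efficiency(100.0, 2.0))        # true: 98.0
    print(efficiency(100.0, 2.0 * 1.3))  # outlet biased 30% high: 97.4
    print(efficiency(100.0 * 0.7, 2.0))  # inlet biased 30% low: 97.1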

I have no idea what changes, if any, the EPA may finally make to the requirements. If no changes are made, the only way anyone should ever fail an audit is by trying to do so.

Of course, there is also the possibility that the terms "qualified" and "well qualified" could be quantified as a percentage of the total number of labs or testers. That could allow the removal of the outlier results, where audits were reported as zero or as several hundred percent of the actual value, under the 10% "allowed failure" level. Even then, the effect of multiplying the variation to reach an acceptable confidence level could be significantly decreased, perhaps to a single standard deviation range, since the 90% allowance for labs only has an impact if there are ten or more from which to choose.
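
As a sketch of that idea (the cutoff rule and the data are invented for illustration, not taken from the actual audit history):

    # Invented data: mostly tight recoveries plus two wild outliers, which is
    # 10% of this 20-result history. Dropping the worst 10% as the "allowed
    # failures" and using a single standard deviation gives far tighter limits.
    from statistics import mean, stdev

    def limits(recoveries_pct, k):
        m, s = mean(recoveries_pct), stdev(recoveries_pct)
        return m - k * s, m + k * s

    history = [98, 102, 95, 105, 100, 99, 101, 103, 97, 104,
               96, 106, 100, 98, 102, 99, 101, 100, 0, 3784]
    ranked = sorted(history, key=lambda r: abs(r - 100))  # closest to 100% first
    trimmed = ranked[: int(len(ranked) * 0.9)]            # drop the worst 10%
    print(limits(history, 2))  # enormous range dominated by the outliers
    print(limits(trimmed, 1))  # a single-standard-deviation range, far tighter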

I was also surprised that audits were excluded for instrumental methods because they are field calibrated. The ability to report correct concentrations of a known sample could still be impacted even with a perfect calibration. The older papers on audits published by EPA employees relating to Method 18, which was also excluded from the audit requirements, indicated some clear issues with failed audits for some compounds even with the required recovery studies. It would seem that any method reporting concentrations of compounds could be audited in a similar fashion, both in the lab and in the field, to establish confidence levels for any reported concentration.

Wayne Stollings

Triangle Environmental Services, Inc.

Wstollings@aol.com