Advanced QA/QC For Part 75 CEMS

AUTHORS

Russell S. Berry
Jack C. Martin
Stephen K. Norfleet
RMB Consulting & Research, Inc.
5104 Bur Oak Circle
Raleigh, NC 27612

Charles E. Dene
Electric Power Research Institute
3412 Hillview Avenue
Palo Alto, CA 94303

Abstract

For the last five to six years, utilities have been continuously monitoring emissions of SO2, NOx and CO2 from fossil fuel-fired sources in accordance with EPA Title 40 CFR Part 75 regulations. With several years of Part 75 CEMS experience behind the utility industry, one primary concern continues to be the costs associated with CEMS operation. CEMS-related costs of complying with Part 75 can be generally categorized as (1) equipment-related costs -- i.e., equipment procurement, installation, operation and maintenance, (2) costs associated with inaccurately measuring SO2 emissions for allowance tracking purposes, or (3) costs associated with quality control and quality assurance activities.

To reduce the utility industry’s costs associated with Part 75 CEMS requirements, EPRI has funded several projects designed to improve CEMS accuracy, eliminate CEMS and EPA Reference Method biases, and reduce equipment-related costs. One such effort currently being conducted is the "Advanced QA/QC for CEMS" project. The scope of work for this project focuses on reducing costs associated with QA/QC activities and on reducing costs resulting from inadequate CEMS equipment performance. The objectives of this project are (1) to develop more cost effective, alternative Part 75 QA/QC procedures that provide an equivalent or higher level of data quality, and (2) to identify CEMS design and equipment enhancements that will improve CEMS performance and/or reduce operation and maintenance costs.

This paper describes the results of an extensive CEMS performance data review and possible alternative QA/QC procedures being developed as part of this project. This paper also discusses the status of other ongoing project activities associated with CEMS design and equipment enhancements.

Introduction

Since the Environmental Protection Agency’s (EPA’s) promulgation of 40 CFR Part 75 on January 11, 1993, utilities have installed and continue to operate approximately 1,500 continuous emission monitoring systems (CEMS) on fossil fuel-fired boilers and stationary gas turbines. Approximately 260 of these CEMS are installed on "Phase I" sources; the remaining CEMS are installed on "Phase II" sources. For most Phase I sources, CEMS were required to be installed by November 15, 1993. Phase II sources had to install CEMS by January 1, 1995 (in most cases).

As utilities continue to operate CEMS, an enormous database of CEMS equipment and performance information is being compiled. Quality assurance and quality control (QA/QC) results and overall CEMS performance data can be obtained from the electronic data reports (EDRs) being submitted to EPA. Equipment performance information and experiences being recorded by CEMS technicians provide valuable guidance regarding possible enhancements to existing CEMS designs and equipment. Ongoing research being conducted by the Electric Power Research Institute (EPRI) and CEMS equipment vendors is providing ways to improve CEMS accuracy and reliability.

For EPRI’s "Advanced QA/QC for CEMS" project, CEMS operation, maintenance and performance information is being collected and evaluated to develop alternative Part 75 QA/QC program activities and to identify possible CEMS design and equipment enhancements. Of particular interest are those alternative QA/QC procedures and CEMS enhancements that will reduce the cost of CEMS while ensuring or improving the accuracy and reliability (i.e., quality) of the data.

Project Activities

To date, EPRI has completed several tasks associated with the "Advanced QA/QC for CEMS" project and is currently working on several other tasks. EPRI has:

  1. Conducted an extensive CEMS data review to evaluate existing QA/QC requirements,
  2. Prepared a report presenting results of the QA/QC evaluation and proposed modifications to several existing Part 75 QA/QC procedures,
  3. Begun identifying and documenting equipment and CEMS design modifications that will reduce operation and maintenance (O&M) costs and/or improve CEMS accuracy,
  4. Conducted a CEMS Information Exchange Meeting for EPRI CEMS target members, and
  5. Prepared draft procedures for a field study to refine EPRI's dilution ratio algorithm implementation guidelines.

Through the remainder of 1999, EPRI will continue to identify and evaluate possible CEMS equipment and procedural enhancements. A dilution ratio algorithm field study will be conducted. Information supporting reduced QA/QC efforts will be prepared and presented to EPA. As needed, EPRI will also be working with EPA to obtain approval to use proposed alternative QA/QC procedures. CEMS program enhancements identified in 1999 will be discussed in EPRI’s revised "Continuous Emissions Monitoring Guidelines" manual (to be published in late 1999).

As information is collected, some possible improvements may be developed that have never been implemented and evaluated on Part 75 CEMS. If so, EPRI anticipates assessing and refining the potential improvements prior to issuing implementation guidelines for these improvements. If such a field demonstration is conducted, other "known" enhancements may also be incorporated in order to evaluate the cumulative benefit of all of the enhancements. This additional CEMS design/equipment evaluation fieldwork and a final project report are scheduled to be completed in 2000.

Currently, CEMS design and equipment information is being collected from the following sources in order to identify possible CEMS enhancements.

  1. EDR Databases -- The EDR data submitted to EPA contain enormous amounts of information that can be used to identify specific types of problems that, if resolved, could reduce O&M costs and could improve CEMS accuracy. This data is being reviewed extensively to identify possible CEMS enhancements. Information being reviewed is described below, in detail.
  2. A CEMS Technician Meeting – EPRI held a meeting for CEMS target member technicians (selected by member companies) responsible for operating and maintaining CEMS. During the meeting specific details regarding all CEMS components, their performance, possible enhancements and current operation and maintenance efforts were discussed. As examples, discussions addressed:
     a. Dilution air conditioning system designs and the performance of specific components,
     b. Design and performance issues associated with extractive gas sample conditioning systems,
     c. The performance of each type of analyzer and typical problems being encountered,
     d. Sample transport and sample/calibration gas manifold design and performance issues,
     e. Appropriate dilution ratio, pressure and temperature correction procedures,
     f. Flow monitor accuracy and performance issues,
     g. Factors contributing to or causing CEMS biases,
     h. Possible alternative QA/QC procedures,
     i. Current CEMS operation and maintenance costs, including estimated labor efforts, spare parts, consumables, etc.,
     j. Possible data acquisition and handling system (DAHS) enhancements, including increased CEMS automation,
     k. Training practices, and
     l. EPA audit experiences.
  3. The Dilution Ratio Algorithm Field Study – EPRI will be selecting two to four utilities to participate in a field study to implement and evaluate the dilution ratio algorithm procedures. In conjunction with these efforts, the possible need for and benefits of other enhancements will be considered, as well.
  4. Contact With Individual Utilities – In addition to the CEMS technicians’ meeting, EPRI will continue to discuss current problems and possible CEMS enhancements with individual utilities. Note that based on EPRI’s review of the EDR database described below, "poor CEMS performers" have been identified. EPRI will attempt to discuss CEMS performance problems with these specific utilities. To the extent that the poor CEMS performance is equipment related, the possibility of design and component modifications to eliminate or minimize the problem(s) will be investigated.

The equipment improvements and procedural changes that could result in more cost-effective CEMS QA/QC programs can be categorized into several different groups. Some changes may affect the required Part 75 activities, checks, procedures, and performance specifications. Other changes may affect the accuracy, operation, maintenance, recordkeeping and reporting activities not specifically dictated by EPA. For example, some changes (i.e., alternative QA/QC procedures) may affect the way a linearity check is performed or the required linearity check limits, while other modifications may impact the time spent conducting daily operation and maintenance checks.

Likewise, in general terms, possible alternative QA/QC procedures being developed as part of this project can be categorized as either (1) variations of existing procedures, or (2) new and substantially different procedures. Alternative procedures that are similar to the existing QA/QC procedures incorporate recommended changes that will reduce or eliminate unnecessary aspects of the existing QA/QC requirements. The "new" procedures being considered incorporate substantially different approaches to CEMS QA/QC and may also require CEMS design or equipment modifications.

The possibilities of "new" and substantially different QA/QC requirements are still being investigated. These alternative procedures may permit the elimination or reduction of existing requirements – such as the relative accuracy test audit (RATA) procedures. Regarding possible modifications to existing procedures, EPRI has already completed an extensive data review effort to identify and recommend changes to existing QA/QC procedures.

Based on the results of QA/QC data evaluations and an understanding of how the Part 75 QA/QC requirements evolved from previous CEMS regulations, EPRI has developed possible modifications to the existing Part 75 QA/QC requirements that reduce compliance costs and still ensure data quality. In particular, calibration error test, interference check, linearity check and RATA procedures have been evaluated. Note that many possible alternative procedures could be developed for each existing QA/QC activity. All of the recommended alternative QA/QC activities presented herein preserve much of EPA’s existing procedures. Recommended procedural changes primarily modify the specific manner in which tests are performed (e.g., the number of repetitions required for a test), but not the general concepts of the tests (e.g., a RATA involves comparing CEMS and RM measurements).

EDR Database Evaluations

Developing a database of Part 75 QA/QC results that could be evaluated was an integral task in the assessment of existing QA/QC procedures. The assessment of existing procedures focused on four requirements, defined in Appendices A and B of 40 CFR Part 75. Together, these requirements serve as the fundamental QA/QC program required by EPA to establish proper operation of analyzer-based CEMS: the daily calibration error test, the daily flow interference check for flue gas flow meters, the quarterly linearity check and the semiannual or annual RATA. With the exception of the additional quarterly leak check requirement for differential pressure type flow meters, these four assessments represent all the regular, ongoing QA/QC requirements for CEMS included in 40 CFR Part 75.

The purpose of the comprehensive evaluation was threefold:

Data Selection: The QA/QC results used in this evaluation were obtained from EPA’s raw EDR files. In order to avoid numerous difficulties associated with extracting the desired data from "complicated" EDRs, only specific raw EDR files were selected for this project. The evaluation of QA/QC data was limited to data from single stack units; QA/QC data for common stack and multiple stack analyzers were excluded. Writing programs to extract the QA/QC data from common and multiple stack EDRs would have been significantly more difficult, and the QA/QC results for these excluded EDRs are not believed to differ meaningfully from those that were analyzed. As a matter of practicality, the evaluation was also limited to QA/QC data reported for the year 1997, which represented the most recent complete calendar year at the time of this study. Earlier data were excluded for three reasons:

  1. Most of the units represented were only required to begin monitoring in 1995, and early data represented a "shakedown" period where the initial performance of the CEMS hardware may not have been indicative of routine operation,
  2. EPA made a number of changes to the monitoring and reporting provisions in Part 75 during 1995 and 1996 that might make a clear interpretation of the data more difficult, and
  3. The quality of EDRs submitted to EPA in 1997 had improved over those submitted in 1995 and 1996.

Except as described above, all quarterly reports submitted to EPA for 1997 were included in the evaluation. Tables 1 through 3 summarize the types of units represented in the evaluation.

Table 1. Number of Single Stack Units with 1997 EDRs

Single Stack Units | 1497
Appendices D & E Units or Units with No 1997 Operation | 330
Units with Full Analyzer-based CEMS | 753
Appendix D Units; Units with NOx Analyzer-based CEMS only | 414

Table 2. A Summary of Units with SO2 CEMS

Units with Analyzer-based SO2 Monitoring Systems | 753

SO2 Controls:
Uncontrolled | 591
Dual Alkali | 3
Wet Limestone | 72
Magnesium Oxide | 3
Wet Lime FGD | 37
Sodium Based | 9
Dry Lime FGD | 20
Other | 18

Table 3. A Summary of Units with NOx CEMS

Units with Analyzer-based NOx Monitoring Systems | 1167

NOx Controls:
Uncontrolled | 712
Selective Catalytic Reduction | 37
Low NOx Burners (w/o Over Fire Air) | 220
Selective Non-Catalytic Reduction | 17
Low NOx Burners (w/ Over Fire Air) | 46
Combustion Mod. With Fuel Reburn | 4
Over Fire Air Only | 20
Other | 127
Undefined | 2

Of the 5703 single stack quarterly reports for 1997 found in EPA’s raw data files, 1198 quarterly reports were excluded from this evaluation due to the absence of analyzer QA/QC data. The provisions in Appendices D and E of 40 CFR Part 75 provide alternative monitoring options for certain oil- and gas-fired units. Units that use the monitoring provisions in Appendices D and E are exempt from the analyzer QA/QC requirements in Appendix B of Part 75 since they are not required to install analyzers for monitoring SO2, NOX or CO2 emissions. Units that did not operate during the 1997 calendar year would also be exempt from QA/QC activities for that year, and are not represented in this evaluation. Units that employed the alternative SO2 monitoring procedures in Appendix D but did not meet the "peaking unit" criteria in Appendix E were required to monitor NOX emissions with an analyzer-based CEMS. The QA/QC data for these analyzers are reflected in the evaluation.

Finally, data for all analyzer calibrations performed when the unit was off-line were excluded from this evaluation. The off-line calibrations were excluded because of concerns that the off-line calibrations might not be representative of typical calibrations and that their inclusion might complicate the analysis of the results.

Exporting QA/QC Data: To facilitate a comprehensive evaluation of the CEMS QA/QC data, a custom computer application was developed to extract calibration, flow interference, linearity, and RATA data – a separate application was written to extract each type of QA/QC data. The custom applications automatically read and processed quarterly EDR files. The programs extracted QA/QC data from the EDR files, calculated a number of preliminary statistics regarding the data and linked this information to unit operations data, unit configuration details and CEMS hardware information. Once the data were extracted from the EDRs, the programs exported each set of data into spreadsheet compatible database files for analysis.
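The actual export applications were custom-built around the EDR reporting format; the sketch below, in Python, only illustrates the general approach under the assumption of a simplified layout in which each EDR line is comma-delimited and begins with its record type code. The field order, file name and dictionary keys shown are illustrative assumptions, not the actual EDR field positions.

```python
import csv
from collections import defaultdict

# Record types referenced in this paper (used here only for routing; the
# field positions assumed below are illustrative, not the actual EDR layout).
CAL_RECORD = "230"        # daily calibration error test data
COMPONENT_RECORD = "510"  # monitoring system/analytical component definition
LINEARITY_RECORD = "601"  # linearity test data
RATA_RECORD = "610"       # RATA/bias test data

def parse_edr(path):
    """Group the records in one quarterly EDR file by record type code."""
    records = defaultdict(list)
    with open(path, newline="") as f:
        for fields in csv.reader(f):
            if fields:
                records[fields[0].strip()].append(fields[1:])
    return records

def export_calibrations(records):
    """Collect the fields needed for the calibration statistics database."""
    rows = []
    for fields in records[CAL_RECORD]:
        # Assumed field order: component ID, date (YYYYMMDD), hour,
        # gas level (Z or H), reference value, measured value,
        # calibration error in percent of span.
        comp_id, date, hour, level, ref, meas, cal_err = fields[:7]
        rows.append({"component": comp_id, "date": date, "hour": hour,
                     "level": level, "reference": ref, "measured": meas,
                     "cal_error": float(cal_err)})
    return rows

if __name__ == "__main__":
    records = parse_edr("example_quarter.edr")   # hypothetical file name
    calibrations = export_calibrations(records)
    print(f"{len(calibrations)} calibration records extracted")
```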

A variety of data were compiled into five evaluation databases using the QA/QC data export programs. Efforts were made to include a large amount of detail and inter-related information in each database so that it might be possible to document anticipated and unexpected relationships between data. The five databases (program output files) were used to conduct the evaluations of QA/QC results. RATA results were compiled in one file, and linearity results were tabulated in another. Two files were used to present the calibration results; one file contained the statistics for each individual analyzer and the other file grouped the results of various types of analyzers for each quarter for each plant. A fifth file summarized the number of irregularities and problems that the export programs encountered while trying to interpret all of the EDR file data.

All calibration, flow interference, linearity and RATA results were compiled with corresponding unit specific and quarterly operations information. The information was tabulated with each record to allow for easy identification and categorization of various analyzers, monitoring methods, and unit types. A list of unit specific information included in each database is presented in Table 4.

Table 4. Unit Specific Data Recorded in Each Database

ORIS Number | Unit ID | File Name | Plant Name
Unit Short Name | Unit Class | Boiler Type | Primary Fuel
SO2 Controls | NOX Controls | Particulate Controls | SO2 Method
NOX Method | CO2 Method | Opacity Method | Secondary Fuels
Max Load | Year | Quarter | Operating Hours
Total Gross Load | Total Heat Input | Company Name | State
Region | Mailing Address | Phone | Fax Number

Using the component and system ID provided in each of the QA/QC records, analyzer specific information was correlated and tabulated with the unit information and the calibration, flow interference, linearity and RATA results. The analyzer specific information included in each database is presented in Table 5.

Table 5. Analyzer Information Included in Each Database

Comp/Sys ID | Parameter Monitored | Primary or Backup | Component Type
Sample Method | Analyzer Model | Manufacturer | Serial Number

Daily Calibration Test and Flow Interference Check Results: In the calibration error database, calibration statistics were tabulated separately for zero and high injections for each analyzer for each quarterly report. A broad range of information was compiled from the calibration data including the total number of injections, the total number of failures, maximum calibration error, average calibration error and the standard deviation of the calibration error. Statistics were also compiled to gauge the number of dramatic calibration failures by summing the number of occasions when calibration error exceeded 7.5 percent and 10 percent. The average and standard deviation of the calibration error for failures only were also calculated.

The number of days during the quarter when multiple calibrations were performed on a single analyzer was tabulated. Statistics were also tabulated on the number of "double calibrations" observed. Double calibrations were defined as calibrations that occurred within a two-hour window of one another. This information was used to determine how frequently analyzers may be drifting enough to be recalibrated, but not enough to fail the calibrations.
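As a rough illustration of how these statistics can be tabulated, the sketch below works on the hypothetical calibration record dictionaries from the earlier extraction sketch; the 2.5 percent failure threshold is shown only as an example calibration error specification and would differ by component and span.

```python
from datetime import datetime, timedelta

def calibration_stats(rows, fail_limit=2.5):
    """Summarize one quarter of daily calibrations for one analyzer at one
    gas level; `rows` are dicts with 'date' (YYYYMMDD), 'hour' and
    'cal_error' keys, and `fail_limit` is an example error specification
    in percent of span."""
    errors = [r["cal_error"] for r in rows]
    times = sorted(datetime.strptime(f"{r['date']} {int(r['hour']):02d}",
                                     "%Y%m%d %H") for r in rows)
    # "Double calibrations": a second calibration within two hours of another.
    doubles = sum(1 for a, b in zip(times, times[1:])
                  if b - a <= timedelta(hours=2))
    return {
        "injections": len(errors),
        "failures": sum(e > fail_limit for e in errors),
        "over_7_5_pct": sum(e > 7.5 for e in errors),
        "over_10_pct": sum(e > 10.0 for e in errors),
        "max_error": max(errors) if errors else None,
        "average_error": sum(errors) / len(errors) if errors else None,
        "double_calibrations": doubles,
    }
```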

As previously discussed, two separate spreadsheet compatible output files were created by the calibration data export program. In the first output file, calibration statistics were tabulated for each individual analyzer at each unit for each quarter. In the second output file, calibration data were itemized for each type of analyzer (flow, SO2, NOX, CO2 or O2) at each unit for each quarter regardless of whether the analyzers were primary or backup analyzers and regardless of how many analyzers were represented (e.g., two flow monitors being used in an "X" pattern).

For volumetric stack flow monitors, flow interference check statistics were compiled with the daily calibration data. The total number of flow interference checks and flow interference check failures were tabulated as well as the number of times when the flow interference check was failed but the associated daily calibration was passed.

The calibration related statistics compiled are summarized in Table 6 below. The calibration error values reported in the EDR files were used to calculate all statistics; calibration error values were not checked or recalculated by the QA/QC data export program.

Table 6. Information Included in the Calibration Results Database

Component or System ID | Span Value | Calibration Level | Number of Calibration Injections
Calibration Failures | Calibrations Exceeding 7.5% | Calibrations Exceeding 10% | Average Error
Calibration Error Standard Deviation | Maximum Calibration Error | "Double" Calibrations | Multiple Calibration Days
Average Calibration Error for Failures | Standard Deviation of Calibration Failures | Average Error for Double Calibrations | Minimum Double Calibration Error
Maximum Double Calibration Error | Double Calibration Standard Deviation | Total No. of Flow Interference Checks | Flow Interference Check Failures
Interference Check Failures w/ Cal. Pass

The tabulated calibration results included unit specific and quarterly operations information. Analyzer specific information was compiled using the component/system ID referenced in the daily calibration test data (230) records. The analyzer specific information was not, however, included in the calibration output file where the calibration statistics were grouped together according to analyzer type.

Quarterly Linearity Test Results: Linearity results were determined based on the raw run values reported in the linearity test (601) records. For each linearity test reported in the EDR files, the information in Table 7 was compiled in the linearity database.

In addition to the information listed in Table 7, several other types of information were included in the database to facilitate data evaluation. The number of incomplete "pre-test" linearity runs (if any were detected) was recorded. Data for the daily calibrations both before and after the quarterly linearity test and analyzer specific information were compiled based on the component/system ID reference in the linearity test data (601) records. The tabulated linearity results also included unit specific and quarterly operations information.

Table 7. Information Included in the Linearity Results Database

Component or System ID | Span Value | Linearity Test Results | Linearity Test Pass/Fail Status
Average Reference Value at Each Level | Average Analyzer Response at Each Level | 3-run Results at Each Level | 3-run Pass/Fail Status at Each Level
Reference Value for Each Injection at Each Level | Average Analyzer Response for Each Injection at Each Level | Results for Each Injection at Each Level | Pass/Fail Status for Each Injection at Each Level

RATA Results: The RATA results database was developed using raw run values reported in the RATA/bias test (610) records and results calculated by the export program. All bias, relative accuracy and other calculated test values were derived by the QA/QC data export program in order to eliminate mathematical errors that may exist in the EDR data. For each RATA reported in the EDR files, the information in Table 8 was compiled.

Table 8. Information Included in the RATA Results Database

System ID | RATA Start Time | RATA End Time | Operating Level
Average RATA Load | Total Runs | Reference Method Test Average | CEMS Average
Standard Deviation | Confidence Coefficient | Relative Accuracy | Bias Adj. Factor

RATA results were also calculated using fewer runs. Results were tabulated using only the first three, four, five and six RATA runs. When the abbreviated RATA results were determined, no run data were excluded, regardless of whether the data were excluded in the regular RATA determination.

As with the other databases, to facilitate the evaluation of these RATA results, additional information was included in the database. The number of pre-RATA test runs (that were either excluded or performed as part of an incomplete RATA) was included. The daily analyzer calibration data associated with each RATA were compiled. Pre-RATA and post-RATA calibration data were tabulated. For NOX systems, both the NOX and diluent analyzer calibration data were included in the database. The RATA database also included a status flag that indicated whether or not a daily analyzer calibration occurred during the period in which the RATA was performed. The tabulated RATA results included unit specific and quarterly operations information corresponding to the quarterly EDR file where the records were recorded. Analyzer specific information was compiled corresponding to the system ID reported with the RATA results. For NOX systems, both the NOX and diluent analyzer specific data were tabulated.

EDR Database Problems and Irregularities: One of the most important considerations during the development of the QA/QC data export program was how to design the program to handle reporting irregularities and format problems within the EDR files. As previously noted, at least some EDR reporting irregularities and formatting problems still exist in many EDRs. There is also a significant disparity in the amount of attention given by different utilities to EDR preparation and data evaluation prior to submission of the EDRs. Failing to handle reporting irregularities, format problems and suspect data properly could easily skew the results.

A secondary consideration concerning how to handle EDR reporting irregularities and format problems was the desire to salvage as much of the QA/QC data as possible while excluding improperly formatted data that could not be interpreted. A unit with data reporting problems might, very reasonably, also represent a unit with QA/QC problems. Excluding such data when it could be salvaged would unnecessarily bias the results.

Some formatting problems could be "repaired" without impacting data quality. If various record types were found to be out of order, the records were sorted. Some irregularities in the component or parameter type fields in the 510 records could be interpreted. For example, a "NO2" parameter type should have been labeled "NOX". Other formatting errors could also be reasonably "corrected." For example, when an invalid gas level indicator was detected in the calibration data, it was reasonably assumed that the invalid level indicator represented a zero calibration. Historically, the most common invalid indicator error has occurred when an "L" is used in place of a "Z" for zero.

Some data could be salvaged by looking at the interrelationships between the record types. For example, the component and parameter types for an analyzer are typically defined in the monitoring system/analytical component definition (510) records, but a number of units fail to report these records. In such an event, the type of component or parameter measured by an analyzer was identified by the record types where the component/system data was found. For example, a component/system ID should only be found in the stack volumetric flow (220) records if it represents a stack flow meter.
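The sketch below illustrates repair and inference rules of the kind described in the preceding two paragraphs. Only the 220/flow association is stated in the text; the other record-type-to-parameter pairings shown are assumptions included solely to make the example complete.

```python
# Hourly record types and the parameter they are assumed to imply when a
# component is not defined in the 510 records (only the 220/flow pairing is
# taken from the text; the rest are illustrative assumptions).
RECORD_TYPE_TO_PARAMETER = {
    "200": "SO2",
    "201": "NOX",
    "202": "CO2",
    "210": "O2",
    "220": "FLOW",   # stack volumetric flow records imply a flow monitor
}

def repair_parameter(parameter):
    """Normalize irregular parameter labels (e.g., 'NO2' reported for NOX)."""
    return {"NO2": "NOX"}.get(parameter.upper(), parameter.upper())

def repair_gas_level(level):
    """Historically, 'L' was sometimes reported in place of 'Z' for zero."""
    return "Z" if level.upper() == "L" else level.upper()

def infer_parameter(component_id, hourly_records):
    """Infer an undefined component's parameter from the hourly record types
    in which its ID appears; return None if it cannot be salvaged."""
    for record_type, parameter in RECORD_TYPE_TO_PARAMETER.items():
        for fields in hourly_records.get(record_type, []):
            if fields and fields[0] == component_id:
                return parameter
    return None
```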

Some problems, while they may be identifiable, are not easily rectified. A blank field in the EDR is an example. Another example is a component or system ID that is used to identify more than one parameter. Other problems cannot even be easily identified, such as an incorrectly entered span value, a bad RATA reference value or a false unit operating signal. Considering the enormous amounts of QA/QC data evaluated, however, the impact of a limited number of non-rectifiable and unidentifiable errors on the overall results of the evaluation is believed to be small.

Calibration, flow interference, linearity or RATA data reported for a component or system ID that was not identified in the 510 records or used in the hourly analyzer data (200, 201, 202, 210, 211 or 220) records were excluded from the evaluation. In these cases, it was not possible to identify the type of analyzer represented. Calibration and flow interference check results were excluded if the unit was indicated to be off-line or if the date and time associated with the record were invalid or did not correspond to the appropriate quarter. Calibration data were also excluded if the reference value, measured value or calibration results fields were found to be blank.

Both linearity and RATA data were excluded if the run start date field was blank or if the run number was blank or zero. RATA data were included in the database but excluded from evaluation where more than three runs were indicated as not used or where fewer than nine runs were recorded. RATA data were also excluded if a RATA run sequence problem or other significant format problem was detected, such as using a run number more than once during a single test. Linearity data were excluded from evaluation if an invalid level indicator was found, if less than three injections were reported at each level or if injections were reported for fewer than three levels for a given analyzer.
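A condensed sketch of how a few of these exclusion rules might be applied is shown below; the field names follow the hypothetical record dictionaries used in the earlier sketches, and `unit_online` and `quarter_months` are assumed helper inputs, not part of the EDR format.

```python
def keep_calibration(record, unit_online, quarter_months):
    """Apply a subset of the calibration exclusion rules described above.
    `unit_online(date, hour)` is an assumed lookup against unit operating
    data; `quarter_months` is a set of valid 'YYYYMM' strings."""
    if not unit_online(record["date"], record["hour"]):
        return False                           # off-line calibrations excluded
    if record["date"][:6] not in quarter_months:
        return False                           # outside the reporting quarter
    required = ("reference", "measured", "cal_error")
    return all(record.get(field) not in (None, "") for field in required)

def keep_rata(runs):
    """Exclude RATAs with fewer than nine runs, more than three unused runs,
    or a run number reported more than once."""
    run_numbers = [r["run"] for r in runs]
    unused = sum(1 for r in runs if r.get("used") is False)
    return (len(runs) >= 9 and unused <= 3
            and len(run_numbers) == len(set(run_numbers)))
```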

In addition to the automated process instituted for handling reporting irregularities, the output of the QA/QC data export program was manually screened prior to the evaluation. During the manual inspection, it became apparent that there were instances where the data were mislabeled, e.g., where the QA/QC data for a CO2 analyzer were reported as data for another analyzer or where the indicated units of measure were obviously incorrect. While no corrections were required for the linearity test, flow interference or daily calibration error data, some corrections were made to the RATA data. In one instance, data were omitted where nonsensical values were reported for both the Reference Method and CEMS analyzers. In a few other cases, the component type was changed to reflect an obvious mislabeling of the system ID or the use of inappropriate units of measures. It is important to note that these revisions were made simply to more appropriately group the results by monitoring system type – i.e., the RATA results are presented separately below for NOX, SO2, CO2 and flow. Only for instances where the proper component type was obvious (given the type of analyzer, unit or emission controls) were changes made. Where the data were not clearly mislabeled, no changes were made.

Data Review and Evaluation

After creating the databases described above, a series of evaluations were performed to assess CEMS performance using the existing QA/QC procedures and to assess the CEMS performance that may be expected using alternative procedures. Evaluations were performed using the calibration, linearity and RATA databases.

Calibration Data Evaluations: When evaluating the 1997 calibration data, the objectives were to examine the necessity of daily calibrations and to identify possible improvements in CEMS technologies that could result in less frequent calibration requirements by increasing system stability and accuracy. For evaluation purposes, the calibration database was divided into four subsets of data -- consistent with the monitoring systems required by Part 75.

After identifying the calibration results for each parameter, the database was further separated by calibration level (zero and span). For each monitor and each quarter, the database was used to quantify the calibration statistics summarized in Table 6 (e.g., the number of injections, failures, exceedances of 7.5 and 10 percent, and double calibrations at each level).

In addition to these results, the database was used to quantify the number of times during a quarter that the flow monitor failed the flow interference check but passed the daily calibration check.

Linearity Data Evaluations: The primary quarterly component of EPA’s Part 75 QA/QC requirements is the linearity check. A linearity check is performed by injecting each monitoring system with three linearity gases -- a low, mid, and high gas concentration. The gases selected for a linearity check must have concentrations between 20 and 30 percent, 50 and 60 percent, and 80 and 100 percent of the monitor’s span value for the low, mid, and high linearity gases, respectively. Each of the three linearity gases must be injected into the monitoring system three times, and the average response for the three injections at each level is used to determine linearity. In most cases, the daily span calibration gas also serves as the high linearity gas during the linearity check. The intended objective of the linearity procedure is to verify that the monitoring system outputs are linear over each instrument range. However, as the check is currently performed, a linearity failure may also be the result of other factors, e.g., analyzer stability or climatic changes.

The evaluation of 1997 linearity data was conducted primarily to determine if multiple injections of each linearity gas (low, mid and high) were necessary. The linearity procedures were evaluated by comparing the results of the first injections for the three linearity gases with the typical triplicate-injection linearity test results. The following steps describe the procedures used in the linearity data evaluation.

  1. The database was separated into subsets of the individual components being measured (SO2, NOx, etc.). For CO2/O2, a large number of the EDR records reported the CO2/O2 linearity results in duplicate, once for the CO2/O2 monitoring system and a second time for the NOx CEMS. The CO2/O2 database was sorted to separate results reported for the CO2/O2 and NOx CEMS, and the CO2/O2 CEMS records were used for the analyzer evaluations.
  2. Once the subset database for an individual component was established, the database was further sorted with respect to sample method (in-stack dilution, extractive, etc.). This enabled the evaluation to be completed separately for different CEMS sampling methodologies and, thus, to examine any effect the sampling method had on instrument linearity.
  3. Results of the first injection of each linearity gas (low, mid and high) were reviewed to determine whether or not they met the applicable linearity requirements (shown in Table 9). If the first injection for all three linearity gases met the applicable linearity requirements, then the complete (triplicate injection) linearity results were reviewed to determine if the applicable requirements were met (a sketch of this check follows Table 9). Note that for the purpose of further evaluating the linearity data, when the analyzer response for any one of the linearity gases did not meet the applicable requirements on the first injection, the database program included a record indicating which linearity gas(es) had failed.

Table 9. Linearity Requirements

Parameter | Linearity Check Requirements
SO2/NOx | ≤ 5.0% of tag value, or ± 5.0 ppm from tag value
CO2/O2 | ≤ 5.0% of tag value, or ± 0.5% from tag value
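A minimal sketch of the 1-run versus 3-run comparison described above is shown below, using the Table 9 specifications; the data structure (lists of reference/response injection pairs for each gas level) is an assumption made for illustration.

```python
def linearity_error_ok(ref_mean, resp_mean, abs_limit, pct_limit=5.0):
    """Table 9 criteria: within pct_limit percent of the reference (tag)
    value, or within abs_limit of it in absolute units."""
    diff = abs(ref_mean - resp_mean)
    return diff <= abs_limit or 100.0 * diff / ref_mean <= pct_limit

def evaluate_linearity(levels, abs_limit):
    """`levels` maps 'low'/'mid'/'high' to lists of (reference, response)
    injection pairs. Returns pass/fail for the 1-run (first-injection) and
    3-run (triplicate-average) evaluations compared in this study."""
    one_run_pass, three_run_pass = True, True
    for injections in levels.values():
        ref1, resp1 = injections[0]
        if not linearity_error_ok(ref1, resp1, abs_limit):
            one_run_pass = False
        refs = [r for r, _ in injections]
        resps = [a for _, a in injections]
        if not linearity_error_ok(sum(refs) / len(refs),
                                  sum(resps) / len(resps), abs_limit):
            three_run_pass = False
    return one_run_pass, three_run_pass

# Example: an SO2 analyzer (abs_limit = 5.0 ppm per Table 9); values made up.
levels = {"low": [(125.0, 123.2), (125.0, 123.5), (125.0, 123.0)],
          "mid": [(275.0, 276.1), (275.0, 275.8), (275.0, 276.4)],
          "high": [(450.0, 452.3), (450.0, 451.9), (450.0, 452.0)]}
print(evaluate_linearity(levels, abs_limit=5.0))  # (True, True)
```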

RATA Data Evaluations: When evaluating RATA data, the primary objective was to determine if equivalent results could be obtained using fewer than the normal nine to twelve test runs. As previously discussed, the raw RATA data were used to calculate relative accuracy (RA) and bias results. For each RATA, five sets of RA and bias results were calculated. One set of results was calculated from the nine test runs used to comply with the Part 75 RATA requirements. Four other sets of results were then calculated using the first three, four, five and six RATA test runs. The data evaluation involved comparing these four sets of results with the 9-run RA and bias results for each RATA.

To conduct the evaluation, Reference Method and CEMS averages, the standard deviation of the differences, and the confidence coefficient (calculated given the appropriate t-value) were determined for each set. As with the other databases, the RATA data were then sorted into separate subsets for each component (SO2, NOX, CO2/O2 and flow).

The emphasis of the analysis was to compare the results of a 3-run RATA to the corresponding results of the "9-run" RATA submitted to EPA. The following procedure was used in the RATA data evaluation.

  1. All RATAs where the 3-run RA was less than or equal to 7.5 percent were identified. For units that qualified as low-emitters according to Appendix B of 40 CFR Part 75, the SO2 and NOX specifications to qualify for annual testing were applied in lieu of the 7.5 percent criterion. Also, the additional, less restrictive requirement for CO2/O2 monitors outlined in the proposed revisions to 40 CFR Part 75, Appendix B (April 1998) was used as a criterion for annual testing. The low-emitter criteria are summarized in Table 10. No low-emitter criterion was used in the identification of acceptable flow monitor RAs.

Table 10. Low-Emitter Criteria

Parameter | Low-Emitter Qualification Criteria | Mean Difference Between the Reference Method and CEMS Measurements
SO2 | ≤ 250.0 ppm | ± 12.0 ppm
NOX | ≤ 0.200 lb/mmBtu | ± 0.015 lb/mmBtu
CO2/O2 | Not applicable | ± 0.7 %

  2. All of the remaining evaluation activities involved only those data sets that met the 3-run RA criteria in Step 1. For each 3-run RATA that met the annual specification, the corresponding 9-run RATA results were evaluated to determine if they also met the annual specification.
  3. For each 3-run RATA (except for the diluent CO2/O2) that met the appropriate specification in Step 1, the bias test was performed based on the 3-run RATA results (regardless of the corresponding 9-run RA results). If the 3-run RATA results for a component passed the bias test, then a bias adjustment factor (BAF) of 1.000 was assigned to that component for that 3-run RATA. If the component failed the bias test, then a BAF was calculated based on the 3-run results and assigned to that component for the 3-run RATA (the RA and bias calculations are sketched after this list).
  4. The bias test was also performed on the 9-run RATA results for each component that had a 3-run RA meeting the initial criterion in Step 1. As required by Part 75, if the component passed the bias test, then a bias adjustment factor of 1.000 was assigned to that component for the 9-run RATA. If the component failed the bias test, then a BAF was calculated based on the 9-run results and assigned to that component for the 9-run RATA.
  5. The average BAF was calculated for each component for the qualifying 3-run RATAs and the corresponding 9-run RATAs.
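The RA, confidence coefficient, bias test and BAF calculations used in these comparisons follow the general Part 75, Appendix A equations; the sketch below is a simplified implementation under that assumption (the low-emitter alternative specifications and run-exclusion provisions are not handled), and the run values in the usage example are made up.

```python
from math import sqrt

# Two-tailed 95% t-values (n - 1 degrees of freedom), matching the values
# cited in the text for the 3-, 4- and 9-run calculations.
T_VALUES = {3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571, 9: 2.306}

def rata_results(rm, cem):
    """Relative accuracy, bias test and BAF for paired run averages,
    following the general Part 75 Appendix A equations (simplified)."""
    n = len(rm)
    d = [r - c for r, c in zip(rm, cem)]            # RM minus CEMS
    d_mean = sum(d) / n
    s_d = sqrt(max(sum(x * x for x in d) - sum(d) ** 2 / n, 0.0) / (n - 1))
    cc = T_VALUES[n] * s_d / sqrt(n)                # confidence coefficient
    rm_mean, cem_mean = sum(rm) / n, sum(cem) / n
    ra = 100.0 * (abs(d_mean) + abs(cc)) / rm_mean  # relative accuracy, %
    biased = d_mean > abs(cc)                       # CEMS reading low vs. RM
    baf = 1.0 + d_mean / cem_mean if biased else 1.0
    return ra, biased, baf

# 3-run versus 9-run comparison, as in the evaluation described above.
rm = [402.0, 398.5, 405.2, 401.1, 399.8, 403.6, 400.4, 397.9, 404.0]
cem = [395.1, 393.0, 399.8, 396.2, 394.7, 398.0, 395.5, 392.4, 398.9]
print(rata_results(rm[:3], cem[:3]))  # abbreviated 3-run results
print(rata_results(rm, cem))          # full 9-run results
```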

Summary of Results

After reviewing the data in each database and performing the calculations and comparisons discussed above, a series of summary tables was developed and the existing QA/QC procedures were evaluated. In general, the QA/QC data evaluations indicate that significant reductions in the existing QA/QC efforts could be achieved without sacrificing CEMS data quality.

Calibration Results: Tables 11 and 12 present the Part 75 CEMS calibration results by component for 1997. Table 11 shows the percentage of quarters for each component that reported less than or equal to a specified number (from 0 to 5) of calibration failures in a particular calendar quarter. For example, of the 2870 calendar quarters with SO2 calibration data reported, 79.0 percent of those quarters did not report any calibration failures at the zero calibration level. For the SO2 span calibration results, this percentage decreases significantly to only 47.7 percent of the quarters reporting no calibration failures. The percentages naturally increase as the number of calibration failures increases. For SO2, 94.6 and 86.0 percent of the 2870 calendar quarters reported five (5) or fewer calibration failures for the zero and span levels, respectively.

Table 12 shows the total number of calibration injections, failures, and double calibrations observed at the zero and span levels for each of the four components for all EDRs included in the evaluations. The number of calibration injections for the zero and span are relatively consistent for each component. For example, there were 322,187 zero and 323,935 span calibrations of SO2 monitors. For all components, note that there is a distinct difference in the number of zero and span calibration failures. For NOx monitors, the number of span calibration failures (13,924) was nearly five times the number of zero calibration failures (2,903). This is consistent for all the components except flow, where the number of span failures (5,062) was only 1.26 times the number of zero failures (4,009). In general, the relatively high ratio of span to zero calibration failures is expected for gaseous monitors. The zero readings from a gaseous monitor are not typically affected as much by changes in CEMS operating conditions.

As expected, the numbers of double calibrations performed at the zero and span levels are relatively consistent with each other, since most CEMS perform the zero and span calibrations consecutively. Note that the double and failed calibrations provide a good indication of how frequently the CEMS passed calibration error tests but were still adjusted. For example, the SO2 monitors had only 2,636 calibration failures at the zero level but performed a second zero-level calibration within two hours of the daily calibration 34,682 times.

With the exception of the flow monitors, double calibrations were being performed approximately 10 percent of the time. These results indicate that, on average, one or more of the analyzers in each CEMS were being adjusted once every 10 days. Keep in mind that if one gaseous analyzer was subjected to a double calibration, the other gaseous analyzers in that CEMS were also calibrated a second time. The double calibration figures for the flow monitors are artificially high because a few utilities have their flow monitors set up to calibrate once per hour.

Without exception, calibration failures (including zero and span failures) occurred infrequently. For all components, zero and/or span gas calibration failures occurred less than 4.0 percent of the time, or less than once for every 25 daily calibrations. When evaluating the calibration performance of CEMS based on the type of sampling method, however, in-situ CEMS failure rates were approximately twice as high as the rates observed for the dilution and extractive CEMS. As examples, for SO2 and NOX span calibrations, the failure rates for the extractive and dilution (both types) systems ranged from 2.0 to 3.0 percent; the in-situ CEMS span failure rates were 5.4 and 8.0 percent, respectively. Zero calibration failure rates for dilution, extractive and in-situ systems were 0.6, 1.6 and 1.0 percent for SO2 and 0.5, 0.5 and 1.3 percent for NOX, respectively.

Table 11. Summary of Reported Calibration Failures per Quarter
(Each cell gives the percentage of quarters with "X" calibration failures or less, as Zero % / Span %.)

Parameter | 0 Failures | 1 Failure | 2 Failures | 3 Failures | 4 Failures | 5 Failures | Total # of Calendar Quarters
SO2 | 79.0 / 47.7 | 87.7 / 63.8 | 91.2 / 73.7 | 93.1 / 79.1 | 94.1 / 83.3 | 94.6 / 86.0 | 2870
NOx | 82.4 / 45.3 | 90.6 / 61.7 | 93.9 / 72.0 | 95.1 / 77.6 | 96.2 / 81.6 | 96.8 / 84.5 | 4306
Flow | 86.7 / 81.8 | 91.0 / 87.3 | 93.4 / 90.9 | 94.7 / 92.6 | 95.7 / 93.9 | 96.3 / 94.7 | 2774
CO2/O2 | 80.0 / 56.4 | 90.0 / 73.2 | 93.7 / 81.1 | 95.6 / 85.6 | 96.7 / 88.7 | 97.2 / 90.7 | 4418

Table 12. Summary of Calibration Results
(Each cell gives Zero / Span totals.)

Parameter | Total Calibration Injections | Calibration Failures | Double Calibrations
SO2 | 322,187 / 323,935 | 2,636 / 9,167 | 34,682 / 35,540
NOx | 424,560 / 425,559 | 2,903 / 13,924 | 40,811 / 41,683
Flow | 354,638 / 351,889 | 4,009 / 5,062 | 120,244 / 119,285
CO2/O2 | 380,900 / 381,405 | 3,119 / 8,440 | 38,129 / 38,967

As part of the flow calibration analysis, the relationship between the performance of daily calibrations and the daily interference checks was also examined. The database shows that a total of 351,889 span calibration checks were performed on flow monitors with only 5,062 calibration failures. Consequently, on 346,827 occasions, the flow monitor passed the daily span calibration check. From the calibration database, only 519 instances were documented where the daily calibration check passed, but the flow interference check failed. Stated another way, approximately 99.85 percent of the time, when a flow monitor passes the daily calibration check, it also passes the flow interference check.

The differences in gaseous analyzer zero and span calibration performance strongly indicate that instrument drift is not the dominant factor causing calibration failures. If analyzer drift were the problem, zero failures would dominate. Consequently, other system fluctuations are causing a significant portion of the failures. For dilution systems in particular, fluctuations in the dilution ratio due to changes in ambient conditions, stack temperature and pressure, and dilution air pressure are directly reflected in the span calibration failures and do not have an effect on most zero calibration results. As improvements are made to better compensate for changes in CEMS operating conditions, the number of span calibration failures should decrease significantly, approaching the zero calibration results.

Linearity Results: The evaluation of linearity results from 1997 quarterly EDRs is summarized in Tables 13 and 14. For the purposes of this comparison, linearity results based only on the first set of linearity gas injections are referred to as "1-run" linearity results; EPA’s current linearity procedure is referred to as a 3-run linearity. Table 13 provides the total number of linearity tests in the database for each component. For example, there are a total of 4105 SO2 linearity tests in the database. The total number of linearity tests for each component is also provided for each type of sample acquisition method. The four basic sample acquisition methods are in-stack dilution, out-of-stack dilution, extractive, and in-situ. For SO2, the total number of in-stack dilution system linearity tests is 2898. Table 13 also provides a breakdown of the linearity test evaluation results. For example, of the 4105 SO2 linearity tests evaluated, the initial set of linearity gas injections (constituting a 1-run linearity test) passed the linearity requirements 3935 times. For these 3935 acceptable 1-run linearities, the 3-run linearity failed only 15 times. In general terms, if a CEMS passes the first linearity test run, it will likely pass all three runs, as noted in Table 14. Of the remaining 170 3-run SO2 linearities that did not have acceptable 1-run results, 137 passed the 3-run requirements (indicating that these tests had only slightly failed the linearity requirements on the first run). Furthermore, a distribution of the linearity gas levels causing the 1-run failures is provided. For the 170 3-run SO2 linearities that did not have acceptable 1-run results, the low gas failed the linearity requirements 132 times, the mid gas 67 times and the high gas 26 times during the first set of injections. Note that in some cases more than one gas failed, so the total for these three columns is greater than 170. As expected, the failure rate is generally higher for the lower linearity gas levels. One exception, of course, is the CO2 results, because at the low linearity level (with an absolute error of 0.5% allowed) CO2 analyzers are effectively afforded a 10 percent error requirement compared to 5 percent for the other analyzers.

Table 13. Summary of Linearity Check Results
(Column key: Total = total linearity tests evaluated; 1-run Pass, 3-run Pass and 1-run Pass, 3-run Fail = 1-run results that met EPA’s linearity performance specifications and also passed or failed the 3-run linearity; Low, Mid, High = linearity gas level(s) that caused the 1-run failure; 1-run Fail (Total) = 3-run linearity tests that did not meet EPA requirements using the 1-run results; 1-run Fail, 3-run Pass = 3-run linearities that failed the first injection but passed the 3-run average.)

Parameter | Sample Method | Total | 1-run Pass, 3-run Pass | 1-run Pass, 3-run Fail | Low | Mid | High | 1-run Fail (Total) | 1-run Fail, 3-run Pass
SO2 | In-stack Dilution | 2898 | 2777 | 11 | 87 | 55 | 15 | 110 | 90
SO2 | Out-of-stack Dilution | 554 | 528 | 2 | 17 | 7 | 6 | 24 | 18
SO2 | Extractive | 488 | 460 | 1 | 23 | 2 | 3 | 27 | 22
SO2 | In-situ | 165 | 155 | 1 | 5 | 3 | 2 | 9 | 7
SO2 | All | 4105 | 3920 | 15 | 132 | 67 | 26 | 170 | 137
NOx | In-stack Dilution | 3344 | 3148 | 21 | 122 | 81 | 27 | 175 | 141
NOx | Out-of-stack Dilution | 699 | 666 | 2 | 22 | 14 | 6 | 31 | 19
NOx | Extractive | 1374 | 1312 | 15 | 21 | 21 | 19 | 47 | 27
NOx | In-situ | 194 | 185 | 1 | 7 | 4 | 1 | 8 | 5
NOx | All | 5611 | 5311 | 39 | 172 | 120 | 53 | 261 | 192
CO2 | In-stack Dilution | 2850 | 2768 | 18 | 10 | 45 | 14 | 64 | 17
CO2 | Out-of-stack Dilution | 543 | 521 | 5 | 6 | 11 | 2 | 17 | 8
CO2 | Extractive | 478 | 452 | 1 | 15 | 18 | 3 | 25 | 8
CO2 | In-situ | 160 | 151 | 0 | 4 | 6 | 0 | 9 | 6
CO2 | All | 4031 | 3892 | 24 | 35 | 80 | 19 | 115 | 39
O2 | Extractive | 943 | 931 | 2 | 5 | 5 | 4 | 10 | 6
O2 | In-situ | 20 | 20 | 0 | 0 | 0 | 0 | 0 | 0
O2 | All | 963 | 951 | 2 | 5 | 5 | 4 | 10 | 6

Table 14. Comparison of 1-run Linearity Results to 3-run Linearity Results
(Percentage of acceptable 1-run linearity checks with acceptable 3-run linearity check results.)

CEMS Sampling System | SO2 (%) | NOx (%) | CO2 (%) | O2 (%)
In-stack Dilution | 99.61 | 99.34 | 99.35 | Not Applicable (NA)
Out-of-stack Dilution | 99.62 | 99.70 | 99.05 | NA
Extractive | 99.78 | 98.87 | 99.78 | 99.79
In-situ | 99.36 | 99.46 | 100.00 | 100.00
All | 99.62 | 99.27 | 99.39 | 99.79

The values in Table 14 represent, by component and sampling method, the percentage of CEMS that passed the 3-run linearity after passing the 1-run linearity. Over 96 percent of the linearity checks in the database met the applicable 1-run linearity check requirements, and of that ~96 percent, 99.43 percent of the corresponding 3-run linearity results satisfied the applicable requirements.

RATA Results: The main focus of the RATA data evaluations was to compare RA results based on three runs (3-run RATAs) with the results achieved after the completion of nine or more runs (herein referred to as a 9-run RATA). The initial evaluation involved identifying the 3-run RATAs with a RA of less than or equal to 7.5 percent (or the corresponding low-emitter annual RATA specification). The 3-run relative accuracy was calculated given the number of runs (n=3) and the corresponding t-value of 4.303, as specified in Appendix A of 40 CFR Part 75. Once a RATA data set was identified as passing the 3-run RA (i.e., a RA ≤ 7.5 % or the low-emitter limits), the 9-run RATA results were calculated. Table 15 provides a summary of the results of this evaluation for each component. The comparison of the 3-run to 9-run RATA results showed that on average 99.24 percent of the 3-run RATAs that met the annual RA criteria also met the same criterion after completion of the 9-run RATA. It should also be taken into consideration that the flow monitor relative accuracy criteria used for this evaluation were more restrictive than currently required by EPA. Even so, 98.74 percent of all 3-run flow RATAs that met the more restrictive ≤ 7.5 percent RA requirement also passed the 9-run RATA with a RA of ≤ 7.5 percent. Note that two sets of flow monitor results are provided: (1) results that reflect all of the flow RATAs in the database and (2) results based on the normal-load flow RATAs only.

Table 15. Summary of RA Evaluations

Parameter | Total RATAs | 9-run RATAs That Met Annual Requirement | 3-run RATAs That Met Annual Requirement | 3-run RATAs w/ Corresponding 9-run RA That Met Annual Requirement | %
SO2 | 914 | 898 | 716 | 714 | 99.72
NOx | 1333 | 1298 | 1037 | 1029 | 99.23
Flow (All Loads) | 2759 | 2394 | 1819 | 1796 | 98.74
Flow (Normal Load Only) | 764 | 676 | 522 | 519 | 99.43
CO2/O2 | 942 | 935 | 886 | 885 | 99.89

Table 15 provides:

  1. The total number of RATAs in the database for each component,
  2. The number of RATAs that had a 9-run RA result below the annual requirement,
  3. The number of RATAs that had a 3-run RA below the annual requirement, and
  4. The number of RATA data sets with acceptable 3-run RATAs that also had corresponding 9-run RA results below the annual requirement.

For example, the SO2 RATA database consisted of 914 SO2 RATAs. Of these RATAs, 898 RA results were below the Part 75 annual testing requirement. Of the 914 total RATAs, 716 3-run RA results were below the annual requirements, and 714 of these 716 3-run RATAs (99.72%) also had 9-run RA results below the annual test requirements. The data clearly indicates that, if the CEMS passes a 3-run RATA, it will likely pass the 9-run RATA.

When evaluating the effects of using a 3-run RATA to determine bias adjustment factors, two primary concerns had to be addressed. First, the bias test is defined as a comparison between the mean of the differences (Reference Method minus CEMS) and the confidence coefficient for any given RATA. If the mean of the differences is less than or equal to the confidence coefficient, then the CEMS passes the bias test. For the 3-run RATAs, this test becomes easier to pass, because the confidence coefficient is directly impacted by the t-value, which is significantly larger for three runs (t-value = 4.303) than it is for nine runs (t-value = 2.306); for the same standard deviation of the run differences, s, the 3-run confidence coefficient (4.303 s/sqrt(3), or about 2.5s) is more than three times the 9-run value (2.306 s/sqrt(9), or about 0.77s). Second, a RATA consisting of more than nine test runs can potentially omit test runs that would reflect a higher bias adjustment factor, whereas the 3-run bias results were not afforded this option. During the evaluation of these data, the question became: would these two effects on reported bias factors "cancel" each other, and if not, could a procedure be developed for determining representative bias factors using 3-run RATA results?

Table 16 presents a summary of the bias factor comparison. As expected, the number of 9-run RATAs that failed the bias test was consistently greater than the number of 3-run RATA bias test failures. However, the bias adjustment factors were consistently higher for 3-run RATAs that failed the bias test when compared to the 9-run RATAs that indicated a bias. As an example, the average SO2 BAF for the 3-run RATAs was 1.044 compared to 1.031 for the 9-run RATAs. Reviewing the average BAFs for all 3-run and corresponding 9-run RATAs (including those with a factor of 1.000, and again, only using RATA data sets with 3-run RATA results below the annual performance criteria), the 3-run BAF averaged only 0.001 less than the average BAF for the 9-run RATAs. The conclusion, based on these results, is that when using 3-run RATAs to calculate the BAF, fewer CEMS would be required to apply BAFs. However, those CEMS that do apply BAFs will tend to apply slightly larger corrections, and the overall impact on ARD’s Allowance Tracking Program would be negligible.

The final issue that was addressed when evaluating RATA data involved the potential inappropriate failure of CEMS due to the extremely high t-values required when fewer RATA test runs are performed. As seen in Table 15, only about 78 percent of the SO2 CEMS pass a 3-run RATA while 98 percent passed the 9-run RATA. The difference in pass/fail rates was primarily due to the t-values being used in the calculations and not the result of CEMS performance.

To illustrate this point further, Table 17 presents a comparison of RATA results for SO2 monitors based on 3-run, 4-run, 5-run, 6-run and 9-run RATAs. As would be expected, the number of RATAs meeting the annual requirements increases as the number of test runs increases and the t-value decreases. Of the 914 SO2 RATAs in the database, 85.8, 88.5 and 90.4 percent of the 4-run, 5-run and 6-run RATA results, respectively, were below the annual performance test requirement.

Table 16. Results of the Bias Test Comparison

Parameter | Overall 3-Run BAF (Average) | Overall 9-Run BAF (Average) | Difference | Average 3-run BAF for BAFs > 1.000 | Average 9-run BAF for BAFs > 1.000
SO2 | 1.008 | 1.007 | 0.001 | 1.044 | 1.031
NOx | 1.010 | 1.012 | -0.002 | 1.041 | 1.031
Flow (All Loads) | 1.009 | 1.010 | -0.001 | 1.033 | 1.027
Flow (Normal) | 1.008 | 1.009 | -0.001 | 1.033 | 1.027

Table 17. Example Results of the Relationship Between 3-, 4-, 5- and 6-run SO2 RATAs

Criteria | 3-Run RATA | 4-Run RATA | 5-Run RATA | 6-Run RATA | 9-Run RATA
RATAs w/ a RA That Meets the Annual Requirement | 716 | 784 | 809 | 826 | 898
RATAs w/ Corresponding 9-run RA That Meets the Annual Requirement | 714 | 782 | 807 | 824 | NA
Percentage of RATAs That Also Meet Annual Requirements After 9 Runs | 99.72 | 99.74 | 99.75 | 99.76 | NA
Average Bias Adjustment Factor (BAF) | 1.008 | 1.007 | 1.007 | 1.008 | 1.007
Average Corresponding 9-run BAF | 1.007 | 1.007 | 1.008 | 1.008 | NA
Difference | 0.001 | 0.000 | 0.001 | 0.000 | NA
Average BAF > 1.000 | 1.044 | 1.035 | 1.035 | 1.036 | 1.033
Average Corresponding 9-run BAF > 1.000 | 1.031 | 1.031 | 1.032 | 1.032 | NA

Two other important facts are revealed in this table.

  1. The average BAFs are not impacted significantly, regardless of how many runs are used.
  2. The diminishing effect of the confidence coefficient is observed when reviewing the average bias adjustment factors for RATA data sets with a BAF greater than 1.000. This is evident in the comparatively large step change in the average BAF from the 3-run to 4-run RATAs. This step change is primarily the result of a decrease in the t-value from 4.303 to 3.182.

Based on these evaluations, RATA results can be adequately reported, in many cases, using as few as three test runs. Furthermore, if additional test runs are needed to reduce the magnitude of the confidence coefficient in order to pass a RATA, there will be no negative impact on the quality or the conservative nature of the data being reported to EPA.

Alternative QA/QC Procedures

Based on the assessment of 1997 QA/QC results for Part 75 CEMS, a variety of possible alternative QA/QC procedures could be developed and technically justified by existing EDR data. This section presents a general discussion of some possible alternative procedures that could be recommended to EPA. When developing possible alternative procedures, the primary objectives were to:

  1. Develop alternative QA/QC procedures that will ensure CEMS data quality.
  2. Reduce the cost of QA/QC efforts for sources required to operate CEMS.
  3. Consider eliminating aspects of existing QA that may no longer be necessary due to improvements in the CEMS equipment and Reference Method procedures.
  4. Develop procedures that were similar to or based on the existing QA/QC procedures already familiar to EPA.

Alternative Calibration Procedures: At this time, no changes to the daily calibration procedures are recommended. However, in the future, as CEMS designs and technologies improve and span and zero calibration failures decrease, EPA should consider possible reductions in the calibration requirements. In some countries, for example, calibrations are only required once every 500 unit operating hours. Currently, with CEMS requiring adjustments, on average, once every ten days and failing calibrations once every 50 days, only a few more CEMS equipment improvements may be necessary before routine calibration requirements could be reduced without any significant impact on data quality.

While no changes to the daily calibration procedures are recommended, EPA should consider eliminating the 7-day calibration drift requirement. Given the types of equipment currently being used and the existing Part 75 daily calibration specifications, the 7-day drift test is essentially performed on an ongoing basis. When the 7-day performance specifications were initially developed many years ago as part of the original performance specification tests (PSTs), daily drift specifications were two to four times higher than the current Part 75 calibration error specifications. Consequently, at that time, ensuring the "general" initial stability of the analyzers was more crucial. Given the current performance of Part 75 CEMS equipment, 7-day drift tests often become an exercise for the CEMS technicians to predict daily changes in the weather.

EPA should also investigate eliminating the flow monitor interference check requirements. Based on the data evaluated for this project, problems resulting in flow interference check failures also result in a calibration error test failure.

Alternative Linearity Check Procedures: EPA should strongly consider modifying the existing linearity procedures. Overwhelmingly, the data indicates that three injections are not necessary. Furthermore, it is common practice for the CEMS technician to calibrate the CEMS just prior to performing the linearity, using the high linearity gas. The recommended procedure for conducting a linearity test is:

  1. Conduct a calibration error test, and then
  2. Inject a low and mid linearity gas once each.

If the low and mid linearity gas results meet the Part 75 linearity requirement, then the CEMS passes the linearity test. There is no need to consume half of a technician’s workday (trying to avoid missing data), only to obtain nearly identical triplicate measurements for each gas.

Alternative RATA Procedures: Based on the RATA data evaluations, utilities should not be required to complete a minimum of 9 runs when performing a RATA. As any experienced source tester knows, the results of the RATA can frequently be determined after three runs. The data evaluation presented above simply confirms this statement. If at any point, beginning with the RA results calculated after run 3 of a RATA, the utility is satisfied with the RA and bias results, testing should be allowed to stop and the RATA should be considered complete.

Almost without exception, if the RATA criteria (annual or semi-annual) were met using test results from the first three, four, five or six test runs, performing additional test runs had no impact on the RATA results and, consequently, no benefit. Using this procedure, many RATAs (approximately 80 percent) would be reduced to three or four runs, and over 90 percent of the RATAs would likely be completed in six runs or fewer. Note that in order to discard any test runs, a minimum of nine runs should still be required.
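Under the same simplifying assumptions as the earlier RA sketch, the recommended "test until satisfied" approach could look like the sketch below, where the cumulative RA is evaluated after each run beginning with run three and the 7.5 percent annual criterion is used as the target.

```python
from math import sqrt

# Two-tailed 95% t-values for 3 to 9 runs (n - 1 degrees of freedom).
T_VALUES = {3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571, 7: 2.447, 8: 2.365, 9: 2.306}

def relative_accuracy(rm, cem):
    """Simplified Part 75-style relative accuracy (percent) for paired runs."""
    n = len(rm)
    d = [r - c for r, c in zip(rm, cem)]
    d_mean = sum(d) / n
    s_d = sqrt(max(sum(x * x for x in d) - sum(d) ** 2 / n, 0.0) / (n - 1))
    cc = T_VALUES[n] * s_d / sqrt(n)
    return 100.0 * (abs(d_mean) + abs(cc)) / (sum(rm) / n)

def run_until_satisfied(rm_runs, cem_runs, ra_limit=7.5):
    """Evaluate the cumulative RA after each run, starting with run three,
    and stop as soon as the limit is met (at least three runs required)."""
    for n in range(3, len(rm_runs) + 1):
        ra = relative_accuracy(rm_runs[:n], cem_runs[:n])
        if ra <= ra_limit:
            return n, ra
    return len(rm_runs), ra
```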