A common question about aerosol particle counters is how well particle counting results should match each other. A typical example of when this matters is when new particle counters are added to an existing fleet. The new instruments may come from the original manufacturer or from a new vendor, and even when they come from the same manufacturer, it is not uncommon for different models to be purchased because of improved specifications or product obsolescence. All of these considerations matter because they can affect the resulting particle count data. The purpose of this short application note is to provide a practical guide to the expectations one should have when comparing particle count data from similar and dissimilar instruments. Three common cases provoke questions about data agreement between two aerosol optical particle counters:
- Group 1: Same instruments from the same manufacturer
- Group 2: Like instruments from different manufacturers*
- Group 3: Unlike instruments from the same or different manufacturers
* The term “Like” refers to particle counters having the same key specifications: first channel sensitivity, sample flow rate, and the number (and value) of size channels (indicative of resolution).
Relative Data Comparisons and Contamination Trending
A reasonable expectation is that all aerosol optical particle counters falling into Groups 1 and 2 should provide similar relative particle trend data over time. Specifically, the data that can be compared are cumulative, normalized counts (i.e., counts per unit sample volume). All of the common particle trend profiles should compare well, including periods when counts are stable, periodic or irregular particle events, and the presence of a clean-up curve after significant upsets. Large offsets in the data (up to a factor of 2 or more) can be tolerated, especially in the lower ISO classes, because steady-state particle levels are generally far below the action levels or limits for a particular environment.
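As an illustration of this kind of relative comparison, the following minimal sketch (not from the note; the counts, flow rates of 28.3 L/min and 50 L/min, and sample times are all hypothetical) normalizes raw counts from two “like” counters to counts per cubic meter and checks whether their trends track each other despite a constant offset:

```python
import numpy as np

def normalize(counts, flow_lpm, sample_minutes):
    """Convert raw cumulative counts per sample to counts per cubic meter."""
    sample_volume_m3 = flow_lpm * sample_minutes / 1000.0  # 1 m^3 = 1000 L
    return np.asarray(counts, dtype=float) / sample_volume_m3

# Hypothetical one-minute samples at the >= 0.3 um channel from two counters
# with different sample flow rates (assumed values).
counter_a = normalize([12, 15, 11, 240, 130, 40, 14], flow_lpm=28.3, sample_minutes=1)
counter_b = normalize([25, 31, 24, 470, 260, 85, 27], flow_lpm=50.0, sample_minutes=1)

# A roughly constant multiplicative offset plus a high trend correlation is
# the signature of good *relative* agreement, even if absolute levels differ.
offset = np.median(counter_b / counter_a)
trend_corr = np.corrcoef(counter_a, counter_b)[0, 1]
print(f"median offset: {offset:.2f}x, trend correlation: {trend_corr:.3f}")
```

A stable median offset together with a high trend correlation reflects the kind of relative agreement described above, even when absolute levels differ by a factor of 2 or more.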
Best practice is to avoid Group 3 even for relative data comparisons because there are simply too many instrument-related variables that can lead to differences in the particle count data. Although this conclusion seems obvious, it is worth stating here because such comparisons do happen in the real world and should be avoided.

Absolute Data Comparisons
Absolute comparisons of data are sometimes necessary, and care must be used when setting expectations for how well particle data from different instruments match. The three categories of factors that impact data matching between aerosol particle counters are summarized in the following table.
| Category of factors | Examples |
| --- | --- |
| Instrument factors | Laser type and beam shaping in the sample region; optical detection and signal processing approach; sample cell and flow delivery (including recirculation) |
| Calibration factors | Mean size and size distribution of the calibration particles; particle delivery method and concentration; type of reference instrument; threshold-setting (“splits”) method |
| “How used” factors | How and where the aerosol samples are collected |

For Group 1
The reason Group 1 offers the best chance of close agreement in both cumulative and differential data, across a pool of many instruments that may have been manufactured at different times, is that data differences resulting from instrument and calibration factors have for the most part been eliminated. Manufacturing consistency and quality control are the only limiting factors. It is assumed the “how used” factors can be minimized by taking care with how and where the aerosol samples are collected. However, even in this best-case scenario, it is impossible for two particle counters to count exactly the same, and some variation in sample data will always exist.
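Even in this best case, particle arrivals are random, so two ideal, identical counters will not report identical counts. A minimal simulation (hypothetical concentration; Poisson arrivals assumed) shows the expected sample-to-sample spread:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

true_mean = 50   # hypothetical: 50 particles expected per sample volume
n_samples = 1000

# Two ideal, perfectly matched counters sampling the same aerosol independently.
counter_a = rng.poisson(true_mean, n_samples)
counter_b = rng.poisson(true_mean, n_samples)

# Relative difference between paired single-sample readings.
rel_diff = (counter_a - counter_b) / true_mean
print(f"std of relative difference: {rel_diff.std():.1%}")  # ~sqrt(2/50), about 20%
```

At these low counts, paired readings from even perfect instruments routinely differ by tens of percent, which is why some variation in sample data must always be expected.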
For Group 2
Absolute data comparisons are complicated primarily by design differences between manufacturers of optical particle counters. “Like” instruments will perform differently even though they share the same key specifications. They will count and size particles differently because of design choices for the laser type and beam shaping in the sample region, the optical detection and signal processing approach, and the sample cell and flow delivery (including recirculation). The cumulative impact of these factors on instruments produced by different manufacturers results in differences in the cumulative and differential data they produce.
Calibration factors are another reality that affects data matching between like instruments produced by different manufacturers. There are a variety of suppliers for the materials used to calibrate an optical particle counter, and these monodisperse particles have different mean sizes and size distributions. It is common for specific part numbers from the same supplier to vary in specification from batch to batch. Next, the method used to deliver the particles during calibration can be a source of error: particles must be free from contamination and delivered at the correct concentration when setting channel thresholds. Finally, the type of reference instrument used to calibrate the instrument under test will have an impact on its counting performance.
In addition, the number of size channels and the resolution an instrument provides will also affect data matching. More channels mean more thresholds that can vary, and a small change in sizing has a much greater effect on an instrument with higher resolution. It is also worth mentioning that “splits” is a common approach used to set a particle counter’s inner channel size thresholds during calibration. Adjusting a size threshold so that counts from a monodisperse particle challenge split evenly between adjacent channels is not an exact science, and there are many variables associated with this calibration step. The cumulative error from calibration factors can be significant and does contribute to data mismatch between like particle counters.
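To get a feel for how sizing error propagates into count error, one can assume the sampled aerosol follows the cumulative class distribution from ISO 14644-1, C(D) = 10^N × (0.1/D)^2.08 particles/m³. Under that assumption (a sketch, not a claim about any particular environment), the power law amplifies a small threshold error:

```python
def iso_cumulative(d_um, iso_class):
    """Cumulative concentration (particles/m^3) at sizes >= d_um under the
    ISO 14644-1 class formula: C(D) = 10**N * (0.1 / D)**2.08."""
    return 10**iso_class * (0.1 / d_um) ** 2.08

nominal = iso_cumulative(0.30, iso_class=5)
shifted = iso_cumulative(0.30 * 1.05, iso_class=5)  # threshold sizing 5% high
print(f"count change from a 5% sizing error: {shifted / nominal - 1:+.1%}")
# about -9.6%; the 2.08 exponent roughly doubles the sizing error
```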
For Group 3
There should be no expectation of any data agreement for instruments falling into this category because the instrument designs and calibration processes are completely different. The differences in sensitivity among instruments in this group affect cumulative counts and the ability to determine comparative accuracy in polydisperse aerosol distributions (i.e., actual sampling of an environment containing particles of many sizes).
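To put a rough number on this: if the sampled aerosol followed the ISO 14644-1 class distribution assumed above, a counter with a 0.1 µm first channel would report about (0.3/0.1)^2.08 ≈ 10 times more cumulative counts than one whose first channel is 0.3 µm. The two totals describe different portions of the size distribution and cannot be reconciled.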
Guidelines for Matching Expectations
Based on the discussion above, the following data matching expectations are recommended. Only cumulative counts should be considered; differential count data should be avoided for the reasons detailed in the following table. The test environment must contain a statistically meaningful concentration of particles for any conclusion to be drawn with confidence.
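As a rough rule from Poisson counting statistics, the relative uncertainty of a raw count N is about 1/√N, so a minimum count level can be estimated before a comparison is attempted. The helper below is a sketch under that assumption, not a limit taken from this note or any standard:

```python
import math

def min_counts(rel_uncertainty):
    """Minimum raw counts so Poisson noise (~1/sqrt(N)) stays below the target."""
    return math.ceil(1.0 / rel_uncertainty**2)

for eps in (0.20, 0.10, 0.05):
    print(f"{eps:.0%} counting uncertainty needs >= {min_counts(eps)} raw counts")
# 20% -> 25 counts, 10% -> 100 counts, 5% -> 400 counts
```

In practice this means that comparing counters in a very clean environment, where raw counts per sample are in the single digits, cannot support a confident conclusion.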

Summary
Care must be used when attempting to compare data between aerosol optical particle counters. As a general rule of thumb, the same models from the same manufacturer will produce the best data matching under equivalent sampling conditions. Differences in cumulative and differential data from “like” instruments should always be expected; these baseline shifts in the particle count data can generally be tolerated because they are consistent over time, so expectations can be reset. If comparisons between like instruments must be made, total normalized cumulative counts are the suggested data to use, and a larger difference in inter-channel cumulative and differential data is reasonable to expect. Finally, comparison of particle data from unlike instruments, whether from the same or different manufacturers, should always be avoided due to differences caused by instrument and calibration factors.
