SQA Services, Inc.

Global Quality On Demand

The Importance of a Correct Gauge R&R Study





By: Victor Baeza, SQA Field Engineer

Variation is everywhere in our world. To improve any process or product, the variation of key critical-to-quality parameters needs to be measured precisely and accurately. It is amazing how many organizations fail to consider this aspect of a process improvement project, capability study, statistical process control chart, on-going inspection process, or Six Sigma project.

In a Six Sigma project, whether DMAIC or DMADV, the variation of the measurement system needs to be correctly assessed at the initial phase of the project. For any process that involves a measurement system, a gauge repeatability and reproducibility (R&R) study (measurement system analysis) must be performed to determine the amount of variation produced by the two main contributors of the measurement system: the gauge/test equipment (repeatability) and the operator (reproducibility). The interaction between these contributors, as well as the part-to-part (process) variation, must likewise be assessed. For a measurement system to be capable and stable, the variation contributed by the gauge and the operator must be minimal. This leaves the measurement system enough resolution to distinguish among the various parts (part-to-part variation) of the process or product being measured.
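The variance decomposition described above can be sketched numerically. Below is a minimal Python sketch of the AIAG average-and-range method, assuming a balanced study of 10 parts, 3 operators, and 3 trials (the K constants are the published d2-based factors for that layout); the function and variable names are illustrative, not from a particular tool:

```python
import numpy as np

# d2-based constants for this specific layout (AIAG average-and-range method)
K1 = 0.5908   # 3 trials   (1 / d2, d2 = 1.693)
K2 = 0.5231   # 3 operators
K3 = 0.3146   # 10 parts

def gauge_rr(data):
    """data: measurements with shape (parts, operators, trials)."""
    n_parts, n_ops, n_trials = data.shape
    # Repeatability (EV): average range of repeated trials per part/operator cell
    ranges = data.max(axis=2) - data.min(axis=2)
    ev = ranges.mean() * K1
    # Reproducibility (AV): spread of operator averages, corrected for EV
    op_means = data.mean(axis=(0, 2))
    x_diff = op_means.max() - op_means.min()
    av = np.sqrt(max((x_diff * K2) ** 2 - ev**2 / (n_parts * n_trials), 0.0))
    # Part-to-part (PV): range of part averages
    part_means = data.mean(axis=(1, 2))
    pv = (part_means.max() - part_means.min()) * K3
    grr = np.hypot(ev, av)           # combined Gauge R&R
    tv = np.hypot(grr, pv)           # total study variation
    return {"EV": ev, "AV": av, "GRR": grr, "PV": pv, "TV": tv,
            "%StudyVar": 100 * grr / tv}
```

A low %StudyVar indicates that most of the observed variation comes from the parts themselves rather than from the gauge or the operators.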

One may think of the measurement system as the line of communication from the process to the inspector, operator, or engineer, and therefore a noise-free line with the best possible resolution is desired. If the line of communication is not clear or there is too much background noise (Gauge R&R variation) in the line of communication, incorrect conclusions or decisions may be derived from the process and the project may be set in the wrong direction.

To derive the true variation in a Gauge R&R study, the parts selected for the study need to be representative of the whole process that the gauge under analysis will measure on an ongoing basis. The selection of parts should therefore span different machines, set-ups, operators, shifts, etc., so that the entire range of the part or process distribution is represented. For a study that may include only 3-10 parts, this can be difficult, but it remains essential.

For example, consider a machine shop that produces parts on a CNC machine. If the Gauge R&R team responsible for collecting the 5 or 10 parts for the study takes all of them from one machine, one tool, one set-up, one operator, and one shift, then most likely the variation among the parts will be minimal and not representative of the true process distribution. If these parts are then measured in the study with a caliper with a resolution of 0.001″, the repeatability (test equipment) and reproducibility (operators) will account for an erroneously large share of the total variation, and the team may wrongly classify the measurement system as incapable of measuring the parts or feature under test. However, if the samples were collected across a broader sampling frame (different machines, tools, set-ups, operators, shifts, etc.), the true variation of the parts would be captured, yielding a more accurate Gauge R&R study. In some manufacturing processes, such samples may be simple to collect: have someone randomly pick parts from different lots (different machines, tools, set-ups, operators, shifts, etc.) already in inventory.
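The machine-shop scenario can be illustrated with a toy simulation (all numbers below are assumed for illustration): the same gauge noise looks very different depending on how much true part-to-part spread the sampled parts contain.

```python
import numpy as np

rng = np.random.default_rng(42)
gauge_sd = 0.0005   # caliper measurement noise in inches (assumed)

def pct_study_var(part_sd, n_parts=10, n_meas=3):
    """Simulate a study and return Gauge R&R as % of total study variation."""
    parts = rng.normal(0.500, part_sd, n_parts)                 # true sizes
    meas = parts[:, None] + rng.normal(0, gauge_sd, (n_parts, n_meas))
    sigma_parts = meas.mean(axis=1).std(ddof=1)                 # part-to-part
    sigma_gauge = np.sqrt(np.mean(meas.var(axis=1, ddof=1)))    # repeatability
    total = np.hypot(sigma_parts, sigma_gauge)
    return 100 * sigma_gauge / total

print(pct_study_var(part_sd=0.005))    # parts drawn across the full process
print(pct_study_var(part_sd=0.0002))   # parts from one machine/set-up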

Choosing the correct criterion for judging a measurement system’s capability and stability is extremely important for a Gauge R&R team. Many organizations consider only the ratio of Gauge R&R variation to the specified tolerance of the part feature (percent tolerance), i.e., the Gauge R&R variation expressed as a percentage of the tolerance range. This can create a false sense of security, because it ignores the ratio of the R&R variation to the total variation of the study (percent study variation). A true test determines the following:

  • whether the gauge under analysis can repeatedly distinguish part characteristics with minimal variation
  • whether the measurement system is stable and capable.
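The two criteria above can be put side by side in a few lines of Python (the sigma values and tolerance below are illustrative, not from a real study). The number-of-distinct-categories (ndc) metric addresses the first bullet directly: it estimates how many groups of parts the gauge can actually tell apart.

```python
# Illustrative example numbers (assumed):
sigma_grr   = 0.0010   # combined repeatability + reproducibility (in)
sigma_total = 0.0012   # total study variation (in)
tolerance   = 0.0200   # USL - LSL (in)

pct_tolerance = 100 * 6 * sigma_grr / tolerance   # %Tolerance (P/T ratio)
pct_study_var = 100 * sigma_grr / sigma_total     # %Study Variation

sigma_part = (sigma_total**2 - sigma_grr**2) ** 0.5
ndc = int(1.41 * sigma_part / sigma_grr)          # number of distinct categories

print(f"%Tolerance: {pct_tolerance:.1f}%")   # 30.0% -> looks marginal
print(f"%StudyVar:  {pct_study_var:.1f}%")   # 83.3% -> unacceptable
print(f"ndc:        {ndc}")                  # well below the usual minimum of 5
```

Here the gauge looks merely marginal against the wide tolerance, yet it consumes most of the total study variation and cannot distinguish the parts from one another.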
