Many people think GR&R studies are expensive, but it isn't the studies themselves that cost money; it's the time and manpower involved, especially when a lot of testing is required. If the gage is properly designed for the part, the cost of such a study should be minimal.
GR&R studies are not complicated, but they can be tricky. A recent case involving some air tooling is a good example. An air plug needs a master, but a master is not always sold with the gage. Most manufacturers test their plugs to ensure they pass all repeatability and calibration tests before they ship.
In this case, the customer got the air plug, mastered it and performed a GR&R test. The gage passed flawlessly—it repeated like a champ with excellent reproducibility. All was well with the measuring system until the customer tried to use the parts in an assembly, and they didn’t mate properly.
The upset customer sent the gage back to the manufacturer, which checked it out and found no issues. The manufacturer asked for the master, and the gage also worked well with that. Then, the manufacturer sent the master to the lab for calibration and found that it was out of tolerance. The customer had been unknowingly biasing its measurement by using an oversized master. The gage was very precise, but it was not accurate.
Another case involved an engine manufacturer using a handheld snap gage with a dial indicator to measure a part. The GR&R target was 10 percent of the part tolerance, which was 0.001 inch total (±0.0005 inch). In other words, all the measurements of a given workpiece needed to fall within a range of 0.0001 inch.
Everything seemed to be in order. The manufacturer had a 10:1 ratio between part tolerance and gage accuracy. It had successfully measured the part for decades using the same type of gage. The part hadn’t changed. The tolerance hadn’t changed. Yet, the manufacturer was achieving GR&R results of 30 to 35 percent—not even close to the target.
What the manufacturer failed to appreciate was that its gaging requirement had changed. Previously, a part would “pass” as long as it fell within a tolerance range of 0.001 inch, but the GR&R requirement was much more demanding. The problem wasn’t the snap gage, which was in good condition, with a repeatability of 20 microinches. The problem was simply that the manufacturer had the wrong dial indicator on the gage. With a resolution of 0.0001 inch, the indicator itself consumed the entire allowance for variation under the GR&R study. This left no room for the inevitable variation from other sources.
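To put numbers on the snap gage example, here is a quick back-of-the-envelope check (a sketch in Python using the figures cited above; treating the indicator's resolution as the smallest spread it can report is a simplification, not a full measurement-uncertainty analysis):

```python
# Rough check of how much of a 10 percent GR&R budget the indicator's
# resolution consumes (illustrative sketch, not a full MSA calculation).

tolerance = 0.001                     # total part tolerance, inch
grr_target = 0.10                     # GR&R target as a fraction of tolerance
grr_budget = grr_target * tolerance   # allowable measurement spread, inch

indicator_resolution = 0.0001         # dial indicator resolution, inch
snap_gage_repeatability = 0.000020    # 20 microinches, inch

print(f"Allowable spread under the study: {grr_budget:.4f} in")
print(f"Indicator resolution alone:       {indicator_resolution:.4f} in")
print(f"Fraction of budget consumed:      {indicator_resolution / grr_budget:.0%}")
# The 0.0001 inch resolution equals the entire 0.0001 inch allowance,
# leaving no room for operator or part-loading variation.
```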
Other things to keep in mind about GR&R studies include:
• A gage qualified for one part might not be good for another. A caliper gage might perform well on a square part but probably wouldn’t on a round one.
• The gage should not influence the part. A gage with high gaging force that squeezes or distorts the part is unacceptable.
• Not all gages need to have a 10-percent GR&R. Acceptable results depend on what the manufacturer considers reasonable assurance for adequately qualifying its parts, as well as the cost of the gaging process. With some of today's tight tolerances, this limit is often set higher, as the sketch below illustrates.
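As a rough illustration of how those limits are often set, the following sketch classifies a %GR&R result against the bands commonly cited in the AIAG MSA manual (under 10 percent acceptable, 10 to 30 percent conditionally acceptable, over 30 percent unacceptable). The cutoffs are guidelines rather than requirements; a shop may justifiably set its own limit based on part risk and gaging cost.

```python
def classify_grr(percent_grr: float) -> str:
    """Classify a %GR&R result using the commonly cited AIAG MSA bands."""
    if percent_grr < 10.0:
        return "acceptable"
    if percent_grr <= 30.0:
        return "conditionally acceptable, depending on application and cost"
    return "unacceptable"

# Example: the snap gage study described above landed in the 30 to 35 percent range.
print(classify_grr(32.0))   # -> unacceptable
```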
When performing a GR&R:
• Ensure the gage is set up in the same type of environment in which it will be used.
• Have the gage set up as it will be used in its location. That means nothing should be missing from or added to the final gage design.
• Ensure the operators are instructed in the proper use of the gage. Have them practice loading and unloading parts to get a feel for the process.
• Make sure the gage, master and parts are clean and at a stabilized temperature.
• Mark the parts so that operators measure them at the same location every time. This is especially pertinent in tight-tolerance applications where form or other part characteristics might influence the measurements.
• If using manual data entry, double-check the results. Nothing will throw off results faster than a dropped zero or a couple of transposed digits.
The goal is not to measure parts as “good” or “bad,” but to capture as many part measurements by different operators as makes sense.
The results of the variation analysis are important. While the minimum sample size for the test might be ten parts, requiring two measurements each by two operators, that may not be enough for a particular process. The end result of GR&R testing should be a measurement process with which manufacturers can feel comfortable for qualifying their parts.
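For readers who want to see the arithmetic behind that variation analysis, here is a minimal sketch of the familiar average-and-range calculation for the smallest study described above: 10 parts, two operators, two trials each. The readings are simulated, and the K1 and K2 constants are the commonly tabulated values for two trials and two appraisers; treat it as an illustration of the method rather than a template for a real study.

```python
import random
import statistics

random.seed(0)
TOLERANCE = 0.001                                  # total part tolerance, inch
parts = [0.500 + random.uniform(-0.0003, 0.0003) for _ in range(10)]
operator_bias = {"A": 0.0, "B": 0.00001}           # small reproducibility effect

def reading(true_size, bias):
    """One simulated reading: true size + operator bias + repeatability error."""
    return true_size + bias + random.gauss(0, 0.00002)

# Two trials per part per operator.
data = {op: [[reading(p, b), reading(p, b)] for p in parts]
        for op, b in operator_bias.items()}

K1, K2 = 0.8862, 0.7071        # tabulated constants for 2 trials / 2 appraisers
n_parts, n_trials = 10, 2

# Repeatability (equipment variation): based on the average within-part range.
r_bar = statistics.mean(abs(t[0] - t[1]) for trials in data.values() for t in trials)
ev = r_bar * K1

# Reproducibility (appraiser variation): based on the spread of operator means,
# corrected for the repeatability already counted above.
op_means = [statistics.mean(v for trials in data[op] for v in trials) for op in data]
x_diff = max(op_means) - min(op_means)
av = max((x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials), 0.0) ** 0.5

grr = (ev ** 2 + av ** 2) ** 0.5                   # combined gage variation (sigma)
print(f"%GR&R against tolerance: {100 * 6 * grr / TOLERANCE:.1f}%")
```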