A Look at Adjustment Thresholds and Risk Tolerance
Throughout my career in metrology, I’ve heard a large number of opinions on adjustment thresholds. Everything from “only adjust it if it’s out” to “adjust every time” is common practice, depending on the industry, the lab, or even which way the wind is blowing that day.
What is an adjustment threshold? An adjustment threshold is the level, typically expressed as a percentage of the tolerance, at which an item being calibrated will be adjusted to match the nominal value. A calibration adjustment can be time-consuming, so it makes sense to have some sort of rule for deciding when to adjust an instrument.
I’ve heard stories of various statistical models that have been used to set thresholds at different levels, and they all seemed quite valid; however, I’ve seen very little of the actual models that I’ve heard so much about. The military has historically used a 70% rule: if the deviation from nominal exceeds 70% of the tolerance, the item should be adjusted. As we will see later, the 70% rule actually represents a good balance between adjustments and “out of tolerance” conditions.
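The 70% rule is simple enough to state in a few lines of code. This is a minimal sketch of the decision rule, assuming the deviation and tolerance are expressed in the same units; the function name and signature are mine, not from any standard:

```python
def needs_adjustment(deviation, tolerance, threshold=0.7):
    """Return True if the as-found deviation exceeds the adjustment
    threshold, expressed as a fraction of the tolerance (0.7 = the
    military's historical 70% rule)."""
    return abs(deviation) > threshold * tolerance

# A reading 0.8 units off nominal, with a 1.0-unit tolerance and a
# 70% threshold, calls for adjustment even though it is still
# within tolerance:
print(needs_adjustment(0.8, 1.0))  # → True
print(needs_adjustment(0.5, 1.0))  # → False
```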
As we can see from the graph, a 70% adjustment threshold permits about 10% of items to be “out of tolerance” by the end of their cycle time, while only about 30% of the items are adjusted. If we only adjust an item when it is already “out of tolerance,” then we are accepting a risk that as many as 20% of our calibrations could produce “out of tolerance” results. It is interesting to note that at a 50% adjustment threshold the likelihood of an “out of tolerance” condition is about 5%; that figure roughly doubles at 70%, and roughly doubles again by 100%.
The chart was developed by running 1,000 simulated measurements for each adjustment threshold. At that sample size, repeated runs agreed to within a few percentage points. A set of 10,000 would be nicer, but it would also make the Excel file used to create the graph enormous without adding much value.
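To make the mechanics of the simulation concrete, here is a small Monte Carlo sketch of the same idea. The modeling choices are my assumptions, not a reconstruction of the author's spreadsheet: each cycle adds a normally distributed random error whose 2-sigma spread equals the tolerance, unadjusted items carry their deviation into the next cycle, and an adjustment resets the deviation to zero.

```python
import random

def simulate(threshold, n_items=1000, n_cycles=10, tol=1.0):
    """Return (fraction adjusted, fraction out of tolerance) over all
    calibration events, for a given adjustment threshold.

    Assumptions (mine): per-cycle drift ~ Normal(0, tol/2), so the
    2-sigma random error equals the tolerance; adjustment resets the
    accumulated deviation to nominal; unadjusted deviation carries over.
    """
    adjusted = out_of_tol = checks = 0
    deviations = [0.0] * n_items
    for _ in range(n_cycles):
        for i in range(n_items):
            deviations[i] += random.gauss(0.0, tol / 2)  # drift this cycle
            checks += 1
            if abs(deviations[i]) > tol:
                out_of_tol += 1          # found out of tolerance
            if abs(deviations[i]) > threshold * tol:
                adjusted += 1
                deviations[i] = 0.0      # adjust back to nominal
    return adjusted / checks, out_of_tol / checks

for t in (0.5, 0.7, 1.0):
    adj, oot = simulate(t)
    print(f"threshold {t:.0%}: adjusted {adj:.1%}, out of tolerance {oot:.1%}")
```

The key behavior this exposes is the trade-off in the article: lowering the threshold means more adjustments per cycle but fewer out-of-tolerance findings, because items are reset to nominal before drift can carry them past the tolerance limit.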
Some Things That Aren’t Included
For this model, I made the assumption that the random error and the tolerance were equal in magnitude. This is not always the case. I also didn’t look at the case of different calibration cycles having different random-error levels. Because we are looking strictly at random error, the results above really represent a worst-case scenario. Most manufacturers build some coverage factor into the tolerance they report over time: they know that most of the devices they produce will exhibit a much smaller random error than what is reported, so the actual number of “out of tolerance” conditions is likely to be significantly smaller than what we see with “truly random” error. In the future I will probably add some additional variables to the simulation and see what happens.
We are also excluding measurement uncertainty from this model. We will assume that the measurement uncertainty is sufficiently small (say, 1 part in 10 of the random error associated with the measurement) that it does not play a significant role.
How Can I Use this Information?
The chart above presents a good strategy for estimating which adjustment threshold will yield an acceptable percentage of “out of tolerance” results. What is your risk tolerance? Are you able to accept that 10% of your items could be “out of tolerance” when they are calibrated? If you outsource your calibrations, have you discussed your risk tolerance with your calibration provider? Many commercial labs will only adjust an item if it does not meet tolerance, unless they have been told otherwise by the customer.