TUR: What Is It?

TAR, TUR, CMC, and TLAs

In the world of metrology, the three-letter acronyms (TLAs) TAR, TUR, and CMC are routinely bandied about like exam scores. Bigger TARs and TURs, paired with smaller CMCs, led to the “Scope Wars” of the 1990s and early 2000s.

For the quality professional, however, all these TLAs have just made the world of metrology more confusing. The calibration world had just gotten past the NIST-numbers argument when people started trying to compare scopes of accreditation with TAR/TUR requirements.

As early as the 1930s, the “Gage Maker’s Rule of Ten” was widely understood to be good measurement practice. The basic concept was that the tool used to make a measurement should be ten times as accurate as the object being measured. This was later codified in the 1960 version of the military standard MIL-C-45662. Oddly enough, the statement was removed two years later in the 1962 revision, MIL-C-45662A.

 

TAR: Test Accuracy Ratio. First officially recommended in MIL-STD-120 and carried through various military specifications until ANSI/NCSL Z540.1.

TUR: Test Uncertainty Ratio. First codified in MIL-HDBK-1839A and continued in ANSI/NCSL Z540.3.

CMC: Calibration and Measurement Capability. Formerly “BMC,” Best Measurement Capability. First codified in ISO Guide 25.



So, what is the difference between TAR and TUR, and why is CMC something completely different?

The verbiage used for TAR in the military standards up through Z540.1 was actually defining what we currently understand as TUR. The change was made because the non-governmental definition of TAR was different. The word “accuracy” presents an ambiguity in this case that has yet to be resolved. Test uncertainty ratio, by contrast, can be clearly defined: it is the ratio of the precision of the measurement being made to the combined uncertainty of the measurement system. Here the combined uncertainty is defined by JCGM 100:2008, Evaluation of Measurement Data – Guide to the Expression of Uncertainty in Measurement (commonly referred to as the GUM), and precision is simply the tolerance of the measurement being performed.
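As a concrete illustration, here is a minimal sketch of that ratio, assuming the common Z540.3-style convention of comparing the tolerance span to twice the expanded (k=2, roughly 95%) uncertainty; the function name and the sample numbers are hypothetical, not from any standard’s text.

```python
def test_uncertainty_ratio(tolerance_span, expanded_uncertainty_95):
    """Hypothetical helper: TUR under the common Z540.3-style convention.

    tolerance_span          -- full width of the UUT tolerance band
    expanded_uncertainty_95 -- expanded (k=2, ~95%) uncertainty of the
                               calibration process, in the same units
    """
    return tolerance_span / (2 * expanded_uncertainty_95)

# Example: a +/-0.10 V tolerance (0.20 V span) calibrated by a process
# with 0.012 V expanded uncertainty gives a TUR of about 8.3:1.
print(test_uncertainty_ratio(0.20, 0.012))
```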

In the late 1980s the calibration world was divided along two different paths. On one side was the governmentally required MIL-STD-45662A standard for calibration activities; on the other was the voluntary consensus standard ISO Guide 25. ISO Guide 25 accreditation gave calibration laboratories a “Scope of Accreditation” and, with it, the ability to publish their “best” measurement capabilities. Guide 25 evolved, with elements of Z540.1, to give us ISO/IEC 17025. Neither Guide 25 nor ISO 17025 makes reference to any form of decision rules, even though decision rules were present in several of the draft stages of the document.

I don’t know exactly why decision rules were removed from the draft standard. My best guess is that there are a small number of parameters where TAR/TUR simply doesn’t work. Relative humidity is a prime example: the best labs in the world are capable of about 0.5%. Under a 4:1 rule, that would put primary labs at 2% and calibration labs at 8%; by the time a measurement was performed in the field, the best achievable figure would be 32%.
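A quick sketch of that arithmetic, assuming each tier of the calibration chain must hold a 4:1 ratio against the tier above it (the tier names here are illustrative):

```python
# Propagate a 4:1 rule down a hypothetical humidity calibration chain.
RATIO = 4
uncertainty = 0.5  # %RH, roughly the best national-lab capability cited above

for tier in ["primary lab", "calibration lab", "field measurement"]:
    uncertainty *= RATIO
    print(f"{tier}: {uncertainty:g} %RH")
# primary lab: 2 %RH, calibration lab: 8 %RH, field measurement: 32 %RH
```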

 

What are “Decision Rules”?

TAR and TUR are decision rules. They answer the question of how “good” a measurement needs to be before we can state that a result meets a specification. Whenever a measurement is made, there is a range of values in which the actual value can be found. We refer to this as measurement uncertainty. If this range of values overlaps an out-of-tolerance condition, there is a possibility of a false positive or false negative result. Tolerance ratios and guardbanding are two common methods used to prevent false-accept conditions, where an out-of-tolerance condition appears to be within tolerance.
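To make guardbanding concrete, here is a minimal sketch of a guardbanded accept decision, assuming the simple approach of pulling the acceptance limits in by the expanded uncertainty; the function and the numbers are illustrative, not any standard’s prescribed rule.

```python
def guardbanded_accept(measured, lower_tol, upper_tol, expanded_uncertainty):
    """Accept only if the reading falls inside tolerance limits that have
    been pulled in by the expanded uncertainty (a simple guard band)."""
    return (lower_tol + expanded_uncertainty) <= measured <= (upper_tol - expanded_uncertainty)

# A 10.02 V reading against a 10 V +/-0.05 V tolerance with U = 0.02 V:
# the acceptance zone shrinks to [9.97, 10.03], so 10.02 V is accepted.
print(guardbanded_accept(10.02, 9.95, 10.05, 0.02))  # True
```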

 

Unfortunately, it really is not possible to derive a TAR/TUR from a CMC. A calibration company’s CMC contains uncertainty contributions associated with a unit under test (UUT). Contrary to what some may see as common sense, one cannot compare the measurement uncertainty on a calibration certificate to the tolerance and arrive at a TUR. When comparing a calibration system to a required measurement precision, the UUT is not considered. This means the value used to calculate TUR can be significantly smaller than what is published on an ISO 17025 scope of accreditation.
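As an illustration of why the two values differ, assume the uncertainty contributors combine by root-sum-square per the GUM (the contributor names and numbers below are invented): dropping the UUT’s contribution from a CMC-style budget yields the smaller value relevant to a TUR.

```python
import math

# Hypothetical standard uncertainties (same units) in a CMC-style budget.
reference_standard = 0.010
environment = 0.004
uut_resolution = 0.020  # contribution from the unit under test

with_uut = math.hypot(reference_standard, environment, uut_resolution)
without_uut = math.hypot(reference_standard, environment)

print(f"CMC-style (includes UUT): {with_uut:.4f}")    # ~0.0228
print(f"TUR basis (excludes UUT): {without_uut:.4f}")  # ~0.0108
```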

 

If your quality system or industry requires TAR/TUR, then your calibration provider needs to be aware of it. If you are asking for TAR, it is also extremely important that you are both defining TAR the same way. And if you don’t believe a calibration provider can meet your TUR requirement based on their published scope of accreditation, ask anyway. We should also keep the bigger picture in mind: the purpose of TAR/TUR is to prevent false acceptance of nonconforming items. There are more ways to achieve this than simple accuracy ratios, and exploring some of those options may reduce calibration costs and downtime dramatically.




 

Visit our International Standards Used In Calibration page to learn more about the standards used in calibration.

 
