Measurement Uncertainty: Dirt Measures

In our calibration lab we have a saying: dirt measures. It means a lot more than just keeping dimensional gages clean. It speaks to the fact that sources of measurement uncertainty are everywhere. In metrology speak we call them “contributors” to measurement uncertainty, and they are everywhere. We try to minimize their impact on the measurements we perform, but to do that we must first identify them and then compute their magnitude. For this blog we will skip the math and focus on identifying sources of measurement uncertainty.

Inherited Sources of Error

Just to make things more confusing, the first contributor to measurement uncertainty is measurement uncertainty. Yes, you read that right. The main sources of inherited error are the uncertainty and accuracy of the calibration standard used to make a measurement. This is why choosing the standard used to make a measurement is extremely important. You can choose to measure the voltage from a wall outlet with a 6½-digit multimeter, but the supplied voltage fluctuates enough that the 3rd and 4th digits are meaningless; a 4½-digit handheld multimeter would be the more appropriate tool for the job. Likewise, you can measure a gage block with a micrometer, but in essence the gage block is actually measuring the micrometer.

Accuracy specifications are typically dictated by the equipment manufacturer and must be considered. Sometimes we can provide sufficient calibration history for an item to support a specification tighter than the manufacturer states. We refer to this as characterizing long-term drift, and it may also involve applying correction factors.

Other places to look for inherited sources of error are the secondary standards used in the measurement process. When we calibrate a pipette, we weigh a sample of water and then convert that mass to a volume. To perform this conversion, the temperature and barometric pressure must be known, so we must also consider the accuracy and uncertainty of our thermometer and barometer.
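The mass-to-volume conversion described above can be sketched in code. The sketch below follows the general gravimetric approach (a standard water-density formula, ideal-gas air density, and a buoyancy correction against typical 8000 kg/m³ balance weights); the structure and constants are illustrative, not a lab-grade procedure.

```python
def water_density(t_c: float) -> float:
    """Density of air-free water in kg/m^3 as a function of temperature
    in deg C (Tanaka et al. recommended formula)."""
    a1, a2, a3, a4, a5 = -3.983035, 301.797, 522528.9, 69.34881, 999.974950
    return a5 * (1.0 - ((t_c + a1) ** 2) * (t_c + a2) / (a3 * (t_c + a4)))

def air_density(p_pa: float, t_c: float) -> float:
    """Dry-air density from the ideal gas law, R = 287.05 J/(kg K).
    (A simplification; humidity also matters in practice.)"""
    return p_pa / (287.05 * (t_c + 273.15))

def mass_to_volume_ml(mass_g: float, t_c: float, p_pa: float,
                      rho_weights: float = 8000.0) -> float:
    """Convert a weighed mass of water to volume in mL,
    with an air-buoyancy correction for the balance weights."""
    rho_w = water_density(t_c)
    rho_a = air_density(p_pa, t_c)
    v_m3 = (mass_g / 1000.0) / (rho_w - rho_a) * (1.0 - rho_a / rho_weights)
    return v_m3 * 1e6  # m^3 -> mL

# Example: 1.0000 g of water at 20 deg C and standard pressure
print(round(mass_to_volume_ml(1.0, 20.0, 101325.0), 4))
```

Notice that the thermometer and barometer readings feed directly into the result, which is exactly why their accuracy and uncertainty must be part of the budget.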

Other Sources of Error

Sources of measurement error are universal, and volumes have been written on the topic for almost any measurement you can perform. For simplicity, let’s look at a common measurement: one cup of water in a glass measuring cup. Here is a partial list of error sources.

• Resolution of the measuring cup. We are measuring 1 cup and there is a line at exactly that mark, so it’s not too important; however, if we wanted ⅞ of a cup and there were only lines at ¾ and 1, we would have an issue.
• How level is the table the measuring cup is placed on? We’ve all experienced this when trying to fill a measuring cup at the faucet, never quite able to hold it level.
• Your viewing angle in relation to the cup. Is your eye lined up exactly with the line on the cup? We call this parallax error.
• Where are you viewing the meniscus in relation to the line on the cup? The surface tension of water causes it to “climb” the sides of the measuring cup slightly, so the top of the water is not perfectly flat.
• What is the temperature of the water in the cup? Are the temperatures of the water and the cup dramatically different? What temperature should the water be? Laboratory glassware has a reference temperature printed on it to address this, because the density of water changes with temperature.
• Does the humidity in the room impact the reading? Is the air so dry that the water is evaporating as you take your reading, or so humid that it is condensing on the sides of the cup?
• How well lit is the area? Does the lighting interfere with your ability to read the measuring cup?
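Of the items above, resolution is one with a standard textbook treatment: the reading error from a graduation line is usually modeled as a rectangular (uniform) distribution whose half-width is half the line spacing, giving a standard uncertainty of that half-width divided by √3. A quick sketch, with units in cups and purely illustrative numbers:

```python
import math

def resolution_uncertainty(line_spacing: float) -> float:
    """Standard uncertainty from finite resolution: half the graduation
    spacing, treated as a rectangular distribution (divide by sqrt(3))."""
    half_width = line_spacing / 2.0
    return half_width / math.sqrt(3.0)

# A cup graduated every 1/4 cup vs. one graduated every 1/8 cup
print(round(resolution_uncertainty(0.25), 4))   # coarser markings
print(round(resolution_uncertainty(0.125), 4))  # finer markings
```

Finer graduations halve this contributor, which is why the ⅞-cup example above is a problem on a cup marked only at ¾ and 1.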

This is a measurement we have probably all performed at one time or another, and it is still only a partial list. Other items, like barometric pressure, local seismic activity, elevation above sea level, and the variation of gravity with latitude, could be considered as well, but that might be going too far.

Quantification

This is where the fun begins for someone like me. We take all of these contributors and start making measurements to see how each one impacts the actual measurement. I said earlier that I would skip the math, and I meant it. For the scope of this discussion we will just say there are well-established rules used to quantify the magnitude of each of these uncertainty contributions so that they can be combined into a single measurement uncertainty for the process.
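To give a flavor of those well-established rules without the derivations: in the GUM approach, each contributor is expressed as a standard uncertainty, the standard uncertainties of uncorrelated contributors are combined by root-sum-of-squares, and the result is multiplied by a coverage factor (commonly k = 2 for roughly 95 % confidence). A minimal sketch with made-up contributor values:

```python
import math

def combined_uncertainty(contributors: dict, k: float = 2.0) -> float:
    """Root-sum-of-squares combination of standard uncertainties for
    uncorrelated contributors, expanded by coverage factor k (GUM-style)."""
    u_c = math.sqrt(sum(u ** 2 for u in contributors.values()))
    return k * u_c

# Illustrative values only, all in the same unit (e.g. mL)
budget = {
    "reference standard": 0.010,
    "resolution": 0.003,
    "repeatability": 0.005,
    "temperature effect": 0.002,
}
print(round(combined_uncertainty(budget), 4))
```

The combined result is dominated by the largest contributor, which is a recurring theme in uncertainty budgets.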

Σ it up

As you can see, there are a lot of things to consider in making any measurement. Luckily, once you’ve come up with a list you’ve accomplished the hardest part, because now you know everything that has an impact on that measurement. If you repeat a similar measurement, you only need to consider what has changed from the previous measurement when calculating a new measurement uncertainty. The other good thing is that most of these effects are tiny. Once you have quantified a contributor as tiny, you can treat its effect as negligible unless something changes its magnitude dramatically, say by a factor of 10, 100, or even 1000.
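The reason tiny contributors can be safely set aside is the root-sum-of-squares way uncertainties combine: a contributor one-tenth the size of the dominant one changes the combined result by only about half a percent. A quick check with hypothetical numbers:

```python
import math

dominant = 1.0
tiny = 0.1  # one tenth of the dominant contributor

with_tiny = math.sqrt(dominant ** 2 + tiny ** 2)
without_tiny = dominant

relative_change = (with_tiny - without_tiny) / without_tiny
print(f"{relative_change:.4%}")  # well under 1 %
```

Only if that small contributor grew by an order of magnitude or more would it start to matter in the combined result.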
