Accuracy vs. Precision

Can’t find the episode where @Jonathan was explaining it but that just came up in my lecture and I thought I would share it - feel free to merge


Where does consistency fit?

Eisenhower target practice?

I use a figure like this when I talk about this topic at work (you would be surprised at how high up the decision-making chain people still have trouble with these concepts).

What I always add is that the target in the background represents the “truth”. If you don’t know the “truth” (i.e. the target in the background is missing), which we often don’t, then a whole additional level of cognitive bias comes in. People tend to prefer values or estimates like the tightly grouped arrows far off the bullseye (precise but not accurate) over the widely scattered arrows near the bullseye (accurate but not precise), because the tight grouping gives them a greater “feeling” of certainty: the answers are close together, so they must be right, even though they are far off the (unseen) target. That feeling often leads you astray.
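That bias is easy to demonstrate with a quick toy simulation (all numbers invented): one simulated “meter” is precise but biased high, the other is unbiased but noisy. Without knowing the true value, the tight spread of the first one looks more trustworthy, even though its average is further from the truth.

```python
import random

random.seed(42)

TRUE_POWER = 250  # watts; the "target" we normally can't see

# Meter A: precise but not accurate (tight spread, consistent +15 W bias)
meter_a = [TRUE_POWER + 15 + random.gauss(0, 1) for _ in range(1000)]

# Meter B: accurate but not precise (centred on the truth, wide spread)
meter_b = [TRUE_POWER + random.gauss(0, 10) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Meter A "feels" certain (small spread) but its mean is ~15 W off;
# Meter B's readings scatter widely but average out near the truth.
print(f"A: mean={mean(meter_a):.1f} W, spread={stdev(meter_a):.1f} W")
print(f"B: mean={mean(meter_b):.1f} W, spread={stdev(meter_b):.1f} W")
```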


Personally I’d prefer precise output from a power meter over accurate output. As you mentioned, without being able to see the underlying data you can’t differentiate ‘accurate but not precise’ from ‘neither accurate nor precise’ anyhow. Either of the precise scenarios gives data that I can track performance by.

There’s some limit, though. I would raise my eyebrows at (and then re-calibrate) a power meter that was precisely telling me that my FTP is 650 W.


If you’re precise, why can’t you just adjust to become accurate? Not sure you can claim to be precise if you’re not accurate in a mechanical / electronic system.

In theory you can do just that, but many systems have non-linear or otherwise hard-to-model (even non-deterministic) responses. Precise at point A may not necessarily mean precise at point B.


Because you don’t know what the real answer is. Some PMs do adjust for that, though (like when you do your zero offset), and nudge themselves to be more accurate. But without doing an actual calibration test on each unit, you have no idea if it’s drifted 3–4 watts up or 3–4 watts down.
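Here’s a toy sketch of why a zero offset only partly fixes accuracy (all numbers are invented): it removes the bias you can observe at zero load, but a gain (slope) error stays invisible unless you test against a known reference load.

```python
ZERO_DRIFT = 4.0    # watts the meter has drifted at zero load
GAIN_ERROR = 1.02   # 2% slope error the zero offset cannot see

def raw_reading(true_watts):
    # simple linear error model: slope error plus zero-load drift
    return GAIN_ERROR * true_watts + ZERO_DRIFT

# The rider performs a zero offset: record the reading at zero load...
offset = raw_reading(0.0)

# ...and subtract it from every subsequent reading
def zeroed(true_watts):
    return raw_reading(true_watts) - offset

print(zeroed(0.0))    # 0.0   -> the drift at zero load is fully corrected
print(zeroed(250.0))  # 255.0 -> the 2% gain error is still there
```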


My analytical chemistry professor in school used the same chart, but with a set of beer pong cups. I still remember it, so he definitely knew his audience :sweat_smile:

@jmmetsar is on the money. I’m a manufacturing engineer and deal with this constantly. Every measurement is just an estimation of the truth; it’s up to the observer to determine which aspects of that estimation matter most. Do you want consistency, or to be in the right ballpark, if you have to choose only one?

For things like a power meter that measures across a range of inputs, linearity becomes very important too, as noted by @ZeroGravity. The meter’s response at 40 W input may read low, 500 W might be on the money, and 1000 W might read high, or anything like that. Power meter designs should take this into account and only be rated for a power range across which they are reasonably linear, but these are still estimations.
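A toy model of that (the response curve is invented for illustration): even after a simple two-point gain-and-offset calibration, a non-linear meter is only exact at the points you calibrated; residual error remains everywhere else, which is exactly why rated power ranges matter.

```python
# Invented non-linear meter response: reads low at low power,
# roughly right mid-range, and high at high power.
def raw(true_w):
    return true_w + 0.0001 * true_w * (true_w - 500)

# Two-point (gain + offset) calibration fitted at 100 W and 900 W
lo, hi = 100.0, 900.0
gain = (hi - lo) / (raw(hi) - raw(lo))
offset = lo - gain * raw(lo)

def calibrated(true_w):
    return gain * raw(true_w) + offset

# Exact at the fit points, but residual error remains in between and
# beyond them -- that residual is the non-linearity a simple
# calibration can't remove.
for w in (40, 100, 500, 900, 1000):
    print(f"{w:5} W -> error {calibrated(w) - w:+.2f} W")
```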


In general, consistency (as we perceive that quality) = precision: multiple measurements under similar conditions give results with a small spread.

This is an often-underestimated question. If you are using the measure to compare outputs, precision is more important than accuracy; if you are using the measure as an absolute, accuracy is more important. The classic example is a set of wood planks for a fence: the lengths they are cut to need to be precise, but not accurate.
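The plank example in numbers (a toy simulation with invented figures): a saw with a consistent 2 cm bias produces a fence that looks perfectly even, while an unbiased but sloppy saw produces a ragged one, because what the eye compares is the spread between planks, not their absolute length.

```python
import random

random.seed(1)

TARGET = 180.0  # cm, nominal plank length

# Consistent +2 cm bias, tiny scatter: wrong length, even fence
biased_but_precise = [TARGET + 2.0 + random.gauss(0, 0.05) for _ in range(20)]

# No bias, 2 cm scatter: right length on average, ragged fence
accurate_but_sloppy = [TARGET + random.gauss(0, 2.0) for _ in range(20)]

def spread(planks):
    # the visible unevenness along the top of the fence
    return max(planks) - min(planks)

print(f"biased saw: spread {spread(biased_but_precise):.2f} cm")   # small
print(f"sloppy saw: spread {spread(accurate_but_sloppy):.2f} cm")  # large
```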
