Cracking the Code

While discussing a new project with a colleague, I was reminded of some prior work in which I was involved, 'maxing out' the accuracy of crackmeter measurement systems. Crackmeters have great precision, and, all things being equal, they are highly stable and repeatable. The problem is that, more often than not, “all things being equal” is hard to find…

Accuracy vs. Precision
This topic is a great place to continue the discussion from a few weeks ago regarding accuracy and precision. To get you up to speed: taking inspiration from a few different sources (mostly ISO 5725), I’ve started thinking about measurement systems in terms of three things: precision, trueness, and accuracy, where accuracy arises from the combination of both the precision and the trueness of your measurement system.

(Image courtesy of Wikipedia)

It’s important to have a useful and meaningful vernacular when talking about these kinds of things: it keeps people on the same page and facilitates better problem solving, analysis, and design.

Back to the case in question: we have a system that is highly precise. Using vibrating wire crackmeters coupled with an AVW200, you’re able to achieve results that are consistent (precise) to about 0.01 mm – sometimes even better.

Even with lesser measurement hardware, experience shows that trueness, not precision, is the issue with these systems. Over the course of a day the results vary with temperature change. The desire to have temperature compensation factors included with the gauges is understandable – some manufacturers provide them and some do not. For those that do, the results often leave you wanting more. Those that do not are telling you something – it’s not worth it!

Neither of these scenarios makes a consumer feel good, but after a lot of experience with these sensors you get to understand why. A temperature compensation model is fine for the lab, but in practice there are so many unmodelable, project-specific interactions between the gauge, the adhesives, and the structure – creating hysteresis and differential change that is impossible to quantify from 5,000 miles away in a calibration facility.

Do it yourself? Not all it's cracked up to be
By observing the sensors for a little while during the baselining period you can generate some kind of correction curve. If you measure the temperature sensor inside the crackmeter, you can plot temperature against displacement and get a decent linear relationship, from which you can apply a correction to any future results (see the sketch and chart below).
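As a rough sketch of what that field calibration might look like in Python, assuming you've pulled the baseline-period temperature and displacement series into NumPy arrays (the variable names and numbers here are purely illustrative, not from a real project):

```python
# A minimal sketch of the 'field calibration' described above.
# Assumes baseline-period readings are available as NumPy arrays;
# all names and values are illustrative only.
import numpy as np

def fit_temp_correction(baseline_temp, baseline_disp):
    """Fit a linear temperature-vs-displacement relationship
    from the baselining period. Returns (slope, intercept)."""
    slope, intercept = np.polyfit(baseline_temp, baseline_disp, 1)
    return slope, intercept

def apply_temp_correction(temp, disp, slope, intercept):
    """Remove the modelled thermal response from a future reading."""
    return disp - (slope * temp + intercept)

# Derive the correction from the baselining period...
baseline_temp = np.array([12.1, 15.4, 19.8, 22.3, 17.6])       # deg C
baseline_disp = np.array([0.012, 0.031, 0.058, 0.074, 0.044])  # mm
slope, intercept = fit_temp_correction(baseline_temp, baseline_disp)

# ...then apply it to each subsequent reading.
corrected_mm = apply_temp_correction(21.0, 0.066, slope, intercept)
```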

However, I was humbly disappointed with the long-term results achieved from ‘field calibration’, and started to seek something better. In the image below, the blue line represents raw data from the crackmeter. The green line represents data corrected using a temperature relationship based on the first day of data shown in the chart. While there is a significant improvement, there is still a strange trend – even on the very day the correction was derived from!

Reference sensors – a better way
Reference sensors are placed across an uncracked piece of the structure, as near as possible to each crackmeter of interest. Results show that this provides an unmatched improvement in trueness, and therefore in the overall accuracy of the system. Take a look at the red line above and you'll agree: once you implement reference sensors, you'll wish you had done it sooner. Yes, there are cost and time implications, and we’ll talk about those later.

Theoretically, you decrease the precision of the system because you’re combining the random-error components of two sensors algebraically, but the gain in trueness just blows that out of the water. You’re essentially removing everything from the observations that relates to the sensor and the structure – mostly thermal differentials and the like. You’re taking a complex web of thermal and mechanical interactions and distilling it down to a simple subtraction – and the results are outstanding.
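To make that trade-off concrete, here's a small sketch of the subtraction and of how the two gauges' independent random errors combine in quadrature. The sigma values are illustrative, loosely based on the ~0.01 mm figure quoted earlier:

```python
# Sketch of the reference-sensor correction and its effect on precision.
# Sigma values are illustrative; independent random errors add in
# quadrature when you difference two readings.
import math

def reference_corrected(crack_disp_mm, ref_disp_mm):
    """Crack movement with shared sensor/structure effects removed:
    crackmeter reading minus nearby reference reading."""
    return crack_disp_mm - ref_disp_mm

sigma_crack = 0.01  # mm, random-error component of the crackmeter
sigma_ref = 0.01    # mm, random-error component of the reference gauge

# Precision of the difference is slightly worse than either gauge alone...
sigma_diff = math.sqrt(sigma_crack**2 + sigma_ref**2)  # ~0.014 mm

# ...but the subtraction removes the shared thermal/mechanical response,
# which is typically a far larger error than the small loss in precision.
movement_mm = reference_corrected(0.215, 0.162)
```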

The Real Cost
The cost of installing reference sensors is less than you might think. You’re on site anyway, and the cost of the hardware is often quite small compared to the total cost of the monitoring job. In practice, you don’t need to put a reference gauge at every crackmeter; just distribute them through the project where it makes sense and is practical.

Given the base cost of an installation – loggers, enclosures, power, comms, labour, prep work, reports, etc. – a few extra sensors wired into channels you probably have available anyway add relatively little: generally no more than 10%, and often less than 5%.

Are you giving them the red line, or the blue line…?
When KODA starts working with you on your next project, we'll start by talking to you about what you actually want to measure. I could probably carry on for a few paragraphs about this, but our approach is:

Talk, consult, inform, and educate engineers, clients, and other stakeholders about the decisions they are making regarding their monitoring requirements, and make sure what we're providing is actually consistent with those decisions.

If your project is collecting data that you think is the change in distance across a crack, but is in reality the residual thermal impact on a sensor… then you’re missing the mark!