Least Squares - Most Useful (at least most of the time)

Most Useful - Most Dangerous
First up, I feel compelled to discuss how dangerous least squares (LS) can be in the wrong hands. It can create the illusion of accuracy and quality, when all you've really done is create precision and redundancy.

LS can be a big time waster too. Just because you can process hundreds of observations quickly doesn't mean you should spend time in the field measuring them all. Design a survey properly (LS simulation…!), and acknowledge the difference between accuracy and precision.

Garbage in, garbage out
— Dr Bruce Harvey, UNSW

Time is easily wasted when errors arise. An inexperienced user could easily spend a day trying to track down an error that a more experienced user would find in a few minutes. Big errors are easy to find, though; it's the small ones that LS is even better at hiding from you. My recommendations below for an LS pre-processor would help with this: filtering and flagging inconsistent data using basic analysis.

Another increasingly common problem with LS adjustments is the tendency of users to blindly assume that control points are perfect and that the more you hold fixed, the better. The problem with this is that small distortions in the control network will get distributed through your newly densified network. This could be just what you want if you are trying to fit a localized design to existing features, but if you are looking to build a tunnel from one end of your network to the other it is a very risky approach. Make sure you understand what the control network you are using was created for (and how it was built), then constrain your survey appropriately.
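
To make the risk concrete, here is a toy levelling example, with entirely made-up numbers, showing how an error baked into "fixed" control gets pushed straight into the point you densify from it:

```python
# Toy 1-D levelling example: an error hidden in "fixed" control is
# distributed into the newly adjusted point. All numbers are invented.

# Truth (unknown to us): A = 100.000 m, P = 101.000 m, B = 102.000 m.
# Published control heights, with B carrying a 6 mm error:
H_A, H_B = 100.000, 102.006

# Observed height differences (made error-free here for clarity):
dh_AP = 1.000   # A -> P
dh_PB = 1.000   # P -> B

# Hold both A and B fixed: with equal weights, least squares puts P at the
# mean of the two routes, so half the control error lands in P.
P_both_fixed = 0.5 * ((H_A + dh_AP) + (H_B - dh_PB))   # 101.003

# Minimally constrained: hold only A fixed, and P stays consistent with
# the observations.
P_min_constrained = H_A + dh_AP                         # 101.000

print(P_both_fixed, P_min_constrained)
```

Trivial at this scale, but the same mechanism operates in a real network, just spread across many more points.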

So What’s Good about Least Squares?
From my experience with project surveying and monitoring, these are a few highlights:
 - Processing large sets of redundant and braced data is a breeze – so long as you know what you are doing and you are careful with checking and managing the results.
 - Free-net adjustments have been useful on a couple of monitoring projects lately. They've been used to process data sets free from control network biases, which can be a useful step in assessing the internal strength of your observations before you use them in further analysis (see the sketch after this list).
 - From a tunneling perspective, the ability to easily combine traditional traversing, shaft connections, GPS and gyroscopic azimuths is handy. As more data becomes available, you can add it in and refine your network, often requiring only a trivial amount of effort.
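
For anyone who hasn't run one, below is a minimal sketch of a free-network (inner-constraint) adjustment for a tiny levelling loop, using the pseudo-inverse of the singular normal matrix to get the minimum-norm solution. The point names, observation values and equal weights are purely illustrative:

```python
# Minimal free-network (inner-constraint) adjustment of a tiny 1-D levelling
# loop. No point is held fixed, so the normal matrix is singular and the
# pseudo-inverse supplies the minimum-norm (free-net) solution.
import numpy as np

# Unknowns: heights of points 1, 2, 3.
# Observations: three height differences forming a closed loop with a 2 mm misclose.
A = np.array([[-1.0,  1.0,  0.0],   # h2 - h1
              [ 0.0, -1.0,  1.0],   # h3 - h2
              [ 1.0,  0.0, -1.0]])  # h1 - h3
l = np.array([1.002, 0.499, -1.503])  # observed height differences (m)
P = np.eye(3)                         # equal weights, for simplicity

N = A.T @ P @ A                       # singular: the height datum is undefined
u = A.T @ P @ l
x = np.linalg.pinv(N) @ u             # minimum-norm solution (heights sum to zero)
v = A @ x - l                         # residuals show only the internal misclose

print(np.round(x, 4))   # heights relative to the network's own mean
print(np.round(v, 4))   # the 2 mm loop misclose distributed by least squares
```

The residuals here reflect only the internal consistency of the observations, which is exactly why a free-net run makes a good first check before any control is introduced.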

What’s Out There?
This is a long way from an exhaustive list of packages, and I'd encourage readers to share their own experiences. I'd say there are generally three different classes of least squares adjustment package out there:
 - There are the more traditional, text-based versions, usually with a simple PC interface to handle the processing (Fixit2). Text file in and text file out. Much less common these days.
 - The second type of LS adjustment is usually bundled up inside a CAD package – like 12dModel.
 - The third type is becoming more common. It's a lot like the first type I mentioned, but once the data is read in, the user manipulates, assesses and manages it via a Windows interface/flexgrid/tree etc. CompNet and Star*Net are along these lines, and while I haven't used it, Elfy seems like a good start towards taking this approach even further. Newer packages like this include a GUI and graphics during the processing and output stages, making them much more useful and interactive.

What’s Possible for the Future?
Without considering the market-size and viability of this, here are some thoughts:
- Web-based LS package: It could be offered as SaaS (Software as a Service), paid by usage or by subscription. An advantage of this is that while most geomatics companies need to use LS software, sometimes it's only for a couple of months at a time. A subscription-based service also avoids the hassle of USB licence keys for geographically distributed organizations.

- To my mind, good input data management is crucial. Before an LS adjustment is even started it should be possible to assess and analyze the consistency and quality of the input data. Even little things, like checking the consistency of Hz and Vt collimation across setups, would help ensure the data going into your adjustment is good (there's a sketch of this after the list).
- Merge, join and manage multiple campaigns of data: Think of a large project in which multiple surveys ultimately combine to create a single adjustment. A clean way to handle that would be very welcome.
- Vertical refraction calculation and application: While not strictly reciprocal measurements, foresight and backsight vertical angles can give a good estimate of k (the vertical refraction coefficient). It would also be nice to be able to apply that estimate to radiations measured from the same setup (see the sketch below). I wouldn't call this a requirement for high-level control surveys, but it would be incredibly useful for the high-productivity monitoring and engineering survey workflows I've encountered in the past. When uncorrected, vertical refraction shows up as height or vertical angle residuals, which can make your variance factor look worse than it is and possibly even hide bad input data.
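
On the refraction point, here is a minimal sketch of the idea, assuming the foresight and backsight zenith angles can be treated as approximately reciprocal. It uses the standard reciprocal-observation estimate of k; the function names and the numbers are mine, not from any particular package:

```python
# Hedged sketch: estimate the refraction coefficient k from a pair of
# (approximately) reciprocal zenith angles, then apply it to radiations
# from the same setup. All names and numbers are illustrative.
import math

R = 6_371_000.0  # mean Earth radius (m)

def refraction_coefficient(z_fore_deg, z_back_deg, dist_m, radius=R):
    """k from near-reciprocal zenith angles over a line of length dist_m.
    Uses z_fore + z_back = 180 deg + (s/R) * (1 - k), solved for k."""
    z_sum = math.radians(z_fore_deg + z_back_deg)
    return 1.0 - (z_sum - math.pi) * radius / dist_m

def trig_height_diff(z_deg, slope_dist_m, k, hi=0.0, ht=0.0, radius=R):
    """Height difference from a single zenith angle, corrected for Earth
    curvature and refraction using the locally estimated k."""
    z = math.radians(z_deg)
    horiz = slope_dist_m * math.sin(z)
    return slope_dist_m * math.cos(z) + (1.0 - k) * horiz ** 2 / (2.0 * radius) + hi - ht

# Made-up example: a 1.5 km line observed in both directions.
k = refraction_coefficient(89.9800, 90.0317, 1500.0)
print(round(k, 2))                                  # ~0.13 for these numbers
print(round(trig_height_diff(88.5, 400.0, k), 3))   # a radiation from the same setup
```

Even a rough per-setup k like this keeps refraction out of your residuals rather than letting it masquerade as measurement noise.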
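
And on the pre-processor idea a couple of points up, here is a sketch of one basic consistency check: deriving horizontal collimation and vertical index errors from face-left/face-right pairs and flagging any setup that disagrees with the rest. The field layout, observation values and tolerance are assumptions for illustration only:

```python
# Hedged sketch of a pre-processor check: per-setup horizontal collimation
# and vertical index errors from face-left/face-right pairs, flagged when a
# setup is inconsistent with the others. All values are invented.
from statistics import median

def collimation_from_faces(hz_fl, hz_fr, va_fl, va_fr):
    """Horizontal collimation and vertical index error (degrees) from one
    face-left/face-right pair. hz_* are circle readings, va_* zenith angles."""
    hz_c = ((hz_fl + 180.0 - hz_fr + 540.0) % 360.0 - 180.0) / 2.0
    vt_i = (va_fl + va_fr - 360.0) / 2.0
    return hz_c, vt_i

# (setup_id, hz_FL, hz_FR, va_FL, va_FR) observations, made-up numbers
obs = [("STN1", 10.00210, 190.00030, 89.99820, 270.00240),
       ("STN2", 45.12360, 225.12190, 92.50110, 267.49950),
       ("STN3", 300.55550, 120.54400, 88.00150, 271.99690)]

TOL = 15.0 / 3600.0  # flag anything more than 15" from the network median

results = [(sid, *collimation_from_faces(fl, fr, vl, vr))
           for sid, fl, fr, vl, vr in obs]
med_c = median(c for _, c, _ in results)
for sid, c, i in results:
    flag = "CHECK" if abs(c - med_c) > TOL else "ok"
    print(f'{sid}: hz collim {c * 3600:6.1f}"  vt index {i * 3600:6.1f}"  {flag}')
```

Nothing sophisticated, but catching a drifting collimation value before the adjustment is far cheaper than chasing its residuals afterwards.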


Anyway, thanks for sticking with me through this lengthy and wordy post. I'd love to hear more about your experiences, so please like, share, follow and comment.

Until next time, KODA