Error Propagation in Long Traverses
There is a good reason why the longest traverses in a project often have the worst statistics (large F-Ratios). The statistical model used in a least-squares adjustment assumes that systematic error has been eliminated, yet in most compass-and-tape (CT) traverses longer than a few hundred meters, the uncertainty in compass calibration plays a larger role than the random errors of measurement. Suppose, for example, that the standard deviation (SD) of the error in a true-north-relative instrument calibration is only a tenth of a degree, which would have to be considered a very good calibration. Even so, in an east-west traverse with this instrument, the SD of the error in northing would grow in direct proportion to traverse length: the length multiplied by the sine of 0.1 degrees, or about 0.17 m at 100 m and 1.75 m at 1 km.
This assumes that random error behaves in accordance with the Walls default assumptions (UVE = 1.0) and that there are no blunders. In most cave surveys the uncertainty in instrument calibration and/or magnetic declination is several times larger than 0.1 degree, in which case systematic error grows with distance faster by the same factor. Even if the traverse meanders somewhat, the effect of random measurement error grows in proportion to the square root of traverse length, while that of systematic error grows linearly with east-west extent.
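The contrasting growth rates can be sketched in a few lines of Python. The systematic term is exactly the length times sin(0.1°) described above; the random term's scale (about 1.75 m at 1 km) is an assumed illustrative value chosen to be consistent with this discussion, not a figure taken from the Walls source code.

```python
import math

def systematic_sd(length_m, calib_sd_deg=0.1):
    """Northing error SD from a compass calibration bias in an
    east-west traverse: grows linearly with traverse length."""
    return length_m * math.sin(math.radians(calib_sd_deg))

def random_sd(length_m, sd_at_1km=1.75):
    """Random measurement error SD: grows with the square root of
    traverse length (sd_at_1km is an assumed illustrative scale)."""
    return sd_at_1km * math.sqrt(length_m / 1000.0)

for length in (100, 500, 1000, 2000, 5000):
    print(f"{length:5d} m   systematic {systematic_sd(length):5.2f} m"
          f"   random {random_sd(length):5.2f} m")
```

Doubling the traverse length doubles the systematic SD but increases the random SD by only a factor of √2, which is why the longest traverses are the ones where the default model breaks down.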
Because systematic error can't be eliminated completely in CT surveys, the default variance assignment can be a poor approximation when applied to very long traverses. Statistics will sometimes show this when a project contains widely separated GPS locations connected by CT surveys. One way to help remedy the situation is to override the default variances of specific traverses, which you can do with the UV parameter on a #units directive. For example, under the assumptions above, a 1-km traverse would exhibit the same total error SD (2.47 m) as Walls would assume by default for a 2-km traverse: the systematic and random SDs at 1 km are each about 1.75 m, and they combine in quadrature to 2.47 m. To correct for this you would apply "UVH=2" to the vectors in the 1-km traverse, causing the horizontal components to be assigned twice their normal variance. The program currently provides no way to favor one horizontal component over the other in a variance override. (See Choice of Mathematical Model.)
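The arithmetic behind the UVH=2 figure can be checked directly. This sketch assumes, as above, that at 1 km the default random SD and the 0.1-degree systematic SD are each about 1.75 m; the override factor is then the ratio of the true total variance to the variance the default model assigns.

```python
import math

# Assumed illustrative values consistent with the discussion above:
# at 1 km, the default random SD and the systematic SD from a
# 0.1-degree calibration error are each about 1.75 m.
random_sd_1km = 1.75
systematic_sd_1km = 1000 * math.sin(math.radians(0.1))

# Independent errors: total variance is the sum of the variances.
total_var = random_sd_1km**2 + systematic_sd_1km**2
total_sd = math.sqrt(total_var)            # about 2.47 m

# Suggested UVH override: true total variance divided by the
# variance the default model assigns; here roughly 2.
uvh = total_var / random_sd_1km**2
print(f"total SD {total_sd:.2f} m, suggested UVH {uvh:.1f}")
```

If the calibration SD were larger, the systematic variance (and hence the override factor) would grow with its square, which is why long traverses with mediocre calibration may need UVH values well above 2.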
Another possible strategy, one that I recommend in any case if you have very good, well-separated GPS locations, is to reduce the effects of systematic error by deriving plausible compass corrections that make the adjustment most consistent with those locations (i.e., that yield the smallest UVE). The macro feature can help with this task.
Determining proper weights in an average requires both craft and common sense. Weight should correspond inversely to error variance, but the latter is rarely known with any confidence. What we can usually hope for in cave surveying is that our assumptions about measurement error won't in themselves limit the accuracy of our results to a practically significant degree.
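The inverse-variance weighting principle mentioned above can be stated in a few lines. This is a generic sketch of the standard technique, not code taken from Walls.

```python
def weighted_mean(values, variances):
    """Inverse-variance weighted mean: each observation's weight is
    the reciprocal of its error variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Two measurements of the same quantity; the second has twice the
# SD (four times the variance), so it gets a quarter of the weight.
est = weighted_mean([10.0, 14.0], [1.0**2, 2.0**2])
print(est)  # 10.8
```

The estimate lands much closer to the more precise measurement, which is the whole point: if the variances are badly misjudged, the weights pull the result toward the wrong observations.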