Abstract: “Precision” may be thought of either as the closeness with which a reported value approximates a “true” value, or as the number of digits carried in computations, depending on context. With suitable formal definitions, it is shown that the precision of a reported value is the difference between the precision with which computations are performed and the “loss” in precision due to the computations. Loss in precision is a function of the quantity computed and of the algorithm used to compute it; in the case of the usual “computing formula” for variances and covariances, it is shown that the loss of precision is expected to be log k_i k_j, where k_i, the reciprocal of the coefficient of variation, is the ratio of the mean to the standard deviation of the i-th variable. When the precision of a reported value, the precision of computations, and the loss of precision due to the computations are expressed to the same base, all three quantities have the units of significant digits in the corresponding number system. Using this metric for “precision,” the expected precision of a computed (co)variance may be estimated in advance of the computation; for data reported in the paper, the estimates agree closely with observed precision. Implications are drawn for the programming of general-purpose statistical programs, as well as for users of existing programs, in order to minimize the loss of precision resulting from characteristics of the data. A nomograph is provided to facilitate the estimation of precision in binary, decimal, and hexadecimal digits.
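The expected-loss result in the abstract can be illustrated numerically. The sketch below (a minimal Python/NumPy example; the function name `expected_loss_digits` and the chosen data are assumptions, not from the paper) computes a variance with the “computing formula” (sum of squares minus a correction term) in single precision and compares the digits actually retained against the predicted loss of about log10(k_i k_j) decimal digits, with k_i = k_j = mean/sd:

```python
import numpy as np

def expected_loss_digits(mean, sd):
    # Predicted decimal digits lost by the "computing formula" for a
    # variance: log10(k_i * k_j), where k = mean / sd (here k_i = k_j = k).
    k = mean / sd
    return np.log10(k * k)

# Illustrative data (hypothetical): k = mean/sd = 1000, so roughly
# 6 decimal digits should be lost of the ~7.2 that single precision carries.
rng = np.random.default_rng(0)
n, mean, sd = 10_000, 1000.0, 1.0
x = rng.normal(mean, sd, n).astype(np.float32)

# Reference value: stable two-pass variance computed in double precision.
ref = np.var(x.astype(np.float64), ddof=1)

# "Computing formula" carried out entirely in single precision:
# cancellation between sxx and sx**2/n causes the predicted loss.
sx = x.sum(dtype=np.float32)
sxx = (x * x).sum(dtype=np.float32)
var_cf = float((sxx - sx * sx / np.float32(n)) / np.float32(n - 1))

rel_err = abs(var_cf - ref) / ref
digits_correct = -np.log10(rel_err)
print("predicted digits lost:", expected_loss_digits(mean, sd))
print("digits correct in computing-formula result:", digits_correct)
```

With k = 1000 the prediction is a loss of about 6 of the roughly 7.2 decimal digits a 32-bit float carries, so only one or two significant digits of the variance survive; this is the kind of advance estimate, made before any computation on the data, that the paper proposes.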