Absolute and relative errors. Measurement errors


Suppose some quantity A is measured n times under the same conditions. The measurements yield a set of n generally different numbers a_1, a_2, ..., a_n.

The absolute error is a dimensional quantity. Among the n values of the absolute error there are necessarily both positive and negative ones.

For the most probable value of the quantity A one usually takes the average of the measurement results:

$$\langle A \rangle = \frac{1}{n}\sum_{i=1}^{n} a_i .$$

The greater the number of measurements, the closer the average value is to the true value.

The absolute error of the i-th measurement is

$$\Delta A_i = \langle A \rangle - a_i .$$

The relative error of the i-th measurement is the quantity

$$\varepsilon_i = \frac{|\Delta A_i|}{\langle A \rangle} .$$

The relative error is a dimensionless quantity. It is usually expressed as a percentage; for this, ε_i is multiplied by 100%. The magnitude of the relative error characterizes the accuracy of the measurement.

The average absolute error is defined as

$$\langle \Delta A \rangle = \frac{1}{n}\sum_{i=1}^{n} |\Delta A_i| .$$

We emphasize the need to sum the absolute values (moduli) of the quantities ΔA_i; otherwise the result is identically zero.

The average relative error is the quantity

$$\langle \varepsilon \rangle = \frac{\langle \Delta A \rangle}{\langle A \rangle} .$$

For a large number of measurements ⟨A⟩ approaches the true value A, so ⟨ε⟩ ≈ ⟨ΔA⟩/A.

The relative error can be regarded as the error per unit of the measured value.
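These definitions can be sketched in a few lines of Python (the measurement values below are illustrative, not taken from the text):

```python
from statistics import mean

# Hypothetical results of n = 5 repeated measurements of the same quantity A
a = [10.2, 10.5, 10.3, 10.4, 10.1]

a_mean = mean(a)                          # <A>, the best estimate of A
abs_errors = [a_mean - ai for ai in a]    # Delta A_i (signed values)
mean_abs_error = mean(abs(d) for d in abs_errors)   # <Delta A>, mean of the moduli
mean_rel_error = mean_abs_error / a_mean            # <epsilon>, dimensionless
```

Note that the signed errors ΔA_i sum to zero, which is exactly why the moduli must be averaged.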

The accuracy of measurements is judged by comparing the errors of the measurement results. Errors are therefore expressed in a form that allows accuracy to be assessed by comparing only the errors themselves, without comparing the sizes of the measured objects, or knowing those sizes only roughly. Practice shows that the absolute error of an angle measurement does not depend on the value of the angle, whereas the absolute error of a length measurement does depend on the length: the larger the length, the larger the absolute error for a given method and measurement conditions. Hence the absolute error of a result can be used to judge the accuracy of an angle measurement, but not the accuracy of a length measurement. Expressing the error in relative form makes it possible, in known cases, to compare the accuracy of angular and linear measurements.


Basic concepts of probability theory. Random error.

A random error is the component of measurement error that changes randomly upon repeated measurements of the same quantity.

When repeated measurements of the same constant, unchanging quantity are carried out with the same care and under the same conditions, the results obtained partly differ from one another and partly coincide. Such discrepancies indicate the presence of random error components in the results.

Random error arises from the simultaneous influence of many sources, each of which in itself has an imperceptible effect on the measurement result, but the total influence of all sources can be quite strong.

Random errors are an inevitable consequence of any measurements and are caused by:

a) inaccuracy of readings on the scales of instruments and tools;

b) non-identity of conditions for repeated measurements;

c) random, uncontrollable changes in external conditions (temperature, pressure, force fields, etc.);

d) all other influences on the measurements whose causes are unknown to us.

The magnitude of the random error can be minimized by repeating the experiment many times and by appropriate mathematical processing of the results.

A random error can take on different absolute values that cannot be predicted for a given measurement, and it is equally likely to be positive or negative. Random errors are always present in an experiment. In the absence of systematic errors, they cause the scatter of repeated measurements about the true value.

Let us assume that the period of oscillation of a pendulum is measured using a stopwatch, and the measurement is repeated many times. Errors in starting and stopping the stopwatch, an error in the reading value, a slight unevenness in the movement of the pendulum - all this causes scattering of the results of repeated measurements and therefore can be classified as random errors.

If there are no other errors, some results will be somewhat overestimated and others somewhat underestimated. But if, in addition, the stopwatch runs slow, then all the results will be underestimated. That is a systematic error.

Some factors can cause both systematic and random errors at the same time. So, by turning the stopwatch on and off, we can create a small irregular spread in the starting and stopping times of the clock relative to the movement of the pendulum and thereby introduce a random error. But if, moreover, we are in a hurry to turn on the stopwatch every time and are somewhat late to turn it off, then this will lead to a systematic error.

Random errors are caused by parallax error when counting instrument scale divisions, shaking of the foundation of a building, the influence of slight air movement, etc.

Although random errors cannot be excluded from individual measurements, the mathematical theory of random phenomena allows us to reduce their influence on the final measurement result. It will be shown below that for this one must make not one but several measurements; the smaller the error we wish to obtain, the more measurements are needed.
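A small simulation (pure Python, with illustrative noise parameters) shows how averaging suppresses random error: the scatter of the mean of n readings falls roughly as 1/√n:

```python
import random
from statistics import mean, pstdev

random.seed(42)  # reproducible illustration

def scatter_of_mean(n, trials=2000, sigma=1.0):
    """Std. deviation of the average of n noisy readings of a true value 0."""
    means = [mean(random.gauss(0.0, sigma) for _ in range(n))
             for _ in range(trials)]
    return pstdev(means)

s4 = scatter_of_mean(4)      # expect about sigma / sqrt(4)  = 0.5
s100 = scatter_of_mean(100)  # expect about sigma / sqrt(100) = 0.1
```

Quadrupling the number of measurements halves the random scatter of the mean.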

Due to the fact that the occurrence of random errors is inevitable and unavoidable, the main task of any measurement process is to reduce errors to a minimum.

The theory of errors is based on two main assumptions, confirmed by experience:

1. With a large number of measurements, random errors of equal magnitude but opposite sign occur equally often; that is, errors toward increasing and toward decreasing the result are equally frequent.

2. Errors that are large in absolute value occur less often than small ones; the probability of an error decreases as its magnitude grows.

The behavior of random variables is described by statistical laws, which are the subject of probability theory. The statistical definition of the probability w_i of event i is the ratio

$$w_i = \frac{n_i}{n},$$

where n is the total number of experiments and n_i is the number of experiments in which event i occurred. The total number of experiments must be very large (n → ∞). With a large number of measurements, random errors obey a normal (Gaussian) distribution, whose main features are the following:

1. The greater the deviation of the measured value from the true value, the less likely it is for such a result.

2. Deviations in both directions from the true value are equally probable.

From the above assumptions it follows that, in order to reduce the influence of random errors, a quantity should be measured several times. Suppose we measure some quantity x, and that n measurements are made: x_1, x_2, ..., x_n, by the same method and with the same care. It can be expected that the number dn of results lying in some fairly narrow interval from x to x + dx is proportional to:

the size dx of the interval taken;

the total number n of measurements.

The probability dw(x) that some value lies in the range from x to x + dx is defined as

$$dw(x) = \frac{dn}{n} = f(x)\,dx$$

(in the limit of the number of measurements n → ∞).

The function f(x) is called the distribution function, or probability density.

As a postulate of error theory it is accepted that the results of direct measurements and their random errors, when they are numerous, obey the normal distribution law.

The continuous distribution function of a random variable x found by Gauss has the form

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\; e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$

where μ and σ are the distribution parameters.

The parameter μ of the normal distribution equals the mean value ⟨x⟩ of the random variable, which for an arbitrary known distribution function is given by the integral

$$\langle x \rangle = \int_{-\infty}^{\infty} x\, f(x)\, dx = \mu .$$

Thus μ is the most probable value of the measured quantity x, i.e. its best estimate.

The parameter σ² of the normal distribution equals the variance D of the random variable, which in the general case is given by the integral

$$D = \sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx .$$

The square root of the variance is called the standard deviation of the random variable.

The average deviation (average error) ⟨σ⟩ of the random variable is determined from the distribution function as

$$\langle \sigma \rangle = \int_{-\infty}^{\infty} |x - \mu|\, f(x)\, dx .$$

The average measurement error ⟨σ⟩ calculated from the Gaussian distribution function is related to the standard deviation σ as follows:

$$\langle \sigma \rangle = \sigma\sqrt{\frac{2}{\pi}} \approx 0.8\,\sigma .$$

The parameters σ and μ are related as follows:

$$f(\mu) = \frac{1}{\sigma\sqrt{2\pi}} .$$

This expression allows σ to be found from a plotted normal distribution curve, since f(μ) is the height of its maximum.

The graph of the Gaussian function is shown in the figures. The function f(x) is symmetric about the ordinate drawn at the point x = μ, passes through a maximum at x = μ, and has inflection points at x = μ ± σ. Thus σ characterizes the width of the distribution function: it shows how widely the values of the random variable are scattered about the true value. The more accurate the measurements, the closer the individual results lie to the true value, i.e. the smaller σ is. Figure A shows the function f(x) for three values of σ.

The area of the figure enclosed by the curve f(x) and the vertical lines drawn from the points x_1 and x_2 (Fig. B) is numerically equal to the probability that a measurement result falls in the interval Δx = x_2 − x_1; this probability is called the confidence probability. The area under the entire curve f(x) equals the probability of the random variable falling in the interval from −∞ to +∞, i.e.

$$\int_{-\infty}^{\infty} f(x)\, dx = 1,$$

since the probability of a certain event is equal to one.
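The 0.8σ relation quoted above can be checked numerically; here is a sampling sketch (standard library only, with an illustrative sample size and parameters):

```python
import random
from statistics import mean

random.seed(7)
mu, sigma = 0.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(200_000)]

# Mean absolute deviation of a normal variable
mean_abs_dev = mean(abs(x - mu) for x in samples)

# Theory: <s> = sigma * sqrt(2/pi) ~= 0.798 * sigma, i.e. about 0.8 * sigma
ratio = mean_abs_dev / sigma
```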

Using the normal distribution, error theory poses and solves two main problems. The first is to assess the accuracy of the measurements taken. The second is to assess the accuracy of the arithmetic mean of the measurement results.

Confidence interval. Student's coefficient.

Probability theory allows us to determine the size of the interval in which the results of individual measurements lie with a known probability w. This probability is called the confidence probability, and the corresponding interval (⟨x⟩ ± Δx)_w is called the confidence interval. The confidence probability is also equal to the relative proportion of results that fall within the confidence interval.

If the number of measurements n is sufficiently large, the confidence probability expresses the fraction of the total number n of measurements in which the measured value fell within the confidence interval. Each confidence probability w corresponds to its own confidence interval: the wider the confidence interval, the greater the probability of obtaining a result within it. Probability theory establishes a quantitative relationship between the confidence interval, the confidence probability, and the number of measurements.

If we choose as the confidence interval the one corresponding to the average error, Δa = ⟨ΔA⟩, then for a sufficiently large number of measurements it corresponds to the confidence probability w ≈ 60%. As the number of measurements decreases, the confidence probability corresponding to such a confidence interval (⟨A⟩ ± ⟨ΔA⟩) decreases.

Thus, the average error ⟨ΔA⟩ can be used to estimate the confidence interval of a random variable.

To characterize the magnitude of the random error, two numbers must be specified: the value of the confidence interval and the value of the confidence probability. Indicating only the magnitude of the error, without the corresponding confidence probability, is largely meaningless.

If the average measurement error ⟨σ⟩ is known, the confidence interval is written as (⟨x⟩ ± ⟨σ⟩)_w, determined with confidence probability w = 0.57.

If the standard deviation σ of the distribution of measurement results is known, the interval has the form (⟨x⟩ ± t_w σ)_w, where t_w is a coefficient that depends on the confidence probability and is calculated from the Gaussian distribution.

The most commonly used values of Δx are given in Table 1.

The true value of a physical quantity is almost impossible to determine absolutely exactly, because every measurement operation involves a number of errors, or inaccuracies. The causes of errors vary widely: they may be connected with inaccuracies in the manufacture and adjustment of the measuring device, with the physical features of the object under study (for example, when measuring the diameter of a wire of non-uniform thickness, the result depends randomly on where the measurement is made), with random causes, and so on.

The experimenter’s task is to reduce their influence on the result, and also to indicate how close the result obtained is to the true one.

There are concepts of absolute and relative error.

The absolute error of a measurement is the difference between the measurement result and the true value of the measured quantity:

$$\Delta x_i = x_i - x_{\text{true}}, \qquad (2)$$

where Δx_i is the absolute error of the i-th measurement, x_i is the result of the i-th measurement, and x_true is the true value of the measured quantity.

The result of any physical measurement is usually written in the form

$$x = \langle x \rangle \pm \Delta x, \qquad (3)$$

where ⟨x⟩ is the arithmetic mean of the measured value, closest to the true value (the validity of x_true ≈ ⟨x⟩ will be shown below), and Δx is the absolute measurement error.

Equality (3) should be understood to mean that the true value of the measured quantity lies in the interval [⟨x⟩ − Δx, ⟨x⟩ + Δx].

Absolute error is a dimensional quantity; it has the same dimension as the measured quantity.

The absolute error alone does not fully characterize the accuracy of a measurement. Indeed, if segments 1 m and 5 mm long are measured with the same absolute error of ±1 mm, the accuracies of the two measurements are incomparable. Therefore, along with the absolute measurement error, the relative error is calculated.

The relative error of a measurement is the ratio of the absolute error to the measured value itself:

$$\varepsilon = \frac{\Delta x}{x} .$$

The relative error is a dimensionless quantity; it is expressed as a percentage:

$$\varepsilon = \frac{\Delta x}{x} \cdot 100\% .$$

In the example above the relative errors are 0.1% and 20%. They differ markedly, although the absolute errors are the same. The relative error thus carries information about the accuracy of the measurement.
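The comparison can be sketched in Python; the helper function is hypothetical, and the lengths are those of the example above converted to one unit (cm):

```python
def relative_error_percent(abs_error, value):
    """Relative error of a measurement, expressed in percent."""
    return abs_error / value * 100

# The same 1 cm absolute error on two very different lengths (both in cm)
short_segment = relative_error_percent(1, 10)    # a 10 cm segment
long_segment = relative_error_percent(1, 1000)   # a 10 m segment = 1000 cm
print(short_segment, long_segment)  # → 10.0 0.1
```

The identical absolute error gives a 10% relative error in one case and 0.1% in the other.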

Measurement errors

By the nature of their manifestation and the causes of their occurrence, errors can be divided into the following classes: instrumental, systematic, random, and blunders (gross errors).

Blunders are caused by a malfunction of the device, by a violation of the measurement procedure or experimental conditions, or are subjective in nature. In practice they show up as results that differ sharply from the others. To prevent them, one must work carefully and attentively with the instruments. Results containing blunders must be excluded from consideration (discarded).

Instrumental errors. If a measuring device is in good working order and properly adjusted, measurements can be made with it only to a limited accuracy determined by the type of device. The instrumental error of a pointer instrument is customarily taken equal to half the smallest division of its scale. In instruments with digital readout, the instrumental error is equated to the value of one unit of the least significant digit of the display.

Systematic errors are errors whose magnitude and sign are constant for the entire series of measurements carried out by the same method and using the same measuring instruments.

When carrying out measurements, it is important not only to take systematic errors into account but also to ensure their elimination.

Systematic errors are conventionally divided into four groups:

1) errors whose nature is known and whose magnitude can be determined quite accurately. An example is the change of the measured mass in air, which depends on temperature, humidity, air pressure, etc.;

2) errors whose nature is known but whose magnitude is unknown. These include errors caused by the measuring device: a malfunction of the device itself, a scale whose zero is offset, or the accuracy class of the device;

3) errors whose existence may not even be suspected, although their magnitude can often be significant. They occur most often in complex measurements. A simple example is measuring the density of a sample that contains a cavity inside;

4) errors caused by the properties of the measured object itself. For example, to measure the electrical conductivity of a metal, a piece of wire is taken from it. Errors arise if the material has a defect (a crack, a thickening of the wire, or an inhomogeneity) that changes its resistance.

Random errors are errors that change randomly in sign and magnitude under identical conditions of repeated measurements of the same quantity.




Absolute and relative errors are used to assess inaccuracy in highly complex calculations, as well as in various measurements and in rounding calculation results. Let us consider how to determine the absolute and relative error.

Absolute error

The absolute error of a number is the difference between that number and its exact value.
Consider an example: there are 374 students in a school. If we round this number to 400, the absolute error of the rounding is 400 − 374 = 26.

The absolute error is taken as a positive quantity, so to calculate it the smaller number is subtracted from the larger one.

There is a formula for the absolute error. Denote the exact number by A and the approximation by a. An approximate number is one that differs only slightly from the exact value and usually replaces it in calculations. The formula then reads:

Δa = A − a.

In practice, the absolute error alone is not sufficient to evaluate a measurement accurately, and it is rarely possible to know the exact value of the measured quantity in order to calculate it. If a book 20 cm long is measured with an error of 1 cm, the measurement may be considered to have a large error; but if an error of 1 cm is made in measuring a 20-meter wall, the measurement may be considered as accurate as possible. Therefore, in practice, determining the relative measurement error is more important.

The absolute error of a number is recorded using the ± sign. For example, the length of a roll of wallpaper is 30 m ± 3 cm. The limit of the absolute error is called the maximum absolute error.

Relative error

The relative error is the ratio of the absolute error of a number to the number itself. To calculate the relative error in the example with the students, we divide 26 by 374, obtaining 0.0695, which expressed as a percentage is about 7%. The relative error is given as a percentage because it is a dimensionless quantity, and it is a good estimate of measurement accuracy. If we take an absolute error of 1 cm when measuring segments 10 cm and 10 m long, the relative errors are 10% and 0.1% respectively. For a 10 cm segment an error of 1 cm is very large, a 10% error; for a ten-meter segment 1 cm matters little, only 0.1%.

There are systematic and random errors. Systematic is the error that remains unchanged during repeated measurements. Random error arises as a result of the influence of external factors on the measurement process and can change its value.

Rules for calculating errors

There are several rules for estimating the errors of computed results:

  • when adding and subtracting numbers, it is necessary to add up their absolute errors;
  • when dividing and multiplying numbers, it is necessary to add relative errors;
  • when raising to a power, the relative error is multiplied by the exponent.
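The three rules can be sketched directly (the measured values and errors below are illustrative):

```python
# Two measured values with their absolute errors (illustrative numbers)
a, da = 50.0, 0.5    # relative error 1%
b, db = 20.0, 0.4    # relative error 2%

# Rule 1: addition/subtraction -> absolute errors add
d_sum = da + db                  # 0.5 + 0.4 = 0.9

# Rule 2: multiplication/division -> relative errors add
rel_prod = da / a + db / b       # 0.01 + 0.02 = 0.03, i.e. 3%

# Rule 3: raising to the power k -> relative error times the exponent
k = 3
rel_pow = k * (da / a)           # 3 * 0.01 = 0.03
```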

Approximate and exact numbers are written using decimal fractions. Only an average value is taken, since the exact value may have infinitely many digits. To understand how to write such numbers, one must learn about correct and doubtful digits.

Correct digits are those whose place value exceeds the absolute error of the number. If a digit's place value is less than the absolute error, it is called doubtful. For example, for the fraction 3.6714 with an error of 0.002, the correct digits are 3, 6 and 7, and the doubtful ones are 1 and 4. Only the correct digits are kept when writing the approximate number; the fraction then looks like this: 3.67.

One of the most important issues in numerical analysis is the question of how an error that occurs at a certain location during a calculation propagates further, that is, whether its influence becomes larger or smaller as subsequent operations are performed. An extreme case is the subtraction of two almost equal numbers: even with very small errors in both of these numbers, the relative error of the difference can be very large. This relative error will propagate further during all subsequent arithmetic operations.

One source of computational errors is the approximate representation of real numbers in a computer, caused by the finite word length. Although the initial data may be represented in the computer to high accuracy, the accumulation of rounding errors during the calculation can lead to a significant resulting error, and some algorithms may turn out to be entirely unsuitable for real computation on a computer.

Propagation of errors

As a first step in considering the issue of error propagation, it is necessary to find expressions for the absolute and relative errors of the result of each of the four arithmetic operations as a function of the quantities involved in the operation and their errors.

Absolute error

Addition

Let a and b be approximations to the quantities A and B, with corresponding absolute errors Δa and Δb, so that A = a + Δa and B = b + Δb. Then as a result of addition we have

$$A + B = (a + b) + (\Delta a + \Delta b) .$$

The error of the sum, which we denote Δ(a + b), is therefore

$$\Delta(a + b) = \Delta a + \Delta b .$$

Subtraction

In the same way we obtain

$$\Delta(a - b) = \Delta a - \Delta b .$$

Multiplication

When multiplying we have

$$A \cdot B = (a + \Delta a)(b + \Delta b) = ab + a\,\Delta b + b\,\Delta a + \Delta a\,\Delta b .$$

Since the errors are usually much smaller than the quantities themselves, we neglect the product of the errors:

$$A \cdot B \approx ab + a\,\Delta b + b\,\Delta a .$$

The error of the product is

$$\Delta(ab) = a\,\Delta b + b\,\Delta a .$$

Division

For division,

$$\frac{A}{B} = \frac{a + \Delta a}{b + \Delta b} .$$

Let us transform this expression to the form

$$\frac{A}{B} = \frac{a + \Delta a}{b} \cdot \frac{1}{1 + \Delta b / b} .$$

The second factor can be expanded in a series:

$$\frac{1}{1 + \Delta b / b} = 1 - \frac{\Delta b}{b} + \left(\frac{\Delta b}{b}\right)^2 - \dots$$

Multiplying out and neglecting all terms containing products of errors or powers of errors higher than the first, we have

$$\frac{A}{B} \approx \frac{a}{b} + \frac{\Delta a}{b} - \frac{a\,\Delta b}{b^2} .$$

Hence,

$$\Delta\!\left(\frac{a}{b}\right) = \frac{\Delta a}{b} - \frac{a\,\Delta b}{b^2} = \frac{b\,\Delta a - a\,\Delta b}{b^2} .$$
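The first-order formulas for the product and quotient, Δ(ab) = aΔb + bΔa and Δ(a/b) = Δa/b − aΔb/b², can be checked numerically against the exact errors for small Δa, Δb (the numbers below are illustrative):

```python
a, b = 2.0, 5.0          # approximate values
da, db = 1e-4, -2e-4     # their (signed) errors, so A = a + da, B = b + db

A, B = a + da, b + db    # the exact quantities

# First-order propagation formulas
err_mul = a * db + b * da            # error of the product
err_div = da / b - a * db / b**2     # error of the quotient

# Exact errors for comparison
exact_mul = A * B - a * b
exact_div = A / B - a / b
```

The discrepancy is of second order in the errors (here about 1e-8 for the product), which is exactly the size of the neglected terms.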

It must be clearly understood that the sign of the error is known only in very rare cases. It does not follow, for example, that the error grows on addition and shrinks on subtraction just because the addition formula contains a plus and the subtraction formula a minus. If the errors of the two numbers have opposite signs, the situation is reversed: the error decreases on addition and increases on subtraction.

Relative error

Having derived the formulas for the propagation of absolute errors in the four arithmetic operations, it is easy to derive the corresponding formulas for relative errors. For addition and subtraction, the formulas are transformed so that they explicitly contain the relative errors δ_a = Δa/a and δ_b = Δb/b of the original numbers.

Addition

$$\delta_{a+b} = \frac{\Delta a + \Delta b}{a + b} = \frac{a}{a+b}\,\delta_a + \frac{b}{a+b}\,\delta_b .$$

Subtraction

$$\delta_{a-b} = \frac{\Delta a - \Delta b}{a - b} = \frac{a}{a-b}\,\delta_a - \frac{b}{a-b}\,\delta_b .$$

Multiplication

$$\delta_{ab} = \delta_a + \delta_b .$$

Division

$$\delta_{a/b} = \delta_a - \delta_b .$$
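The relative-error rules for multiplication (relative errors add) and division (relative errors subtract) hold to first order; a quick numerical check with illustrative values:

```python
a, b = 3.0, 7.0
da, db = 3e-5, 1.4e-4            # relative errors 1e-5 and 2e-5
rel_a, rel_b = da / a, db / b

A, B = a + da, b + db            # the exact quantities

# Exact relative errors of the product and the quotient
exact_rel_mul = (A * B) / (a * b) - 1
exact_rel_div = (A / B) / (a / b) - 1

# First-order rules
approx_rel_mul = rel_a + rel_b   # delta_ab  = delta_a + delta_b
approx_rel_div = rel_a - rel_b   # delta_a/b = delta_a - delta_b
```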

We begin an arithmetic operation with two approximate values a and b, with corresponding errors Δa and Δb. These errors can be of any origin: the quantities may be experimental results containing measurement errors; they may be results of a pre-computation by some infinite process and thus contain truncation errors; they may be results of previous arithmetic operations and contain rounding errors. Naturally, they can also contain all three types of errors in various combinations.

The formulas above express the error of the result of each of the four arithmetic operations as a function of a, b, Δa and Δb; the rounding error of the arithmetic operation itself is not taken into account. If later we need to compute how the error of this result propagates through subsequent arithmetic operations, then the rounding error must be added separately to the error computed by one of the four formulas.

Computational process graphs

Now consider a convenient way to compute the propagation of error in any arithmetic calculation. To this end we depict the sequence of operations in a calculation by a graph and write coefficients near the arrows of the graph; these allow us to determine the overall error of the final result relatively easily. The method is also convenient because it makes it easy to determine the contribution of any error arising during the calculation to the overall error.

Fig.1. Computational process graph

In Fig. 1 a graph of a computational process is shown. The graph should be read from bottom to top, following the arrows. First the operations on some horizontal level are performed, then the operations on the next higher level, and so on. From Fig. 1 it is clear, for example, that x and y are first added and then multiplied by z. The graph in Fig. 1 is only a picture of the computational process itself. To compute the total error of the result, this graph must be supplemented with coefficients, written next to the arrows according to the following rules.

Addition

Let the two arrows entering an addition circle come from two circles with values a and b. These values may be either initial data or results of previous computations. Then the arrow leading from a to the + sign receives the coefficient a/(a + b), while the arrow leading from b to the + sign receives the coefficient b/(a + b).

Subtraction

If the operation a − b is performed, the corresponding arrows receive the coefficients a/(a − b) and −b/(a − b).

Multiplication

Both arrows included in the multiplication circle receive a coefficient of +1.

Division

If the division a/b is performed, the arrow from a to the division circle receives the coefficient +1, and the arrow from b receives the coefficient −1.

The meaning of all these coefficients is as follows: the relative error of the result of any operation (circle) enters the result of the next operation multiplied by the coefficient of the arrow connecting the two operations.
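As a sketch, the rule can be applied to the computation (x + y) · z of Fig. 1: the addition arrows carry the coefficients x/(x + y) and y/(x + y), the multiplication arrows carry +1, and the accumulated relative error can be compared with a direct perturbation (all values are illustrative):

```python
x, y, z = 2.0, 3.0, 4.0
rx, ry, rz = 1e-5, 2e-5, 3e-5    # relative errors of the inputs

# Addition node: arrows carry coefficients x/(x+y) and y/(x+y)
r_sum = (x / (x + y)) * rx + (y / (x + y)) * ry

# Multiplication node: both arrows carry coefficient +1
r_result = 1 * r_sum + 1 * rz

# Direct check: perturb the inputs and compute the exact relative error
X, Y, Z = x * (1 + rx), y * (1 + ry), z * (1 + rz)
exact_rel = (X + Y) * Z / ((x + y) * z) - 1
```

The graph estimate agrees with the exact relative error to second order in the input errors.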

Examples

Fig. 2. Computational process graph for the addition of four numbers

Let us now apply the graph technique to examples and illustrate what error propagation means in practical calculations.

Example 1

Consider the problem of adding four positive numbers:

$$S = x_1 + x_2 + x_3 + x_4, \qquad x_i > 0 .$$

The graph of this process is shown in Fig. 2. Assume that all initial quantities are given exactly and have no errors, and let ε₁, ε₂ and ε₃ be the relative rounding errors after each successive addition. Successive application of the rule for computing the total error of the final result leads to the formula

$$\delta_S = \varepsilon_3 + \frac{x_1 + x_2 + x_3}{S}\,\varepsilon_2 + \frac{x_1 + x_2}{S}\,\varepsilon_1 .$$

Cancelling the common sum in the first term and multiplying the whole expression by S, we obtain the absolute error

$$\Delta S = (x_1 + x_2)\,\varepsilon_1 + (x_1 + x_2 + x_3)\,\varepsilon_2 + (x_1 + x_2 + x_3 + x_4)\,\varepsilon_3 .$$

Considering that each rounding error satisfies |ε_i| ≤ ½·10^(1−t) (assuming a real number is represented in the computer as a decimal with t significant digits), we finally have

$$|\Delta S| \le \left(3x_1 + 3x_2 + 2x_3 + x_4\right)\cdot \tfrac{1}{2}\cdot 10^{\,1-t} .$$

The bound is smallest when the numbers are added in increasing order of magnitude.
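The practical consequence of this ordering effect can be illustrated by simulating a machine that rounds to t = 4 significant decimal digits after every addition (the numbers and the rounding model are illustrative):

```python
import math

def round_sig(v, t=4):
    """Round v to t significant decimal digits (crude model of a t-digit machine)."""
    if v == 0:
        return 0.0
    return round(v, t - int(math.floor(math.log10(abs(v)))) - 1)

def chain_sum(values, t=4):
    """Sum the values in order, rounding after each addition."""
    total = 0.0
    for v in values:
        total = round_sig(total + v, t)
    return total

xs = [1000.0, 0.4, 0.4, 0.4]      # exact sum: 1001.2
desc = chain_sum(xs)              # large number first: the 0.4s are lost
asc = chain_sum(sorted(xs))       # small numbers first: they accumulate
```

Adding the small numbers first preserves their contribution; adding the large number first rounds them all away.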

Absolute and relative errors

Errors such as the mean (θ), root-mean-square (m), probable (r), true (Δ) and limit (Δ_lim) errors are absolute errors. They are always expressed in units of the measured quantity, i.e. they have the same dimension as the measured value.
Cases often arise when objects of different sizes are measured with the same absolute error. For example, the root-mean-square errors of measuring line lengths l₁ = 100 m and l₂ = 1000 m were both m = 5 cm. Which line was measured more accurately? To avoid ambiguity, the measurement accuracy of a quantity is assessed by the ratio of the absolute error to the value of the measured quantity. This ratio is called the relative error, and it is usually expressed as a fraction with numerator equal to one.
The name of the absolute error determines the name of the corresponding relative measurement error [1].

Let x be the result of measuring a certain quantity. Then

m/x = 1/N is the root-mean-square relative error;

θ/x is the mean relative error;

r/x is the probable relative error;

Δ/x is the true relative error;

Δ_lim/x is the limit relative error.

The denominator N of the relative error should be rounded to two significant figures followed by zeros:

m_x = 0.3 m; x = 152.0 m; m_x/x = 1/510;

m_x = 0.25 m; x = 643.00 m; m_x/x = 1/2600;

m_x = 0.033 m; x = 795.000 m; m_x/x = 1/24000.

As the example shows, the larger the denominator of the fraction, the more accurate the measurement.
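A sketch of forming the denominator N = x/m_x and rounding it to two significant figures, using the values of the example above (the helper `denominator_N` is hypothetical):

```python
import math

def denominator_N(m, x):
    """Denominator N of the relative error 1/N, rounded to two significant figures."""
    n = x / m
    digits = 2 - int(math.floor(math.log10(n))) - 1
    return int(round(n, digits))

cases = [(0.3, 152.0), (0.25, 643.00), (0.033, 795.000)]
Ns = [denominator_N(m, x) for m, x in cases]
print(Ns)  # → [510, 2600, 24000]
```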

Rounding errors

When processing measurement results, an important role is played by rounding errors, which by their properties can be treated as random variables [2]:

1) the maximum error of a single rounding is 0.5 unit of the retained digit;

2) rounding errors of larger and smaller absolute value (within this limit) are equally possible;
3) positive and negative rounding errors are equally possible;
4) the mathematical expectation of rounding errors is zero.
These properties allow rounding errors to be treated as random variables with a uniform distribution. A continuous random variable X is uniformly distributed on the interval [a, b] if on this interval its distribution density is constant and outside it the density is zero (Fig. 2), i.e.

$$\varphi(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b, \\[4pt] 0, & x < a \ \text{or}\ x > b. \end{cases} \qquad (1.32)$$

The distribution function is

$$F(x) = \begin{cases} 0, & x < a, \\[2pt] \dfrac{x-a}{b-a}, & a \le x \le b, \\[4pt] 1, & x > b. \end{cases} \qquad (1.33)$$

Fig. 2. Density of the uniform distribution

The mathematical expectation is

$$M(X) = \frac{a+b}{2} . \qquad (1.34)$$

The variance is

$$D(X) = \frac{(b-a)^2}{12} . \qquad (1.35)$$

The standard deviation is

$$\sigma = \sqrt{D(X)} = \frac{b-a}{2\sqrt{3}} . \qquad (1.36)$$

For rounding errors, a = −0.5 and b = 0.5 (in units of the retained digit), so that

$$M(X) = 0, \qquad D(X) = \frac{1}{12}, \qquad \sigma = \frac{1}{2\sqrt{3}} \approx 0.29$$

in units of the retained digit.
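These properties can be illustrated by simulating rounding errors as uniform random variables on [−0.5, 0.5], in units of the retained digit (sample size is illustrative):

```python
import random
from statistics import mean, pstdev

random.seed(1)
errors = [random.uniform(-0.5, 0.5) for _ in range(100_000)]

m = mean(errors)    # expect ~ 0 (property 4: zero mathematical expectation)
s = pstdev(errors)  # expect (b - a) / (2 * sqrt(3)) = 1 / (2 * sqrt(3)) ~ 0.289
```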


