Correlation function of a deterministic signal. Correlation analysis of discrete signals

The cross-correlation function (CCF) of two different signals describes both the degree of similarity of the signals' shapes and their mutual position along the coordinate (independent variable). Generalizing formula (6.1.1) for the autocorrelation function to two different signals s(t) and u(t), we obtain the following scalar product of the signals:

B su () =s(t) u(t+) dt. (6.2.1)

Mutual correlation of signals characterizes a certain correlation of phenomena and physical processes displayed by these signals, and can serve as a measure of the “stability” of this relationship when signals are processed separately in various devices. For finite-energy signals, the CCF is also finite, while:

|B su ()|  ||s(t)||||u(t)||,

which follows from the Cauchy-Bunyakovsky inequality and the invariance of signal norms with respect to a shift along the coordinate.

With the change of variable t′ = t + τ in formula (6.2.1), we get:

B su () = s(t-) u(t) dt = u(t) s(t-) dt = B us (-).

It follows that the parity condition is not satisfied for the CCF, B su (τ) ≠ B su (-τ), and the values of the CCF are not required to have a maximum at τ = 0.

Fig. 6.2.1. Signals and their CCF.

This can be clearly seen in Fig. 6.2.1, where two identical signals are given with centers at the points 0.5 and 1.5. Calculation by formula (6.2.1) with a gradual increase in the values of τ means successive shifts of the signal s2(t) to the left along the time axis (for each value of s1(t), the values of s2(t+τ) are taken for the integrand product). At τ=0 the signals are orthogonal and B 12 (0)=0. The maximum of B 12 (τ) is observed at τ=1, when the signal s2(t) is shifted to the left by one unit and the signals s1(t) and s2(t+τ) coincide completely.
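This shift behavior is easy to reproduce numerically. The sketch below is a hedged illustration (the grid step and pulse width are assumed, not taken from Fig. 6.2.1): two identical rectangular pulses centered at 0.5 and 1.5 are built, and the lag that maximizes their CCF is located.

```python
import numpy as np

# Two identical rectangular pulses on t in [0, 2), centered at 0.5 and 1.5
# (pulse width 0.4 is an assumed illustration parameter).
dt = 0.01
t = np.arange(0.0, 2.0, dt)
s1 = np.zeros_like(t); s1[30:71] = 1.0    # pulse around t = 0.5
s2 = np.zeros_like(t); s2[130:171] = 1.0  # same pulse around t = 1.5

# B12(tau) = integral s1(t) s2(t + tau) dt, approximated by a discrete sum.
# For np.correlate(s1, s2, 'full'), output index m corresponds to the lag
# n = (len(t) - 1) - m samples in s2(t + n*dt), so the lag axis runs downward.
ccf = np.correlate(s1, s2, mode='full') * dt
lags = np.arange(len(t) - 1, -len(t), -1) * dt

tau_max = lags[np.argmax(ccf)]   # leftward shift of s2 that aligns the pulses
b12_at_zero = ccf[len(t) - 1]    # CCF value at tau = 0
```

As in the text, B 12 (0) = 0 (the pulses do not overlap), while the maximum is reached at τ = 1, where s1(t) and s2(t+τ) coincide.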

The same values of the CCF according to formulas (6.2.1) and (6.2.1") are observed at the same mutual position of the signals: when the signal u(t) is shifted by an interval τ to the right of s(t) along the time axis, and when the signal s(t) is shifted by the same interval to the left of u(t), i.e. B su (τ) = B us (-τ).

Fig. 6.2.2. Cross-correlation functions of signals.

Fig. 6.2.2 shows examples of the CCF for a rectangular signal s(t) and two identical triangular signals u(t) and v(t). All signals have the same duration T, and the signal v(t) is shifted forward by the interval T/2.

The signals s(t) and u(t) are identical in time location, and the signal "overlap" area is maximal at τ=0, which is fixed by the function B su . At the same time, the function B su is sharply asymmetric: since the signal u(t) has an asymmetric shape while s(t) is symmetric (relative to the signal centers), the "overlap" area changes differently depending on the direction of the shift (the sign of τ as |τ| increases from zero). When the initial position of the signal u(t) is shifted to the left along the time axis (ahead of the signal s(t), i.e. the signal v(t)), the shape of the CCF remains unchanged and shifts to the right by the same shift value (the function B sv in Fig. 6.2.2). If the functions in (6.2.1) are interchanged, then the new function B vs is the function B sv mirrored with respect to τ=0.

Taking into account these features, the total CCF is calculated, as a rule, separately for positive and negative delays:

B su () = s(t) u(t+) dt. B us () = u(t) s(t+) dt. (6.2.1")

Cross-correlation of noisy signals. For two noisy signals u(t) = s1(t) + q1(t) and v(t) = s2(t) + q2(t), applying the method of deriving formula (6.1.13) with the copy of the signal s(t) replaced by the signal s2(t), it is easy to derive the cross-correlation formula in the following form:

B uv () = B s1s2 () + B s1q2 () + B q1s2 () + B q1q2 (). (6.2.2)

The last three terms on the right-hand side of (6.2.2) decay to zero as τ increases. For large signal-observation intervals the expression can be written in the following form:

B uv () = B s 1 s 2 () +
+
+
. (6.2.3)

For zero mean values of the noises and their statistical independence from the signals, the following holds:

B uv () → B s 1 s 2 ().

CCF of discrete signals. All properties of the CCF of analog signals are also valid for the CCF of discrete signals, and the features of discrete signals described above for discrete ACFs apply to them as well (formulas 6.1.9-6.1.12). In particular, for Δt = const = 1, for signals x(k) and y(k) with K samples each:

B xy (n) = Σ k x k y k-n . (6.2.4)

When normalized in units of power:

B xy (n) = (1/K) Σ k x k y k-n . (6.2.5)
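Formulas (6.2.4) and (6.2.5) can be written out directly. A small sketch (the test sequences are made up for illustration); samples falling outside the records are treated as zeros, as for finite discrete signals:

```python
import numpy as np

def ccf_discrete(x, y, n):
    # B_xy(n) = sum_k x[k] y[k - n]   (formula 6.2.4, delta_t = 1)
    x, y = np.asarray(x, float), np.asarray(y, float)
    total = 0.0
    for k in range(len(x)):
        if 0 <= k - n < len(y):       # out-of-range samples count as zeros
            total += x[k] * y[k - n]
    return total

def ccf_power(x, y, n):
    # Power-normalized variant (formula 6.2.5): divide by the number of samples K
    return ccf_discrete(x, y, n) / len(x)

x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [1.0, 2.0, 1.0, 0.0, 0.0]          # y[k] = x[k + 1]: y leads x by one sample
vals = [ccf_discrete(x, y, n) for n in range(-2, 3)]
```

The maximum lands at n = 1, the delay that re-aligns y with x, and equals the signal energy Σ x k ² = 6; the CCF is visibly not even in n.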

Estimation of periodic signals in noise. A noisy signal can be evaluated by cross-correlating it with a "reference" signal by trial and error, adjusting the reference so that the cross-correlation function reaches its maximum value.

For a signal u(k) = s(k)+q(k) with statistically independent noise whose mean value tends to zero, the cross-correlation function (6.2.2) with a signal template p(k) (for which q2(k) = 0) takes the form:

B up (k) = B sp (k) + B qp (k).

Since B qp (k) → 0 as the number of samples N increases, B up (k) → B sp (k). Obviously, the function B up (k) will have a maximum when p(k) = s(k). By varying the form of the template p(k) and maximizing the function B up (k), one can obtain an estimate of s(k) in the form of the optimal shape of p(k).
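A hedged sketch of this template search (the hidden signal, the noise level and the family of trial templates are invented for the illustration): square-wave templates of several periods are correlated with a noisy observation, and the true period maximizes B up .

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5000
k = np.arange(K)
s = np.where(k % 100 < 50, 1.0, -1.0)        # hidden square wave, period 100
u = s + rng.normal(0.0, 2.0, K)              # observation buried in strong noise

def b_up(u, p):
    # Zero-lag cross-correlation (1/N) sum u[k] p[k] of observation and template
    return float(np.mean(u * p))

periods = [60, 80, 100, 120, 140]            # trial templates
scores = [b_up(u, np.where(k % T < T // 2, 1.0, -1.0)) for T in periods]
best_period = periods[int(np.argmax(scores))]
```

B qp averages toward zero as N grows, so the score is driven by B sp , which peaks when the template matches the hidden signal; the correct period wins even at a noise power four times the signal power.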

The function of cross-correlation coefficients is a quantitative indicator of the degree of similarity of the signals s(t) and u(t). Similarly to the function of autocorrelation coefficients, it is calculated through the centered values of the functions (to compute the mutual covariance it is sufficient to center only one of them) and is normalized to the product of the standard deviations of the functions s(t) and u(t):

 su () = C su ()/ s  v . (6.2.6)

The values of the correlation coefficients at shifts τ can vary from -1 (complete inverse correlation) to 1 (complete similarity, or one-hundred-percent correlation). At shifts τ for which ρ su (τ) is zero, the signals are mutually uncorrelated. The cross-correlation coefficient makes it possible to establish the presence of a connection between signals regardless of their physical properties and magnitudes.
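A minimal sketch of the coefficient (6.2.6) at zero shift (the test signals are assumed): centering removes offsets and the norm product removes scaling, so only the shape similarity remains.

```python
import numpy as np

def rho(s, u):
    # Cross-correlation coefficient at zero shift: centre both signals and
    # normalize by the product of their norms (formula 6.2.6 with tau = 0).
    s = np.asarray(s, float) - np.mean(s)
    u = np.asarray(u, float) - np.mean(u)
    return float(np.dot(s, u) / (np.linalg.norm(s) * np.linalg.norm(u)))

t = np.arange(1000) / 1000.0
s = np.sin(2 * np.pi * t)
r_same  = rho(s, 3.0 * s + 2.0)              # scale and offset do not matter
r_anti  = rho(s, -s)                         # complete inverse correlation
r_ortho = rho(s, np.cos(2 * np.pi * t))      # uncorrelated over a whole period
```

The three cases land at the three landmark values 1, -1 and 0, regardless of the amplitudes involved.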

When calculating the CCF of noisy discrete signals of limited length by formula (6.2.4), there is a probability of obtaining values |ρ su (n)| > 1.

For periodic signals, the concept of the CCF is usually not used, except for signals with the same period, for example, the input and output signals of a system under study.

In communication theory, correlation theory is used in the study of random processes; it makes it possible to relate the correlation and spectral properties of random signals. The problem often arises of detecting one transmitted signal within another signal or within interference. For reliable detection of signals, the correlation method, based on correlation theory, is applied. In practice it also proves useful to analyze characteristics that give an idea of the rate of change of a signal in time and of its duration, without decomposing the signal into harmonic components.

Let a copy of the signal u(t - τ) be shifted relative to its original u(t) by a time interval τ. To quantify the degree of difference (connection) between the signal u(t) and its shifted copy u(t - τ), the autocorrelation function (ACF) is used. The ACF shows the degree of similarity between the signal and its shifted copy: the larger the value of the ACF, the stronger this similarity.

For a deterministic signal of finite duration (a finite signal), the analytical notation of the ACF is an integral of the form

B(τ) = ∫ u(t) u(t - τ) dt. (2.56)

Formula (2.56) shows that in the absence of a shift of the copy relative to the signal (τ = 0) the ACF is positive, maximal and equal to the signal energy:

B(0) = ∫ u²(t) dt = E.

This is the energy [J] that would be released in a resistor with a resistance of 1 Ohm if the voltage u(t) [V] were connected to its terminals.

One of the most important properties of the ACF is its evenness: B(τ) = B(-τ). Indeed, if in expression (2.56) we change the variable x = t - τ, then

B(τ) = ∫ u(x + τ) u(x) dx.

Therefore, integral (2.56) can be represented in another form:

B(τ) = ∫ u(t) u(t + τ) dt.

For a periodic signal with period T, whose energy is infinitely large (since the signal exists for an infinite time), the calculation of the ACF by formula (2.56) is unsuitable. In this case, the ACF is determined over the period:

B per (τ) = (1/T) ∫ T u(t) u(t - τ) dt. (2.57)

Example 2.3

Let us determine the ACF of a rectangular pulse of amplitude E and duration τ i (Fig. 2.24).

Solution

It is convenient to calculate the ACF of a pulse graphically. Such a construction is shown in Fig. 2.24, a-d, which presents, respectively, the initial pulse u(t), its copy u(t - τ) shifted by τ, and their product u(t)u(t - τ). Consider the graphical calculation of integral (2.56). The product u(t)u(t - τ) is non-zero in the time interval where the signal and its copy overlap. As follows from Fig. 2.24, this interval equals τ i - |τ| if the time shift of the copy is less than the pulse duration. In such cases, the ACF of the pulse is defined as B(τ) = E²(τ i - |τ|), and at zero shift of the copy B(0) = E²τ i is equal to the pulse energy (see Fig. 2.24, d).

Fig. 2.24. a - pulse; b - copy; c - product of the signal and its copy; d - ACF

Often a numerical parameter convenient for analyzing and comparing signals is introduced: the correlation interval τ k , analytically and graphically equal to the width of the base of the ACF. For this example, the correlation interval is τ k = 2τ i .
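The triangular ACF of Example 2.3 is easy to confirm numerically. A sketch with assumed parameters E = 2, τ i = 1 (any values would do):

```python
import numpy as np

E, tau_i, dt = 2.0, 1.0, 0.001
t = np.arange(0.0, 3.0, dt)
u = np.where(t < tau_i, E, 0.0)              # rectangular pulse of amplitude E

# Discrete approximation of B(tau) = integral u(t) u(t - tau) dt
acf = np.correlate(u, u, mode='full') * dt
lags = (np.arange(len(acf)) - (len(u) - 1)) * dt

b0 = acf[len(u) - 1]                          # value at tau = 0: pulse energy
theory = np.where(np.abs(lags) <= tau_i, E**2 * (tau_i - np.abs(lags)), 0.0)
max_err = float(np.max(np.abs(acf - theory)))
```

B(0) = E²τ i is the pulse energy, the base of the ACF has width 2τ i (the correlation interval), and the whole curve matches E²(τ i - |τ|).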

Example 2.4

Determine the ACF of the harmonic (cosine) signal u(t) = U m cos(ωt + φ).


Fig. 2.25. a - harmonic signal; b - ACF of the harmonic signal

Solution

Using formula (2.57) and denoting B per (τ) = B(τ), we find

B(τ) = (U m ²/2) cos ωτ.

It follows from this formula that the ACF of a harmonic signal is itself a harmonic function (Fig. 2.25, b) and has the dimension of power (V²). Note another very important fact: the calculated ACF does not depend on the initial phase of the harmonic signal (the parameter φ).

An important conclusion follows from the analysis: the ACF of almost any signal does not depend on its phase spectrum. Therefore, signals whose amplitude spectra coincide completely but whose phase spectra differ will have the same ACF. Another remark: the original signal cannot be restored from the ACF (again because of the loss of phase information).
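The phase independence is easy to verify numerically. A hedged sketch (the amplitude, frequency and set of test phases are assumed); the per-period ACF (2.57) is computed with a circular shift, which is legitimate because the record holds an integer number of periods:

```python
import numpy as np

A, f, N = 1.5, 5.0, 10_000
t = np.arange(N) / N                         # one-second record, 5 whole periods

def acf_periodic(u, shift):
    # B(tau) over the period: (1/T) sum u(t) u(t - tau) dt, via circular shift
    return float(np.mean(u * np.roll(u, shift)))

shift = 123                                  # arbitrary lag in samples
tau = shift / N
expected = (A**2 / 2) * np.cos(2 * np.pi * f * tau)   # (A^2/2) cos(w tau)

vals = [acf_periodic(A * np.cos(2 * np.pi * f * t + phi), shift)
        for phi in (0.0, 0.7, 1.9, np.pi / 2)]
```

All four initial phases give the same value (A²/2)cos(ωτ): the ACF keeps the amplitude and frequency information but discards the phase.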

Relationship between the ACF and the energy spectrum of the signal. Let the pulse signal u(t) have the spectral density S(ω). We define the ACF by formula (2.56), writing u(t) in the form of the inverse Fourier transform (2.30):

Introducing the new variable x = t - τ, from the last formula we obtain expression (2.58), in which the inner integral

∫ u(x) e jωx dx = S*(ω) (2.59)

is the function complex-conjugate to the spectral density of the signal.

Taking relation (2.59) into account, formula (2.58) takes the form

B(τ) = (1/2π) ∫ S(ω) S*(ω) e jωτ dω. (2.60)

The function

W(ω) = S(ω) S*(ω) = |S(ω)|²

is called the energy spectrum (spectral energy density) of the signal; it shows the distribution of signal energy over frequency. The dimension of the energy spectrum W(ω) is [(V²·s)/Hz].

Taking relation (2.60) into account, we finally obtain the expression for the ACF:

B(τ) = (1/2π) ∫ W(ω) e jωτ dω. (2.61)

So the ACF of a signal is the inverse Fourier transform of its energy spectrum. The direct Fourier transform of the ACF gives the energy spectrum:

W(ω) = ∫ B(τ) e -jωτ dτ. (2.62)

So, the direct Fourier transform (2.62) of the ACF determines the energy spectrum, and the inverse Fourier transform of the energy spectrum (2.61) gives the ACF of a deterministic signal. These results are important for two reasons. First, based on the distribution of energy over the spectrum, it becomes possible to evaluate the correlation properties of signals: the wider the energy spectrum of a signal, the smaller its correlation interval; accordingly, the larger the correlation interval, the narrower the energy spectrum. Second, relations (2.61) and (2.62) make it possible to determine one of the functions experimentally from the other. It is often more convenient to first obtain the ACF and then calculate the energy spectrum using the direct Fourier transform. This technique is widely used in the analysis of signal properties in real time, i.e. without a time delay in processing.
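Relations (2.61)-(2.62) can be checked on a sampled finite signal with the FFT. A sketch (the random test signal is assumed); zero-padding to twice the length makes the circular correlation implied by the DFT coincide with the linear one:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
u = rng.normal(size=N)                        # arbitrary finite test signal

S = np.fft.fft(u, 2 * N)                      # zero-padded spectrum
W = np.abs(S) ** 2                            # energy spectrum W = S * conj(S)
acf_via_fft = np.fft.ifft(W).real             # inverse FT of the energy spectrum

acf_direct = np.correlate(u, u, mode='full')  # ACF by the definition (2.56)
# The ifft output holds lags 0..N-1 first; compare them with the direct ACF.
max_err = float(np.max(np.abs(acf_via_fft[:N] - acf_direct[N - 1:])))
energy = float(np.sum(u ** 2))                # B(0) must equal the signal energy
```

The two routes agree to machine precision, and B(0) equals Σu². In practice the FFT route is the fast way to obtain either function from the other.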

Cross-correlation function of two signals. If it is necessary to evaluate the degree of connection between signals u 1 (t) and u 2 (t), the cross-correlation function (CCF) is used:

B 12 (τ) = ∫ u 1 (t) u 2 (t - τ) dt.

For τ = 0, the CCF equals the so-called mutual energy of the two signals:

E 12 = ∫ u 1 (t) u 2 (t) dt.

The value of the CCF does not change if, instead of a delay of the second signal u 2 (t), we consider an advance of the first signal u 1 (t); therefore

The ACF is a special case of the CCF when the signals are identical, i.e. u 1 (t) = u 2 (t) = u(t). In contrast to the ACF, the CCF of two signals B 12 (τ) is not even and does not necessarily reach its maximum at τ = 0, i.e. in the absence of a time shift between the signals.

From a physical point of view, the correlation function characterizes the relationship or interdependence of two instantaneous values of one signal or of two different signals at the times t and t + τ. In the first case the correlation function is called the autocorrelation function, and in the second, the cross-correlation function. The correlation functions of deterministic processes depend only on the shift τ.

If the signals u 1 (t) and u 2 (t) are given, then the correlation functions are determined by the following expressions:

B 12 (τ) = (1/T) ∫ T u 1 (t) u 2 (t + τ) dt - cross-correlation function; (2.66)

B(τ) = (1/T) ∫ T u(t) u(t + τ) dt - autocorrelation function. (2.67)

If u 1 (t) and u 2 (t) are two periodic signals with the same period T, then their correlation function is obviously also periodic with period T, and hence it can be expanded into a Fourier series.

Indeed, if in expression (2.66) we expand the signal u 2 (t + τ) into a Fourier series, then we obtain

B 12 (τ) = Σ n C 1n * C 2n e jnΩτ , (2.68)

where C 1n and C 2n are the complex amplitudes of the n-th harmonic of the signals u 1 (t) and u 2 (t), respectively, and C 1n * is the complex-conjugate coefficient. The expansion coefficients of the cross-correlation function can be found as the coefficients of the Fourier series:

B 12n = C 1n * C 2n . (2.69)

The frequency expansion of the autocorrelation function is easily obtained from formulas (2.68) and (2.69) by setting u 1 (t) = u 2 (t) = u(t). Then

B(τ) = Σ n |C n |² e jnΩτ . (2.70)

And since for a real signal C -n = C n * and, therefore,

|C -n |² = |C n |², (2.71)

the autocorrelation function is even, and therefore

B(τ) = B(-τ). (2.72)

The evenness of the autocorrelation function allows it to be expanded into a trigonometric Fourier series in cosines:

B(τ) = |C 0 |² + 2 Σ n≥1 |C n |² cos nΩτ.

In the particular case, for τ = 0, we get:

B(0) = |C 0 |² + 2 Σ n≥1 |C n |².

Thus, the autocorrelation function at τ = 0 is the total average power of a periodic signal, equal to the sum of the average powers of all its harmonics.
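This power balance is easy to confirm numerically. A sketch with an assumed three-component periodic signal: B(0), computed as the mean square, must equal the DC power plus Σ A n ²/2.

```python
import numpy as np

N = 4096
t = np.arange(N) / N                          # exactly one period of the signal
u = 1.0 + 2.0 * np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

avg_power = float(np.mean(u ** 2))            # B(0) of a periodic signal

c = np.fft.rfft(u) / N                        # one-sided spectrum
dc = float(c[0].real)                         # constant component
amps = 2.0 * np.abs(c[1:])                    # harmonic amplitudes A_n
power_from_harmonics = dc**2 + float(np.sum(amps**2)) / 2.0
```

For this signal the expected total is 1² + 2²/2 + 0.5²/2 = 3.125, and the two computations agree.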

Frequency representation of pulse signals

In the previous consideration it was assumed that the signals are continuous; however, in automatic information processing, pulsed signals are also often used, as well as the conversion of continuous signals into pulsed ones. This requires consideration of the frequency representation of pulse signals.

Consider the model for converting a continuous signal into pulsed form shown in Fig. 2.6, a.



Let a continuous signal x(t) arrive at the input of the pulse modulator (Fig. 2.6, b). The pulse modulator generates a sequence of single pulses (Fig. 2.6, c) with period T and pulse duration τ, where τ ≪ T. The mathematical model of such a sequence of pulses can be described by the function:

f(t) = Σ k [1(t - kT) - 1(t - kT - τ)], (2.74)

where k is the pulse number in the sequence and 1(t) is the unit step function.

The output signal of the pulse modulator (Fig. 2.6, d) can be represented as the product of the continuous signal and the pulse sequence:

x p (t) = x(t) f(t).

In practice, it is desirable to have a frequency representation of the pulse train. To this end, the function f(t), being periodic, can be represented as a Fourier series:

f(t) = Σ n C n e jnω 1 t , (2.75)

where C n = (1/T) ∫ 0 T f(t) e -jnω 1 t dt are the spectral expansion coefficients of the Fourier series; (2.76)

ω 1 = 2π/T is the pulse repetition frequency;

n is the harmonic number.

Substituting relation (2.74) into expression (2.76), we find:

C n = (1 - e -jnω 1 τ )/(jnω 1 T).

Substituting (2.76) into (2.75) and transforming the resulting difference of sines, we obtain the series in real form:

f(t) = τ/T + Σ n≥1 (2/(nπ)) sin(nπτ/T) cos[nω 1 (t - τ/2)]. (2.79)

Let us introduce the designation for the phase of the n-th harmonic:

φ n = -nω 1 τ/2. (2.81)

Thus, the sequence of single pulses contains, along with the constant component τ/T, an infinite number of harmonics with decreasing amplitudes. The amplitude of the n-th harmonic is determined from the expression:

A n = (2/(nπ)) sin(nπτ/T).
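The |sin x|/x envelope of the harmonic amplitudes can be verified numerically. A hedged sketch (the pulse parameters E = 1, τ/T = 0.2 are assumed), comparing FFT-derived amplitudes with the classical result A n = (2Eτ/T)|sinc(nτ/T)|:

```python
import numpy as np

E, T, tau = 1.0, 1.0, 0.2                    # amplitude, period, pulse width
Ns = 10_000                                  # samples per period
t = np.arange(Ns) / Ns * T
u = np.where(t < tau, E, 0.0)                # one period of the pulse train

c = np.fft.rfft(u) / Ns                      # complex Fourier coefficients
A = 2.0 * np.abs(c[1:20])                    # amplitudes of harmonics 1..19
n = np.arange(1, 20)
# np.sinc(x) = sin(pi x)/(pi x), so this is (2E/(n pi)) |sin(n pi tau / T)|
A_theory = (2 * E * tau / T) * np.abs(np.sinc(n * tau / T))

dc = float(c[0].real)                        # constant component E*tau/T
max_abs_err = float(np.max(np.abs(A - A_theory)))
```

The constant component is Eτ/T, the amplitudes fall off under the sinc envelope, and the harmonics at multiples of T/τ (here every fifth) vanish.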

In digital signal processing, time sampling (discretization) is carried out, that is, the conversion of a continuous signal into a sequence of short pulses. As shown above, any sequence of pulses has a rather complex spectrum, so the natural question arises of how the time-sampling process affects the frequency spectrum of the original continuous signal.

To explore this issue, consider the mathematical model of the time-discretization process shown in Fig. 2.7, a.

A pulse modulator (PM) is represented as a modulator of a carrier that has the form of an ideal sequence of very short pulses (a sequence of δ-functions) whose repetition period is T (Fig. 2.7, b).

A continuous signal is fed to the input of the pulse modulator (Fig. 2.7, c), and a pulse signal is formed at the output (Fig. 2.7, d).


Then the model of an ideal sequence of δ-functions can be described by the following expression:

f δ (t) = Σ k δ(t - kT).

Along with the spectral approach to the description of signals, in practice a characteristic is often needed that would give an idea of certain properties of the signal, in particular the rate of its change in time and its duration, without decomposing it into harmonic components.

The correlation function of the signal is widely used as such a temporal characteristic.

For a deterministic signal s(t) of finite duration, the correlation function is determined by the following expression:

B s (τ) = ∫ s(t) s*(t + τ) dt,

where τ is the time shift of the signal and the asterisk denotes complex conjugation.

This chapter deals with signals that are real functions of time, so the complex-conjugate notation can be omitted:

B s (τ) = ∫ s(t) s(t + τ) dt. (1.78)

It can be seen from expression (1.78) that B s (τ) characterizes the degree of connection (correlation) of the signal s(t) with its copy shifted by τ along the time axis. Clearly, the function B s (τ) reaches a maximum at τ = 0, since any signal is fully correlated with itself. In this case

B s (0) = ∫ s²(t) dt = E, (1.79)

i.e., the maximum value of the correlation function is equal to the signal energy.

As τ increases, the function B s (τ) decreases (not necessarily monotonically), and for a relative shift of the signals s(t) and s(t + τ) exceeding the signal duration it vanishes.

From the general definition of the correlation function it is clear that it does not matter whether the signal is shifted to the right or to the left relative to its copy by the value τ. Therefore, expression (1.78) can be generalized as follows:

B s (τ) = ∫ s(t) s(t - τ) dt. (1.78")

This is equivalent to the statement that B s (τ) is an even function of τ.

For a periodic signal, whose energy is infinitely large, the definition of the correlation function by expressions (1.78) or (1.78") is unsuitable. In this case, the following definition is used:

B s per (τ) = lim T→∞ (1/T) ∫ -T/2 T/2 s(t) s(t + τ) dt.

With this definition, the correlation function acquires the dimension of power, and B s per (0) is equal to the average power of the periodic signal. Due to the periodicity of the signal s(t), averaging the product s(t)s(t + τ) over the infinite interval T must coincide with averaging over the period T 1 . Therefore, the last expression can be replaced by averaging over the period.

The integrals in this expression are nothing other than the correlation function of the signal on the interval T 1 . Denoting it by B sT1 (τ), we arrive at the relation

It is also obvious that the periodic signal s(t) corresponds to the periodic correlation function B s per (τ). The period of B s per (τ) coincides with the period T 1 of the original signal s(t). For example, for the simplest (harmonic) oscillation s(t) = A 0 cos(ω 1 t + θ 0 ) the correlation function is

B s per (τ) = (A 0 ²/2) cos ω 1 τ.

At τ = 0 the value B s per (0) = A 0 ²/2 is the average power of a harmonic oscillation with amplitude A 0 . It is important to note that the correlation function B s per (τ) does not depend on the initial phase of the oscillation θ 0 .

To estimate the degree of connection between two different signals s 1 (t) and s 2 (t), the cross-correlation function is used, which is determined by the general expression

B 12 (τ) = ∫ s 1 (t) s 2 *(t + τ) dt.

For real functions s 1 (t) and s 2 (t)

B 12 (τ) = ∫ s 1 (t) s 2 (t + τ) dt.

The correlation function B s (τ) considered above is a special case of B 12 (τ), obtained when s 1 (t) = s 2 (t).

Unlike the ACF, the cross-correlation function is not necessarily even in τ. In addition, the cross-correlation function does not necessarily reach its maximum at τ = 0.

Signal correlation functions are used for integral quantitative estimates of the shape of signals and the degree of their similarity with each other.

Autocorrelation functions (ACF) of signals (correlation function, CF). As applied to deterministic signals with finite energy, the ACF is a quantitative integral characteristic of the signal shape; it is the integral of the product of two copies of the signal s(t) shifted relative to each other by the time τ:

B s (τ) = ∫ s(t) s(t+τ) dt. (2.4.1)

As follows from this expression, the ACF is the scalar product of the signal and its copy in functional dependence on the variable shift τ. Accordingly, the ACF has the physical dimension of energy, and at τ = 0 the value of the ACF is directly equal to the signal energy and is the maximum possible (the cosine of the angle of interaction of the signal with itself is equal to 1):

B s (0) = ∫ s²(t) dt = E s .

The ACF is a continuous even function. The latter is easy to verify by the substitution t → t - τ in expression (2.4.1):

B s (τ) = ∫ s(t-τ) s(t) dt = ∫ s(t) s(t-τ) dt = B s (-τ).

Given the evenness, the graphical representation of the ACF is usually given only for positive values of τ. The sign +τ in expression (2.4.1) means that, as τ increases from zero, the copy of the signal s(t+τ) shifts to the left along the t axis. In practice, signals are usually also defined on an interval of positive argument values 0-T, which makes it possible to extend them with zero values if necessary for mathematical operations. Within these limits of calculation it is more convenient to shift the copy of the signal to the left along the argument axis, i.e. to use the function s(t-τ) in expression (2.4.1):

B s (τ) = ∫ s(t) s(t-τ) dt. (2.4.1")

As the shift τ increases for finite signals, the temporal overlap of the signal with its copy decreases, and, accordingly, the cosine of the interaction angle and the scalar product as a whole tend to zero:

B s (τ) → 0 as |τ| → ∞.

Example. On the interval (0, T) a rectangular pulse with amplitude A is given. Calculate the autocorrelation function of the pulse.

When the copy of the pulse is shifted to the right along the t axis, for 0 ≤ τ ≤ T the signals overlap on the interval from τ to T. The scalar product:

B s (τ) = ∫ τ T A² dt = A²(T-τ).

When the copy of the pulse is shifted to the left, for -T ≤ τ < 0 the signals overlap on the interval from 0 to T+τ. The scalar product:

B s (τ) = ∫ 0 T+τ A² dt = A²(T+τ).

For |τ| > T the signal and its copy have no points of overlap, and the scalar product of the signals is zero (the signal and its shifted copy become orthogonal).

Summarizing the calculations, we can write:

B s (τ) = A²(T - |τ|) for |τ| ≤ T, and B s (τ) = 0 for |τ| > T.

In the case of periodic signals, the ACF is calculated over one period T, averaging the scalar product of the signal and its shifted copy within this period:

B s (τ) = (1/T) ∫ 0 T s(t) s(t-τ) dt.

At τ = 0 the value of the ACF in this case is equal not to the energy but to the average power of the signal within the interval T. The ACF of periodic signals is also a periodic function with the same period T. Thus, for the signal s(t) = A cos(ω 0 t + φ 0 ) with T = 2π/ω 0 we have:

B s (τ) = (1/T) ∫ 0 T A cos(ω 0 t + φ 0 ) A cos(ω 0 (t-τ) + φ 0 ) dt = (A²/2) cos(ω 0 τ).

Note that the result obtained does not depend on the initial phase of the harmonic signal, which is typical for any periodic signal and is one of the properties of the CF.

For signals defined on a certain interval [a, b], the ACF is calculated with normalization to the length of the interval b - a:

B s (τ) = (1/(b-a)) ∫ a b s(t) s(t+τ) dt. (2.4.2)

In the limit, for non-periodic signals with the ACF measured on an interval T:

B s (τ) = lim T→∞ (1/T) ∫ 0 T s(t) s(t+τ) dt. (2.4.2")

The autocorrelation of a signal can also be estimated by the autocorrelation coefficient, which is calculated by the formula (for centered signals):

r s (τ) = cos φ(τ) = ⟨s(t), s(t+τ)⟩ / ||s(t)||².

The cross-correlation function (CCF) of signals (cross-correlation function, CCF) shows the degree of similarity of shifted copies of two different signals and their mutual position along the coordinate (independent variable). It uses the same formula (2.4.1) as the ACF, but the integrand is the product of two different signals, one of which is shifted by the time τ:

B 12 (τ) = ∫ s 1 (t) s 2 (t + τ) dt. (2.4.3)

With the substitution t → t - τ in formula (2.4.3), we obtain:

B 12 (τ) = ∫ s 1 (t-τ) s 2 (t) dt = ∫ s 2 (t) s 1 (t-τ) dt = B 21 (-τ).

It follows that the parity condition is not satisfied for the CCF, and the values of the CCF are not required to have a maximum at τ = 0. This can be clearly seen in Fig. 2.4.1, where two identical signals are given with centers at the points 0.5 and 1.5. Calculation by formula (2.4.3) with a gradual increase in the values of τ means successive shifts of the signal s2(t) to the left along the time axis (for each value of s1(t), the values of s2(t+τ) are taken for the integrand product).


