Power spectral density of a deterministic signal. Examples of determining the spectral densities of signals. Units of spectral density

The value that characterizes the distribution of energy over the signal spectrum and is called the energy spectral density exists only for signals whose energy over an infinite time interval is finite and, therefore, the Fourier transform is applicable to them.

For signals that do not decay in time, the energy is infinitely large and the integral (1.54) diverges, so an amplitude spectrum cannot be defined. However, the average power P_avg, determined by the ratio

turns out to be finite. Therefore, the broader concept of "power spectral density" is used. We define it as the derivative of the average signal power with respect to frequency and denote it C_k(ω):
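As an illustrative aside (not part of the original derivation), the distinction between unbounded energy and finite average power can be checked numerically; the amplitude `U0` and frequency `w0` below are arbitrary assumed values:

```python
import numpy as np

# For the undamped signal u(t) = U0*cos(w0*t), the energy over [0, T] grows
# without bound as T grows, while the average power (1/T) * integral(u^2)
# converges to the finite value U0**2 / 2.
U0 = 2.0                       # assumed amplitude, V
w0 = 2 * np.pi * 5.0           # assumed angular frequency, rad/s

def energy_and_power(T, n=200_000):
    t = np.linspace(0.0, T, n, endpoint=False)
    dt = t[1] - t[0]
    u = U0 * np.cos(w0 * t)
    E = np.sum(u ** 2) * dt    # energy over [0, T]
    return E, E / T            # energy and average power

E1, P1 = energy_and_power(10.0)
E2, P2 = energy_and_power(100.0)
```

The energy keeps growing with the observation interval, while both power estimates stay near U0²/2.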

The index k emphasizes that here the power spectral density is considered as a characteristic of the deterministic function u(t) describing a realization of the signal.

This characteristic of the signal is less informative than the spectral density of the amplitudes, since it is devoid of phase information [see (1.38)]; therefore, the original realization of the signal cannot be uniquely restored from it. However, the absence of phase information makes it possible to apply this concept to signals whose phase is not defined.

To establish a connection between the spectral density C_k(ω) and the amplitude spectrum, we use the signal u(t), which exists on the limited time interval (−T/2 < t < T/2):

where is the power spectral density of a time-limited signal.

It will be shown below (see § 1.11) that by averaging this characteristic over a set of realizations, one can obtain the power spectral density for a large class of random processes.

Deterministic Signal Autocorrelation Function

There are now two characteristics in the frequency domain: the spectral characteristic and the power spectral density. The spectral characteristic, which contains complete information about the signal u(t), corresponds via the Fourier transform to a function of time. Let us find out what corresponds in the time domain to the power spectral density, which is devoid of phase information.

It should be expected that the same power spectral density corresponds to a whole set of time functions differing in phase. The Soviet scientist A. Ya. Khinchin and the American scientist N. Wiener almost simultaneously found the inverse Fourier transform of the power spectral density:


The generalized time function r(τ), which does not contain phase information, will be called the time autocorrelation function. It shows the degree of connection between values of the function u(t) separated by a time interval τ, and can be obtained from statistical theory by developing the concept of the correlation coefficient. Note that in the time autocorrelation function, averaging is carried out over time within one realization of sufficiently long duration.

The second integral relation for the Fourier transform pair is also valid:

Example 1.6. Determine the time autocorrelation function of the harmonic signal u(t) = U_0 cos(ωt − φ). According to (1.64),

After some simple transformations


finally we have

As expected, r_u(τ) does not depend on the phase φ and, therefore, (1.66) is valid for the whole set of harmonics that differ in phase.
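A numerical check of this example can be sketched with NumPy (the amplitude and frequency values are assumed for the illustration):

```python
import numpy as np

# The time-averaged autocorrelation of u(t) = U0*cos(w0*t - phi) equals
# (U0**2 / 2) * cos(w0 * tau) regardless of the initial phase phi.
U0 = 1.5                                # assumed amplitude
w0 = 2 * np.pi                          # assumed angular frequency (period 1 s)
T, n = 200.0, 400_000                   # long averaging interval
t = np.linspace(0.0, T, n, endpoint=False)

def r_u(tau, phi):
    u_now = U0 * np.cos(w0 * t - phi)
    u_lag = U0 * np.cos(w0 * (t + tau) - phi)
    return np.mean(u_now * u_lag)       # time average over one realization

tau = 0.1
r_phase0 = r_u(tau, phi=0.0)
r_phase1 = r_u(tau, phi=1.3)            # a different phase, same autocorrelation
r_theory = U0 ** 2 / 2 * np.cos(w0 * tau)
```

Both phase choices give the same value, matching the closed-form result.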

The energy E of a signal u(t) is understood as the quantity

If the signal has a finite duration T, i.e., it is nonzero only in the time interval [−T/2, T/2], then its energy is

We write the expression for the signal energy using formula (2.15):

Where

The resulting equality is called Parseval's equality. It defines the signal energy in terms of the time function or the spectral energy density, which is equal to |S(jω)|². The spectral energy density is also called the energy spectrum.
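Parseval's equality has a direct discrete analogue that can be verified with the DFT (an illustrative sketch, not from the original text):

```python
import numpy as np

# Discrete Parseval: sum |x[n]|^2 = (1/N) * sum |X[k]|^2, i.e. the energy
# computed in the time domain equals the energy computed in the frequency
# domain from the DFT coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # arbitrary finite-energy signal
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
```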

Consider a signal that exists on a limited time interval. Parseval's equality applies to such a signal. Hence,

We divide the left and right parts of the equality by a time interval equal to T, and let this interval go to infinity:

As T increases, the energy of undamped signals increases; however, the ratio of the energy to T may tend to a certain limit. This limit is called the power spectral density C(ω). Its unit is [V²/Hz].

Autocorrelation function

The autocorrelation function R(τ) of a signal u(t) is determined by the following integral expression:

where τ is the argument of the function R(τ), having the dimension of time; u(t + τ) is the original signal shifted in time by −τ.

The autocorrelation function has the following properties.

1. The value of the autocorrelation function at shift τ = 0 is equal to the signal energy E:

2. For shifts τ ≠ 0 the autocorrelation function is less than the signal energy:

3. The autocorrelation function is an even function, i.e.

We will verify the validity of properties 2 and 3 by an example.
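These properties can also be checked numerically for an arbitrary test signal (a NumPy sketch, not from the original text):

```python
import numpy as np

# Check: R(0) equals the signal energy E, |R(tau)| never exceeds E, and the
# autocorrelation is an even function of the shift.
rng = np.random.default_rng(1)
u = rng.standard_normal(256)              # arbitrary finite-energy test signal
R = np.correlate(u, u, mode="full")       # shifts m = -(N-1) ... (N-1)
mid = len(u) - 1                          # index corresponding to zero shift
E = np.sum(u ** 2)                        # signal energy
```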

Example 2.6. Calculate the autocorrelation functions of two signals: the video pulse shown in fig. 2.7, a, and a radio signal with the same amplitude and duration. The carrier frequency of the radio signal is ω_0, and the initial phase is zero.

Solution. Let us solve the first problem graphically. The autocorrelation function is determined by the integral of the product of the function u(t) and its time-shifted copy. The offset of the video signal is found from the equation t + τ = 0. The graph of the function u(t + τ) is shown in fig. 2.7, b. The area determined by the graph of the product u(t)u(t + τ) (fig. 2.7, c) is equal to

The function R(τ) is determined by the equation of a straight line (fig. 2.7, d). The function has a maximum at τ = 0 and equals 0 at τ = τ_u. For other values of the argument, R(τ)

To verify the validity of property 3, we similarly calculate the function for negative values of τ:

Fig. 2.7. Autocorrelation function of a rectangular video pulse:

a – rectangular video pulse; b – time-delayed rectangular pulse; c – product of the pulses; d – autocorrelation function

The final expression for the autocorrelation function

The function is shown in fig. 2.7, d and has a triangular shape.
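The triangular shape can be reproduced on a discrete grid (a sketch; the amplitude and duration values are assumed):

```python
import numpy as np

# Autocorrelation of a rectangular pulse of amplitude A and duration tau_u:
# R(tau) = A**2 * (tau_u - |tau|) for |tau| <= tau_u, zero otherwise.
A, tau_u, dt = 2.0, 1.0, 1e-3
n = int(tau_u / dt)
u = A * np.ones(n)                        # rectangular video pulse on a grid
R = np.correlate(u, u, mode="full") * dt  # discrete approximation of the integral
shifts = dt * np.arange(-(n - 1), n)
R_theory = A ** 2 * (tau_u - np.abs(shifts))
```

The peak value R(0) = A²τ_u, and the function falls off linearly to zero at |τ| = τ_u.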

Let us calculate the autocorrelation function of the radio signal, placing it symmetrically about the vertical axis. Radio signal:

Substituting the values of the signal and its shifted copy into the formula for the autocorrelation function R(τ), we obtain

The expression for the autocorrelation function of the radio pulse consists of two terms. The first is determined by the product of a triangular function and a harmonic signal; at the output of a matched filter this term appears as a rhombus-shaped radio pulse. The second term is determined by the product of the triangular function and (sin x)/x functions centered at the points τ = ±τ_u. The values of (sin x)/x that noticeably affect the second term of the autocorrelation function decrease very quickly as the argument τ moves away from the points ±τ_u. Solving the equation

it is possible to find the delay intervals within which the values of the (sin x)/x functions still affect the behavior of the function R(τ). For positive delay values

where T_0 is the period of the harmonic signal.

Similarly, the interval for negative delay values ​​is found.

Since the influence of the second term of the autocorrelation function is limited to intervals T_0/2 that are very small compared to the duration of the radio pulse τ_u, and within which the values of the triangular function are very small, the second term of the autocorrelation function of the radio pulse can be neglected.

Let us reveal the relationship between the autocorrelation function R(τ) and the spectral energy density of the signal |S(jω)|². To do this, we express the time-shifted signal u(t + τ) in terms of its spectral density S(jω):

Let us substitute this expression into expression (2.21). As a result, we get

It is also easy to verify the validity of the equality

We divide both sides of equality (2.23) by the time interval T and let T go to infinity:

Taking into account formula (2.20), we rewrite the resulting expression:

where
is the limit of the ratio of the autocorrelation function of a time-limited signal to the duration of the interval as the latter tends to infinity. If this limit exists, it is determined by the inverse Fourier transform of the power spectral density of the signal.

A generalization of the concept of "autocorrelation function" is the cross-correlation function, which is the scalar product of two signals:

Let us consider the main properties of the cross-correlation function.

1. Permutation of the factors under the integral sign changes the sign of the argument of the cross-correlation function:

In the above transformations, we used the substitution t + τ = x.

2. The cross-correlation function, unlike the autocorrelation function, is not even with respect to the argument τ.

3. The cross-correlation function is determined by the inverse Fourier transform of the product of the spectral densities of the signals u(t) and v(t):

This formula can be derived similarly to formula (2.22).

The cross-correlation function between a periodically repeating signal and a non-periodic signal v(t) = u_0(t) is

where R(τ) is the autocorrelation function of the non-periodic signal u_0(t).

The resulting expression is the sum of two integrals. At zero shift, the first integral is equal to zero and the second to the signal energy. At a shift equal to the signal period, the first integral is equal to the signal energy and the second to zero. At other shifts, each value of the function is the sum of the values of the autocorrelation functions of the non-periodic signal shifted relative to each other by one period. In addition, the cross-correlation function is periodic and satisfies the equation

The cross-correlation function R_uv(τ) between the signal u(t) and a signal

equal to , where is the duration of the signal v(t).

Indeed, since the period of the signal u(t) is equal to T, and

cross-correlation function where

Calculating the limit of the function (2n + 1)R_uu0(τ) as n → ∞, we define an expression for the autocorrelation function of a periodic signal:

The dimension of this function is [V²/Hz].

The values of the function at zero shift, and at the other shifts for which R_uu0(τ) ≠ 0, are equal to infinity. For this reason, using the last expression as a characteristic of a periodic signal loses its meaning.

Let us divide the last expression by an interval equal to (2n + 1)T. As a result, we get the function


since, due to the periodicity of the function, R(τ + T) = R(τ).

The resulting formula defines the function B(τ) as the limit of the ratio of the autocorrelation function of the signal existing in the time interval (2n + 1)T to this interval as it tends to infinity. This limit for a periodically repeating signal is called the autocorrelation function of a periodic signal. The dimension of this function is [V²].

The direct Fourier transform of one period of the autocorrelation function of a periodic signal determines a power spectral density that is a continuous function of frequency. From this density, using formula (2.17), one can find the power spectral density of the periodic autocorrelation function of the signal, which is defined for discrete values of frequency:

where ω_1 = 2π/T.

If the autocorrelation function is written as a Fourier series in trigonometric form, then the expression for its spectral density

Example 2.7. Calculate the periodic autocorrelation function of the signal u(t) = A sin ω_1 t. Based on the found function, limited to one period, determine the power spectral density.

Solution. Substituting the given signal into expression (2.26), we obtain an expression for the periodic autocorrelation function:

We substitute the resulting expression into formula (2.24) and find the power spectral density:

Example 2.8. For the periodic normalized autocorrelation function of a noise-like signal (an M-sequence with period N = 1023), calculate the power spectral density. (The periodic function for a sequence of smaller length, N = 15, is shown in fig. 3.39.)

Solution. For the comparatively long period N = 1023, the values of the autocorrelation function in the interval T − τ_0 > τ > τ_0, where τ_0 is the pulse duration of the noise-like sequence, will be taken equal to zero. In this case, the autocorrelation function is a sequence of triangular pulses periodically repeating with period T. The base of each triangle is 2τ_0 and its height is 1. The equation that determines the autocorrelation function within one period is B(τ) = 1 − |τ|/τ_0. Taking into account the evenness of this function, we determine the coefficients of the Fourier series:

When calculating the integral, the formula was used

Substituting the calculated coefficients into formula (2.27), we obtain

The power spectral density of a periodic autocorrelation function is equal to a weighted sum of an infinitely large number of delta functions. The weighting factors are determined by the square of the (sin x)/x function, multiplied by the constant coefficient 2π(τ_0/T).

The correlation functions of digital signals are related to the correlation functions of symbol sequences. For a code sequence (see § 1.3) of a finite number N of binary symbols, the autocorrelation function is written as

where a_k are the binary symbols, equal to 0 or 1 (or to −1, 1); d = 0, 1, 2, ..., N − 1.

Character sequences can be either deterministic or random. When transmitting information, a characteristic property of a sequence of characters is their randomness. The values ​​of the autocorrelation function (at shifts not equal to zero), calculated from a pre-recorded random sequence of finite length, are also random.

Autocorrelation functions of deterministic sequences, which are used for synchronization and also as carriers of discrete messages, are deterministic functions.

Signals constructed using codes or their code sequences are called coded signals.

Most of the properties of the autocorrelation function of the code sequence coincide with the properties of the signal's autocorrelation function discussed above.

At zero shift, the autocorrelation function of the code sequence reaches a maximum, which is equal to

If the symbols are −1 and 1, then r(0) = N.

The values ​​of the autocorrelation function for other shifts are less than r(0).

The autocorrelation function of the code sequence is an even function.

A generalization of the autocorrelation function is the cross-correlation function. For code sequences of the same length, this function

where a_i, b_i are the symbols of the first and second sequences, respectively.

Many properties of the function r_12(d) coincide with the properties of the cross-correlation function of signals considered above. If for a pair of codes the function r_ij(d), i ≠ j, is equal to zero at shift d = 0, then such codes are called orthogonal. A brief description of some of the codes used in communication systems is given in Appendices 2–4.

The cross-correlation function between a code sequence and the same sequence repeated periodically is called the periodic autocorrelation function of the code sequence. The expression for this function follows from expressions (2.25), (2.26):

where r(d) is the non-periodic autocorrelation function of the code sequence, and d is the shift between the sequences.

Let us substitute the expressions for autocorrelation functions into the resulting formula:

where a_k, a_{k+d} are elements of the code sequence.

The periodic autocorrelation function of a code sequence is equal to the cross-correlation function calculated for the code sequence and a cyclically shifted copy of this sequence. Cyclically shifted code sequences are obtained from the original sequence a⁰ = a_0, a_1, a_2, ..., a_{N−1} as follows: the sequence a¹ is obtained by shifting the original sequence a⁰ one symbol to the right and wrapping its last symbol around to the beginning of the shifted sequence. The remaining sequences are obtained similarly:
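The cyclic-shift construction can be sketched numerically; the length-7 code below is an assumed example whose periodic autocorrelation happens to be two-valued (an m-sequence-like property):

```python
import numpy as np

# Periodic autocorrelation of a code sequence as the correlation of the
# sequence with its cyclic shifts (np.roll performs the wrap-around shift).
a = np.array([1, 1, 1, -1, -1, 1, -1])     # assumed example code, N = 7

def periodic_acf(seq, d):
    return int(np.sum(seq * np.roll(seq, -d)))

values = [periodic_acf(a, d) for d in range(len(a))]
```

For this code the periodic autocorrelation equals N = 7 at zero shift and −1 at every other shift.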

Example 2.9. Calculate the autocorrelation function and the periodic autocorrelation function of the coded signal (fig. 2.8, a)

where u_0(t) is a rectangular pulse with amplitude A and duration τ_u.

This signal is built from rectangular pulses whose signs are determined by the weighting coefficients a_0 = 1, a_1 = 1, a_2 = −1; their number is N = 3. The signal duration is 3τ_u.

Solution. Substituting the expression for the signal into formula (2.21), we obtain

Let us change the variable t − kτ_u to x:

Let us denote k − m = d and replace the discrete variables k, m with the variables k, d. As a result, we get

The graph of the autocorrelation function of the given signal is shown in fig. 2.8, b. This function depends on the autocorrelation function R_0(τ) of a rectangular pulse and on the values of the autocorrelation function r(d) of the code sequence.

Fig. 2.8. Autocorrelation function of the coded signal: a – coded signal; b – autocorrelation function of the signal; c – autocorrelation function of the periodic signal

Let us calculate the periodic autocorrelation function using the autocorrelation function calculated above, the obtained values ​​of the autocorrelation function of the code sequence and formula (2.28).

Periodic autocorrelation function

Substitute the given value N= 3 into the resulting formula:

Taking into account the values of the autocorrelation function of the code sequence r(±3) = 0, r(±2) = −1, r(±1) = 0, r(0) = 3, we write the final expression for one period of the signal's periodic autocorrelation function:

The graph of the function is shown in fig. 2.8, c.
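The aperiodic autocorrelation values of the code sequence from Example 2.9 can be checked directly (a sketch; `np.correlate` sums the products of overlapping symbols):

```python
import numpy as np

# Autocorrelation of the code sequence a0 = 1, a1 = 1, a2 = -1:
# r(0) = 3, r(+-1) = 0, r(+-2) = -1, matching the values used in the text.
a = np.array([1, 1, -1])
r = np.correlate(a, a, mode="full")    # shifts d = -2, -1, 0, 1, 2
```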

Let the signal s(t) be given as a non-periodic function that exists only on the interval (t1, t2) (for example, a single pulse). Let us choose an arbitrary period T that includes the interval (t1, t2) (see fig. 1).

Let us denote the periodic signal obtained from s(t) by s_T(t). Then we can write the Fourier series for it:

To pass to the function s(t), we let the period of the periodic signal go to infinity. In this case, the number of harmonic components with frequencies ω = 2πn/T becomes infinitely large, the distance between them tends to zero (to an infinitesimal value), and the amplitudes of the components also become infinitesimal. It is therefore no longer possible to speak of a line spectrum of such a signal: the spectrum becomes continuous.

The inner integral is a function of frequency. It is called the spectral density of the signal, or the spectral characteristic of the signal, and is denoted S(ω), i.e.

For generality, the limits of integration can be taken infinite, since s(t) is zero outside (t1, t2) and contributes nothing to the integral.

The expression for the spectral density is called the direct Fourier transform. The inverse Fourier transform determines the time function of a signal from its spectral density:

The direct (*) and inverse (**) Fourier transforms are collectively referred to as the Fourier transform pair. The modulus of the spectral density

determines the amplitude-frequency characteristic (AFC) of the signal, and its argument is called the phase-frequency characteristic (PFC) of the signal. The AFC of a signal is an even function, and the PFC is odd.

The modulus S(ω) has the meaning of the amplitude of the signal (current or voltage) per 1 Hz in an infinitely narrow frequency band that includes the frequency of interest ω. Its dimension is [signal/frequency].

Energy spectrum of the signal. If the function s(t) has a Fourier transform, the power density of the signal (the signal's energy spectral density) is determined by the expression:

w(t) = s(t)s*(t) = |s(t)|² ↔ |S(ω)|² = S(ω)S*(ω) = W(ω). (5.2.9)

The power spectrum W(ω) is a real, non-negative, even function, which is usually called the energy spectrum. Being the squared modulus of the signal's spectral density, the power spectrum contains no phase information about the frequency components, and therefore the signal cannot be restored from it. This also means that signals with different phase characteristics can have the same power spectra. In particular, a time shift of the signal does not affect its power spectrum. The latter makes it possible to obtain an expression for the energy spectrum directly from expressions (5.2.7). In the limit, for identical signals u(t) and v(t) with shift t_0, the imaginary part of the spectrum W_uv(ω) tends to zero, and the real part tends to the modulus of the spectrum. With full temporal coincidence of the signals, we have:
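The shift-invariance of the power spectrum is easy to illustrate with the DFT (a sketch; the circular shift stands in for a time delay):

```python
import numpy as np

# A time shift multiplies the spectrum by exp(-j*w*t0), changing only the
# phase; the power spectrum |S(w)|^2 is left unchanged.
rng = np.random.default_rng(2)
s = rng.standard_normal(512)
s_shifted = np.roll(s, 37)                # delayed (circularly shifted) copy

W_orig = np.abs(np.fft.fft(s)) ** 2
W_shift = np.abs(np.fft.fft(s_shifted)) ** 2
```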

i.e., the signal energy is equal to the integral of the squared modulus of its frequency spectrum (the sum of the energies of its frequency components) and is always a real value.

For an arbitrary signal s(t), the equality

usually called Parseval's equality (in mathematics, the Plancherel theorem; in physics, the Rayleigh formula). The equality is obvious, since the coordinate and frequency representations are essentially just different mathematical representations of the same signal. Similarly, for the interaction energy of two signals:

From the Parseval equality follows the invariance of the scalar product of signals and the norm with respect to the Fourier transform:

In a number of purely practical problems of recording and transmitting signals, the energy spectrum of the signal is of very significant importance. Periodic signals are translated into the spectral domain in the form of Fourier series. Let us write a periodic signal with period T as a Fourier series in complex form:

The interval 0–T contains an integer number of periods of all the exponential integrands, and every integral vanishes except for the exponent with k = −m, for which the integral equals T. Accordingly, the average power of a periodic signal is equal to the sum of the squared moduli of the coefficients of its Fourier series:
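This average-power relation is easy to confirm numerically for a simple two-tone periodic signal (the amplitudes and harmonic numbers below are assumed):

```python
import numpy as np

# Average power of a periodic signal equals the sum of |c_k|^2, where the
# complex Fourier coefficients are c_k = X[k] / N from the DFT of one period.
N = 1000
t = np.arange(N) / N                       # one period, T = 1 s
u = 2.0 * np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

c = np.fft.fft(u) / N                      # complex Fourier coefficients
P_time = np.mean(u ** 2)                   # average power, time domain
P_freq = np.sum(np.abs(c) ** 2)            # sum of squared coefficient moduli
```

Both computations give 2²/2 + 0.5²/2 = 2.125.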

The energy spectrum of a signal is the distribution over the frequency axis of the energy of the elementary signals that make up the non-harmonic signal. Mathematically, the energy spectrum of the signal is equal to the squared modulus of the spectral function:

Accordingly, the amplitude-frequency spectrum shows the set of amplitudes of the components of the elementary signals on the frequency axis, and the phase-frequency spectrum shows the set of their phases.

The modulus of the spectral function is often called the amplitude spectrum, and its argument the phase spectrum.

In addition, there is an inverse Fourier transform that allows you to restore the original signal, knowing its spectral function:

For example, take a rectangular pulse:

Another example of spectra:

Nyquist frequency, Kotelnikov's theorem

The Nyquist frequency, in digital signal processing, is a frequency equal to half the sampling frequency. It is named after Harry Nyquist. It follows from Kotelnikov's theorem that when sampling an analog signal there will be no information loss only if the highest frequency of the useful signal's spectrum (spectral density) is equal to or lower than the Nyquist frequency. Otherwise, when the analog signal is restored, the spectral "tails" will overlap (frequency aliasing, frequency masking), and the shape of the restored signal will be distorted. If the signal spectrum has no components above the Nyquist frequency, then the signal can (in theory) be sampled and then reconstructed without distortion. In practice, the "digitization" of a signal (conversion of an analog signal into a digital one) involves quantization of the samples: each sample is recorded as a digital code of finite bit depth, so quantization (rounding) errors are added to the samples, which under certain conditions are treated as "quantization noise".

Real signals of finite duration always have an infinitely wide spectrum, which decreases more or less rapidly with increasing frequency. Therefore, sampling always leads to some loss of information (distortion of the waveform during sampling and recovery), no matter how high the sampling frequency is. At a chosen sample rate, the distortion can be reduced by suppressing, before sampling, the spectral components of the analog signal above the Nyquist frequency, which requires a filter of very high order to avoid aliasing. The practical implementation of such a filter is very difficult, since the amplitude-frequency characteristics of filters are not rectangular but smooth, and a transition band forms between the passband and the stopband. Therefore, the sampling rate is chosen with a margin: for example, audio CDs use a sampling rate of 44100 Hz, while the highest frequency in the spectrum of audio signals is taken to be 20000 Hz. The Nyquist-frequency margin of 44100/2 − 20000 = 2050 Hz makes it possible to avoid aliasing with a realizable filter of moderate order.
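A minimal numerical illustration of aliasing (the sampling rate and tone frequencies are assumed for the example): a 30 Hz tone sampled at 44 Hz is indistinguishable from a 14 Hz tone, since 30 = 44 − 14.

```python
import numpy as np

# Sampling a 30 Hz sine at fs = 44 Hz (Nyquist frequency 22 Hz) yields
# exactly the same samples as a sign-inverted 14 Hz sine: the 30 Hz tone
# aliases down to |fs - f| = 14 Hz.
fs = 44.0                                     # sampling rate, Hz
n = np.arange(64)                             # sample indices
x_hi = np.sin(2 * np.pi * 30.0 * n / fs)      # tone above the Nyquist frequency
x_alias = -np.sin(2 * np.pi * 14.0 * n / fs)  # its 14 Hz alias (inverted sign)
```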

Kotelnikov's theorem

In order to restore the original continuous signal from a sampled one with small distortions (errors), it is necessary to choose the sampling step rationally. Therefore, when converting an analog signal into a discrete one, the question of the size of the sampling step necessarily arises. Intuitively, the following idea is not difficult to understand. If the analog signal has a low-frequency spectrum limited by some upper frequency F_e (i.e., the function u(t) has the form of a smoothly varying curve, without sharp changes in amplitude), then this function is unlikely to change significantly in amplitude over some small sampling interval. It is quite obvious that the accuracy of restoring an analog signal from a sequence of its samples depends on the size of the sampling interval: the shorter it is, the less the function u(t) will differ from a smooth curve passing through the sample points. However, as the sampling interval decreases, the complexity and volume of the processing equipment grow significantly. With too large a sampling interval, the probability of distortion or loss of information when restoring the analog signal increases. The optimal value of the sampling interval is established by Kotelnikov's theorem (also known as the sampling theorem, C. Shannon's theorem, or H. Nyquist's theorem; the result was first discovered in mathematics by O. Cauchy and later described again by J. Carson and R. Hartley), which he proved in 1933. V. A. Kotelnikov's theorem is of great theoretical and practical importance: it makes it possible to sample an analog signal correctly and determines the optimal way to restore it at the receiving end from the sample values.

According to one of the most common and simple formulations of Kotelnikov's theorem, an arbitrary signal u(t) whose spectrum is limited by a certain frequency F_e can be completely restored from the sequence of its sample values taken at the time interval

The sampling interval and the frequency F_e in (1) are often referred to in radio engineering as the Nyquist interval and the Nyquist frequency, respectively. Analytically, Kotelnikov's theorem is represented by the series

where k is the sample number, u(kΔt) is the signal value at the sample points, and F_e is the upper frequency of the signal spectrum.
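The series can be sketched numerically: a band-limited test signal is reconstructed from its samples by summing shifted sinc kernels (F_e, the test signal, and the truncation length are all assumed for the illustration):

```python
import numpy as np

# Kotelnikov series: u(t) = sum_k u(k*dt) * sinc((t - k*dt)/dt), where
# dt = 1/(2*F_e) and np.sinc(x) = sin(pi*x)/(pi*x).
F_e = 4.0                              # assumed upper spectrum frequency, Hz
dt = 1.0 / (2.0 * F_e)                 # Kotelnikov (Nyquist) sampling interval

def u(t):                              # band-limited test signal (1 Hz and 3 Hz)
    return np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

k = np.arange(-400, 401)               # finite slice of the infinite series
samples = u(k * dt)

def reconstruct(t):
    return np.sum(samples * np.sinc((t - k * dt) / dt))

err = abs(reconstruct(0.3) - u(0.3))   # error at a point between samples
```

The residual error comes only from truncating the infinite series to a finite number of samples.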

Frequency representation of discrete signals

Most signals can be represented as a Fourier series:

The cross power spectral density (cross power spectrum) of two realizations of stationary ergodic random processes is defined as the direct Fourier transform of their cross-covariance function

or, given the relationship between circular and cyclic frequencies,

The inverse Fourier transform relates the mutual covariance function and power spectral density:

Similarly to (1.32), (1.33), we introduce the power spectral density (power spectrum) of a random process

The function has the parity property:

The following relationship is valid for the mutual spectral density:

where is the function complex conjugate to .

The spectral densities above are defined for both positive and negative frequencies and are called two-sided spectral densities. They are convenient in the analytical study of systems and signals. In practice, spectral densities defined only for non-negative frequencies, called one-sided, are used (Figure 1.14):

Figure 1.14 – One-sided and two-sided spectral densities

Let us derive an expression relating the one-sided spectral density of a stationary random process to its covariance function:

We take into account the evenness of the covariance function of a stationary random process and of the cosine, the oddness of the sine, and the symmetry of the integration limits. As a result, the second integral in the expression above vanishes, and in the first integral the limits of integration can be halved while doubling the coefficient:

Obviously, the power spectral density of a random process is a real function.
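A sketch of this relation for an assumed exponential covariance R(τ) = e^(−|τ|), whose one-sided density is known in closed form, G(f) = 4/(1 + (2πf)²):

```python
import numpy as np

# One-sided PSD via the cosine transform of the covariance function:
# G(f) = 4 * integral_0^inf R(tau) * cos(2*pi*f*tau) d(tau); the result is
# purely real, as the text states.
tau = np.arange(0.0, 20.0, 0.002)          # non-negative lags (tail ~ e^-20)
dtau = 0.002
R = np.exp(-tau)                           # assumed covariance function
w = np.ones_like(tau); w[0] = w[-1] = 0.5  # trapezoidal quadrature weights

f = np.linspace(0.0, 2.0, 50)              # non-negative frequencies, Hz
cosines = np.cos(2 * np.pi * f[:, None] * tau[None, :])
G = 4.0 * cosines @ (R * w) * dtau         # numerical cosine transform

G_theory = 4.0 / (1.0 + (2 * np.pi * f) ** 2)
```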

Similarly, the inverse relation can be obtained:

From expression (1.42) at , it follows that

This means that the total area under the one-sided spectral density plot is equal to the mean square of the random process. In other words, the one-sided spectral density is interpreted as the distribution of the mean square of the process over frequency.

The area under the graph of the one-sided density enclosed between two arbitrary frequency values is equal to the mean square of the process in this frequency band of the spectrum (Figure 1.15):

Figure 1.15 - Spectral density property

The cross power spectral density is a complex quantity, so it can be represented in exponential form in terms of its modulus and phase angle:


where is the modulus;

is the phase angle;

and are the real and imaginary parts of the function, respectively.

The modulus of the mutual spectral density is included in the important inequality

This inequality allows us to define the coherence function (squared coherence), which is similar to the square of the normalized correlation function:

The second way to introduce spectral densities is the direct Fourier transform of random processes.

Let two stationary ergodic random processes be given, for which the finite Fourier transforms of the i-th realizations of length T are defined as

The two-sided cross spectral density of these random processes is introduced via the product through the relation

where the expectation operator means the operation of averaging over the index .

The calculation of the two-sided spectral density of a random process is carried out according to the relation

One-sided spectral densities are introduced similarly:

The functions defined by formulas (1.49), (1.50) are identical to the corresponding functions defined by relations (1.32), (1.33) as Fourier transforms of covariance functions. This statement is called the Wiener–Khinchin theorem.

Control questions

1. Give a classification of deterministic processes.

2. What is the difference between polyharmonic and almost periodic processes?

3. Formulate the definition of a stationary random process.

4. What method of averaging the characteristics of an ergodic random process is preferable - averaging over an ensemble of sample functions or averaging over the observation time of one realization?

5. Formulate the definition of the probability distribution density of a random process.

6. Write down an expression connecting the correlation and covariance functions of a stationary random process.

7. When are two random processes considered uncorrelated?

8. Indicate methods for calculating the mean square of a stationary random process.

9. By what transformation are the spectral density and covariance functions of a random process related?

10. To what extent do the values ​​of the coherence function of two random processes change?


Below, a short description of some signals is given and their spectral densities are determined. When determining the spectral densities of signals that satisfy the absolute-integrability condition, we use formula (4.41) directly.

The spectral densities of a number of signals are given in Table 4.2.

1) Rectangular pulse (Table 4.2, item 4). The oscillation shown in Fig. 4.28, a can be written as

Its spectral density

The spectral density graph (Fig. 4.28, b) follows from the analysis, carried out earlier, of the spectrum of a periodic sequence of unipolar rectangular pulses (4.14). As can be seen from Fig. 4.28, b, the function vanishes at argument values equal to nπ, where n = 1, 2, 3, ... is any integer; the corresponding angular frequencies are ω = 2πn/τ, where τ is the pulse duration.

Fig. 4.28. Rectangular pulse (a) and its spectral density (b)

The spectral density of the pulse at ω = 0 is numerically equal to its area, i.e., G(0) = Aτ. This is true for a pulse s(t) of arbitrary shape. Indeed, setting ω = 0 in the general expression (4.41), we obtain

i.e., the area of the pulse s(t).
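These two properties are easy to check numerically. The sketch below (Python with NumPy; the amplitude and duration values are assumed) uses the spectral density G(ω) = Aτ·sinc(ωτ/2) of a rectangular pulse of amplitude A occupying (−τ/2, τ/2): G(0) equals the pulse area, and G vanishes at ω = 2πn/τ.

```python
import numpy as np

A, tau = 2.0, 1e-3        # assumed pulse amplitude and duration

def G(omega):
    """Spectral density of a rectangular pulse of amplitude A on (-tau/2, tau/2)."""
    return A * tau * np.sinc(omega * tau / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

print(np.isclose(G(0.0), A * tau))                        # G(0) equals the pulse area
print(all(np.isclose(G(2 * np.pi * n / tau), 0.0, atol=1e-12) for n in (1, 2, 3)))
```

Note that NumPy's `sinc` is the normalized sinc, sin(πx)/(πx), hence the division by 2π in the argument.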

Table 4.3. Signals s(t) and their spectral densities

When the pulse is stretched, the distance between the zeros of the function decreases, i.e., the spectrum is compressed, and the value G(0) increases. Conversely, when the pulse is compressed, its spectrum expands and G(0) decreases. Graphs of the amplitude and phase spectra of a rectangular pulse are shown in Fig. 4.29, a, b.

Fig. 4.29. Graphs of the amplitude (a) and phase (b) spectra
Fig. 4.30. Rectangular pulse shifted in time

When the pulse is shifted to the right (delayed) by a time t0 (Fig. 4.30), the phase spectrum changes by an amount determined by the argument of the factor exp(−jωt0) (Table 4.2, item 9). The resulting phase spectrum of the delayed pulse is shown by the dashed line in Fig. 4.29, b.
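The time-shift property can be verified by evaluating the Fourier integral numerically. The sketch below (assumed pulse parameters; the delay is taken as an integer number of grid steps, so the relation holds exactly on the grid) shows that a delay leaves the amplitude spectrum unchanged and multiplies the spectral density by exp(−jωt0).

```python
import numpy as np

t = np.linspace(-0.01, 0.01, 200001)         # time grid, s
dt = t[1] - t[0]
A, tau = 1.0, 2e-3                           # assumed pulse amplitude and duration

s = np.where(np.abs(t) <= tau / 2, A, 0.0)   # pulse centered at t = 0
m = 10000
t0 = m * dt                                  # delay = integer number of grid steps
sd = np.roll(s, m)                           # delayed pulse (stays well inside the window)

omega = 2 * np.pi * 700.0                    # arbitrary test frequency, rad/s
S  = np.sum(s  * np.exp(-1j * omega * t)) * dt
Sd = np.sum(sd * np.exp(-1j * omega * t)) * dt

print(np.isclose(abs(S), abs(Sd), rtol=1e-9))                   # amplitude unchanged
print(np.isclose(Sd, S * np.exp(-1j * omega * t0), rtol=1e-6))  # phase factor exp(-j*w*t0)
```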

2) Delta function (Table 4.3, item 9). The spectral density of the δ-function is found by formula (4.41), using the sifting (filtering) property of the δ-function:

Thus, the amplitude spectrum is uniform and is determined by the area of the δ-function [|S(ω)| = 1], and the phase spectrum is zero [φ(ω) = 0].

The inverse Fourier transform of the function S(ω) = 1 is used as one of the definitions of the δ-function:

Using the time-shift property (Table 4.2, item 9), we determine the spectral density of the δ-function delayed by a time t0 relative to δ(t):

The amplitude and phase spectra of this function are shown in Table 4.3, item 10. The inverse Fourier transform of this function has the form

3) Harmonic oscillation (Table 4.3, item 12). A harmonic oscillation is not an absolutely integrable signal. Nevertheless, to determine its spectral density, the direct Fourier transform is applied by writing formula (4.41) as

Then, taking into account (4.47), we obtain

where δ(ω − ω0) and δ(ω + ω0) are delta functions shifted along the frequency axis by the frequency ω0 to the right and to the left, respectively, relative to ω = 0. As can be seen from (4.48), the spectral density of a harmonic oscillation of finite amplitude takes on infinitely large values at these discrete frequencies.

Performing similar transformations, one can obtain the spectral density of the oscillation (Table 4.3, item 13)

4) Function of the form (Table 4.3, item 11)

The spectral density of a signal in the form of a constant level A is determined from formula (4.48) by setting ω0 = 0:

5) Unit function (or unit step) (Table 4.3, item 8). This function is not absolutely integrable. If it is represented as the limit of an exponential pulse, i.e.

then the spectral density of the unit step can be defined as the limit of the spectral density of the exponential pulse (Table 4.3, item 1) as the decay factor tends to zero:

The first term on the right-hand side of this expression is zero at all frequencies except ω = 0, where it becomes infinite, while the area under this function remains equal to a constant value

Therefore, a δ-function can be taken as the limit of the first term, while the limit of the second term is the function 1/(jω). Finally, we obtain

The presence of two terms in expression (4.51) is consistent with the representation of the unit step in the form 1/2 + (1/2)sign(t). According to (4.50), the constant component 1/2 corresponds to the spectral density πδ(ω), while the odd function (1/2)sign(t) corresponds to the imaginary part of the spectral density, 1/(jω).
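The limiting argument can be checked numerically (a Python/NumPy sketch; the decay factor a and the grid limits are assumptions). The real part of the exponential-pulse spectrum 1/(a + jω) encloses an area that tends to π, i.e., to the weight of the πδ(ω) term, while away from ω = 0 the imaginary part tends to −1/ω, which is exactly the 1/(jω) term.

```python
import numpy as np

def S(w, a):
    """Spectrum of the exponential pulse exp(-a*t), t >= 0:  S(w) = 1/(a + jw)."""
    return 1.0 / (a + 1j * w)

w = np.linspace(-500.0, 500.0, 2_000_001)
dw = w[1] - w[0]

# Area under Re S = a/(a^2 + w^2) tends to pi as a -> 0: the pi*delta(w) term
for a in (1.0, 0.1, 0.01):
    print(a, np.sum(S(w, a).real) * dw)     # approaches pi = 3.14159...

# Away from w = 0 the imaginary part tends to -1/w: the 1/(jw) term
print(np.isclose(S(10.0, 1e-6).imag, -1.0 / 10.0, rtol=1e-6))
```

The exact area is 2·arctan(500/a), which differs from π by only ~4e-5 already at a = 0.01.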

When analyzing the effect of a unit step on circuits whose transfer function vanishes at ω = 0 (i.e., circuits that do not pass direct current), only the second term of formula (4.51) need be taken into account, representing the spectral density of the unit step in the form

6) Complex exponential signal (Table 4.3, item 16). If we represent the function in the form

then, based on the linearity of the Fourier transform and taking into account expressions (4.48) and (4.49), the spectral density of the complex exponential signal is

Therefore, the complex signal has an asymmetric spectrum, represented by a single delta function shifted by the frequency ω0 to the right relative to ω = 0.

7) Arbitrary periodic function. Let us represent an arbitrary periodic function (Fig. 4.31, a) as a complex Fourier series

where the fundamental frequency is the pulse repetition frequency.

Fourier series coefficients

are expressed in terms of the spectral density of a single pulse s(t) at the frequencies corresponding to n = 0, ±1, ±2, .... Substituting (4.55) into (4.54) and using relation (4.53), we determine the spectral density of the periodic function:

According to (4.56), the spectral density of an arbitrary periodic function has the form of a sequence of δ-functions shifted relative to one another by the repetition frequency (Fig. 4.31, b). The coefficients of the δ-functions vary in accordance with the spectral density of a single pulse s(t) (dashed curve in Fig. 4.31, b).
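This line structure is easy to reproduce numerically (a sketch with assumed sampling parameters): the DFT of an exactly periodic rectangular pulse train is nonzero only at multiples of the repetition frequency 1/T, and the line amplitudes follow the spectral density of a single pulse divided by the period.

```python
import numpy as np

fs = 100_000.0                  # sample rate, Hz (assumed for the sketch)
T, tau, A = 1e-3, 2e-4, 1.0     # repetition period, pulse duration, amplitude (assumed)

per = int(round(T * fs))        # 100 samples per period
pw  = int(round(tau * fs))      # 20-sample pulse
n_periods = 50
N = per * n_periods

n = np.arange(N)
s = A * ((n % per) < pw)        # pulse train with exactly 50 periods

c = np.fft.rfft(s) / N          # estimates of the Fourier-series coefficients
k = np.arange(len(c))
step = N // per                 # line spacing in bins: 1/T corresponds to 50 bins

# The spectrum contains only lines at multiples of the repetition frequency 1/T
print(np.max(np.abs(c[k % step != 0])))         # ~0 between the lines

# The line at f = 1/T follows the single-pulse spectral density divided by T
G1 = A * tau * np.sinc(tau / T)                 # |G| at f = 1/T (pulse phase ignored)
print(np.isclose(abs(c[step]), G1 / T, rtol=1e-2))
```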

8) Periodic sequence of δ-functions (Table 4.3, item 17). The spectral density of a periodic sequence of δ-functions

is defined by formula (4.56) as a special case of the spectral density of a periodic function whose single-pulse spectral density equals 1:

Fig. 4.31. An arbitrary periodic sequence of pulses (a) and its spectral density (b)

Fig. 4.32. Radio signal (a) and the spectral densities of its envelope (b) and of the radio signal itself (c)

and has the form of a periodic sequence of δ-functions multiplied by the coefficient 2π/T, where T is the repetition period.

9) Radio signal with a rectangular envelope. The radio signal shown in Fig. 4.32, a can be written as

According to item 11 of Table 4.2, the spectral density of the radio signal is obtained by shifting the spectral density of the rectangular envelope along the frequency axis to the right and to the left, with the ordinates halved, i.e.

This expression is obtained from (4.42) by replacing the frequency ω with ω − ω0 (shift to the right) and ω + ω0 (shift to the left). The transformation of the envelope spectrum is shown in Fig. 4.32, b, c.
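The modulation shift can be checked by direct numerical evaluation of the Fourier integral (a sketch; the carrier frequency, envelope amplitude, and duration are assumed values). Up to the small spectral tail folded in from −f0, the spectrum of the radio signal at the frequency f0 + Δf equals half the envelope spectrum at Δf.

```python
import numpy as np

t = np.linspace(-0.02, 0.02, 400001)      # time grid, s
dt = t[1] - t[0]
A, tau, f0 = 1.0, 4e-3, 50_000.0          # assumed envelope amplitude/duration, carrier in Hz

env = np.where(np.abs(t) <= tau / 2, A, 0.0)
radio = env * np.cos(2 * np.pi * f0 * t)

def spec(x, f):
    """Numerical Fourier transform of x(t) at frequency f, Hz."""
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

df = 300.0                                 # offset from the carrier, Hz
lhs = spec(radio, f0 + df)
rhs = 0.5 * spec(env, df)                  # half the envelope spectrum, shifted to f0
print(np.isclose(lhs, rhs, rtol=0.02))     # agree up to the tail folded from -f0
```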

Further examples of calculating the spectra of non-periodic signals can be found in the literature.


