Criterion of linear dependence of rows and columns of a square matrix. Linear dependence and linear independence of matrix rows and columns

where the coefficients are certain numbers (some or even all of them may be equal to zero). This means that the following equalities hold between the elements of the columns:

From (3.3.1) it follows that

If equality (3.3.3) holds only when all the coefficients are equal to zero, then the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed in terms of the others, then the rows are linearly dependent.

It is also easy to see the opposite: if the rows are linearly dependent, then there is a row that is a linear combination of the other rows.

Suppose, for example, that one of the coefficients in (3.3.3), say the first, is nonzero; then the corresponding row can be expressed in terms of the remaining rows.

Definition. Let some minor M of order r be selected in the matrix A, and let a minor M′ of order (r + 1) of the same matrix completely contain the minor M inside it. We say in this case that the minor M′ borders the minor M (or that M′ is a bordering minor for M).

We now prove an important lemma.

Lemma on bordering minors. If the minor D of order r of the matrix A = (a_ij) is nonzero, and all the minors bordering it are equal to zero, then any row (column) of the matrix A is a linear combination of the rows (columns) that make up D.

Proof. Without loss of generality we will assume that the nonzero minor D of order r is located in the upper left corner of the matrix A = (a_ij), i.e. it is formed by the first r rows and the first r columns.

For the first r rows of the matrix A the statement of the lemma is obvious: it suffices to include in the linear combination that same row with a coefficient equal to one and all the others with coefficients equal to zero.

We now prove that the remaining rows of the matrix A are linearly expressed in terms of the first r rows. To do this, we construct a minor of order (r + 1) by adding to D the k-th row (k > r) and the l-th column (1 ≤ l ≤ n).

The resulting minor is equal to zero for all k and l. If l ≤ r, then it is zero because it contains two identical columns. If l > r, then it is a bordering minor for D and is therefore equal to zero by the hypothesis of the lemma.

Let us expand this minor along the elements of the last (l-th) column. Solving the resulting equality for a_kl (its coefficient is the cofactor equal to D ≠ 0), we get

(3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed in terms of the first r rows.

Since the values of minors do not change when a matrix is transposed (by the properties of determinants), everything proved also holds for columns. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is nonzero, and all the minors bordering it are equal to zero.

Corollary II. An n-th order determinant is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency (linear dependence of the rows or columns implies that the determinant is zero) was proved earlier as a property of determinants.

Let us prove the necessity. Let a square matrix of order n be given whose only n-th order minor (the determinant itself) is equal to zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix is ​​equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of the matrix A = (a_ij) be equal to r. Then its r basis rows are linearly independent, otherwise the basis minor would be equal to zero. On the other hand, any r + 1 or more rows are linearly dependent. Assuming the contrary, we could find a nonzero minor of order greater than r by Corollary 2 of the previous lemma. The latter contradicts the fact that the maximum order of nonzero minors is r. Everything proved for rows is also true for columns.

In conclusion, we present one more method for finding the rank of a matrix. The rank of a matrix can be determined by finding a minor of maximum order that is different from zero.

At first glance, this requires the calculation of a finite, but perhaps a very large number of minors of this matrix.

The following theorem allows, however, significant simplifications to be made.

Theorem. If a minor M of order r of the matrix A is nonzero, and all the minors bordering it are equal to zero, then the rank of the matrix is equal to r.

Proof. It suffices to show that any system of s > r rows of the matrix is linearly dependent under the conditions of the theorem (from this it will follow that r is the maximum number of linearly independent rows of the matrix, and that any of its minors of order greater than r is equal to zero).

Assume the opposite: let these s rows be linearly independent. By the lemma on bordering minors, each of them is linearly expressed in terms of the rows in which the minor M is located and which, since M is nonzero, are linearly independent:

Now consider the following linear combination:

or

Using (3.3.7) and (3.3.8), we obtain

a nontrivial linear combination of these rows equal to the zero row, which contradicts their assumed linear independence.

Consequently, our assumption is false, and therefore any s > r rows are linearly dependent under the conditions of the theorem. The theorem is proved.

Consider the rule for calculating the rank of a matrix - the method of bordering minors, based on this theorem.

When calculating the rank of a matrix, one passes from minors of lower orders to minors of higher orders. If a nonzero minor of order r has already been found, then only the minors of order (r + 1) bordering it need to be computed. If they are all zero, then the rank of the matrix is r. This method is also used when we not only calculate the rank of the matrix but also determine which columns (rows) make up its basis minor. A computational sketch of this procedure is given below.
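A minimal Python sketch of the bordering-minors procedure (the matrix A, the tolerance, and the helper functions are illustrative assumptions, not taken from the text):

```python
import numpy as np

def minor(A, rows, cols):
    """Determinant of the submatrix at the intersection of the given rows and columns."""
    return np.linalg.det(A[np.ix_(rows, cols)])

def rank_by_bordering_minors(A, tol=1e-10):
    """Grow a nonzero minor until every minor bordering it vanishes."""
    m, n = A.shape
    # find a nonzero first-order minor (i.e. a nonzero element)
    start = next(((i, j) for i in range(m) for j in range(n) if abs(A[i, j]) > tol), None)
    if start is None:
        return 0                       # the zero matrix has rank 0
    rows, cols = [start[0]], [start[1]]
    while len(rows) < min(m, n):
        grown = False
        # examine only the bordering minors of order r + 1
        for i in (i for i in range(m) if i not in rows):
            for j in (j for j in range(n) if j not in cols):
                if abs(minor(A, rows + [i], cols + [j])) > tol:
                    rows, cols = rows + [i], cols + [j]
                    grown = True
                    break
            if grown:
                break
        if not grown:                  # all bordering minors are zero
            break
    return len(rows)

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])           # second row is twice the first
print(rank_by_bordering_minors(A), np.linalg.matrix_rank(A))   # 2 2
```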

Example. Calculate the rank of a matrix by the method of bordering minors.

Solution. The second-order minor in the upper left corner of the matrix A is non-zero:

.

However, all third-order minors bordering it are equal to zero.

Therefore, the rank of the matrix A is equal to two.

The first and second rows and the first and second columns of this matrix are basis ones; the remaining rows and columns are their linear combinations, as the corresponding equalities for the rows show.

In conclusion, we note the validity of the following properties:

1) the rank of the product of matrices is not greater than the rank of each of the factors;

2) the rank of the product of an arbitrary matrix A by a nonsingular square matrix Q, on the right or on the left, is equal to the rank of the matrix A (a numerical check of both properties is sketched below).
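A quick numerical check of both properties; the matrices are chosen arbitrarily for illustration only:

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])                  # rank 1 (second row = 2 * first)
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                      # rank 2
Q = np.array([[2., 1.],
              [1., 1.]])                      # nonsingular 2x2, det = 1
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])                  # nonsingular 3x3, det = 1

# 1) the rank of a product is not greater than the rank of each factor
print(rank(A @ B) <= min(rank(A), rank(B)))   # True

# 2) multiplying by a nonsingular square matrix preserves the rank
print(rank(Q @ A) == rank(A))                 # True (multiplication on the left)
print(rank(A @ P) == rank(A))                 # True (multiplication on the right)
```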

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable λ with numerical coefficients.

Elementary transformations can be performed on λ-matrices. These include:

Permutation of two rows (columns);

Multiplying a row (column) by a non-zero number;

Adding to one row (column) another row (column), multiplied by any polynomial.

Two λ-matrices of the same size are called equivalent if it is possible to pass from one of them to the other using a finite number of elementary transformations.

Example. Prove the equivalence of matrices

, .

1. Swap the first and second columns in the matrix:

.

2. From the second row, subtract the first row multiplied by the appropriate factor:

.

3. Multiply the second row by (-1) and note that

.

4. Subtracting from the second column the first column multiplied by the appropriate factor, we get

.

The set of all λ-matrices of given sizes is divided into disjoint classes of equivalent matrices: matrices that are equivalent to each other form one class, and non-equivalent matrices belong to different classes.

Each class of equivalent matrices is characterized by a canonical, or normal, λ-matrix of the given sizes.

Definition. A canonical, or normal, λ-matrix of size m × n is a λ-matrix that has polynomials E1(λ), E2(λ), …, Ep(λ) on the main diagonal, where p is the smaller of the numbers m and n; the polynomials that are not identically zero have leading coefficients equal to 1, and each subsequent polynomial is divisible by the previous one. All elements outside the main diagonal are equal to 0.

It follows from the definition that if among the polynomials there are polynomials of degree zero, then they are at the beginning of the main diagonal. If there are zeros, then they are at the end of the main diagonal.

The matrix obtained in the previous example is canonical.

Each class of λ-matrices contains a unique canonical λ-matrix, i.e. each λ-matrix is equivalent to a single canonical matrix, which is called the canonical form, or normal form, of this matrix.

The polynomials on the main diagonal of the canonical form of a given λ-matrix are called the invariant factors of this matrix.

One of the methods for calculating the invariant factors is to reduce the given λ-matrix to canonical form.

So, for the matrix of the previous example, the invariant factors are

It follows from what has been said that the presence of the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

The reduction of a λ-matrix to canonical form thus reduces to computing its invariant factors

E_k(λ) = D_k(λ) / D_(k-1)(λ), k = 1, 2, …, r, where D_0(λ) = 1,

where r is the rank of the matrix and D_k(λ) is the greatest common divisor of its k-th order minors, taken with leading coefficient equal to 1.
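A small SymPy sketch of this computation; the λ-matrix below is hypothetical (it is not the matrix of the following example), and the helper names are illustrative only:

```python
from itertools import combinations
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[lam, 1],
               [0, lam]])                     # a hypothetical lambda-matrix

def D(M, k):
    """D_k(lambda): monic GCD of all k-th order minors of M."""
    g = sp.Integer(0)
    for rows in combinations(range(M.rows), k):
        for cols in combinations(range(M.cols), k):
            g = sp.gcd(g, M.extract(list(rows), list(cols)).det())
    return sp.monic(g, lam) if g != 0 else g

D1, D2 = D(A, 1), D(A, 2)                     # D1 = 1, D2 = lambda**2
E1 = sp.cancel(D1 / 1)                        # first invariant factor: 1
E2 = sp.cancel(D2 / D1)                       # second invariant factor: lambda**2
print(E1, E2)                                 # canonical form is diag(1, lambda**2)
```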

Example. Consider the λ-matrix

.

Solution. Obviously, the greatest common divisor of the first order, i.e. .

We define second-order minors:

, etc.

Already these data are enough to draw a conclusion: therefore, .

We define

,

Hence, .

Thus, the canonical form of this matrix is the following λ-matrix:

.

A matrix polynomial is an expression of the form

A(λ) = A0·λ^S + A1·λ^(S-1) + … + AS,

where λ is a variable and A0, A1, …, AS are square matrices of order n with numerical elements.

If A0 ≠ 0, then S is called the degree of the matrix polynomial, and n is the order of the matrix polynomial.

Any square λ-matrix can be represented as a matrix polynomial. The converse is obviously also true: any matrix polynomial can be represented as a square λ-matrix.

The validity of these statements clearly follows from the properties of operations on matrices. Let's look at the following examples:

Example. The polynomial matrix below can be represented in the form of a matrix polynomial as follows:

.

Example. Conversely, a matrix polynomial can be represented as the following polynomial matrix (λ-matrix):

.

This interchangeability of matrix polynomials and polynomial matrices plays an essential role in the mathematical apparatus of factor and component analysis methods.

Matrix polynomials of the same order can be added, subtracted, and multiplied in the same way as ordinary polynomials with numerical coefficients. However, it should be remembered that the multiplication of matrix polynomials, generally speaking, is not commutative, since matrix multiplication is not commutative.

Two matrix polynomials are called equal if their corresponding coefficient matrices at the same powers of the variable λ are equal.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient at each power of the variable is equal to the sum (difference) of the coefficients at the same power in the two given polynomials.

To multiply one matrix polynomial by another, each term of the first must be multiplied by each term of the second, the resulting products added, and like terms collected.

The degree of a product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be performed using operations on the corresponding -matrices.

To add (subtract) matrix polynomials, it is enough to add (subtract) the corresponding λ-matrices. The same applies to multiplication: the λ-matrix of a product of matrix polynomials is equal to the product of the λ-matrices of the factors.
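The following SymPy sketch, with two small and arbitrarily chosen first-degree matrix polynomials, illustrates this correspondence: collecting the products of the coefficient matrices by powers of λ gives the same result as multiplying the λ-matrices directly.

```python
import sympy as sp

lam = sp.symbols('lambda')

# A(lambda) = A0*lambda + A1, B(lambda) = B0*lambda + B1 (hypothetical coefficients)
A0, A1 = sp.Matrix([[1, 0], [2, 1]]), sp.Matrix([[0, 1], [1, 0]])
B0, B1 = sp.Matrix([[1, 1], [0, 1]]), sp.Matrix([[2, 0], [0, 3]])

A_lam = A0 * lam + A1                 # lambda-matrix corresponding to A(lambda)
B_lam = B0 * lam + B1                 # lambda-matrix corresponding to B(lambda)

# term-by-term product, as for ordinary polynomials with matrix coefficients
by_terms = A0 * B0 * lam**2 + (A0 * B1 + A1 * B0) * lam + A1 * B1

print((A_lam * B_lam).expand() == by_terms.expand())          # True
# multiplication is not commutative: B(lambda)*A(lambda) is generally different
print((A_lam * B_lam).expand() == (B_lam * A_lam).expand())   # False
```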

On the other hand, the matrix polynomials A(λ) and B(λ) can be written in the form

A(λ) = A0·λ^S + A1·λ^(S-1) + … + AS, B(λ) = B0·λ^m + B1·λ^(m-1) + … + Bm,

where B0 is a nonsingular matrix.

When A(λ) is divided by B(λ), there is a uniquely defined right quotient Q1(λ) and a right remainder R1(λ):

A(λ) = Q1(λ)·B(λ) + R1(λ),

where the degree of R1(λ) is less than the degree of B(λ), or R1(λ) = 0 (division without a remainder); the left quotient and the left remainder are defined similarly, with the factors taken in the opposite order.

The concepts of linear dependence and linear independence are defined for rows and columns in the same way. Therefore, the properties associated with these concepts, formulated for columns, of course, are also valid for rows.

1. If the column system includes a zero column, then it is linearly dependent.

2. If a column system has two equal columns, then it is linearly dependent.

3. If a column system has two proportional columns, then it is linearly dependent.

4. A system of columns is linearly dependent if and only if at least one of the columns is a linear combination of the others.

5. Any columns included in a linearly independent system form a linearly independent subsystem.

6. A column system containing a linearly dependent subsystem is linearly dependent.

7. If a system of columns A1, …, Ak is linearly independent and becomes linearly dependent after a column A is added to it, then the column A can be decomposed into the columns A1, …, Ak, and moreover in a unique way, i.e. the expansion coefficients are determined uniquely.

Let us prove, for example, the last property. Since the system of columns A1, …, Ak, A is linearly dependent, there are numbers λ1, …, λk, λ, not all equal to 0, such that

λ1·A1 + … + λk·Ak + λ·A = o.

Moreover, λ ≠ 0 in this equality. Indeed, if λ = 0, then λ1·A1 + … + λk·Ak = o would be a non-trivial linear combination of columns equal to the zero column, which contradicts the linear independence of the system A1, …, Ak. Therefore λ ≠ 0, and then A = (−λ1/λ)·A1 + … + (−λk/λ)·Ak, i.e. the column A is a linear combination of the columns A1, …, Ak. It remains to show the uniqueness of such a representation. Assume the opposite: let there be two expansions A = α1·A1 + … + αk·Ak and A = β1·A1 + … + βk·Ak in which not all of the coefficients are respectively equal to each other (for example, α1 ≠ β1). Then, subtracting one equality from the other, we get

(α1 − β1)·A1 + … + (αk − βk)·Ak = o,

i.e. a linear combination of the columns A1, …, Ak equal to the zero column. Since not all of its coefficients are equal to zero (at least α1 − β1 ≠ 0), this combination is non-trivial, which contradicts the linear independence of the columns A1, …, Ak. The resulting contradiction proves the uniqueness of the decomposition.
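A small numerical illustration of property 7 (the column names and values are hypothetical): when the columns of the original system are linearly independent, the expansion coefficients of the added column can be recovered uniquely, for example by solving the corresponding linear system.

```python
import numpy as np

A1 = np.array([1.0, 0.0, 1.0])
A2 = np.array([0.0, 1.0, 1.0])         # A1 and A2 are linearly independent
A  = 2.0 * A1 - 3.0 * A2               # adding A makes the system dependent

M = np.column_stack([A1, A2])
coeffs, residual, rank, _ = np.linalg.lstsq(M, A, rcond=None)
print(coeffs)                           # [ 2. -3.]  -- the unique expansion coefficients
```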

Example 3.2. Prove that two nonzero columns A1 and A2 are linearly dependent if and only if they are proportional, i.e. A2 = λ·A1 for some number λ.

Solution. Indeed, if the columns A1 and A2 are linearly dependent, then there are numbers λ1, λ2, not both equal to zero, such that λ1·A1 + λ2·A2 = o. Moreover, λ2 ≠ 0 in this equality. Indeed, assuming λ2 = 0, we obtain λ1·A1 = o with λ1 ≠ 0, a contradiction, since the column A1 is nonzero. Hence λ2 ≠ 0, and therefore there is a number λ = −λ1/λ2 such that A2 = λ·A1. The necessity is proved.

Conversely, if A2 = λ·A1, then λ·A1 − A2 = o. We have obtained a non-trivial linear combination of the columns equal to the zero column, so the columns are linearly dependent.

Example 3.3. Consider all possible systems formed from columns

Examine each system for a linear relationship.
Solution. Consider first the five systems containing one column each. According to paragraph 1 of Remarks 3.1, the systems consisting of a single nonzero column are linearly independent, and the system consisting of the single zero column is linearly dependent.

Consider systems containing two columns each:

– each of the four systems and is linearly dependent, since it contains a zero column (property 1);

– the system is linearly dependent, since the columns are proportional (property 3): ;

- each of the five systems and is linearly independent, since the columns are non-proportional (see the statement of example 3.2).

Consider systems containing three columns:

– each of the six systems and is linearly dependent, since it contains a zero column (property 1);

– systems are linearly dependent, since they contain a linearly dependent subsystem (property 6);

– the remaining systems are linearly dependent, since in each of them the last column is linearly expressed in terms of the rest (property 4).

Finally, systems of four or five columns are linearly dependent (by property 6).
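The case analysis of Example 3.3 can also be checked mechanically. The sketch below uses five hypothetical columns whose structure mirrors the example (one zero column, one column proportional to another, one column equal to the sum of two others) and tests every subsystem for linear dependence through the rank.

```python
from itertools import combinations
import numpy as np

cols = {
    "A1": np.array([1.0, 0.0, 0.0]),
    "A2": np.array([0.0, 1.0, 0.0]),
    "A3": np.array([0.0, 0.0, 0.0]),    # zero column
    "A4": np.array([2.0, 0.0, 0.0]),    # proportional to A1
    "A5": np.array([1.0, 1.0, 0.0]),    # equals A1 + A2
}

for k in range(1, len(cols) + 1):
    for names in combinations(cols, k):
        M = np.column_stack([cols[n] for n in names])
        dependent = np.linalg.matrix_rank(M) < k    # dependent iff rank < number of columns
        print(names, "dependent" if dependent else "independent")
```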

Matrix rank

In this section, we consider another important numerical characteristic of a matrix, related to how much its rows (columns) depend on each other.

Definition 14.10. Let there be a matrix of size m × n and a number k not exceeding the smaller of the numbers m and n: k ≤ min(m, n). Choose arbitrarily k rows and k columns of the matrix (the numbers of the rows may differ from the numbers of the columns). The determinant of the matrix composed of the elements at the intersection of the selected rows and columns is called a minor of order k of the matrix.

Example 14.9 Let .

A first-order minor is any element of the matrix. So 2, , are first-order minors.

Minors of the second order:

1. take rows 1, 2, columns 1, 2, we get a minor ;

2. take rows 1, 3, columns 2, 4, we get a minor ;

3. take rows 2, 3, columns 1, 4, we get a minor

Minors of the third order:

rows here can only be selected in one way,

1. take columns 1, 3, 4, get a minor ;

2. take columns 1, 2, 3, and we get another third-order minor (a computational sketch of such minors is given below).
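Such minors are computed by selecting rows and columns and taking the determinant of the resulting submatrix; the 3×4 matrix below is hypothetical, since the original one is not reproduced here, and the indices in the code are 0-based.

```python
import numpy as np

A = np.array([[1., 0., 2., 4.],
              [3., 1., 0., 5.],
              [2., 1., 1., 3.]])

def minor(A, rows, cols):
    """Determinant of the submatrix at the intersection of the chosen rows and columns."""
    return np.linalg.det(A[np.ix_(rows, cols)])

print(minor(A, [0, 1], [0, 1]))        # second-order minor: rows 1,2 and columns 1,2
print(minor(A, [0, 2], [1, 3]))        # second-order minor: rows 1,3 and columns 2,4
print(minor(A, [0, 1, 2], [0, 2, 3]))  # third-order minor: all rows, columns 1,3,4
```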

Proposition 14.23. If all minors of order k of a matrix are equal to zero, then all minors of order k + 1, if any, are also equal to zero.

Proof. Take an arbitrary minor of order k + 1. This is the determinant of a matrix of order k + 1. Let us expand it along the first row. Then, in each term of the expansion, one of the factors is a minor of order k of the original matrix. By assumption, all minors of order k are equal to zero. Therefore, the minor of order k + 1 is also equal to zero.

Definition 14.11. The rank of a matrix is the largest order of the nonzero minors of the matrix. The rank of the zero matrix is considered to be zero.

There is no single standard notation for the rank of a matrix; here we follow the notation of the textbook.

Example 14.10 The matrix of Example 14.9 has rank 3 because there is a non-zero third-order minor, but there are no fourth-order minors.

Matrix rank is equal to 1, since there is a non-zero first-order minor (an element of the matrix), and all second-order minors are equal to zero.

The rank of a nonsingular square matrix of order n is equal to n, since its determinant is a minor of order n and is nonzero for a nonsingular matrix.

Proposition 14.24. When a matrix is transposed, its rank does not change.

Proof. Any minor of the original matrix becomes, after transposition, a minor of the transposed matrix, and vice versa: any minor of the transposed matrix is a transposed minor of the original one. Under transposition the determinant (minor) does not change (Proposition 14.6). Therefore, if all minors of some order in the original matrix are equal to zero, then all minors of that order in the transposed matrix are also equal to zero; and if the original matrix has a nonzero minor of some order, then the transposed matrix has a nonzero minor of the same order. Hence the ranks coincide.

Definition 14.12. Let the rank of a matrix be r. Then any nonzero minor of order r is called a basis minor.

Example 14.11. The determinant of the matrix under consideration is zero, since its third row is equal to the sum of the first two. The second-order minor located in the first two rows and the first two columns is nonzero. Therefore, the rank of the matrix is equal to two, and this minor is a basis minor.

A basis minor is also, for example, the minor located in the first and third rows and the first and third columns; likewise, the minor in the second and third rows and the first and third columns is a basis minor.

The minor in the first and second rows, the second and third columns is equal to zero and therefore will not be basic. The reader can independently check which other second-order minors are basic and which are not.

Since the columns (rows) of the matrix can be added, multiplied by numbers, form linear combinations, it is possible to introduce definitions of linear dependence and linear independence of the system of columns (rows) of the matrix. These definitions are similar to the same definitions 10.14, 10.15 for vectors.

Definition 14.13 A system of columns (rows) is called linearly dependent if there is such a set of coefficients, of which at least one is nonzero, that the linear combination of columns (rows) with these coefficients will be equal to zero.

Definition 14.14 A system of columns (rows) is linearly independent if it follows from the equality to zero of a linear combination of these columns (rows) that all coefficients of this linear combination are equal to zero.

The following proposition, similar to Proposition 10.6, is also true.

Proposition 14.25. A system of columns (rows) is linearly dependent if and only if one of the columns (one of the rows) is a linear combination of the other columns (rows) of this system.

We now formulate a theorem called the basis minor theorem.

Theorem 14.2 Any column of a matrix is ​​a linear combination of columns passing through the basis minor.

The proof can be found in textbooks on linear algebra.

Proposition 14.26. The rank of a matrix is equal to the maximum number of its columns that form a linearly independent system.

Proof. Let the rank of the matrix be r. Take the r columns passing through the basis minor. Assume that these columns form a linearly dependent system. Then one of the columns is a linear combination of the others, and therefore, in the basis minor, one column would be a linear combination of the other columns. By Propositions 14.15 and 14.18, this basis minor would then have to be equal to zero, which contradicts the definition of a basis minor. Therefore, the assumption that the columns passing through the basis minor are linearly dependent is false. So the maximum number of columns forming a linearly independent system is greater than or equal to r.

Now assume that some p > r columns form a linearly independent system, and make a matrix B out of them. All minors of B are minors of the original matrix; therefore, the basis minor of B has order at most r. By the basis minor theorem, a column of B that does not pass through its basis minor is a linear combination of the columns that do pass through it, that is, the columns of B form a linearly dependent system. This contradicts the choice of the columns forming B. Therefore, the maximum number of columns forming a linearly independent system cannot be greater than r. Hence it is equal to r, as stated.

Proposition 14.27. The rank of a matrix is equal to the maximum number of its rows that form a linearly independent system.

Proof. By Proposition 14.24, the rank of a matrix does not change under transposition. The rows of the matrix become the columns of the transposed matrix, and the maximum number of columns of the transposed matrix (former rows of the original one) that form a linearly independent system is equal to its rank, which coincides with the rank of the original matrix.

Proposition 14.28. If the determinant of a matrix is equal to zero, then one of its columns (one of the rows) is a linear combination of the remaining columns (rows).

Proof. Let the order of the matrix be n. The determinant is the only minor of the square matrix that has order n. Since it is equal to zero, the rank of the matrix is less than n. Therefore, by the previous propositions, the system of columns (rows) is linearly dependent, that is, one of the columns (one of the rows) is a linear combination of the others.

The results of Propositions 14.15, 14.18 and 14.28 give the following theorem.

Theorem 14.3 The determinant of a matrix is ​​zero if and only if one of its columns (one of the rows) is a linear combination of the other columns (rows).

Finding the rank of a matrix by calculating all its minors requires too much computational work. (The reader can verify that there are 36 second-order minors in a fourth-order square matrix.) Therefore, a different algorithm is used to find the rank. To describe it, some additional information is required.

Definition 14.15 We call the following operations on them elementary transformations of matrices:

1) permutation of rows or columns;
2) multiplying a row or column by a non-zero number;
3) adding to one of the rows another row, multiplied by a number, or adding to one of the columns of another column, multiplied by a number.

Proposition 14.29. Under elementary transformations, the rank of a matrix does not change.

Proof. Let the rank of the matrix A be equal to r, and let B be the matrix resulting from an elementary transformation of A.

Consider a permutation of rows. Let M be a minor of the matrix A; then the matrix B has a minor M′ that either coincides with M or differs from it by a permutation of rows. Conversely, any minor of B can be associated with a minor of A that either coincides with it or differs from it in the order of the rows. Therefore, from the fact that all minors of a given order in the matrix A are equal to zero, it follows that all minors of this order in B are also equal to zero; and since A has a nonzero minor of order r, the matrix B also has a nonzero minor of order r, i.e. the ranks are equal.

Consider multiplying a row by a nonzero number λ. To a minor M of the matrix A there corresponds a minor M′ of the matrix B that either coincides with M or differs from it in only one row, which is obtained from the corresponding row of M by multiplying by λ; in the latter case M′ = λM. In all cases M and M′ are either simultaneously equal to zero or simultaneously nonzero. Hence the ranks are equal.
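A quick numerical confirmation of the proposition for all three types of elementary transformations (the matrix is chosen arbitrarily):

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 3., 1.]])            # third row = first + second, so the rank is 2

B1 = A[[1, 0, 2], :]                    # 1) swap two rows
B2 = A.copy(); B2[1, :] *= 5.0          # 2) multiply a row by a nonzero number
B3 = A.copy(); B3[2, :] += -2.0 * A[0, :]   # 3) add a multiple of one row to another

print(rank(A), rank(B1), rank(B2), rank(B3))   # all four equal 2
```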

Linear independence of matrix rows

Let a matrix of size m × n be given.

We denote the rows of the matrix as follows:

Two rows are called equal if their corresponding elements are equal.

We introduce the operations of multiplying a row by a number and adding rows as operations carried out element by element:

Definition. A row is called a linear combination of matrix rows if it is equal to the sum of the products of these rows by arbitrary real numbers (any numbers):

Definition. The rows of the matrix are called linearly dependent if there exist numbers, not all simultaneously equal to zero, such that the linear combination of the matrix rows with these numbers is equal to the zero row:

(1.1)

The linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the rest.

Definition. If the linear combination of rows (1.1) is equal to the zero row only when all the coefficients are equal to zero, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is ​​equal to the maximum number of its linearly independent rows or columns through which all other rows (columns) are linearly expressed.

The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.

6, 13, 14, 15, 16. Vectors. Operations on vectors (addition, subtraction, multiplication by a number), the n-dimensional vector. The concept of a vector space and its basis.

A vector is a directed segment with a starting point A and an end point B (it can be moved parallel to itself).

Vectors can be denoted either by two uppercase letters or by one lowercase letter with a bar or an arrow.

The length (or modulus) of a vector is the number equal to the length of the segment AB representing the vector.

Vectors that lie on the same line or on parallel lines are called collinear .

If the beginning and the end of a vector coincide, such a vector is called the zero vector; the length of the zero vector is zero.

1) The product of a vector by a number λ is a vector whose length is |λ| times the length of the original vector and whose direction coincides with the direction of the original vector if λ > 0 and is opposite to it if λ < 0.

2) The opposite vector is the product of the given vector by the number (−1).

3) The sum of two vectors is the vector whose beginning coincides with the beginning of the first vector and whose end coincides with the end of the second, provided that the beginning of the second coincides with the end of the first (the triangle rule). The sum of several vectors is defined similarly.

4) The difference of two vectors is the sum of the first vector and the vector opposite to the second.

Scalar product

Definition: The scalar product of two vectors is the number equal to the product of the lengths of these vectors and the cosine of the angle between them: (a, b) = |a|·|b|·cos φ.
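In coordinates, the scalar product and the angle are computed as in the short sketch below (the vectors are chosen arbitrarily):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot = float(np.dot(a, b))                                   # scalar product
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))     # cosine of the angle
print(dot, np.degrees(np.arccos(cos_phi)))                  # 11.0 and the angle in degrees
```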

n-dimensional vector and vector space

Definition. An n-dimensional vector is an ordered collection of n real numbers written as x = (x1, x2, …, xn), where xi is the i-th component of the vector x.

The concept of an n-dimensional vector is widely used in economics; for example, a certain set of goods can be characterized by the vector x = (x1, x2, …, xn) and the corresponding prices by y = (y1, y2, …, yn).

- Two n-dimensional vectors are equal if and only if their corresponding components are equal, i.e. x = y if xi = yi, i = 1, 2, …, n.

- The sum of two vectors of the same dimension n is the vector z = x + y whose components are equal to the sums of the corresponding components of the summands, i.e. zi = xi + yi, i = 1, 2, …, n.

- The product of a vector x by a real number λ is the vector λx whose components are equal to the products of λ and the corresponding components of x, i.e. (λx)i = λxi, i = 1, 2, …, n.

Linear operations on any vectors satisfy the following properties:



1) x + y = y + x (the commutative property of the sum);

2) (x + y) + z = x + (y + z) (the associative property of the sum);

3) λ(μx) = (λμ)x (associativity with respect to the numerical factor);

4) λ(x + y) = λx + λy (the distributive property with respect to the sum of vectors);

5) (λ + μ)x = λx + μx (the distributive property with respect to the sum of numerical factors);

6) there is a zero vector 0 such that x + 0 = x for any vector x (the special role of the zero vector);

7) for any vector x there is an opposite vector −x such that x + (−x) = 0;

8) 1·x = x for any vector x (the special role of the numerical factor 1).

Definition. The set of n-dimensional vectors with real components, on which the operations of adding vectors and multiplying a vector by a number are defined and satisfy the above eight properties (taken as axioms), is called a vector space.

Dimension and basis of a vector space

Definition. A linear space is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are already linearly dependent. In other words, the dimension of a space is the maximum number of linearly independent vectors contained in it. The number n is called the dimension of the space.

A collection of n linearly independent vectors in an n-dimensional space is called a basis.

7. Eigenvectors and Matrix Eigenvalues. Characteristic equation of the matrix.

Definition. A nonzero vector x is called an eigenvector of a linear operator if there is a number λ such that the operator maps x to λx.

The number λ is called the eigenvalue of the operator (of the matrix A) corresponding to the vector x.

In matrix form this can be written as AX = λX, where X is the column matrix of the coordinates of the vector x, or written out component by component.

Rewriting the system so that the right-hand sides become zero, we obtain (A − λE)X = 0. The resulting homogeneous system always has the zero solution. For a nonzero solution to exist, it is necessary and sufficient that the determinant of the system be equal to zero: |A − λE| = 0.

The determinant |A − λE| is a polynomial of degree n in λ. This polynomial is called the characteristic polynomial of the operator (or of the matrix A), and the equation |A − λE| = 0 is called the characteristic equation of the operator (or of the matrix A).

Example:

Find the eigenvalues ​​and eigenvectors of the linear operator given by the matrix .

Solution: We compose the characteristic equation and solve it, obtaining the eigenvalues of the linear operator.

We then find the eigenvector corresponding to an eigenvalue. To do this, we solve the matrix equation (A − λE)X = 0.

Solving it, we find the relations between the coordinates of the eigenvector.

Setting one of the coordinates to an arbitrary nonzero value, we obtain that the corresponding vectors, for any such value, are eigenvectors of the linear operator with the given eigenvalue.

The eigenvector corresponding to the other eigenvalue is found likewise.
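For a numerical matrix the same computation can be done directly with NumPy; the 2×2 matrix below is hypothetical, not the one from the example above.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig solves the characteristic equation det(A - lambda*E) = 0 internally
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                                  # [5. 2.]
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, np.allclose(A @ v, lam * v))         # check A v = lambda v for each pair
```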

8. A system of n linear equations with n variables (general form). The matrix form of such a system. Solution of a system (definition). Consistent and inconsistent, definite and indefinite systems of linear equations.

Solving a system of linear equations with unknowns

Systems of linear equations are widely used in economics.

The system of linear equations with variables has the form:

,

where the aij and bi are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations.

Brief notation: ().

Definition. A solution of the system is a set of values of the variables such that, when they are substituted, every equation of the system turns into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called definite if it has a unique solution, and indefinite if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

We write the system in matrix form:

Denote:

A is the matrix of coefficients of the variables (the matrix of the system), X is the column matrix of the variables, and B is the column matrix of the free terms.

Since the number of columns of the matrix A is equal to the number of rows of the matrix X, their product AX is a column matrix whose elements are the left-hand sides of the original system. By the definition of equality of matrices, the original system can therefore be written as AX = B.

Cramer's theorem. Let Δ be the determinant of the matrix of the system, and let Δi be the determinant of the matrix obtained from it by replacing the i-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution, determined by Cramer's formulas:

xi = Δi / Δ, i = 1, 2, …, n.

Example. Solve a system of equations using Cramer's formulas

Solution. The determinant of the system matrix is nonzero; therefore, the system has a unique solution. We calculate Δ1, Δ2, Δ3, obtained by replacing the first, second, and third columns of the matrix, respectively, with the column of free terms.

The unknowns are then found according to Cramer's formulas.
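A direct implementation of Cramer's formulas (the system below is hypothetical and chosen so that Δ ≠ 0):

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])
b = np.array([4., 5., 6.])

delta = np.linalg.det(A)
assert abs(delta) > 1e-12, "Cramer's formulas require a nonzero determinant"

x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                        # replace the i-th column with the free terms
    x[i] = np.linalg.det(Ai) / delta

print(x, np.allclose(A @ x, b))         # the solution and a consistency check
```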

9. The Gauss method for solving a system of n linear equations with n variables. The concept of the Jordan-Gauss method.

The Gauss method is the method of successive elimination of variables.

The Gauss method consists in reducing the system of equations, by means of elementary transformations of rows and permutations of columns, to an equivalent system of step (or triangular) form, from which all the variables are found sequentially, starting from the last one.

It is convenient to carry out the Gaussian transformations not on the equations themselves but on the augmented matrix of the system, obtained by appending the column of free terms to the coefficient matrix.

It should be noted that the Gauss method can be used to solve any system of equations of the form AX = B.

Example. Using the Gauss method to solve the system:

Let us write the augmented matrix of the system.

Step 1. Swap the first and second rows so that the leading element becomes equal to 1.

Step 2. Multiply the elements of the first row by (−2) and (−1) and add them to the elements of the second and third rows, respectively, so that zeros appear below the leading element in the first column.

For consistent systems of linear equations, the following theorems are true:

Theorem 1. If the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system has a unique solution.

Theorem 2. If the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has an infinite number of solutions.

Definition. A basis minor of a matrix is any nonzero minor whose order is equal to the rank of the matrix.

Definition. Those unknowns whose coefficients enter the basis minor are called basis (basic) unknowns; the remaining unknowns are called free (non-basic).

To solve the system of equations in the case r < n means to express the basis unknowns in terms of the free unknowns (the determinant composed of the coefficients of the basis unknowns is nonzero).

We express the basic variables in terms of the free ones.

From the second row of the resulting matrix, we express the variable :

From the first line we express:

Together these expressions give the general solution of the system of equations.
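The whole procedure, reduction of the augmented matrix to step form and expression of the basis variables through the free ones, can be reproduced with SymPy's row reduction; the augmented matrix below is hypothetical.

```python
import sympy as sp

# augmented matrix [A | b] of a hypothetical underdetermined system
M = sp.Matrix([[1, 1, 1, 1,  6],
               [0, 1, 2, 3, 10],
               [1, 2, 3, 4, 16]])       # third equation = first + second

rref, pivots = M.rref()                 # reduced row echelon (Jordan-Gauss) form
print(rref)
print("basis (pivot) columns:", pivots) # variables in pivot columns are the basis variables
# the remaining variable columns correspond to the free variables
```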

Consider an arbitrary, not necessarily square, matrix A of size mxn.

Matrix rank.

The concept of the rank of a matrix is related to the concept of linear dependence (independence) of the rows (columns) of the matrix. We consider this concept for rows; for columns everything is similar.

Denote the rows of the matrix A as follows:

e 1 = (a 11, a 12, ..., a 1n); e 2 = (a 21, a 22, ..., a 2n); ...; e m = (a m1, a m2, ..., a mn).

Two rows are equal, e k = e s, if a kj = a sj, j = 1, 2, …, n.

Arithmetic operations on matrix rows (addition, multiplication by a number) are introduced as operations carried out element by element: λe k = (λa k1, λa k2, …, λa kn);

e k + e s = [(a k1 + a s1), (a k2 + a s2), …, (a kn + a sn)].

A row e is called a linear combination of the rows e 1, e 2, …, e k if it is equal to the sum of the products of these rows by arbitrary real numbers:

e = λ 1 e 1 + λ 2 e 2 + … + λ k e k.

The rows e 1, e 2, …, e m are called linearly dependent if there exist real numbers λ 1, λ 2, …, λ m, not all equal to zero, such that the linear combination of these rows is equal to the zero row: λ 1 e 1 + λ 2 e 2 + … + λ m e m = 0, where 0 = (0, 0, …, 0). (1)

If the linear combination (1) is equal to the zero row only when all the coefficients λ i are equal to zero (λ 1 = λ 2 = … = λ m = 0), then the rows e 1, e 2, …, e m are called linearly independent.

Theorem 1. For the rows e 1, e 2, …, e m to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the others.

Proof. Necessity. Let the rows e 1, e 2, …, e m be linearly dependent and let, for definiteness, λ m ≠ 0 in (1). Then

e m = −(λ 1/λ m) e 1 − (λ 2/λ m) e 2 − … − (λ m−1/λ m) e m−1.

Thus, the row e m is a linear combination of the remaining rows. Q.E.D.

Sufficiency. Let one of the rows, for example e m, be a linear combination of the other rows. Then there exist numbers such that e m = λ 1 e 1 + λ 2 e 2 + … + λ m−1 e m−1, which can be rewritten as

λ 1 e 1 + λ 2 e 2 + … + λ m−1 e m−1 + (−1) e m = 0,

where at least one of the coefficients, namely (−1), is nonzero. Hence the rows are linearly dependent. Q.E.D.

Definition. A minor of order k of a matrix A of size m×n is the determinant of order k with elements lying at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. For a 3rd-order matrix the minors of the 1st order are its individual elements, the minors of the 2nd order are the determinants of its 2×2 submatrices, and the single minor of the 3rd order is the determinant of the matrix itself.

A 3rd-order matrix has 9 minors of the 1st order, 9 minors of the 2nd order, and 1 minor of the 3rd order (the determinant of this matrix).

Definition. The rank of a matrix A is the highest order of the nonzero minors of this matrix. Notation: rg A or r(A).

Matrix rank properties.

1) the rank of a matrix A of size m×n does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m, n);

2) r(A) = 0 if and only if all elements of the matrix are equal to 0, i.e. A = 0;

3) for a square matrix A of order n, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix is ​​equal to the number of its non-zero diagonal elements).

4) If the rank of a matrix is ​​r, then the matrix has at least one minor of order r that is not equal to zero, and all minors of higher orders are equal to zero.

For the rank of a matrix, the following relations hold:

1) r(A + B) ≤ r(A) + r(B);

2) r(AB) ≤ min(r(A), r(B));

3) r(A + B) ≥ |r(A) − r(B)|;

4) r(A^T A) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) − n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersection a basis minor lies are called, respectively, basis rows and basis columns.

Theorem 2 (on the basis minor). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basis rows; then, without changing the value of the basis minor, one could subtract this linear combination from that row and obtain a zero row, which contradicts the fact that the basis minor is nonzero. Thus, the basis rows are linearly independent.

Let us prove that any row of the matrix A is a linear combination of the basis rows. Since under rearrangements of rows (columns) a determinant either keeps its value or only changes sign, without loss of generality we may assume that the basis minor is located in the upper left corner of the matrix

A = (a ij), i.e. in the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Consider the determinant of order (r + 1) obtained by bordering the basis minor with the elements of the i-th row and the j-th column; let us show that it is equal to zero.

If j ≤ r or i ≤ r, then this determinant is equal to zero, because it has two identical columns or two identical rows.

If j > r and i > r, then this determinant is a minor of order (r + 1) of the matrix A. Since the rank of the matrix is r, any minor of higher order is equal to 0.

Expanding it by the elements of the last (added) column, we get

a 1j A 1j + a 2j A 2j + … + a rj A rj + a ij A ij = 0, where the last cofactor A ij coincides with the basis minor M r and therefore A ij = M r ≠ 0.

Dividing the last equality by A ij, we can express the element a ij as a linear combination of the elements a 1j, a 2j, …, a rj.

Fixing the value i (i > r), we obtain that for any j (j = 1, 2, …, n) the elements of the i-th row e i are linearly expressed in terms of the elements of the rows e 1, e 2, …, e r, i.e. the i-th row is a linear combination of the basis rows. Q.E.D.

Theorem 3. (necessary and sufficient condition for the determinant to be equal to zero). In order for the nth order determinant D to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof. Necessity. If the determinant D of order n is equal to zero, then the basis minor of its matrix has order r < n; hence at least one row does not pass through the basis minor and, by Theorem 2, is a linear combination of the basis rows.

Thus, one row is a linear combination of the others, and by Theorem 1 the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 one row A i is a linear combination of the other rows. Subtracting this linear combination from the row A i, without changing the value of D, we obtain a zero row. Therefore, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Under elementary transformations, the rank of a matrix does not change.

Proof. As was shown when considering the properties of determinants, under transformations of square matrices their determinants either do not change, or are multiplied by a nonzero number, or change sign. Consequently, the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A)=r(B), then A and B are equivalent: A~B.

Theorem 5. Using elementary transformations, any matrix can be reduced to step form. A matrix is called a step matrix if it has the form

A = (a ij), where a ii ≠ 0, i = 1, 2, …, r; r ≤ k.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of a step matrix is equal to the number of its nonzero rows.

Indeed, the rank of a step matrix is r, because it has a nonzero minor of order r.

Let k rows and k columns be chosen arbitrarily in a matrix A of size (m; n) (k ≤ min(m; n)). The matrix elements located at the intersection of the selected rows and columns form a square matrix of order k, whose determinant is called a minor of order k of the matrix A.

The rank of a matrix is the maximum order r of the nonzero minors of the matrix A, and any nonzero minor of order r is called a basis minor. Notation: rank A = r. If rank A = rank B and the sizes of the matrices A and B are the same, then the matrices A and B are said to be equivalent. Notation: A ~ B.

The main methods for calculating the rank of a matrix are the method of bordering minors and the method of elementary transformations.

The Method of Bordering Minors

The essence of the method of bordering minors is as follows. Let a minor of order k, which is different from zero, be already found in the matrix. Then only those minors of order k + 1 are considered below that contain (i.e., border) the k-th order minor that is different from zero. If they are all equal to zero, then the rank of the matrix is ​​equal to k, otherwise, among the bordering minors of the (k + 1)th order, there is a non-zero one, and the whole procedure is repeated.

Linear independence of rows (columns) of a matrix

The concept of the rank of a matrix is ​​closely related to the concept of the linear independence of its rows (columns).

Matrix rows:

are called linearly dependent if there are numbers λ 1, λ 2, …, λ k, not all equal to zero, such that the following equality holds:

The rows of the matrix A are called linearly independent if the above equality is possible only when all the numbers λ 1 = λ 2 = … = λ k = 0.

If some row (a l) of the matrix A (where (a l) = (a l1, a l2, …, a ln)) can be represented as a linear combination of the remaining rows, the row (a l) is said to be linearly expressed in terms of them.

The notion of a linear combination of columns is defined similarly. The following theorem on the basis minor is valid.

The basis rows and basis columns are linearly independent. Any row (or column) of the matrix A is a linear combination of the basis rows (columns), i.e. of the rows (columns) passing through the basis minor. Thus, the rank of the matrix A, rank A = k, is equal to the maximum number of linearly independent rows (columns) of the matrix A.

That is, the rank of a matrix is the order of the largest square submatrix of the given matrix whose determinant is nonzero. If the original matrix is not square, or if it is square but its determinant is zero, then the rows and columns of smaller-order square submatrices are chosen arbitrarily.

Besides the use of determinants, the rank of a matrix can be found from the number of linearly independent rows or columns of the matrix: the maximum number of linearly independent rows always coincides with the maximum number of linearly independent columns, and this common number is the rank. For example, if a 3×5 matrix has 3 linearly independent rows, then its rank is three.

Examples of finding the rank of a matrix

Find the rank of the matrix by the method of bordering minors

Solution. A second-order minor M 2 of the matrix is different from zero, and a third-order minor M 3 bordering M 2 is also different from zero. However, both fourth-order minors bordering M 3 are equal to zero. Therefore, the rank of the matrix A is 3, and the basis minor is, for example, the minor M 3.

The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, one can bring the matrix to a form in which all its elements except a 11, a 22, …, a rr (r ≤ min(m, n)) are equal to zero, which obviously means that rank A = r. Note that if a matrix of order n has the form of an upper triangular matrix, that is, a matrix in which all elements below the main diagonal are equal to zero, then its determinant is equal to the product of the elements on the main diagonal. This property can be used when calculating the rank of a matrix by the method of elementary transformations: the matrix is reduced to triangular (step) form, and then the rank of the matrix is found to be equal to the number of nonzero elements on the main diagonal.
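A sketch of this method using SymPy's echelon form (the matrix is hypothetical; the rank equals the number of nonzero rows of the echelon form):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [1, 0, 1, 0]])           # second row = 2 * first row

E = A.echelon_form()                    # row echelon form obtained by elementary row operations
nonzero_rows = sum(1 for i in range(E.rows)
                   if any(E[i, j] != 0 for j in range(E.cols)))
print(E)
print(nonzero_rows, A.rank())           # 2 2
```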

Using the method of elementary transformations, find the rank of a matrix

Solution. Denote the i-th row of the matrix A by the symbol α i . At the first stage, we perform elementary transformations

At the second stage, we perform transformations

As a result, we get


