Projection operators. Linear operators in Euclidean space. Finding the matrix of the operator of projection onto a plane

Linear operator matrix

Let A : V → W be a linear operator, where the spaces V and W are finite-dimensional, dim V = n and dim W = m.

Let us fix arbitrary bases: e1, …, en in V and f1, …, fm in W.

Let us pose the problem: for an arbitrary vector x, compute the coordinates of the vector Ax in the basis f1, …, fm.

Introducing the row of vectors (Ae1, …, Aen), consisting of the images of the basis vectors, we get, for x = x1 e1 + … + xn en,

Ax = A(x1 e1 + … + xn en) = x1 Ae1 + … + xn Aen.

Note that the last equality in this chain holds precisely because of the linearity of the operator A.

We expand the system of vectors Ae1, …, Aen in terms of the basis f1, …, fm:

Ae_j = a_1j·f1 + a_2j·f2 + … + a_mj·fm,   j = 1, …, n,

where the j-th column of the matrix A = (a_ij) is the column of coordinates of the vector Ae_j in the basis f1, …, fm.

Finally we obtain: if X = (x1, …, xn)^T is the column of coordinates of x in the basis e1, …, en, then the column of coordinates of Ax in the basis f1, …, fm equals AX.

So, in order to calculate the column of coordinates of the image of a vector in the chosen basis of the second space, it suffices to multiply the column of coordinates of the vector in the chosen basis of the first space on the left by the matrix whose columns are the coordinate columns of the images of the basis vectors of the first space in the basis of the second space.
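
A minimal numerical sketch of this rule (the matrix and the coordinate column below are illustrative, not taken from the text):

```python
import numpy as np

# Hypothetical operator matrix: its columns are the coordinate columns of the
# images of the first space's basis vectors in the second space's basis.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])          # an operator from a 3-dimensional into a 2-dimensional space

X = np.array([1.0, -1.0, 2.0])           # coordinate column of x in the first basis

Y = A @ X                                # coordinate column of Ax in the second basis
print(Y)                                 # [-1.  5.]
```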

The matrix A is called the matrix of the linear operator in the given pair of bases.

Let us agree to denote the matrix of a linear operator by the same letter as the operator itself, but in upright (non-italic) type. Where necessary we will indicate the pair of bases explicitly, although references to the bases will often be omitted when this does not harm precision.

For a linear transformation (i.e. when W = V and the same basis is used on both sides) we can speak of its matrix in this basis.

As an example, consider the matrix of the projection operator from the example in Section 1.7 (regarding it as a transformation of the space of geometric vectors). As a basis we choose the usual basis i, j, k.

Therefore, the matrix of the projection operator onto the plane in the basis i, j, k is the matrix whose columns are the coordinate columns of the projections of i, j, k.

Note that if we regarded the projection operator as a mapping from V3 into V2, understanding by the latter the space of all geometric vectors lying in the plane, then, taking an appropriate basis in the plane, we would obtain a different matrix, now rectangular (two rows and three columns).
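
Since the concrete plane from Section 1.7 is not reproduced here, the following sketch assumes, purely for illustration, that the projection is onto the Oxy coordinate plane; the two matrices just described then look as follows:

```python
import numpy as np

# Assumption: projection onto the Oxy coordinate plane (illustrative choice).
P3 = np.diag([1.0, 1.0, 0.0])            # projection as a transformation of V3, basis i, j, k
P23 = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])        # the same projection viewed as a map V3 -> V2, basis i, j

v = np.array([2.0, -3.0, 5.0])
print(P3 @ v)     # [ 2. -3.  0.]  -- image written as a vector of V3
print(P23 @ v)    # [ 2. -3.]      -- image written as a vector of the plane
```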

Considering an arbitrary m×n matrix as a linear operator mapping one arithmetic space into another, and choosing the canonical basis in each of these spaces, we find that the matrix of this linear operator in such a pair of bases is the very matrix that defines the operator; in this case the matrix and the linear operator can be identified (just as, when the canonical basis is chosen in an arithmetic vector space, a vector can be identified with the column of its coordinates in this basis).

It would, however, be a gross mistake to identify a vector as such, or a linear operator as such, with its representation in one basis or another (as a column or a matrix). Both the vector and the linear operator are geometric, invariant objects, defined independently of any basis. When we draw a geometric vector as a directed segment, it is defined completely invariantly: we do not care about bases or coordinate systems, and we can operate with it purely geometrically. It is another matter that, for the convenience of calculations with vectors, we build a certain algebraic apparatus by introducing coordinate systems, bases, and the purely algebraic technique of computation associated with them. Figuratively speaking, a vector, as a "bare" geometric object, is "dressed" in different coordinate representations depending on the choice of basis. A person can put on the most varied clothes without changing his essence as a person, yet not every outfit suits every situation (you would not go to the beach in a concert tailcoat), nor can one walk about naked everywhere. In the same way, not every basis is suitable for solving a given problem, just as a purely geometric solution may turn out to be too complicated. We shall see in this course how, in order to solve such a seemingly purely geometric problem as the classification of second-order surfaces, a rather complex and beautiful algebraic theory is built.

Understanding the difference between a geometric object and its representation in one basis or another is fundamental to the perception of linear algebra. And a geometric object does not have to be a geometric vector. Thus, if we take an arithmetic vector, it can be identified with the column of its coordinates in the canonical basis (see the first semester):

But let us introduce another basis (check that it really is a basis!) and, using the transition matrix, recalculate the coordinates of our vector:

We got a completely different column, but it represents the same arithmetic vector in a different basis.
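
Since the original vector and the new basis are not reproduced above, here is an illustrative recalculation with hypothetical data (the transition matrix C has the new basis vectors as its columns):

```python
import numpy as np

x = np.array([3.0, 1.0])                 # coordinates in the canonical basis (illustrative)

# Hypothetical new basis: the columns of C are the new basis vectors written
# in the canonical basis.
C = np.array([[1.0,  1.0],
              [1.0, -1.0]])
assert np.linalg.det(C) != 0             # check that this really is a basis

x_new = np.linalg.solve(C, x)            # coordinates of the same vector in the new basis
print(x_new)                             # [2. 1.]
```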

What has been said about vectors applies equally to linear operators: what the coordinate column is for a vector, the matrix is for a linear operator.

So (let us say it once more): one must clearly distinguish between invariant, geometric objects, such as a vector or a linear operator, and their representations in one basis or another (we are speaking, of course, of finite-dimensional linear spaces).

Let us now take up the problem of how the matrix of a linear operator transforms when we pass from one pair of bases to another.

Let e'1, …, e'n and f'1, …, f'm be a new pair of bases in V and W respectively, and let S and T be the transition matrices from the old bases to the new ones.

Then (denoting by A' the matrix of the operator in the pair of "primed" bases, and by X', Y' the coordinate columns of x and Ax in the primed bases) we get:

Y' = A' X'.

But, on the other hand, since X = S X' and Y = T Y',

T Y' = Y = A X = A S X',   i.e.   Y' = (T^{-1} A S) X',

whence, due to the uniqueness of the expansion of a vector in terms of a basis,

A' = T^{-1} A S.

For a linear transformation (where the same change of basis S is used on both sides) the formula takes the simpler form A' = S^{-1} A S.

The matrices A and A' related in this way are called similar.

It is easy to see that the determinants of similar matrices coincide: det A' = det(S^{-1}) · det A · det S = det A.
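
A quick numerical check of this fact, with arbitrary illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 2.0],
              [1.0, 1.0]])               # any nondegenerate matrix

A_prime = np.linalg.inv(S) @ A @ S       # a matrix similar to A
print(np.linalg.det(A), np.linalg.det(A_prime))   # both equal 6 (up to rounding)
```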

Let us now introduce the concept of the rank of a linear operator.

By definition, this number is equal to the dimension of the image of the given operator: rank A = dim(im A).

Let us prove the following important assertion:

Statement 1.10 The rank of a linear operator coincides with the rank of its matrix, regardless of the choice of bases.

Proof. First of all, we note that the image of the operator A is the linear span of the system Ae1, …, Aen, where e1, …, en is a basis of the space V.

Indeed, Ax = A(x1 e1 + … + xn en) = x1 Ae1 + … + xn Aen, whatever the numbers x1, …, xn, and this means that the image is exactly the indicated linear span.

The dimension of the linear span, as we know (see Section 1.2), coincides with the rank of the corresponding system of vectors.

We proved earlier (Sec. 1.3) that if a system of vectors is expanded in some basis, so that their coordinate columns form a matrix, then linear independence of the system implies linear independence of the columns of this matrix. A stronger assertion can also be proved (we omit the proof): the rank of the system of vectors equals the rank of this matrix; moreover, the result does not depend on the choice of basis, since multiplying a matrix by a nondegenerate transition matrix does not change its rank.

Since the columns of the operator matrix A are precisely the coordinate columns of the vectors Ae1, …, Aen, the rank of the matrix equals the rank of this system, i.e. dim(im A). And since the matrices of the operator in different pairs of bases are related by A' = T^{-1} A S, and the ranks of such matrices obviously coincide, this result does not depend on the choice of a particular pair of bases.

The assertion has been proven.

For a linear transformation of a finite-dimensional linear space we can also introduce the notion of the determinant of this transformation as the determinant of its matrix in an arbitrarily fixed basis, because the matrices of a linear transformation in different bases are similar and therefore have the same determinant.

Using the concept of the matrix of a linear operator, we prove the following important relation: for any linear transformation A of an n-dimensional linear space,

dim(ker A) + rank A = n.

Let us choose an arbitrary basis in the space. Then the kernel consists of those and only those vectors whose coordinate columns are solutions of the homogeneous system

A X = 0,   (1)

namely, a vector x belongs to the kernel if and only if its coordinate column X is a solution of system (1).

In other words, there is an isomorphism of the kernel onto the space of solutions of system (1), so the dimensions of these spaces coincide. But the dimension of the solution space of system (1) is, as we already know, n − r, where r is the rank of the matrix A. And we have just proved that r coincides with the rank of the operator, which gives the required relation.
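
An illustrative check of the relation dim(ker A) + rank A = n for an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

n = A.shape[1]
r = np.linalg.matrix_rank(A)

# The dimension of the kernel equals the number of (near-)zero singular values.
s = np.linalg.svd(A, compute_uv=False)
k = int(np.sum(s < 1e-10))

print(r, k, r + k == n)                  # 2 1 True
```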

Let a linear operator A act in the Euclidean space E^n, mapping this space into itself.

Let us introduce a definition: the operator A* is called the adjoint of the operator A if for any two vectors x, y from E^n the following equality of scalar products holds:

(Ax, y) = (x, A*y)

One more definition: a linear operator is called self-adjoint if it is equal to its adjoint operator, i.e. the following equality holds:

(Ax, y) = (x, Ay)

or, in particular, (Ax, x) = (x, Ax).

A self-adjoint operator has a number of properties. Let us mention some of them (a numerical illustration follows the list):

    The eigenvalues of a self-adjoint operator are real (without proof);

    The eigenvectors of a self-adjoint operator corresponding to distinct eigenvalues are orthogonal. Indeed, if x1 and x2 are eigenvectors and λ1, λ2 are their eigenvalues, then Ax1 = λ1·x1, Ax2 = λ2·x2 and (Ax1, x2) = (x1, Ax2), i.e. λ1·(x1, x2) = λ2·(x1, x2). Since λ1 and λ2 are distinct, it follows that (x1, x2) = 0, which was to be proved.

    In Euclidean space there exists an orthonormal basis of eigenvectors of the self-adjoint operator A; that is, the matrix of a self-adjoint operator can always be reduced to diagonal form in some orthonormal basis composed of its eigenvectors.
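
A numerical illustration of these properties for an arbitrarily chosen symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric (self-adjoint) matrix, illustrative

vals, vecs = np.linalg.eigh(A)           # eigh is intended for symmetric/Hermitian matrices
print(vals)                              # the eigenvalues are real
print(np.round(vecs.T @ vecs, 10))       # the eigenvectors are orthonormal (identity matrix)
print(np.round(vecs.T @ A @ vecs, 10))   # A is diagonal in this orthonormal eigenbasis
```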

Another definition: a self-adjoint operator acting in a Euclidean space is also called a symmetric operator. Consider the matrix of a symmetric operator. Let us prove the following statement: for an operator to be symmetric, it is necessary and sufficient that its matrix in an orthonormal basis be symmetric.

Let A be a symmetric operator, i.e.:

(Ax, y) = (x, Ay)

If A is the matrix of the operator A, and X and Y are the coordinate columns of the vectors x and y in some orthonormal basis, then we can write:

(x, y) = X^T Y = Y^T X,

and therefore

(Ax, y) = (AX)^T Y = X^T A^T Y,
(x, Ay) = X^T (AY) = X^T A Y,

i.e. X^T A^T Y = X^T A Y. For arbitrary columns X, Y this equality is possible only when A^T = A, which means that the matrix A is symmetric.
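
A numerical check (with illustrative matrices) that the equality (Ax, y) = (x, Ay) holds for a symmetric matrix and fails in general for a non-symmetric one:

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  5.0]])             # symmetric
B = np.array([[0.0, 1.0],
              [2.0, 0.0]])               # not symmetric

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(np.dot(A @ x, y), np.dot(x, A @ y))   # -9.0 -9.0  -- equal
print(np.dot(B @ x, y), np.dot(x, B @ y))   #  4.0 11.0  -- different
```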

Let us consider some examples of linear operators.

The projection operator. Suppose we need to find the matrix of the linear operator that projects three-dimensional space onto the coordinate axis e1, in the basis e1, e2, e3. The matrix of a linear operator is the matrix whose columns contain the images of the basis vectors e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1). These images are obviously: Ae1 = (1,0,0)

Ae2 = (0,0,0)

Ae3 = (0,0,0)

Therefore, in the basis e1, e2, e3 the matrix of the desired linear operator has the form:

A = ( 1 0 0
      0 0 0
      0 0 0 )

Let us find the kernel of this operator. By definition, the kernel is the set of vectors X for which AX = 0, i.e. the set of vectors with x1 = 0 and arbitrary x2, x3.

That is, the kernel of the operator is the set of vectors lying in the plane of e2, e3. The dimension of the kernel is n − rank A = 2.

The set of images of this operator is obviously the set of vectors collinear with e1. The dimension of the image is equal to the rank of the linear operator and equals 1, which is less than the dimension of the preimage space; i.e. the operator A is degenerate, and the matrix A is degenerate as well.
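
The same example written out numerically (matrix, image and kernel):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # projection onto the axis e1

print(np.linalg.matrix_rank(A))          # 1  -- dimension of the image
print(A @ np.array([5.0, 2.0, -3.0]))    # [5. 0. 0.]  -- every image is collinear with e1
print(A @ np.array([0.0, 2.0, -3.0]))    # [0. 0. 0.]  -- vectors with x1 = 0 form the kernel
```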

Another example: find the matrix of the linear operator that realizes, in the space V3 (basis i, j, k), the linear transformation of symmetry with respect to the origin of coordinates.

We have: Ai = −i, Aj = −j, Ak = −k.

That is, the desired matrix is

A = ( −1  0  0
       0 −1  0
       0  0 −1 )

Consider the linear transformation of symmetry about the plane y = x. Then:

Ai = j = (0,1,0)

Aj = i = (1,0,0)

Ak = k = (0,0,1)

The operator matrix will be:

A = ( 0 1 0
      1 0 0
      0 0 1 )

Another example is the already familiar matrix that relates the coordinates of a vector when the coordinate axes are rotated. Let us call the operator that performs the rotation the rotation operator. Suppose the rotation is through an angle φ:

Ai = cos φ · i + sin φ · j

Aj = −sin φ · i + cos φ · j

The matrix of the rotation operator (its columns are the coordinates of Ai and Aj) is:

A = ( cos φ  −sin φ
      sin φ   cos φ )

Recall the formulas for transforming the coordinates of a point when the basis is changed, i.e. for replacing coordinates in the plane under a change of basis:

x = x* cos φ − y* sin φ,   y = x* sin φ + y* cos φ.

These formulas can be interpreted in two ways. Previously we considered them so that the point stays in place while the coordinate system rotates. But they can also be interpreted so that the coordinate system remains the same while the point moves from position M* to position M; the coordinates of the points M and M* are then given in the same coordinate system.

All of the above allows us to approach a problem that programmers dealing with computer graphics have to solve. Suppose we need to rotate some plane figure on the computer screen (for example, a triangle) about the point O' with coordinates (a, b) through some angle φ. The rotation of coordinates is described by the formulas given above.

Parallel translation is given by the relations x' = x + a, y' = y + b.

In order to solve such a problem, an artificial trick is usually used: the so-called "homogeneous" coordinates of a point of the XOY plane are introduced: (x, y, 1). Then the matrix that performs the parallel translation can be written as:

T(a, b) = ( 1 0 a
            0 1 b
            0 0 1 )

Indeed:

T(a, b) · (x, y, 1)^T = (x + a, y + b, 1)^T.

And the rotation matrix:

R(φ) = ( cos φ  −sin φ  0
         sin φ   cos φ  0
         0       0      1 )

The problem under consideration can be solved in three steps:

1st step: parallel translation by the vector (−a, −b) to bring the center of rotation to the origin: the matrix T(−a, −b).

2nd step: rotation through the angle φ: the matrix R(φ).

3rd step: parallel translation by the vector (a, b) to return the center of rotation to its original position: the matrix T(a, b).

The desired linear transformation in matrix form is the product

M = T(a, b) · R(φ) · T(−a, −b).   (**)
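
A minimal sketch of the three-step construction (**) in homogeneous coordinates; the center of rotation, the angle and the point below are illustrative:

```python
import numpy as np

def translation(a, b):
    return np.array([[1.0, 0.0, a],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

def rotation(phi):
    return np.array([[np.cos(phi), -np.sin(phi), 0.0],
                     [np.sin(phi),  np.cos(phi), 0.0],
                     [0.0,          0.0,         1.0]])

a, b, phi = 2.0, 1.0, np.pi / 2
M = translation(a, b) @ rotation(phi) @ translation(-a, -b)   # formula (**)

p = np.array([3.0, 1.0, 1.0])            # homogeneous coordinates of a point
print(np.round(M @ p, 10))               # [2. 2. 1.] -- the point rotated about (2, 1) by 90 degrees
```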

1. Projection operators and ring idempotents

Let the vector space V be the direct sum of the subspaces W and L: V = W ⊕ L. By the definition of a direct sum, this means that each vector v ∈ V is uniquely representable as v = w + l, w ∈ W, l ∈ L.

Definition 1. If V = W ⊕ L, so that v = w + l, then the map P that associates with each vector v ∈ V its component (projection) w ∈ W is called the projector of the space V onto the subspace W. It is also called the projection operator.

Obviously, if w ∈ W, then P(w) = w. It follows that P has the remarkable property P^2 = P.

Definition 2. An element e of a ring K is called an idempotent (that is, similar to the unit) if e^2 = e.

There are only two idempotents in the ring of integers: 1 and 0. The situation is different in the ring of matrices: for example, any diagonal matrix of zeros and ones, such as diag(1, 0), is an idempotent. Matrices of projection operators are also idempotents. The corresponding operators are called idempotent operators.
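
A small numerical check that such matrices are indeed idempotent (the second matrix, projecting onto the line spanned by (1, 1), is an illustrative choice):

```python
import numpy as np

P1 = np.diag([1.0, 0.0])                 # projection onto the first coordinate axis
P2 = np.array([[0.5, 0.5],
               [0.5, 0.5]])              # projection onto the line spanned by (1, 1)

for P in (P1, P2):
    print(np.allclose(P @ P, P))         # True, True
```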

Consider now the direct sum of n subspaces of the space V:

V = W1 ⊕ W2 ⊕ … ⊕ Wn.

Then, similarly to the case of a direct sum of two subspaces, we obtain n projection operators P1, …, Pn. They have the property Pi·Pj = Pj·Pi = 0 for i ≠ j.

Definition 3. Idempotents e_i and e_j (i ≠ j) are called orthogonal if e_i·e_j = e_j·e_i = 0. Thus the operators Pi and Pj above are orthogonal idempotents.

From the fact that Iv = v for every vector v (I being the identity operator) and from the rule for adding linear operators it follows that

I = P1 + P2 + … + Pn.

This decomposition is called the decomposition of the identity into a sum of idempotents.

Definition 4. An idempotent e is said to be minimal if it cannot be represented as a sum of idempotents other than e and 0.

2. Canonical decomposition of a representation

Definition 5. The canonical decomposition of a representation T(g) is its decomposition of the form T(g) = n_1 T_1(g) + n_2 T_2(g) + … + n_t T_t(g), in which equivalent irreducible representations T_i(g) are grouped together, and n_i is the multiplicity with which the irreducible representation T_i(g) occurs in the decomposition of T(g).

Theorem 1. The canonical decomposition of a representation is obtained using projection operators of the form

P_i = (m_i / |G|) Σ_{g∈G} χ_i(g)* T(g),   i = 1, 2, …, t,   (31)

where |G| is the order of the group G; m_i are the degrees of the representations T_i(g), i = 1, 2, …, t; and χ_i(g), i = 1, 2, …, t, are the characters of the irreducible representations T_i(g). The multiplicity n_i is determined by the formula

n_i = (1/|G|) Σ_{g∈G} χ_i(g)* χ(g),   (32)

where χ(g) is the character of the representation T(g).
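
A sketch of formula (31) applied to a small illustrative example (not taken from the text above): the group G = {e, g} of order 2 acting on R^2 by permuting the coordinates. Both of its irreducible representations are one-dimensional, so m_i = 1:

```python
import numpy as np

T = {"e": np.eye(2),
     "g": np.array([[0.0, 1.0],
                    [1.0, 0.0]])}        # the representation T(g)

# Characters of the two irreducible (one-dimensional, real) representations.
chi = {"trivial": {"e": 1.0, "g": 1.0},
       "sign":    {"e": 1.0, "g": -1.0}}

order = len(T)                           # |G| = 2
for name, ch in chi.items():
    # P_i = (m_i / |G|) * sum over g of chi_i(g)* T(g), with m_i = 1 here
    P = sum(ch[g] * T[g] for g in T) / order
    print(name, "\n", P, "\n idempotent:", np.allclose(P @ P, P))
```

The two resulting matrices project onto the symmetric subspace spanned by (1, 1) and the antisymmetric subspace spanned by (1, −1) respectively.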

3. Projection operators associated with matrices of irreducible group representations

Formulas (31) can only be used to obtain the canonical decomposition of the representation. In the general case, it is necessary to use matrices of irreducible representations, which allow one to construct the corresponding projection operators.

Theorem 2. Let t^r_jk(g) be the matrix elements of an irreducible representation T_r(g) of the group G. The operator of the form

P^r_jj = (m_r / |G|) Σ_{g∈G} t^r_jj(g)* T(g)   (33)

is a projection operator and is called the Wigner operator. In expression (33), m_r is the dimension of the representation T_r(g).

4. Decomposition of a representation into a direct sum of irreducible representations using the Wigner operator

Denote by M the module associated with the representation T. Let the irreducible representations T_1, T_2, …, T_t from the canonical decomposition of the representation correspond, by the method described earlier (see § 4), to the irreducible submodules M_1, M_2, …, M_t. The decomposition of the module M of the form

M = n_1 M_1 ⊕ n_2 M_2 ⊕ … ⊕ n_t M_t

is called the canonical decomposition of the module M. Denote n_i M_i = L_i, so that

M = L_1 ⊕ L_2 ⊕ … ⊕ L_t.

Denote the irreducible submodules of the modules L_i by

M_i^(1), M_i^(2), …, M_i^(n_i);   i = 1, 2, …, t.   (36)

We need to find these modules.

Let us assume that the problem has been solved. Then in each of the modules M_i^(s) (s = 1, 2, …, n_i) an orthonormal basis e_i1^(s), e_i2^(s), …, e_i,m_i^(s) has been found in which the operator is represented by the matrix T_i(g) of the irreducible representation; the action of the operator on the basis (according to the rule from § 3) is given by the formula

T(g) e_ij^(s) = Σ_k t^i_kj(g) e_ik^(s),   j = 1, 2, …, m_i.   (37)

In this expression m_i is the dimension of the irreducible representation T_i (i = 1, 2, …, t), and e_ij^(s) are the basis elements with number j from the irreducible submodule M_i^(s). Let us now arrange the elements of the basis of L_i, for fixed i, as follows:

e_i1^(1)    e_i2^(1)    …   e_i,m_i^(1)
e_i1^(2)    e_i2^(2)    …   e_i,m_i^(2)
…
e_i1^(n_i)  e_i2^(n_i)  …   e_i,m_i^(n_i)   (38)

The rows of expression (38) are the bases of the modules M_i^(1), M_i^(2), …, M_i^(n_i). If i runs from 1 to t, we obtain the desired basis of the entire module M, consisting of m_1 n_1 + m_2 n_2 + … + m_t n_t elements.

Consider now the operator

P^i_jj = (m_i / |G|) Σ_{g∈G} t^i_jj(g)* T(g)   (39)

acting in the module M (j is fixed). According to Theorem 2, P^i_jj is a projection operator. Therefore, this operator leaves unchanged all the basis elements e_ij^(s) (s = 1, 2, …, n_i) located in the j-th column of expression (38) and sends all the other basis vectors to zero. Denote by M_ij the vector space spanned by the orthogonal system of vectors of the j-th column of expression (38). Then we can say that P^i_jj is the projection operator onto the space M_ij. The operator P^i_jj is known, since the diagonal elements of the matrices of the irreducible representations of the group are known, as is the operator T(g).

Now we can solve our problem.

We choose n_i arbitrary vectors in M and act on them with the projection operator P^i_jj. The resulting vectors lie in the space M_ij and are linearly independent (for a generic choice), though not necessarily orthogonal or normalized. Let us orthonormalize the resulting system of vectors according to the rule from § 2 and denote the resulting system by e_ij^(s), in accordance with the notation adopted above under the assumption that the problem is solved. As already indicated, here j is fixed and s = 1, 2, …, n_i. It remains to find the other basis elements e_if^(s) (f = 1, 2, …, j−1, j+1, …, m_i) of the module L_i of dimension n_i·m_i. Introduce the operator

P^i_fj = (m_i / |G|) Σ_{g∈G} t^i_fj(g)* T(g).   (40)

It follows from the orthogonality relations for the matrices of irreducible representations that this operator makes it possible to obtain e_if^(s) by the formula

e_if^(s) = P^i_fj e_ij^(s),   s = 1, 2, …, n_i;   i = 1, 2, …, t.   (41)

All of the above can be expressed in the form of the following algorithm.

In order to find a basis of the module M consisting of elements that transform according to the irreducible representations T_i contained in the representation T associated with the module M, it is necessary to:

Using formula (32), find the dimensions of the subspaces M_ij corresponding to the j-th component of the irreducible representation T_i.

Using the projection operator (39), find all the subspaces M_ij.

In each subspace M_ij choose an arbitrary orthonormal basis.

Using formula (41), find all the basis elements that transform according to the remaining components of the irreducible representation T_i.

The Dirac bra and ket vectors are remarkable in that they can be used to write various kinds of products.

The product of a bra vector and a ket vector is called the inner, or scalar, product. In fact, it is a standard matrix product by the row-by-column rule. Its result is a complex number.

The product of a ket vector and another ket vector no longer gives a number, but another ket vector. It is also represented as a column vector, but with the number of components equal to the product of the dimensions of the original vectors. Such a product is called a tensor product or a Kronecker product.

The same is true for the product of two bra vectors. We get a large row vector.

The last option is to multiply a ket vector by a bra vector, that is, to multiply a column by a row. Such a product is also called a tensor, or outer, product. The result is a matrix, that is, an operator.
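
All four kinds of products are easy to write out numerically (the two-dimensional vectors below are illustrative):

```python
import numpy as np

ket_a = np.array([[1.0], [2.0]])         # ket: a column vector
ket_b = np.array([[0.0], [1.0]])
bra_a = ket_a.conj().T                   # bra: the conjugate-transposed row vector

print(bra_a @ ket_b)                     # inner (scalar) product: a 1x1 matrix, i.e. a number
print(np.kron(ket_a, ket_b))             # tensor (Kronecker) product of two kets: a 4-component ket
print(np.kron(bra_a, bra_a))             # tensor product of two bras: a 4-component bra
print(ket_a @ bra_a)                     # outer product ket*bra: a 2x2 matrix, i.e. an operator
```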

Let's consider an example of using such operators.

Let us take some arbitrary Hermitian operator A. According to the postulates, some observable quantity corresponds to it. The eigenvectors of a Hermitian operator form a basis, so the most general state vector can be expanded in this basis, that is, represented as a sum of basis vectors with certain complex coefficients. This fact is known as the principle of superposition. Let us rewrite the expansion using the summation sign: |ψ⟩ = Σ_i c_i |a_i⟩.

But the coefficients in the expansion of a vector in terms of the basis vectors are the probability amplitudes, that is, the scalar products of the state vector with the corresponding basis vectors: c_i = ⟨a_i|ψ⟩. Let us write this amplitude to the right of the vector. The expression under the summation sign, |a_i⟩⟨a_i|ψ⟩, can be viewed as the multiplication of a ket vector by a complex number (the probability amplitude); on the other hand, it can be viewed as the product of the matrix |a_i⟩⟨a_i|, obtained by multiplying a ket vector by a bra vector, and the original ket vector. The ket vector |ψ⟩ can be taken outside the summation sign, and then the same vector ψ appears on both sides of the equality. This means that the whole sum does nothing to the vector and is therefore equal to the identity matrix: Σ_i |a_i⟩⟨a_i| = I.

This formula is very useful when manipulating expressions with products of bra and ket vectors, since such an identity can be inserted anywhere in a product.
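
A small illustration of inserting such an identity into a product: the inner product ⟨φ|ψ⟩ computed directly and through the resolution of the identity over the standard basis (the vectors are illustrative):

```python
import numpy as np

basis = np.eye(3)                        # rows are the orthonormal basis kets |a_i>
phi = np.array([1.0, 1j, 0.0])
psi = np.array([2.0, 0.0, 1.0])

direct = np.vdot(phi, psi)               # <phi|psi>
via_identity = sum(np.vdot(phi, a) * np.vdot(a, psi) for a in basis)
print(direct, via_identity)              # the same complex number, (2+0j)
```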

Let us see what the matrices entering this sum are: each is obtained as the tensor product of a basis ket vector with its Hermitian conjugate. Again, for clarity, let us draw an analogy with ordinary vectors in three-dimensional space.

Let us choose unit basis vectors e_x, e_y and e_z, coinciding in direction with the coordinate axes. The tensor product of the vector e_x with its conjugate is represented by the matrix e_x e_x^T, which has a single 1 in the upper left corner and zeros elsewhere. Take an arbitrary vector v. What happens when this matrix is multiplied by the vector? It simply zeroes out all components of the vector except the x-component. The result is a vector directed along the x-axis, that is, the projection of the original vector onto the basis vector e_x. It turns out that our matrix is nothing other than a projection operator.

The remaining two projection operators, onto the basis vectors e_y and e_z, are represented by similar matrices and perform a similar function: they set all but one component of the vector to zero.

What happens when projection operators are summed? Let us add, for example, the operators P_x and P_y. Such a matrix zeroes out only the z-component of the vector, so the resulting vector always lies in the x-y plane. That is, we obtain the projection operator onto the x-y plane.

Now it is clear why the sum of all the projection operators onto the basis vectors is equal to the identity matrix. In our example we obtain the projection of a three-dimensional vector onto the three-dimensional space itself; the identity matrix is, in essence, the operator of projection onto the whole space, which leaves every vector unchanged.
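
The three-dimensional picture described above, written out with matrices (v is an illustrative vector):

```python
import numpy as np

ex = np.array([[1.0], [0.0], [0.0]])
ey = np.array([[0.0], [1.0], [0.0]])
ez = np.array([[0.0], [0.0], [1.0]])

Px, Py, Pz = (e @ e.T for e in (ex, ey, ez))

v = np.array([[1.0], [2.0], [3.0]])
print((Px + Py) @ v)                     # the z-component is zeroed: projection onto the x-y plane
print(Px + Py + Pz)                      # the identity matrix: projection onto the whole space
```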

It turns out that specifying a projection operator is equivalent to specifying a subspace of the original space. In the case of three-dimensional Euclidean space considered here, this can be a one-dimensional line defined by a single vector or a two-dimensional plane defined by a pair of vectors.

Returning to quantum mechanics with its state vectors in Hilbert space, we can say that a projection operator singles out a subspace and projects the state vector onto this subspace of the Hilbert space.

Let us present the main properties of projection operators.

  1. Successive application of the same projection operator is equivalent to a single application. This property is usually written as P^2 = P. Indeed, if the first operator has projected a vector into a subspace, the second operator does nothing more with it: the vector is already in that subspace.
  2. Projection operators are Hermitian operators; accordingly, in quantum mechanics they correspond to observable quantities.
  3. The eigenvalues of projection operators, whatever their dimension, are only the numbers one and zero: either the vector is in the subspace or it is not. Because of this binarity, the observable described by a projection operator can be formulated as a question whose answer is "yes" or "no". For example: is the spin of the first electron in the singlet state directed up along the z-axis? Such a question can be put into correspondence with a projection operator, and quantum mechanics allows one to calculate the probabilities of the answers "yes" and "no".

In what follows, we will talk more about projection operators.


