Frobenius normal form




In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.


Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimal number of cyclic subspaces, the primary form decomposes into a maximal number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.




Contents






  • 1 Motivation


  • 2 Example


  • 3 General case and theory


  • 4 A rational normal form generalizing the Jordan normal form


  • 5 See also


  • 6 References


  • 7 External links


    • 7.1 Algorithms







Motivation


When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance, if both are diagonalizable, then one can take the decomposition into eigenspaces (on which the action is as simple as it can get, namely by a scalar), and similarity can then be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, it has several drawbacks as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, and it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally, A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.


But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based instead on a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images under repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups), and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images for as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, a notion analogous to the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace.
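As a concrete illustration, the following SymPy sketch builds such a basis for a given matrix A and starting vector v, and reads off the companion matrix of the minimal polynomial of v. The helper name cyclic_companion is ad hoc, not a standard library function; exact rational arithmetic is assumed.

    import sympy as sp

    def cyclic_companion(A, v):
        """Companion matrix of the minimal polynomial of v under A (illustrative helper)."""
        vecs = [v]
        while True:
            w = A * vecs[-1]
            M = sp.Matrix.hstack(*vecs, w)
            if M.rank() == len(vecs):           # w depends linearly on v, Av, ..., A^(k-1)v
                break
            vecs.append(w)
        k = len(vecs)
        n = M.nullspace()[0]                    # one relation: sum_i n_i A^i v = 0, with n[k] != 0
        coeffs = [-n[i] / n[k] for i in range(k)]   # A^k v = coeffs[0]*v + ... + coeffs[k-1]*A^(k-1)v
        C = sp.zeros(k, k)
        for i in range(1, k):
            C[i, i - 1] = 1                     # ones on the subdiagonal, as in the blocks used below
        for i in range(k):
            C[i, k - 1] = coeffs[i]             # last column holds the dependence coefficients
        return C

    A = sp.Matrix([[2, 1], [0, 2]])
    print(cyclic_companion(A, sp.Matrix([1, 1])))   # Matrix([[0, -4], [1, 4]]): companion of x^2 - 4x + 4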


A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However, it is possible that a cyclic subspace itself admits a decomposition as a direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden, to exclude trivial cyclic subspaces of dimension 0). The resulting list of polynomials is called the invariant factors of (the F[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form.
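The invariant factors can indeed be computed without factoring anything, for instance from the Smith normal form of xI − A over F[x]. One classical (though inefficient) route uses determinantal divisors: d_k is the monic gcd of all k×k minors of xI − A, and the successive quotients d_k/d_{k−1} of degree at least 1 are the invariant factors. A minimal SymPy sketch of this, intended only for small matrices (the helper name invariant_factors is illustrative):

    from functools import reduce
    from itertools import combinations
    import sympy as sp

    x = sp.symbols('x')

    def invariant_factors(A):
        n = A.rows
        M = x * sp.eye(n) - A
        d = [sp.Poly(1, x, domain='QQ')]                     # d_0 = 1 by convention
        for k in range(1, n + 1):
            minors = [sp.Poly(M[list(r), list(c)].det(), x, domain='QQ')
                      for r in combinations(range(n), k)
                      for c in combinations(range(n), k)]
            d.append(reduce(lambda p, q: p.gcd(q), minors))  # d_k = gcd of all k x k minors
        factors = [d[k].exquo(d[k - 1]) for k in range(1, n + 1)]
        return [f.as_expr() for f in factors if f.degree() > 0]   # drop the trivial factors equal to 1

    A = sp.Matrix([[2, 0, 0],
                   [0, 2, 0],
                   [0, 0, 3]])
    print(invariant_factors(A))   # [x - 2, x**2 - 5*x + 6]: (x - 2) divides (x - 2)(x - 3)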



Example


Consider the following matrix A, over Q:


$$A=\begin{pmatrix}
-1& 3&-1& 0&-2& 0& 0&-2\\
-1&-1& 1& 1&-2&-1& 0&-1\\
-2&-6& 4& 3&-8&-4&-2& 1\\
-1& 8&-3&-1& 5& 2& 3&-3\\
 0& 0& 0& 0& 0& 0& 0& 1\\
 0& 0& 0& 0&-1& 0& 0& 0\\
 1& 0& 0& 0& 2& 0& 0& 0\\
 0& 0& 0& 0& 4& 0& 1& 0
\end{pmatrix}.$$

A has minimal polynomial $\mu = X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1$, so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is $\chi = X^{8}-X^{7}-5X^{6}+2X^{5}+10X^{4}+2X^{3}-7X^{2}-5X-1$, which is a multiple of the minimal polynomial by a factor $X^{2}-X-1$. There always exist vectors such that the cyclic subspace they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors have this property, and in this case the first standard basis vector $e_{1}$ does so: the vectors $A^{k}(e_{1})$ for $k=0,1,\ldots,5$ are linearly independent and span a cyclic subspace with minimal polynomial $\mu$. There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by the vectors $v=(3,4,8,0,-1,0,2,-1)^{\top}$ and $w=(5,4,5,9,-1,1,1,-2)^{\top}$ is an example. In fact one has $A\cdot v=w$, so the complementary subspace is a cyclic subspace generated by $v$; it has minimal polynomial $X^{2}-X-1$. Since $\mu$ is the minimal polynomial of the whole space, it is clear that $X^{2}-X-1$ must divide $\mu$ (and it is easily checked that it does), and we have found the invariant factors $X^{2}-X-1$ and $\mu=X^{6}-4X^{4}-2X^{3}+4X^{2}+4X+1$ of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely


$$C=\begin{pmatrix}
0&1&0&0&0&0&0& 0\\
1&1&0&0&0&0&0& 0\\
0&0&0&0&0&0&0&-1\\
0&0&1&0&0&0&0&-4\\
0&0&0&1&0&0&0&-4\\
0&0&0&0&1&0&0& 2\\
0&0&0&0&0&1&0& 4\\
0&0&0&0&0&0&1& 0
\end{pmatrix}.$$
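These computations can be verified with exact arithmetic. A minimal SymPy sketch (variable names are ad hoc) that recomputes the characteristic polynomial, checks that $\chi = \mu\cdot(X^{2}-X-1)$ and $\mu(A)=0$, and confirms that $e_{1}$ spans a six-dimensional cyclic subspace:

    import sympy as sp

    x = sp.symbols('x')
    A = sp.Matrix([
        [-1,  3, -1,  0, -2,  0,  0, -2],
        [-1, -1,  1,  1, -2, -1,  0, -1],
        [-2, -6,  4,  3, -8, -4, -2,  1],
        [-1,  8, -3, -1,  5,  2,  3, -3],
        [ 0,  0,  0,  0,  0,  0,  0,  1],
        [ 0,  0,  0,  0, -1,  0,  0,  0],
        [ 1,  0,  0,  0,  2,  0,  0,  0],
        [ 0,  0,  0,  0,  4,  0,  1,  0]])

    mu  = x**6 - 4*x**4 - 2*x**3 + 4*x**2 + 4*x + 1
    chi = A.charpoly(x).as_expr()                     # characteristic polynomial det(xI - A)
    print(sp.expand(chi - mu * (x**2 - x - 1)))       # expected 0: chi = mu * (x^2 - x - 1)

    mu_of_A = A**6 - 4*A**4 - 2*A**3 + 4*A**2 + 4*A + sp.eye(8)
    print(mu_of_A == sp.zeros(8, 8))                  # expected True: mu annihilates A

    # The powers I, A, ..., A^5 are linearly independent, so mu really is the minimal polynomial.
    powers = sp.Matrix.vstack(*[(A**k).reshape(1, 64) for k in range(6)])
    print(powers.rank())                              # expected 6

    # e1, A e1, ..., A^5 e1 are independent: e1 generates a cyclic subspace of dimension 6.
    e1 = sp.eye(8)[:, 0]
    K = sp.Matrix.hstack(*[A**k * e1 for k in range(6)])
    print(K.rank())                                   # expected 6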

A basis on which this form is attained is formed by the vectors $v,w$ above, followed by $A^{k}(e_{1})$ for $k=0,1,\ldots,5$; explicitly this means that for



$$P=\begin{pmatrix}
 3& 5&1&-1& 0& 0& -4& 0\\
 4& 4&0&-1&-1&-2& -3&-5\\
 8& 5&0&-2&-5&-2&-11&-6\\
 0& 9&0&-1& 3&-2&  0& 0\\
-1&-1&0& 0& 0& 1& -1& 4\\
 0& 1&0& 0& 0& 0& -1& 1\\
 2& 1&0& 1&-1& 0&  2&-6\\
-1&-2&0& 0& 1&-1&  4&-2
\end{pmatrix},$$

one has $A=PCP^{-1}$.
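The stated change of basis can likewise be checked mechanically: the sketch below re-enters A, C and P as displayed above and verifies that P is invertible and that AP = PC, which is equivalent to $A = PCP^{-1}$.

    import sympy as sp

    A = sp.Matrix([
        [-1,  3, -1,  0, -2,  0,  0, -2],
        [-1, -1,  1,  1, -2, -1,  0, -1],
        [-2, -6,  4,  3, -8, -4, -2,  1],
        [-1,  8, -3, -1,  5,  2,  3, -3],
        [ 0,  0,  0,  0,  0,  0,  0,  1],
        [ 0,  0,  0,  0, -1,  0,  0,  0],
        [ 1,  0,  0,  0,  2,  0,  0,  0],
        [ 0,  0,  0,  0,  4,  0,  1,  0]])

    C = sp.Matrix([
        [0, 1, 0, 0, 0, 0, 0,  0],
        [1, 1, 0, 0, 0, 0, 0,  0],
        [0, 0, 0, 0, 0, 0, 0, -1],
        [0, 0, 1, 0, 0, 0, 0, -4],
        [0, 0, 0, 1, 0, 0, 0, -4],
        [0, 0, 0, 0, 1, 0, 0,  2],
        [0, 0, 0, 0, 0, 1, 0,  4],
        [0, 0, 0, 0, 0, 0, 1,  0]])

    P = sp.Matrix([
        [ 3,  5, 1, -1,  0,  0,  -4,  0],
        [ 4,  4, 0, -1, -1, -2,  -3, -5],
        [ 8,  5, 0, -2, -5, -2, -11, -6],
        [ 0,  9, 0, -1,  3, -2,   0,  0],
        [-1, -1, 0,  0,  0,  1,  -1,  4],
        [ 0,  1, 0,  0,  0,  0,  -1,  1],
        [ 2,  1, 0,  1, -1,  0,   2, -6],
        [-1, -2, 0,  0,  1, -1,   4, -2]])

    print(P.det() != 0)      # expected True: P is invertible
    print(A * P == P * C)    # expected True, i.e. A = P C P^(-1)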



General case and theory


Fix a base field F and a finite-dimensional vector space V over F. Given a monic polynomial p(x) ∈ F[x], there is an associated companion matrix C whose characteristic polynomial is p(x).
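Explicitly, with the convention used for the blocks in the example above (ones immediately below the main diagonal; the transposed convention is equally common), the companion matrix of the monic polynomial p(x) = x^n + c_{n−1}x^{n−1} + ⋯ + c_1 x + c_0 is

$$C_p=\begin{pmatrix}
0&0&\cdots&0&-c_{0}\\
1&0&\cdots&0&-c_{1}\\
0&1&\cdots&0&-c_{2}\\
\vdots& &\ddots& &\vdots\\
0&0&\cdots&1&-c_{n-1}
\end{pmatrix},$$

whose characteristic polynomial and minimal polynomial are both equal to p(x).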


Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. Then V (viewed as an F[x]-module with the action of x given by A and extending by linearity) satisfies the F[x]-module isomorphism



V ≅ F[x]/(a1(x)) ⊕ … ⊕ F[x]/(an(x))

where the ai(x) ∈ F[x] may be taken to be non-units, unique as monic polynomials, and can be arranged to satisfy the relation



a1(x) | … | an(x)

where "a | b" is notation for "a divides b".
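For the matrix A of the example above, whose invariant factors were found to be $x^{2}-x-1$ and $\mu$, the theorem specializes to

$$\mathbf{Q}^{8}\;\cong\;\mathbf{Q}[x]/(x^{2}-x-1)\;\oplus\;\mathbf{Q}[x]/(x^{6}-4x^{4}-2x^{3}+4x^{2}+4x+1),$$

with a1(x) = x² − x − 1 dividing a2(x) = μ(x).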


Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[x]-module. Note that any free F[x]-module is infinite-dimensional over F, so that the resulting direct sum decomposition has no free part since V is finite-dimensional. The uniqueness of the invariant factors requires a separate proof that they are determined up to units; then the monic condition ensures that they are uniquely determined. The proof of this latter part is omitted. See [DF] for details.


Given an arbitrary square matrix, the elementary divisors of the form (x − λ)^k used in the construction of the Jordan normal form need not exist over F[x], so the invariant factors ai(x) as given above must be used instead. These all divide the minimal polynomial m(x) = an(x), which (by the Cayley–Hamilton theorem) itself divides the characteristic polynomial p(x) and in fact has the same roots as p(x), not counting multiplicities. Note in particular that the Theorem asserts that the invariant factors have coefficients in F.


As each invariant factor ai(x) is a polynomial in F[x], we may associate a corresponding block matrix Ci which is the companion matrix to ai(x). In particular, each such Ci has its entries in the field F.


Taking the matrix direct sum of these blocks over all the invariant factors yields the rational canonical form of A. Where the minimal polynomial is identical to the characteristic polynomial, the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form.
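Once the invariant factors are known, assembling the Frobenius normal form is purely mechanical, as in the following small SymPy sketch (the helper names are illustrative); applied to the invariant factors found in the example above, it reproduces the matrix C given there.

    import sympy as sp

    def companion(coeffs):
        # Companion matrix (ones on the subdiagonal, coefficients in the last column)
        # of the monic polynomial x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[1]*x + coeffs[0].
        n = len(coeffs)
        M = sp.zeros(n, n)
        for i in range(1, n):
            M[i, i - 1] = 1
        for i in range(n):
            M[i, n - 1] = -coeffs[i]
        return M

    def frobenius_form(invariant_factors):
        # Block diagonal matrix of the companion matrices of the invariant factors,
        # each given as its coefficient list [c_0, ..., c_{n-1}].
        return sp.diag(*[companion(c) for c in invariant_factors])

    # Invariant factors of the example: x^2 - x - 1 and x^6 - 4x^4 - 2x^3 + 4x^2 + 4x + 1
    print(frobenius_form([[-1, -1], [1, 4, 4, -2, -4, 0]]))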



A rational normal form generalizing the Jordan normal form


The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if such a factorization exists over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
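A quick check of that last statement with SymPy: for a diagonal matrix with distinct entries, the all-ones vector is a cyclic vector, and expressing the matrix on the corresponding Krylov basis yields the companion matrix of the characteristic polynomial (a small sketch, not a general algorithm).

    import sympy as sp

    x = sp.symbols('x')
    D = sp.diag(1, 2, 3)                          # distinct eigenvalues
    e = sp.Matrix([1, 1, 1])                      # a cyclic vector for D
    K = sp.Matrix.hstack(e, D * e, D**2 * e)      # Krylov basis e, De, D^2 e (invertible Vandermonde)
    print(K.inv() * D * K)                        # Matrix([[0, 0, 6], [1, 0, -11], [0, 1, 6]]):
                                                  # companion matrix of x^3 - 6x^2 + 11x - 6
    print(sp.factor(D.charpoly(x).as_expr()))     # (x - 1)(x - 2)(x - 3), up to ordering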


There is another way to define a normal form that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form[1] is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the kernel decomposition lemma, the lemme des noyaux[2]), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[x]-modules (as is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form, called a generalized Jordan block, in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form


$$\begin{pmatrix}
C&0&\cdots&0\\
U&C&\cdots&0\\
\vdots&\ddots&\ddots&\vdots\\
0&\cdots&U&C
\end{pmatrix}$$

where C is the companion matrix of the irreducible polynomial P, and U is a matrix whose sole nonzero entry is a 1 in the upper right hand corner. For the case of a linear irreducible factor P = x − λ, these blocks are reduced to single entries C = λ and U = 1, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector v (one that is not annihilated by $P^{k-1}(A)$, where $P^{k}$ is the minimal polynomial of the cyclic module), and taking as basis


$$v,\ A(v),\ A^{2}(v),\ \ldots,\ A^{d-1}(v),\quad P(A)(v),\ A(P(A)(v)),\ \ldots,\ A^{d-1}(P(A)(v)),\quad P^{2}(A)(v),\ \ldots,\quad P^{k-1}(A)(v),\ \ldots,\ A^{d-1}(P^{k-1}(A)(v))$$

where d = deg(P).
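For concreteness, the following sketch (helper names are ad hoc) assembles a generalized Jordan block from the coefficient list of an irreducible monic polynomial P and an exponent k, following the block pattern displayed above; for a linear P it reproduces a transposed Jordan block.

    import sympy as sp

    def companion(coeffs):
        # Companion matrix of x^d + coeffs[d-1]*x^(d-1) + ... + coeffs[0], ones on the subdiagonal.
        d = len(coeffs)
        C = sp.zeros(d, d)
        for i in range(1, d):
            C[i, i - 1] = 1
        for i in range(d):
            C[i, d - 1] = -coeffs[i]
        return C

    def generalized_jordan_block(coeffs, k):
        """Block for P^k: C = companion(P) on the diagonal, U just below it (illustrative helper)."""
        d = len(coeffs)
        C = companion(coeffs)
        J = sp.zeros(d * k, d * k)
        for b in range(k):
            J[b*d:(b+1)*d, b*d:(b+1)*d] = C      # C on the block diagonal
        for b in range(1, k):
            J[b*d, b*d - 1] = 1                  # U: a single 1 in the upper-right corner of the
                                                 # block immediately below the diagonal
        return J

    # P = x^2 + 1 (irreducible over Q), k = 2: a 4x4 generalized Jordan block
    print(generalized_jordan_block([1, 0], 2))

    # A linear factor P = x - lam with k = 3 gives a (transposed) Jordan block
    lam = sp.symbols('lam')
    print(generalized_jordan_block([-lam], 3))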



See also


  • Smith normal form


References


  • [DF] David S. Dummit and Richard M. Foote. Abstract Algebra. 2nd Edition, John Wiley & Sons. pp. 442, 446, 452–458. ISBN 0-471-36857-1.




  1. ^ Phani Bhushan Bhattacharya, Surender Kumar Jain, S. R. Nagpaul, Basic abstract algebra, Theorem 5.4, p.423


  2. ^ Xavier Gourdon, Les maths en tête, Mathématiques pour M', Algèbre, 1998, Ellipses, Th. 1 p. 173




External links


  • Rational Canonical Form (MathWorld)


Algorithms



  • An O(n³) Algorithm for Frobenius Normal Form

  • An Algorithm for the Frobenius Normal Form (pdf)

  • A rational canonical form Algorithm (pdf)



