Description:
- We write A ∈ R^(m×n): m rows, n columns
- A collection of vectors in rows or columns
- A = [a1 a2 ... an] (columns a_i) or A = [a1^T; a2^T; ...; am^T] (stacked row vectors a_i^T)
- Always think of a matrix as a transformation acting on vectors
Matrix Algebra:
- Transpose:
- Exchange rows and columns
- (A^T)_ij = A_ji
- every element at position (i, j) moves to position (j, i)
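The entrywise definition can be checked directly; a small NumPy sketch (NumPy and the example values are my own illustration, not from the notes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # A is 2x3
At = A.T                        # transpose is 3x2

# (A^T)_ij == A_ji for every entry
same_entries = all(At[i, j] == A[j, i]
                   for i in range(3) for j in range(2))
```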
- Matrix Addition/Subtraction:
- Must have the same size
- Match element by element
- Resulting in a new matrix of the same size
- Multiplication:
- Multiply with a scalar: k [a b; c d] = [ka kb; kc kd]
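Addition and scalar multiplication both act element by element; a quick NumPy check (example matrices are my own):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B          # element-by-element sum, same 2x2 shape
kA = 3 * A         # the scalar multiplies every entry
```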
- Matrix-Vector product:
- Between an m×n matrix and an n-vector x
- Returns a vector, like a linear map
- A vector transformed by the matrix
- Denoted Ax: the m-vector with i-th component (Ax)_i = Σ_{j=1}^n A_ij x_j, i = 1, ..., m
- y1 = A11 x1 + A12 x2
- y2 = A21 x1 + A22 x2
- y3 = A31 x1 + A32 x2
- If the columns of A are the vectors a_i, then Ax can be interpreted as a linear combination of the columns: Ax = Σ_{i=1}^n x_i a_i
- Then we have Ax = y = x1 [A11; A21; A31] + x2 [A12; A22; A32]
- Think of the vectors within the matrix as the transformation applied to the input vector
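The component formula and the column interpretation give the same vector; a NumPy sketch (matrix and x are my own example values):

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])          # 3x2
x = np.array([10, 100])

y = A @ x                       # component formula: y_i = sum_j A_ij x_j

# column interpretation: the same y as a linear combination of A's columns
y_cols = x[0] * A[:, 0] + x[1] * A[:, 1]
```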
- Vector-Matrix product:
- First transpose the vector c ∈ R^m to get (c^T A)_j = Σ_{i=1}^m A_ij c_i, j = 1, ..., n
- Matrix-Matrix product:
- Defined for A ∈ R^(m×n) and B ∈ R^(n×p) (inner dimensions must match)
- Composes the linear maps of A and B into a single matrix
- AB is the m×p matrix with (i, j) element (AB)_ij = Σ_{k=1}^n A_ik B_kj
- i.e., row i of A dotted with column j of B
- Column-wise interpretation:
- AB=A(b1…bp)=(Ab1…Abp)
- which is matrix-vector product
- Row-wise interpretation:
- AB = [a1^T; ...; am^T] B = [a1^T B; ...; am^T B]
- which is vector-matrix product
- In general, AB ≠ BA (matrix multiplication does not commute)
- A^m A^n = A^n A^m = A^(m+n)
- (AB)^T = B^T A^T
- Associativity: (AB)CD = A(BC)D = AB(CD)
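The entry formula, the column-wise view, and non-commutativity can all be seen on one small example (NumPy and these matrices are my own illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

AB = A @ B

# entry formula: (AB)_ij = sum_k A_ik B_kj
entry_01 = sum(A[0, k] * B[k, 1] for k in range(2))

# column-wise: the j-th column of AB is A @ (j-th column of B)
col0 = A @ B[:, 0]

# non-commutativity: AB != BA for these matrices
commutes = np.array_equal(A @ B, B @ A)
```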
- Division: not defined for matrices; the closest analogue is multiplying by an inverse (see Matrix inverse below)
Some classes of matrices:
- Square Matrix
- Low-rank matrix:
- Rank is the number of linearly independent rows or columns of the matrix
- Rank one: a dyad, A = u v^T for some vectors u, v
- Zero matrix:
- Ones matrix
Matrix as linear map:
- Vector function
- A map f : R^n → R^m can be represented by a matrix A ∈ R^(m×n), sending an input vector x ∈ R^n to an output vector y ∈ R^m
- y = Ax where A ∈ R^(m×n)
- Affine maps are linear maps plus a constant vector:
- f(x) = Ax + b for some A ∈ R^(m×n), b ∈ R^m
Matrix Range and Rank:
- For A ∈ R^(m×n), write A = [a1 a2 ... an], where each column ai ∈ R^m
- the span of the columns ai is the column space
- the row space is the column space of A^T
- The matrix transformation sends inputs into a new space where the outputs live
- the range of a matrix is the set of all possible outputs Ax
- The range of A: R(A) = {Ax : x ∈ R^n}
- ex: A = [1 4; 2 5; 3 6], so Ax = [x1 + 4x2; 2x1 + 5x2; 3x1 + 6x2] = x1 [1; 2; 3] + x2 [4; 5; 6]
- x1 and x2 form a linear combination of the columns
- so R(A) is a subspace, spanned by the columns ai
- ie, Ax∈R(A)
- rank(A), the rank of A, is the dimension of R(A)
- the rank is the number of linearly independent columns of A
- so the output of the transformation lives in a space of dimension rank(A)
- Full rank means rank(A) = min(m, n): as many linearly independent columns (or rows) as possible
- 1≤rank(A)≤min(m,n)
- as, for example:
- dim of column space of A: dim span{[1; 2; 3], [4; 5; 6]} = 2
- dim of row space of A (column space of A^T): dim span{[1; 4], [2; 5], [3; 6]} = 2
- The dimension of the range is capped by min(m, n)
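The rank of the 3×2 example above can be computed directly (NumPy is my own addition, the matrix is the one from the notes):

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])

r = np.linalg.matrix_rank(A)        # dimension of the column space
r_T = np.linalg.matrix_rank(A.T)    # row rank equals column rank
```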
Nullspace of matrix:
- The nullspace of a matrix A ∈ R^(m×n) is a subspace
- The set of input vectors mapped to zero: N(A) = {x ∈ R^n : Ax = 0}
- As the matrix transforms the space, some input vectors are sent to the origin
- example: if A collapses a 2D plane in 3D space to a point at the origin, that plane is the nullspace
- R(A^T) and N(A) are mutually orthogonal subspaces
- ∀x ∈ R(A^T), y ∈ N(A): x^T y = 0
- i.e., their dot product is 0
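A sketch of both facts, computing a nullspace basis from the SVD (NumPy and the rank-1 example matrix are my own choices):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, so N(A) is 2-dimensional

# nullspace basis: right singular vectors whose singular value is ~0
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                 # shape (2, 3)

# every nullspace vector is mapped to zero
maps_to_zero = np.allclose(A @ null_basis.T, 0)

# any vector in R(A^T) is orthogonal to the nullspace
x = A.T @ np.array([1., 1.])           # an element of the row space
orthogonal = np.allclose(null_basis @ x, 0)
```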
Trace:
- The trace of a square matrix A ∈ R^(n×n) is the sum of its diagonal elements
- The trace is a linear function of the entries of the matrix
- trace(A)=trace(A⊺) for any square matrix A
- trace(AB)=trace(BA) for any matrices A∈Rn×m,B∈Rm×n
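The cyclic property holds even when AB and BA have different sizes; a quick NumPy check (random example matrices, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

t_AB = np.trace(A @ B)   # trace of a 3x3 product
t_BA = np.trace(B @ A)   # trace of a 4x4 product -- same value

C = A @ B                # trace(C) = trace(C^T) for a square matrix
sym_ok = np.isclose(np.trace(C), np.trace(C.T))
```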
Matrix determinant:
- Think of the matrix as a transformation: two vectors span a patch of the plane, and the determinant is the ratio of its area after the transformation to its area before
- if the determinant is 0, the transformation squishes the plane onto a line or a point
- generally, a zero determinant means everything is squished into a lower dimension
- a negative determinant means the orientation of the plane is flipped
- Of a Square Matrix
- det(A) = Σ_{j=1}^n (−1)^(i+j) A_ij · det A^(i,j)
- inductive formula (Laplace's determinant expansion)
- where i is any row, chosen at will
- A^(i,j) denotes the (n−1)×(n−1) submatrix of A obtained by eliminating row i and column j from A
- A ∈ R^(n×n) is singular ⟺ det(A) = 0 ⟺ N(A) ≠ {0}
- For any square matrices A, B ∈ R^(n×n) and a scalar α:
- det(A) = det(A^T)
- det(AB) = det(BA) = det(A) det(B)
- det(αA) = α^n det(A)
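These rules are easy to verify numerically (NumPy and the 2×2 example values are my own; B is a flip, so its determinant is negative, matching the orientation remark above):

```python
import numpy as np

A = np.array([[2., 0.], [1., 3.]])     # det = 6
B = np.array([[0., 1.], [1., 0.]])     # a flip: det = -1

prod_rule = np.isclose(np.linalg.det(A @ B),
                       np.linalg.det(A) * np.linalg.det(B))
transp_rule = np.isclose(np.linalg.det(A), np.linalg.det(A.T))
scale_rule = np.isclose(np.linalg.det(5 * A),          # n = 2 here,
                        5**2 * np.linalg.det(A))        # so factor 5^2
```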
Orthonormal matrix:
- U ∈ R^(n×n) with orthonormal columns
- We have U^T U = U U^T = I_n
- Hence 1 = det(I_n) = det(U^T U) = (det U)^2, so det U = ±1
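A rotation matrix is the standard example; a NumPy check (the angle is an arbitrary choice of mine):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthonormal columns

identity_ok = (np.allclose(U.T @ U, np.eye(2))
               and np.allclose(U @ U.T, np.eye(2)))
d = np.linalg.det(U)                               # must be +1 or -1
```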
Matrix inverse:
- If A ∈ R^(n×n) is non-singular (det(A) ≠ 0), then A^(−1) is the unique n×n matrix such that A A^(−1) = A^(−1) A = I_n
- If A,B are both square nonsingular, then (AB)−1=B−1A−1
- If A is square and nonsingular then:
- (A^T)^(−1) = (A^(−1))^T
- det(A) = det(A^T) = 1 / det(A^(−1))
- For a generic matrix A ∈ R^(m×n):
- if m ≥ n, A^li is a left inverse of A if A^li A = I_n
- if n ≥ m, A^ri is a right inverse of A if A A^ri = I_m
- In general, a matrix A^pi is a pseudoinverse of A if A A^pi A = A
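A sketch of all three notions with NumPy (the example matrices are my own; `np.linalg.pinv` computes the Moore–Penrose pseudoinverse, which acts as a left inverse for a tall full-column-rank matrix):

```python
import numpy as np

# square and nonsingular: the ordinary inverse
A = np.array([[2., 1.], [1., 1.]])      # det = 1
Ainv = np.linalg.inv(A)
inv_ok = (np.allclose(A @ Ainv, np.eye(2))
          and np.allclose(Ainv @ A, np.eye(2)))

# tall matrix (m >= n) with full column rank: pinv is a left inverse
T = np.array([[1., 0.], [0., 1.], [1., 1.]])   # 3x2
T_pi = np.linalg.pinv(T)
left_ok = np.allclose(T_pi @ T, np.eye(2))

# and it satisfies the pseudoinverse identity A A^pi A = A
pinv_ok = np.allclose(T @ T_pi @ T, T)
```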
Similar matrix:
- Two matrices A, B ∈ R^(n×n) are said to be similar if there exists a nonsingular matrix P ∈ R^(n×n) such that B = P^(−1) A P
- any P with det(P) ≠ 0, i.e., with n linearly independent columns, works
- Similar matrices are different representations of the same linear map, under a change of basis in the underlying space
- They have the same set of eigenvalues
- Matrix B=P−1AP represents the linear map y=Ax, in the new basis defined by the columns of P
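The shared spectrum is easy to confirm numerically (NumPy and the 2×2 example values are my own illustration):

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])      # eigenvalues 2 and 3
P = np.array([[1., 1.], [0., 1.]])      # nonsingular change of basis
B = np.linalg.inv(P) @ A @ P            # B is similar to A

eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))
same_spectrum = np.allclose(eig_A, eig_B)
```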
Matrix Decomposition: