Fundamentals of Vectors

Geometric Basics

Definition and Representation

Geometric Interpretation

Position Vectors

Direction Vectors

Unit Vectors

Standard Basis Vectors

Vector Components Description
$\mathbf{i}$ $\langle 1,0,0 \rangle$ Unit vector in x-direction
$\mathbf{j}$ $\langle 0,1,0 \rangle$ Unit vector in y-direction
$\mathbf{k}$ $\langle 0,0,1 \rangle$ Unit vector in z-direction
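A quick NumPy sketch (not part of the original notes) showing that any vector $\langle a,b,c \rangle$ decomposes into components along $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$:

```python
import numpy as np

# Standard basis vectors in R^3 (illustrative sketch)
i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# Any vector <a, b, c> is the combination a*i + b*j + c*k
v = np.array([2.0, -3.0, 5.0])
reconstructed = v[0] * i_hat + v[1] * j_hat + v[2] * k_hat
assert np.allclose(v, reconstructed)
```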

Basic Vector Operations

Addition and Subtraction

Parallelogram Law

Triangle Inequality

Scalar Multiplication

Systems of Linear Equations

Matrix Form (Ax = b)

$$ \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} $$

Solution Types

Type Condition Geometric Interpretation
Unique Solution $\text{rank}(A) = \text{rank}([A|b]) = n$ Lines/planes intersect at one point
Infinite Solutions $\text{rank}(A) = \text{rank}([A|b]) < n$ Lines/planes overlap
No Solution $\text{rank}(A) < \text{rank}([A|b])$ Lines/planes are parallel
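A short NumPy sketch of the rank test in the table above (the sample system is arbitrary):

```python
import numpy as np

# Classify a linear system Ax = b by comparing rank(A) with rank([A|b])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0],
              [6.0]])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
n = A.shape[1]                       # number of unknowns

if rank_A == rank_Ab == n:
    print("unique solution")
elif rank_A == rank_Ab < n:
    print("infinitely many solutions")
else:
    print("no solution")
```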

Geometric Interpretation

Solution Methods

Gaussian Elimination

  1. Forward Elimination
     - Convert the matrix to row echelon form (REF)
     - Create zeros below the diagonal

[1 * * *]
[0 1 * *]
[0 0 1 *]
[0 0 0 1]

  2. Back Substitution
     - Solve for the variables from the bottom up
     - $x_n \rightarrow x_{n-1} \rightarrow \cdots \rightarrow x_1$
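A minimal Python sketch of the two phases, assuming a square nonsingular system (the helper `gaussian_solve` and the sample data are illustrative, not from the notes):

```python
import numpy as np

def gaussian_solve(A, b):
    """Naive Gaussian elimination with partial pivoting (illustrative sketch)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: create zeros below the diagonal
    for k in range(n - 1):
        pivot = np.argmax(np.abs(A[k:, k])) + k   # partial pivoting for stability
        A[[k, pivot]] = A[[pivot, k]]
        b[[k, pivot]] = b[[pivot, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]

    # Back substitution: solve from x_n up to x_1
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_solve(A, b))            # [ 2.  3. -1.]
print(np.linalg.solve(A, b))           # same answer from NumPy
```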

Gauss-Jordan Elimination

[1 0 0 *]
[0 1 0 *]
[0 0 1 *]

Key Operations

Operation Description Notation
Row Swap Swap rows $i$ and $j$ $R_i \leftrightarrow R_j$
Scalar Multiplication Multiply row $i$ by $c$ $cR_i$
Row Addition Add multiple of row $i$ to row $j$ $R_j + cR_i$

Matrices

Types and Properties

Type Definition Properties
Square $n \times n$ matrix - Same number of rows and columns
- Has a determinant
- May be invertible
Rectangular $m \times n$ matrix with $m \neq n$ - Different number of rows and columns
- No determinant
Identity ($I_n$) $a_{ij} = \begin{cases} 1 & \text{if } i=j \\ 0 & \text{if } i\neq j \end{cases}$ - Square matrix
- 1's on diagonal, 0's elsewhere
- $AI = IA = A$
Zero ($0$) All entries are 0 - Can be any dimension
- $A + 0 = A$
Diagonal $a_{ij} = 0$ for $i \neq j$ - Non-zero elements only on the main diagonal
Triangular Upper: $a_{ij} = 0$ for $i > j$; Lower: $a_{ij} = 0$ for $i < j$ - Square matrix
- Determinant = product of diagonal entries

Basic Matrix Operations

Addition and Subtraction

Scalar Multiplication

Transpose

Row Echelon Form

A matrix is in row echelon form if:

  1. All zero rows are at the bottom
  2. Leading coefficient (pivot) of each nonzero row is to the right of pivots above
  3. All entries below pivots are zero

Reduced Row Echelon Form

Additional conditions for RREF:

  1. Leading coefficient of each nonzero row is 1
  2. Each leading 1 is the only nonzero entry in its column
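A quick way to see RREF in practice is SymPy's `Matrix.rref()` (illustrative example; the matrix is arbitrary):

```python
import sympy as sp

# Reduced row echelon form via SymPy (illustrative sketch)
A = sp.Matrix([[ 1, 2, -1,  -4],
               [ 2, 3, -1, -11],
               [-2, 0, -3,  22]])

rref_matrix, pivot_columns = A.rref()
print(rref_matrix)     # leading 1s with zeros above and below each pivot
print(pivot_columns)   # indices of the pivot columns
```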

Pivot Positions

Leading Entries

Properties:

Vector Spaces

Axioms of Vector Spaces

Let $V$ be a vector space over field $F$ with vectors $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and scalars $c, d \in F$:

  1. Closure under addition: $\mathbf{u} + \mathbf{v} \in V$
  2. Commutativity: $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
  3. Associativity: $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$
  4. Additive identity: $\exists \mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$
  5. Additive inverse: $\exists -\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$
  6. Scalar multiplication closure: $c\mathbf{v} \in V$
  7. Scalar multiplication distributivity: $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
  8. Vector distributivity: $(c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v}$
  9. Scalar multiplication associativity: $c(d\mathbf{v}) = (cd)\mathbf{v}$
  10. Scalar multiplication identity: $1\mathbf{v} = \mathbf{v}$

Linear Independence

Definition

Vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ are linearly independent if: $$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n = \mathbf{0}$$ implies $c_1 = c_2 = \cdots = c_n = 0$

Tests and Algorithms

Test Description Result
Matrix Test Form matrix $A$ with vectors as columns and solve $A\mathbf{x}=\mathbf{0}$ Independent if only solution is trivial
Determinant For square matrix of vectors Independent if det$(A) \neq 0$
Rank Compute rank of matrix $A$ Independent if rank = number of vectors
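A NumPy sketch of the matrix/rank/determinant tests (sample vectors chosen so that $v_3 = v_1 + v_2$):

```python
import numpy as np

# Test linear independence by putting the vectors in the columns of a matrix
# and comparing its rank with the number of vectors.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])        # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)                    # False

# For a square matrix, a nonzero determinant gives the same verdict
print(np.isclose(np.linalg.det(A), 0.0))   # True: determinant is zero
```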

Span

Definition

The span of vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ is: $$\text{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_n\} = \{c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n : c_i \in F\}$$

Geometric Interpretation

Basis

A basis is a linearly independent set of vectors that spans the vector space.

Standard Basis

For $\mathbb{R}^n$: $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ where: $\mathbf{e}_i = [0, \ldots, 1, \ldots, 0]^T$ (1 in the $i$th position)

Dimension

The dimension of a vector space is the number of vectors in any basis.

Basis Theorem

Every basis of a vector space has the same number of vectors.

Dimension Theorem

For finite-dimensional vector space $V$:

Subspaces

Tests for Subspaces

A subset $W$ of vector space $V$ is a subspace if:

  1. $\mathbf{0} \in W$
  2. Closed under addition: $\mathbf{u}, \mathbf{v} \in W \implies \mathbf{u} + \mathbf{v} \in W$
  3. Closed under scalar multiplication: $c \in F, \mathbf{v} \in W \implies c\mathbf{v} \in W$

Common Subspaces

Subspace Definition Dimension
Null Space $N(A) = \{\mathbf{x}: A\mathbf{x}=\mathbf{0}\}$ $n - \text{rank}(A)$
Column Space $C(A) = \{\mathbf{y}: \mathbf{y}=A\mathbf{x} \text{ for some } \mathbf{x}\}$ $\text{rank}(A)$
Row Space $R(A) = C(A^T)$ $\text{rank}(A)$
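A small NumPy check of the rank-nullity relationship behind this table (sample matrix is arbitrary):

```python
import numpy as np

# Dimensions of the column space and null space via rank (illustrative sketch)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row is a multiple of the first

rank = np.linalg.matrix_rank(A)       # dim C(A) = dim R(A)
nullity = A.shape[1] - rank           # dim N(A) = n - rank(A)
print(rank, nullity)                  # 1 2
```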

Advanced Vector Operations (cont. 1)

Dot Product

The scalar product of two vectors.

Geometric Definition

$$\vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}|\cos(\theta)$$ where $\theta$ is the angle between vectors

Algebraic Definition

For vectors in $\mathbb{R}^n$: $$\vec{a} \cdot \vec{b} = \sum_{i=1}^n a_ib_i = a_1b_1 + a_2b_2 + ... + a_nb_n$$

Properties

Property Formula Description
Commutative $\vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a}$ Order doesn't matter
Distributive $\vec{a} \cdot (\vec{b} + \vec{c}) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}$ Distributes over addition
Scalar Multiplication $(k\vec{a}) \cdot \vec{b} = k(\vec{a} \cdot \vec{b})$ Scalars can be factored out
Self-Dot Product $\vec{a} \cdot \vec{a} = |\vec{a}|^2$ Dot product with itself equals magnitude squared

Angle Formula

$$\cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}$$

Projection Formula

Vector projection of $\vec{a}$ onto $\vec{b}$: $$\text{proj}_{\vec{b}}\vec{a} = \frac{\vec{a} \cdot \vec{b}}{|\vec{b}|^2}\vec{b}$$
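A NumPy sketch combining the algebraic dot product, the angle formula, and the projection formula (sample vectors are arbitrary):

```python
import numpy as np

# Dot product, angle, and projection (illustrative sketch)
a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])

dot = a @ b                                        # sum of a_i * b_i
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # angle between the vectors

proj_a_onto_b = (dot / np.dot(b, b)) * b           # proj_b(a) = (a.b / |b|^2) b
print(theta, proj_a_onto_b)                        # ~0.927 rad, [3. 0. 0.]
```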

Cross Product

Vector product resulting in a vector perpendicular to both input vectors (3D only).

Right-Hand Rule

  1. Point index finger in direction of first vector
  2. Point middle finger in direction of second vector
  3. Thumb points in direction of cross product

Properties

Property Formula Description
Anti-commutative $\vec{a} \times \vec{b} = -(\vec{b} \times \vec{a})$ Order matters
Distributive $\vec{a} \times (\vec{b} + \vec{c}) = \vec{a} \times \vec{b} + \vec{a} \times \vec{c}$ Distributes over addition
Magnitude $|\vec{a} \times \vec{b}| = |\vec{a}||\vec{b}|\sin(\theta)$ Area of parallelogram
Perpendicular $\vec{a} \times \vec{b} \perp \vec{a}$ and $\vec{a} \times \vec{b} \perp \vec{b}$ Result is perpendicular to both vectors

Triple Product

Scalar triple product: $$\vec{a} \cdot (\vec{b} \times \vec{c}) = \det[\vec{a} \; \vec{b} \; \vec{c}]$$ Represents volume of parallelepiped
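A NumPy sketch of the cross product and scalar triple product, using the standard basis vectors as input:

```python
import numpy as np

# Cross product and scalar triple product (illustrative sketch)
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([0.0, 0.0, 1.0])

print(np.cross(a, b))                 # [0. 0. 1.], perpendicular to a and b
print(np.dot(a, np.cross(b, c)))      # 1.0, volume of the unit parallelepiped
print(np.linalg.det(np.column_stack([a, b, c])))   # same value via determinant
```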

Linear Combinations

Sum of vectors with scalar coefficients: $$c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n}$$

Geometric Interpretation

Matrix Properties (cont. 1)

Rank

The rank of a matrix is the dimension of the vector space spanned by its columns (or rows).

Full Rank

A matrix has full rank when $\text{rank}(A) = \min(m, n)$, i.e. the rank equals the smaller of the number of rows and columns.

Rank Theorems

Theorem Statement
Rank-Nullity For m×n matrix A: rank(A) + nullity(A) = n
Product Rank rank(AB) ≤ min(rank(A), rank(B))
Addition Rank rank(A + B) ≤ rank(A) + rank(B)

Trace

The trace of a square matrix is the sum of elements on the main diagonal. $$tr(A) = \sum_{i=1}^n a_{ii}$$

Properties

Determinant

For square matrix A, det(A) or |A| measures the scaling factor of the linear transformation.

Properties

  1. $\det(AB) = \det(A)\det(B)$
  2. $\det(A^T) = \det(A)$
  3. $\det(A^{-1}) = \frac{1}{\det(A)}$
  4. For triangular matrices: $\det$ = product of diagonal entries
  5. $\det(cA) = c^n\det(A)$ for an $n \times n$ matrix

Calculation Methods

Cofactor Expansion

For an $n \times n$ matrix, expanding along row $i$: $$\det(A) = \sum_{j=1}^n a_{ij}C_{ij}$$ where $C_{ij}$ is the $(i,j)$ cofactor

Row/Column Expansion

Choose any row/column $i$: $$\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij}M_{ij}$$ where $M_{ij}$ is the $(i,j)$ minor

Triangular Method

  1. Convert to upper triangular using row operations
  2. Multiply diagonal elements
  3. Account for row operation signs

Cramer's Rule

For system $A\mathbf{x} = \mathbf{b}$ with $\det(A) \neq 0$: $$x_i = \frac{\det(A_i)}{\det(A)}$$ where $A_i$ is $A$ with column $i$ replaced by $\mathbf{b}$
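A minimal Python sketch of Cramer's rule, assuming $\det(A) \neq 0$ (the helper `cramer_solve` and the sample system are illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule, assuming det(A) != 0 (illustrative sketch)."""
    det_A = np.linalg.det(A)
    x = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        A_i = A.copy()
        A_i[:, i] = b                 # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer_solve(A, b))             # [1. 3.]
print(np.linalg.solve(A, b))          # matches
```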

Matrix Operations (cont. 1)

Matrix Multiplication

For matrices $A_{m \times n}$ and $B_{n \times p}$: $$(AB)_{ij} = \sum_{k=1}^n a_{ik}b_{kj}$$

Properties

Non-commutativity

In general $AB \neq BA$, even when both products are defined.

Associativity

$(AB)C = A(BC)$ always holds for conformable matrices

Inverse

For square matrix $A$, if $AA^{-1} = A^{-1}A = I$, then $A^{-1}$ is the inverse of $A$

Existence Conditions

$A^{-1}$ exists if and only if $A$ is square and $\det(A) \neq 0$ (equivalently, $\text{rank}(A) = n$).

Properties

Property Formula
Double Inverse $(A^{-1})^{-1} = A$
Product Inverse $(AB)^{-1} = B^{-1}A^{-1}$
Scalar Inverse $(cA)^{-1} = \frac{1}{c}A^{-1}$
Transpose Inverse $(A^{-1})^T = (A^T)^{-1}$

Calculation Methods

Adjugate Method

For an n×n matrix: $$A^{-1} = \frac{1}{\det(A)}\text{adj}(A)$$ where adj(A) is the adjugate matrix

Gaussian Elimination

  1. Form augmented matrix $\lbrack A|I \rbrack$
  2. Convert left side to identity matrix
  3. Right side becomes $A^{-1}$
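A minimal Python sketch of this $[A|I]$ procedure, assuming $A$ is invertible (the helper `inverse_gauss_jordan` is illustrative, not from the notes):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] (illustrative sketch)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        pivot = np.argmax(np.abs(M[k:, k])) + k       # partial pivoting
        M[[k, pivot]] = M[[pivot, k]]
        M[k] /= M[k, k]                               # make the pivot 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]                # clear the rest of column k
    return M[:, n:]                                   # right half is A^{-1}

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(inverse_gauss_jordan(A))
print(np.linalg.inv(A))                               # should match
```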

Elementary Matrices

Conjugate Transpose

For matrix $A$, the conjugate transpose $A^*$ (also written $A^H$) is obtained by transposing $A$ and taking the complex conjugate of every entry; for real matrices, $A^* = A^T$.

Systems of Linear Equations (cont. 1)

Homogeneous Systems ($A\mathbf{x} = \mathbf{0}$)

A system where all constants are zero: $A\mathbf{x} = \mathbf{0}$

Trivial Solution

$\mathbf{x} = \mathbf{0}$ is always a solution of $A\mathbf{x} = \mathbf{0}$.

Nontrivial Solutions

Nontrivial solutions exist if and only if $\text{rank}(A) < n$ (for square $A$, equivalently $\det(A) = 0$).

Consistency Theorems

Consistent System Conditions

Condition System is Consistent When
Rank Test $\text{rank}(A) = \text{rank}([A|\mathbf{b}])$
Square Matrix $\text{det}(A) \neq 0$
General Solutions exist if $\mathbf{b}$ is in column space of $A$

Fredholm Alternative

For system $A\mathbf{x} = \mathbf{b}$, exactly one of these is true:

  1. System has a solution
  2. $\mathbf{y}^T A = \mathbf{0}$ has a solution with $\mathbf{y}^T\mathbf{b} \neq 0$

Cramer's Rule

For system $A\mathbf{x} = \mathbf{b}$ where $A$ is $n \times n$ with $\text{det}(A) \neq 0$: $$x_i = \frac{\text{det}(A_i)}{\text{det}(A)}$$ Where $A_i$ is matrix $A$ with column $i$ replaced by $\mathbf{b}$

Matrix Inverse Method

For square system $A\mathbf{x} = \mathbf{b}$ where $A$ is invertible: $$\mathbf{x} = A^{-1}\mathbf{b}$$

Requirements:

  1. $A$ is square
  2. $A$ is invertible ($\det(A) \neq 0$)

Linear Transformations

Definition

A linear transformation $T: V \to W$ is a function between vector spaces that preserves:

  1. Addition: $T(u + v) = T(u) + T(v)$
  2. Scalar multiplication: $T(cv) = cT(v)$

Matrix Representation

Every linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ can be represented by a unique $m \times n$ matrix $A$

Standard Matrix

For transformation $T$, the standard matrix $A$ is formed by: $$A = [T(e_1) \; T(e_2) \; \cdots \; T(e_n)]$$ where $e_i$ are standard basis vectors
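A NumPy sketch of building a standard matrix column by column, using a 90° rotation of $\mathbb{R}^2$ as the (assumed) example transformation:

```python
import numpy as np

# Build the standard matrix of a linear transformation by applying it to the
# standard basis vectors (illustrative sketch).
def T(v):
    # Rotation by 90 degrees counterclockwise
    return np.array([-v[1], v[0]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])   # A = [T(e1) T(e2)]
print(A)                              # [[ 0. -1.]
                                      #  [ 1.  0.]]
print(A @ np.array([2.0, 3.0]))       # same as T([2, 3]) = [-3, 2]
```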

Properties

Property Definition Test
Kernel (Null Space) $\text{ker}(T) = \{v \in V : T(v) = 0\}$ Solve $Ax = 0$
Range (Image) $\text{range}(T) = \{T(v) : v \in V\}$ Span of columns of $A$
One-to-One (Injective) $T(v_1) = T(v_2) \implies v_1 = v_2$ $\text{ker}(T) = \{0\}$
Onto (Surjective) $\text{range}(T) = W$ Columns span $W$
Isomorphism Bijective linear transformation One-to-one and onto

Special Transformations

Rotation

Reflection

Projection

Scaling

Shearing

Inner Product Spaces

Definition

An inner product on a vector space $V$ is a function $\langle \cdot,\cdot \rangle: V \times V \to \mathbb{R}$ (or $\mathbb{C}$)

Properties

Property Mathematical Form Description
Positive Definiteness $\langle x,x \rangle \geq 0$ and $\langle x,x \rangle = 0 \iff x = 0$ Always non-negative, zero only for zero vector
Symmetry $\langle x,y \rangle = \overline{\langle y,x \rangle}$ Complex conjugate for complex spaces
Linearity $\langle ax+by,z \rangle = a\langle x,z \rangle + b\langle y,z \rangle$ Linear in first argument

Norm

The norm induced by inner product: $|x| = \sqrt{\langle x,x \rangle}$

Properties

Distance Function

$$d(x,y) = |x-y| = \sqrt{\langle x-y,x-y \rangle}$$

Orthogonality

Orthogonal Vectors

Two vectors $x,y$ are orthogonal if $\langle x,y \rangle = 0$

Orthogonal Sets

Orthogonal Matrices

Matrix $Q$ is orthogonal if $Q^TQ = QQ^T = I$

Properties

Property Formula Note
Inverse $Q^{-1} = Q^T$ Transpose equals inverse
Determinant $\det(Q) = \pm 1$ Always unit magnitude
Column/Rows $\langle q_i,q_j \rangle = \delta_{ij}$ Form orthonormal set
Length Preservation $|Qx| = |x|$ Preserves distances

Orthogonal Complements

For subspace $W$, the orthogonal complement $W^\perp$ is: $$W^\perp = \{x \in V : \langle x,w \rangle = 0 \text{ for all } w \in W\}$$

Orthogonal Projections

Projection onto subspace $W$: $$\text{proj}_W(x) = \sum_{i=1}^{k} \frac{\langle x,w_i \rangle}{|w_i|^2}w_i$$ where $\{w_1,\ldots,w_k\}$ is an orthogonal basis for $W$

For an orthonormal basis: $$\text{proj}_W(x) = \sum_{i=1}^{k} \langle x,w_i \rangle w_i$$

Special Matrices (cont. 1)

Type Definition Properties Example
Symmetric $A = A^T$ - Diagonal elements can be any real number
- Elements symmetric across the main diagonal
- All eigenvalues are real
$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}$$
Skew-symmetric $A = -A^T$ - Diagonal elements must be zero
- $a_{ij} = -a_{ji}$
- All eigenvalues are purely imaginary or zero
$$\begin{bmatrix} 0 & 2 & -1 \\ -2 & 0 & 3 \\ 1 & -3 & 0 \end{bmatrix}$$
Orthogonal $AA^T = A^TA = I$ - $A^{-1} = A^T$
- Columns/rows form an orthonormal basis
- $\det(A) = \pm 1$
- Preserves lengths and angles
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
Idempotent $A^2 = A$ - Eigenvalues are only 0 or 1
- Trace = rank
- Used in projection matrices
$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$
Nilpotent $A^k = 0$ for some $k$ - All eigenvalues = 0
- Trace = 0
- $k \leq n$ where $n$ is the matrix size
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$

Additional Properties:

  1. Symmetric Matrices:
     - All real symmetric matrices are diagonalizable
     - $x^TAx$ is a quadratic form

  2. Orthogonal Matrices:
     - Every column/row has unit length
     - Any two columns/rows are perpendicular
     - Preserves inner products: $(Ax)^T(Ay) = x^Ty$

  3. Idempotent Matrices:
     - $I - A$ is also idempotent if $A$ is idempotent
     - Rank = trace for idempotent matrices

  4. Nilpotent Matrices:
     - The minimal $k$ for which $A^k = 0$ is called the index of nilpotency
     - Characteristic polynomial is $\lambda^n$
Change of Basis

Transition Matrices

A transition matrix $P_{B \leftarrow C}$ converts coordinates relative to basis $C$ into coordinates relative to basis $B$: $$P_{B \leftarrow C} = [[\mathbf{c}_1]_B \; [\mathbf{c}_2]_B \; \cdots \; [\mathbf{c}_n]_B]$$ where the columns are the vectors of basis $C$ expressed in basis $B$

Operation Formula Meaning
Change from C to B $[v]_B = P_{B \leftarrow C}[v]_C$ Coordinates of $v$ in basis $B$
Change from B to C $[v]_C = P_{C \leftarrow B}[v]_B$ Coordinates of $v$ in basis $C$
Inverse relation $P_{C \leftarrow B} = P_{B \leftarrow C}^{-1}$ The two transition matrices are inverses
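A NumPy sketch of a change of basis, assuming $B$ is the standard basis of $\mathbb{R}^2$ and $C = \{(1,1), (1,-1)\}$ (example values are arbitrary):

```python
import numpy as np

# Columns of P are the basis C expressed in the standard basis B,
# so [v]_B = P [v]_C  (illustrative sketch)
c1 = np.array([1.0, 1.0])
c2 = np.array([1.0, -1.0])
P = np.column_stack([c1, c2])          # P_{B<-C}

v_C = np.array([2.0, 3.0])             # coordinates of v relative to C
v_B = P @ v_C                          # standard coordinates: 2*c1 + 3*c2
print(v_B)                             # [ 5. -1.]
print(np.linalg.solve(P, v_B))         # back to C-coordinates: [2. 3.]
```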

Similar Matrices

Two matrices A and B are similar if: $$B = P^{-1}AP$$ where P is an invertible matrix

Properties:

Coordinate Vectors

For a vector $v$ and basis $B = \{b_1, b_2, ..., b_n\}$: $$[v]_B = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$$ where $v = c_1b_1 + c_2b_2 + ... + c_nb_n$

Orthogonalization

Gram-Schmidt Process

Converting basis $\{v_1, v_2, ..., v_n\}$ to an orthogonal basis $\{u_1, u_2, ..., u_n\}$:

  1. $u_1 = v_1$
  2. $u_2 = v_2 - \text{proj}_{u_1}(v_2)$
  3. $u_3 = v_3 - \text{proj}_{u_1}(v_3) - \text{proj}_{u_2}(v_3)$

General formula: $$u_k = v_k - \sum_{i=1}^{k-1} \text{proj}_{u_i}(v_k)$$ where $\text{proj}_u(v) = \frac{\langle v,u \rangle}{\langle u,u \rangle}u$
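A minimal Python sketch of the process above (the helper `gram_schmidt` and sample vectors are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt on a list of vectors (illustrative sketch)."""
    orthogonal = []
    for v in vectors:
        u = v.astype(float).copy()
        for w in orthogonal:
            u -= (np.dot(v, w) / np.dot(w, w)) * w    # subtract proj_w(v)
        orthogonal.append(u)
    return orthogonal

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
us = gram_schmidt(vs)

# Pairwise dot products of distinct vectors should be (numerically) zero
print(np.dot(us[0], us[1]), np.dot(us[0], us[2]), np.dot(us[1], us[2]))
```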

QR Decomposition

Matrix $A$ can be decomposed as: $$A = QR$$ where $Q$ has orthonormal columns (obtained, e.g., by Gram-Schmidt) and $R$ is upper triangular.
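NumPy computes this factorization directly; a short illustrative check (the matrix is arbitrary):

```python
import numpy as np

# QR decomposition with NumPy (illustrative sketch)
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # Q has orthonormal columns
print(np.allclose(np.triu(R), R))        # R is upper triangular
print(np.allclose(Q @ R, A))             # A = QR
```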

Orthonormal Basis

Construction

  1. Start with any basis
  2. Apply Gram-Schmidt process
  3. Normalize each vector: $e_i = \frac{u_i}{|u_i|}$

Properties

Property Description
Orthogonality $\langle e_i,e_j \rangle = 0$ for $i \neq j$
Normality $|e_i| = 1$ for all i
Transition Matrix P is orthogonal ($P^T = P^{-1}$)
Coordinates $[v]_B = [\langle v,e_1 \rangle \; \langle v,e_2 \rangle \; \cdots \; \langle v,e_n \rangle]^T$

Eigenvalues and Eigenvectors

Definitions

Characteristic Equation

$$\det(A - \lambda I) = 0$$ where $\lambda$ is an eigenvalue of $A$ and $I$ is the identity matrix

Characteristic Polynomial

$$p(\lambda) = \det(A - \lambda I)$$

Properties

Property Description
Sum $\sum \lambda_i = \text{trace}(A)$
Product $\prod \lambda_i = \det(A)$
Number ≤ dimension of matrix

Geometric Multiplicity

The dimension of the eigenspace $E_\lambda$, i.e. the number of linearly independent eigenvectors for $\lambda$.

Algebraic Multiplicity

The multiplicity of $\lambda$ as a root of the characteristic polynomial.

Eigenspace

$$E_\lambda = \text{null}(A - \lambda I)$$

Diagonalization

$A = PDP^{-1}$ where $D$ is diagonal with the eigenvalues of $A$ on its diagonal and the columns of $P$ are the corresponding eigenvectors

Diagonalizability Conditions

Matrix $A$ is diagonalizable if and only if:

  1. Geometric multiplicity = algebraic multiplicity for all eigenvalues
  2. Sum of dimensions of eigenspaces = matrix dimension

Diagonalization Process

  1. Find eigenvalues (solve characteristic equation)
  2. Find eigenvectors for each eigenvalue
  3. Form $P$ from eigenvectors as columns
  4. Form $D$ with eigenvalues on diagonal
  5. Verify $A = PDP^{-1}$
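A NumPy sketch of this process, assuming the sample matrix has a full set of independent eigenvectors:

```python
import numpy as np

# Diagonalization A = P D P^{-1} with NumPy (illustrative sketch)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
P = eigvecs                         # eigenvectors as columns
D = np.diag(eigvals)                # eigenvalues on the diagonal

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}

# Powers of A come almost for free: A^5 = P D^5 P^{-1}
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(eigvals**5) @ np.linalg.inv(P)))
```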

Similar Matrices

Special Cases

Repeated Eigenvalues

Complex Eigenvalues

Applications

Powers of Matrices

$$A^n = PD^nP^{-1}$$

Difference Equations

For system $\vec{x}_{k+1} = A\vec{x}_k$: if $A = PDP^{-1}$, then $\vec{x}_k = A^k\vec{x}_0 = PD^kP^{-1}\vec{x}_0$.

Differential Equations

For system $\frac{d\vec{x}}{dt} = A\vec{x}$: solutions are built from the eigenpairs $(\lambda_i, \vec{v}_i)$ as combinations of $e^{\lambda_i t}\vec{v}_i$; equivalently $\vec{x}(t) = e^{At}\vec{x}(0)$.


Important Theorems

Fundamental Theorem of Linear Algebra

For a linear transformation $T: V \rightarrow W$ and its matrix $A$: the row space is the orthogonal complement of the null space, the column space is the orthogonal complement of the left null space $N(A^T)$, and $\dim C(A) = \dim R(A) = \text{rank}(A)$.

Cayley-Hamilton Theorem

Every square matrix $A$ satisfies its own characteristic equation: $$p(A) = 0 \text{ where } p(\lambda) = \det(A - \lambda I)$$
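A quick numerical check of the theorem (illustrative sketch; the sample matrix is arbitrary):

```python
import numpy as np

# Numerical check of the Cayley-Hamilton theorem
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)     # characteristic polynomial coefficients, highest power first
n = A.shape[0]

# Evaluate p(A) = A^n + c_1 A^{n-1} + ... + c_n I using matrix powers
p_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(p_of_A, np.zeros_like(A)))   # True
```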

Spectral Theorem

For symmetric matrices $A \in \mathbb{R}^{n \times n}$: all eigenvalues are real, eigenvectors for distinct eigenvalues are orthogonal, and $A = Q\Lambda Q^T$ for an orthogonal matrix $Q$ and diagonal $\Lambda$.

Triangle Inequality

For vectors $x, y$: $$|x + y| \leq |x| + |y|$$

Cauchy-Schwarz Inequality

For vectors $x, y$: $$|x^Ty| \leq |x||y|$$

Invertible Matrix Theorem

The following statements are equivalent for an $n \times n$ matrix $A$:

  1. $A$ is invertible
  2. $\det(A) \neq 0$
  3. $\text{rank}(A) = n$
  4. $\text{Null}(A) = \{\mathbf{0}\}$
  5. Columns are linearly independent
  6. $Ax = b$ has unique solution for all $b$

Matrix Factorization Theorems

LU Decomposition

For matrix $A$: $$A = LU$$ where $L$ is lower triangular (unit diagonal) and $U$ is upper triangular; some matrices require a row permutation, giving $PA = LU$.

QR Decomposition

For matrix $A$: $$A = QR$$ where $Q$ has orthonormal columns and $R$ is upper triangular

Cholesky Decomposition

For symmetric positive definite matrix $A$: $$A = LL^T$$ where $L$ is lower triangular with positive diagonal entries

Advanced Topics

Singular Value Decomposition (SVD)

Any matrix $A \in \mathbb{R}^{m \times n}$ can be decomposed as $A = U\Sigma V^T$

Singular Values

The singular values $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$ are the square roots of the eigenvalues of $A^TA$; they appear on the diagonal of $\Sigma$.

U and V Matrices

$U$ ($m \times m$) and $V$ ($n \times n$) are orthogonal; their columns are the left and right singular vectors of $A$.

Applications

Application Description
Low-rank approximation Truncate to $k$ largest singular values
Image compression Reduce dimensionality while preserving structure
Principal Component Analysis Use right singular vectors as principal components
Pseudoinverse $A^+ = V\Sigma^+U^T$
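A NumPy sketch of the SVD and a rank-1 approximation (sample matrix arbitrary; the last line checks the pseudoinverse formula from the table):

```python
import numpy as np

# SVD and a best rank-1 approximation (illustrative sketch)
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)                 # A = U diag(s) V^T
print(s)                                    # singular values in decreasing order

# Keep only the largest singular value for a rank-1 approximation
k = 1
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(A_k)

# Pseudoinverse via SVD agrees with np.linalg.pinv
print(np.allclose(np.linalg.pinv(A),
                  Vt.T @ np.diag(1.0 / s) @ U[:, :2].T))
```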

Jordan Canonical Form

For matrix $A$, exists invertible $P$ such that $P^{-1}AP = J$

Jordan Blocks

$$ J_k(\lambda) = \begin{bmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & \cdots & 0 & \lambda \end{bmatrix} $$

Generalized Eigenvectors

Positive Definite Matrices

Symmetric matrix $A$ where $x^TAx > 0$ for all nonzero $x$

Tests for Positive Definiteness

Test Condition
Eigenvalue All eigenvalues > 0
Leading Principals All leading principal minors > 0
Cholesky Exists unique lower triangular $L$ with $A = LL^T$
Quadratic Form $x^TAx > 0$ for all nonzero $x$
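A Python sketch of the eigenvalue and Cholesky tests (sample matrix is arbitrary):

```python
import numpy as np

# Two of the tests above for positive definiteness (illustrative sketch)
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])          # symmetric

# Eigenvalue test: all eigenvalues strictly positive
print(np.all(np.linalg.eigvalsh(A) > 0))     # True

# Cholesky test: factorization succeeds only for positive definite matrices
try:
    L = np.linalg.cholesky(A)                # A = L L^T
    print("positive definite")
except np.linalg.LinAlgError:
    print("not positive definite")
```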

Applications

Linear Operators

Maps between vector spaces preserving linear structure

Adjoint Operators

Self-Adjoint Operators

Dual Spaces

Vector space $V^*$ of linear functionals on $V$

Dual Basis

Dual Maps

For linear map $T: V \to W$, the dual map $T^*: W^* \to V^*$ is defined by $$\langle T^*f, v \rangle = \langle f, Tv \rangle$$

Tensor Products

$V \otimes W$ is universal space for bilinear maps

Definition

Properties

Property Description
Dimension $\dim(V \otimes W) = \dim(V)\dim(W)$
Associativity $(U \otimes V) \otimes W \cong U \otimes (V \otimes W)$
Distributivity $(U \oplus V) \otimes W \cong (U \otimes W) \oplus (V \otimes W)$
Field multiplication $(\alpha v) \otimes w = v \otimes (\alpha w) = \alpha(v \otimes w)$

Multilinear Algebra

Tensors

Exterior Algebra