Proof that any self-adjoint matrix is diagonalizable.

(Sorry. If you don’t know what this is, please ignore it. It’s not important. Really.)

Setup: If A is self-adjoint and W is an A-invariant subspace, then W⊥ is also A-invariant.

Want: ∀ x∈W⊥, Ax∈W⊥; that is,〈Ax,w〉= 0 ∀ w∈W ⇐ definition of the orthogonal complement

Given:〈Ax,w〉=〈x,Aw〉 ⇐ self-adjoint

Aw∈W ⇐ W is A-invariant

then〈Ax,w〉=〈x,Aw〉= 0, since x∈W⊥ and Aw∈W. So Ax∈W⊥, as wanted.
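The Setup can be sanity-checked numerically. A minimal sketch using NumPy (the matrix size, the subspace dimension k, and the RNG seed are my own choices, not from the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random self-adjoint (Hermitian) matrix: A equals its conjugate transpose.
n = 5
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2

# W = span of the first k eigenvectors of A is A-invariant.
eigvals, Q = np.linalg.eigh(A)  # columns of Q: orthonormal eigenvectors
k = 2
W = Q[:, :k]        # orthonormal basis of W
W_perp = Q[:, k:]   # orthonormal basis of W⊥

# Claim from the Setup: for x ∈ W⊥, Ax is orthogonal to every w ∈ W,
# i.e. the matrix W* A W⊥ is zero.
print(np.allclose(W.conj().T @ A @ W_perp, 0))  # True
```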

Main proof: Let A be self-adjoint on ℂn, and let S = {v1, v2, …, vk} be a maximal orthonormal set of eigenvectors of A (maximal meaning it cannot be extended by another eigenvector orthonormal to all of them).

Want: k=n (that is, an orthonormal basis made out of eigenvectors).

Proof by contradiction: Suppose k < n.


W = span(v1, v2, …, vk) is A-invariant (this is trivial, since each Avi = λivi lies in W; but see Appendix A)

then W⊥ is A-invariant (by the Setup above), so A restricts to an operator on W⊥:

A|W⊥ : W⊥ → W⊥

Then ∃ v∈W⊥, an eigenvector of A|W⊥: since k < n, W⊥ ≠ {0}, and every linear operator on a nonzero complex vector space has an eigenvector, because its characteristic polynomial has a root in ℂ.

But since A|W⊥ v = Av, v is an eigenvector of A perpendicular to W.

We assumed that S is maximal, but we ended up with a contradiction, since {v1, v2, …, vk, v/||v||} is an orthonormal set of eigenvectors strictly larger than S.

So k must be equal to n. As a result, A is orthodiagonalizable.
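As a numerical illustration of the conclusion (not part of the proof): numpy.linalg.eigh computes exactly such an orthonormal eigenbasis for a self-adjoint matrix. The example matrix here is a random one of my choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random self-adjoint matrix.
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2

# eigh returns real eigenvalues and an orthonormal basis of eigenvectors.
lam, Q = np.linalg.eigh(A)

print(np.allclose(Q.conj().T @ Q, np.eye(n)))         # Q is unitary: True
print(np.allclose(A, Q @ np.diag(lam) @ Q.conj().T))  # A = QΛQ*: True
```

The second check is orthodiagonalizability written as a matrix equation: A = QΛQ* with Q unitary and Λ diagonal.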


HTML is really not suited for doing math.

Appendix A

If v1, v2 are eigenvectors of a self-adjoint A with eigenvalues λ1≠λ2,〈v1,v2〉must be 0

Here is how: Av1=λ1v1, Av2=λ2v2

Given that: λ1〈v1,v2〉=〈λ1v1,v2〉=〈Av1,v2〉=〈v1,Av2〉=〈v1,λ2v2〉= λ2〈v1,v2〉 (the last step uses that eigenvalues of a self-adjoint matrix are real, so conjugating λ2 changes nothing) ⇒ λ1〈v1,v2〉= λ2〈v1,v2〉

Since λ1≠λ2, 〈v1,v2〉=0
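A quick numerical check of Appendix A (again a sketch with a random self-adjoint matrix of my choosing; with probability 1 its eigenvalues are distinct). Note that np.linalg.eig, unlike eigh, does not orthonormalize its output on purpose, so any orthogonality we observe comes from the theorem itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random self-adjoint matrix; its eigenvalues are distinct with probability 1.
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2

# eig makes no orthogonality promise, yet for a self-adjoint A
# eigenvectors with distinct eigenvalues come out orthogonal anyway.
lam, V = np.linalg.eig(A)
v1, v2 = V[:, 0], V[:, 1]

print(abs(lam[0] - lam[1]) > 1e-8)  # λ1 ≠ λ2
print(abs(np.vdot(v1, v2)) < 1e-8)  # ⟨v1,v2⟩ ≈ 0 up to rounding
```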