Linear Algebra

Basic Concepts

Trace

In general relativity, the trace is taken with the metric. An example is the trace of the Ricci tensor,

\[R=g^{ab}R_{ab}\]

The Einstein equation is

\[R_{ab}-\frac{1}{2}g_{ab}R=8\pi G T_{ab}\]

Contracting with \(g^{ab}\), and using \(g^{ab}g_{ab}=4\) in four dimensions, the trace is

\[\begin{split}g^{ab}R_{ab}-\frac{1}{2}g^{ab}g_{ab}R &= 8\pi G g^{ab}T_{ab} \\ \Rightarrow R-\frac{1}{2} 4 R &= 8\pi G T \\ \Rightarrow -R &= 8\pi GT\end{split}\]
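The metric contraction above can be sketched numerically with an index contraction. This is a minimal illustration assuming NumPy; `R_ab` is a random symmetric placeholder, not the Ricci tensor of any actual spacetime.

```python
import numpy as np

# Minkowski metric (signature -+++) and its inverse
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

# Placeholder symmetric tensor standing in for R_{ab}
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
R_ab = (S + S.T) / 2

# Trace with the metric: R = g^{ab} R_{ab}
R = np.einsum('ab,ab->', g_inv, R_ab)

# The factor used in the derivation: g^{ab} g_{ab} = 4 in four dimensions
four = np.einsum('ab,ab->', g_inv, g)
assert np.isclose(four, 4.0)
```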

Determinant

Some useful properties of the determinant:

  1. Interchanging two rows (columns) flips the sign of the determinant.
  2. The determinant can be computed recursively by cofactor expansion, though numerical implementations usually prefer LU decomposition for efficiency.
  3. The determinant of a block matrix can, under certain conditions, be expressed in terms of its blocks.

Here is an example of the determinant of a block matrix. Suppose our block matrix is

\[\begin{split}A = \begin{pmatrix} B & C \\ D & E \end{pmatrix},\end{split}\]

where each block is a square matrix of the same size. Provided the lower blocks commute, \(DE = ED\), the determinant can be calculated through

\[\mathrm{Det}(A) = \mathrm{Det}(BE - CD).\]

In particular, for a block-diagonal matrix (\(C = D = 0\)) this reduces to \(\mathrm{Det}(A) = \mathrm{Det}(B)\,\mathrm{Det}(E)\).
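The block formula can be checked numerically. This sketch assumes NumPy; the blocks \(D\) and \(E\) are built as polynomials in the same random matrix so that they commute, as the formula requires.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# D and E are polynomials in the same matrix M, so DE = ED
M = rng.standard_normal((3, 3))
D = M @ M
E = 2 * M + np.eye(3)

# Assemble the full block matrix and compare the two determinants
A = np.block([[B, C], [D, E]])
assert np.isclose(np.linalg.det(A), np.linalg.det(B @ E - C @ D))
```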

Technique

Inverse of a matrix

There are many methods to obtain the inverse of a matrix; see the Wikipedia article on Invertible matrix.

For example, the adjugate matrix method:

\[A^{-1} = \frac{A^*}{|A|}\]

in which \(A^*\) is the adjugate matrix of \(A\), i.e., the transpose of the cofactor matrix.
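The adjugate method can be sketched as follows, assuming NumPy; the helper `adjugate` is an illustrative name, building the transpose of the cofactor matrix entry by entry.

```python
import numpy as np

def adjugate(A):
    """Adjugate of A: transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# A^{-1} = adj(A) / det(A)
A_inv = adjugate(A) / np.linalg.det(A)
assert np.allclose(A_inv @ A, np.eye(3))
```

In practice this method is mostly of theoretical interest; for large matrices it is far slower than LU-based solvers.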

Eigenvalues of \(A^\dagger A\)

One can prove that the eigenvalues of any matrix \(B\) that can be written as \(B = A^\dagger A\) are nonnegative, i.e., \(B\) is positive semidefinite.

Proof

Suppose the eigenvectors are \(V_i\) with corresponding eigenvalues \(\lambda_i\), i.e.,

\[B V_i = \lambda_i V_i.\]

We now construct a number

\[V_i^\dagger B V_i.\]

On one hand, we have

\[V_i^\dagger B V_i = V_i^\dagger \lambda_i V_i = \lambda_i V_i^\dagger V_i,\]

where \(V_i^\dagger V_i \geq 0\).

On the other hand,

\[V_i^\dagger B V_i = V_i^\dagger A^\dagger A V_i = (A V_i)^\dagger A V_i \geq 0.\]

As long as \(V_i^\dagger V_i \neq 0\), we have

\[\lambda_i = (A V_i)^\dagger A V_i / V_i^\dagger V_i \geq 0.\]
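The claim can be checked numerically. A minimal sketch assuming NumPy, using a random complex matrix \(A\):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# B = A† A is Hermitian, so eigvalsh applies
B = A.conj().T @ A
eigvals = np.linalg.eigvalsh(B)

# All eigenvalues are nonnegative, up to floating-point roundoff
assert np.all(eigvals >= -1e-10)
```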

Tensor Product Space

Let \(\ket{\phi}_1\) and \(\ket{\phi}_2\) be elements of the Hilbert spaces \(H_1\) and \(H_2\). The tensor product of \(\ket{\phi}_1\) and \(\ket{\phi}_2\) is denoted \(\ket{\phi}_1\otimes \ket{\phi}_2\). This operation is bilinear and distributive.

The tensor product space \(H_1\otimes H_2\) consists of all linear combinations of tensor products of elements of \(H_1\) and \(H_2\).

Inner Product

The inner product of two tensor products factorizes:

\[(\bra{\phi}_1\otimes \bra{\phi}_2)(\ket{\psi}_1\otimes \ket{\psi}_2) = \left({}_1\braket{\phi}{\psi}_1\right)\left({}_2\braket{\phi}{\psi}_2\right)\]
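This factorization can be verified with a Kronecker-product sketch, assuming NumPy; `np.kron` on 1-D arrays realizes the tensor product of the vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, psi1 = rng.standard_normal(2), rng.standard_normal(2)
phi2, psi2 = rng.standard_normal(3), rng.standard_normal(3)

# Inner product of the tensor products ...
lhs = np.vdot(np.kron(phi1, phi2), np.kron(psi1, psi2))
# ... equals the product of the individual inner products
rhs = np.vdot(phi1, psi1) * np.vdot(phi2, psi2)
assert np.isclose(lhs, rhs)
```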

Operators Applied to Tensor Product

Two operators \(\hat O_1\) and \(\hat O_2\), acting on \(H_1\) and \(H_2\) respectively, applied to a tensor product give

\[(\hat O_1 \otimes \hat O_2 )( \ket{\phi}_1\otimes \ket{\phi}_2 ) = (\hat O_1 \ket{\phi}_1) \otimes (\hat O_2 \ket{\phi}_2)\]
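The operator identity can likewise be checked with Kronecker products, assuming NumPy; the matrices and vectors are random illustrative stand-ins for \(\hat O_1\), \(\hat O_2\), \(\ket{\phi}_1\), \(\ket{\phi}_2\).

```python
import numpy as np

rng = np.random.default_rng(3)
O1, O2 = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
phi1, phi2 = rng.standard_normal(2), rng.standard_normal(3)

# (O1 ⊗ O2)(phi1 ⊗ phi2) ...
lhs = np.kron(O1, O2) @ np.kron(phi1, phi2)
# ... equals (O1 phi1) ⊗ (O2 phi2)
rhs = np.kron(O1 @ phi1, O2 @ phi2)
assert np.allclose(lhs, rhs)
```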

Solving Linear Equations

First, write down the augmented matrix for the system of equations.

Elementary row operations are allowed on the augmented matrix; row-reduce until the solutions can be read off.
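The procedure above can be sketched as Gaussian elimination on the augmented matrix, assuming NumPy; `solve_by_elimination` is an illustrative helper, not a library routine.

```python
import numpy as np

def solve_by_elimination(A, b):
    """Row-reduce the augmented matrix [A | b], then back-substitute."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot into row k
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        # Eliminate the entries below the pivot (elementary row operations)
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    # Back-substitution on the upper-triangular system
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (M[k, -1] - M[k, k + 1:n] @ x[k + 1:]) / M[k, k]
    return x

# System: 3x + y = 9, x + 2y = 8
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_by_elimination(A, b)
assert np.allclose(A @ x, b)  # x = [2, 3]
```

For production use, `np.linalg.solve` does the same job with an optimized LU factorization.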


© 2017, Lei Ma. Created with Sphinx.