# The transfer matrix method

The method we have just seen is an ad hoc solution that works only in this case, and it would be rather difficult to extend it in the presence of an external field. A more general method, which allows us to treat the case ${\displaystyle h\neq 0}$ and to compute other interesting properties, is the so-called transfer matrix method. It consists in defining an appropriate matrix related to the model such that all the thermodynamic properties of the system can be extracted from its eigenvalues. We are going to apply this method to the one-dimensional Ising model, but its validity is completely general; we will note each time whether we are stating general properties of the transfer matrix method or restricting to particular cases.

The Hamiltonian of a one-dimensional Ising model with periodic boundary conditions when an external field is present is such that:

${\displaystyle -\beta {\mathcal {H}}=K(S_{1}S_{2}+\cdots +S_{N-1}S_{N}+S_{N}S_{1})+h\sum _{i=1}^{N}S_{i}}$
where ${\displaystyle \beta {\mathcal {H}}}$ is sometimes called the reduced Hamiltonian. We now rewrite the partition function in the following "symmetric" way:
{\displaystyle {\begin{aligned}Z_{N}=\sum _{S_{1}=\pm 1}\cdots \sum _{S_{N}=\pm 1}\left[e^{KS_{1}S_{2}+{\frac {h}{2}}(S_{1}+S_{2})}\right]\left[e^{KS_{2}S_{3}+{\frac {h}{2}}(S_{2}+S_{3})}\right]\cdots \\\cdots \left[e^{KS_{N}S_{1}+{\frac {h}{2}}(S_{N}+S_{1})}\right]\end{aligned}}}
If we therefore define the transfer matrix ${\displaystyle {\boldsymbol {T}}}$ such that[1]:
${\displaystyle \langle S|{\boldsymbol {T}}|S'\rangle =e^{KSS'+{\frac {h}{2}}(S+S')}}$
we can write ${\displaystyle Z_{N}}$ as a product of the matrix elements of ${\displaystyle {\boldsymbol {T}}}$:
${\displaystyle Z_{N}=\sum _{S_{1}=\pm 1}\cdots \sum _{S_{N}=\pm 1}\langle S_{1}|{\boldsymbol {T}}|S_{2}\rangle \langle S_{2}|{\boldsymbol {T}}|S_{3}\rangle \cdots \langle S_{N}|{\boldsymbol {T}}|S_{1}\rangle }$
If we now choose ${\displaystyle |S_{i}\rangle }$ so that they are orthonormal, i.e.:
${\displaystyle \left|+1\right\rangle ={\begin{pmatrix}1\\0\end{pmatrix}}\qquad \qquad \quad \left|-1\right\rangle ={\begin{pmatrix}0\\1\end{pmatrix}}}$
then an explicit representation of ${\displaystyle {\boldsymbol {T}}}$ is:
${\displaystyle {\boldsymbol {T}}={\begin{pmatrix}e^{K+h}&e^{-K}\\e^{-K}&e^{K-h}\end{pmatrix}}}$
Note that the matrix elements of ${\displaystyle {\boldsymbol {T}}}$ are in one-to-one correspondence with the spin variables, and that the dimension of the transfer matrix depends on the number of possible values that they can assume. Now, since the vectors ${\displaystyle |S_{i}\rangle }$ are orthonormal we have:
${\displaystyle \sum _{S_{i}=\pm 1}|S_{i}\rangle \langle S_{i}|=\mathbb {I} }$
where ${\displaystyle \mathbb {I} }$ is the identity matrix, and ${\displaystyle Z_{N}}$ becomes:
${\displaystyle Z_{N}=\sum _{S_{1}=\pm 1}\langle S_{1}|{\boldsymbol {T}}^{N}|S_{1}\rangle =\operatorname {Tr} {\boldsymbol {T}}^{N}}$
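As a sanity check (not part of the original derivation), one can verify numerically that the brute-force partition sum over all configurations coincides with ${\displaystyle \operatorname {Tr} {\boldsymbol {T}}^{N}}$ for a small chain; a minimal sketch using NumPy, with illustrative values of the dimensionless couplings `K`, `h` and the chain length `N`:

```python
import numpy as np
from itertools import product

# Illustrative values for the dimensionless couplings and the chain length
K, h, N = 0.7, 0.3, 6

# Transfer matrix <S|T|S'> = exp(K S S' + h/2 (S + S')) in the |+1>, |-1> basis
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])

# Brute force: sum exp(-beta H) over all 2^N configurations (periodic b.c.)
Z_brute = 0.0
for spins in product([1, -1], repeat=N):
    exponent = sum(K * spins[i] * spins[(i + 1) % N] + h * spins[i]
                   for i in range(N))
    Z_brute += np.exp(exponent)

Z_trace = np.trace(np.linalg.matrix_power(T, N))
assert abs(Z_brute - Z_trace) < 1e-9 * Z_trace
```
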
This is the general purpose of the transfer matrix method: being able to write the partition function of a system as the trace of the ${\displaystyle N}$-th power of an appropriately defined matrix (the transfer matrix). Now, the trace can be easily computed if we diagonalize ${\displaystyle {\boldsymbol {T}}}$; if we call ${\displaystyle {\boldsymbol {T}}_{D}}$ the diagonalization of the transfer matrix we will have:
${\displaystyle {\boldsymbol {T}}_{D}={\boldsymbol {P}}^{-1}{\boldsymbol {TP}}}$
where ${\displaystyle {\boldsymbol {P}}}$ is an invertible matrix whose columns are the eigenvectors of ${\displaystyle {\boldsymbol {T}}}$. Since ${\displaystyle {\boldsymbol {P}}^{-1}{\boldsymbol {P}}=\mathbb {I} }$, the partition function becomes:
${\displaystyle Z_{N}=\operatorname {Tr} {\boldsymbol {T}}^{N}=\operatorname {Tr} ({\boldsymbol {P}}\underbrace {{\boldsymbol {P}}^{-1}{\boldsymbol {T}}{\boldsymbol {PP}}^{-1}\cdots {\boldsymbol {P}}^{-1}{\boldsymbol {T}}{\boldsymbol {P}}} _{N{\text{ times}}}{\boldsymbol {P}}^{-1})=\operatorname {Tr} \left({\boldsymbol {PT}}_{D}^{N}{\boldsymbol {P}}^{-1}\right)}$
and using the cyclic property of the trace[2] we get:
${\displaystyle Z_{N}=\operatorname {Tr} \left({\boldsymbol {T}}_{D}^{N}{\boldsymbol {P}}^{-1}{\boldsymbol {P}}\right)=\operatorname {Tr} {\boldsymbol {T}}_{D}^{N}}$
In the case of the one-dimensional Ising model we are considering, ${\displaystyle {\boldsymbol {T}}_{D}}$ is a ${\displaystyle 2\times 2}$ matrix, so it has two eigenvalues, which we call ${\displaystyle \lambda _{+}}$ and ${\displaystyle \lambda _{-}}$, with the convention ${\displaystyle \lambda _{+}>\lambda _{-}}$ (we could in principle also consider the case ${\displaystyle \lambda _{+}=\lambda _{-}}$, but we will shortly see why this is not necessary). We will therefore have:
${\displaystyle Z_{N}=\lambda _{+}^{N}+\lambda _{-}^{N}}$
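This identity can be checked numerically by comparing the trace of ${\displaystyle {\boldsymbol {T}}^{N}}$ with the sum of the ${\displaystyle N}$-th powers of the eigenvalues (a sketch with illustrative couplings, assuming NumPy):

```python
import numpy as np

# Illustrative couplings; T is symmetric, so eigvalsh applies
K, h, N = 0.7, 0.3, 12
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])

lam = np.linalg.eigvalsh(T)               # ascending: [lambda_-, lambda_+]
Z_eig = lam[1]**N + lam[0]**N             # lambda_+^N + lambda_-^N
Z_trace = np.trace(np.linalg.matrix_power(T, N))
assert abs(Z_eig - Z_trace) < 1e-9 * Z_trace
```
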
In general, if ${\displaystyle {\boldsymbol {T}}}$ is a ${\displaystyle (n+2)\times (n+2)}$ matrix whose eigenvalues are ${\displaystyle \lambda _{+}>\lambda _{-}>\lambda _{1}>\cdots >\lambda _{n}}$ we will have:
${\displaystyle Z_{N}=\lambda _{+}^{N}+\lambda _{-}^{N}+\sum _{i=1}^{n}\lambda _{i}^{N}}$
Let us note that the dimension of the transfer matrix increases if we consider longer-ranged interactions or if we allow the spin variables to assume more than two values[3]; clearly, the larger the matrix the harder its eigenvalues are to compute, but the principle is always the same.

We will now use the transfer matrix in order to compute some interesting properties of a generic system.

## Free energy

Considering a general situation, the partition function of a model can be written with the use of the transfer matrix as ${\textstyle Z_{N}=\lambda _{+}^{N}+\lambda _{-}^{N}+\sum _{i=1}^{n}\lambda _{i}^{N}}$. Therefore, in the thermodynamic limit the free energy of the system will be:

${\displaystyle f=-k_{B}T\lim _{N\to \infty }{\frac {1}{N}}\ln Z_{N}=-k_{B}T\lim _{N\to \infty }{\frac {1}{N}}\ln \left[\lambda _{+}^{N}\left(1+{\frac {\lambda _{-}^{N}}{\lambda _{+}^{N}}}+\sum _{i=1}^{n}{\frac {\lambda _{i}^{N}}{\lambda _{+}^{N}}}\right)\right]}$
Since ${\displaystyle \lambda _{+}>\lambda _{-}>\lambda _{i}}$ we have:
${\displaystyle \left({\frac {\lambda _{-}}{\lambda _{+}}}\right)^{N}{\stackrel {N\to \infty }{\longrightarrow }}0\qquad \qquad \;\left({\frac {\lambda _{i}}{\lambda _{+}}}\right)^{N}{\stackrel {N\to \infty }{\longrightarrow }}0}$
and therefore:
${\displaystyle f=-k_{B}T\ln \lambda _{+}}$
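This limit can be illustrated numerically: the finite-${\displaystyle N}$ free energy per site approaches ${\displaystyle -k_{B}T\ln \lambda _{+}}$ exponentially fast in ${\displaystyle N}$ (a sketch in units where ${\displaystyle k_{B}T=1}$, with illustrative couplings):

```python
import numpy as np

# Units k_B T = 1, so f = -ln(lambda_+); K, h are illustrative values
K, h = 0.5, 0.2
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])
lam_minus, lam_plus = np.linalg.eigvalsh(T)   # ascending order

f_inf = -np.log(lam_plus)
for N in (10, 50, 200):
    f_N = -np.log(lam_plus**N + lam_minus**N) / N
    # the gap closes like (lambda_-/lambda_+)^N / N
    print(N, abs(f_N - f_inf))

assert abs(-np.log(lam_plus**200 + lam_minus**200) / 200 - f_inf) < 1e-12
```
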
This is an extremely important result: the entire thermodynamics of the system can be obtained by knowing only the largest eigenvalue of the transfer matrix[4]. Furthermore, the fact that only ${\displaystyle \lambda _{+}}$ is involved in the expression of the free energy in the thermodynamic limit has a very important consequence on the possibility for phase transitions to occur. In fact, a theorem of linear algebra, the Perron-Frobenius theorem, states the following:

Theorem (Perron-Frobenius)

If ${\displaystyle {\boldsymbol {A}}}$ is an ${\displaystyle n\times n}$ square matrix (with ${\displaystyle n}$ finite) such that all its elements are positive, namely ${\displaystyle A_{ij}>0\quad \forall i,j}$, then the eigenvalue ${\displaystyle \lambda _{+}}$ with largest magnitude is:

1. real and positive
1. non-degenerate
1. an analytic function of the elements ${\displaystyle A_{ij}}$

We omit the proof of this theorem. It implies that if the transfer matrix of a model satisfies these hypotheses, the system can never exhibit phase transitions: since ${\displaystyle \lambda _{+}}$ is an analytic function of the matrix elements, ${\displaystyle f}$ is analytic as well.

For the one-dimensional Ising model with nearest neighbour interaction that we are considering, these properties are satisfied and so we have:

1. ${\displaystyle \lambda _{+}\neq 0}$, so that ${\displaystyle f}$ is well defined
1. ${\displaystyle \lambda _{+}\neq \lambda _{-}}$ (this justifies a posteriori why we have considered ${\displaystyle \lambda _{+}\neq \lambda _{-}}$ from the beginning)
1. ${\displaystyle \lambda _{+}}$ is analytic, and therefore so is ${\displaystyle f}$

From the last fact we deduce that no phase transition can occur for ${\displaystyle T\neq 0}$; if ${\displaystyle T=0}$ some of the elements of ${\displaystyle {\boldsymbol {T}}}$ diverge and the Perron-Frobenius theorem can't be applied[5].
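For the one-dimensional transfer matrix these properties are easy to verify numerically (an illustrative check, assuming NumPy; any finite ${\displaystyle K}$ and ${\displaystyle h}$ give strictly positive entries):

```python
import numpy as np

# At any finite temperature all entries of T are strictly positive,
# so the Perron-Frobenius theorem applies
K, h = 1.2, -0.4
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])
assert (T > 0).all()                       # hypothesis of the theorem

lam = np.linalg.eigvalsh(T)                # ascending order
assert lam[1] > 0                          # largest eigenvalue real and positive
assert lam[1] > abs(lam[0])                # and non-degenerate in magnitude
```
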

In general, in higher dimensions or with different kinds of interactions the transfer matrix can become infinite-dimensional in the thermodynamic limit: in this case the assumptions of Perron-Frobenius theorem don't hold and so the system can actually exhibit phase transitions (since ${\displaystyle \lambda _{+}}$ is not necessarily an analytic function any more).

## Correlation function and correlation length

The transfer matrix method can also be used to compute the correlation function and so the correlation length of a system. As we know (see Long range correlations) in order to do that we first have to compute the two-point correlation function.

The connected correlation function of two spins separated by ${\displaystyle R}$ sites is defined as:

${\displaystyle G_{R}=\left\langle S_{1}S_{R}\right\rangle -\left\langle S_{1}\right\rangle \left\langle S_{R}\right\rangle }$
where we have considered the first and the ${\displaystyle R}$-th spins because we are assuming periodic boundary conditions (so our choice is equivalent to considering two generic spins at sites ${\displaystyle i}$ and ${\displaystyle i+R}$). For very large distances, we know that the correlation function decays exponentially, namely ${\displaystyle G_{R}\sim e^{-R/\xi }}$ for ${\displaystyle R\to \infty }$, where ${\displaystyle \xi }$ is the correlation length. Therefore we can define the correlation length ${\displaystyle \xi }$ of the system as:
${\displaystyle \xi ^{-1}=\lim _{R\to \infty }\left(-{\frac {1}{R}}\ln G_{R}\right)}$

We begin by computing ${\displaystyle \left\langle S_{1}S_{R}\right\rangle }$; this is the thermodynamic limit of the quantity:

${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}={\frac {1}{Z_{N}}}\sum _{\lbrace S_{i}=\pm 1\rbrace }S_{1}S_{R}e^{-\beta {\mathcal {H}}}}$
Using the same factorization of the Hamiltonian that we have previously seen and that led to the expression of ${\displaystyle Z_{N}}$ in terms of the eigenvalues ${\displaystyle \lambda _{i}}$ of ${\displaystyle {\boldsymbol {T}}}$, we have:
{\displaystyle {\begin{aligned}\left\langle S_{1}S_{R}\right\rangle _{N}={\frac {1}{Z_{N}}}\sum _{\lbrace S_{i}=\pm 1\rbrace }S_{1}\langle S_{1}|{\boldsymbol {T}}|S_{2}\rangle \langle S_{2}|{\boldsymbol {T}}|S_{3}\rangle \cdots \\\cdots \langle S_{R-1}|{\boldsymbol {T}}|S_{R}\rangle S_{R}\langle S_{R}|{\boldsymbol {T}}|S_{R+1}\rangle \cdots \langle S_{N}|{\boldsymbol {T}}|S_{1}\rangle =\\={\frac {1}{Z_{N}}}\sum _{S_{1},S_{R}}S_{1}\langle S_{1}|{\boldsymbol {T}}^{R}|S_{R}\rangle S_{R}\langle S_{R}|{\boldsymbol {T}}^{N-R}|S_{1}\rangle \end{aligned}}}
Now, we can write:
${\displaystyle {\boldsymbol {T}}=\sum _{i}|t_{i}\rangle \lambda _{i}\langle t_{i}|}$
where ${\displaystyle |t_{i}\rangle }$ are the eigenvectors of ${\displaystyle {\boldsymbol {T}}}$ and ${\displaystyle \lambda _{i}}$ the corresponding eigenvalues (again ordered so that ${\displaystyle \lambda _{+}>\lambda _{-}>\lambda _{1}>\cdots }$); since these eigenvectors are orthonormal we also have:
${\displaystyle {\boldsymbol {T}}^{n}=\sum _{i}|t_{i}\rangle \lambda _{i}^{n}\langle t_{i}|}$
Therefore:
${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}={\frac {1}{Z_{N}}}\sum _{S_{1},S_{R}}\sum _{i,j}S_{1}\langle S_{1}|t_{i}\rangle \lambda _{i}^{R}\langle t_{i}|S_{R}\rangle S_{R}\langle S_{R}|t_{j}\rangle \lambda _{j}^{N-R}\langle t_{j}|S_{1}\rangle }$
Now, we introduce the matrices:
${\displaystyle {\boldsymbol {S_{i}}}=\sum _{S_{i}}|S_{i}\rangle S_{i}\langle S_{i}|}$
which are diagonal matrices with all the possible spin values of the ${\displaystyle i}$-th site on the diagonal. This way, moving ${\displaystyle \langle t_{j}|S_{1}\rangle }$ to the beginning of ${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}}$ (since it is simply a number) and summing over ${\displaystyle S_{1}}$ and ${\displaystyle S_{R}}$ we get:
${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}={\frac {1}{Z_{N}}}\sum _{i,j}\langle t_{j}|{\boldsymbol {S_{1}}}|t_{i}\rangle \lambda _{i}^{R}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{j}\rangle \lambda _{j}^{N-R}}$
and using the expression of ${\displaystyle Z_{N}}$ given in terms of the ${\displaystyle \lambda _{i}}$-s:
${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}={\frac {\sum _{i,j}\langle t_{j}|{\boldsymbol {S_{1}}}|t_{i}\rangle \lambda _{i}^{R}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{j}\rangle \lambda _{j}^{N-R}}{\sum _{k}\lambda _{k}^{N}}}}$
Multiplying and dividing by ${\displaystyle \lambda _{+}^{N}}$ we get:
${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}={\frac {\sum _{i,j}\langle t_{j}|{\boldsymbol {S_{1}}}|t_{i}\rangle (\lambda _{i}/\lambda _{+})^{R}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{j}\rangle (\lambda _{j}/\lambda _{+})^{N-R}}{\sum _{k}(\lambda _{k}/\lambda _{+})^{N}}}}$
In the thermodynamic limit the surviving terms are those with ${\displaystyle \lambda _{j}=\lambda _{+}}$ and ${\displaystyle \lambda _{k}=\lambda _{+}}$, while the whole sum over ${\displaystyle i}$ survives, since the factor ${\displaystyle (\lambda _{i}/\lambda _{+})^{R}}$ does not depend on ${\displaystyle N}$; therefore:
{\displaystyle {\begin{aligned}\left\langle S_{1}S_{R}\right\rangle =\lim _{N\to \infty }\left\langle S_{1}S_{R}\right\rangle _{N}=\sum _{i}\left({\frac {\lambda _{i}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{i}\rangle \langle t_{i}|{\boldsymbol {S_{R}}}|t_{+}\rangle =\qquad \qquad \qquad \;\\\qquad \qquad \quad =\langle t_{+}|{\boldsymbol {S_{1}}}|t_{+}\rangle \langle t_{+}|{\boldsymbol {S_{R}}}|t_{+}\rangle +\sum _{i\neq +}\left({\frac {\lambda _{i}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{i}\rangle \langle t_{i}|{\boldsymbol {S_{R}}}|t_{+}\rangle \end{aligned}}}
where we have used the symbolic notation "${\displaystyle i\neq +}$" in the sum to indicate that we are excluding the case ${\displaystyle \lambda _{i}=\lambda _{+}}$.

What we now want to show is that:

${\displaystyle \left\langle S_{i}\right\rangle =\lim _{N\to \infty }\left\langle S_{i}\right\rangle _{N}=\langle t_{+}|{\boldsymbol {S_{i}}}|t_{+}\rangle }$
Considering ${\displaystyle S_{i}=S_{R}}$ and proceeding like we have done for ${\displaystyle \left\langle S_{1}S_{R}\right\rangle _{N}}$, we get to:
{\displaystyle {\begin{aligned}\left\langle S_{R}\right\rangle _{N}={\frac {1}{Z_{N}}}\sum _{\lbrace S_{i}=\pm 1\rbrace }S_{R}e^{-\beta {\mathcal {H}}}={\frac {1}{Z_{N}}}\sum _{S_{1},S_{R}}\langle S_{1}|{\boldsymbol {T}}^{R}|S_{R}\rangle S_{R}\langle S_{R}|{\boldsymbol {T}}^{N-R}|S_{1}\rangle =\\={\frac {1}{Z_{N}}}\sum _{S_{1},S_{R}}\sum _{i,j}\langle t_{j}|S_{1}\rangle \langle S_{1}|t_{i}\rangle \lambda _{i}^{R}\langle t_{i}|S_{R}\rangle S_{R}\langle S_{R}|t_{j}\rangle \lambda _{j}^{N-R}=\\{\frac {1}{Z_{N}}}\sum _{i,j}\underbrace {\langle t_{j}|t_{i}\rangle } _{\delta _{ij}}\lambda _{i}^{R}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{j}\rangle \lambda _{j}^{N-R}\end{aligned}}}
and again, using the expression of ${\displaystyle Z_{N}}$ and multiplying and dividing by ${\displaystyle \lambda _{+}^{N}}$:
${\displaystyle \left\langle S_{R}\right\rangle _{N}={\frac {\sum _{i}(\lambda _{i}/\lambda _{+})^{R}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{i}\rangle (\lambda _{i}/\lambda _{+})^{N-R}}{\sum _{k}(\lambda _{k}/\lambda _{+})^{N}}}={\frac {\sum _{i}(\lambda _{i}/\lambda _{+})^{N}\langle t_{i}|{\boldsymbol {S_{R}}}|t_{i}\rangle }{\sum _{k}(\lambda _{k}/\lambda _{+})^{N}}}}$
Again, the only surviving term in the thermodynamic limit is that with ${\displaystyle \lambda _{i}=\lambda _{k}=\lambda _{+}}$, so indeed:
${\displaystyle \left\langle S_{R}\right\rangle =\lim _{N\to \infty }\left\langle S_{R}\right\rangle _{N}=\langle t_{+}|{\boldsymbol {S_{R}}}|t_{+}\rangle }$
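This identification can be checked numerically by comparing the finite-${\displaystyle N}$ trace formula with the matrix element on the leading eigenvector (a sketch with illustrative couplings; the residual terms are of order ${\displaystyle (\lambda _{-}/\lambda _{+})^{N}}$):

```python
import numpy as np

K, h, N = 0.5, 0.2, 100
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])
S = np.diag([1.0, -1.0])                   # spin operator in the |+1>, |-1> basis

lam, vecs = np.linalg.eigh(T)
t_plus = vecs[:, -1]                       # eigenvector of lambda_+

TN = np.linalg.matrix_power(T, N)
avg_N = np.trace(S @ TN) / np.trace(TN)    # <S_R>_N from the trace formula
assert abs(avg_N - t_plus @ S @ t_plus) < 1e-10
```
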

This way, ${\displaystyle \left\langle S_{1}S_{R}\right\rangle }$ becomes:

${\displaystyle \left\langle S_{1}S_{R}\right\rangle =\left\langle S_{1}\right\rangle \left\langle S_{R}\right\rangle +\sum _{i\neq +}\left({\frac {\lambda _{i}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{i}\rangle \langle t_{i}|{\boldsymbol {S_{R}}}|t_{+}\rangle }$
and thus the connected correlation function is:
${\displaystyle G_{R}=\sum _{i\neq +}\left({\frac {\lambda _{i}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{i}\rangle \langle t_{i}|{\boldsymbol {S_{R}}}|t_{+}\rangle }$

If we now take the limit ${\displaystyle R\to \infty }$, the leading term is the one with the largest eigenvalue among the ${\displaystyle \lambda _{i}}$ with ${\displaystyle i\neq +}$, i.e. ${\displaystyle \lambda _{-}}$, while all the others vanish. Therefore:

${\displaystyle G_{R}{\stackrel {R\to \infty }{\sim }}\left({\frac {\lambda _{-}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{-}\rangle \langle t_{-}|{\boldsymbol {S_{R}}}|t_{+}\rangle }$
and thus the correlation length will be such that:
${\displaystyle \xi ^{-1}=\lim _{R\to \infty }\left(-{\frac {1}{R}}\ln \left[\left({\frac {\lambda _{-}}{\lambda _{+}}}\right)^{R}\langle t_{+}|{\boldsymbol {S_{1}}}|t_{-}\rangle \langle t_{-}|{\boldsymbol {S_{R}}}|t_{+}\rangle \right]\right)=\ln {\frac {\lambda _{+}}{\lambda _{-}}}}$
(since ${\displaystyle \langle t_{+}|{\boldsymbol {S_{1}}}|t_{-}\rangle }$ and ${\displaystyle \langle t_{-}|{\boldsymbol {S_{R}}}|t_{+}\rangle }$ are just numbers). Therefore:
${\displaystyle \xi =\left(\ln {\frac {\lambda _{+}}{\lambda _{-}}}\right)^{-1}}$
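As an illustrative check, one can compute ${\displaystyle G_{R}}$ from the finite-${\displaystyle N}$ trace formulas and verify that it decays with rate ${\displaystyle \ln(\lambda _{+}/\lambda _{-})}$ (a sketch with illustrative couplings, assuming NumPy):

```python
import numpy as np

K, h, N = 0.5, 0.1, 200
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])
S = np.diag([1.0, -1.0])                   # spin operator in the |+1>, |-1> basis

Z = np.trace(np.linalg.matrix_power(T, N))
avg_S = np.trace(S @ np.linalg.matrix_power(T, N)) / Z

def G(R):
    """Connected correlation of two spins R sites apart (periodic chain)."""
    left = np.linalg.matrix_power(T, R)
    right = np.linalg.matrix_power(T, N - R)
    return np.trace(S @ left @ S @ right) / Z - avg_S**2

lam_minus, lam_plus = np.linalg.eigvalsh(T)
xi = 1.0 / np.log(lam_plus / lam_minus)    # xi = 1/ln(lambda_+/lambda_-)

# For 1 << R << N the ratio G(R+1)/G(R) approaches exp(-1/xi)
ratio = G(11) / G(10)
assert abs(ratio - np.exp(-1.0 / xi)) < 1e-6
```
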

## Explicit computations for the one-dimensional Ising model

We now want to apply what we have just shown in order to do some explicit computations on the one-dimensional Ising model. We have seen that the explicit expression of the transfer matrix for the one-dimensional Ising model is:

${\displaystyle {\boldsymbol {T}}={\begin{pmatrix}e^{K+h}&e^{-K}\\e^{-K}&e^{K-h}\end{pmatrix}}}$
Its eigenvalues can be determined, as usual, solving the equation ${\displaystyle \det({\boldsymbol {T}}-\lambda \mathbb {I} )=0}$, which yields:
${\displaystyle \lambda _{\pm }=e^{K}\left(\cosh h\pm {\sqrt {\sinh ^{2}h+e^{-4K}}}\right)}$
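The closed form can be verified against a direct numerical diagonalization (a sketch with illustrative couplings):

```python
import numpy as np

K, h = 0.8, 0.3
T = np.array([[np.exp(K + h), np.exp(-K)],
              [np.exp(-K),    np.exp(K - h)]])

root = np.sqrt(np.sinh(h)**2 + np.exp(-4 * K))
lam_plus = np.exp(K) * (np.cosh(h) + root)
lam_minus = np.exp(K) * (np.cosh(h) - root)

lam = np.linalg.eigvalsh(T)                # ascending order
assert abs(lam[1] - lam_plus) < 1e-10 * lam_plus
assert abs(lam[0] - lam_minus) < 1e-10 * lam_plus
```
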
Thus, the free energy of the system in the thermodynamic limit is:
${\displaystyle f(K,h)=-J-k_{B}T\ln \left(\cosh h+{\sqrt {\sinh ^{2}h+e^{-4K}}}\right)}$
while its magnetization (remembering that ${\displaystyle h=\beta H}$) is:
${\displaystyle m=-{\frac {\partial f}{\partial H}}=-{\frac {1}{k_{B}T}}{\frac {\partial f}{\partial h}}={\frac {\sinh h+{\frac {\sinh h\cosh h}{\sqrt {\sinh ^{2}h+e^{-4K}}}}}{\cosh h+{\sqrt {\sinh ^{2}h+e^{-4K}}}}}={\frac {\sinh h}{\sqrt {\sinh ^{2}h+e^{-4K}}}}}$
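Since ${\displaystyle m=\partial (\ln \lambda _{+})/\partial h}$, the closed form can be checked against a central finite difference of ${\displaystyle \ln \lambda _{+}}$ (a sketch with illustrative values, in units ${\displaystyle k_{B}=1}$):

```python
import numpy as np

K, h, eps = 0.6, 0.4, 1e-6

def log_lam_plus(hh):
    """ln(lambda_+) = K + ln(cosh h + sqrt(sinh^2 h + exp(-4K)))"""
    return K + np.log(np.cosh(hh) + np.sqrt(np.sinh(hh)**2 + np.exp(-4 * K)))

m_numeric = (log_lam_plus(h + eps) - log_lam_plus(h - eps)) / (2 * eps)
m_closed = np.sinh(h) / np.sqrt(np.sinh(h)**2 + np.exp(-4 * K))
assert abs(m_numeric - m_closed) < 1e-8
```
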
Let us note that for ${\displaystyle H\to 0}$ at fixed ${\displaystyle T}$, since ${\displaystyle \sinh h=\sinh(\beta H)\to 0}$, the magnetization ${\displaystyle m}$ vanishes: this again expresses the fact that the Ising model in one dimension does not exhibit a phase transition. Furthermore, we can see that in the limit ${\displaystyle T\to 0}$, namely ${\displaystyle \beta \to \infty }$, we have ${\displaystyle h,K\to \infty }$ and thus ${\displaystyle m\to 1}$: again, the only value of the temperature for which the one-dimensional Ising model with nearest-neighbour interactions exhibits a spontaneous magnetization is ${\displaystyle T=0}$.

The isothermal susceptibility of the system is:

${\displaystyle \chi _{T}={\frac {\partial m}{\partial H}}={\frac {1}{k_{B}T}}{\frac {\partial m}{\partial h}}}$
Instead of explicitly calculating ${\displaystyle \chi _{T}}$ for generic ${\displaystyle h}$, since we are interested in the behaviour of ${\displaystyle m}$ when there are no external fields, let us see what happens for small values of ${\displaystyle h}$ (more precisely, for ${\displaystyle h\ll e^{-2K}}$). Since in this case ${\displaystyle \sinh h\sim h}$ we have:
${\displaystyle m\sim {\frac {h}{\sqrt {h^{2}+e^{-4K}}}}\sim {\frac {h}{e^{-2K}}}=he^{2K}}$
and therefore:
${\displaystyle \chi _{T}\sim {\frac {e^{2K}}{k_{B}T}}}$
For high and low temperatures, we get:
${\displaystyle \chi _{T}{\stackrel {T\to \infty }{\sim }}{\frac {1}{k_{B}T}}\quad \qquad \chi _{T}{\stackrel {T\to 0}{\sim }}{\frac {e^{\frac {2J}{k_{B}T}}}{k_{B}T}}}$
and as we can see ${\displaystyle \chi _{T}}$ diverges exponentially for ${\displaystyle T\to 0}$; this agrees with the fact that (as already previously stated) for ${\displaystyle T\to 0}$ some elements of the transfer matrix diverge and thus the Perron-Frobenius theorem can't be applied.
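The linear-response regime used above can be verified directly from the exact expression for ${\displaystyle m}$ (an illustrative check; the approximation holds for ${\displaystyle h\ll e^{-2K}}$):

```python
import numpy as np

K = 1.0
h = 1e-4 * np.exp(-2 * K)                  # well inside the linear regime
m = np.sinh(h) / np.sqrt(np.sinh(h)**2 + np.exp(-4 * K))
assert abs(m / (h * np.exp(2 * K)) - 1) < 1e-6   # m ~ h * exp(2K)
```
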

Considering now the correlation length, we have:

${\displaystyle \xi ^{-1}=-\ln \left({\frac {\cosh h-{\sqrt {\sinh ^{2}h+e^{-4K}}}}{\cosh h+{\sqrt {\sinh ^{2}h+e^{-4K}}}}}\right)}$
In the particular case ${\displaystyle H=0}$:
${\displaystyle \xi _{H=0}^{-1}=-\ln {\frac {1-e^{-2K}}{1+e^{-2K}}}=-\ln \tanh K=\ln {\frac {1}{\tanh K}}\quad \Rightarrow \quad \xi _{H=0}=-{\frac {1}{\ln(\tanh K)}}}$
Now, in the limit ${\displaystyle T\to 0}$, namely ${\displaystyle K\to \infty }$, the following asymptotic expansion holds:
${\displaystyle \tanh K=1-2e^{-2K}+O(e^{-4K})}$
Therefore:
${\displaystyle \xi _{H=0}\sim -{\frac {1}{\ln(1-2e^{-2K})}}\sim {\frac {1}{2e^{-2K}}}={\frac {e^{\frac {2J}{k_{B}T}}}{2}}}$
and so again we find an exponential divergence for ${\displaystyle T\to 0}$. On the other hand, for ${\displaystyle T\to \infty }$, namely ${\displaystyle K\to 0}$:
${\displaystyle \tanh K\sim K\quad \Rightarrow \quad \ln(\tanh K){\stackrel {K\to 0}{\sim }}\ln K}$
Thus:
${\displaystyle \xi _{H=0}\sim -{\frac {1}{\ln K}}\quad \Rightarrow \quad \xi _{H=0}{\stackrel {K\to 0}{\longrightarrow }}0}$
as expected[6].
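Both limits of the zero-field correlation length can be checked numerically from ${\displaystyle \xi ^{-1}=\ln(\lambda _{+}/\lambda _{-})}$, which at ${\displaystyle h=0}$ reduces to ${\displaystyle \ln(\coth K)}$ (an illustrative sketch):

```python
import numpy as np

def xi_zero_field(K):
    """xi = 1/ln(lambda_+/lambda_-) at h = 0, i.e. 1/ln(coth K)."""
    return 1.0 / np.log(1.0 / np.tanh(K))

# Low temperature (large K): xi ~ exp(2K)/2, diverging exponentially
assert abs(xi_zero_field(8.0) / (np.exp(16.0) / 2) - 1) < 1e-5
# High temperature (small K): xi -> 0
assert xi_zero_field(1e-3) < 0.2
```
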

1. We symbolically use Dirac's bra-ket notation.
2. Namely, ${\displaystyle \operatorname {Tr} (ABC)=\operatorname {Tr} (BCA)=\operatorname {Tr} (CAB)}$.
3. In this case (which will be studied later on, see Bragg-Williams approximation for the Potts model) the model is called Potts model; if the spin variables can assume ${\displaystyle q}$ different values, then the transfer matrix of a one-dimensional Potts model will be ${\displaystyle q\times q}$.
4. This is a blessing also from a computational point of view: it often happens, in fact, that the exact expression of the transfer matrix can be obtained but it is too big or complicated to diagonalize completely. There are however several algorithms that allow one to compute only the largest eigenvalue of a matrix in a rather efficient way.
5. This agrees with what we have noted in Bulk free energy, thermodynamic limit and absence of phase transitions about the fact that a "phase transition" occurs in the one-dimensional Ising model for ${\displaystyle T=0}$.
6. Remember that in general correlation lengths are negligible for large temperatures and become relevant near a critical point.