The Adjacency Matrix
Let \(G\) be a graph with vertex set \(V=\{v_1,v_2,\ldots,v_n\}\). The adjacency matrix of \(G\) is the \(n\times n\) matrix \(\bs{A} = \bs{A}(G)\) whose \((i,j)\) entry is \[ \bs{A}(i,j) = \begin{cases} 1, & \text{if \(v_i\sim v_j\)}\\[2ex] 0, & \text{ otherwise.}\end{cases} \] Since \(v_i\sim v_j\) if and only if \(v_j \sim v_i\), it follows that \(\bs{A}(i,j) = \bs{A}(j,i)\), and therefore \(\bs{A}\) is a symmetric matrix, that is, \(\bs{A}^T = \bs{A}\). By definition, the indices of the non-zero entries of the \(i\)th row of \(\bs{A}\) correspond to the neighbors of vertex \(v_i\). Similarly, the non-zero indices of the \(i\)th column of \(\bs{A}\) are the neighbors of vertex \(v_i\). It follows that the degree of \(v_i\) is the sum of the \(i\)th row (or \(i\)th column) of \(\bs{A} \), that is, \[ \deg(v_i) = \sum_{j=1}^n \bs{A}(i,j) = \sum_{j=1}^n \bs{A}(j, i). \] If we denote the column vector of all ones by \(\bs{e} = (1,1,\ldots,1)\), then \[ \bs{A}\bs{e} = \begin{bmatrix} \deg(v_1) \\ \deg(v_2) \\ \vdots\\ \deg(v_n)\end{bmatrix}. \] We will call \(\bs{A}\bs{e}\) the degree vector of \(G\). We note that, after a possible permutation of the vertices, \(\bs{A}\bs{e}\) is equal to the degree sequence of \(G\).
Consider the graph \(G=(V,E)\) with \(V=\set{v_1,v_2,v_3,v_4,v_5}\) and edge set \(E=\set{v_1v_2,v_1v_3,v_2v_3,v_3v_4,v_3v_5}\). The adjacency matrix of \(G\) is
\[
\bs{A} = \begin{bmatrix}
0 & 1 & 1 & 0 & 0\\
1 & 0 & 1 & 0 & 0\\
1 & 1 & 0 & 1 & 1\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 0
\end{bmatrix}.
\]
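For readers who want to experiment, here is a minimal computational sketch (in Python with NumPy, which the text does not otherwise assume; the variable names are ours) that builds this adjacency matrix, checks the symmetry \(\bs{A}^T=\bs{A}\), and computes the degree vector \(\bs{A}\bs{e}\):

```python
import numpy as np

# Adjacency matrix of the example graph on v1, ..., v5
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

e = np.ones(5, dtype=int)        # the all-ones vector
assert (A == A.T).all()          # A is symmetric
print(A @ e)                     # degree vector A e: [2 2 4 1 1]
```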
One of the first applications of the adjacency matrix of a graph \(G\) is to count walks in \(G\). A walk from vertex \(u\) to vertex \(v\) (where \(u\) and \(v\) are not necessarily distinct) is a sequence of vertices \((w_0, w_1, \ldots, w_k)\), not necessarily distinct, such that \(w_{i-1}\sim w_i\) for \(i=1,\ldots,k\), and \(w_0 = u\) and \(w_k = v\). In this case, the walk has length \(k\). In the case that \(u=v\), the walk is said to be a closed walk. A walk in which all the vertices are distinct is a path, and a cycle is a closed walk in which the only repeated vertices are the end-vertices of the walk. A closed walk of length three in a graph \(G\) implies that \(G\) contains \(K_3=C_3\) as a subgraph. For obvious reasons, \(K_3\) is called a triangle.
For any graph \(G\) with vertex set \(V=\{v_1,v_2,\ldots,v_n\}\), the \((i,j)\) entry of \(\bs{A}^k\) is the number of walks from \(v_i\) to \(v_j\) of length \(k\).
The proof is by induction on \(k\). For \(k=1\), \(\bs{A}(i,j) = 1\) implies that \(v_i\sim v_j\) and then clearly there is a walk of length \(k=1\) from \(v_i\) to \(v_j\). If on the other hand \(\bs{A}(i,j) = 0\) then \(v_i\) and \(v_j\) are not adjacent and then clearly there is no walk of length \(k=1\) from \(v_i\) to \(v_j\). Now assume that the claim is true for some \(k\geq 1\) and consider the number of walks of length \(k+1\) from \(v_i\) to \(v_j\). Any walk of length \(k+1\) from \(v_i\) to \(v_j\) contains a walk of length \(k\) from \(v_i\) to a neighbor of \(v_j\). If \(v_p\in N(v_j)\) then by induction the number of walks of length \(k\) from \(v_i\) to \(v_p\) is \(\bs{A}^k (i, p)\). Hence, the total number of walks of length \(k+1\) from \(v_i\) to \(v_j\) is
\[
\sum_{v_p \in N(v_j)} \bs{A}^k(i, p) = \sum_{\ell=1}^n \bs{A}^k(i, \ell) \bs{A}(\ell, j) = \bs{A}^{k+1}(i,j).
\]
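The theorem is easy to check numerically on the example graph above; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)
print(A2[0, 1])  # walks of length 2 from v1 to v2: 1 (namely v1, v3, v2)
print(A3[0, 0])  # closed walks of length 3 at v1: 2 (around the triangle v1 v2 v3)
```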
The trace of a matrix \(\bs{M}\) is the sum of its diagonal entries and will be denoted by \(\tr(\bs{M})\):
\[
\tr(\bs{M}) = \sum_{i=1}^n \bs{M}(i,i).
\]
Since all the diagonal entries of an adjacency matrix \(\bs{A}\) are zero, we have \(\tr(\bs{A}) = 0\).
Let \(G\) be a graph with adjacency matrix \(\bs{A}\). Let \(m\) be the number of edges in \(G\), let \(t\) be the number of triangles in \(G\), and let \(q\) be the number of 4-cycles in \(G\). Then
\begin{align*}
\tr(\bs{A}^2) &= 2m\\
\tr(\bs{A}^3) &= 6t\\
\tr(\bs{A}^4) &= 8 q - 2m + 2 \sum_{i=1}^n\deg(v_i)^2
\end{align*}
The entry \(\bs{A}^2(i,i)\) is the number of closed walks from \(v_i\) of length \(k=2\). A closed walk of length \(k=2\) from \(v_i\) consists of moving to a neighbor of \(v_i\) and then returning to \(v_i\). Hence, \(\bs{A}^2(i,i) = \deg(v_i)\) and therefore \[\tr(\bs{A}^2) = \sum_{i=1}^n \bs{A}^2(i,i) = \sum_{i=1}^n \deg(v_i) = 2m.\] To prove the second statement, we begin by noting that every closed walk of length \(k=3\) traverses a triangle, and that a closed walk can be traversed in two different ways. Hence, for each vertex \(v\) in a triangle, there are two walks of length \(k=3\) that start at \(v\) and traverse the triangle. And since each triangle contains three distinct vertices, each triangle in a graph accounts for six closed walks of length \(k=3\). Since \(\sum_{i=1}^n \bs{A}^3(i,i)\) counts all closed walks in \(G\) of length three, we have
\[
\tr(\bs{A}^3) = \sum_{i=1}^n \bs{A}^3(i,i) = 6t.
\]
Now consider \(\tr(\bs{A}^4) = \sum_{i=1}^n \bs{A}^4(i,i)\). We count the number of closed walks of length \(k=4\) from \(v_i\). There are 3 types of such walks: (1) closed walks of the form \((v_i, x, v_i, y, v_i)\) where \(x, y \in N(v_i)\); the number of such walks is \(\deg(v_i)^2\) since we have \(\deg(v_i)\) choices for \(x\) and \(\deg(v_i)\) choices for \(y\). (2) Closed walks of the form \((v_i, v_j, x, v_j, v_i)\) where \(v_j \in N(v_i)\) and \(x\in N(v_j)\backslash\hspace{-0.3em}\{v_i\}\); the number of such walks is \(\sum_{v_j \sim v_i} (\deg(v_j) - 1)\). (3) Walks that traverse a 4-cycle containing \(v_i\); if \(q_i\) denotes the number of 4-cycles containing \(v_i\), then there are \(2q_i\) such walks since each 4-cycle through \(v_i\) can be traversed in two directions. Hence,
\[
\bs{A}^4(i,i) = 2q_i + \deg(v_i)^2 + \sum_{v_j\sim v_i} (\deg(v_j)-1)
\]
Therefore,
\begin{align*}
\tr(\bs{A}^4) &= \sum_{i=1}^n \left( 2q_i + \deg(v_i)^2 + \sum_{v_j\sim v_i} (\deg(v_j)-1) \right)\\
&= 8 q + \sum_{i=1}^n\left(\deg(v_i)^2 - \deg(v_i) + \sum_{v_j\sim v_i} \deg(v_j)\right)\\
&= 8 q - 2m + \sum_{i=1}^n \deg(v_i)^2 + \sum_{i=1}^n \sum_{v_j\sim v_i} \deg(v_j)\\
&= 8 q - 2m + \sum_{i=1}^n \deg(v_i)^2 + \sum_{i=1}^n \deg(v_i)^2\\
&= 8 q - 2m + 2\sum_{i=1}^n \deg(v_i)^2
\end{align*}
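The three trace formulas can be verified on the running example graph, which has \(m=5\) edges, \(t=1\) triangle, and \(q=0\) four-cycles. A minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])
deg = A.sum(axis=1)
m = A.sum() // 2                                    # 5 edges

tr = lambda k: np.trace(np.linalg.matrix_power(A, k))
print(tr(2) == 2 * m)                               # True: tr(A^2) = 2m = 10
print(tr(3) == 6 * 1)                               # True: tr(A^3) = 6t = 6
# solving the third identity for q confirms q = 0 four-cycles:
print((tr(4) + 2 * m - 2 * (deg ** 2).sum()) // 8)  # 0
```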
Show that the total number of walks of length \(k\) in a graph \(G\) with adjacency matrix \(\bs{A}\) is \(\bs{e}^T\bs{A}^k\bs{e}\).
We also obtain the following as a corollary.
A graph \(G\) with \(n\geq 2\) vertices is connected if and only if the off-diagonal entries of the matrix
\[
\bs{B} = \bs{A} + \bs{A}^2 + \cdots + \bs{A}^{n-1}
\]
are all positive. In fact,
\[
d(v_i,v_j) = \min\{k\;|\; \bs{A}^k(i,j) \gt 0\}.
\]
We first note that for any \(k\geq 1\), all the entries of \(\bs{A}^k\) are non-negative and therefore if \(\bs{A}^k(i,j) \gt 0\) for some \(k\in\set{1,2,\ldots,n-1}\) then \(\bs{B}(i,j) \gt 0\).
Assume first that \(G\) is connected. Then for distinct vertices \(v_i\neq v_j\) we have that \(1\leq d(v_i, v_j) \leq n-1\) since there is a path from \(v_i\) to \(v_j\). Therefore, if \(k=d(v_i,v_j)\) then \(\bs{A}^k(i,j) \gt 0\) and then also \(\bs{B}(i,j) \gt 0\). Hence, all off-diagonal entries of \(\bs{B}\) are positive.
Now assume that all off-diagonal entries of \(\bs{B}\) are positive. Let \(v_i\) and \(v_j\) be arbitrary distinct vertices. Since \(\bs{B}(i,j) \gt 0\), there is a minimum \(k\in\{1,\ldots,n-1\}\) such that \(\bs{A}^k(i,j) \gt 0\). Therefore, there is a walk of length \(k\) from \(v_i\) to \(v_j\). We claim that every such walk is a path. Suppose to the contrary that \(\gamma=(w_0, w_1, \ldots, w_k)\) is a walk (but not a path) from \(v_i\) to \(v_j\) of length \(k\). If \(v\) is a repeated vertex in \(\gamma\), say \(w_a = v\) and \(w_b = v\) for \(a \lt b\), then we may delete the vertices \(w_{a+1}, w_{a+2}, \ldots, w_b\) from \(\gamma\) and still obtain a walk from \(v_i\) to \(v_j\). We can continue this process of deleting vertices from \(\gamma\) to obtain a \(v_i-v_j\) walk with no repeated vertices, that is, a \(v_i-v_j\) path. This path has length less than \(k\), which contradicts the minimality of \(k\). This proves the claim, and hence all \(v_i-v_j\) walks of length \(k\) are paths from \(v_i\) to \(v_j\). This proves that \(G\) is connected.
In the proof of the previous corollary we proved the following.
Every walk from \(u\) to \(v\) contains a path from \(u\) to \(v\).
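The distance formula \(d(v_i,v_j) = \min\{k\;|\;\bs{A}^k(i,j) \gt 0\}\) translates directly into code. Here is a minimal sketch (assuming NumPy; the function name is ours, and vertices are 0-indexed) that computes distances by accumulating powers of \(\bs{A}\):

```python
import numpy as np

def distance(A, i, j):
    """d(v_i, v_j) = min{k : A^k(i,j) > 0}, found by accumulating powers of A.
    Returns infinity if no walk of length <= n-1 exists, i.e. the vertices
    lie in different components."""
    n = A.shape[0]
    Ak = np.eye(n, dtype=int)
    for k in range(1, n):
        Ak = Ak @ A
        if Ak[i, j] > 0:
            return k
    return float("inf")

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])
print(distance(A, 0, 4))   # 2: the shortest v1-v5 path is v1, v3, v5
```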
Let \(V=\{v_1,v_2,\ldots,v_n\}\). How do you obtain the adjacency matrix of \(G-v_i\) given the adjacency matrix of \(G\)?
Recall that for a graph \(G\) we denote its complement by \(\overline{G}\). Below we give a relationship between the adjacency matrices of \(G\) and \(\overline{G}\).
For any graph \(G\) it holds that
\[
\bs{A}(G) + \bs{A}(\overline{G}) + \bs{I} = \bs{J}.
\]
where \(\bs{I}\) is the \(n\times n\) identity matrix and \(\bs{J}\) is the \(n\times n\) matrix whose entries are all \(1\).
Let \(\bs{A}=\bs{A}(G)\) and let \(\bar{\bs{A}} = \bs{A}(\overline{G})\). For \(i\neq j\), if \(\bs{A}(i,j) = 0\) then \(\bar{\bs{A}}(i,j)=1\), and vice-versa. Therefore, \(\bs{A}(i,j) + \bar{\bs{A}}(i,j) = 1\) for all \(i\neq j\). On the other hand, \(\bs{A}(i,i) = \bar{\bs{A}}(i,i) = 0\) for all \(i\). Thus \(\bs{A}(G)+\bs{A}(\overline{G}) + \bs{I} = \bs{J}\) as claimed.
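A quick numerical confirmation of the lemma on the running example graph (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])
n = A.shape[0]
I, J = np.eye(n, dtype=int), np.ones((n, n), dtype=int)

A_bar = J - I - A                      # adjacency matrix of the complement
assert (A + A_bar + I == J).all()      # the identity A(G) + A(G-bar) + I = J
```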
Exercises
Provide the adjacency matrix for each of the following graphs.
- The path graph \(P_8\) where the vertices are labelled in increasing order from one end to the other along the path.
- The cycle graph \(C_7\) where the vertices are labelled around the cycle in increasing order.
- The complete graph \(K_n\) with vertices labelled in any way. (Do this for small \(n\) and then write the general form of \(\bs{A}(K_n)\).)
- The graph \((P_2\vee K_2)\oplus P_2\).
Consider the complete bipartite graph \(K_{n,m}\) where \(X\) and \(Y\) are the parts of the bipartition. Suppose that \(X=\{v_1,v_2,\ldots,v_n\}\) and \(Y=\{v_{n+1},v_{n+2},\ldots, v_{n+m}\}\). What is the form of the adjacency matrix of \(K_{n,m}\)? Try this for small \(n, m\), say \(n=3\) and \(m=4\), and then generalize.
Consider the following recursively defined sequence of graphs:
\begin{align*}
G_1 &= K_1 \\
G_2 &= G_1 \vee K_1\\
G_3 &= G_2 \oplus K_1\\
G_4 &= G_3 \vee K_1\\
G_5 &= G_4 \oplus K_1
\end{align*}
and in general \(G_{k} = G_{k-1} \oplus K_1\) if \(k\geq 3\) is odd and \(G_{k} = G_{k-1} \vee K_1\) if \(k\geq 2\) is even.
- Consider the graph \(G_4\). Label the vertices of \(G_4\) using \(v_1, v_2,v_3,v_4\) and so that \(\deg(v_i) \leq \deg(v_{i+1})\). Using this labelling of the vertices, write out the adjacency matrix of \(G_4\).
- Do the same as in part (a) with \(G_6\).
- Do the same as in part (a) with \(G_8\).
- For general even \(k\), what is the general form of the adjacency matrix of \(G_k\)?
For each case, draw the graph with the given adjacency matrix.
\[
\text{(a)} \bs{A} =
\begin{bmatrix}
0 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\
1 & 0 & 1 & 1 & 0 & 1 & 0 & 0\\
1 & 1 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{bmatrix}
\qquad\text{(b)}
\bs{A} = \begin{bmatrix}
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
0&0&0&0&0&0&0&1\\
1&1&1&1&1&1&1&0
\end{bmatrix}
\]
Consider the cycle graph \(C_6\) with vertices \(V(C_6)=\{v_1,v_2,\ldots,v_6\}\) and so that \(v_i\sim v_{i+1}\) and \(v_1\sim v_6\). Prove that if \(k\) is even then \(\bs{A}^k(v_1,v_4)=0\). (Hint: \(C_6\) is bipartite.)
Let \(\bs{A}_1\) and \(\bs{A}_2\) be the adjacency matrices of \(G_1\) and \(G_2\), respectively.
- What is the adjacency matrix of \(G_1 \oplus G_2\) in terms of \(\bs{A}_1\) and \(\bs{A}_2\)?
- What is the adjacency matrix of \(G_1 \vee G_2\) in terms of \(\bs{A}_1\) and \(\bs{A}_2\)?
Let \(\bs{B}_k = \bs{A} + \bs{A} ^2 + \cdots + \bs{A} ^k\) for \(k\geq 1\). Recall that the diameter of a graph \(G\), denoted by \(\diam(G)\), is the maximum distance among all pairs of vertices in \(G\). Prove that if \(G\) is connected then the smallest integer \(m\) such that all the off-diagonal entries of \(\bs{B}_m\) are positive is the diameter of \(G\).
Let \(G\) be an \(r\)-regular graph with \(n\) vertices and adjacency matrix \(\bs{A}\). Prove that the total number of walks of length \(k\geq 1\) in \(G\) is \(n r^k\).
The coefficients and roots of a polynomial
As mentioned at the beginning of this chapter, the eigenvalues of the adjacency matrix of a graph contain valuable information about the structure of the graph and we will soon see examples of this. Recall that the eigenvalues of a matrix are the roots of its characteristic polynomial and the coefficients of a polynomial depend in a polynomial way on its roots. For example, expanding the polynomial \(g(t) = (t-\lambda_1)(t-\lambda_2)\) we obtain \[ g(t) = t^2 - (\lambda_1+\lambda_2) t + \lambda_1\lambda_2 \] from which we see that the coefficient of \(t\) and the constant term of \(g(t)\) are polynomial expressions in the roots \(\lambda_1\) and \(\lambda_2\). If we define the polynomials \(s_1(x,y) = x+y\) and \(s_2(x,y) = xy\) then \[ g(t) = t^2 - s_1(\lambda_1,\lambda_2) t + s_2(\lambda_1,\lambda_2). \] How about a cubic polynomial? Consider then \(g(t)=(t-\lambda_1)(t-\lambda_2)(t-\lambda_3)\) and expand: \[ g(t) = t^3 - (\lambda_1+\lambda_2+\lambda_3) t^2 + (\lambda_1\lambda_2+\lambda_1\lambda_3+\lambda_2\lambda_3)t - \lambda_1\lambda_2\lambda_3. \] Thus, if we define the polynomials \(s_1(x,y,z) = x+y+z\), \(s_2(x,y,z) = xy + xz + yz\), and \(s_3(x,y,z) = xyz\), then \[ g(t) = t^3 - s_1(\lambda_1,\lambda_2,\lambda_3) t^2 + s_2(\lambda_1,\lambda_2,\lambda_3) t - s_3(\lambda_1,\lambda_2,\lambda_3) \] and again we see that the coefficients of \(g(t)\) are polynomial expressions in the roots \(\lambda_1,\lambda_2,\lambda_3\). To deal with the general \(n\)th order polynomial, let us introduce some terminology and notation. A multivariate polynomial is a polynomial in more than one variable. Examples of polynomials in the variables \(x,y\) are \[ f(x,y) = x^2-xy+7y^2, \qquad f(x,y) = -8y^5 - xy^2 -2y^3 + xy^{32} \] and examples of polynomials in the variables \(x,y,z\) are \[ f(x,y,z) = x^2 + xy^2z - 44 z^3,\qquad f(x,y,z) = 11xyz. \] We will only consider polynomials with rational coefficients and we will use \(\mathbb{Q}[x_1,x_2,\ldots,x_n]\) to denote the set of all polynomials in the variables \(x_1,x_2,\ldots,x_n\) with rational coefficients. For example, the polynomial \[ f(x_1,x_2,x_3,x_4) = 4 - 6x_1^3 x_2 x_3^2 x_4^5-33x_1x_2x_3x_4 + \tfrac{5}{2}x_1x_2^2 x_4 \] is an element of \(\mathbb{Q}[x_1,x_2,x_3,x_4]\). We now introduce a set of \(n\) particularly important polynomials.
Let \(I=\set{1,2,\ldots,n}\) and for \(1\leq k \leq n\) let \(\binom{I}{k}\) denote the set of all \(k\)-element subsets of \(I\). The \(k\)th elementary symmetric polynomial in the variables \(x_1,x_2,\ldots,x_n\) is defined by
\[
s_k(x_1,x_2,\ldots,x_n) = \sum_{\set{i_1,i_2,\ldots,i_k}\in \binom{I}{k} } x_{i_1} x_{i_2} \cdots x_{i_k}.
\]
In Definition 2.2.1, the notation \(\sum_{\set{i_1,i_2,\ldots,i_k}\in \binom{I}{k}}\) means that the sum is over all \(k\)-element subsets of \(I\). The number of elements in \(\binom{I}{k}\) is \(\binom{n}{k}\) and thus \(s_k\) is the sum of \(\binom{n}{k}\) monomials of the form \(x_{i_1}x_{i_2}\ldots x_{i_k}\). We call \(s_k\) the \(k\)th elementary symmetric polynomial in \(n\) variables. A few examples of \(s_k\) are
\begin{align*}
s_1(x_1,x_2,\ldots,x_n) &= x_1+x_2+x_3+\cdots+x_n\\[2ex]
s_2(x_1,x_2,\ldots,x_n) &= x_1x_2 + x_1x_3+\cdots+x_1x_n + x_2x_3+\cdots+x_{n-1}x_n\\[2ex]
s_n(x_1,x_2,\ldots,x_n) &= x_1x_2\cdots x_n
\end{align*}
If it is necessary to emphasize that \(s_k\) is the \(k\)th elementary symmetric polynomial in \(n\) variables then we use the notation \(s^n_k\) but note that the superscript \(n\) is not an exponent but there only to indicate the number of variables.
For \(n=3\), the elementary symmetric polynomials are
\begin{equation}
\begin{aligned}
s_1(x_1,x_2,x_3) &= x_1+x_2+x_3\\
s_2(x_1,x_2,x_3) &= x_1x_2 + x_1x_3 + x_2x_3\\
s_3(x_1,x_2,x_3) &= x_1x_2x_3.
\end{aligned}
\end{equation}
and for \(n=4\) the elementary symmetric polynomials are
\begin{equation}\label{eqn:sym-poly4}
\begin{aligned}
s_1(x_1,x_2,x_3,x_4) &= x_1+x_2+x_3+x_4\\
s_2(x_1,x_2,x_3,x_4) &= x_1x_2 + x_1x_3 +x_1x_4 +x_2x_3+x_2x_4 + x_3x_4\\
s_3(x_1,x_2,x_3,x_4) &= x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4\\
s_4(x_1,x_2,x_3,x_4) &= x_1x_2x_3x_4.
\end{aligned}
\end{equation}
For \(n=7\), there are \(\binom{7}{5} = 21\) five-element subsets of \(\{1,2,\ldots,7\}\), and thus \(s_5(x_1,x_2,\ldots,x_7)\) is the sum of \(21\) monomials:
\begin{align*}
s_5(x_1,x_2,\ldots,x_7) &= x_1x_2x_3x_4x_5+x_1x_2x_3x_4x_6+x_1x_2x_3x_4x_7+x_1x_2x_3x_5x_6\\
&\;+x_1x_2x_3x_5x_7+x_1x_2x_3x_6x_7+x_1x_2x_4x_5x_6+x_1x_2x_4x_5x_7\\
&\;+x_1x_2x_4x_6x_7+x_1x_2x_5x_6x_7+x_1x_3x_4x_5x_6+x_1x_3x_4x_5x_7 \\
&\;+x_1x_3x_4x_6x_7+x_1x_3x_5x_6x_7+x_1x_4x_5x_6x_7+x_2x_3x_4x_5x_6\\
&\;+x_2x_3x_4x_5x_7+x_2x_3x_4x_6x_7+x_2x_3x_5x_6x_7+x_2x_4x_5x_6x_7\\
&\;+x_3x_4x_5x_6x_7.
\end{align*}
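Since \(s_k\) is a sum over \(k\)-element subsets, it is straightforward to evaluate with a subset iterator. Here is a minimal sketch assuming Python's standard library (the helper name `elem_sym` is ours):

```python
from itertools import combinations
from math import comb, prod

def elem_sym(k, xs):
    """k-th elementary symmetric polynomial evaluated at the numbers xs."""
    return sum(prod(xs[i] for i in subset)
               for subset in combinations(range(len(xs)), k))

# s_5 in 7 variables is a sum of C(7,5) = 21 monomials:
print(comb(7, 5))                # 21
print(elem_sym(5, [1] * 7))      # evaluating at (1,...,1) counts the monomials: 21
```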
We now describe a natural way in which the elementary symmetric polynomials arise. Introduce a new variable \(t\) and consider the polynomial
\[
g(t) = (t-\lambda_1)(t-\lambda_2)\cdots (t-\lambda_n).
\]
Hence, \(\lambda_1,\lambda_2,\ldots,\lambda_n\) are the roots of the polynomial \(g(t)\) since \(g(\lambda_i) = 0\) for all \(i=1,2,\ldots,n\). Expanding the right hand side, we now show by induction that
\begin{equation}\label{eqn:g-poly}
g(t) = t^n - s_1 t^{n-1} + s_2 t^{n-2} + \cdots + (-1)^k s_k t^{n-k} + \cdots + (-1)^{n-1}s_{n-1} t + (-1)^n s_n.
\end{equation}
where the \(s_k\) appearing in the coefficients of \(g(t)\) is the \(k\)th elementary symmetric polynomial evaluated at \((\lambda_1,\ldots,\lambda_n)\). We begin with the following lemma.
Let \(s^n_k\) denote the \(k\)th elementary symmetric polynomial in the \(n\) variables \(x_1,x_2,\ldots,x_n\) and let \(s^{n+1}_k\) denote the \(k\)th elementary symmetric polynomial in the \(n+1\) variables \(x_1,x_2,\ldots,x_{n+1}\). Then
\[
s^{n+1}_k = s^n _k + x_{n+1} s^n_{k-1}
\]
for \(1\leq k\leq n+1\), where we set \(s^n_0 = 1\) and \(s^n_{n+1} = 0\).
By definition,
\[
s^{n+1}_k = \sum_{\set{i_1,i_2,\ldots,i_k}\in \binom{I_{n+1}}{k} } x_{i_1}x_{i_2}\cdots x_{i_k}
\]
where \(I_{n+1}=\set{1,2,\ldots,n,n+1}\). A \(k\)-element subset of \(I_{n+1}\) that does not contain \(n+1\) is an element of \(\binom{I_n}{k}\), and a \(k\)-element subset of \(I_{n+1}\) that does contain \(n+1\) is the union of \(\set{n+1}\) and a \((k-1)\)-element subset of \(I_n\). Therefore,
\begin{align*}
s^{n+1}_k &= \sum_{ \set{i_1,i_2,\ldots,i_k}\in \binom{I_n}{k}} x_{i_1}x_{i_2}\cdots x_{i_k} + x_{n+1}\left[\sum_{\set{i_1,i_2,\ldots,i_{k-1}}\in \binom{I_n}{k-1} } x_{i_1}x_{i_2}\cdots x_{i_{k-1}}\right]\\[2ex]
&= s^n_k + x_{n+1} s^n_{k-1}
\end{align*}
as claimed.
If \(g(t) = (t-\lambda_1)(t-\lambda_2)\cdots (t-\lambda_n)\) then
\[
g(t) = t^n - s_1 t^{n-1} + s_2 t^{n-2} + \cdots + (-1)^k s_k t^{n-k} + \cdots + (-1)^{n-1}s_{n-1} t + (-1)^n s_n.
\]
where \(s_k = s_k(\lambda_1,\lambda_2,\ldots,\lambda_n)\) for \(k=1,2,\ldots,n\).
The proof is by induction on the order \(n\) of the polynomial \(g(t)\). The case \(n=1\) is trivial. Assume that the claim is true for all polynomials of order \(n\) and let \(g(t) = (t-\lambda_1)(t-\lambda_2)\cdots (t-\lambda_n)(t-\lambda_{n+1})\). Then \(g(t) = h(t) (t-\lambda_{n+1})\) where \(h(t)=(t-\lambda_1)(t-\lambda_2)\cdots (t-\lambda_n)\). Applying the induction hypothesis to \(h(t)\), we have that
\[
g(t) = \left(t^n - s^n_1 t^{n-1} + s^n_2 t^{n-2} + \cdots + (-1)^{n-1}s^n_{n-1} t + (-1)^n s^n_n\right) (t-\lambda_{n+1})
\]
and then expanding and collecting like terms we obtain
\begin{align*}
g(t) &= t^{n+1} - (\lambda_{n+1} + s^n_1) t^n + (s^n_2 + \lambda_{n+1}s^n_1)t^{n-1}\\
& + \cdots + (-1)^{n}(s^n_n + \lambda_{n+1}s^n_{n-1}) t + (-1)^{n+1}\lambda_{n+1} s_n^n
\end{align*}
We can now apply Lemma 2.2.2 to the coefficients of \(g(t)\) and obtain
\[
g(t) = t^{n+1} - s^{n+1}_1 t^n + s^{n+1}_2 t^{n-1} + \cdots + (-1)^n s^{n+1}_nt + (-1)^{n+1} s_{n+1}^{n+1}
\]
as claimed.
Consider the polynomial \(g(t) = t(t-3)(t+1)(t-2)\). Hence, the roots of \(g\) are \(\lambda_1=0\), \(\lambda_2=3\), \(\lambda_3=-1\), and \(\lambda_4=2\). Expanding \(g\) we obtain
\[
g(t) = t^4 - 4t^3 + t^2 + 6t.
\]
On the other hand, using the expressions for \(s_1,s_2,s_3,s_4\) from \eqref{eqn:sym-poly4}, we have:
\begin{align*}
s_1(0,3,-1,2) &= 0 + 3 - 1 + 2=4\\
s_2(0,3,-1,2) &= (3)(-1) + (3)(2) + (-1)(2) = 1\\
s_3(0,3,-1,2) &= (3)(-1)(2) = -6\\
s_4(0,3,-1,2) &= (0)(3)(-1)(2) = 0.
\end{align*}
Comparing with \eqref{eqn:g-poly}, we indeed have \(g(t) = t^4 - s_1 t^3 + s_2 t^2 - s_3 t + s_4 = t^4 - 4t^3 + t^2 + 6t\).
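This agreement can also be checked numerically: NumPy's `np.poly` expands \(\prod_i(t-\lambda_i)\) into its coefficient list, and the sign pattern \((-1)^k s_k\) from \eqref{eqn:g-poly} is visible directly (a sketch assuming NumPy):

```python
import numpy as np

roots = [0, 3, -1, 2]
print(np.poly(roots))   # [ 1. -4.  1.  6.  0.], i.e. t^4 - 4t^3 + t^2 + 6t
# compare with (-1)^k s_k: -s_1 = -4, s_2 = 1, -s_3 = 6, s_4 = 0
```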
Let us record the following for future reference.
Consider the polynomial
\[
g(t) = t^n + c_1 t^{n-1} + c_2t^{n-2} + \cdots + c_{n-1}t + c_n.
\]
Then \( - c_1\) is the sum of the roots of \(g\) and \((-1)^n c_n\) is the product of the roots of \(g\).
There is another important set of multivariate polynomials, closely related to the elementary symmetric polynomials, that we will encounter in the next section: the power sum polynomials. The power sum polynomials in the variables \(x_1,x_2,\ldots,x_n\) are the \(n\) polynomials \(p_1\), \(p_2\), \(\ldots\), \(p_n\) given by
\[
p_k(x_1,x_2,\ldots,x_n) = x_1^k + x_2^k + \cdots + x_n^k.
\]
The relationship between the elementary symmetric and the power sum polynomials is the following. First of all, it is clear that \(s_1 = p_1\). Now,
\begin{align*}
p_1^2 - p_2 & = (x_1+x_2+\cdots+x_n)^2 - (x_1^2+x_2^2+\cdots+x_n^2)\\[2ex]
&= \sum_{i=1}^n x_i^2 + \sum_{1\leq i \lt j\leq n} 2 x_i x_j - (x_1^2+x_2^2+\cdots+x_n^2)\\
&= 2 \sum_{i \lt j} x_i x_j\\
& = 2 s_2
\end{align*}
and therefore
\[
s_2 = \frac{1}{2}(p_1^2 - p_2).
\]
A similar computation yields that
\[
s_3 = \frac{1}{6}(p_1^3 - 3p_1p_2 + 2p_3).
\]
The general relationship between the elementary symmetric and power sum polynomials is known as Newton's identities:
\begin{equation}\label{eqn:newton-id}
p_k - s_1 p_{k-1} + s_2 p_{k-2} + \cdots + (-1)^{k-1} s_{k-1} p_1 + (-1)^k k s_k = 0, \qquad 1\leq k\leq n.
\end{equation}
From Newton's identities we obtain that
\[
s_k = \tfrac{1}{k}(-1)^{k-1}(p_k - s_1 p_{k-1} + \cdots + (-1)^{k-1}s_{k-1} p_1).
\]
Now since \(s_1=p_1\), it is straightforward to show by induction that each elementary symmetric polynomial can be written solely in terms of the power sum polynomials. To summarize, we obtain the following.
Consider the polynomial \(g(t) = (t-\lambda_1)(t-\lambda_2)\cdots (t-\lambda_n)\) written in expanded form
\[
g(t) = t^n + c_1 t^{n-1} + c_2t^{n-2} + \cdots + c_{n-1} t + c_n.
\]
The coefficients \(c_1, c_2, \ldots, c_n\) can be expressed in terms of the power sum polynomials \(p_1, p_2, \ldots,\) \(p_n\) evaluated at the roots \(\lambda_1, \lambda_2, \ldots, \lambda_n\), that is, there are polynomial functions \(f_1, f_2, \ldots, f_n\) such that
\[
c_i = f_i(p_1, p_2, \ldots, p_n)
\]
where the \(p_1,p_2,\ldots,p_n\) are evaluated at \(\lambda_1,\lambda_2,\ldots,\lambda_n\).
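The recursion above is easy to implement. Here is a sketch (in Python; the function name `elementary_from_power_sums` is ours) that recovers \(s_1,\ldots,s_n\) from \(p_1,\ldots,p_n\) using exact rational arithmetic:

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Newton's identities in the equivalent form
    k*s_k = sum_{j=1}^{k} (-1)^(j-1) * s_{k-j} * p_j, with s_0 = 1.
    Input: p[1..n] (p[0] is a placeholder). Output: [s_1, ..., s_n]."""
    n = len(p) - 1
    s = [Fraction(1)] + [Fraction(0)] * n
    for k in range(1, n + 1):
        s[k] = sum((-1) ** (j - 1) * s[k - j] * p[j] for j in range(1, k + 1))
        s[k] /= k
    return s[1:]

# roots 0, 3, -1, 2 from the earlier example: p_k = 0^k + 3^k + (-1)^k + 2^k
p = [None] + [sum(Fraction(r) ** k for r in (0, 3, -1, 2)) for k in range(1, 5)]
print(elementary_from_power_sums(p))   # s_1, ..., s_4 = 4, 1, -6, 0
```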
Exercises
Expand the polynomial \(g(t) = (t-\lambda_1)(t-\lambda_2)(t-\lambda_3)(t-\lambda_4)\) and use the expressions for \(s_1,s_2,s_3, s_4\) in \eqref{eqn:sym-poly4} to verify equation \eqref{eqn:g-poly} for \(n=4\).
Use Newton's identities to express \(s_4\) in terms of \(p_1, p_2, p_3, p_4\).
The polynomial \(g(t) = t^3 + c_1 t^2 + 2 t + 8\) has \(\lambda_1=2\) as a root. Find the other roots of \(g\) and then find \(c_1\).
The characteristic polynomial and spectrum of a graph
In this section, we introduce the characteristic polynomial and spectrum of a graph and prove some of their basic properties. Before we begin, we recall some basic facts from linear algebra. Recall that \(\lambda\) is an eigenvalue of the matrix \(\bs{M}\) if there exists a non-zero vector \(\bs{x}\) such that \[ \bs{M}\bs{x} = \lambda \bs{x}. \] In this case, \(\bs{x}\) is called an eigenvector of \(\bs{M}\) corresponding to the eigenvalue \(\lambda\). To find the eigenvalues of \(\bs{M}\), we find the zeros of the characteristic polynomial of \(\bs{M}\): \[ p(t) = \det(t\bs{I} - \bs{M}). \] If \(\bs{M}\) is an \(n\times n\) matrix, then the characteristic polynomial \(p(t)\) is an \(n\)th order polynomial and \(p(\lambda) = 0\) if and only if \(\lambda\) is an eigenvalue of \(\bs{M}\). By the Fundamental Theorem of Algebra, \(p(t)\) has \(n\) roots, possibly repeated and complex, and therefore \(\bs{M}\) has \(n\) eigenvalues, possibly repeated and complex. However, if \(\bs{M}\) is a symmetric matrix, then an important result in linear algebra is that the eigenvalues of \(\bs{M}\) are all real numbers and we may therefore order them from smallest to largest: \[ \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n. \] Also, if \(\bs{M}\) is symmetric and \(\bs{x}\) and \(\bs{y}\) are eigenvectors of \(\bs{M}\) corresponding to distinct eigenvalues then \(\bs{x}\) and \(\bs{y}\) are orthogonal, that is, \[ \dotprod{\bs{x},\bs{y}} = \sum_{i=1}^n x_i y_i = 0. \] Moreover, if \(\bs{M}\) is symmetric, there exists an orthonormal basis \(\beta=\{\bs{x}_1,\bs{x}_2,\) \(\ldots,\bs{x}_n\}\) of \(\real^n\) consisting of eigenvectors of \(\bs{M}\). Recall that \(\beta=\{\bs{x}_1,\bs{x}_2,\) \(\ldots,\bs{x}_n\}\) is an orthonormal basis of \(\real^n\) if \(\|\bs{x}_i\| = 1\) and \(\dotprod{\bs{x}_i, \bs{x}_j} = 0\) for \(i\neq j\), that is, the vectors in \(\beta\) are unit vectors and are mutually orthogonal.
The characteristic polynomial of a graph \(G\) with adjacency matrix \(\bs{A}\) is
\[
p(t) = \det(t\bs{I}-\bs{A}).
\]
The spectrum of \(G\), denoted by \(\spec(G)\), is the list of the eigenvalues of \(\bs{A}\) in increasing order \(\lambda_1\leq \lambda_2\leq\cdots\leq\lambda_n\):
\[
\spec(G) = (\lambda_1, \lambda_2, \ldots, \lambda_n).
\]
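Numerically, the spectrum of a graph can be obtained with any eigenvalue routine for symmetric matrices. A minimal sketch (assuming NumPy, whose `eigvalsh` returns the eigenvalues of a symmetric matrix in increasing order, matching the convention above), applied to the earlier five-vertex example graph:

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])

spec = np.linalg.eigvalsh(A)     # eigenvalues in increasing order
print(np.round(spec, 4))
```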
Show by direct computation that the characteristic polynomial of \(P_3\) is \(p(t) = t^3 - 2t\) and find the eigenvalues of \(P_3\).
The adjacency matrix of the empty graph \(E_n\) is the zero matrix and therefore the characteristic polynomial of \(E_n\) is \(p(t) = t^n\). Hence, \(E_n\) has spectrum \(\spec(E_n) = (0,0,\ldots,0)\).
The adjacency matrix of \(K_4\) is
\[
\bs{A} =
\begin{bmatrix}
0&1&1&1\\
1&0&1&1\\
1&1&0&1\\
1&1&1&0
\end{bmatrix}
\]
Consider the vectors \(\bs{x}_1 = (1,-1,0,0)\), \(\bs{x} _2=(1,0,-1,0)\), and \(\bs{x} _3=(1,0,0,-1)\). It is not hard to see that \(\bs{x}_1,\bs{x}_2,\bs{x}_3\) are linearly independent. A direct computation yields
\[
\bs{A}\bs{x}_1= (-1,1,0,0) = -\bs{x}_1
\]
and therefore \(\lambda_1=-1\) is an eigenvalue of \(\bs{A}\). Similarly, a direct computation yields that \(\bs{A}\bs{x}_2 = -\bs{x}_2\) and \(\bs{A}\bs{x}_3 = -\bs{x}_3\). Hence, \(\lambda_2=\lambda_3=-1\). Finally, we have that \(\bs{A}\bs{e} = (3,3,3,3) = 3 \bs{e}\), and therefore \(\lambda_4=3\) is an eigenvalue of \(\bs{A}\). Therefore, the spectrum of \(K_4\) is
\[
\spec(K_4) = (-1,-1,-1,3)
\]
and therefore the characteristic polynomial of \(K_4\) is \(p(t) = (t-3)(t+1)^3\). In general, one can show that
\[
\spec(K_n) = (-1,-1,\ldots,-1, n-1)
\]
and therefore the characteristic polynomial of \(K_n\) is \(p(t) = (t-(n-1))(t+1)^{n-1}\). \(\square\)
The following result, together with the previous example, shows that \(\Delta(G)\) is a sharp bound for the magnitude of the eigenvalues of \(G\).
For any eigenvalue \(\lambda\) of \(G\) it holds that \(|\lambda|\leq \Delta(G)\).
Suppose that \(\lambda\) is an eigenvalue of \(G\) with eigenvector \(\bs{x}=(x_1,x_2,\) \(\ldots,x_n)\). Suppose that the \(j\)th entry of \(\bs{x}\) has maximum absolute value, that is, \(|x_i|\leq |x_j|\) for all \(i=1,2,\ldots,n\); note that \(|x_j| \gt 0\) since \(\bs{x}\neq \bs{0}\). Since \(\bs{A}\bs{x} = \lambda \bs{x}\) it follows that
\[
\lambda x_j = \sum_{i=1}^n \bs{A}(j,i) x_i
\]
and therefore using the triangle inequality we obtain
\begin{align*}
|\lambda| |x_j| =\left|\sum_{i=1}^n \bs{A}(j,i) x_i \right| &\leq \sum_{i=1}^n |\bs{A}(j,i)| |x_i|\\[2ex]
&\leq |x_j| \sum_{i=1}^n |\bs{A}(j,i)| \\[2ex]
&= |x_j| \deg(v_j)\\[2ex]
&\leq |x_j| \Delta(G).
\end{align*}
Therefore \(|\lambda| |x_j| \leq |x_j| \Delta(G)\), and the claim follows by dividing both sides of the inequality by \(|x_j|\neq 0\).
Let \(\spec(G)=(\lambda_1,\lambda_2,\ldots,\lambda_n)\) and let \(d_{\text{avg}} = \frac{2|E(G)|}{n}\) denote the average degree of \(G\). Then
\[
d_{\text{avg}} \leq \lambda_n \leq \Delta(G).
\]
Let \(\beta=\set{\bs{x}_1,\bs{x}_2,\ldots,\bs{x}_n}\) be an orthonormal basis of \(\real^n\) consisting of eigenvectors of \(\bs{A}=\bs{A}(G)\) with corresponding eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\). Then there exist numbers \(\alpha_1, \alpha_2,\ldots,\alpha_n \in \real\) such that \(\bs{e} = \sum_{i=1}^n \alpha_i \bs{x}_i\). Let
\[
\bs{d} = \bs{A}\bs{e} = (\deg(v_1),\deg(v_2),\ldots,\deg(v_n))
\]
denote the degree vector of \(G\). Now
\[
\bs{e}^T \bs{A}\bs{e} = \bs{e}^T \bs{d} = \sum_{i=1}^n \deg(v_i)
\]
while on the other hand, since \(\beta\) is an orthonormal basis, we have
\[
\bs{e}^T \bs{A} \bs{e} = \sum_{i=1}^n \alpha_i^2 \lambda_i.
\]
Therefore,
\begin{align*}
\sum_{i=1}^n \deg(v_i) &= \sum_{i=1}^n \alpha_i^2 \lambda_i \leq \lambda_n \sum_{i=1}^n \alpha_i^2 = \lambda_n \cdot n
\end{align*}
where we have used the fact that \(n = \bs{e}^T \bs{e} = \sum_{i=1}^n \alpha_i^2\) and \(\lambda_i \leq \lambda_n\) for all \(i=1,2,\ldots,n\). Therefore,
\[
\frac{1}{n} \sum_{i=1}^n \deg(v_i) \leq \lambda_n
\]
and this proves the first inequality. The second inequality follows from Proposition 2.3.2.
A graph \(G\) is \(k\)-regular if and only if \(\bs{e}=(1,1,\ldots,1)\) is an eigenvector of \(G\) with eigenvalue \(\lambda = k\).
Recall that
\[
\bs{A}\bs{e} = (\deg(v_1),\deg(v_2),\ldots,\deg(v_n)).
\]
If \(G\) is \(k\)-regular then \(\deg(v_i) = k\) for all \(v_i\) and therefore
\[
\bs{A}\bs{e} = (k,k,\ldots,k) = k\bs{e}.
\]
Thus, \(k\) is an eigenvalue of \(\bs{A}\) with corresponding eigenvector \(\bs{e}\). On the other hand, if \(\bs{e}\) is an eigenvector of \(G\) with eigenvalue \(k\) then
\[
\bs{A}\bs{e} = k\bs{e} = (k,k,\ldots,k)
\]
and thus \(\deg(v_i) = k\) for all \(v_i\) and then \(G\) is \(k\)-regular.
Let \(G\) be a \(k\)-regular graph with eigenvalues \(\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n=k\). Then the complement graph \(\overline{G}\) has eigenvalues \(n-1-k, -1-\lambda_1,-1-\lambda_2,\ldots,-1-\lambda_{n-1}\).
Let \(\beta=\set{\bs{x}_1,\bs{x}_2,\ldots,\bs{x}_n}\) be an orthonormal basis of \(\real^n\) consisting of eigenvectors of \(\bs{A}\) with corresponding eigenvalues \(\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_n=k\). By Proposition 2.3.4, \(k\) is an eigenvalue of \(G\) with corresponding eigenvector \(\bs{e}\), and moreover by Exercise 2.16, \(\lambda_n = k\) is the maximum eigenvalue of \(G\). We may therefore take \(\bs{x}_n = \frac{1}{\sqrt{n}}\bs{e}\). Let \(\bs{A}=\bs{A}(G)\) and let \(\overline{\bs{A}} = \bs{A}(\overline{G})\). From Lemma 2.1.5 we have that
\[
\overline{\bs{A}} = \bs{J} - \bs{I} - \bs{A}.
\]
Now, since \(\bs{x}_n = \frac{1}{\sqrt{n}}\bs{e}\) we have \(\bs{J} = \bs{e}\bs{e}^T = n\,\bs{x}_n\bs{x}_n^T\), and since \(\bs{x}_i\) is orthogonal to \(\bs{x}_n\) for \(1\leq i \lt n\) we have \(\bs{J}\bs{x}_i = \bs{0}\) for \(1\leq i \lt n\). Therefore, for \(1\leq i \lt n\) we have
\[
\overline{\bs{A}}\bs{x}_i = \bs{J}\bs{x}_i - \bs{I}\bs{x}_i - \bs{A}\bs{x}_i = -\bs{x}_i - \lambda_i\bs{x}_i = (-1-\lambda_i)\bs{x}_i.
\]
Since \(\overline{G}\) is a regular graph with degree \((n-1-k)\), by Proposition 2.3.4 \((n-1-k)\) is an eigenvalue of \(\overline{G}\) with corresponding eigenvector \(\bs{x}_n\), and this ends the proof.
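As a numerical illustration of the theorem (a sketch assuming NumPy), take \(G=C_6\), which is \(2\)-regular on \(6\) vertices:

```python
import numpy as np

n, k = 6, 2
A = np.zeros((n, n), dtype=int)          # adjacency matrix of C_6
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

A_bar = np.ones((n, n), dtype=int) - np.eye(n, dtype=int) - A
lam = np.linalg.eigvalsh(A)              # increasing order; lam[-1] = k = 2

expected = sorted([n - 1 - k] + [-1 - l for l in lam[:-1]])
print(np.round(np.linalg.eigvalsh(A_bar), 4))   # [-2. -2.  0.  0.  1.  3.]
print(np.round(expected, 4))                    # matches the theorem
```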
We now consider bipartite graphs.
Suppose that \(G\) is a bipartite graph. If \(\lambda\) is an eigenvalue of \(G\) then \(-\lambda\) is also an eigenvalue of \(G\). Consequently, if \(\pm\lambda_1, \pm \lambda_2, \ldots, \pm \lambda_q\) are the non-zero eigenvalues of \(G\) then the characteristic polynomial of \(G\) takes the form
\[
p(t) = t^k (t^2-\lambda_1^2)(t^2-\lambda_2^2)\cdots(t^2-\lambda_q^2).
\]
where \(k\geq 0\). In particular, if \(n=|V(G)|\) is odd then \(k\geq 1\), that is, \(\lambda=0\) is an eigenvalue of \(G\) with multiplicity \(k\).
Since \(G\) is bipartite, there is a partition \(\set{X,Y}\) of the vertex set \(V(G)\) such that each edge of \(G\) has one vertex in \(X\) and the other in \(Y\). Let \(r=|X|\) and \(s=|Y|\). By a relabelling of the vertices of \(G\), we may assume that \(X=\set{v_1,v_2,\ldots,v_{r}}\) and \(Y=\set{v_{r+1},v_{r+2},\ldots,v_{r+s}}\). Therefore, the adjacency matrix of \(G\) takes the form
\[
\bs{A} = \begin{bmatrix} \bs{0} & \bs{B}\\[2ex] \bs{B}^T & \bs{0}\end{bmatrix}.
\]
where \(\bs{B}\) is an \(r\times s\) matrix. Suppose that \(\bs{z} = \left[\begin{smallmatrix}\bs{x} \\ \bs{y}\end{smallmatrix}\right]\) is an eigenvector of \(\bs{A}\) with eigenvalue \(\lambda\), where \(\bs{x}\in\real^{r}\) and \(\bs{y}\in\real^{s}\). Thus,
\[
\bs{A} \bs{z} = \begin{bmatrix} \bs{B}\bs{y} \\ \bs{B}^T \bs{x}\end{bmatrix} = \lambda \begin{bmatrix}\bs{x} \\ \bs{y}\end{bmatrix}.
\]
Then
\[
\bs{A} \begin{bmatrix}\bs{x} \\ -\bs{y}\end{bmatrix} = \begin{bmatrix} -\bs{B}\bs{y} \\ \bs{B}^T \bs{x}\end{bmatrix} = -\lambda \begin{bmatrix}\bs{x} \\ -\bs{y}\end{bmatrix}.
\]
Therefore, \(\left[\begin{smallmatrix}\bs{x} \\ -\bs{y}\end{smallmatrix}\right]\) is an eigenvector of \(\bs{A}\) with eigenvalue \(-\lambda\). Hence, \((t^2-\lambda^2)\) is a factor in \(p(t)\). If \(q\) denotes the number of non-zero eigenvalue pairs \(\pm \lambda_i\) then \(k=n-2q\) is the multiplicity of the eigenvalue \(\lambda=0\), and if \(n\) is odd then \(k\geq 1\).
The eigenvalues of the cycle \(C_n\) are
\[
2\cos\left(\tfrac{2\pi j}{n}\right)
\]
for \(j=0,1,\ldots, n-1\).
Under what condition will a \(k\)-regular graph \(G\) have \(\lambda = \pm k\) as eigenvalues?
Consider the complete bipartite graph \(K_{r, s}\) where \(r, s \geq 1\). Show that \(\lambda = 0\) is an eigenvalue of \(K_{r,s}\) of multiplicity \(r+s - 2\). In Exercise 2.18 you will show that \(\pm \sqrt{rs}\) are the other two eigenvalues of \(K_{r,s}\).
Here is a generalization of the previous example.
Suppose that \(G_1\) is a \(k_1\)-regular graph with \(n_1\) vertices and \(G_2\) is a \(k_2\)-regular graph with \(n_2\) vertices. Let \(\spec(G_1) = (\lambda_1, \lambda_2,\ldots,\lambda_{n_1})\), where \(\lambda_{n_1} = k_1\), and let \(\spec(G_2) = (\mu_1,\mu_2,\ldots,\mu_{n_2})\), where \(\mu_{n_2}=k_2\). Let \(G=G_1\vee G_2\).
- Let \(V(G_1) = \{v_1, v_2,\ldots,v_{n_1}\}\) and \(V(G_2) = \set{w_1,w_2,\ldots,w_{n_2}}\). Write down the adjacency matrix of \(\bs{A}=\bs{A}(G)\) if we order the vertices of \(G\) as \((v_1,v_2,\ldots,v_{n_1},w_1,w_2,\ldots,w_{n_2})\).
- If \(\bs{x}_i\neq \bs{e}\) is an eigenvector of \(G_1\) with eigenvalue \(\lambda_i\), with \(i \lt n_1\), then show that \(\left[\begin{smallmatrix}\bs{x}_i\\ \bs{0}\end{smallmatrix}\right]\) is an eigenvector of \(G\) with eigenvalue \(\lambda_i\).
- If \(\bs{y}_j\neq \bs{e}\) is an eigenvector of \(G_2\) with eigenvalue \(\mu_j\), with \(j \lt n_2\), then show that \(\left[\begin{smallmatrix}\bs{0}\\ \bs{y}_j\end{smallmatrix}\right]\) is an eigenvector of \(G\) with eigenvalue \(\mu_j\).
- Parts (b) and (c) determine \(n_1+n_2-2\) eigenvalues of \(G\). Here we find the remaining two eigenvalues. Consider the vector \(\bs{z}=\left[\begin{smallmatrix}\alpha \bs{e} \\ \bs{e}\end{smallmatrix}\right]\) where \(\alpha\neq 0\) and is to be determined. Apply the eigenvector-eigenvalue condition \(\bs{A} \bs{z} = \lambda \bs{z}\) and show that the remaining two eigenvalues of \(G\) are \[ \lambda = \frac{(k_1+k_2)\pm \sqrt{(k_2-k_1)^2 + 4n_1n_2}}{2} \] and that \[ \alpha = \frac{-(k_2-k_1)\pm \sqrt{(k_2-k_1)^2 + 4n_1n_2}}{2n_1} \]
- Conclude that if \(p_1(t)\) and \(p_2(t)\) are the characteristic polynomials of \(G_1\) and \(G_2\), respectively, then the characteristic polynomial of \(G\) is \[ p(t) = \frac{p_1(t)p_2(t)}{(t-k_1)(t-k_2)} ((t-k_1)(t-k_2)-n_1n_2) \]
Let \(d(G)=(d_1,d_2,\ldots,d_n)\) be the degree sequence of \(G\) and let \(\lambda_n\) be the largest eigenvalue of \(G\). Prove that
\[
\sqrt{\frac{d_1^2+d_2^2+\cdots+d_n^2}{n}} \leq \lambda_n
\]
Hint: Use Rayleigh quotients and Perron-Frobenius. \cite{AH:60}
Exercises
Let \(\bs{M}\) be an \(n\times n\) matrix and let \(p(t)\) be the characteristic polynomial of \(\bs{M}\). Find \(p(0)\) in two ways:
- Using the expansion \[ p(t) = t^n - s_1 t^{n-1} + s_2 t^{n-2} + \cdots+(-1)^{n-1}s_{n-1} t + (-1)^ns_n \] where as usual \(s_1,s_2,\ldots,s_n\) are the elementary symmetric polynomials evaluated at the roots \(\lambda_1,\lambda_2,\ldots,\lambda_n\) of \(p(t)\).
- Using the definition of \(p(t)\), namely, \(p(t) = \det(t\bs{I}-\bs{M})\). Hint: Recall that \(\det(\alpha \bs{M}) = \alpha^n \det(\bs{M})\) for any \(\alpha\in\real\).
By direct hand computation, find the characteristic polynomial and spectrum of the graph \(G=C_3\vee K_1\).
Let \(G_1=C_4\oplus K_1\) and let \(G_2=E_4\vee K_1\).
- Draw the graphs \(G_1\) and \(G_2\). Explain why \(G_1\) and \(G_2\) are not isomorphic.
- Find the characteristic polynomials and eigenvalues of \(G_1\) and \(G_2\).
- What can you conclude based on parts (a) and (b)?
Prove that if \(G\) has at least one edge then \(G\) has at least one negative and one positive eigenvalue. (Hint: Use Proposition 2.3.3 and the fact that \(0=\tr(\bs{A})=\lambda_1+\lambda_2+\cdots+\lambda_n\) where \(\lambda_1,\lambda_2,\ldots,\lambda_n\) are the eigenvalues of \(\bs{A}\).)
Let \(G\) be a \(k\)-regular graph. Prove that \(|\lambda_i|\leq k\) for all eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\) of \(G\).
Recall that \(u, v\) are twin vertices if \(N(u)\backslash\hspace{-0.3em}\set{v} = N(v)\backslash\hspace{-0.3em}\set{u}\), that is, \(u\) and \(v\) have the same neighbors (other than themselves). Let \(G\) be a graph with \(V(G)=\set{v_1,v_2,\ldots,v_n}\). Prove that if \(v_1\) and \(v_2\) are twin vertices then \(\bs{x} = \bs{e}_1 - \bs{e}_2\) is an eigenvector of \(G\) with eigenvalue
- \(\lambda = 0\) if \(v_1\) and \(v_2\) are not adjacent, and
- \(\lambda = -1\) if \(v_1\) and \(v_2\) are adjacent.
Consider the complete bipartite graph \(K_{r, s}\) where \(r, s \geq 1\).
- Show that the vector \[ \bs{z} = (\underbrace{\sqrt{s},\sqrt{s},\ldots,\sqrt{s}}_{r \text{ times }}, \underbrace{\sqrt{r},\sqrt{r},\ldots,\sqrt{r}}_{s \text{ times }}) \] is an eigenvector of \(K_{r,s}\) with eigenvalue \(\sqrt{rs}\).
- Find an eigenvector for \(-\sqrt{rs}\).
Let \(G_1\) and \(G_2\) be graphs with characteristic polynomials \(p_1(t)\) and \(p_2(t)\), respectively. What is the characteristic polynomial of \(G_1\oplus G_2\)? Show all your work. Hint: See Exercise 2.6 and use the fact that
\[
\det\begin{bmatrix} \bs{X} & \bs{0}\\\bs{0}&\bs{Y}\end{bmatrix} = \det(\bs{X})\det(\bs{Y}).
\]
Prove that if \(\lambda=\frac{p}{q}\) is a rational eigenvalue of a graph \(G\) then in fact \(\lambda\) is an integer, that is, \(q=1\). (Hint: Rational root theorem)
Cospectral graphs
In this section, we consider the question of whether it is possible to uniquely determine a graph from its spectrum. To that end, we say that two graphs \(G_1\) and \(G_2\) are cospectral if they have the same (adjacency) eigenvalues. Our first task is to show that isomorphic graphs are cospectral. It turns out, however, that there are non-isomorphic graphs with the same eigenvalues and we will supply some examples. We will then use our results from Section 2.2 to identify structural properties shared by cospectral graphs. To show that two isomorphic graphs are cospectral, we use the fact that isomorphic graphs have similar adjacency matrices. Recall that two matrices \(\bs{A}_1\) and \(\bs{A}_2\) are similar if there exists an invertible matrix \(\bs{P}\) such that \[ \bs{A}_2 = \bs{P}^{-1}\bs{A}_1\bs{P}. \] Similar matrices share many properties. For example:
If \(\bs{A}_1\) and \(\bs{A}_2\) are similar then the eigenvalues of \(\bs{A}_1\) and \(\bs{A}_2\) are equal.
By definition, there exists an invertible matrix \(\bs{P}\) such that \(\bs{A}_2 = \bs{P}^{-1}\bs{A}_1\bs{P}\). Let \(p_1(t)=\det(t\bs{I}-\bs{A}_1)\) and let \(p_2(t) = \det(t\bs{I}-\bs{A}_2)\), that is, \(p_i(t)\) is the characteristic polynomial of \(\bs{A}_i\), for \(i=1,2\). Then
\begin{align*}
p_2(t) &= \det(t\bs{I} - \bs{A}_2)\\
&= \det(t\bs{P}^{-1}\bs{P} - \bs{P}^{-1}\bs{A}_1\bs{P})\\
&= \det(\bs{P}^{-1} ( t\bs{I} - \bs{A}_1) \bs{P})\\
&= \det(\bs{P}^{-1}) \det(t\bs{I}-\bs{A}_1) \det(\bs{P}) \\
&= \det(t\bs{I}-\bs{A}_1)\\
& = p_1(t)
\end{align*}
where we used the fact that \(\det(\bs{P}^{-1})\det(\bs{P}) = 1\). Hence, \(p_1(t) = p_2(t)\), and therefore \(\bs{A}_1\) and \(\bs{A}_2\) have the same eigenvalues.
Hence, if we can show that the adjacency matrices of isomorphic graphs \(G_1\) and \(G_2\) are similar then \(G_1\) and \(G_2\) are cospectral. To do this, we study how permutations \(\sigma\in S_n\) can be represented by matrices. For the permutation \(\sigma\in S_n\) define the \(n\times n\) matrix \(\bs{P}\) as follows. Let \(\bs{e}_1,\ldots,\bs{e}_n\) denote the standard basis vectors in \(\real^n\) thought of as column vectors. Define the matrix \(\bs{P}\) as
\[
\bs{P} = \begin{bmatrix} \bs{e}^T_{\sigma(1)} \\[0.1cm] \bs{e}^T_{\sigma(2)} \\[0.1cm] \bs{e}^T_{\sigma(3)} \\[0.1cm] \vdots \\[0.1cm] \bs{e}^T_{\sigma(n)}\end{bmatrix}.
\]
For example, for the permutation \(\sigma=\begin{pmatrix}1&2&3&4&5&6\\ 3&6&5&2&1&4\end{pmatrix}\) the matrix \(\bs{P}\) is
\begin{equation}\label{eqn:perm1}
\bs{P} = \begin{bmatrix} \bs{e}^T_{\sigma(1)} \\ \bs{e}^T_{\sigma(2)} \\ \bs{e}^T_{\sigma(3)} \\ \bs{e}^T_{\sigma(4)} \\ \bs{e}^T_{\sigma(5)} \\ \bs{e}^T_{\sigma(6)} \end{bmatrix} = \begin{bmatrix} \bs{e}^T_{3} \\ \bs{e}^T_{6} \\ \bs{e}^T_{5} \\ \bs{e}^T_{2} \\ \bs{e}^T_{1} \\ \bs{e}^T_{4} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0\\0 & 1 & 0 & 0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&1&0&0\end{bmatrix}.
\end{equation}
Notice that \(\bs{P}\) can also be obtained by starting with the identity matrix \(\bs{I}\) and sending column \(i\) of \(\bs{I}\) to column \(\sigma(i)\). Therefore, written out in column form we have
\[
\bs{P} = \begin{bmatrix} \bs{e}_{\sigma^{-1}(1)} & \bs{e}_{\sigma^{-1}(2)} & \bs{e}_{\sigma^{-1}(3)} & \cdots & \bs{e}_{\sigma^{-1}(n)} \end{bmatrix}.
\]
The matrix \(\bs{P}\) is called the permutation matrix associated to \(\sigma\). The columns of any permutation matrix \(\bs{P}\) form an orthonormal basis of \(\real^n\) since the columns of \(\bs{P}\) are just the standard basis vectors of \(\real^n\) (of course in a rearranged order). Hence, permutation matrices are orthogonal matrices, in other words \(\bs{P}^T\bs{P}=\bs{P}\bs{P}^T=\bs{I}\). Hence, \(\bs{P}^{-1}=\bs{P}^T\), and this implies that \(\det(\bs{P}) = \pm 1\) for any permutation matrix \(\bs{P}\).
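A minimal sketch (assuming NumPy; the function name `perm_matrix` is ours) that builds \(\bs{P}\) row by row as above and confirms it is orthogonal:

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix P whose i-th row is e_{sigma(i)}^T (0-indexed sigma)."""
    n = len(sigma)
    P = np.zeros((n, n), dtype=int)
    for i in range(n):
        P[i, sigma[i]] = 1
    return P

# the example permutation 1->3, 2->6, 3->5, 4->2, 5->1, 6->4, shifted to 0-indexing
sigma = [2, 5, 4, 1, 0, 3]
P = perm_matrix(sigma)
assert (P @ P.T == np.eye(6, dtype=int)).all()   # P P^T = I, so P^{-1} = P^T
```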
We now present the linear-algebraic version of the notion of isomorphic graphs. In the following proof, we use the fact that for any \(n\times n\) matrix \(\bs{M}\), the \((i,j)\) entry of \(\bs{M}\) can be determined by the computation
\[
\bs{M}_{i,j} = \bs{e}_i^T\bs{M}\bs{e}_j.
\]
Let \(G\) be a graph with adjacency matrix \(\bs{A}\). If \(\bs{P}\) is a permutation matrix then \(\bs{P}^T\bs{A}\bs{P}\) is the adjacency matrix of some graph that is isomorphic to \(G\). Conversely, for any graph \(H\) that is isomorphic to \(G\) there exists a permutation matrix \(\bs{P}\) such that \(\bs{P}^T\bs{A}\bs{P}\) is the adjacency matrix of \(H\).
Let \(\sigma:V\rightarrow V\) be a permutation with permutation matrix \(\bs{P}\). Recall that we can write
\[
\bs{P} = \begin{bmatrix} \bs{e}_{\sigma^{-1}(1)} & \bs{e}_{\sigma^{-1}(2)} & \bs{e}_{\sigma^{-1}(3)} & \cdots & \bs{e}_{\sigma^{-1}(n)}\end{bmatrix}
\]
and then \(\bs{P}\bs{e}_j = \bs{e}_{\sigma^{-1}(j)}\) for any standard basis vector \(\bs{e}_j\). Put \(\bs{B} = \bs{P}^T\bs{A}\bs{P}\) and note that \(\bs{B}\) is symmetric because \(\bs{B}^T = (\bs{P}^T\bs{A}\bs{P})^T = \bs{P}^T \bs{A}^T (\bs{P}^T)^T = \bs{P}^T\bs{A} \bs{P}=\bs{B}\). Let \(i, j \in\{1,2,\ldots,n\}\) and let \(k=\sigma(i)\) and let \(\ell=\sigma(j)\). Consider the entry \(\bs{B}_{k,\ell}\):
\begin{align*}
\bs{B}_{k,\ell} &= \bs{e}_k^T\bs{B}\bs{e}_\ell\\
&= \bs{e}_{k}^T \bs{P}^T\bs{A}\bs{P} \bs{e}_{\ell}\\
&= (\bs{P}\bs{e}_k)^T \bs{A} (\bs{P}\bs{e}_{\ell}) \\
&= \bs{e}_{\sigma^{-1}(k)}^T \bs{A} \bs{e}_{\sigma^{-1}(\ell)}\\
&= \bs{e}_i^T \bs{A} \bs{e}_j\\
&= \bs{A}_{i,j}.
\end{align*}
We have proved that \(\bs{B}_{\sigma(i),\sigma(j)} = \bs{A}_{i,j}\) for all \(i,j \in \{1,2,\ldots,n\}\). This proves that all the entries of \(\bs{B}\) are either \(0\) or \(1\) and the diagonal entries of \(\bs{B}\) are zero since they are zero for \(\bs{A}\). Hence, \(\bs{B}\) is the adjacency matrix of a graph, say \(H\). Now, since \(\bs{B}_{\sigma(i),\sigma(j)} = \bs{A}_{i,j}\) then \(\{i,j\}\) is an edge in \(G\) if and only if \(\{\sigma(i),\sigma(j)\}\) is an edge in \(H\). Hence, \(G\cong H\) with isomorphism \(\sigma\).
Conversely, let \(H\) be a graph isomorphic to \(G\). Then there exists a permutation \(\sigma:V\rightarrow V\) such that \(\{i,j\}\) is an edge in \(G\) if and only if \(\{\sigma(i),\sigma(j)\}\) is an edge in \(H\). Let \(\bs{P}\) be the permutation matrix of \(\sigma\). Our computation above shows that \((\bs{P}^T\bs{A}\bs{P})_{\sigma(i),\sigma(j)} = \bs{A}_{i,j}\). Hence, the \(0-1\) matrix \(\bs{P}^T\bs{A}\bs{P}\) has a non-zero entry at \((\sigma(i),\sigma(j))\) if and only if \(\bs{A}\) has a non-zero entry at \((i,j)\). Hence, \(\bs{P}^T\bs{A}\bs{P}\) is the adjacency matrix of \(H\) and the proof is complete.
Here is a rephrasing of the previous theorem.
Let \(\bs{A}_1\) and \(\bs{A}_2\) be the adjacency matrices of two graphs \(G_1\) and \(G_2\) on the vertex set \(V=\{1,2,\ldots,n\}\), respectively. Then \(G_1\) and \(G_2\) are isomorphic if and only if there exists a permutation matrix \(\bs{P}\) such that \(\bs{A}_2=\bs{P}^T\bs{A}_1 \bs{P}\).
Using Theorem 2.4.2 and the definition of an automorphism, the following is immediate.
Let \(G=(V,E)\) be a graph and let \(\sigma:V\rightarrow V\) be a permutation with matrix representation \(\bs{P}\). Then \(\sigma\) is an automorphism of \(G\) if and only if \(\bs{P}^T\bs{A}\bs{P} = \bs{A}\), or equivalently, \(\bs{A}\bs{P}=\bs{P}\bs{A}\).
Combining Corollary 2.4.3 and Proposition 2.4.1 we obtain the following.
If \(G_1\) and \(G_2\) are isomorphic then \(\spec(G_1) = \spec(G_2)\).
It is now natural to ask whether non-isomorphic graphs can have the same eigenvalues. The answer turns out to be yes, and in fact it is not too difficult to find non-isomorphic graphs that have the same eigenvalues. For example, one can verify that the graphs \(G_1=C_4\oplus K_1\) and \(G_2=E_4\vee K_1\) have the same eigenvalues but are not isomorphic since \(G_1\) is disconnected and \(G_2\) is connected. These two graphs are the smallest non-isomorphic cospectral graphs. The smallest connected non-isomorphic cospectral graphs are shown in Figure 2.1.
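A quick numerical check of this cospectral pair (a sketch assuming NumPy): \(G_1=C_4\oplus K_1\) and \(G_2=E_4\vee K_1 = K_{1,4}\), both on five vertices.

```python
import numpy as np

# G1 = C4 together with an isolated vertex
A1 = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A1[i, j] = A1[j, i] = 1

# G2 = the star K_{1,4}: vertex 0 joined to vertices 1,...,4
A2 = np.zeros((5, 5), dtype=int)
A2[0, 1:] = A2[1:, 0] = 1

print(np.round(np.linalg.eigvalsh(A1), 4))   # [-2.  0.  0.  0.  2.]
print(np.round(np.linalg.eigvalsh(A2), 4))   # [-2.  0.  0.  0.  2.]
```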
We now investigate what graph properties can be deduced from the eigenvalues of a graph, and in particular, we will focus on the coefficients of the characteristic polynomial of a graph and some of the properties they reveal about the graph. Recall from Section 2.2 that for any polynomial \(p(t)\) with roots \(\lambda_1,\lambda_2,\ldots,\lambda_n\) it holds that
\begin{equation}\label{eqn:p-expand}
p(t) = t^n - s_1 t^{n-1} + s_2 t^{n-2} + \cdots+(-1)^{n-1}s_{n-1} t + (-1)^ns_n
\end{equation}
where \(s_1,\ldots,s_n\) are the elementary symmetric polynomials evaluated at \(\lambda_1,\lambda_2,\) \(\ldots,\lambda_n\). From Section 2.2, the elementary symmetric polynomials \(s_1,s_2,\ldots,s_n\) can be written in terms of the power sums \(p_1,p_2,\ldots,p_n\), and it turns out that if \(p(t)\) is the characteristic polynomial of a matrix \(\bs{M}\) then there is a very simple relationship between the power sums \(p_1,p_2,\ldots,p_n\) and the entries of \(\bs{M}\). Consider first \(p_1=s_1\) via a \(3\times 3\) matrix:
\begin{align*}
p(t) = \det(t\bs{I} - \bs{M}) &= \det\begin{bmatrix}t-m_{11} & m_{12}&m_{13}\\m_{21}&t-m_{22}&m_{23}\\m_{31}&m_{32}&t-m_{33}\end{bmatrix}\\[2ex]
&= (t-m_{11})(t-m_{22})(t-m_{33}) + g(t)
\end{align*}
where \(g\) is a polynomial whose degree is at most one. Expanding we obtain
\[
p(t) = t^3 - (m_{11}+m_{22}+m_{33})t^2 + h(t)
\]
where \(h(t)\) is a polynomial whose degree is at most one. This shows that
\[
p_1 = m_{11} + m_{22} + m_{33} = \tr(\bs{M}).
\]
In the general \(n\times n\) case, a similar argument shows that the coefficient of \(t^{n-1}\) in \(p(t)=\det(t\bs{I}-\bs{M})\) is \(-(m_{11}+m_{22}+\cdots+m_{nn}) = -\tr(\bs{M})\). On the other hand, if the roots of \(p(t)\) are \(\lambda_1,\lambda_2,\ldots,\lambda_n\) then \(p_1=s_1 = \lambda_1+\lambda_2+\cdots+\lambda_n\). To summarize:
Suppose that the \(n\times n\) matrix \(\bs{M}\) has eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\). Then
\[
\tr(\bs{M}) = \lambda_1+\lambda_2+\cdots+\lambda_n.
\]
In other words, the trace of \(\bs{M}\) is the sum of the eigenvalues of \(\bs{M}\).
Alternatively, we have shown that
\[
p_1=\tr(\bs{M}).
\]
We now want to relate the power sums \(p_2,\ldots,p_n\) with the entries of \(\bs{M}\) but first we need the following.
If \(\lambda\) is an eigenvalue of \(\bs{M}\) then \(\lambda^k\) is an eigenvalue of \(\bs{M}^k\).
If \(\bs{M}\bs{x} = \lambda \bs{x}\) then
\[
\bs{M}^2 \bs{x} = \bs{M}(\bs{M}\bs{x} ) = \bs{M}(\lambda \bs{x} ) = \lambda \bs{M}\bs{x} = \lambda (\lambda \bs{x} ) = \lambda ^2 \bs{x} .
\]
Proceeding by induction, if \(\bs{M}^k\bs{x} = \lambda^k\bs{x}\) then
\[
\bs{M}^{k+1} \bs{x} = \bs{M}^k( \bs{M}\bs{x} ) = \bs{M}^k(\lambda \bs{x} ) = \lambda \bs{M}^k \bs{x} = \lambda \cdot \lambda^k \bs{x} = \lambda^{k+1} \bs{x} .
\]
Therefore, if \(\bs{M}\) has eigenvalues \(\lambda_1, \lambda_2, \ldots, \lambda_n\) then \(\bs{M}^k\) has eigenvalues \(\lambda_1^k, \lambda_2^k,\) \(\ldots, \lambda_n^k\).
As a consequence we obtain the following.
If \(\lambda_1,\lambda_2,\ldots,\lambda_n\) are the eigenvalues of \(\bs{M}\) then
\[
\tr(\bs{M}^k) = p_k(\lambda_1,\lambda_2,\ldots,\lambda_n)=\lambda_1^k + \lambda_2^k + \cdots + \lambda_n^k.
\]
By Lemma 2.4.7, the eigenvalues of \(\bs{M}^k\) are \(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k\). By Proposition 2.4.6 applied to \(\bs{M}^k\), it holds that
\[
\tr(\bs{M}^k) = \lambda_1^k + \lambda_2^k + \cdots + \lambda_n^k = p_k(\lambda_1,\lambda_2,\ldots,\lambda_n).
\]
Proposition 2.4.8 is important because it relates the power sums \(p_1,p_2,\ldots,p_n\) evaluated at the eigenvalues with the entries of \(\bs{M}\) via the numbers \(\tr(\bs{M}^k)\). From this we can, for example, prove the following.
Let \(\bs{M}\) and \(\bs{N}\) be \(n\times n\) matrices. Then \(\bs{M}\) and \(\bs{N}\) have the same eigenvalues if and only if \(\tr(\bs{M}^k) = \tr(\bs{N}^k)\) for \(1\leq k\leq n\).
Suppose that \(\bs{M}\) and \(\bs{N}\) have the same eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\). Then by Proposition 2.4.8 we have
\[
\tr(\bs{M}^k) = \lambda_1^k+\lambda_2^k+\cdots+\lambda_n^k
\]
and
\[
\tr(\bs{N}^k)=\lambda_1^k+\lambda_2^k+\cdots+\lambda_n^k
\]
and therefore \(\tr(\bs{M}^k) = \tr(\bs{N}^k)\) for all \(k\geq 1\). Conversely, if \(\tr(\bs{M}^k) =\tr(\bs{N}^k)\) for \(1\leq k\leq n\) then by Proposition 2.2.5 the coefficients of the characteristic polynomials of \(\bs{M}\) and \(\bs{N}\) are equal since the coefficients of the characteristic polymials can be written in terms of \(p_k = \tr(\bs{M}^k) =\tr(\bs{N}^k)\). Hence, \(\bs{M}\) and \(\bs{N}\) have the same characteristic polynomial and therefore the same eigenvalues.
For our purposes, Proposition 2.4.8 is the main tool to relate the spectrum of a graph with its structural properties. For example, we obtain the following.
Let \(G_1\) and \(G_2\) be graphs each with \(n\) vertices. Then \(G_1\) and \(G_2\) are cospectral if and only if for each \(k\in\set{1,2,\ldots, n}\), the total number of closed walks in \(G_1\) of length \(k\) equals the total number of closed walks in \(G_2\) of length \(k\).
By Theorem 2.4.9, \(G_1\) and \(G_2\) have the same eigenvalues if and only if \(\tr(\bs{A}_1^k) = \tr(\bs{A}_2^k)\) for all \(1\leq k \leq n\). By Theorem 2.1.1, \(\tr(\bs{A}_i^k)\) is the total number of closed walks of length \(k\) in \(G_i\), and the claim follows.
Let \(\spec(G) = (\lambda_1,\lambda_2,\ldots,\lambda_n)\). Suppose that \(\lambda_1^2 + \lambda_2^2 + \cdots + \lambda_n^2 = 56\). Find the number of edges of \(G\).
In the next theorem, we relate the first few coefficients of the characteristic polynomial of a graph with some of the structural properties of a graph. The main tool being used here is Proposition 2.4.8.
Let \(G\) be a graph with characteristic polynomial
\[
p(t) = t^n + c_1 t^{n-1} + c_2 t^{n-2} + \cdots + c_{n-1} t + c_n.
\]
If \(G\) has \(m\) edges, \(t\) triangles, and \(q\) cycles of length four then
\begin{align*}
c_1 &= 0\\
c_2 &= -m\\
c_3 &= -2t\\
c_4 &= - 2 q + \frac{1}{2}m(m+1) - \frac{1}{2} \sum_{i=1}^n \deg(v_i)^2.
\end{align*}
Since all the diagonal entries of \(\bs{A}\) are zero, we have
\[
c_1 = -s_1 = -\tr(\bs{A} ) = 0.
\]
From the Newton identities \eqref{eqn:newton-id}, and using \(p_1=s_1=0\), we have
\[
c_2 = s_2 = -\frac{1}{2}(p_2-p_1^2) = -\frac{1}{2} p_2.
\]
Now \(p_2=\tr(\bs{A} ^2)\) and since \(\tr(\bs{A} ^2)=2m\) (Corollary 2.1.2) we conclude that
\[
p_2=\tr(\bs{A} ^2) = 2 m.
\]
Therefore,
\[
c_2 = -\frac{1}{2}p_2 = - m.
\]
Now consider \(c_3=-s_3\). We have from the Newton identities that
\[
s_3 = \frac{1}{6} (p_1^3-3p_1p_2+2p_3) = \frac{1}{3} p_3 = \frac{1}{3}\tr(\bs{A} ^3).
\]
From Corollary 2.1.2, we have \(\tr(\bs{A} ^3)=6 t\) and therefore \(c_3=- s_3 = -2t\) as claimed. Finally, from Newton's identities we have
\[
c_4 = s_4 = -\frac{1}{4}(p_4 - s_1p_3 + s_2p_2 - s_3p_1) = -\frac{1}{4}(p_4 + s_2p_2).
\]
Now from Corollary 2.1.2, we have \(p_4 = 8q -2m + 2\sum_{i=1}^n \deg(v_i)^2\), \(p_2 = 2m\), and therefore
\begin{align*}
c_4 &= -\frac{1}{4}(p_4+s_2p_2)\\[2ex]
&= -\frac{1}{4}\left( 8q -2m + 2\sum_{i=1}^n \deg(v_i)^2 - 2m^2\right)\\[2ex]
&= -2q + \frac{1}{2} m(m+1) - \frac{1}{2}\sum_{i=1}^n \deg(v_i)^2
\end{align*}
as claimed.
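The theorem can be sanity-checked on the running five-vertex example graph, which has \(m=5\), \(t=1\), and \(q=0\). A sketch assuming NumPy (`np.poly` applied to the eigenvalues recovers the coefficients \(c_1,\ldots,c_n\)):

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]])
deg = A.sum(axis=1)
m, t, q = 5, 1, 0

coeffs = np.poly(np.linalg.eigvalsh(A))      # [1, c_1, c_2, c_3, c_4, c_5]
c4 = -2 * q + m * (m + 1) / 2 - (deg ** 2).sum() / 2
print(np.round(coeffs[1:5], 4))              # [ 0. -5. -2.  2.]
print(0, -m, -2 * t, c4)                     # 0 -5 -2 2.0, as the theorem predicts
```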
From Theorem 2.4.11 we now obtain a few graph characteristics that are shared by cospectral graphs. For example, if \(G_1\) and \(G_2\) are cospectral then they have the same characteristic polynomial. Theorem 2.4.11 then implies that \(G_1\) and \(G_2\) have the same number of edges and the same number of triangles.
If \(G_1\) and \(G_2\) are cospectral then they have the same number of edges and the same number of triangles.
A natural question to ask is whether the degree sequence is a property that must be shared by cospectral graphs. The answer in general is no. For example, in Figure 2.2 we display two non-isomorphic cospectral trees with distinct degree sequences. For trees, however, the following example shows that the sum of the squares of the degrees is equal for cospectral trees.
Suppose that \(G_1\) and \(G_2\) are graphs with at least \(n\geq 5\) vertices and suppose that they have the same number of 4-cycles. Let \(d(G_1)=(d_1,d_2,\ldots,d_n)\) and let \(d(G_2)=(\delta_1,\delta_2,\ldots,\delta_n)\) be their respective degree sequences. If \(G_1\) and \(G_2\) are cospectral show that
\[
\sum_{i=1}^n d_i^2 = \sum_{i=1}^n \delta_i^2.
\]
Solution: If \(G_1\) and \(G_2\) are cospectral then their characteristic polynomials are equal, and in particular the coefficient \(c_4\) of \(t^{n-4}\) in their characteristic polynomials are equal. Also, they must have the same number of edges. Since \(G_1\) and \(G_2\) have the same number of \(C_4\)'s, Theorem 2.4.11 implies that \(\sum_{i=1}^n d_i^2 = \sum_{i=1}^n \delta_i^2\). This example is applicable to trees since trees have no cycles.
Suppose that \(G\) is a bipartite graph with spectrum \(\spec(G) =(\lambda_1,\lambda_2,\ldots,\lambda_n)\). Prove that if \(k\) is odd then
\[
p_k(\lambda_1,\lambda_2,\ldots,\lambda_n) = \lambda_1^k + \lambda_2^k + \cdots + \lambda_n^k = 0.
\]
Solution: Let \(\set{X,Y}\) be a bipartition of \(G\). Suppose that \((w_0,w_1,\ldots,w_k)\) is a closed walk in \(G\) and without loss of generality suppose that \(w_0 \in X\). Then \(w_i \in Y\) for \(i\) odd and \(w_j \in X\) for \(j\) even. Since \(w_k = w_0 \in X\) it follows that \(k\) is necessarily even. Hence, all closed walks in \(G\) have even length. By Proposition 2.4.8, \(\tr(\bs{A}^k) = p_k(\lambda_1,\ldots,\lambda_n)\) for all \(k\geq 1\), and \(\tr(\bs{A}^k)\) is the total number of closed walks of length \(k\). Hence, \(\tr(\bs{A}^k)=0\) for \(k\) odd and the claim is proved.
The graph \(G\) has spectrum \(\spec(G) = (-2,1-\sqrt{5},0,0,1+\sqrt{5})\) and degree sequence \(d(G) = (4,3,3,3,3)\). Find the number of edges, triangles, and \(4\)-cycles in \(G\).
Use Theorem 2.4.11 to find the characteristic polynomial of \(P_5\). (Hint: \(P_5\) is bipartite.)
Exercises
Prove that if \(G\) is a \(k\)-regular graph with \(n\) vertices and \(\spec(G)=(\lambda_1,\lambda_2,\ldots,\lambda_n)\) then
\[
\sum_{i=1}^n \lambda_i^2 = kn.
\]
A \(3\)-regular graph \(G\) with \(n=8\) vertices has characteristic polynomial
\[
p(t) = t^8-12t^6-8t^5+38t^4+48t^3-12t^2-40t-15.
\]
Find the number of edges \(m\), number of triangles \(t\), and number of \(4\)-cycles \(q\) of \(G\).
Two trees \(G_1\) and \(G_2\) have degree sequence \(d(G_1)=(5, 3, 2, 2,\) \(1, 1, 1, 1, 1, 1)\) and \(d(G_2) = (3,3,2,2,2,2,1,1,1,1)\). Could \(G_1\) and \(G_2\) be cospectral? Explain.
A graph \(G\) has spectrum \(\spec(G) = (-2,-1,0,0,1,2)\). How many closed walks of length 4 are in \(G\)? What about closed walks of length 5?
A tree \(G\) has degree sequence \(d(G) = (3,2,1,1,1)\). Find the characteristic polynomial of \(G\). (Hint: Recall that a tree is bipartite, and recall Theorem 2.3.6.)
Bipartite Graphs
The following is a spectral characterization of bipartite graphs.
Let \(G\) be a graph on \(n\) vertices. The following are equivalent.
- \(G\) is bipartite.
- \(\lambda\) is a non-zero eigenvalue of \(\bs{A}\) if and only if \(-\lambda\) is an eigenvalue of \(\bs{A}\).
- \(\sum_{i=1}^n \lambda_i^k = 0\) for \(k\) odd.
- \(\tr(\bs{A}^k) = 0\) for \(k\) odd.
Suppose that \(G\) is bipartite and let \(V=V(G)\). The vertex set \(V\) can be partitioned into two disjoint parts \(X\) and \(Y\) such that any edge \(e\in E(G)\) has one vertex in \(X\) and one in \(Y\). Therefore, by an appropriate labelling of the vertices, the adjacency matrix of \(G\) takes the form
\[
\bs{A} = \begin{bmatrix}\bs{0} & \bs{B}\\[2ex] \bs{B}^T & \bs{0}\end{bmatrix}
\]
where \(\bs{B}\) is a \(|X|\times |Y|\) matrix. Suppose that \(\xi=\left[\begin{smallmatrix}\xi_1\\ \xi_2\end{smallmatrix}\right]\) is an eigenvector of \(\bs{A}\) with eigenvalue \(\lambda\neq 0\), where \(\xi_1\in\real^{|X|}\) and \(\xi_2\in\real^{|Y|}\). Then \(\bs{A} \xi = \lambda\xi\) implies that
\[
\bs{A} \xi = \begin{bmatrix}\bs{B}\xi_2\\ \bs{B}^T\xi_1\end{bmatrix} = \lambda \begin{bmatrix}\xi_1\\ \xi_2\end{bmatrix}.
\]
Consider the vector \(\tilde{\xi} = \left[\begin{smallmatrix}-\xi_1\\ \xi_2\end{smallmatrix}\right]\). Then
\[
\bs{A} \tilde{\xi} = \begin{bmatrix}\bs{B}\xi_2\\ -\bs{B}^T\xi_1\end{bmatrix} = \begin{bmatrix}\lambda \xi_1\\ -\lambda \xi_2\end{bmatrix} = -\lambda \begin{bmatrix}-\xi_1\\ \xi_2\end{bmatrix} = -\lambda\tilde{\xi}.
\]
Hence, \(\tilde{\xi}\) is an eigenvector of \(\bs{A}\) with eigenvalue \(-\lambda\). This proves (i) \(\Longrightarrow\) (ii). Now assume that (ii) holds. Then there is an even number of non-zero eigenvalues, say \(\pm \lambda_1,\pm \lambda_2,\ldots,\pm \lambda_r\), where \(n=2r+q\) and \(q\) is the number of zero eigenvalues. If \(k\) is odd then \(\sum_{i=1}^n \lambda_i^k = 0\). This proves (ii) \(\Longrightarrow\) (iii). Now assume that (iii) holds. Since \(\tr(\bs{A}^k) = \sum_{i=1}^n \lambda_i^k\) it follows that \(\tr(\bs{A}^k)=0\) for \(k\) odd. This proves (iii) \(\Longrightarrow\) (iv). Now assume (iv) holds. The \((i,i)\) entry of \(\bs{A}^k\) is the number of closed walks of length \(k\) starting and ending at vertex \(i\). Hence, the total number of cycles of length \(k\) in \(G\) is bounded above by \(\tr(\bs{A}^k)\). By assumption, if \(k\) is odd then \(\tr(\bs{A}^k)=0\) and thus there are no cycles of odd length in \(G\). This implies that \(G\) is bipartite and proves (iv) \(\Longrightarrow\) (i). This ends the proof.
From the previous theorem, it follows that if \(\pm\lambda_1,\pm\lambda_2,\ldots,\pm\lambda_r\) are the non-zero eigenvalues of \(\bs{A}\) and \(\bs{A}\) has the eigenvalue zero with multiplicity \(p\geq 0\), then the characteristic polynomial of \(\bs{A}\) is
\[
p(t) = t^p (t^2-\lambda_1^2)(t^2-\lambda_2^2)\cdots (t^2-\lambda_r^2).
\]
From this it follows that at least half of the coefficients \(s_1,s_2,\ldots,s_n\) in the expansion of \(p(t)\) are zero, namely all the odd-indexed coefficients. For example, if say \(r=3\) and \(p=1\) then
\begin{align*}
p(t) &= t(t^2-\lambda_1^2)(t^2-\lambda_2^2)(t^2-\lambda_3^2)\\[2ex]
&= t\left(t^6 - (\lambda_1^2+\lambda_2^2+\lambda_3^2)t^4 + (\lambda_1^2\lambda_2^2+\lambda_1^2\lambda_3^2+\lambda_2^2\lambda_3^2)t^2 - \lambda_1^2\lambda_2^2\lambda_3^2\right)
\end{align*}
contains only odd powers of \(t\), and comparing with the expansion \eqref{eqn:g-poly} of \(p(t)\), the coefficients of \(t^6\), \(t^4\), \(t^2\), and the constant term must vanish,
so that \(s_1=s_3=s_5=s_7=0\). Another way to see this is using the Newton identities.
Let \(G\) be a bipartite graph on \(n\) vertices and let \(p(t)\) be the characteristic polynomial of \(G\):
\[
p(t) = t^n - s_1 t^{n-1} + s_2 t^{n-2} + \cdots + (-1)^{n-1}s_{n-1} t + (-1)^ns_n.
\]
Then \(s_k=0\) for \(k\) odd.
Using the Newton identities, we have
\[
s_k = \tfrac{1}{k}(-1)^{k-1}\sum_{j=0}^{k-1} (-1)^{j}\, s_j\, p_{k-j}
\]
for \(1\leq k\leq n\), where \(s_0=1\). If \(G\) is bipartite then \(p_\ell = \tr(\bs{A} ^\ell)=0\) for all odd \(\ell\geq 1\). Let \(k\) be odd and assume by induction that \(s_1,s_3,\ldots,s_{k-2}\) are all zero. Then the only terms \(p_{k-j}s_j\) that survive in the expression for \(s_k\) are those where \(j\) is even. If \(j\) is even then \(k-j\) is odd, and thus \(p_{k-j}=0\). Hence \(s_k=0\) as claimed.
As a consequence of Corollary 2.5.2, if \(G\) is a bipartite graph on \(n\) vertices and \(n\) is odd, then \(s_n=0\) and thus \(\lambda=0\) is an eigenvalue of \(\bs{A}\).
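As a closing numerical illustration (a sketch assuming NumPy), the bipartite graph \(C_6\) has characteristic polynomial \(t^6-6t^4+9t^2-4\), in which all odd-indexed coefficients vanish:

```python
import numpy as np

n = 6
A = np.zeros((n, n), dtype=int)          # adjacency matrix of the bipartite C_6
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

coeffs = np.poly(np.linalg.eigvalsh(A))
print(np.round(coeffs, 4))   # [ 1.  0. -6.  0.  9.  0. -4.]: odd coefficients are zero
```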