The Laplacian and Signless Laplacian Matrices

We first define the incidence matrix of a graph.
Let \(G=(V,E)\) be a graph where \(V=\{v_1,v_2,\ldots,v_n\}\) and \(E=\{e_1,e_2,\) \(\ldots,e_m\}\). The incidence matrix of \(G\) is the \(n\times m\) matrix \(\bs{M}\) such that \[ \bs{M}(i,j) = \begin{cases} 1, & \text{if \(v_i \in e_j\)}\\[2ex] 0, & \text{otherwise.}\end{cases} \]
Hence, the rows of \(\bs{M}\) are indexed by the vertices of \(G\) and the columns of \(\bs{M}\) are indexed by the edges of \(G\). Column \(\bs{M}(:, j)\) has exactly two non-zero entries, and their indices are those of the two vertices incident with edge \(e_j\). Similarly, the non-zero entries of row \(\bs{M}(i, :)\) correspond to the edges incident to vertex \(v_i\). Hence, \(\sum_{j=1}^m \bs{M}(i, j) = \deg(v_i)\).
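These observations are easy to check numerically. The following Python sketch (using NumPy; the 4-vertex graph is an arbitrary illustration, not one of the graphs in the figure) builds an incidence matrix and verifies that every column has exactly two ones and that each row sum is the corresponding vertex degree.

```python
import numpy as np

# An illustrative 4-vertex graph (hypothetical, not from the figure):
# edges e1={v1,v2}, e2={v2,v3}, e3={v3,v4}, e4={v4,v1}, e5={v1,v3}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)

# Build the n x m incidence matrix M: M(i, j) = 1 iff v_i is in e_j.
M = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    M[u, j] = 1
    M[v, j] = 1

# Every column has exactly two non-zero entries ...
assert (M.sum(axis=0) == 2).all()
# ... and each row sum is the degree of the corresponding vertex.
degrees = M.sum(axis=1)
assert list(degrees) == [3, 2, 3, 2]
```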
Find the incidence matrices of the graphs given below.
figures/incidence-matrix.svg
Two graphs
The signless Laplacian matrix of \(G\) is the \(n\times n\) matrix defined as \[ \bs{Q}(G):= \bs{M} \bs{M}^T. \] When no confusion arises we write \(\bs{Q}\) instead of \(\bs{Q}(G)\). Notice that \[ \bs{Q}^T = (\bs{M}\bs{M}^T)^T = (\bs{M}^T)^T \bs{M}^T = \bs{M}\bs{M}^T \] and thus \(\bs{Q}\) is a symmetric matrix. We now find an alternative expression for \(\bs{Q}\). Let \(\bs{D}\) be the \(n\times n\) diagonal matrix whose \(i\)th diagonal entry is \(\bs{D}(i,i)=\deg(v_i)\). The matrix \(\bs{D}\) is called the degree matrix of \(G\).
For any graph \(G\) it holds that \(\bs{Q} = \bs{D} + \bs{A}\).
We have that \[ \bs{Q}(i, j) = \bs{M}(i, :) \bs{M}^T(:, j) = \sum_{k=1}^m \bs{M}(i, k)\bs{M}^T(k,j) = \sum_{k=1}^m \bs{M}(i, k)\bs{M}(j, k). \] If \(i=j\) then \[ \bs{Q}(i, i) = \sum_{k=1}^m \bs{M}(i, k)\bs{M}(i, k) = \sum_{k=1}^m\bs{M}(i,k) = \deg(v_i). \] On the other hand, if \(i\neq j\) then the product \(\bs{M}(i,k)\bs{M}(j,k)\) is non-zero if and only if \(\bs{M}(i,:)\) and \(\bs{M}(j,:)\) both have a non-zero entry in column \(k\), that is, if and only if \(v_i\) and \(v_j\) are both incident with the edge \(e_k\). Hence \(\bs{Q}(i,j)=1\) if \(v_i\sim v_j\) and \(\bs{Q}(i,j)=0\) otherwise, that is, \(\bs{Q}(i,j)=\bs{A}(i,j)\). Therefore \(\bs{Q}=\bs{D}+\bs{A}\).
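The identity \(\bs{Q}=\bs{D}+\bs{A}\) can be verified numerically; the small graph below is an illustrative choice, not tied to the figures.

```python
import numpy as np

# Illustrative 4-vertex graph; build M (incidence) and A (adjacency) together.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
M = np.zeros((n, len(edges)), dtype=int)
A = np.zeros((n, n), dtype=int)
for j, (u, v) in enumerate(edges):
    M[u, j] = M[v, j] = 1
    A[u, v] = A[v, u] = 1

Q = M @ M.T                  # signless Laplacian Q = M M^T
D = np.diag(A.sum(axis=1))   # degree matrix
assert (Q == D + A).all()    # the identity Q = D + A
assert (Q == Q.T).all()      # Q is symmetric
```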
Before we can define the Laplacian matrix of a graph we need the notion of an orientation on a graph. An orientation of \(G\) is an assignment of a direction to each edge \(e\in E\) by declaring one vertex incident with \(e\) as the head and the other vertex as the tail. Formally, an orientation of \(G\) is a function \(g:E(G)\rightarrow V(G)\times V(G)\) such that \(g(\{u,v\})\) is equal to one of \((u,v)\) or \((v,u)\). If \(g(\{u,v\}) = (u,v)\) then we say that \(u\) is the tail and \(v\) is the head of the edge \(e=\{u,v\}\).
Let \(G=(V,E)\) be a graph where \(V=\{v_1,v_2,\ldots,v_n\}\) and \(E=\{e_1,e_2,\) \(\ldots,e_m\}\), and let \(g:E\rightarrow V\times V\) be an orientation of \(G\). The oriented incidence matrix \(\bs{N}\) of \(G\) is the \(n\times m\) matrix defined by \[ \bs{N}(i, j) = \begin{cases}1, &\text{ if \(v_i\) is the head of \(e_j\)} \\ -1, &\text{ if \(v_i\) is the tail of \(e_j\) }\\0, &\text{ if \(v_i\notin e_j\).} \end{cases} \]
The Laplacian matrix of \(G\) relative to the orientation \(g\) is the \(n\times n\) matrix \[ \bs{L}(G):=\bs{N} \bs{N}^T. \] As with the signless Laplacian matrix, the Laplacian matrix is a symmetric matrix. When no confusion arises, we write \(\bs{L}\) instead of \(\bs{L}(G)\).
Assign an orientation to the left graph in Figure 4.1 and compute the associated oriented incidence matrix \(\bs{N}\). Then compute \(\bs{L}=\bs{N}\bs{N}^T\).
For any graph \(G\) it holds that \(\bs{L}(G) = \bs{D} - \bs{A}\). Consequently, \(\bs{L}\) is independent of the orientation chosen.
The proof is similar to that of the signless Laplacian matrix. That \(\bs{L}\) is independent of the orientation follows since \(\bs{D}\) and \(\bs{A}\) are independent of any orientation.
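The orientation independence can be observed directly: below (a numerical sketch on an illustrative graph) two different orientations produce the same Laplacian, equal to \(\bs{D}-\bs{A}\).

```python
import numpy as np

# Two different orientations of the same illustrative graph give the same
# Laplacian L = N N^T = D - A.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1
D = np.diag(A.sum(axis=1))

def oriented_incidence(flip):
    # flip[j] = True swaps the head and tail of edge e_j
    N = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        tail, head = (v, u) if flip[j] else (u, v)
        N[head, j] = 1
        N[tail, j] = -1
    return N

N1 = oriented_incidence([False] * 5)
N2 = oriented_incidence([True, False, True, False, True])
L1, L2 = N1 @ N1.T, N2 @ N2.T
assert (L1 == D - A).all() and (L2 == D - A).all()
assert (L1 == L2).all()      # L does not depend on the orientation
```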
Let \(\bs{e}=(1,1,\ldots,1)\) be the all ones vector. Then \[ \bs{L} \bs{e} = \bs{D} \bs{e} - \bs{A} \bs{e} = \begin{bmatrix}\deg(v_1)\\ \deg(v_2) \\ \vdots \\ \deg(v_n)\end{bmatrix} - \begin{bmatrix}\deg(v_1)\\ \deg(v_2) \\ \vdots \\ \deg(v_n)\end{bmatrix} = \bs{0}. \] Therefore \(\lambda = 0\) is an eigenvalue of \(\bs{L}\) with corresponding eigenvector \(\bs{e}\). We now show that \(\bs{Q}\) and \(\bs{L}\) have non-negative eigenvalues. To that end, we say that a symmetric matrix \(\bs{Z}\) is positive semi-definite if \(\bs{x} ^T \bs{Z} \bs{x}\geq 0\) for all \(\bs{x}\) and is positive definite if \(\bs{x}^T\bs{Z} \bs{x} \gt 0\) for all non-zero \(\bs{x}\).
A symmetric matrix \(\bs{Z}\) is positive definite if and only if every eigenvalue of \(\bs{Z}\) is positive. Similarly, \(\bs{Z}\) is positive semi-definite if and only if every eigenvalue of \(\bs{Z}\) is non-negative.
Since \(\bs{Z}\) is symmetric, there exists an orthonormal basis \(\bs{x}_1,\bs{x}_2,\ldots,\bs{x}_n\) of \(\real^n\) consisting of eigenvectors of \(\bs{Z}\). Thus, \(\bs{x}_i^T\bs{x}_j=0\) if \(i\neq j\) and \(\bs{x}_i^T\bs{x}_i=1\). Let \(\lambda_1,\lambda_2,\ldots,\lambda_n\) denote the corresponding eigenvalues, that is, \(\bs{Z}\bs{x}_i = \lambda_i \bs{x}_i\). Suppose that \(\bs{Z}\) is positive definite (the proof for positive semi-definiteness is identical). Then \(\bs{x}^T\bs{Z}\bs{x} \gt 0\) for all non-zero \(\bs{x}\). Now, \[ \bs{x}_i^T \bs{Z} \bs{x}_i = \bs{x}_i ^T (\lambda_i \bs{x}_i) = \lambda_i \bs{x}_i^T \bs{x}_i = \lambda_i. \] Therefore, \(\lambda_i = \bs{x}_i^T \bs{Z}\bs{x}_i \gt 0\). This shows that if \(\bs{Z}\) is positive definite then all eigenvalues of \(\bs{Z}\) are positive. Conversely, suppose that \(\bs{Z}\) has all positive eigenvalues and let \(\bs{x}\) be an arbitrary non-zero vector. Since \(\bs{x}_1,\bs{x}_2,\ldots,\bs{x}_n\) is a basis for \(\real^n\), there are constants \(c_1,\ldots,c_n\), not all zero, such that \(\bs{x}=c_1\bs{x}_1+c_2\bs{x}_2+\cdots+c_n\bs{x}_n\). Then, by the orthonormality of the \(\bs{x}_i\), \[ \bs{x}^T \bs{Z} \bs{x} = c_1^2\lambda_1+c_2^2\lambda_2+\cdots+c_n^2\lambda_n \] and since at least one \(c_i\) is non-zero and all eigenvalues are positive, we conclude that \(\bs{x}^T\bs{Z}\bs{x} \gt 0\).
The Laplacian and signless Laplacian matrices are positive semi-definite.
Recall that \(\bs{L}\bs{e}=\bs{0}\) and thus \(\bs{e}^T\bs{L}\bs{e}=0\). Now, by definition of \(\bs{L}\), for any vector \(\bs{x}\) we have \[ \bs{x} ^T\bs{L} \bs{x} = \bs{x}^T \bs{N} \bs{N}^T \bs{x} = (\bs{N}^T \bs{x})^T \cdot (\bs{N}^T \bs{x}) = \| \bs{N}^T\bs{x} \|^2 \geq 0. \] We conclude that \(\bs{x}^T \bs{L} \bs{x} \geq 0\) for all \(\bs{x}\), and therefore \(\bs{L}\) is positive semi-definite. The proof for \(\bs{Q}\) is identical.
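Each coordinate of \(\bs{N}^T\bs{x}\) has the form \(x_i - x_j\) for an edge \(v_iv_j\), so the quadratic form equals \(\sum_{v_iv_j\in E}(x_i-x_j)^2\). A quick numerical check of both facts, on an illustrative graph with an arbitrary orientation and a random vector:

```python
import numpy as np

# For a random x, check x^T L x = ||N^T x||^2 = sum over edges (x_i - x_j)^2 >= 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
N = np.zeros((n, len(edges)))
for j, (u, v) in enumerate(edges):
    N[u, j], N[v, j] = 1.0, -1.0    # arbitrary orientation: u is head, v is tail
L = N @ N.T

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
quad = x @ L @ x
edge_sum = sum((x[u] - x[v]) ** 2 for u, v in edges)
assert np.isclose(quad, edge_sum)
assert quad >= 0
```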
Since \(\bs{L}\) is a symmetric matrix, and as we have just shown is positive semi-definite, the eigenvalues of \(\bs{L}\) can be ordered as \[ 0=\mu_1\leq \mu_2\leq\mu_3\leq\cdots\leq \mu_n. \] The Laplacian matrix reveals many useful connectivity properties of a graph.
A graph \(G\) is connected if and only if \(\mu_1=0\) is a simple eigenvalue of \(\bs{L}\). Moreover, the algebraic multiplicity of \(\mu_1\) is the number of components of \(G\).
We first recall that \(\bs{e}\) is an eigenvector of \(\bs{L}\) with eigenvalue \(\mu_1=0\). Suppose that \(G=G_1\oplus G_2\oplus \cdots\oplus G_k\). For any vector \(\bs{x}\) we have \[ \bs{x}^T \bs{L} \bs{x} = \bs{x}^T\bs{N}\bs{N}^T\bs{x} = \|\bs{N}^T\bs{x}\|^2 = \sum_{v_iv_j\in E} (x_i - x_j)^2. \] Since \(E(G) = E(G_1)\sqcup E(G_2)\sqcup \cdots \sqcup E(G_k)\) (where \(\sqcup\) denotes disjoint union) we can write \[ \bs{x}^T\bs{L}\bs{x} = \sum_{v_iv_j\in E(G_1)} (x_i - x_j)^2 + \sum_{v_iv_j\in E(G_2)} (x_i - x_j)^2 + \cdots + \sum_{v_iv_j\in E(G_k)} (x_i - x_j)^2. \] Suppose now that \(\bs{L}\bs{x}=\bs{0}\), that is, \(\bs{x}\) is an eigenvector of \(\bs{L}\) with eigenvalue \(\mu_1\). Then \(\bs{x}^T\bs{L}\bs{x} = 0\) and from our computation above we deduce that \(\sum_{v_iv_j\in E(G_a)} (x_i - x_j)^2 = 0\) for each component \(G_a\) of \(G\). Hence, the entries of \(\bs{x}\) are equal on each component of \(G\). If \(G\) is connected then \(\bs{x}\) has all entries equal and thus \(\bs{x}\) is a multiple of \(\bs{e}\). This proves that the geometric multiplicity, and thus the algebraic multiplicity, of \(\mu_1\) is one and thus \(\mu_1\) is a simple eigenvalue. Conversely, assume that \(G\) is disconnected with components \(G_1, G_2, \ldots, G_k\) where \(k\geq 2\), and let \(n=|V(G)|\). Let \(\bs{z}_i \in\real^n\) be the vector with entries equal to 1 on each vertex of component \(G_i\) and zero elsewhere. Then \(\bs{N}^T \bs{z}_i = \bs{0}\) and therefore \(\bs{L}\bs{z}_i = \bs{N}\bs{N}^T \bs{z}_i = \bs{0}\). Since \(\bs{z}_1, \bs{z}_2, \ldots, \bs{z}_k\) is a linearly independent set of vectors, this proves that the multiplicity of \(\mu_1\) is at least \(k\). On the other hand, we showed above that any eigenvector for \(\mu_1\) is constant on each component of \(G\), and hence is a linear combination of \(\bs{z}_1, \bs{z}_2, \ldots, \bs{z}_k\). Therefore \(\mu_1\) has algebraic multiplicity exactly \(k\).
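The theorem is easy to observe numerically. In the sketch below the graph is the disjoint union of a triangle and a single edge, so \(\mu_1=0\) should appear with multiplicity 2, one zero eigenvalue per component.

```python
import numpy as np

# Disjoint union of a triangle (v1, v2, v3) and a single edge (v4, v5):
# two components, so 0 should be a Laplacian eigenvalue of multiplicity 2.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

mu = np.linalg.eigvalsh(L)               # eigenvalues in ascending order
num_zero = int(np.isclose(mu, 0).sum())
assert num_zero == 2                     # one zero per component
```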
Since \(\bs{Q}\) is a symmetric matrix and is positive semi-definite, the eigenvalues of \(\bs{Q}\) can be ordered as \[ 0\leq q_1 \leq q_2 \leq \cdots\leq q_n. \] Note that in general we can only say that \(0\leq q_1\).
The signless Laplacian matrix of the graph on the left in Figure 4.1 is \[ \bs{Q} = \begin{bmatrix} 2&0&1&1&0&0\\ 0&2&1&1&0&0\\ 1&1&3&1&0&0\\ 1&1&1&5&1&1\\ 0&0&0&1&2&1\\ 0&0&0&1&1&2 \end{bmatrix} \] and \(\det(\bs{Q}) = 80\). Hence, \(0 \lt q_1\).
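The determinant quoted above is easy to verify numerically:

```python
import numpy as np

# The signless Laplacian from the example above; its determinant should be 80,
# so all eigenvalues are positive and in particular 0 < q_1.
Q = np.array([
    [2, 0, 1, 1, 0, 0],
    [0, 2, 1, 1, 0, 0],
    [1, 1, 3, 1, 0, 0],
    [1, 1, 1, 5, 1, 1],
    [0, 0, 0, 1, 2, 1],
    [0, 0, 0, 1, 1, 2],
])
det_Q = round(np.linalg.det(Q))
assert det_Q == 80
```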
Suppose that \(G\) is connected. The least eigenvalue of \(\bs{Q}\) is \(q_1=0\) if and only if \(G\) is bipartite. In this case, 0 is a simple eigenvalue.
As in the proof for the Laplacian matrix, for any \(\bs{x}\in\real^n\) we have \[ \bs{x}^T\bs{Q}\bs{x} = \bs{x}^T\bs{M}\bs{M}^T\bs{x} = \|\bs{M}^T \bs{x}\|^2 = \sum_{v_iv_j\in E} (x_i + x_j)^2. \] Suppose that \(\bs{x}=(x_1,x_2,\ldots,x_n)\) is an eigenvector of \(\bs{Q}\) with eigenvalue \(q_1=0\). Then \(\bs{x}^T\bs{Q}\bs{x}=0\) and therefore \(x_i = -x_j\) for every edge \(v_iv_j\in E\). Let \(C_1 = \set{v_i\in V \;|\; x_i \gt 0}\), \(C_2 = \set{v_j\in V\;|\; x_j \lt 0}\), and \(C_3 = \set{v_k\in V\; |\; x_k = 0}\). Since \(\bs{x}\) is a non-zero vector, \(C_1\) and \(C_2\) are non-empty, and moreover \(C_1\cap C_2=\emptyset\). Any vertex in \(C_3\) is not adjacent to any vertex in \(C_1\) or \(C_2\). Indeed, if \(v_k \in C_3\) and \(v_k\sim v_i\) then necessarily \(x_i=-x_k=0\) and thus \(v_i\in C_3\). Since \(G\) is connected this implies that \(C_3=\emptyset\). This proves that \(C_1\) and \(C_2\) form a partition of \(V(G)\). Moreover, if \(v_iv_j\in E\) and \(v_i \in C_1\) then necessarily \(v_j \in C_2\), and vice-versa. This proves that \(\set{C_1,C_2}\) is a bipartition of \(G\), and thus \(G\) is bipartite. Now suppose that \(G\) is bipartite and let \(\set{X,Y}\) be a bipartition of \(G\). Let \(\alpha\neq 0\) and let \(\bs{x}\) be the vector whose entries on \(X\) are \(\alpha\) and on \(Y\) are \(-\alpha\). Thus, if \(\bs{M}\) denotes the incidence matrix of \(G\) then \(\bs{M}^T \bs{x} = \bs{0}\). Therefore \(\bs{Q}\bs{x} = \bs{M}\bs{M}^T\bs{x} = \bs{0}\) and thus \(\bs{x}\) is an eigenvector of \(\bs{Q}\) with eigenvalue \(q_1=0\). Now suppose that \(\bs{z}\) is another eigenvector of \(\bs{Q}\) with eigenvalue \(q_1=0\). Then \(\bs{z}^T\bs{Q}\bs{z} = \|\bs{M}^T\bs{z}\|^2 = 0\) implies that \(\bs{M}^T\bs{z}=\bs{0}\), and thus \(z_i = -z_j\) for every edge \(v_iv_j\in E\). Since \(G\) is connected, \(\bs{z}\) is completely determined by its value at any one vertex \(v_i\), since there is a path from \(v_i\) to any vertex in \(G\). Thus \(\bs{z}\) is a multiple of \(\bs{x}\), and this proves that \(q_1=0\) is a simple eigenvalue.
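As a concrete illustration of the theorem, \(C_4\) is connected and bipartite, so numerically its signless Laplacian should have \(q_1=0\) as a simple least eigenvalue, with the \(\pm\alpha\) eigenvector constructed in the proof (a sketch):

```python
import numpy as np

# C_4 with bipartition {v1, v3} and {v2, v4}.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
M = np.zeros((n, len(edges)))
for j, (u, v) in enumerate(edges):
    M[u, j] = M[v, j] = 1
Q = M @ M.T

q = np.linalg.eigvalsh(Q)          # ascending order
assert np.isclose(q[0], 0)         # q_1 = 0 ...
assert q[1] > 1e-8                 # ... and it is a simple eigenvalue

x = np.array([1.0, -1.0, 1.0, -1.0])   # alpha on X, -alpha on Y
assert np.allclose(Q @ x, 0)           # eigenvector for eigenvalue 0
```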
For any graph \(G\), the multiplicity of \(q_1=0\) as an eigenvalue of \(\bs{Q}\) is the number of bipartite components of \(G\).
Prove that \(\bs{L}(G) + \bs{L}(\overline{G}) = n\bs{I} - \bs{J}\) and use it to show that if \(\spec(\bs{L}) = (0, \mu_2, \mu_3, \ldots, \mu_n)\) then \(\spec(\overline{\bs{L}}) = (0, n-\mu_n, n-\mu_{n-1}, \ldots, n - \mu_2)\) where \(\overline{\bs{L}}\) is the Laplacian of \(\overline{G}\).
Suppose that the adjacency matrix of \(G\) has eigenvalues \(\lambda_1\leq \lambda_2\leq \cdots \leq \lambda_n\). If \(G\) is a \(k\)-regular graph, find the eigenvalues of \(\bs{L}\) and \(\bs{Q}\).
Find the Laplacian and signless Laplacian eigenvalues of the complete graph \(K_n\).

Exercises

Label the vertices of \(C_4\) so that \(v_i\sim v_{i+1}\) for \(i=1,2,3\) and \(v_4\sim v_1\). Find the Laplacian matrix of \(C_4\). Do the same for \(C_5\). What about for \(C_n\) for arbitrary \(n\geq 4\)?
Recall that for any \(n\times n\) matrix \(\bs{Z}\) with eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\), if \[ \det(t\bs{I}-\bs{Z}) = t^n - s_1t^{n-1} + s_2 t^{n-2}+ \cdots + (-1)^n s_n \] is the characteristic polynomial of \(\bs{Z}\) then \begin{align*} s_1 &= \tr(\bs{Z})=\lambda_1+\lambda_2+\cdots+\lambda_n \\ s_n &= \det(\bs{Z})=\lambda_1\lambda_2\cdots\lambda_n \end{align*} Using this fact, find the coefficient of \(t^{n-1}\) of the characteristic polynomial \(\det(t\bs{I}-\bs{L})\) for any Laplacian matrix \(\bs{L}\). What about the constant term of \(\det(t\bs{I}-\bs{L})\)?
Let \(G_1\) be a graph with \(n_1\) vertices and Laplacian eigenvalues \(0=\alpha_1\leq \alpha_2\leq\cdots\leq\alpha_{n_1}\), and let \(G_2\) be a graph with \(n_2\) vertices and Laplacian eigenvalues \(0=\beta_1\leq\beta_2\leq\cdots\leq\beta_{n_2}\). In this problem you are going to find the Laplacian eigenvalues of \(G=G_1\vee G_2\). Recall that \(G\) is obtained by taking the union of \(G_1\) and \(G_2\) and then connecting each vertex in \(G_1\) to every vertex in \(G_2\). Hence \(|V(G)| = n_1 + n_2\) and \(E(G) = E(G_1) \cup E(G_2) \cup \set{\set{u, v}\; | \; u \in V(G_1), v\in V(G_2)}\).
  1. Suppose that the vertices of \(G\) are labelled so that the first \(n_1\) vertices are from \(G_1\) and the next \(n_2\) vertices are from \(G_2\). Let \(\bs{L}_1 = \bs{L}(G_1)\) and \(\bs{L}_2 = \bs{L}(G_2)\), and we note that \(\bs{L}_1\) is an \(n_1\times n_1\) matrix and \(\bs{L}_2\) is an \(n_2\times n_2\) matrix. Explain why \[ \bs{L}(G) = \begin{bmatrix} \bs{L}_1 + n_2\bs{I} & -\bs{J} \\[2ex] -\bs{J} & \bs{L}_2 + n_1\bs{I} \end{bmatrix}, \] where as usual \(\bs{I}\) is the identity matrix and \(\bs{J}\) is the all ones matrix, each of appropriate size.
  2. Consider the vector \(\bs{z} = (n_2, n_2, \ldots, n_2, -n_1,-n_1, \ldots, -n_1)\) where \(n_2\) appears \(n_1\) times and \(-n_1\) appears \(n_2\) times. Note that \(\bs{z}\) can be written as \(\bs{z} = (n_2\bs{e}, -n_1\bs{e})\) where \(\bs{e}\) is the all ones vector of appropriate size. Show that \(\bs{z}\) is an eigenvector of \(\bs{L}(G)\) and find the corresponding eigenvalue.
  3. Suppose that \(\bs{x}\in\real^{n_1}\) is an eigenvector of \(\bs{L}_1\) with eigenvalue \(\alpha_i\) for \(i\geq 2\). Let \(\bs{z} = (\bs{x}, \bs{0}_{n_2})\) where \(\bs{0}_{n_2}\) is the zero vector in \(\real^{n_2}\). Using the fact that \(\bs{e}^T \bs{x} = 0\), show that \(\bs{z}\) is an eigenvector of \(\bs{L}\) with eigenvalue \(n_2+\alpha_i\). Hence, this shows that \(n_2+\alpha_2, \ldots, n_2+\alpha_{n_1}\) are eigenvalues of \(\bs{L}\).
  4. Suppose that \(\bs{y}\in\real^{n_2}\) is an eigenvector of \(\bs{L}_2\) with eigenvalue \(\beta_j\) for \(j\geq 2\). Let \(\bs{z} = (\bs{0}_{n_1}, \bs{y})\) where \(\bs{0}_{n_1}\) is the zero vector in \(\real^{n_1}\). Using the fact that \(\bs{e}^T \bs{y} = 0\), show that \(\bs{z}\) is an eigenvector of \(\bs{L}\) with eigenvalue \(n_1+\beta_j\). Hence, this shows that \(n_1+\beta_2, \ldots, n_1+\beta_{n_2}\) are eigenvalues of \(\bs{L}\).
  5. Parts 2, 3, and 4 produce \(n_1 + n_2 - 1\) eigenvalues of \(\bs{L}\). What is the missing eigenvalue of \(\bs{L}\)?
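Before proving the claims in this exercise, they can be sanity-checked numerically. In the sketch below \(G_1 = K_2\) (Laplacian eigenvalues \(0, 2\)) and \(G_2 = \overline{K_2}\) (eigenvalues \(0, 0\)), so the exercise predicts the spectrum of \(\bs{L}(G_1\vee G_2)\) to be \(\{0,\; n_1+\beta_2,\; n_2+\alpha_2,\; n_1+n_2\} = \{0, 2, 4, 4\}\):

```python
import numpy as np

# G = K_2 join (complement of K_2): vertices 0, 1 from G1, vertices 2, 3 from G2.
n1, n2 = 2, 2
edges = [(0, 1)]                                                  # the edge of G1
edges += [(u, v) for u in range(n1) for v in range(n1, n1 + n2)]  # join edges
n = n1 + n2
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

mu = np.linalg.eigvalsh(L)
assert np.allclose(mu, [0, 2, 4, 4])   # matches the predicted spectrum
```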

The Matrix Tree Theorem

Recall that \(H\) is a subgraph of \(G\) if \(V(H)\subset V(G)\) and \(E(H)\subset E(G)\). A subgraph \(H\) of \(G\) is a spanning subgraph of \(G\) if \(V(H)=V(G)\). Hence, a spanning subgraph of \(G\) is obtained by deleting some of the edges of \(G\) but keeping all vertices. If \(H\) is a spanning subgraph of \(G\) and \(H\) is a tree then we say that \(H\) is a spanning tree of \(G\).
A graph \(G\) is connected if and only if \(G\) has a spanning tree.
If \(G\) has a spanning tree \(H\) then clearly \(G\) is connected since any two vertices are joined by a path in \(H\), which is also a path in \(G\). Now suppose that \(G\) is connected; we induct on the number of edges. If \(|E(G)| = 1\) then \(G=P_2\), which is itself a tree, and thus \(G\) has a spanning tree. Assume that every connected graph with \(m\geq 1\) edges has a spanning tree, and let \(G\) be a connected graph with \(m+1\) edges. If \(G\) is a tree we are done, so suppose \(G\) is not a tree. Pick any cycle \(C_k\) in \(G\) and delete any edge \(e\) of \(C_k\). Then \(G-e\) is connected and has \(m\) edges. By induction, \(G-e\) has a spanning tree, say \(H\), and \(H\) is then a spanning tree for \(G\).
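The inductive proof deletes cycle edges one at a time; read in reverse, it says that keeping exactly the edges that do not close a cycle yields a spanning tree. The sketch below implements that greedy construction with a simple union-find (the function name and example graph are illustrative choices):

```python
# Greedy spanning-tree construction: keep an edge iff it joins two different
# components (i.e., it does not close a cycle), tracked with union-find.
def spanning_tree(n, edges):
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path compression
            u = parent[u]
        return u

    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:         # u and v are in different components: keep the edge
            parent[ru] = rv
            tree.append((u, v))
    return tree              # has n - 1 edges exactly when the graph is connected

# C_4 with a chord: every spanning tree has 3 edges.
T = spanning_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
assert len(T) == 3
```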
Find all of the spanning trees of the graph \(G\) shown below.
figures/find-spanning-trees.svg
The Matrix Tree theorem provides a way to count the number of spanning trees in a graph \(G\) using the cofactors of the Laplacian matrix \(\bs{L}\). Recall that for any \(n\times n\) matrix \(\bs{Z}\), the \((i,j)\)-cofactor of \(\bs{Z}\) is \((-1)^{i+j}\det(\bs{Z}_{i,j})\) where \(\bs{Z}_{i,j}\) is the \((n-1)\times (n-1)\) matrix obtained by deleting the \(i\)th row and the \(j\)th column of \(\bs{Z}\). Clearly, if \(\bs{Z}\) is an integer matrix then each cofactor is an integer. The cofactor matrix of \(\bs{Z}\) is the \(n\times n\) matrix \(\cof(\bs{Z})\) with entries \(\cof(\bs{Z})(i,j) =(-1)^{i+j}\det(\bs{Z}_{i,j})\). Using the definition of the determinant, one can show that \begin{equation}\label{eqn:cof} \bs{Z}\cof(\bs{Z})^T = \det(\bs{Z})\bs{I}. \end{equation} Moreover, if \(\bs{Z}\) is symmetric then \(\cof(\bs{Z})\) is also symmetric.
For any graph \(G\), there exists an integer \(\tau(G)\) such that \(\cof(\bs{L}) = \tau(G)\bs{J}\), in other words, \[ \tau(G) = (-1)^{i+j} \det(\bs{L}_{i,j}) \] for all \(i, j\).
Using the fact that \(\det(\bs{L}) = 0\) and \eqref{eqn:cof} we obtain \(\bs{L} \cof(\bs{L})^T = \det(\bs{L})\bs{I} = \bs{0}\). Suppose that \(G\) is connected. Then any vector in the kernel of \(\bs{L}\) is a multiple of \(\bs{e}\). Now since \(\bs{L}\cof(\bs{L})^T = \bs{0}\), each column of \(\cof(\bs{L})^T\) lies in the kernel of \(\bs{L}\) and is therefore a multiple of \(\bs{e}\), i.e., each row of \(\cof(\bs{L})\) is constant. Since \(\cof(\bs{L})\) is symmetric, this implies that \(\cof(\bs{L})\) is a constant matrix, i.e., \(\cof(\bs{L}) = \tau(G) \bs{J}\) for some integer \(\tau(G)\). If \(G\) is disconnected, then the kernel of \(\bs{L}\) is at least two-dimensional and therefore \(\text{rank}(\bs{L})\leq n-2\). This implies that every cofactor of \(\bs{L}\) is zero. Hence, in this case \(\tau(G) = 0\).
For any graph \(G\), \(\tau(G)\) is the number of spanning trees of \(G\).
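For example (a numerical sketch), \(C_4\) has exactly 4 spanning trees, one for each edge that can be deleted, and the \((1,1)\)-cofactor of its Laplacian gives the same count:

```python
import numpy as np
from itertools import combinations

# Matrix Tree theorem on C_4: tau(G) = det of L with one row and column deleted.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

tau = round(np.linalg.det(L[1:, 1:]))   # (1,1)-cofactor of L

def is_spanning_tree(tree_edges):
    # n-1 acyclic edges form a spanning tree; detect cycles with union-find
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                # this edge closes a cycle
        parent[ru] = rv
    return True

# Brute force: test every (n-1)-edge subset of E(C_4).
brute = sum(is_spanning_tree(T) for T in combinations(edges, n - 1))
assert tau == brute == 4
```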