Linear algebra review
Let us first recall some basics from linear algebra. Let \(\bs{Y}\) be an \(n\times n\) matrix. The kernel of \(\bs{Y}\) (frequently called the nullspace of \(\bs{Y}\)) is the set \[ \ker(\bs{Y}) = \{\bs{v}\in\real^n\;|\; \bs{Y}\bs{v} = \bs{0}\}. \] In other words, the kernel of \(\bs{Y}\) consists of the vectors that are mapped to the zero vector when multiplied by \(\bs{Y}\). Clearly, the zero vector \(\bs{0}\) is in \(\ker(\bs{Y})\) since \(\bs{Y}\bs{0}=\bs{0}\), and in fact \(\ker(\bs{Y})\) is a subspace of \(\real^n\) because it is closed under scalar multiplication and vector addition (verify this!). We say that \(\bs{Y}\) has a trivial kernel if \(\ker(\bs{Y}) = \{\bs{0}\}\), that is, if the only vector in \(\ker(\bs{Y})\) is the zero vector. In a first course in linear algebra you proved that \(\bs{Y}\) has a trivial kernel if and only if \(\bs{Y}\) is invertible (if and only if \(\det(\bs{Y})\neq 0\)).

Now let \(\bs{M}\) be an \(n\times n\) matrix with real entries. Given a non-zero vector \(\bs{v}\in\real^n\), if it holds that \[ \bs{M}\bs{v} = \lambda \bs{v} \] for some number \(\lambda\in\real\) then we say that \(\bs{v}\) is an eigenvector of \(\bs{M}\) with eigenvalue \(\lambda\), and that \((\bs{v},\lambda)\) is an eigenpair of \(\bs{M}\). Notice that we require \(\bs{v}\) to be non-zero, and that \(\bs{v}\in\real^n\) and \(\lambda\in\real\). We are restricting our considerations to real eigenvectors and real eigenvalues, although it is easy to construct matrices \(\bs{M}\) with real entries that have no real eigenvectors/eigenvalues, such as \[ \bs{M} = \begin{bmatrix}0 & 1\\-1 & 0\end{bmatrix}. \] However, because the matrices we will study have only real eigenvectors and eigenvalues, this loss of generality will not be too restrictive.

If \((\bs{v},\lambda)\) is an eigenpair of \(\bs{M}\), then from \(\bs{M}\bs{v} = \lambda\bs{v}\) we can write \(\lambda\bs{v} - \bs{M}\bs{v} = \bs{0}\), or equivalently, by factoring \(\bs{v}\) on the right, \[ (\lambda\bs{I} - \bs{M})\bs{v} = \bs{0}. \] Hence, \(\bs{M}\) has an eigenvector \(\bs{v}\) associated to \(\lambda\) if and only if the matrix \[ \bs{Y}_\lambda = \lambda\bs{I} - \bs{M} \] has a non-trivial kernel. Now, \(\bs{Y}_\lambda\) has a non-trivial kernel if and only if \(\bs{Y}_\lambda\) is not invertible, if and only if \(\det(\bs{Y}_\lambda) = 0\). Hence, \(\lambda\) is an eigenvalue of \(\bs{M}\) if and only if \[ \det(\lambda\bs{I}-\bs{M}) = 0. \] If we replace \(\lambda\) by a variable \(x\), then to find the eigenvalues of \(\bs{M}\) we must find the roots of the polynomial \[ p(x) = \det(x\bs{I}-\bs{M}), \] that is, we must find the numbers \(\lambda\) such that \(p(\lambda) = 0\). The polynomial \(p(x)\) is called the characteristic polynomial of \(\bs{M}\) and we have just shown that the roots of \(p(x)\) are exactly the eigenvalues of \(\bs{M}\). Notice that to compute the polynomial \(p(x)\) we do not need any information about the eigenvectors of \(\bs{M}\); the polynomial \(p(x)\) is used only to find the eigenvalues. However, once \(\lambda\) is known to be a root of \(p(x)\), any non-zero vector in \(\ker(\lambda\bs{I} - \bs{M})\) is an eigenvector of \(\bs{M}\) with corresponding eigenvalue \(\lambda\). Now since \(\ker(\lambda\bs{I}-\bs{M})\) is a subspace of \(\real^n\) it has a basis, say \(\beta_\lambda=\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_k\}\), consisting of eigenvectors of \(\bs{M}\) with eigenvalue \(\lambda\).
The dimension of the subspace \(\ker(\lambda\bs{I}-\bs{M})\) is called the geometric multiplicity of \(\lambda\) and \(\ker(\lambda\bs{I}-\bs{M})\) is sometimes called the eigenspace associated to \(\lambda\) because it is where all the eigenvectors associated to \(\lambda\) live. If \(\lambda\) is an eigenvalue of \(\bs{M}\), the algebraic multiplicity of \(\lambda\) is the number of times that \(\lambda\) appears as a root of the characteristic polynomial \(p(x)\). An eigenvalue is said to be simple if its algebraic multiplicity is one and said to be repeated otherwise. The geometric multiplicity is always less than or equal to the algebraic multiplicity.
Suppose that \(\bs{M}\) is a \(6\times 6\) matrix with characteristic polynomial
\[
p(x) = \det(x\bs{I} - \bs{M}) = x^6 - 4x^5-12x^4.
\]
By inspection we can factor \(p(x)\):
\[
p(x) = x^4 ( x^2 - 4x - 12) = x^4 (x-6)(x+2).
\]
Therefore, the eigenvalues of \(\bs{M}\) are \(\lambda_1=0\), \(\lambda_2=6\) and \(\lambda_3=-2\), and thus \(\bs{M}\) has only three distinct eigenvalues (even though it is a \(6\times 6\) matrix). The algebraic multiplicity of \(\lambda_1\) is \(4\) and it is thus repeated, while \(\lambda_2\) and \(\lambda_3\) are both simple eigenvalues. Thus, as a set, the eigenvalues of \(\bs{M}\) are \(\{0,6, -2\}\), whereas if we want to list all the eigenvalues of \(\bs{M}\) in say increasing order we obtain
\[
(-2,0,0,0,0,6).
\]
The latter is sometimes called the ``set of eigenvalues listed with multiplicities'' or the ``list of eigenvalues with multiplicities''.
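If one wants to check such a factoring numerically, the roots of \(p(x)\) can be computed with numpy.roots. A minimal sketch (the coefficient list below is just \(p(x)=x^6-4x^5-12x^4\) written with the highest power first):
\begin{verbatim}
import numpy as np

# coefficients of p(x) = x^6 - 4x^5 - 12x^4, highest degree first
p = [1, -4, -12, 0, 0, 0, 0]
print(np.roots(p))   # 6, -2 and 0 (with multiplicity 4), in some order
\end{verbatim}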
If \(\bs{M}\) is an \(n\times n\) matrix and has \(n\) simple eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_n\) (i.e., all have algebraic multiplicity \(1\)) then each eigenspace \(\ker(\lambda_i\bs{I}-\bs{M})\) is one-dimensional (i.e., it is a line in \(\real^n\) through the origin). Therefore, if \(\bs{v}\) and \(\bs{w}\) are two eigenvectors associated to the same eigenvalue \(\lambda_i\) then \(\bs{v}\) and \(\bs{w}\) are scalar multiples of each other, that is, \(\bs{v} = \alpha \bs{w}\) for some non-zero \(\alpha \in \real\).
Let us now focus on the case that \(\bs{M}\) is a symmetric matrix, that is, \(\bs{M}^T=\bs{M}\). One of the most important results in linear algebra is that the eigenvalues of \(\bs{M}\) are all real numbers and moreover there exists an orthonormal basis of \(\real^n\) consisting of eigenvectors of \(\bs{M}\). Hence, if \(\lambda_1,\lambda_2,\ldots,\lambda_n\) denote the eigenvalues of \(\bs{M}\) (some of which may be repeated) then we are guaranteed the existence of an orthonormal basis \(\beta=\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_n\}\) of \(\real^n\) such that \(\bs{M}\bs{v}_i = \lambda_i \bs{v}_i\) for all \(i=1,2,\ldots,n\). If we set \(\bs{\Lambda} = \text{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n)\) to be the diagonal matrix with \(i\)th diagonal entry equal to \(\lambda_i\) and set \(\bs{X} = \begin{bmatrix}\bs{v}_1 & \bs{v}_2 & \cdots & \bs{v}_n\end{bmatrix}\) then the condition \(\bs{M}\bs{v}_i = \lambda_i \bs{v}_i\) for all \(i=1,2,\ldots,n\) can be written as the matrix equation
\[
\bs{M}\bs{X} = \bs{X}\bs{\Lambda}
\]
and therefore
\[
\bs{M} = \bs{X}\bs{\Lambda} \bs{X}^{-1}.
\]
However, since \(\bs{X}^{-1} = \bs{X}^T\) (because \(\bs{X}\) is an orthogonal matrix) we have that
\[
\bs{M} = \bs{X}\bs{\Lambda} \bs{X}^T.
\]
In Python, one can compute the eigenvalues and eigenvectors of a symmetric matrix using the function eigh, which is contained in the module numpy.linalg (and also in scipy.linalg). The function eigh returns a 2-tuple whose first element is an array of the eigenvalues (in increasing order) and whose second element is an orthogonal matrix whose columns are the corresponding eigenvectors. For example, a typical call to eigh is

E, X = numpy.linalg.eigh(M)

Then E is an array of length \(n\) that stores the eigenvalues and X is an \(n\times n\) numpy array whose columns are orthonormal eigenvectors of \(\bs{M}\). For example, to confirm that the 3rd column of X is an eigenvector whose eigenvalue is the 3rd entry of E we type

M @ X[:, 2] - E[2] * X[:, 2]

and the returned value should be the zero vector of length \(n\). To verify that X is an orthogonal matrix (i.e., that \(\bs{X}^T\bs{X}=\bs{I}\)) type:

X.T @ X
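Putting these pieces together, here is a minimal sketch of the above workflow; the matrix M used here is an arbitrary small symmetric matrix chosen for illustration, not one taken from the text.
\begin{verbatim}
import numpy as np

# an arbitrary symmetric matrix for illustration
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

E, X = np.linalg.eigh(M)   # eigenvalues (ascending) and orthonormal eigenvectors

print(np.allclose(M @ X[:, 2], E[2] * X[:, 2]))  # third column is an eigenpair
print(np.allclose(X.T @ X, np.eye(3)))           # X is an orthogonal matrix
print(np.allclose(M, X @ np.diag(E) @ X.T))      # M = X Lambda X^T
\end{verbatim}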
Automorphisms and eigenpairs
Recall that if \(\bs{P}\) is the permutation matrix of the permutation \(\sigma\in S_n\) then \(\sigma \in \aut(G)\) if and only if \(\bs{P}^T\bs{A} \bs{P} = \bs{A}\) or equivalently \(\bs{A}\bs{P} = \bs{P}\bs{A}\). Our first result describes how eigenvectors behave under the action of an automorphism of \(G\).
Let \(G\) be a graph with adjacency matrix \(\bs{A}\) and suppose that \(\bs{P}\) is the permutation matrix representation of an automorphism \(\sigma\in\aut(G)\). If \((\bs{v},\lambda)\) is an eigenpair of \(\bs{A}\) then so is \((\bs{P}\bs{v},\lambda)\).
Let \(\bs{v}\) be an eigenvector of \(\bs{A}\) with eigenvalue \(\lambda\), that is, \(\bs{A}\bs{v} = \lambda\bs{v}\). Since \(\bs{P}\) is the permutation matrix of an automorphism of \(G\) we have that \(\bs{A}\bs{P}=\bs{P}\bs{A}\) and therefore
\[
\bs{A}\bs{P}\bs{v} = \bs{P}\bs{A} \bs{v} = \bs{P}\lambda\bs{v} = \lambda \bs{P}\bs{v}.
\]
Thus, \(\bs{P}\bs{v}\) is an eigenvector of \(\bs{A}\) with the same eigenvalue \(\lambda\) as \(\bs{v}\).
Consider the graph \(G\) shown in Figure 6.1 with adjacency matrix
\[
\bs{A}=\begin{bmatrix}
0&1&1&0&0&0\\
1&0&1&1&1&0\\
1&1&0&1&1&0\\
0&1&1&0&1&1\\
0&1&1&1&0&1\\
0&0&0&1&1&0
\end{bmatrix}.
\]
The spectrum \(\spec(G)=(\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5,\lambda_6)\) of \(G\) and corresponding eigenvectors written as the columns of the matrix \(\bs{X}=\begin{bmatrix}\bs{v}_1&\bs{v}_2&\bs{v}_3&\bs{v}_4&\bs{v}_5&\bs{v}_6\end{bmatrix}\) are
\begin{align*}
\spec(G) &= \left(-2,-1,-1,\frac{3-\sqrt{17}}{2},1,\frac{3+\sqrt{17}}{2}\right)\\[2ex]
\bs{X} &=
\begin{bmatrix}
-1 & 0 & 0 & 1 & -2 & 1\\
1 & 0 & -1 & -0.28 & -1 & 1.78\\
1 & 0 & 1 & -0.28 & -1 & 1.78\\
-1 & -1 & 0 & -0.28 & 1 & 1.78\\
-1 & 1 & 0 & -0.28 & 1 & 1.78\\
1 & 0 & 0 & 1 & 2 & 1
\end{bmatrix}
\end{align*}
One can verify that for instance \(\sigma=(1\;6)(2\;5)(3\;4)\) is an automorphism of \(G\). If \(\bs{P}\) denotes the permutation matrix of \(\sigma\), one can verify that \(\bs{P}\bs{v}_1 = -\bs{v}_1\), which is an eigenvector of \(\bs{A}\) with the same eigenvalue as \(\bs{v}_1\). As another example, one can verify that \(\bs{P}\bs{v}_4=\bs{v}_4\) and that \(\bs{P}\bs{v}_2 = -\bs{v}_3\). Hence, in some cases \(\bs{P}\bs{v}_i\) is a scalar multiple of \(\bs{v}_i\) and in other cases it is not; the latter can occur only when the eigenvalue associated to \(\bs{v}_i\) is repeated, as happens with \(\bs{v}_2\) and \(\bs{v}_3\) (in this case \(\lambda_2=\lambda_3=-1\)).
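The claims in this example are easy to check numerically. Below is a sketch that assumes vertex \(i\) is stored at index \(i-1\); since \(\sigma=(1\;6)(2\;5)(3\;4)\) reverses the order \(1,\ldots,6\), its permutation matrix is the identity with its columns reversed.
\begin{verbatim}
import numpy as np

A = np.array([[0,1,1,0,0,0],
              [1,0,1,1,1,0],
              [1,1,0,1,1,0],
              [0,1,1,0,1,1],
              [0,1,1,1,0,1],
              [0,0,0,1,1,0]], dtype=float)

P = np.eye(6)[:, ::-1]            # permutation matrix of sigma = (1 6)(2 5)(3 4)
print(np.allclose(A @ P, P @ A))  # True: sigma is an automorphism of G

E, X = np.linalg.eigh(A)          # eigenvalues in increasing order, as in spec(G)
v1 = X[:, 0]                      # eigenvector for lambda_1 = -2
print(np.allclose(P @ v1, -v1))                    # True: P v_1 = -v_1
print(np.allclose(A @ (P @ v1), E[0] * (P @ v1)))  # True: (P v_1, lambda_1) is an eigenpair
\end{verbatim}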
Our next result relates the algebraic multiplicities of the eigenvalues of \(\bs{A}\) with the order of the automorphisms of \(G\).
Let \(G\) be a graph with adjacency matrix \(\bs{A}\). If all the eigenvalues of \(\bs{A}\) are simple then every non-identity automorphism of \(G\) has order \(k=2\). In particular, the automorphism group \(\text{Aut}(G)\) is an abelian group.
Let \(\sigma\) be an automorphism of \(G\) with permutation matrix \(\bs{P}\), and let \((\bs{v},\lambda)\) be an eigenpair of \(\bs{A}\). Since \(\bs{A}\) has simple eigenvalues, \(\ker(\lambda\bs{I}-\bs{A})\) is one-dimensional, and since \(\bs{P}\bs{v}\) is again an eigenvector of \(\bs{A}\) with eigenvalue \(\lambda\), we must have \(\bs{P}\bs{v} = \alpha\bs{v}\) for some scalar \(\alpha\neq 0\) (see Example 6.2). Therefore, multiplying the equation \(\bs{P}\bs{v} = \alpha\bs{v}\) by \(\bs{P}\) on the left we obtain
\[
\bs{P}^2\bs{v} = \bs{P}\alpha\bs{v} = \alpha\bs{P}\bs{v} = \alpha^2 \bs{v}.
\]
Now since \(\bs{P}\) is an orthogonal matrix, \(\|\bs{P}\bs{v}\|= \|\bs{v}\|\), and thus \(\alpha = \pm 1\) which implies that \(\alpha^2=1\). Therefore,
\[
\bs{P}^2\bs{v} = \bs{v}.
\]
Hence, the matrix \(\bs{P}^2\) has \(\bs{v}\) as an eigenvector with eigenvalue \(1\). Since \(\bs{v}\) was an arbitrary eigenvector of \(\bs{A}\), if \(\beta=\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_n\}\) is a basis of \(\real^n\) consisting of orthonormal eigenvectors of \(\bs{A}\) then \(\beta\) consists of eigenvectors of \(\bs{P}^2\) all of which have eigenvalue \(1\). Therefore, if \(\bs{X} = \begin{bmatrix}\bs{v}_1 & \bs{v}_2 & \cdots & \bs{v}_n\end{bmatrix}\) then
\[
\bs{P}^2 \bs{X} = \bs{X}
\]
and therefore since \(\bs{X}\) is invertible we have
\[
\bs{P}^2 = \bs{I}.
\]
Thus \(\sigma^2\) is the identity permutation, and consequently, if \(\sigma\) is not the identity, then \(\sigma\) has order \(k=2\). That \(\aut(G)\) is abelian then follows from Example 1.28.
Proposition 6.2.2 does not say that \(\aut(G)\) will necessarily contain non-trivial automorphisms (of order 2). In fact, most graphs will have distinct eigenvalues and a trivial automorphism group, that is, \(\aut(G)=\{\text{id}\}\). What Proposition 6.2.2 does say is that if there is any non-trivial automorphism \(\sigma\in\aut(G)\) then \(\sigma\) has order \(k=2\) whenever \(\bs{A}\) has distinct eigenvalues.
Before we move on, we need to recall the notion of a partition of a set \(V\). A partition of \(V\) is a collection \(\pi=\{C_1,C_2,\ldots,C_k\}\) of non-empty subsets \(C_i\subseteq V\) such that \(V = \bigcup_{i=1}^k C_i\) and \(C_i\cap C_j = \emptyset\) for all \(i\neq j\). The subsets \(C_i\) are called the cells of the partition \(\pi\). If \(\pi\) has \(k\) cells then it is called a \(k\)-partition. For example, if \(V=\{1,2,3,\ldots,10\}\) then \(\pi=\{\{6,9,5,10\}, \{1,4\},\{2,3,7\},\{8\}\}\) is a partition of \(V\) and it has \(k=4\) cells. The unit partition of \(V=\{1,2,\ldots,n\}\) is the partition that contains only one cell, namely, \(\{\{1,2,\ldots,n\}\}\) and the discrete partition of \(V\) is the partition that contains \(n\) cells, namely, \(\{\{1\},\{2\},\ldots,\{n\}\}\). We will refer to these partitions as the trivial partitions of \(V\).
Given a set \(V=\{1,2,\ldots,n\}\), how many partitions of \(V\) are there? The total number of partitions of an \(n\)-element set is the Bell number \(B_n\). The first several Bell numbers are \(B_0 = 1\), \(B_1 = 1\), \(B_2 = 2\), \(B_3 = 5\), \(B_4 = 15\), \(B_5 = 52\), and \(B_6 = 203\). The Bell numbers satisfy the recursion
\[
B_{n+1} = \sum_{k=0}^n \binom{n}{k} B_k.
\]
See the Wiki page on \href{https://en.wikipedia.org/wiki/Bell_number}{Bell numbers} for more interesting facts.
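For instance, a short sketch that generates the Bell numbers from this recursion:
\begin{verbatim}
from math import comb

def bell_numbers(n):
    # B_0, ..., B_n via the recursion B_{m+1} = sum_k C(m, k) B_k
    B = [1]
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B

print(bell_numbers(6))   # [1, 1, 2, 5, 15, 52, 203]
\end{verbatim}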
Every permutation \(\sigma\in S_n\) induces a partition of \(V\) as follows. Suppose that \(\sigma\) has \(r\) cycles of length at least two in its cycle decomposition:
\[
\sigma = \underbrace{(i_1\;i_2\;\cdots\; i_{m_1})}_{\sigma_1} \underbrace{(i_{m_1+1}\;i_{m_1+2}\;\cdots\; i_{m_2})}_{\sigma_2}
\cdots \underbrace{(i_{m_{r-1}+1}\;i_{m_{r-1}+2}\;\cdots\; i_{m_r})}_{\sigma_r}.
\]
The \(r\) sets formed from the integers within each cycle of \(\sigma\) and the singleton sets formed from the remaining integers fixed by \(\sigma\) form a partition of the vertex set \(V=\{1,2,\ldots,n\}\). In other words, if \(j_1,j_2,\ldots,j_m\) are the integers fixed by \(\sigma\), and we set \(k=r+m\), then if we set
\begin{align*}
C_1 & = \{i_1,i_2,\cdots, i_{m_1}\}\\
C_2 &= \{i_{m_1+1},i_{m_1+2},\cdots, i_{m_2}\}\\
& \vdots\\
C_r &= \{i_{m_{r-1}+1},i_{m_{r-1}+2},\cdots, i_{m_r}\} \\
C_{r+1} &= \{j_1\}\\
& \vdots\\
C_{k} &= \{j_m\}
\end{align*}
then \(\pi=\{C_1,C_2,\ldots,C_k\}\) is a partition of \(V=\{1,2,\ldots,n\}\). The notation is messy but an example will make the above clear.
As an example, consider \(\sigma\in S_{12}\) given by
\[
\sigma=(1\; 5)(2\; 3)(7\; 8\; 4\; 11)(9\;12\;6)(10)
\]
Hence, \(\sigma\) fixes the integer \(10\). Thus, if we set \(C_1=\{1,5\}\), \(C_2=\{2,3\}\), \(C_3=\{7,8,4,11\}\), \(C_4=\{9,12,6\}\), \(C_5=\{10\}\), then \(\pi=\{C_1,C_2,C_3,C_4,C_5\}\) is a partition of the vertex set \(V=\{1,2,\ldots,12\}\). We say that the partition \(\pi\) is induced by the permutation \(\sigma\).
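Computing the induced partition is straightforward to automate. The sketch below represents \(\sigma\) as a dictionary mapping \(i\mapsto\sigma(i)\) (one convenient choice of representation; vertices not listed are treated as fixed points) and collects the cycles:
\begin{verbatim}
def induced_partition(perm, n):
    # cells are the cycles of the permutation, including fixed points as singletons
    seen, cells = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cell, j = [], start
        while j not in seen:
            seen.add(j)
            cell.append(j)
            j = perm.get(j, j)   # vertices not listed are fixed
        cells.append(set(cell))
    return cells

sigma = {1: 5, 5: 1, 2: 3, 3: 2, 7: 8, 8: 4, 4: 11, 11: 7, 9: 12, 12: 6, 6: 9}
print(induced_partition(sigma, 12))
# [{1, 5}, {2, 3}, {4, 7, 8, 11}, {6, 9, 12}, {10}]
\end{verbatim}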
Suppose now that \(\sigma\in\aut(G)\) and let \(\pi=\{C_1,C_2,\ldots,C_k\}\) be the partition of \(V\) induced by the cycle decomposition of \(\sigma\). Pick a cell \(C_i\) and pick a vertex \(u \in C_i\) and suppose that \(u\) has \(d\) neighbors in cell \(C_j\), say that they are \(\{v_1,\ldots,v_d\}\). Since \(\sigma\) sends a vertex in one cell to a vertex in the same cell (recall that each cell consists of the integers in a cycle) then necessarily \(\sigma(u) \in C_i\) and \(\sigma(v_\ell)\in C_j\) for \(\ell=1,\ldots,d\). Now since \(\sigma\in\aut(G)\), it follows that \(\{\sigma(u),\sigma(v_\ell)\}\in E(G)\) and therefore the neighbors of \(\sigma(u)\) in \(C_j\) are \(\{\sigma(v_1),\sigma(v_2),\ldots,\sigma(v_d)\}\). Hence, the number of vertices in \(C_j\) adjacent to \(\sigma(u)\) is equal to the number of vertices in \(C_j\) adjacent to \(u\), in this case \(d\). Since \(\sigma\) cycles through all the vertices in \(C_i\), it follows that all the vertices in \(C_i\) have the same number of neighbors in \(C_j\). Surprisingly, this observation turns out to be an important one.
We introduce some notation to capture what we have just discussed. Given a partition \(\pi=\{C_1,C_2,\ldots,C_k\}\) of \(V\), define for any vertex \(v\in V\) the degree of \(v\) in \(C_j\) by
\[
\deg(v, C_j) = \text{the number of vertices in \(C_j\) adjacent to \(v\)}.
\]
Formally, if \(N(v)\) denotes all the vertices adjacent to \(v\) then
\[
\deg(v,C_j) = | N(v) \cap C_j|.
\]
Using this notation, what we showed above is that for any two cells \(C_i\) and \(C_j\) contained in a partition induced by an automorphism of \(G\), the number
\[
\deg(u, C_j)
\]
is the same for all \(u\in C_i\). Now, \(\deg(u,C_j)\) could be zero and it is certainly at most \(|C_j|\). We can reverse the roles of the cells \(C_i\) and \(C_j\) and conclude that \(\deg(v, C_i)\) is the same for all \(v\in C_j\). However, it is not necessarily the case that \(\deg(v, C_i)\) will equal \(\deg(u, C_j)\). Lastly, we could also consider the case that \(C_i=C_j\), in which case for \(u\in C_i\) the number \(\deg(u, C_i)\) is the number of vertices in \(C_i\) adjacent to \(u\). We summarize our discussion with the following lemma.
Suppose that \(\sigma\in\aut(G)\) and let \(\pi=\{C_1,C_2,\ldots,C_k\}\) be the partition induced by the cycle decomposition of \(\sigma\). Then for any pair of cells \(C_i\) and \(C_j\) it holds that \[\deg(u, C_j) = \deg(v, C_j)\] for all \(u,v\in C_i\).
Consider the graph \(G\) shown in Figure 6.2, which is called the Petersen graph.
One can verify that \(\sigma=(1\; 10)(2\;8\;5\;7)(3\;6\;4\;9)\) is an automorphism of \(G\). Consider the induced partition \(\pi=\{\{1,10\}, \{2,8,5,7\},\{3,6,4,9\}\}\), and let \(C_1, C_2, C_3\) be the cells of \(\pi\). For any vertex \(v\in C_1\), one can verify that \(\deg(v, C_1) = 1\), \(\deg(v, C_2) = 2\), and \(\deg(v, C_3) = 0\). For any vertex \(w\in C_2\), one can verify that \(\deg(w,C_1) = 1\), \(\deg(w,C_2)=0\), and \(\deg(w,C_3)=2\). Lastly, for any vertex \(u\in C_3\), one can verify that \(\deg(u,C_1) = 0\), \(\deg(u,C_2) = 2\), and \(\deg(u, C_3)=1\). We summarize our results using a matrix which we denote by
\[
\bs{A}_\pi = \begin{bmatrix}
\deg(v, C_1) & \deg(v,C_2) & \deg(v,C_3)\\[2ex]
\deg(w, C_1) & \deg(w,C_2) & \deg(w,C_3)\\[2ex]
\deg(u, C_1) & \deg(u,C_2) & \deg(u,C_3)\\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 & 0\\
1 & 0 & 2\\
0 & 2 & 1
\end{bmatrix}.
\]
As another example, \(\sigma = (1\;3\;9)(4\;6\;10)(5\;8\;7)\) is an automorphism of \(G\) and the induced partition is \(\pi=\{\{1,3,9\},\{4,6,10\},\{5,8,7\},\{2\}\}\). One can verify that for any \(v\in C_1\) it holds that \(\deg(v, C_1) = 0\), \(\deg(v,C_2)=1\), \(\deg(v,C_3)=1\), and \(\deg(v,C_4)=1\). Also, for any \(w\in C_2\) it holds that \(\deg(w, C_1) = 1\), \(\deg(w,C_2)=0\), \(\deg(w,C_3)=2\), and \(\deg(w,C_4)=0\). Similar verifications can be made for vertices in \(C_3\) and \(C_4\), and in this case
\[
\bs{A}_\pi = \begin{bmatrix}0&1&1&1\\1&0&2&0\\1&2&0&0\\3&0&0&0\end{bmatrix}.
\]
In the next section, we will see how the degree conditions that exist for a partition induced by an automorphism exist for more general partitions not necessarily arising from an automorphism.
Equitable partitions of graphs
In this section, we will introduce the notion of an equitable partition of a graph and how we can use such partitions to define a notion of a quotient graph. We begin with two examples.
Consider again the Petersen graph in Figure 6.2. One can verify that \(\pi=\{\{1,2,3,4,7,10\},\{5,8,9\},\{6\}\}\) is not the partition induced by any automorphism of \(G\). However, one can verify that for any \(v\in C_1\) it holds that \(\deg(v,C_1) = 2\), \(\deg(v,C_2)=1\), and \(\deg(v,C_3)=0\), that for any \(u\in C_2\) it holds that \(\deg(u,C_1)=2\), \(\deg(u,C_2)=0\), and \(\deg(u,C_3)=1\); and finally that for any \(w\in C_3\) it holds that \(\deg(w,C_1) =0\), \(\deg(w,C_2)=3\), and \(\deg(w,C_3)=0\). Hence, even though \(\pi\) is not induced by any automorphism, it still satisfies the degree conditions that are satisfied by a partition induced by an automorphism. In this case,
\[
\bs{A}_\pi = \begin{bmatrix} 2&1&0\\2&0&1\\0&3&0\end{bmatrix}.
\]
Consider the graph \(G\) shown in Figure 6.3, which is called the Frucht graph. This graph has a trivial automorphism group, i.e., \(\aut(G) = \{\text{id}\}\). However, it contains the type of partitions that satisfy the degree conditions of the partitions induced by an automorphism. For example, for \(\pi=\{\{1,5,7,12\},\{3,6,9,11\},\{2,4,8,10\}\}\), one can verify that for any \(u\in C_1\) it holds that \(\deg(u,C_1) = 1\), \(\deg(u,C_2) = 1\), and \(\deg(u,C_3)=1\); for any \(v\in C_2\) it holds that \(\deg(v,C_1) = 1\), \(\deg(v,C_2) = 0\), and \(\deg(v,C_3) = 2\); and finally for any \(w\in C_3\) it holds that \(\deg(w,C_1) = 1\), \(\deg(w,C_2) = 2\), and \(\deg(w,C_3) = 0\). Hence, in this case
\[
\bs{A}_\pi = \begin{bmatrix}1&1&1\\1&0&2\\1&2&0\end{bmatrix}.
\]
Let us now define the vertex partitions that will be of interest to us and that generalize the partitions induced by automorphisms of a graph.
Let \(G=(V,E)\) be a graph and let \(\pi=\{C_1,C_2,\ldots,C_k\}\) be a partition of \(V\). If for each pair of cells \((C_i,C_j)\) (not necessarily distinct) it holds that \(\deg(u,C_j) = \deg(v,C_j)\) for every \(u,v\in C_i\) then we say that \(\pi\) is an equitable partition of \(V\).
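This definition translates directly into a computation. The sketch below checks the degree condition cell by cell; here A is an adjacency matrix stored as a numpy array and cells is a list of lists of vertex indices (the 0-based indexing is our choice of convention, not the text's):
\begin{verbatim}
import numpy as np

def is_equitable(A, cells):
    # pi is equitable iff for each pair of cells (C_i, C_j) the number
    # deg(u, C_j), counting neighbours of u inside C_j, is the same for all u in C_i
    for Ci in cells:
        for Cj in cells:
            degrees = {int(A[u, Cj].sum()) for u in Ci}
            if len(degrees) > 1:
                return False
    return True

# the graph of Figure 6.1 with the partition {{1,6},{2,5},{3,4}} induced by
# the automorphism (1 6)(2 5)(3 4), written here with 0-based labels
A = np.array([[0,1,1,0,0,0],[1,0,1,1,1,0],[1,1,0,1,1,0],
              [0,1,1,0,1,1],[0,1,1,1,0,1],[0,0,0,1,1,0]])
print(is_equitable(A, [[0,5],[1,4],[2,3]]))   # True
\end{verbatim}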
By Lemma 6.2.3, every partition induced by an automorphism is an equitable partition. However, Examples 6.7 and 6.8 show that the converse does not hold, that is, not every equitable partition is induced by an automorphism. In fact, numerical evidence indicates that the proportion of equitable partitions induced by an automorphism tends to zero as \(n\rightarrow\infty\).
There is an elegant linear-algebraic way to characterize an equitable partition that is very useful and which gives insight into the structure of the eigenvectors of a graph. We first need to introduce some notation and review more linear algebra.
Let \(\pi=\{C_1,C_2,\ldots,C_k\}\) be a partition of \(V=\{1,2,\ldots,n\}\). For any cell \(C_i\) let \(\bs{c}_i\in\real^n\) denote the vector whose entries are equal to \(1\) on the integer indices in \(C_i\) and zero elsewhere. For example, if \(C_i=\{2,4,7\}\) and \(n=9\) then \(\bs{c}_i = (0,1,0,1,0,0,1,0,0)\), i.e., \(\bs{c}_i\) has a 1 in positions \(\{2,4,7\}\) and zero elsewhere. The vector \(\bs{c}_i\) is called the characteristic vector (or indicator vector) of the cell \(C_i\). We define the characteristic matrix of \(\pi\) as the \(n\times k\) matrix whose columns are the characteristic vectors of the cells of \(\pi\):
\[
\bs{C} = \begin{bmatrix}\bs{c}_1 & \bs{c}_2 & \cdots & \bs{c}_k\end{bmatrix}.
\]
Notice that if \(\pi\) has \(k\) cells then \(\bs{C}\) is an \(n\times k\) matrix. Also, and more importantly, since \(\pi\) is a partition of \(V\), the columns of \(\bs{C}\) are orthogonal, and thus \(\rank(\bs{C})=k\).
If \(V=\{1,2,3,4,5,6,7,8\}\) then \(\pi=\{\{1,4,6\}, \{2,5\},\{7,8\},\{3\}\}\) is a partition of \(V\) with cells \(C_1=\{1,4,6\}\), \(C_2=\{2,5\}\), \(C_3=\{7,8\}\), \(C_4=\{3\}\), and \(\pi\) is a \(4\)-partition. The characteristic matrix of \(\pi\) is
\[
\bs{C} =
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 1 & 0
\end{bmatrix}.
\]
Now, since \(\bs{C}\) has orthogonal columns then we have
\[
\bs{C}^T\bs{C} = \begin{bmatrix}
\bs{c}_1^T\bs{c}_1 & 0 & 0 & 0\\
0 & \bs{c}_2^T\bs{c}_2 & 0 & 0\\
0 & 0 & \bs{c}_3^T\bs{c}_3 & 0\\
0 & 0 & 0 & \bs{c}_4^T\bs{c}_4\\
\end{bmatrix} =
\begin{bmatrix}
3 & 0 & 0 & 0\\
0 & 2 & 0 & 0\\
0 & 0 & 2 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}.
\]
Notice that the diagonal entries are just the cardinalities of the cells, i.e., \(\bs{c}_i^T\bs{c}_i = |C_i|\).
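A small sketch that builds the characteristic matrix of this partition and confirms that \(\bs{C}^T\bs{C}\) is the diagonal matrix of cell sizes (the helper name below is ours):
\begin{verbatim}
import numpy as np

def characteristic_matrix(cells, n):
    # column i is the indicator vector of cell C_i (vertices labelled 1,...,n)
    C = np.zeros((n, len(cells)))
    for i, cell in enumerate(cells):
        for v in cell:
            C[v - 1, i] = 1
    return C

pi = [[1, 4, 6], [2, 5], [7, 8], [3]]
C = characteristic_matrix(pi, 8)
print(C.T @ C)   # diag(3, 2, 2, 1), the cell cardinalities
\end{verbatim}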
For any \(n\times k\) matrix \(\bs{C}\) recall that the image of \(\bs{C}\) (frequently called the range of \(\bs{C}\)) is
\[
\img(\bs{C}) = \{\bs{C}\bs{x}\;|\;\bs{x}\in\real^k\}.
\]
The image of \(\bs{C}\) is a subspace and more concretely, it is the subspace spanned by the columns of \(\bs{C}\) (frequently called the column space of \(\bs{C}\)). To see this, if \(\bs{C}= \begin{bmatrix}\bs{c}_1 & \bs{c}_2 & \cdots & \bs{c}_k\end{bmatrix}\) then for \(\bs{x}=(x_1,x_2,\ldots,x_k)\) we have
\[
\bs{C}\bs{x} = x_1\bs{c}_1 + x_2\bs{c}_2 + \cdots+x_k\bs{c}_k
\]
and thus \(\bs{C}\bs{x}\) is a linear combination of the columns of \(\bs{C}\), i.e., \(\bs{C}\bs{x}\in\spn\{\bs{c}_1,\bs{c}_2,\ldots,\bs{c}_k\}\). We now introduce the following important notion.
Let \(\bs{A}\) be an \(n\times n\) matrix and let \(\ms{W}\subseteq\real^n\) be a subspace. If for every \(\bs{x}\in \ms{W}\) it holds that \(\bs{A}\bs{x}\in \ms{W}\) then we say that \(\ms{W}\) is \(\bs{A}\)-invariant.
Hence, a subspace \(\ms{W}\) is \(\bs{A}\)-invariant if \(\bs{A}\) maps any element in \(\ms{W}\) back to an element in \(\ms{W}\). There is no reason to expect that this will be true for an arbitrary subspace and so that is why such subspaces are singled out. Suppose that \(\ms{W}\) is \(\bs{A}\)-invariant and let \(\beta=\{\bs{y}_1,\bs{y}_2,\ldots,\bs{y}_k\}\) be a basis for \(\ms{W}\) and consider the matrix \(\bs{W}=\begin{bmatrix}\bs{y}_1&\bs{y}_2&\cdots&\bs{y}_k\end{bmatrix}\). Now, since \(\ms{W}\) is \(\bs{A}\)-invariant then \(\bs{A}\bs{y}_i \in \ms{W}\) and therefore \(\bs{A}\bs{y}_i\) can be written as a linear combination of the basis vectors \(\beta\). Therefore, there is some vector \(\bs{b}_i\in\real^k\) such that
\[
\bs{A}\bs{y}_i = \bs{W}\bs{b}_i.
\]
This holds for each \(i=1,2,\ldots,k\) and therefore if we set \(\bs{B}=\begin{bmatrix}\bs{b}_1&\bs{b}_2&\cdots&\bs{b}_k\end{bmatrix}\) then
\[
\bs{A}\bs{W} = \bs{W}\bs{B}.
\]
Suppose that \(\bs{A}\) has eigenvalue \(\lambda\) and let \(\beta=\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_m\}\) be a set of linearly independent eigenvectors of \(\bs{A}\) with eigenvalue \(\lambda\) (\(m\) is necessarily \(\leq\) the geometric multiplicity of \(\lambda\)). Consider the subspace \(\ms{W} = \spn\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_m\}\). Let \(\bs{x}\in\ms{W}\) so that there exist constants \(\alpha_1,\alpha_2,\ldots,\alpha_m\) such that \(\bs{x} = \alpha_1\bs{v}_1 + \alpha_2\bs{v}_2 + \cdots + \alpha_m\bs{v}_m\). Then,
\begin{align*}
\bs{A}\bs{x} &= \bs{A}(\alpha_1\bs{v}_1 + \alpha_2\bs{v}_2 + \cdots + \alpha_m\bs{v}_m) \\
&= \bs{A}(\alpha_1\bs{v}_1) + \bs{A}(\alpha_2\bs{v}_2) + \cdots + \bs{A}(\alpha_m\bs{v}_m)\\
&= \alpha_1 \bs{A}\bs{v}_1 + \alpha_2\bs{A}\bs{v}_2 + \cdots + \alpha_m\bs{A}\bs{v}_m\\
&=\alpha_1\lambda \bs{v}_1 + \alpha_2\lambda \bs{v}_2 + \cdots + \alpha_m\lambda \bs{v}_m
\end{align*}
from which we see that \(\bs{A}\bs{x} \in \spn\{\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_m\}\), that is, \(\bs{A}\bs{x}\in\ms{W}\). Hence, \(\ms{W}\) is \(\bs{A}\)-invariant.
Although not true for a general matrix \(\bs{A}\), if \(\bs{A}\) is symmetric then every \(\bs{A}\)-invariant subspace \(\ms{W}\) has a basis of eigenvectors of \(\bs{A}\). This fact is so important that we state it as a theorem.
If \(\bs{A}\) is a symmetric \(n\times n\) matrix and \(\ms{W}\) is \(\bs{A}\)-invariant then there exists a basis of \(\ms{W}\) consisting of eigenvectors of \(\bs{A}\).
Suppose that \(\ms{W}\) is \(k\)-dimensional and let \(\beta=\{\bs{w}_1,\bs{w}_2,\ldots,\bs{w}_k\}\) be an orthonormal basis of \(\ms{W}\). Since \(\ms{W}\) is \(\bs{A}\)-invariant, for each \(i\in\{1,2,\ldots,k\}\) there exist constants \(b_{1,i}, b_{2,i}, \ldots, b_{k,i}\) such that
\[
\bs{A}\bs{w}_i = b_{1,i}\bs{w}_1 + b_{2,i}\bs{w}_2 + \cdots + b_{k,i}\bs{w}_k.
\]
If we let \(\bs{B}\) be the \(k\times k\) matrix with entries \(\bs{B}_{j,i} = b_{j,i}\) and we put \(\bs{W} = \begin{bmatrix}\bs{w}_1&\bs{w}_2&\cdots&\bs{w}_k\end{bmatrix}\) then
\begin{equation}\label{eqn:WB}
\bs{A} \bs{W} = \bs{W} \bs{B}.
\end{equation}
Now since \(\beta\) is an orthonormal basis we have that \(\bs{W}^T\bs{W} = \bs{I}_{k\times k}\) and therefore multiplying \eqref{eqn:WB} by \(\bs{W}^T\) on the left we obtain that
\[
\bs{B} = \bs{W}^T \bs{A} \bs{W}.
\]
Now since \(\bs{A}\) is symmetric it follows that \(\bs{B}\) is symmetric and thus \(\bs{B}\) has \(k\) linearly independent eigenvectors, say \(\bs{v}_1, \bs{v}_2, \ldots, \bs{v}_k\), with associated eigenvalues \(\lambda_1,\lambda_2,\ldots,\lambda_k\). We claim that \(\bs{x}_i = \bs{W}\bs{v}_i\) is an eigenvector of \(\bs{A}\) with eigenvalue \(\lambda_i\). We compute
\begin{align*}
\bs{A} \bs{x}_i &= \bs{A} \bs{W} \bs{v}_i\\
&= \bs{W} \bs{B} \bs{v}_i\\
&= \bs{W} \lambda_i \bs{v}_i\\
&= \lambda_i \bs{W}\bs{v}_i \\
&= \lambda_i \bs{x}_i.
\end{align*}
The eigenvectors \(\bs{x}_1,\bs{x}_2,\ldots,\bs{x}_k\) are clearly contained in \(\ms{W}\) and they are linearly independent since \(\bs{v}_1,\bs{v}_2,\ldots,\bs{v}_k\) are linearly independent and the matrix \(\bs{W}\) has rank \(k\).
We are now finally ready to characterize an equitable partition in terms of the invariant subspaces of the adjacency matrix.
Let \(G=(V,E)\) be a graph with adjacency matrix \(\bs{A}\). Let \(\pi\) be a \(k\)-partition of \(V\) with characteristic matrix \(\bs{C}\). Then \(\pi\) is an equitable partition of \(G\) if and only if \(\img(\bs{C})\) is \(\bs{A}\)-invariant. Equivalently, \(\pi\) is equitable if and only if there exists a matrix \(\bs{B}\in\real^{k\times k}\) such that \(\bs{A}\bs{C}=\bs{C}\bs{B}\). In this case, \(\bs{B}=(\bs{C}^T\bs{C})^{-1} \bs{C}^T\bs{A}\bs{C}\).
Let \(\pi=\{C_1,C_2,\ldots,C_k\}\) and let \(\bs{C} = \begin{bmatrix}\bs{c}_1&\bs{c}_2&\cdots&\bs{c}_k\end{bmatrix}\), where \(\bs{c}_i\) is the characteristic vector of cell \(C_i\). Let \(\bs{C}_i=\text{diag}(\bs{c}_i)\); in other words, \(\bs{C}_i\) is the diagonal matrix containing a 1 in diagonal entry \((j,j)\) if \(j\in C_i\) and zero otherwise. Hence, \(\bs{I} = \bs{C}_1+\bs{C}_2 + \cdots + \bs{C}_k\). For any cell \(C_j\) let
\[
\bs{d}_j = \begin{bmatrix} \deg(v_1, C_j) \\ \deg(v_2, C_j) \\ \deg(v_3, C_j)\\ \vdots \\ \deg(v_n, C_j)\end{bmatrix}.
\]
Then it is not hard to see that \(\bs{A}\bs{c}_j = \bs{d}_j\), and therefore we can write
\begin{align*}
\bs{A}\bs{c}_j =
\bs{C}_1 \bs{d}_j + \bs{C}_2 \bs{d}_j + \cdots + \bs{C}_k \bs{d}_j .
\end{align*}
For each \(i\in \{1,2,\ldots,k\}\) we have
\[
(\bs{C}_i\bs{d}_j)(u) = \begin{cases} \deg(u, C_j), & \text{if } u \in C_i\\[2ex] 0, & \text{otherwise.}\end{cases}
\]
Therefore, if \(\deg(u, C_j) = \deg(v, C_j)\) for all \(u,v\in C_i\) then \(\bs{C}_i \bs{d}_j = \deg(u, C_j) \bs{c}_i\) for any \(u\in C_i\). Hence, if \(\pi\) is equitable then
\[
\bs{A}\bs{c}_j = \deg(u_1, C_j) \bs{c}_1 + \deg(u_2,C_j) \bs{c}_2 + \cdots + \deg(u_k, C_j) \bs{c}_k
\]
where \(u_i \in C_i\), and thus \(\bs{A}\bs{c}_j \in \img(\bs{C})\). Conversely, if \(\bs{A}\bs{c}_j \in \img(\bs{C})\) then our computations above show that necessarily \(\deg(u, C_j) = \deg(v, C_j)\) for all \(u, v\in C_i\), and thus \(\pi\) is equitable.
The matrix \(\bs{B}\) in Theorem 6.3.4 is actually the matrix \(\bs{A}_\pi\) that appeared in Examples 6.7 and 6.8. To be precise, if \(\pi=\{C_1,C_2,\ldots,C_k\}\) is an equitable partition then \(\deg(u, C_j)\) is the same for all \(u\in C_i\). We can therefore unambiguously define
\[
\deg(C_i, C_j) = \deg(u, C_j)
\]
for some \(u\in C_i\). Then one can show that
\[
\bs{A}_\pi = \begin{bmatrix} \deg(C_1,C_1) & \deg(C_1, C_2) & \cdots & \deg(C_1, C_k) \\
\deg(C_2,C_1) & \deg(C_2, C_2) & \cdots & \deg(C_2, C_k) \\
\vdots & \vdots & \cdots & \vdots\\
\deg(C_k,C_1) & \deg(C_k, C_2) & \cdots & \deg(C_k, C_k) \end{bmatrix}
\]
and as discussed before in general \(\deg(C_i, C_j) \neq \deg(C_j, C_i)\), i.e., \(\bs{A}_\pi\) is not necessarily symmetric. The proof of Theorem 6.3.4 shows that
\[
\bs{A}\bs{C} = \bs{C} \bs{A}_\pi
\]
that is, \(\bs{A}_\pi = (\bs{C}^T\bs{C})^{-1} \bs{C}^T\bs{A}\bs{C}\).
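The formula \(\bs{A}_\pi = (\bs{C}^T\bs{C})^{-1}\bs{C}^T\bs{A}\bs{C}\) is easy to evaluate numerically. As a sketch, take the graph of Figure 6.1 together with the equitable partition \(\{\{1,6\},\{2,5\},\{3,4\}\}\) induced by the automorphism \((1\;6)(2\;5)(3\;4)\) (vertices shifted to 0-based indices below):
\begin{verbatim}
import numpy as np

A = np.array([[0,1,1,0,0,0],[1,0,1,1,1,0],[1,1,0,1,1,0],
              [0,1,1,0,1,1],[0,1,1,1,0,1],[0,0,0,1,1,0]], dtype=float)

cells = [[0, 5], [1, 4], [2, 3]]      # {{1,6},{2,5},{3,4}} with 0-based labels
C = np.zeros((6, 3))
for i, cell in enumerate(cells):
    C[cell, i] = 1                    # characteristic matrix of the partition

A_pi = np.linalg.inv(C.T @ C) @ C.T @ A @ C
print(A_pi)                           # [[0,1,1],[1,1,2],[1,2,1]]
print(np.allclose(A @ C, C @ A_pi))   # True: A C = C A_pi, so the partition is equitable
\end{verbatim}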
Let us now discuss how the existence of an equitable partition imposes constraints on the eigenvectors of \(\bs{A}\). Suppose that \(\pi\) is a partition of \(V\) and \(\bs{C}\) is its characteristic matrix. Then \(\img(\bs{C})\) consists of vectors that have the same numerical value on the entries of each cell. For instance, if
\[
\bs{C} =
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 1 & 0
\end{bmatrix}
\]
and \(\bs{x}=(\alpha,\beta,\gamma,\rho)\), then
\[
\bs{y}=\bs{C}\bs{x} =
\alpha \begin{bmatrix}1\\0\\0\\1\\0\\1\\0\\0\end{bmatrix} +
\beta \begin{bmatrix}0\\1\\0\\0\\1\\0\\0\\0\end{bmatrix} +
\gamma \begin{bmatrix}0\\0\\0\\0\\0\\0\\1\\1\end{bmatrix} +
\rho \begin{bmatrix}0\\0\\1\\0\\0\\0\\0\\0\end{bmatrix}
= \begin{bmatrix}
\alpha\\ \beta\\ \rho \\ \alpha \\ \beta \\ \alpha \\ \gamma \\ \gamma
\end{bmatrix}.
\]
Hence, a vector \(\bs{y}\in\img(\bs{C})\) has the same values on the entries \(C_1=\{1,4,6\}\), the same values on entries \(C_2=\{2,5\}\), the same values on entries \(C_3=\{7,8\}\), and the same values on entries \(C_4=\{3\}\) (this is trivial because \(C_4\) is a singleton cell). Now, if \(\pi\) is an equitable partition then \(\ms{W}=\img(\bs{C})\) is \(\bs{A}\)-invariant and therefore by Theorem 6.3.3 since \(\bs{A}\) is symmetric there is a basis of \(\img(\bs{C})\) consisting of eigenvectors of \(\bs{A}\). These eigenvectors therefore have entries that are equal on each of the cells of \(\pi\) and there will be \(k\) of these eigenvectors (linearly independent) if \(\pi\) is a \(k\)-partition.
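Concretely, if \(\bs{B}\bs{v}=\lambda\bs{v}\) and \(\bs{A}\bs{C}=\bs{C}\bs{B}\), then \(\bs{A}(\bs{C}\bs{v})=\bs{C}\bs{B}\bs{v}=\lambda\bs{C}\bs{v}\), so \(\bs{C}\bs{v}\) is an eigenvector of \(\bs{A}\) that is constant on the cells. A sketch of this lifting, again using the graph of Figure 6.1 and its equitable partition \(\{\{1,6\},\{2,5\},\{3,4\}\}\) (0-based indices below):
\begin{verbatim}
import numpy as np

A = np.array([[0,1,1,0,0,0],[1,0,1,1,1,0],[1,1,0,1,1,0],
              [0,1,1,0,1,1],[0,1,1,1,0,1],[0,0,0,1,1,0]], dtype=float)
cells = [[0, 5], [1, 4], [2, 3]]
C = np.zeros((6, 3))
for i, cell in enumerate(cells):
    C[cell, i] = 1

B = np.linalg.inv(C.T @ C) @ C.T @ A @ C   # the matrix A_pi
mu, V = np.linalg.eig(B)                   # A_pi need not be symmetric, so use eig
for j in range(3):
    x = C @ V[:, j]                        # lifted vector: constant on each cell
    print(mu[j], np.allclose(A @ x, mu[j] * x))   # each mu_j is an eigenvalue of A
\end{verbatim}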
We consider a specific example. Consider again the Frucht graph \(G\) which we reproduce in Figure 6.4. The Frucht graph has three non-trivial equitable partitions, one of which is
\[
\pi=\{\{1,5,7,12\},\{3,6,9,11\},\{2,4,8,10\}\}.
\]
Hence, since \(\pi\) is a \(k=3\) partition, there exist \(k=3\) linearly independent eigenvectors of \(\bs{A}\) whose entries are constant on the cells of \(\pi\).
The eigenvectors (rounded to two decimal places) of \(\bs{A}\) are (as columns):
\begin{align*}
\bs{X} &= {\left[ \begin{array}{rrrrrrrrrrrr}
0.17 & 0.00 & -0.48 & 0.40 & -0.17 & 0.12 & 0.41 & 0.11 & 0.33 & 0.18 & 0.36 & -0.29 \\
-0.14 & 0.35 & -0.00 & -0.56 & -0.17 & 0.00 & -0.20 & 0.44 & 0.00 & 0.31 & 0.32 & -0.29 \\
-0.20 & -0.35 & 0.27 & 0.14 & 0.50 & -0.27 & -0.20 & -0.14 & 0.27 & 0.40 & 0.22 & -0.29 \\
0.44 & 0.35 & 0.00 & -0.04 & -0.17 & -0.00 & -0.20 & -0.63 & 0.00 & 0.31 & -0.18 & -0.29 \\
-0.51 & -0.00 & -0.12 & 0.03 & -0.17 & -0.33 & 0.41 & -0.16 & -0.48 & 0.18 & -0.21 & -0.29 \\
-0.31 & -0.35 & -0.15 & -0.10 & -0.17 & 0.60 & -0.20 & -0.02 & 0.21 & 0.04 & -0.43 & -0.29 \\
0.39 & 0.00 & 0.00 & -0.27 & 0.50 & 0.00 & 0.41 & 0.29 & -0.00 & -0.00 & -0.43 & -0.29 \\
-0.10 & 0.35 & 0.27 & 0.46 & -0.17 & -0.27 & -0.20 & 0.33 & 0.27 & -0.22 & -0.35 & -0.29 \\
0.36 & -0.35 & 0.21 & 0.27 & -0.17 & 0.15 & -0.20 & 0.26 & -0.60 & 0.04 & 0.14 & -0.29 \\
-0.20 & 0.35 & -0.27 & 0.14 & 0.50 & 0.27 & -0.20 & -0.14 & -0.27 & -0.40 & 0.22 & -0.29 \\
0.15 & -0.35 & -0.33 & -0.30 & -0.17 & -0.48 & -0.20 & -0.09 & 0.12 & -0.49 & 0.07 & -0.29 \\
-0.05 & -0.00 & 0.60 & -0.16 & -0.17 & 0.21 & 0.41 & -0.24 & 0.15 & -0.36 & 0.28 & -0.29
\end{array} \right]}
\end{align*}
Observe that the eigenvectors in columns 2, 7, 12 all have entries that are constant on the cells of \(\pi\). What can we say about the eigenvectors of \(\bs{A}\) not contained in \(\img(\bs{C})\)? It turns out that because \(\bs{A}\) is symmetric then the orthogonal complement of the subspace \(\ms{W}=\img(\bs{C})\) is also \(\bs{A}\)-invariant, where the orthogonal complement of \(\ms{W}\) is the set of vectors orthogonal to vectors in \(\ms{W}\). The orthogonal complement of a subspace \(\ms{W}\) is denoted by \(\ms{W}^\perp\) and thus
\[
\ms{W}^\perp = \{\bs{z}\in\real^n\;|\; \langle\bs{z},\bs{w}\rangle=0\;\;\text{for all \(\bs{w}\in\ms{W}\)}\}.
\]
Since \(\ms{W}^\perp\) is \(\bs{A}\)-invariant, by Theorem 6.3.3, there is a basis of \(\ms{W}^\perp\) consisting of eigenvectors of \(\bs{A}\). One can show that for \(\ms{W}=\img(\bs{C})\) it holds that
\[
\ms{W}^\perp = \ker(\bs{C}^T).
\]
It is not hard to see that \(\bs{z}\in\ker(\bs{C}^T)\) if and only if the entries of \(\bs{z}\) on each cell sum to zero. Hence, the remaining eigenvectors of \(\bs{A}\) not contained in \(\img(\bs{C})\) have the property that their entries on each cell sum to zero. We summarize with the following.
Suppose that \(\pi=\{C_1,C_2,\ldots,C_k\}\) is an equitable partition of \(G\). Then the eigenvectors of \(\bs{A}\) can be split into two groups, namely, those that are constant on the cells of \(\pi\) (i.e., contained in \(\img(\bs{C}))\) and those that sum to zero on the cells of \(\pi\) (i.e., contained in \(\ker(\bs{C}^T))\).
Another equitable partition of the Frucht graph is
\[
\pi=\{\{3,7,10\},\{1, 2, 4, 5, 6, 8, 9, 11, 12\} \}
\]
so that \(\pi\) is a \(k=2\) partition. One can verify that \(\bs{A}\) has \(k=2\) linearly independent eigenvectors that are constant on the cells of \(\pi\), namely, the 5th eigenvector and the last. The remaining \(n-k = 12-2=10\) eigenvectors sum to zero on the cells of \(\pi\). For example, for the first eigenvector \(\bs{v}_1=(0.17, -0.14, -0.20, 0.44, -0.51, -0.31,0.39, -0.10, 0.36, -0.20,0.15,-0.05)\) the sum of the entries on cell \(C_1=\{3,7,10\}\) is \(-0.20+0.39-0.20 \approx 0\) (due to rounding errors) and the sum of the entries on cell \(C_2=\{1, 2, 4, 5, 6, 8, 9, 11, 12\}\) is \(0.17-0.14+0.44-0.51-0.31-0.10+0.36+0.15-0.05 \approx 0\) (again due to rounding errors).
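This cancellation can be checked directly from the rounded entries of \(\bs{v}_1\) listed above:
\begin{verbatim}
import numpy as np

v1 = np.array([0.17, -0.14, -0.20, 0.44, -0.51, -0.31,
               0.39, -0.10, 0.36, -0.20, 0.15, -0.05])
C1 = [3, 7, 10]
C2 = [1, 2, 4, 5, 6, 8, 9, 11, 12]
print(v1[[i - 1 for i in C1]].sum())   # -0.01, approximately 0 (rounding)
print(v1[[i - 1 for i in C2]].sum())   #  0.01, approximately 0 (rounding)
\end{verbatim}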