Proofreading last week’s work...
Solution.
Immediately after this Exercise, the authors define a graph \(A_2\) that is exactly the graph I thought was going to be \(2_A\text{.}\) It seems incredibly unlikely that \(A_2\) and \(2_A\) would be the same graph, so my answer is almost certainly wrong. I decided to take a closer look at the examples leading up to this one.
I’ve been starting to form a tighter connection between the graphs and the adjacency matrices. The object \(D\) defined as the “naked dot” is essentially a \(1\times1\) matrix given by \(\begin{bmatrix}
0
\end{bmatrix}\text{.}\) Likewise, the “naked arrow” \(A\) can be thought of as a \(2\times2\) matrix given by \(\begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix}\text{.}\) By concatenating matrices along the diagonal and filling in zeroes, we can produce a graph sum such that \(D+D = \begin{bmatrix}
0 & 0 \\
0 & 0
\end{bmatrix}\) and \(A+A =
\begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}\text{.}\) I also get a nice representation of the “naked loop” as the matrix \(\begin{bmatrix} 1
\end{bmatrix}\text{.}\)
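To sanity-check these by machine, here is a small numpy sketch of the block-diagonal construction. The function name `graph_sum` is my own label for the operation described above, not anything from the text:

```python
import numpy as np

# Adjacency matrices for the "naked dot" D and the "naked arrow" A
D = np.array([[0]])
A = np.array([[0, 1],
              [0, 0]])

def graph_sum(M, N):
    """Concatenate two adjacency matrices along the diagonal, zero-filled."""
    m, n = M.shape[0], N.shape[0]
    S = np.zeros((m + n, m + n), dtype=int)
    S[:m, :m] = M
    S[m:, m:] = N
    return S

print(graph_sum(D, D))  # the 2x2 zero matrix
print(graph_sum(A, A))  # two disjoint arrows on four dots
```

Running this reproduces the matrices for \(D+D\) and \(A+A\) written out above.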
However, it’s important to note that this graph sum doesn’t commute at the level of matrices. The matrix \(D+A = \begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}\) is distinct from \(A+D = \begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\) under an element-wise comparison. Even though they’re not identical, they’re “isomorphic” in that each has exactly three dots and one arrow connecting two of them, so everything has a pair.
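The isomorphism can be made concrete with a permutation matrix: relabeling the dots of \(D+A\) turns its matrix into that of \(A+D\text{.}\) A quick numpy check (the particular permutation is my own choice for this example):

```python
import numpy as np

D_plus_A = np.array([[0, 0, 0],
                     [0, 0, 1],
                     [0, 0, 0]])
A_plus_D = np.array([[0, 1, 0],
                     [0, 0, 0],
                     [0, 0, 0]])

# Not equal entry-by-entry...
assert not np.array_equal(D_plus_A, A_plus_D)

# ...but relabeling the dots fixes that: send old dot 1 -> new dot 0,
# old dot 2 -> new dot 1, and old dot 0 -> new dot 2.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
assert np.array_equal(P @ D_plus_A @ P.T, A_plus_D)
```

Conjugating by \(P\) just renames the dots, so the underlying graph is unchanged.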
Speaking of pairs, I think that’s why our maps to \(C_2\) are important. Using the matrix notation, \(C_2\) can be thought of as \(\alpha = \begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}\) which happens to be the unique antipodal matrix satisfying \(\alpha^2 = I_{D+D}\text{.}\) Multiplying this matrix with the arrow matrix gives me the sum of a naked dot and a loop, as represented by the matrix form \(\begin{bmatrix} 0 & 0 \\ 0 & 1
\end{bmatrix}\text{.}\)
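Both claims in that paragraph are easy to verify with numpy (a sketch of my own calculation, not the book’s):

```python
import numpy as np

alpha = np.array([[0, 1],
                  [1, 0]])   # the C_2 swap
A = np.array([[0, 1],
              [0, 0]])       # the naked arrow

# The swap squares to the identity on two dots
assert np.array_equal(alpha @ alpha, np.eye(2, dtype=int))

# Multiplying the swap into the arrow yields a naked dot plus a loop
print(alpha @ A)  # [[0, 0], [0, 1]]
```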
What strikes me as interesting about this matrix representation is the way the graph described in the text as \(2_D\) corresponds with the matrix \(\begin{bmatrix} 1 & 1 \\ 1 & 1
\end{bmatrix}\text{.}\) This is exactly the matrix of \(D+D\) with zeroes and ones swapped. It got me wondering what would happen if I applied the same operation to \(A+A\text{.}\) This matrix, given by \(\begin{bmatrix}
1 & 0 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1
\end{bmatrix}
\text{,}\) produces the following graph:
I’m starting to think that this might be the graph \(2_A\) that I was looking for. The reason I’m so convinced of this is the way in which it resembles a particular shape on the cover of the book:
Using the notation I have been, this graph corresponds with the matrix given by \(\begin{bmatrix} 1 & 1 \\ 1 & 2
\end{bmatrix}\) (with the 2 representing my second self-loop). This seems relevant because taking these elements modulo 2 and left-multiplying by my antipodal matrix gives me back an arrow between the two dots, now decorated with a loop on each: \(\begin{bmatrix} 1 & 0 \\ 1 & 1
\end{bmatrix}\text{.}\)
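Here is that calculation spelled out in numpy, with the order of multiplication made explicit (the variable names are mine):

```python
import numpy as np

alpha = np.array([[0, 1],
                  [1, 0]])   # the antipodal / C_2 swap
M = np.array([[1, 1],
              [1, 2]])       # candidate 2_A matrix; the 2 counts the double self-loop

reduced = M % 2              # [[1, 1], [1, 0]]
result = alpha @ reduced     # left-multiply by the swap
print(result)                # [[1, 0], [1, 1]]
```

Note that multiplying on the other side, `reduced @ alpha`, gives a different matrix, so the side of the multiplication matters here.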
I want to spend a little more time playing with this. I think this notion of a “graph negation” operation could have some useful properties with respect to my sum and product.
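As a starting point for that play, the “graph negation” I’ve been using amounts to swapping every 0 and 1 in the adjacency matrix, which is an involution. A minimal numpy sketch (again, the function name is my own):

```python
import numpy as np

def negate(M):
    """Swap every 0 <-> 1 in a 0/1 adjacency matrix."""
    return 1 - M

D_plus_D = np.zeros((2, 2), dtype=int)
A_plus_A = np.array([[0, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 0, 0]])

print(negate(D_plus_D))  # [[1, 1], [1, 1]], the matrix I identified as 2_D
print(negate(A_plus_A))  # the candidate 2_A matrix from above

# Negating twice gives the original graph back
assert np.array_equal(negate(negate(A_plus_A)), A_plus_A)
```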