Section 5.24 Session 25: Labelings and products of graphs, Part 2

As I continued to read, I realized my solution to last week’s Exercise is probably not correct. I’m also beginning to think that I need to finish my solutions to certain outstanding problems (i.e. the ones from Session 15) before I can proceed any further. It’s possible I might be stuck here a while...

Example 5.24.1. Exercise 1: (Part 2/?).

Proofreading last week’s work...
Solution.
Immediately after this Exercise, the authors define a graph \(A_2\) that is exactly the graph I thought was going to be \(2_A\text{.}\) It seems incredibly unlikely that \(A_2\) and \(2_A\) would turn out to be the same graph, so my answer is almost certainly wrong. I decided to take a closer look at the examples leading up to this one.
I’ve been starting to form a tighter connection between the graphs and their adjacency matrices. The object \(D\) defined as the “naked dot” is essentially a \(1\times1\) matrix given by \(\begin{bmatrix} 0 \end{bmatrix}\text{.}\) Likewise, the “naked arrow” \(A\) can be thought of as a \(2\times2\) matrix given by \(\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\text{.}\) By concatenating matrices along the diagonal and filling in zeroes, we can produce a graph sum such that \(D+D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\) and \(A+A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.}\) I also get a nice representation of the “naked loop” as the matrix \(\begin{bmatrix} 1 \end{bmatrix}\text{.}\)
However, it’s important to note that this method of forming a graph sum doesn’t commute, at least not element-wise. The matrix \(D+A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\) is distinct from \(A+D = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\) if we’re comparing entry by entry. Even though they’re not equal, they’re “isomorphic” in that each has exactly three dots and one arrow connecting two of them, so everything can be paired up.
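To double-check this bookkeeping, here’s a quick sketch in Python with NumPy. The helper name `graph_sum` is my own invention for the diagonal-concatenation operation, and I’m assuming every adjacency matrix is square:

```python
import numpy as np

# Adjacency matrices: the "naked dot" D, the "naked arrow" A, the "naked loop".
D = np.array([[0]])
A = np.array([[0, 1],
              [0, 0]])
loop = np.array([[1]])

def graph_sum(M, N):
    """Concatenate M and N along the diagonal, filling the rest with zeroes."""
    m, n = M.shape[0], N.shape[0]
    out = np.zeros((m + n, m + n), dtype=int)
    out[:m, :m] = M
    out[m:, m:] = N
    return out

# D + D is the 2x2 zero matrix; A + A has ones at positions (0,1) and (2,3).
print(graph_sum(D, D))
print(graph_sum(A, A))

# The sum is not commutative element-wise: D + A differs from A + D,
# even though the two graphs are isomorphic.
print(np.array_equal(graph_sum(D, A), graph_sum(A, D)))  # False
```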
Speaking of pairs, I think that’s why our maps to \(C_2\) are important. Using the matrix notation, \(C_2\) can be thought of as \(\alpha = \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}\text{,}\) which happens to be the unique antipodal matrix satisfying \(\alpha^2 = I_{D+D}\text{.}\) Multiplying this matrix by the arrow matrix gives me the sum of a naked dot and a loop, as represented by the matrix form \(\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\text{.}\)
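As a sanity check on those two claims (again in NumPy, with `alpha` standing in for my antipodal matrix):

```python
import numpy as np

# The antipodal matrix for C_2 and the "naked arrow" A from before.
alpha = np.array([[0, 1],
                  [1, 0]])
A = np.array([[0, 1],
              [0, 0]])

# alpha squared is the identity on two dots.
print(alpha @ alpha)  # [[1 0], [0 1]]

# Multiplying alpha by the arrow yields a naked dot plus a loop.
print(alpha @ A)      # [[0 0], [0 1]]
```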
What strikes me as interesting about this matrix representation is the way the graph described in the text as \(2_D\) corresponds with the matrix \(\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\text{.}\) This is exactly the matrix obtained by taking \(D+D\) and swapping every zero and one. It got me wondering what would happen if I applied the same swap to \(A+A\text{.}\) The resulting matrix, given by \(\begin{bmatrix} 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix} \text{,}\) produces the following graph:
Graph that is two arrows short of completion
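The zero/one swap is easy to script. Here’s a sketch, with `complement` as my own made-up name for the operation:

```python
import numpy as np

def complement(M):
    """Swap every 0 and 1 in a 0/1 adjacency matrix."""
    return 1 - M

DD = np.zeros((2, 2), dtype=int)      # the matrix of D + D
AA = np.array([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 0, 0]])         # the matrix of A + A

# Complementing D + D gives the all-ones matrix of 2_D.
print(complement(DD))

# Complementing A + A gives the 4x4 graph "two arrows short of completion".
print(complement(AA))
```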
I’m starting to think that this might be the graph \(2_A\) that I was looking for. The reason I’m so convinced of this is the way in which it resembles a particular shape on the cover of the book:
Graph depicted on cover of book
Using the notation I have been, this graph corresponds with the matrix given by \(\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}\) (with the 2 representing my second self-loop). This seems relevant because taking these elements modulo 2 and multiplying by my antipodal matrix gives me back the original arrow \(\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\text{.}\)
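That last computation, spelled out in NumPy (where `M` is my matrix for the cover graph and `alpha` is the antipodal matrix from earlier):

```python
import numpy as np

alpha = np.array([[0, 1],
                  [1, 0]])
M = np.array([[1, 1],
              [1, 2]])   # cover graph: the 2 counts the second self-loop

# Reduce the entries mod 2, then multiply by the antipodal matrix.
result = alpha @ (M % 2)
print(result)  # [[1 0], [1 1]]
```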
I want to spend a little more time playing with this. This notion of a “graph negation” operation could have some useful properties with respect to my sum and product.