
Section 5.23 Session 25: Labelings and products of graphs

I started moving forward this week, but found myself doubling back to debug my code from the last Exercise in Session 24. I think these things are connected, so I’m kind of just using this week to flesh out some ideas.

Example 5.23.1. Exercise 1.

Find a graph \(2_A\text{...}\)
Solution.
I’ve already been thinking of a shape with two arrows in a path as per the following diagram:
\begin{equation*} 2_A = \boxed{\bullet \longrightarrow \bullet \longrightarrow \bullet} \end{equation*}
I think the sort this imposes on \(X\) has something to do with those “tails” on my presentations. Specifically, if I’m thinking of this graph as an endomap, it forms a sequence:
\begin{equation*} 2_A =\boxed{\bullet \longrightarrow \bullet \longrightarrow \bullet} \end{equation*}
\begin{equation*} 1_A =\boxed{\bullet \longrightarrow \bullet} \end{equation*}
\begin{equation*} 0_A = \boxed{\bullet} = D \end{equation*}
What makes this graph interesting is the ways its dots could collapse through incidence. The last two dots could combine to produce the idempotent \(\boxed{\bullet \longrightarrow \bullet \circlearrowleft} \text{,}\) the first and last could collapse to form a 2-cycle \(C_2 = \boxed{\bullet \leftrightarrows \bullet}\text{,}\) the first two could collapse to form a non-endomap \(\boxed{\circlearrowright \bullet \longrightarrow \bullet}\text{,}\) or all three dots could collapse to form the terminal object \(\boxed{\circlearrowright \bullet \circlearrowleft}\text{.}\) If the cycle \(C_2\) was described as separating \(X\) into “dots mapped to \(u\)” and “dots mapped to \(v\)”, this graph \(2_A\) would seem to divide \(X\) into “arrows mapped from \(u\)” and “arrows mapped from \(v\)”.
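To convince myself these collapses behave the way I expect, here’s a quick networkx sketch. The labels 0, 1, 2 for the three dots are just my own stand-ins, and I’m using nx.relabel_nodes to identify dots (arrows between merged dots become self-loops):
import networkx as nx

# the graph 2_A: three dots joined by two arrows in a path, 0 -> 1 -> 2
two_A = nx.DiGraph([(0, 1), (1, 2)])

# identifying dots by relabeling; arrows between merged dots become self-loops
idempotent  = nx.relabel_nodes(two_A, {2: 1})        # last two dots combine
two_cycle   = nx.relabel_nodes(two_A, {2: 0})        # first and last collapse
non_endomap = nx.relabel_nodes(two_A, {1: 0})        # first two collapse
terminal    = nx.relabel_nodes(two_A, {1: 0, 2: 0})  # all three collapse

for name, G in [("idempotent", idempotent), ("C_2", two_cycle),
                ("non-endomap", non_endomap), ("terminal", terminal)]:
    print(name, sorted(G.edges()))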
That non-map \(\boxed{\circlearrowright \bullet \longrightarrow \bullet}\) seems particularly interesting. If I have some arbitrary endomap \(X^{\circlearrowright \alpha}\text{,}\) I can think of this map \(X\rightarrow 2_A\) as mapping each “dot” to the product of maps \(\langle 1_X, \alpha \rangle\text{.}\)
This reminds me of the peg game from Session 17. Perhaps the solution to that problem lies in finding a map from the “arrows mapped from \(u\)” to the “dots mapped to \(v\)” for the specific starting state \(u\) and ending state \(v\text{.}\) The title of this session suggests I may have been on the right track with how I approached Session 11 Exercise 8. The general idea would be to color all the points/arrows accessible from the start red, color all the points/arrows accessible from the end blue, and conclude that the two points are connected if and only if some kind of purple “object” exists.
I’m not really sure how to articulate “how” this sorting works at the moment, but it seems like this graph is certainly full of interesting properties.

Example 5.23.2. Session 24 Debugging.

Continued from last time...
Solution.
Looking over my Python code from last week, I noticed another problem. While I fixed the commutation problem, I broke my graph product. Instead of \(C_2 \times C_2\) giving me two 2-cycles as expected, I was getting the graph described in this session as \(2_D\text{.}\) Fixing my graph product took me back up to 59 generators, so I’m starting to believe that my original result might actually have been right. The new graph_product function I came up with was defined as follows:
import networkx as nx

def graph_product(A, B):
    # for our product, the nodes are pairs of nodes
    a_size, b_size = len(A.nodes), len(B.nodes)
    new_size = a_size * b_size
    print(f"Nodes = {a_size} x {b_size} -> {new_size}")
    p_nodes = [str([a, b]) for a in A.nodes for b in B.nodes]
    print(p_nodes, len(p_nodes))
    # an arrow of the product is a pair of arrows, one from A and one from B,
    # going from the pair of sources to the pair of targets
    p_edges = [(str([a[0], b[0]]), str([a[1], b[1]]))
               for a in A.edges for b in B.edges]
    print(p_edges, len(p_edges))
    output = nx.DiGraph()
    output.add_nodes_from(p_nodes)
    output.add_edges_from(p_edges)
    return output
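As a quick sanity check of the fix (my own test, not anything from the book), the product of \(C_2\) with itself should now come out as two disjoint 2-cycles:
import networkx as nx

C2 = nx.DiGraph([(0, 1), (1, 0)])
P = graph_product(C2, C2)
print(P.number_of_nodes(), P.number_of_edges())   # 4 nodes, 4 arrows
print(nx.number_weakly_connected_components(P))   # 2 pieces, each a 2-cycle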
I still think there’s a better way of doing this using matrix operations and I think it might be connected to that shape \(2_A\) that I was looking at. Instead of padding my matrices with zeros like I did with graph_sum, I’m thinking I can stack an identity matrix and adjacency matrix together to separate the dots and arrows of a given endomap.
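Roughly, the kind of stacking I have in mind looks like this (just a sketch, with an arbitrary small endomap standing in for the real thing):
import numpy as np

# arbitrary stand-in endomap on 3 dots: 0 -> 1 -> 2 -> 2
M = np.zeros((3, 3), dtype=int)
for src, dst in [(0, 1), (1, 2), (2, 2)]:
    M[src, dst] = 1

# stack the identity (one row per dot) on top of the adjacency matrix
# (one row per dot's outgoing arrow)
stacked = np.vstack([np.eye(3, dtype=int), M])
print(stacked.shape)   # (6, 3)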
Using the endomaps from Session 15, I separated \(X^{\circlearrowright \alpha}\) into a sum of two objects and \(Y^{\circlearrowright \beta}\) into a sum of three objects. Notably, one object, \(C_2\text{,}\) falls into both sets. What’s interesting is that \(C_2\) has only one generator, but \(C_2 \times C_2\) requires two. Using my graph_product between pairs of these subgraphs showed that the relationship between the numbers of generators wasn’t as straightforward as I thought.
If my hypothesis about the generators being “non-zero eigenvalues” is correct, then those 59 generators do in fact come from the products of the disjoint shapes of \(X\) and \(Y\text{.}\)
Subgraph Presentations
\begin{equation*}
\begin{array}{|c|c|c|}
\hline
 & \boxed{x_0,x_1,x_2} & \boxed{x_3} \\
 & \alpha^5 x_0 = \alpha^2 x_0,\quad \alpha^2 x_0 = \alpha x_1,\quad \alpha^3 x_0 = \alpha x_2 & \alpha^2 x_3 = x_3 \\
\hline
\begin{array}{c} \boxed{y_0} \\ \beta^4 y_0 = \beta y_0 \end{array} & \text{Expect 16 generators} & \text{Expect 2 generators} \\
\hline
\begin{array}{c} \boxed{y_1,y_2} \\ \beta^6 y_1 = \beta^2 y_1,\quad \beta y_1 = \beta y_2 \end{array} & \text{Expect 29 generators} & \text{Expect 4 generators} \\
\hline
\begin{array}{c} \boxed{y_3} \\ \beta^2 y_3 = \beta y_3 \end{array} & \text{Expect 6 generators} & \text{Expect 2 generators} \\
\hline
\end{array}
\end{equation*}
The generator counts for these subproducts sum up to 59 total! The fact that \(59 = \frac{9 \times 13 + 1}{2}\) is probably not a coincidence. Back when I was working in game development, we used to use “homogenized vectors”, defining an equivalence class from \(\langle x_0,y_0,z_0,1 \rangle\) to \(\langle x_1,y_1,z_1,w \rangle\) by the relation \(\langle x_0,y_0,z_0 \rangle = \langle x_1/w,y_1/w,z_1/w \rangle\text{.}\) By adding a dimension, we can turn a “translation” into a matrix product. I’m wondering if something like that is going on here.
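For reference, that game-dev trick looks something like this (a minimal numpy sketch, not tied to the graphs above):
import numpy as np

def translation_matrix(tx, ty, tz):
    # 4x4 matrix that translates homogeneous vectors <x, y, z, 1>
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

p = np.array([1.0, 2.0, 3.0, 1.0])       # <x, y, z, 1>
q = translation_matrix(10, 0, -5) @ p    # translation as a matrix product
print(q[:3] / q[3])                      # [11.  2. -2.]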
I also thought that “unraveling” my matrices might tell me something about the products. Since each arrow is a non-zero element in the adjacency matrix, I can take the outer product of flattened matrices to find where my arrows might be in the product. Since the \(9 \times 9\) matrix has 81 elements with exactly 9 non-zero and the \(13 \times 13\) has 169 elements with exactly 13 non-zero, I can take the outer product to form a \(81 \times 169\) matrix with exactly 117 non-zero elements. I think with some clever linear algebra it might be possible to reshape this \(81 \times 169\) matrix into a \(117 \times 117\) matrix representing the product with a pair of projection matrices of shape \(9 \times 117\) and \(13 \times 117\) respectively.
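A small numpy check of that counting; the two endomaps here are arbitrary stand-ins of the right sizes, since all that matters is the 9 and 13 non-zero entries:
import numpy as np

def endomap_matrix(successor):
    # adjacency matrix of an endomap: dot i has exactly one arrow, to successor[i]
    n = len(successor)
    M = np.zeros((n, n), dtype=int)
    M[np.arange(n), successor] = 1
    return M

A9  = endomap_matrix([(i + 1) % 9 for i in range(9)])     # stand-in 9-dot endomap
B13 = endomap_matrix([(i + 1) % 13 for i in range(13)])   # stand-in 13-dot endomap

outer = np.outer(A9.ravel(), B13.ravel())
print(outer.shape)               # (81, 169)
print(np.count_nonzero(outer))   # 117 = 9 * 13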
In the same way that I thought \(2_A\) might be like stacking an identity and adjacency matrix together, I think I could extend this idea to an arbitrary \(N_A\) using powers of the adjacency matrix. By using 13 powers of the \(9 \times 9\) matrix and 9 powers of the \(13 \times 13\) matrix I can produce matrices of the shapes I’m looking for: \(9 \times 117\) and \(13 \times 117\text{.}\) However, I haven’t figured out what to do with those matrices to produce the product quite yet.
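The shapes at least work out; here’s a quick check with a pair of stand-in cycle matrices of the right sizes, counting the identity as the zeroth power:
import numpy as np

A9  = np.roll(np.eye(9,  dtype=int), 1, axis=1)   # stand-in 9-cycle permutation matrix
B13 = np.roll(np.eye(13, dtype=int), 1, axis=1)   # stand-in 13-cycle permutation matrix

P9  = np.hstack([np.linalg.matrix_power(A9,  k) for k in range(13)])  # 13 powers of the 9x9
P13 = np.hstack([np.linalg.matrix_power(B13, k) for k in range(9)])   # 9 powers of the 13x13
print(P9.shape, P13.shape)   # (9, 117) (13, 117)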