
Section 5.26 Session 25: Labelings and products of graphs, Part 3

I came back to Session 25 again this week to see if I could make any progress. I found it hard to focus on a single problem though. It feels like I’m still missing something important.

Example 5.26.1. Exercise 1: (Part 3/?).

Continued proofreading...
Solution.
While messing around in Python this week (notebook: github.com/rruff82/LS-Categories/blob/main/assets/notebooks/ls-ses25-p2.ipynb), I decided to take a brute-force approach to this problem: I searched exhaustively for structure-preserving maps to see if the shape I hypothesized earlier did anything interesting.
I used a couple of examples from the textbook to check that my search algorithm worked as expected. Specifically, the problems from the start of Session 11 and Session 15, along with Test 2 Question 2, all gave me some simple cases to verify against. I noticed that my original solution to Session 11 Exercise 1 was off by 1! My new algorithm gave me a fourth map: the one where the loop maps to a fixed point.
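The notebook has the real thing, but the core of the search is simple enough to sketch here. This is just a minimal version, assuming each endomap is stored as a dict sending each point to its image (the names structure_preserving_maps, alpha, and beta are mine, not the book's):

    from itertools import product

    def structure_preserving_maps(alpha, beta):
        """All maps f: X -> Y with f(alpha(x)) == beta(f(x)) for every x in X,
        where alpha: X -> X and beta: Y -> Y are endomaps given as dicts."""
        X, Y = list(alpha), list(beta)
        found = []
        for values in product(Y, repeat=len(X)):
            f = dict(zip(X, values))
            if all(f[alpha[x]] == beta[f[x]] for x in X):
                found.append(f)
        return found

    # toy example: a point feeding into a fixed point, mapped into two fixed points
    alpha = {0: 1, 1: 1}          # 0 -> 1, 1 -> 1
    beta = {"a": "a", "b": "b"}   # two fixed points
    print(structure_preserving_maps(alpha, beta))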
With that in place, I tested out the sample graph I came up with in Session 25 Part 2 (Section 5.24). I discovered a slight problem: it divides my graph into four parts instead of just two.
After experimenting with several simple graphs, I’m starting to think that the best candidate for this map \(2_A\) is the standard idempotent map: \(\boxed{\bullet \longrightarrow \bullet \circlearrowleft}\text{.}\) A map of graphs to this object seems to sort out the generators of “tails”. More specifically, it identifies the points of \(X\) which are not in the image \(\text{Im}_\alpha(X)\text{,}\) that is, the points which are lost through the first application of \(\alpha\text{.}\)
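Picking out those lost points is a one-liner once the endomap is stored as a dict. This is just my own scratch illustration (lost_points is a name I made up), not anything from the book:

    def lost_points(alpha):
        """Points of X not in the image of the endomap alpha, i.e. the points
        that disappear after one application of alpha."""
        return set(alpha) - set(alpha.values())

    # a "tail" feeding into a loop: 0 -> 1 -> 2 -> 2
    print(lost_points({0: 1, 1: 2, 2: 2}))   # {0}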
I think it remains to be seen whether this is actually \(2_A\text{,}\) but at least it seems like it could be a useful property anyway.

Example 5.26.2. Exercise 2: (Part 1/?).

(a) Show that if a diagram of sets...
Solution.
We’re given a generic sum diagram:
[Figure: a generic sum diagram with injections \(j_1\) and \(j_2\)]
We’re told this has the property of a coproduct “but restricted to testing against the one cofigure type \(Y=2\)”. Our goal is to use this to prove it’s actually a coproduct.
The property in question is that for any two maps \(B_1 \xrightarrow{f_1} X\) and \(B_2 \xrightarrow{f_2} X\text{,}\) there is exactly one map \(S \xrightarrow{f} X\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{.}\)
We’re only testing against the object \(\mathbf{2}\text{,}\) so our assumption is that for any maps \(B_1 \xrightarrow{f_1} \mathbf{2}\) and \(B_2 \xrightarrow{f_2} \mathbf{2}\text{,}\) there is exactly one map \(S \xrightarrow{f} \mathbf{2}\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{.}\)
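Before diving into the proof, here's a toy sanity check of the property being assumed, using a concrete disjoint union of finite sets. This is purely my own scratch work, not part of the exercise:

    from itertools import product

    B1, B2 = ["x", "y"], ["z"]
    S = [("B1", b) for b in B1] + [("B2", b) for b in B2]   # S as a disjoint union

    for f1_vals in product([0, 1], repeat=len(B1)):
        f1 = dict(zip(B1, f1_vals))
        for f2_vals in product([0, 1], repeat=len(B2)):
            f2 = dict(zip(B2, f2_vals))
            # count the maps f: S -> 2 with f(j1(b)) = f1(b) and f(j2(b)) = f2(b)
            matches = 0
            for f_vals in product([0, 1], repeat=len(S)):
                f = dict(zip(S, f_vals))
                if all(f[("B1", b)] == f1[b] for b in B1) and \
                   all(f[("B2", b)] == f2[b] for b in B2):
                    matches += 1
            assert matches == 1

    print("exactly one f for every choice of f1 and f2")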
Let’s assume the opposite: suppose \(S\) is not a sum. Then there would need to be either no map \(S \xrightarrow{f} \mathbf{2}\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{,}\) or more than one such map.
At least one such map should exist as long as the category contains an initial object. The unique maps from the initial object, \(I \rightarrow B_1\) and \(I \rightarrow B_2\text{,}\) can be composed with \(f_1\) and \(f_2\) to produce a unique pair of compositions \(I \rightarrow B_1 \xrightarrow{f_1} \mathbf{2}\) and \(I \rightarrow B_2 \xrightarrow{f_2} \mathbf{2}\text{.}\) Since there are exactly two maps here, the set of these maps is isomorphic to the set \(\mathbf{2}\text{.}\)
Suppose that another triple of maps \(g: S \rightarrow \mathbf{2},\ k_1: S \rightarrow B_1,\ k_2: S \rightarrow B_2\) has the same property as \(f\text{.}\) If \(f \neq g\text{,}\) there exists some \(s \in S\) such that \(f s \neq g s\text{.}\) Applying \(f_1\) to this \(s\) gives us \(f_1 s = f j_1 s = g k_1 s\text{.}\) Applying \(f_2\) to this \(s\) gives us \(f_2 s = f j_2 s = g k_2 s\text{.}\) However, there are only two elements in \(\mathbf{2}\) and we already know \(f_1 s \neq f_2 s\) by virtue of the domains being different. The contradiction implies \(f\) and \(g\) are the same map.
For part (b), I think I’d like to look at these through the lens of an adjacency matrix. The generic sum corresponds to a set of \(\mathbf{5}\) objects: three “dots” and two “arrows”. The three dots determine the dimension of the matrix, making it a \(3 \times 3\) matrix with \(2\) non-zero entries corresponding to the arrows. Specifically, the matrix would be given by \(\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\text{.}\)
There’s a unique map which maps each of the three dots to itself, corresponding to the identity matrix: \(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \)
We can think of our “sum” as producing a pair of \(3 \times 1\) matrices given by \(\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\) that inject a single dot into the set of three dots, at the positions that will be mapped to the third dot by our adjacency matrix. We know there’s an isomorphism here because multiplying by \(\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\) swaps the roles of the two points and is invertible by iterating a second time.
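The bookkeeping above is easy to check numerically. A quick numpy sketch of my own, using the matrices above:

    import numpy as np

    A = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [0, 0, 0]])          # adjacency matrix of the generic sum
    j1 = np.array([[1], [0], [0]])     # injection of the first dot
    j2 = np.array([[0], [1], [0]])     # injection of the second dot
    P = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])          # swaps the roles of the two injected dots

    print((P @ j1 == j2).all(), (P @ j2 == j1).all())   # P exchanges the two injections
    print((P @ P == np.eye(3, dtype=int)).all())        # applying P twice gives the identity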
If my hypothesis about Exercise 1 is correct, the figure \(2_D\) corresponds with the matrix \(\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\) and \(2_A\) corresponds with \(\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}\text{.}\) Each of these has a complement formed by subtracting every element from \(1\text{.}\) In this case, the complement of \(\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\) is \(\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\) and the complement of \(\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}\) is \(\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}\text{.}\) The reason this strikes me as curious is that we can go from \(\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}\) back to \(\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}\) by multiplying with \(\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\text{,}\) but no matrix multiplication will take us from \(\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\) back to \(\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\text{.}\)
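The arithmetic behind that observation, as numpy scratch work (M_D and M_A are just my labels for the hypothesized matrices):

    import numpy as np

    M_D = np.array([[1, 1], [1, 1]])   # hypothesized matrix for 2_D
    M_A = np.array([[1, 1], [0, 0]])   # hypothesized matrix for 2_A
    swap = np.array([[0, 1], [1, 0]])

    print(1 - M_D)             # complement of 2_D: the zero matrix
    print(1 - M_A)             # complement of 2_A: [[0, 0], [1, 1]]
    print(swap @ (1 - M_A))    # multiplying by the swap recovers [[1, 1], [0, 0]]
    print(swap @ (1 - M_D))    # still the zero matrix; nothing recovers the all-ones matrix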
I think the idea here is that while the underlying sets of \(2_D\) and \(2_A\) are each isomorphic to \(\mathbf{2}\text{,}\) the retraction is not structure preserving. Two disjoint arrows correspond to the \(4 \times 4\) matrix \(\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{,}\) and there’s only one matrix that will send those two arrows to the ones in our sum.
I’m not really sure where I’m going with this anymore, so maybe I’ll come back later.

Example 5.26.3. Exercise 3: (Part 1/?).

Tricoloring a graph...
Solution.
You’ll find me exploring this a little in Python if you look at the code I checked in, so I may as well document what I observed.
I thought I could get away with NetworkX’s greedy_color at first, but it didn’t quite behave as described. After first manually verifying a coloring, I came up with an algorithm that produces a tricoloring from the presentation of the graph. It seemed to work for both of the endomaps from Session 15.
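I won't reproduce that algorithm here, but for graphs this small a brute-force search is enough to double-check any candidate tricoloring. This is a generic proper-3-coloring search of my own (the book's precise notion of tricoloring may differ slightly), not the code from my notebook:

    from itertools import product

    def find_tricoloring(vertices, edges):
        """Brute-force search for a proper 3-coloring: the two endpoints of
        every edge get different colors. Returns a dict or None."""
        vertices = list(vertices)
        for colors in product(range(3), repeat=len(vertices)):
            coloring = dict(zip(vertices, colors))
            if all(coloring[u] != coloring[v] for u, v in edges):
                return coloring
        return None

    # a 5-cycle needs three colors, so it makes a decent smoke test
    print(find_tricoloring(range(5), [(i, (i + 1) % 5) for i in range(5)]))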
Implementing it doesn’t address either of the two parts of this question though. I found the use of parity in this implementation kind of interesting. It reminds me of an old programming problem where you have a linked list and want to determine if it contains a cycle. One of the ways you can approach it is to iterate through the list with two pointers that advance at different rates, say by 2s and 3s. If you reach the end of the list, great, there are no loops. If the list does loop, at some point your pointers will overlap and you can terminate the program.
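The textbook version of that trick is Floyd's tortoise and hare, which advances the pointers by 1s and 2s; a quick sketch from memory (Node and has_cycle are my own names):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def has_cycle(head):
        """Floyd's tortoise and hare: two pointers advance at different rates;
        if the list loops, the fast pointer eventually catches the slow one."""
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False

    # a -> b -> c -> b (loops back to b)
    a, b, c = Node("a"), Node("b"), Node("c")
    a.next, b.next, c.next = b, c, b
    print(has_cycle(a), has_cycle(Node("x")))   # True False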
I’m going to have to come back to see how this induced coloring works, but I think there’s an important connection to be made here.

Example 5.26.4. Exercise 4: (Part 1/?).

In this exercise, ...
Solution.
Again, I didn’t make too much progress here, but I did check in some Python code that seemed relevant. Considering my hypotheses about the role of adjacency matrices in representing graphs, there seems to be a corresponding map that encodes the values of an adjacency matrix into the “bits” of an integer. While this method doesn’t work for all graphs, the fact that it could work for endomaps seems very relevant to this question.
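The encoding I have in mind is just row-major bit packing. A minimal sketch, assuming a square 0/1 matrix given as a list of lists (matrix_to_int and int_to_matrix are names I made up for this post):

    def matrix_to_int(matrix):
        """Pack a 0/1 adjacency matrix into the bits of an integer, row-major."""
        value = 0
        for row in matrix:
            for bit in row:
                value = (value << 1) | bit
        return value

    def int_to_matrix(value, n):
        """Inverse of matrix_to_int for an n-by-n matrix."""
        bits = [(value >> k) & 1 for k in range(n * n - 1, -1, -1)]
        return [bits[i * n:(i + 1) * n] for i in range(n)]

    m = [[0, 0, 1],
         [0, 0, 1],
         [0, 0, 0]]
    code = matrix_to_int(m)
    print(code, int_to_matrix(code, 3) == m)   # 72 True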
Not as much progress as I’d hoped, but any progress is progress...