
Section 4.18 Session 17: Some uses of graphs

This week’s reading started with the discussion of a simple game, so I kicked off the week by implementing it in JavaScript using React: rruff82.github.io/LS-Ses17-PegGame. If you want to see how it works, the source code is available at github.com/rruff82/LS-Ses17-PegGame. Looking back over my code, it’s kind of funny how obvious it is that my JavaScript was derived from a Python implementation.
Maybe I’ll come back to this later and see if I can "solve the game" also, but for now I’ll just settle for solving the exercises to see how they might be related to that task.

Problem 4.18.1. Exercise 1:.

...have a terminal object?
Solution.
Our given diagrams are reconstructed below:
Figure 4.18.2.
One part of this problem that’s a little unclear at the moment is how exactly we’re defining "terminal object". Given the context, I’m guessing it means that there is a path from every point in the graph to the terminal object.
The other part that’s confusing is how we’re handling self-loops. Since every "point" in the graph can be thought of as having an implicit self-loop, it makes sense to treat each point as having a path of length 0 to itself. Even though \(a_0\) has an explicit map to itself and \(b_0\) does not, I’m thinking we still need to treat both of them as "terminal objects" in order to properly account for paths of length 0.
If I treat \(a_0\) as a terminal object in (a) and \(b_0\) as a terminal object in (b), then it would make sense to identify \(c_3\) as a terminal point in (c) even though the loop at this point is not explicitly shown. There’s a path of length 0 from \(c_3\) to itself, paths of length 1 from \(c_2\) or \(c_4\text{,}\) and paths of length 2 from \(c_0\) or \(c_1\text{.}\)
In graph (d), I’m thinking it makes the most sense to identify both \(d_0\) and \(d_1\) as terminal points. My reasoning is that the existence of a path of length 0 from \(d_0\) to itself and a path of length 1 from \(d_1\) means that \(d_0\) is terminal, while \(d_1\) likewise has a path of length 0 to itself and a path of length 1 from \(d_0\text{.}\) The fact that we can follow the loop to potentially make longer paths is irrelevant.
In (e), it seems that \(e_1\) would be the lone terminal object. We have two paths of length 1 from \(e_0\) to \(e_1\) and an implicit path of length 0 from \(e_1\) to itself. There is no path that starts at \(e_1\) and ends at \(e_0\) so that would disqualify \(e_0\) as a terminal point.
In (f), my definition of terminal object would imply that no such element exists here. There is no path from \(f_0, f_1\) to \(f_2\) or vice versa. This seems very similar to the situation we saw in the last exercise of Session 11.
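To double-check this working definition of "terminal object", here’s a small Python sketch that flags the points reachable from every point, counting the length-0 path from each point to itself. The helper name and the edge lists for (d) and (e) are my own reconstruction from the descriptions above.

```python
def terminal_points(vertices, edges):
    """Return the vertices that can be reached from every vertex,
    counting the length-0 path from each vertex to itself."""
    reach = {v: {v} for v in vertices}      # length-0 paths
    changed = True
    while changed:                          # naive transitive closure
        changed = False
        for (src, dst) in edges:
            new = reach[dst] - reach[src]
            if new:
                reach[src] |= new
                changed = True
    return [t for t in vertices if all(t in reach[v] for v in vertices)]

# Graph (d) as I read it: a 2-cycle d0 -> d1 -> d0.
print(terminal_points(["d0", "d1"], [("d0", "d1"), ("d1", "d0")]))  # ['d0', 'd1']

# Graph (e) as I read it: two parallel arrows e0 -> e1 and nothing back
# (only one of the parallel arrows matters for reachability).
print(terminal_points(["e0", "e1"], [("e0", "e1")]))                # ['e1']
```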

Problem 4.18.3. Exercise 2:.

Show that a diagram of shape \(\boxed{\bullet \mathrel{\substack{ \textstyle\longrightarrow \\[-0.6ex] \textstyle\longleftarrow}} \bullet }\) commutes...
Solution.
Let’s start by labeling the diagram as if the dots were sets and the arrows were maps: \(A \mathrel{\substack{s \\[-0.6ex] \textstyle\longrightarrow \\[-0.6ex] \textstyle\longleftarrow \\[-0.6ex] r}} B\text{.}\) These two maps are inverses if and only if \(r \circ s = 1_A\) and \(s \circ r = 1_B\text{.}\)
So what does it mean for the shape \(\boxed{\bullet \mathrel{\substack{ \textstyle\longrightarrow \\[-0.6ex] \textstyle\longleftarrow}} \bullet }\) to commute? I’m thinking that it means we need to enforce \(s = s \circ r \circ s\) and \(r = r \circ s \circ r\text{.}\) This seemed most clear if I expanded my maps into the following external diagrams:
Figure 4.18.4.
In these situations, we can follow the arrow directly from the top left to top right or we can travel "down and around". Saying it "commutes" means we’ll get the same result no matter which path we take.
Given that we have maps satisfying \(s = s \circ r \circ s\) and \(r = r \circ s \circ r\text{,}\) can we use that information to establish \(r \circ s = 1_A\) and \(s \circ r = 1_B\text{?}\) It’s clear that we can apply \(s\) on the left to show \(r \circ s = 1_A \implies s \circ r \circ s = s\) and apply \(r\) on the left to show \(s \circ r = 1_B \implies r \circ s \circ r = r\text{,}\) but the converse isn’t so straightforward.
Suppose \(r \circ s \neq 1_A\text{.}\) We’d need to have some elements \(a_1,a_2 \in A\) with the property that \(a_2 = (r \circ s)(a_1) \neq a_1\text{.}\) Apply \(s\) on the left of both sides to get \(s(a_2) = (s \circ r \circ s)(a_1) \neq s(a_1)\text{.}\) Since our definition of "commute" establishes \((s \circ r \circ s)(a_1) = s(a_1)\) we get a contradiction \(s(a_2) = s(a_1) \neq s(a_1)\text{.}\) It follows that \(r \circ s = 1_A\text{,}\) and we can use an analogous argument to show \(s \circ r = 1_B\text{.}\)
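As a quick sanity check on the forward direction, here’s a Python sketch with a hypothetical pair of inverse maps encoded as dicts (the sets and the names are mine): being inverses forces both "down and around" composites to agree with the direct arrows.

```python
def compose(g, f):
    """Return g after f for maps encoded as dicts (f is applied first)."""
    return {x: g[f[x]] for x in f}

# A hypothetical pair of inverse maps between A = {0, 1, 2} and B = {'a', 'b', 'c'}:
s = {0: 'a', 1: 'b', 2: 'c'}     # s : A -> B
r = {'a': 0, 'b': 1, 'c': 2}     # r : B -> A
id_A = {x: x for x in s}
id_B = {y: y for y in r}

assert compose(r, s) == id_A and compose(s, r) == id_B   # r∘s = 1_A, s∘r = 1_B
assert compose(s, compose(r, s)) == s                    # hence s∘r∘s = s
assert compose(r, compose(s, r)) == r                    # and   r∘s∘r = r
print("inverse maps satisfy both commuting equations")
```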

Problem 4.18.5. Exercise 3:.

In the diagram...
Solution.
Let’s assume we’re given nothing more than the three equations: \(jf = mi\text{,}\) \(kg = nj\text{,}\) \(lh = pk\text{.}\) It follows from the associative property that
\begin{equation*} pnmi = pn(mi) = pn(jf) = pnjf \end{equation*}
\begin{equation*} pnjf = p(nj)f = p(kg)f = pkgf \end{equation*}
\begin{equation*} pkgf = (pk)gf = (lh)gf = lhgf\text{.} \end{equation*}
Thus, \(pnmi = lhgf\text{.}\)
This seems almost too easy, but I’ll take it.
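To see the pasting in action, here’s a Python sketch with some hypothetical maps on the integers, chosen (by me) only so that each of the three squares commutes; it then checks that the outer rectangle commutes too.

```python
# Top row, vertical maps, and bottom row; the specific formulas are mine.
f = lambda x: x + 1
g = lambda x: x + 2
h = lambda x: x + 3
i = j = k = l = (lambda x: 2 * x)   # the four vertical maps
m = lambda y: y + 2                 # picked so that j∘f = m∘i
n = lambda y: y + 4                 # picked so that k∘g = n∘j
p = lambda y: y + 6                 # picked so that l∘h = p∘k

for x in range(-5, 6):
    # the three given squares...
    assert j(f(x)) == m(i(x)) and k(g(x)) == n(j(x)) and l(h(x)) == p(k(x))
    # ...paste into the outer rectangle: p∘n∘m∘i = l∘h∘g∘f
    assert p(n(m(i(x)))) == l(h(g(f(x))))
print("outer rectangle commutes")
```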

Problem 4.18.6. Exercise 4:.

For each of these diagrams...
Solution.
I think we’ll actually need the diagrams for these:
Figure 4.18.7.
In (a), it looks like we have two paths from \(A\) to \(A\) and two paths from \(B\) to \(B\text{.}\) To make the paths from \(A\) to \(A\) commute we’d need \(g \circ f = h \circ f\text{.}\) To make the paths from \(B\) to \(B\) commute we’d need \(f \circ g = f \circ h\text{.}\)
In (b), we have a cycle of length 3, which means we can start and end at any of \(A\text{,}\) \(B\) and \(C\text{.}\) It shouldn’t matter how many times we loop around, so going around the cycle more than once should give the same result as going around once. Therefore, the cycle for \(A\) corresponds with the relation \(h \circ g \circ f = (h \circ g \circ f) \circ (h \circ g \circ f)\text{,}\) the cycle for \(B\) corresponds with \(f \circ h \circ g = (f \circ h \circ g) \circ (f \circ h \circ g)\text{,}\) and the cycle for \(C\) corresponds with \(g \circ f \circ h = (g \circ f \circ h) \circ (g \circ f \circ h)\text{.}\)
Are these equations the "minimal" ones? I’m not sure yet. Part of me originally wanted to make the cycle equal to the identity map, but my experiences with structure-preserving maps in Session 15 lead me to believe that’s not true. Obviously, something like \(h \circ g \circ f = 1_A\) would imply that \((h \circ g \circ f) \circ (h \circ g \circ f) = h \circ g \circ f\text{,}\) but there’s no assurance that we don’t lose points from \(A\) when we first apply \(f\text{.}\)
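Here’s a tiny concrete instance of that last point, with the maps encoded as dicts; the two-element sets and the constant maps are my own hypothetical choice.

```python
# Take A = B = C = {0, 1} and let f, g, h all collapse everything to 0.
f = g = h = {0: 0, 1: 0}

cycle = {x: h[g[f[x]]] for x in f}            # h ∘ g ∘ f
twice = {x: cycle[cycle[x]] for x in cycle}   # (h ∘ g ∘ f) ∘ (h ∘ g ∘ f)

print(cycle == twice)          # True:  the 3-cycle is idempotent...
print(cycle == {0: 0, 1: 1})   # False: ...but it is not the identity on A
```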
In (c), I’m thinking that the additional map between \(A\) and \(B\) gives us an additional path for each of the 3-cycles. Namely:
\begin{equation*} j \circ h \circ f = j \circ h \circ g = (j \circ h \circ f) \circ (j \circ h \circ f) = (j \circ h \circ g) \circ (j \circ h \circ g) \end{equation*}
\begin{equation*} f \circ j \circ h = g \circ j \circ h = (f \circ j \circ h) \circ (f \circ j \circ h) = (g \circ j \circ h) \circ (g \circ j \circ h) \end{equation*}
\begin{equation*} h \circ f \circ j = h \circ g \circ j = (h \circ f \circ j) \circ (h \circ f \circ j) = (h \circ g \circ j ) \circ (h \circ g \circ j ) \end{equation*}
Furthermore, the explicit self-loop at \(C\) seems to imply that \(j \circ i = j\) and \(i \circ h = h\) because we can freely choose to take or not take \(i\) at our discretion.
So how do I know I covered them all? I’m thinking we could come up with a way to enumerate all possible paths of a given length, and then use the relations to simplify any path longer than the minimal one.
Let’s start by considering the case in (a). We have precisely two paths of length 0, corresponding to the points \(A\) and \(B\) respectively. We have three paths of length 1, one starting at \(A\) and two starting at \(B\text{,}\) which directly correspond to our three maps \(f, g, h\text{.}\) Once we step up to paths of length 2, our relations reduce the four possible sequences down to two equivalence classes. Any longer path would need to be formed by a composition of these building blocks. Specifically, for \(k \in \mathbb{N}\text{,}\) any path of length \(2k\) will reduce to \((g \circ f)^k = (h \circ f)^k = g \circ f = h \circ f\) if it starts and ends at \(A\) or \((f \circ g)^k = (f \circ h)^k = f \circ g = f \circ h\) if it starts and ends at \(B\text{.}\) For a path of length \(2k+1\text{,}\) the start and end must be different points, but we can still simplify the path by taking out the loops of length 2. It doesn’t matter how many times it loops because both loops are idempotent.
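Here’s a rough Python sketch of that enumerate-and-reduce idea for diagram (a). I’m encoding a path as a string with the last map on the left (so "gf" means \(g \circ f\)), and the rewrite rules are the two equations from (a) together with the idempotency of the length-2 loops assumed above.

```python
# Diagram (a): f : A -> B, and g, h : B -> A.
arrows = {"f": ("A", "B"), "g": ("B", "A"), "h": ("B", "A")}

# Equations from (a) plus the assumed idempotency of the two length-2 loops.
rules = [("hf", "gf"), ("fh", "fg"), ("gfgf", "gf"), ("fgfg", "fg")]

def normalize(word):
    """Rewrite a path word until none of the rules apply."""
    changed = True
    while changed:
        changed = False
        for old, new in rules:
            if old in word:
                word = word.replace(old, new)
                changed = True
    return word

def paths(length, start):
    """All composable paths of a given length starting at `start`."""
    partial = [("", start)]
    for _ in range(length):
        partial = [(a + word, arrows[a][1]) for (word, at) in partial
                   for a in arrows if arrows[a][0] == at]
    return [word for (word, _) in partial]

for n in range(1, 8):
    print(n, sorted({normalize(w) for w in paths(n, "A")}))
# Prints ['f'] at length 1, then alternates between ['gf'] and ['fgf'].
```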
I think the case in (b) follows quite similarly. We’ve got 3 paths of length 0 corresponding to \(A,B,C\text{,}\) three paths of length 1 corresponding to \(f,g,h\text{,}\) three paths of length 2 corresponding to the compositions \(g \circ f, h \circ g, f \circ h\text{,}\) and three paths of length 3 corresponding to the 3 relations. Since every length can be expressed as \(3n+m\) for some \(n \in \mathbb{N}\) and \(m \in \{0,1,2\}\text{,}\) every path can be thought of as one of the 3-cycles in our relations repeated \(n\) times followed by a path of length \(m\text{.}\)
It’s not quite as clear how this works in (c) because the self-loop at \(C\) means we can potentially have cycles of almost any length. This means we’re going to have to take extra care to keep track of the domain and codomain relative to how we approached (a). We still have 3 paths of length 0 that start and end at \(A,B,C\) and 5 paths of length 1, one for each map \(f,g,h,i,j\text{.}\) Where this gets weird is when we look at paths of length two. In addition to having two pairs of equivalent two-step paths (\(f \circ j = g \circ j\) and \(h \circ f = h \circ g\)) we also have a two-step path \(j \circ i\) which is equal to the one-step path \(j\text{,}\) and likewise \(i \circ h = h\text{.}\)
I’m thinking this map \(i\) is kind of like a pit stop in a car race. If we subtract out all the time we spend in pit stops, what remains is the time we spend on the track. If we take the length of a path and subtract the number of times we apply \(i\text{,}\) what remains would be the number of steps we spend looping around our 3-cycle. Each time we loop we have two choices of path, but our relations enforce the fact that repeated loops are equivalent. I’m thinking that there are 4 distinct 4-step paths from \(A\) to \(B\text{,}\) namely \(f \circ j \circ h \circ f\text{,}\) \(f \circ j \circ h \circ g\text{,}\) \(g \circ j \circ h \circ f\text{,}\) and \(g \circ j \circ h \circ g\text{.}\) Any path containing more than 4 steps must be some composition of the smaller paths we’ve already established.
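A quick Python sketch to back up that count: it enumerates the composable 4-step paths from \(A\) to \(B\) in diagram (c), using the arrow directions I’ve been assuming (\(f, g : A \to B\text{,}\) \(h : B \to C\text{,}\) \(j : C \to A\text{,}\) and the self-loop \(i\) at \(C\)).

```python
# Diagram (c) as I read it: f, g : A -> B,  h : B -> C,  j : C -> A,  i : C -> C.
arrows = {"f": ("A", "B"), "g": ("A", "B"),
          "h": ("B", "C"), "j": ("C", "A"), "i": ("C", "C")}

def paths_between(length, start, end):
    """All composable paths of the given length from start to end,
    written with the last map on the left ('fjhf' means f∘j∘h∘f)."""
    partial = [("", start)]
    for _ in range(length):
        partial = [(a + word, arrows[a][1]) for (word, at) in partial
                   for a in arrows if arrows[a][0] == at]
    return [word for (word, at) in partial if at == end]

print(sorted(paths_between(4, "A", "B")))   # ['fjhf', 'fjhg', 'gjhf', 'gjhg']
```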
Coming back to the question of "minimal equations", perhaps something can be done to simplify these cycles for (c). If I include the relations \(f \circ j = g \circ j\) and \(h \circ f = h \circ g\text{,}\) then maybe I don’t need as many equations to spell out all the 3-cycles. For example, knowing \(h \circ f = h \circ g\) and \(j \circ h \circ f = (j \circ h \circ f) \circ (j \circ h \circ f)\) would automatically imply \(j \circ h \circ g = (j \circ h \circ g) \circ (j \circ h \circ g)\text{.}\)