
Section 5.9 Article 4: Universal Mapping Properties, Part 9

I think I was at least on the right track last week, so let’s pick up work again on the distributive property.
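As a reminder, the property I’m after (as I understand it from the text) is that the canonical maps
\begin{equation*} A \times B + A \times C \longrightarrow A \times (B + C) \qquad \text{and} \qquad \mathbf{0} \longrightarrow A \times \mathbf{0} \end{equation*}
are both isomorphisms.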

Example 5.9.1. Exercise 20 (part 2):.

Continued from Section 5.8.
Solution.
I’ve been thinking about these product and sum families, and maybe the benefit of looking at indexed objects is that it allows me to gradually extend these diagrams with the sums and products of objects. Given that I already have \(C_0 = A, C_1 = B, C_2 = C\text{,}\) maybe I can start adding combinations of these objects under the product or sum as additional objects in our category. In particular, I’m curious about the objects \(C_3 = B+C\text{,}\) \(C_4 = B \times C\) because that pair shows up in both sides of the distributive property. Let’s try adding those to my diagram from last week. I’m going to colorize the graph so that my evaluations from the left are in blue and those from the right are in red.
Figure 5.9.2. Product and Sum Families Combined and Extended
Essentially, we’ve defined a "triple map" from \(A \times B \times C\) to \(A + B + C\) that corresponds uniquely to our choice of \(f\text{:}\)
\begin{equation*} X \rightarrow A \times B \times C \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} A+B+C \rightarrow Y \end{equation*}
Since every map in this category \(\mathbf{1}/\mathcal{S}\) is of the form \(\mathbf{1} \rightarrow X \rightarrow Y \rightarrow \mathbf{1} \text{,}\) we can use this to define an isomorphism \(A+B+C \longrightarrow A \times B \times C\text{.}\)
\begin{equation*} A+B+C \rightarrow Y \rightarrow \mathbf{1} \rightarrow X \rightarrow A \times B \times C \end{equation*}
This, in turn, allows me to define some endomaps on the spaces \(A \times B \times C\) and \(A+B+C\) by the following compositions:
\begin{equation*} A \times B \times C \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} A+B+C \rightarrow Y \rightarrow \mathbf{1} \rightarrow X \rightarrow A \times B \times C \end{equation*}
\begin{equation*} A+B+C \rightarrow Y \rightarrow \mathbf{1} \rightarrow X \rightarrow A \times B \times C \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} A+B+C \end{equation*}
Let’s name these \(A \times B \times C \xrightarrow{e_P} A \times B \times C\) and \(A + B + C \xrightarrow{e_S} A + B + C\text{.}\) I’m thinking that since we constructed these by splitting an isomorphism between \(A + B + C\) and \(A \times B \times C\text{,}\) we know they at least satisfy the properties \(e_P^2 = 1_{A \times B \times C}\) and \(e_S^2 = 1_{A+B+C}\text{.}\) Perhaps the question to ask here is whether \(e_P,e_S\) are the respective identity maps or maps with no fixed points. Either way, this allows us to define two unique maps to \(\mathbf{2}\) based on this fixed-point property.
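To keep that fixed-point question concrete, here’s a minimal sketch in Python, with a small made-up finite set standing in for \(A \times B \times C\) and an arbitrary endomap on it (none of these names come from the text):

# A small finite set standing in for A x B x C, and an endomap e_P given as a lookup table.
P = [0, 1, 2, 3]
e_P = {0: 0, 1: 2, 2: 1, 3: 3}

# The induced map to 2: send x to 1 if e_P fixes x, and to 0 otherwise.
def fixed_point_indicator(e, x):
    return 1 if e[x] == x else 0

print([fixed_point_indicator(e_P, x) for x in P])   # [1, 0, 0, 1]
# e_P is the identity exactly when every entry is 1, and fixed-point-free when every entry is 0.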
Let’s continue by adding these maps to our diagram, and complete it with the other possible sums and products needed for our distributive property:
Figure 5.9.3. Product and Sum Families Growing Even Bigger
Maybe I haven’t been paying enough attention to the second condition: the fact that \(\mathbf{0} \rightarrow A \times \mathbf{0}\) is an isomorphism. Every \(X \xrightarrow{f} Y\) admits a unique map \(X \mathrel{\substack{\longrightarrow \\ \longrightarrow}}X \times Y\) that pairs each \(x_i\) with the respective \(\langle x_i,y_i \rangle\) pair. We also saw in the last exercise that we can define a map \(X \times Y \rightarrow \mathbf{2}\) based on whether \(x_i = y_i\) or \(x_i \neq y_i\text{.}\) We know we have a unique “antipodal map” \(\mathbf{2} \xrightarrow{\alpha} \mathbf{2}\) such that \(\alpha^2 = 1_\mathbf{2}\text{.}\) This is important because we could use this map to swap the roles of our terminal and initial object.
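Here’s a quick sketch of those two maps in the same made-up style, taking \(Y = X\) so the equality test makes sense (the data is purely illustrative):

# The pairing X -> X x X induced by an endomap f, the equality test X x X -> 2,
# and the antipodal map on 2 = {0, 1}.
X = ["a", "b", "c"]
f = {"a": "b", "b": "b", "c": "a"}

pairing = {x: (x, f[x]) for x in X}     # x |-> <x, f(x)>

def equals(p):                          # X x X -> 2
    return 1 if p[0] == p[1] else 0

alpha = {0: 1, 1: 0}                    # the antipodal map on 2

print({x: equals(pairing[x]) for x in X})         # {'a': 0, 'b': 1, 'c': 0}
print(all(alpha[alpha[v]] == v for v in (0, 1)))  # alpha squared is the identity: True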
I’m thinking that the existence of this map \(\mathbf{0} \rightarrow A \times \mathbf{0}\) is important because it gives us our monoid \(\mathbb{N}\) through composition with itself. This gives us an isomorphism between the following sequences of maps:
\begin{equation*} \mathbf{0} \rightarrow A \times \mathbf{0} \rightarrow A \times A \times \mathbf{0} \rightarrow A \times A \times A \times \mathbf{0} \rightarrow ... \end{equation*}
\begin{equation*} \mathbf{0} \rightarrow \mathbf{1} \rightarrow \mathbf{2} \rightarrow \mathbf{3} \rightarrow ... \end{equation*}
It seems we should be able to assume an isomorphism \(A \leftrightarrow \mathbb{N}\) here without loss of generality.
If our distributive property is to hold for arbitrary choices of \(A,B,C\text{,}\) what happens if we choose \(B = \mathbf{1}\) and \(C = X\text{?}\) Substituting into our distributive property gives us \(A \times \mathbf{1} + A \times X = A \times (X + \mathbf{1})\text{.}\) Our result from Exercise 9 says that any terminal object in \(\mathbf{1}/\mathcal{S}\) is also initial, so \(X + \mathbf{1} = X + \mathbf{0} = X\text{.}\)
I think I’m pretty lost again, but here’s my hunch as to what’s happening. Both the sum and the product each need a unique element that functions as the identity map. This gives us a unique map \(\mathbf{2} \rightarrow X \rightarrow \mathbf{2}\) for \(\mathbf{2} = \{\mathbf{0},\mathbf{1}\}\text{.}\) In the same way we saw a monoid arise from \(\mathbf{0} \rightarrow A \times \mathbf{0}\text{,}\) there should also be a dual notion of the monoid based on \(\mathbf{1} \rightarrow B+\mathbf{1}\text{,}\) and some map which swaps the behavior of these two monoids. The invertibility of maps in \(\mathbf{1}/\mathcal{S}\) conflicts with this because we know there can’t exist a retraction for \(\mathbf{2} \rightarrow \mathbf{1}\text{.}\)

Example 5.9.4. Exercise 21:.

If \(A, D\) denote the generic arrow and the naked dot...
Solution.
So we’re given that \(A = \boxed{\bullet \rightarrow \bullet}\) and \(D = \boxed{\bullet}\text{,}\) and in the category \(\mathcal{S}^{\downarrow_\bullet^\bullet \downarrow}\) we know that all maps must satisfy \(s' f_A = f_D s\) and \(t' f_A = f_D t\) to preserve source and target.
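Before going further, let me spell out what that structure preservation means concretely. Here’s a minimal sketch in Python (with made-up data) of a graph as source/target tables and a check that a candidate pair \((f_A, f_D)\) satisfies both equations:

# A graph in this category: a set of arrows, a set of dots, and source/target maps.
X = {"arrows": ["u", "v"], "dots": [1, 2, 3],
     "s": {"u": 1, "v": 2}, "t": {"u": 2, "v": 3}}
Y = {"arrows": ["w", "l"], "dots": ["p", "q"],
     "s": {"w": "p", "l": "q"}, "t": {"w": "q", "l": "q"}}

# A candidate map of graphs: f_A on arrows, f_D on dots.
f_A = {"u": "w", "v": "l"}
f_D = {1: "p", 2: "q", 3: "q"}

def is_graph_map(X, Y, f_A, f_D):
    # s' f_A = f_D s and t' f_A = f_D t must hold for every arrow of X.
    return all(Y["s"][f_A[a]] == f_D[X["s"][a]] and
               Y["t"][f_A[a]] == f_D[X["t"][a]] for a in X["arrows"])

print(is_graph_map(X, Y, f_A, f_D))   # True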
Let’s start by creating a “section” on \(X\) that categorizes each element as a “dot” or an “arrow”. Let’s call this "binary" map \(X \xrightarrow{b} \{\text{dot},\text{arrow}\}\text{.}\) Since \(X = X_A + X_D\text{,}\) this gives us a unique pair of injections \(X_A \xrightarrow{j_1} X\) and \(X_D \xrightarrow{j_2} X\text{.}\)
For any arrow in \(X_A\text{,}\) we have a unique map \(X_A \mathrel{\substack{s \\ \longrightarrow \\ \longrightarrow \\ t}} X_D \times X_D\) that decomposes the arrow into a pair of dots. I think the catch here is that this projection map \(X_A \xrightarrow{p_A} A\) needs to preserve structure when we compose it with the map \(X_D \xrightarrow{p_D} D\text{.}\)
Let’s consider an arbitrary pair of arrows \(a_1,a_2\) in \(X\text{.}\) These arrows can share a source, share a target, share neither, or they could share both. Each pair of arrows corresponds with a minimum of one dot and a maximum of four. We should then be able to take a sum of those dots as per the following diagram:
Figure 5.9.5. Product of arrows to sum of dots
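To make the dot count above concrete, here’s a quick sketch (same made-up style) that collects the distinct endpoints of a pair of arrows:

# For a pair of arrows, collect the distinct dots among their sources and targets.
# Anywhere from 1 dot (two loops at the same dot) to 4 dots (two disjoint arrows) can appear.
def endpoint_dots(s, t, a1, a2):
    return {s[a1], t[a1], s[a2], t[a2]}

s = {"u": 1, "v": 3, "loop": 1}
t = {"u": 2, "v": 4, "loop": 1}

print(endpoint_dots(s, t, "u", "v"))         # {1, 2, 3, 4} -- four dots
print(endpoint_dots(s, t, "loop", "loop"))   # {1} -- one dot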
Let’s consider the relationship between the dots and arrows of a finite digraph \(X\text{.}\) If there are \(|X_D| = n\) dots in the diagram, there are a maximum of \(n^2\) possible arrows between those dots. For each \(x_a\) in the set of arrows \(X_A\text{,}\) we can assign a unique index to that arrow that preserves the source and target. This gives us a unique pair of maps \(X_D \rightarrow \mathbb{N}_n\) and \(X_A \rightarrow \mathbb{N}_n \times \mathbb{N}_n \rightarrow \mathbb{N}_{n^2}\text{.}\) Furthermore, knowing these sets are disjoint should also give us an isomorphism \(\mathbb{N}_{n}+\mathbb{N}_{n^2} \rightarrow \mathbb{N}_{n+n^2}\) preserving the separation of dots and arrows. Since we know \(X_D+X_A = X\text{,}\) it follows that we have an injection \(X \rightarrow \mathbb{N}_{n+n^2}\) as well.
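Here’s a rough sketch of the indexing I have in mind, with a made-up finite graph: dots get indices in \(\mathbb{N}_n\text{,}\) an arrow from dot \(i\) to dot \(j\) gets index \(i \cdot n + j\) in \(\mathbb{N}_{n^2}\text{,}\) and the disjointness of dots and arrows gives a combined index into \(\mathbb{N}_{n+n^2}\text{:}\)

# Index the dots 0..n-1 and index the arrow i -> j as i*n + j, so arrows land in N_{n^2}.
# The combined index sends a dot to its dot index and an arrow to n + (its arrow index),
# keeping the two summands of X = X_D + X_A disjoint inside N_{n + n^2}.
dots = ["p", "q", "r"]                        # n = 3
arrows = {"u": ("p", "q"), "v": ("q", "q")}   # arrow name -> (source, target)

n = len(dots)
dot_index = {d: i for i, d in enumerate(dots)}
arrow_index = {a: dot_index[s] * n + dot_index[t] for a, (s, t) in arrows.items()}

combined = {**{d: dot_index[d] for d in dots},
            **{a: n + arrow_index[a] for a in arrows}}
print(combined)   # {'p': 0, 'q': 1, 'r': 2, 'u': 4, 'v': 7}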
Next, let’s look at a pair of arrows \(\langle a_1,a_2 \rangle \in X_A \times X_A\text{.}\) If there are \(|X_A| = m\) arrows in the original diagram, then there are \(m^2\) possible pairs of arrows. This gives us a map \(X_A \times X_A \rightarrow \mathbb{N}_{m^2}\) that indexes those pairs. Given that we have at most as many arrows as are possible, it follows that \(m \leq n^2\text{,}\) so we should have a unique map \(\mathbb{N}_{m} \rightarrow \mathbb{N}_{n^2}\) which links the index of an arrow in \(X_A\) with an indexed pair of points in \(X_D\text{.}\) We can effectively use this to define a map \(\mathbb{N}_{n^2} \rightarrow \mathbf{2}\) by the rule that returns \(1\) if there’s an arrow connecting the two dots and \(0\) if not.
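Continuing with the same made-up graph, that indicator map might look like this:

# Same made-up graph: dots p, q, r and arrows p -> q, q -> q (n = 3).
dots = ["p", "q", "r"]
arrows = {"u": ("p", "q"), "v": ("q", "q")}
n = len(dots)
dot_index = {d: i for i, d in enumerate(dots)}

# N_{n^2} -> 2: the pair index k encodes the ordered pair of dots (k // n, k % n);
# return 1 if some arrow of the graph connects that pair, 0 if not.
def has_arrow(k):
    i, j = divmod(k, n)
    return 1 if any((dot_index[s], dot_index[t]) == (i, j) for s, t in arrows.values()) else 0

print([has_arrow(k) for k in range(n * n)])   # [0, 1, 0, 0, 1, 0, 0, 0, 0]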
So what happens when we try to inject our set of dots into the set of arrows? Each dot \(x_d \in X_D\) can be thought of as an arrow where the source and target are both the same point. This gives us a map \(\mathbb{N}_n \rightarrow \mathbb{N}_{n^2}\) that pairs each point with the would-be index of an arrow pointing to itself. We’ve got two cases: one where there already exists a self-loop at that point, and one where this arrow is a new object of its own.
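Concretely (still with the made-up graph), this map sends the dot with index \(d\) to the diagonal index \(d \cdot n + d\text{,}\) and the two cases are distinguished by whether that index is already occupied by an existing loop:

# Same made-up graph: dots p, q, r; arrows p -> q and a loop at q (n = 3).
dots = ["p", "q", "r"]
arrows = {"u": ("p", "q"), "v": ("q", "q")}
n = len(dots)
dot_index = {d: i for i, d in enumerate(dots)}

# N_n -> N_{n^2}: send each dot to the index a self-loop at that dot would occupy.
diagonal = {d: dot_index[d] * n + dot_index[d] for d in dots}

# Case split: does an arrow with that source and target already exist?
existing_loops = {s for (s, t) in arrows.values() if s == t}
print({d: (diagonal[d], d in existing_loops) for d in dots})
# {'p': (0, False), 'q': (4, True), 'r': (8, False)}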
This is starting to become a lot of information, so let’s see if we can organize these thoughts a bit better in a diagram:
Figure 5.9.6. Decomposition of \(X\) by indexing dots and arrows
So where does that leave us? I’m guessing that the structure preservation of maps in our category gives us the pair of maps \(D \mathrel{\substack{s \\ \longrightarrow \\ \longrightarrow \\ t }} A\text{,}\) and we can use the sum in our category to find the unique map for which all triangles below commute:
Figure 5.9.7. Illustrating the sum \(A+D\)
Maybe the key to all this is to section off the points in \(X_D\) according to whether they have a corresponding self-loop in \(X_A\text{:}\) the dots which loop, \(D_L\text{,}\) and the dots which do not, \(D_N\text{.}\) This gives us \(X_D = D_L + D_N\text{,}\) where each dot in \(D_L\) is paired with an existing arrow in \(X_A\text{.}\) If \(|D_L|\) represents how many points that applies to, then the remaining \(|D_N|\) end up creating new points. Each pair of arrows \(A \times A\) can be thought of as the sum of some arrows \(A\) where the source and target are different, with these objects \(D_L, D_N\) that section off the points and loops. Could this give us the \(A \times A \rightarrow A + D_L + D_N\) we’re looking for?
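Here’s that sectioning on the same made-up graph (a dot lands in \(D_L\) when the graph already has a loop at it, and in \(D_N\) otherwise):

# Same made-up graph: dots p, q, r; arrows p -> q and a loop at q.
dots = ["p", "q", "r"]
arrows = {"u": ("p", "q"), "v": ("q", "q")}

looped = {s for (s, t) in arrows.values() if s == t}
D_L = [d for d in dots if d in looped]        # dots that already carry a self-loop
D_N = [d for d in dots if d not in looped]    # dots that would need a new loop
print(D_L, D_N)   # ['q'] ['p', 'r']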
I’m not exactly happy with these solutions, but at this point I think the best course of action is to move forward. It sounds like some of my confusion will be addressed with the proof of the distributive property later in the text.