
Section 5.5 Article 4: Universal Mapping Properties, Part 5

This week the authors introduce some notation for "families". I take some strange comfort in the fact that the set of indices is allowed to be empty.

Example 5.5.1. Exercise 16.

Show that... has the appropriate universal mapping property...
Solution.
So let’s sketch out what we’re looking at.
Figure 5.5.2. Problem statement for Article 4 Exercise 16
In order to do so, we need to establish that it has the "universal mapping property" with respect to all of the following:
Figure 5.5.3. Universal maps for Article 4 Exercise 16
The "universal mapping property" essentially says we have 3 unique isomorphisms for the terminal object \(\mathbf{1}\text{:}\)
\begin{equation*} \mathbf{1} \rightarrow X \xrightarrow{f_a} C_a \rightarrow \mathbf{1} \end{equation*}
\begin{equation*} \mathbf{1} \rightarrow X \xrightarrow{f_b} C_b \rightarrow \mathbf{1} \end{equation*}
\begin{equation*} \mathbf{1} \rightarrow X \xrightarrow{f_c} C_c \rightarrow \mathbf{1} \end{equation*}
For convenience, let’s steal the multi-arrow notation from the previous exercise and denote it simply as \(X \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c\text{.}\) By the definition of our products, we can also name two other unique isomorphisms in this manner:
\begin{equation*} \mathbf{1} \rightarrow P \mathrel{\substack{p_a \\ \longrightarrow \\ \longrightarrow \\ p_b}} C_a \times C_b \rightarrow \mathbf{1} \end{equation*}
\begin{equation*} \mathbf{1} \rightarrow Q \mathrel{\substack{q \\ \longrightarrow \\ \longrightarrow \\ q_c}} P \times C_c \rightarrow \mathbf{1} \end{equation*}
I think this is similar to what we did with Exercise 15. Our universal mapping property says that we have a unique map \(f = \langle f_a, f_b, f_c \rangle\) satisfying \(X \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c\text{.}\) Our products define unique maps \(X \rightarrow P\) and \(X \rightarrow Q\text{,}\) which allow us to form two unique compositions:
\begin{equation*} \mathbf{1} \rightarrow X \rightarrow P \mathrel{\substack{\longrightarrow \\ \longrightarrow }} C_a \times C_b \rightarrow \mathbf{1} \end{equation*}
\begin{equation*} \mathbf{1} \rightarrow X \rightarrow Q \mathrel{\substack{\longrightarrow \\ \longrightarrow }} P \times C_c \rightarrow \mathbf{1} \end{equation*}
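In the category of sets, that unique map \(\langle f_a, f_b, f_c \rangle\) is forced to be the "tupling" of its components. A minimal sketch, using made-up finite sets and maps standing in for \(X\text{,}\) \(C_a\text{,}\) \(C_b\text{,}\) \(C_c\) (nothing here comes from the book's exercise itself):

```python
# Hypothetical finite set X and three maps out of it.
X = [0, 1, 2]
f_a = lambda x: x % 2        # X -> C_a
f_b = lambda x: x + 10       # X -> C_b
f_c = lambda x: -x           # X -> C_c

# The unique map f = <f_a, f_b, f_c> into the product C_a x C_b x C_c
# can only be the tupling of the three components.
f = lambda x: (f_a(x), f_b(x), f_c(x))

# Projections out of the product.
p_a = lambda t: t[0]
p_b = lambda t: t[1]
p_c = lambda t: t[2]

# The universal mapping property: p_i . f = f_i at every point of X.
assert all(p_a(f(x)) == f_a(x) for x in X)
assert all(p_b(f(x)) == f_b(x) for x in X)
assert all(p_c(f(x)) == f_c(x) for x in X)
```

Any other candidate for \(f\) would disagree with some \(f_i\) after projecting, which is exactly what uniqueness rules out.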
We can chain these maps together to get a unique map \(Q \rightarrow P\text{:}\)
\begin{equation*} Q \mathrel{\substack{\longrightarrow \\ \longrightarrow }} P \times C_c \rightarrow \mathbf{1} \rightarrow P \end{equation*}
Or a unique map \(P \rightarrow Q\text{:}\)
\begin{equation*} P \mathrel{\substack{\longrightarrow \\ \longrightarrow }} C_a \times C_b \rightarrow \mathbf{1} \rightarrow Q \end{equation*}
This is what we’d expect based on our "Uniqueness of Products" theorem. It states that if the maps \(P \rightarrow C_i\) and \(Q \rightarrow C_i\) both make products of the same family, then there is exactly one map \(P \xrightarrow{f} Q\) for which \(q_i f = p_i\) for all \(i \in I\text{,}\) and that this map \(f\) is an isomorphism. Since our \(I = \{a,b,c\}\text{,}\) this gives us three equations \(q_a f = p_a\text{,}\) \(q_b f = p_b\text{,}\) \(q_c f = p_c\text{.}\)
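The Uniqueness of Products theorem can be checked concretely in sets. Below, \(P\) and \(Q\) are two hypothetical presentations of a product of the same two-element family (one carries its pairs as \((a,b)\text{,}\) the other as \((b,a)\)); the only map commuting with the projections is the swap, and it is a bijection. This is just an illustration I made up, not the exercise's own data:

```python
# A hypothetical two-object family {C_a, C_b}.
C_a, C_b = ['x', 'y'], [0, 1]

# Two different sets, each a product of the same family.
P = [(a, b) for a in C_a for b in C_b]          # pairs stored as (a, b)
Q = [(b, a) for a in C_a for b in C_b]          # pairs stored as (b, a)

p_a, p_b = (lambda t: t[0]), (lambda t: t[1])   # projections from P
q_a, q_b = (lambda t: t[1]), (lambda t: t[0])   # projections from Q

# Uniqueness of Products: the only f: P -> Q with q_a f = p_a and
# q_b f = p_b is the swap, and it is an isomorphism (a bijection).
f = lambda t: (t[1], t[0])

assert all(q_a(f(t)) == p_a(t) and q_b(f(t)) == p_b(t) for t in P)
assert sorted(map(f, P)) == sorted(Q)           # f hits all of Q exactly once
```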
Well, we already have a uniquely defined map \(Q \rightarrow P\) given by \(q\) so we could only possibly have \(f = q^{-1}\text{.}\) It follows that \(q f = 1_P\) and \(f q = 1_Q\text{.}\) Apply \(q\) on the right of the three equations above to get:
\begin{equation*} q_a f q = q_a 1_Q = q_a = p_a q \end{equation*}
\begin{equation*} q_b f q = q_b 1_Q = q_b = p_b q \end{equation*}
\begin{equation*} q_c f q = q_c 1_Q = q_c = p_c q \end{equation*}
This map \(p_c\) seems to come from nowhere, but it is well defined as the composition \(p_c = q_c q^{-1}\text{.}\)
Putting this all together now, any triple-product \(\langle f_a, f_b, f_c \rangle: X \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c\) uniquely determines a triple-product \(\langle p_a q, p_b q, q_c \rangle: Q \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c\) via the isomorphisms given by:
\begin{equation*} \mathbf{1} \rightarrow X \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c \mathrel{\substack{\longrightarrow \\ \longrightarrow}} (C_a \times C_b) \times C_c \mathrel{\substack{\longrightarrow \\ \longrightarrow}} P \times C_c \rightarrow \mathbf{1} \end{equation*}
and
\begin{equation*} \mathbf{1} \rightarrow Q \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c \rightarrow \mathbf{1} \end{equation*}
As one last step, let’s see if we can draw a more complete diagram of the situation.
Figure 5.5.4. Problem statement for Article 4 Exercise 16
The "Uniqueness of Products" essentially states that we have 3 unique maps defined by the map equivalencies \(f_a = p_a f_P = p_a q f_Q\text{,}\) \(f_b = p_b f_P = p_b q f_Q\text{,}\) and \(f_c = q_c f_Q\text{.}\)
This still feels a little more "hand-wavy" than I’d like, but it makes sense that when the maps are uniquely defined by the domain and codomain that there would be precisely one way to compose them such that \(\mathbf{1} \rightarrow X \mathrel{\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}} C_a \times C_b \times C_c \mathrel{\substack{\longrightarrow \\ \longrightarrow}} P \times C_c \rightarrow \mathbf{1} \text{.}\)

Example 5.5.5. Exercise 17.

In \(\mathcal{S}\text{,}\) \(\mathcal{S}^{\downarrow_\bullet^\bullet \downarrow}\text{,}\) and \(\mathcal{S}^{\circlearrowright}\text{...}\)
Solution.
First let’s review the definition of sum. A pair of maps \(B_1 \xrightarrow{j_1} S\) and \(B_2 \xrightarrow{j_2} S\) makes a sum \(S = B_1 + B_2\) if for each object \(Y\) and each pair \(B_1 \xrightarrow{g_1} Y, B_2 \xrightarrow{g_2} Y\text{,}\) there is exactly one map \(S \xrightarrow{g} Y\) for which both \(g_1 = g j_1\) and \(g_2 = g j_2\text{.}\)
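In sets, the definition above is realized by the tagged disjoint union, where the unique map \(g\) is "case analysis" on which summand a point came from. A minimal sketch with hypothetical finite sets (the helper `copair` and the maps `g1`, `g2` are my own names, not the book's):

```python
# A hypothetical sum S = B_1 + B_2 in sets: the tagged disjoint union.
B1, B2 = ['a', 'b'], ['a', 'c']            # overlap is fine; tags keep them apart

j1 = lambda x: (1, x)                      # injection B_1 -> S
j2 = lambda x: (2, x)                      # injection B_2 -> S
S = [j1(x) for x in B1] + [j2(x) for x in B2]

# Given any pair g1: B_1 -> Y and g2: B_2 -> Y, the unique g: S -> Y
# with g j1 = g1 and g j2 = g2 is forced to branch on the tag.
def copair(g1, g2):
    return lambda s: g1(s[1]) if s[0] == 1 else g2(s[1])

g1 = lambda x: x.upper()                   # B_1 -> Y
g2 = lambda x: x * 2                       # B_2 -> Y
g = copair(g1, g2)

assert all(g(j1(x)) == g1(x) for x in B1)  # g j1 = g1
assert all(g(j2(x)) == g2(x) for x in B2)  # g j2 = g2
```

Any other \(g\) would have to disagree with \(g_1\) or \(g_2\) on some point of \(S\text{,}\) since every point of \(S\) carries a tag.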
If the definition of sum holds for every object \(Y\text{,}\) and we know we have a "point" \(\mathbf{1} \rightarrow S \rightarrow \mathbf{1}\text{,}\) we should be able to choose \(Y = \mathbf{1}\) as our object. This means that we have for each pair of maps \(B_1 \xrightarrow{g_1} \mathbf{1}\) and \(B_2 \xrightarrow{g_2} \mathbf{1}\) exactly one map \(S \xrightarrow{g} \mathbf{1}\) satisfying \(g_1 = g j_1\) and \(g_2 = g j_2\text{.}\)
It must continue to hold if we substitute \(Y = S\) as well. For any pair of maps \(B_1 \xrightarrow{h_1} S, B_2 \xrightarrow{h_2} S\text{,}\) we must have exactly one map \(S \xrightarrow{h} S\) such that \(h_1 = h j_1\) and \(h_2 = h j_2\text{.}\) If that’s true, then why not choose \(j_1,j_2\) as our arbitrary pair of maps? It follows that for \(B_1 \xrightarrow{j_1} S, B_2 \xrightarrow{j_2} S\text{,}\) there should be exactly one map \(S \xrightarrow{j} S\) with \(j_1 = j j_1\) and \(j_2 = j j_2\text{.}\)
Two maps are only equal when they produce the same output for every possible input, so this means for every map \(\mathbf{1} \xrightarrow{b_1} B_1\) we have \(j_1 b_1 = j j_1 b_1\) and for every map \(\mathbf{1} \xrightarrow{b_2} B_2\) we have \(j_2 b_2 = j j_2 b_2\text{.}\) Consider an arbitrary point \(\mathbf{1} \xrightarrow{s} S\text{.}\) If this \(s\) "comes from" \(B_1\text{,}\) there exists \(\mathbf{1} \xrightarrow{b_1} B_1\) such that \(s = j_1 b_1\text{.}\) Apply \(j\) on the left to get \(j s = j j_1 b_1 = j_1 b_1 = s\text{.}\) If this \(s\) "comes from" \(B_2\text{,}\) there exists \(\mathbf{1} \xrightarrow{b_2} B_2\) such that \(s = j_2 b_2\text{.}\) Apply \(j\) on the left to get \(j s = j j_2 b_2 = j_2 b_2 = s\text{.}\) We find that every point coming from either \(B_1\) or \(B_2\) is "fixed" by \(j\text{.}\)
Suppose we have some special point \(s_2\) that "comes from both \(B_1,B_2\)". This point \(\mathbf{1} \xrightarrow{s_2} S \xrightarrow{\bar{s_2}} \mathbf{1}\) would need to satisfy \(s_2 = j_1 b_1 = j_2 b_2\) for some \(\mathbf{1} \xrightarrow{b_1} B_1\) and \(\mathbf{1} \xrightarrow{b_2} B_2\text{.}\) Applying \(j\) on the left of all three parts of that equation gives us \(j s_2 = j j_1 b_1 = j_1 b_1 = s_2\) and \(j s_2 = j j_2 b_2 = j_2 b_2 = s_2\text{.}\) Since these are points, we have a corresponding pair of maps \(B_1 \xrightarrow{\bar{b_1}} \mathbf{1}\) and \(B_2 \xrightarrow{\bar{b_2}} \mathbf{1}\text{.}\) By our definition of sum, there should be exactly one map \(S \xrightarrow{\bar{b}} \mathbf{1}\) with \(\bar{b} = \bar{b_1} j_1\) and \(\bar{b} = \bar{b_2} j_2\) but here we have two unique maps given by the compositions \(S \xrightarrow{\bar{s_2}} \mathbf{1} \xrightarrow{b_1} B_1 \xrightarrow{j_1} S\) and \(S \xrightarrow{\bar{s_2}} \mathbf{1} \xrightarrow{b_2} B_2 \xrightarrow{j_2} S\text{.}\) This contradiction means points in \(S\) must come from at most one of \(B_1\) and \(B_2\text{.}\)
Now suppose we have some special point \(\mathbf{1} \xrightarrow{s_0} S \xrightarrow{\bar{s_0}} \mathbf{1}\) which comes from neither \(B_1\) nor \(B_2\text{.}\) There would exist no \(\mathbf{1} \xrightarrow{b_1} B_1\) such that \(j_1 s_0 = b_1\) and no \(\mathbf{1} \xrightarrow{b_2} B_2\) such that \(j_2 s_0 = b_2\text{.}\) We can apply our point’s dual \(\bar{s_0}\) on the right of both sides to see those equations are equivalent to \(j_1 s_0 \bar{s_0} = j_1 1_S = j_1 = b_1 \bar{s_0}\) and \(j_2 s_0 \bar{s_0} = j_2 1_S = j_2 = b_2 \bar{s_0}\text{.}\) However, if we substitute \(j_1 = b_1 \bar{s_0}\) and \(j_2 = b_2 \bar{s_0}\) into our earlier equations we see \(j_1 s_0 = b_1 \bar{s_0} s_0 = b_1 1_\mathbf{1} = b_1\) and \(j_2 s_0 = b_2 \bar{s_0} s_0 = b_2 1_\mathbf{1} = b_2\text{.}\) This contradicts our choice of \(s_0\) and implies that every point in \(S\) must come from at least one of \(B_1\) or \(B_2\text{.}\)
Having established each "point" \(\mathbf{1} \xrightarrow{s} S\) comes from "at most one point" in \(B_1,B_2\) and "at least one point" in \(B_1,B_2\text{,}\) I think it’s safe to say that each \(s\) comes from "exactly one of" \(B_1,B_2\text{.}\)
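This "exactly one of" conclusion is precisely what the tagged disjoint union delivers in sets: the images of the two injections cover \(S\) and never overlap, even when the underlying sets share elements. A small check under the same hypothetical tagging scheme (tag 1 for \(B_1\text{,}\) tag 2 for \(B_2\)):

```python
# Hypothetical summands that share the element 'a'.
B1, B2 = ['a', 'b'], ['a', 'c']
j1 = lambda x: (1, x)                       # injection B_1 -> S
j2 = lambda x: (2, x)                       # injection B_2 -> S
S = [j1(x) for x in B1] + [j2(x) for x in B2]

images_1 = {j1(x) for x in B1}
images_2 = {j2(x) for x in B2}

# "At least one": every point of S lies in the image of j1 or j2 ...
assert set(S) == images_1 | images_2

# ... and "at most one": the images are disjoint, because the tags
# differ even where B_1 and B_2 agree.
assert images_1 & images_2 == set()
```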
I think this is where we get to behavior that splits between different categories. When we’re in \(\mathcal{S}\text{,}\) each point "points to" itself. When we move to \(\mathcal{S}^{\circlearrowright}\text{,}\) each "point" \(\mathbf{1} \xrightarrow{x} X^{\circlearrowright \alpha}\) is a "fixed point" of the dynamical system, satisfying \(\alpha x = x\text{.}\) When we move further out to the category \(\mathcal{S}^{\downarrow_\bullet^\bullet \downarrow}\text{,}\) our "points" become the "loops" of a graph instead.
In the case of \(\mathcal{S}\)-maps, I’m thinking that our map \(S \xrightarrow{j} S\) defined above could really only be the identity map \(1_S\text{.}\) In the case of \(\mathcal{S}^{\circlearrowright}\)-maps, we already saw that \(j\) preserves the behavior of fixed points since \(j j_i = j_i\text{.}\) It stands to reason that in \(\mathcal{S}^{\downarrow_\bullet^\bullet \downarrow}\) we would have \(s' j = j s\) and \(t' j = j t\text{.}\) As long as each \(j_i\) preserves this structure also, such that \(s j_i = j_i s_i\) and \(t j_i = j_i t_i\text{,}\) it would follow that \(s' j j_i = j s j_i = j j_i s_i\) and \(t' j j_i = j t j_i = j j_i t_i\text{.}\)
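The structure-preservation claim for \(\mathcal{S}^{\circlearrowright}\) can be checked concretely: if the endomap on the sum acts tag-wise, each injection automatically commutes with the dynamics. A hypothetical sketch with two tiny dynamical systems of my own choosing:

```python
# Two hypothetical objects of S^(circlearrowright): sets with an endomap.
B1, alpha1 = [0, 1], (lambda x: 1 - x)        # a two-cycle
B2, alpha2 = ['p', 'q'], (lambda x: x)        # two fixed points

j1 = lambda x: (1, x)                         # injections into the sum
j2 = lambda x: (2, x)
S = [j1(x) for x in B1] + [j2(x) for x in B2]

# The endomap on S = B1 + B2 acts tag-wise, so each injection is a
# map of dynamical systems: alpha . j_i = j_i . alpha_i.
def alpha(s):
    tag, x = s
    return (1, alpha1(x)) if tag == 1 else (2, alpha2(x))

assert all(alpha(j1(x)) == j1(alpha1(x)) for x in B1)
assert all(alpha(j2(x)) == j2(alpha2(x)) for x in B2)
```

Note that the "points" \(\mathbf{1} \rightarrow S\) in this category are exactly the fixed points of `alpha`, which here all come from `B2`.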
I feel like this must have a simpler explanation that I’m overlooking. Maybe I should be using this notion of initial objects somehow. We defined an object \(\mathbf{0}\) to be "initial" if for every object \(X\) of \(\mathcal{C}\) there exists exactly one \(\mathcal{C}\)-map \(\mathbf{0} \rightarrow X\text{.}\) In particular, there should be precisely one map \(\mathbf{0} \rightarrow \mathbf{1}\text{.}\) Maybe precomposing our sum with some of these unique maps might give me more to work with.
Figure 5.5.6. Expanded definition of sum
I’m picturing my sum \(S\) as a set that contains at least two distinct elements: \(j_1 b_1\) and \(j_2 b_2\text{.}\) We defined a unique composition by the "path" \(\mathbf{1} \xrightarrow{s} S \xrightarrow{g} Y \rightarrow \mathbf{1}\text{,}\) but our retractions to \(\mathbf{1}\) allow us to "factor" through \(\mathbf{1}\) after we get to \(S\) because we have a unique pair of maps \(S \rightarrow \mathbf{1} \rightarrow S = 1_S\text{.}\) The existence of an equivalent map that doubles back without changing \(\mathbf{1} \xrightarrow{s} S \rightarrow \mathbf{1} \rightarrow S \xrightarrow{g} Y \rightarrow \mathbf{1}\) allows us to assert the uniqueness of two other maps:
\begin{equation*} \mathbf{1} \xrightarrow{b_1} B_1 \xrightarrow{j_1} S \rightarrow \mathbf{1} \rightarrow B_1 \xrightarrow{g_1} Y \rightarrow \mathbf{1} \end{equation*}
\begin{equation*} \mathbf{1} \xrightarrow{b_2} B_2 \xrightarrow{j_2} S \rightarrow \mathbf{1} \rightarrow B_2 \xrightarrow{g_2} Y \rightarrow \mathbf{1} \end{equation*}
I can imagine these as maps which go through the points of \(S\text{,}\) labeling each of them as coming from \(B_1\) or \(B_2\text{.}\) Furthermore, I can imagine \(B_1\) and \(B_2\) behaving like "dots" and "arrows" in \(\mathcal{S}^{\circlearrowright}\text{,}\) effectively alternating between \(j_1\) and \(j_2\) as we follow the endomap.
I think I’m going to stop here and let some of this sink in. I feel like I was supposed to use this notion of "families" somehow but didn’t. It might have something to do with my feud over the bookkeeping rules and "maps to nothing" from way back in Article 1.