We’re given a generic sum diagram \(B_1 \xrightarrow{j_1} S \xleftarrow{j_2} B_2\text{.}\)
We’re told this has the property of a coproduct “but restricted to testing against the one cofigure type \(Y=2\)”. Our goal is to use this to prove it’s actually a coproduct.
The property in question is that for any two maps \(B_1 \xrightarrow{f_1} X\) and \(B_2 \xrightarrow{f_2} X\text{,}\) there is exactly one map \(S \xrightarrow{f} X\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{.}\)
We’re only testing against the object \(\mathbf{2}\text{,}\) so our assumption is that for any maps \(B_1 \xrightarrow{f_1} \mathbf{2}\) and \(B_2 \xrightarrow{f_2} \mathbf{2}\text{,}\) there is exactly one map \(S \xrightarrow{f} \mathbf{2}\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{.}\)
Let’s assume the opposite: suppose \(S\) is not a sum. Then there would need to be either no map \(S \xrightarrow{f} \mathbf{2}\) such that \(f_1 = f j_1\) and \(f_2 = f j_2\text{,}\) or more than one such map.
At least one such map should exist as long as the category contains an initial object. The unique maps from the initial object, \(I \rightarrow B_1\) and \(I \rightarrow B_2\text{,}\) can be composed with \(f_1\) and \(f_2\) to produce the pair of compositions \(I \rightarrow B_1 \xrightarrow{f_1} \mathbf{2}\) and \(I \rightarrow B_2 \xrightarrow{f_2} \mathbf{2}\text{.}\) Since there are exactly two maps here, the set of these maps is isomorphic to the set \(\mathbf{2}\text{.}\)
Suppose that another map \(g: S \rightarrow \mathbf{2}\) satisfies the same property as \(f\text{,}\) namely \(f_1 = g j_1\) and \(f_2 = g j_2\text{.}\) If \(f \neq g\text{,}\) there exists some \(s \in S\) such that \(f s \neq g s\text{.}\) But every \(s \in S\) factors through one of the injections: if \(s = j_1 b\) for some \(b \in B_1\text{,}\) then \(f s = f j_1 b = f_1 b = g j_1 b = g s\text{,}\) and similarly \(f s = f_2 b = g s\) when \(s = j_2 b\text{.}\) Either way \(f s = g s\text{,}\) contradicting our choice of \(s\text{,}\) so \(f\) and \(g\) are the same map.
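In the category of sets this existence-and-uniqueness property can be brute-force checked on a small example. The sketch below is my own encoding, assuming \(S\) is the disjoint union of \(B_1\) and \(B_2\) with the evident injections; it enumerates every map \(S \rightarrow \mathbf{2}\) for every pair of test maps:

```python
from itertools import product

# Model the object 2 as {0, 1}, B1 and B2 as small sets,
# and S as their disjoint union with the evident injections.
B1 = ["a", "b"]
B2 = ["c"]
S = [("1", x) for x in B1] + [("2", y) for y in B2]

def j1(x): return ("1", x)  # injection B1 -> S
def j2(y): return ("2", y)  # injection B2 -> S

# For every pair of test maps f1: B1 -> 2 and f2: B2 -> 2, count the
# maps f: S -> 2 satisfying f . j1 = f1 and f . j2 = f2.
counts = []
for f1_vals in product([0, 1], repeat=len(B1)):
    for f2_vals in product([0, 1], repeat=len(B2)):
        f1 = dict(zip(B1, f1_vals))
        f2 = dict(zip(B2, f2_vals))
        n = sum(
            1
            for vals in product([0, 1], repeat=len(S))
            for f in [dict(zip(S, vals))]
            if all(f[j1(x)] == f1[x] for x in B1)
            and all(f[j2(y)] == f2[y] for y in B2)
        )
        counts.append(n)

print(counts)  # each entry should be exactly 1
```

Each of the eight \((f_1, f_2)\) pairs factors through exactly one \(f\text{,}\) as the coproduct property predicts.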
For part (b), I’d like to look at these through the lens of an adjacency matrix. The generic sum corresponds with a set of \(\mathbf{5}\) objects: three “dots” and two “arrows”. The three dots determine the dimension of the matrix, a \(3 \times 3\) matrix with \(2\) non-zero elements corresponding to the arrows. Specifically, the matrix is \(\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\text{.}\)
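As a quick consistency check (the numbering of dots is my own assumption: dots \(0, 1, 2\) with both arrows pointing at dot \(2\)), the matrix can be rebuilt directly from the two arrows:

```python
# Dots 0, 1, 2; the two arrows both point at dot 2 (my own numbering).
arrows = [(0, 2), (1, 2)]

# Build the 3x3 adjacency matrix: entry (src, dst) is 1 for each arrow.
A = [[0] * 3 for _ in range(3)]
for src, dst in arrows:
    A[src][dst] = 1

print(A)  # [[0, 0, 1], [0, 0, 1], [0, 0, 0]]
```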
There’s a unique map which maps each of the three dots to itself, corresponding to the identity matrix: \(\begin{bmatrix} 1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \end{bmatrix} \)
We can think of our “sum” as producing a pair of \(3 \times 1\) matrices, \(\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\text{,}\) that inject a single dot into the set of three dots, into the positions that will be mapped to the third dot by our adjacency matrix. We know there’s an isomorphism here because multiplying by \(\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\) swaps the roles of the two points and is invertible, being its own inverse.
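The involution claim is easy to verify numerically. A minimal sketch in plain Python (the `matmul` helper is my own) checks that the swap matrix squares to the identity and exchanges the two injection columns:

```python
def matmul(A, B):
    # Ordinary matrix product over the integers.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]          # swaps the first two dots
I3 = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 1]]

inj1 = [[1], [0], [0]]   # injection of a dot into position 1
inj2 = [[0], [1], [0]]   # injection of a dot into position 2

assert matmul(P, P) == I3       # applying the swap twice is the identity
assert matmul(P, inj1) == inj2  # P exchanges the roles of the two injections
assert matmul(P, inj2) == inj1
```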
If my hypothesis about Exercise 1 is correct, the figure \(2_D\) corresponds with the matrix \(\begin{bmatrix} 1 & 1 \\
1 & 1 \end{bmatrix} \) and \(2_A\) corresponds with \(\begin{bmatrix} 1 & 1 \\
0 & 0 \end{bmatrix} \text{.}\) Each of these has a complement formed by subtracting every element from \(1\text{.}\) In this case, the complement of \(\begin{bmatrix} 1 & 1 \\
1 & 1 \end{bmatrix} \) is \(\begin{bmatrix} 0 & 0 \\
0 & 0 \end{bmatrix} \) and the complement of \(\begin{bmatrix} 1 & 1 \\
0 & 0 \end{bmatrix} \) is \(\begin{bmatrix} 0 & 0 \\
1 & 1 \end{bmatrix} \text{.}\) The reason this strikes me as curious is that we can go from \(\begin{bmatrix} 0 & 0 \\
1 & 1 \end{bmatrix} \) back to \(\begin{bmatrix} 1 & 1 \\
0 & 0 \end{bmatrix} \) by multiplying with \(\begin{bmatrix} 0 & 1 \\
1 & 0 \end{bmatrix} \) but no matrix multiplication will take us from \(\begin{bmatrix} 0 & 0 \\
0 & 0 \end{bmatrix} \) back to \(\begin{bmatrix} 1 & 1 \\
1 & 1 \end{bmatrix} \) .
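The asymmetry can be confirmed by brute force over all sixteen \(2 \times 2\) zero-one matrices. The sketch below, assuming multiplication on the left (the same check works on the right), finds matrices recovering \(\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}\) from \(\begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}\) but none recovering the all-ones matrix from the zero matrix:

```python
from itertools import product

def matmul(A, B):
    # Ordinary 2x2 matrix product over the integers.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# All sixteen 2x2 matrices with entries in {0, 1}.
all_01 = [[[a, b], [c, d]] for a, b, c, d in product([0, 1], repeat=4)]

flipped = [[0, 0], [1, 1]]
target = [[1, 1], [0, 0]]
zero = [[0, 0], [0, 0]]
ones = [[1, 1], [1, 1]]

undo_flip = [M for M in all_01 if matmul(M, flipped) == target]
undo_zero = [M for M in all_01 if matmul(M, zero) == ones]

print(len(undo_flip), len(undo_zero))  # some matrices undo the flip, none undo zero
```

In particular the swap matrix \(\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\) shows up among the matrices that undo the flip, while multiplying the zero matrix by anything stays zero.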
I think the idea here is that while there are isomorphisms \(\mathbf{2} \leftrightarrows 2_D\) and \(\mathbf{2} \leftrightarrows 2_A\) on the underlying sets, the retraction involved is not structure preserving. Two disjoint arrows correspond to the \(4 \times 4\) matrix \(\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{,}\) and there’s only one matrix that will send those two arrows to the ones in our sum.
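Counting maps of graphs bears this out. In the sketch below (my own encoding: a map sends dots to dots, and I require each source arrow to land on the target arrow the corresponding injection prescribes), brute force over all \(3^4\) dot maps finds exactly one:

```python
from itertools import product

src_arrows = [(0, 1), (2, 3)]  # two disjoint arrows on dots 0..3
tgt_arrows = [(0, 2), (1, 2)]  # the arrows of the generic sum on dots 0..2

# Enumerate all maps from the 4 source dots to the 3 target dots, keeping
# those sending the first source arrow onto the first target arrow and the
# second onto the second (the behavior the two injections prescribe).
maps = [
    f for f in product(range(3), repeat=4)
    if (f[0], f[1]) == tgt_arrows[0] and (f[2], f[3]) == tgt_arrows[1]
]
print(maps)  # exactly one such map: (0, 2, 1, 2)
```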
I’m not really sure where I’m going with this anymore, so maybe I’ll come back later.