Show that in the category \(\mathcal{S}^\circlearrowright\text{...}\)
Solution.
The very heavy-handed "hint" suggests that our mystery object \(N\) is most likely related to \(\mathbb{N}^{\circlearrowright \sigma}\) or maybe even \(\Omega\text{.}\)
Further, I’d expect that we’re going to somehow need the theorem that for any two graphs \(X,Y\) and maps \(X \mathrel{\substack{f \\ \longrightarrow \\ \longrightarrow \\ g}} Y\text{,}\) if \(fx=gx\) for all \(A \xrightarrow{x} X\) and \(D \xrightarrow{x} X\text{,}\) then \(f = g\text{.}\)
Consider a pair of \(\mathcal{S}^\circlearrowright\)-maps \(X^{\circlearrowright \alpha} \mathrel{\substack{f \\ \longrightarrow \\ \longrightarrow \\ g}} Y^{\circlearrowright \beta}\text{.}\) By definition, these must satisfy \(f \alpha = \beta f\) and \(g \alpha = \beta g\text{.}\) As endomaps, \(\alpha\) and \(\beta\) require that each "dot" gets assigned exactly one "arrow". Thus, we can effectively use this to split the "objects" of \(X^{\circlearrowright \alpha}\) into the union of some "dots" \(X_D\) and "arrows" \(X_A\) with a 1-1 correspondence.
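To make that condition concrete, here's a quick Python sketch (separate from the notebook linked at the end, with names of my own choosing) where endomaps are plain dictionaries and we check \(f \alpha = \beta f\) pointwise:

```python
def is_endomap_hom(f, alpha, beta):
    """Return True if f is a structure-preserving map X^(alpha) -> Y^(beta),
    i.e. f(alpha(x)) == beta(f(x)) for every x in the domain."""
    return all(f[alpha[x]] == beta[f[x]] for x in alpha)

# X = {0, 1, 2} with alpha a 3-cycle; Y = {0, 1} with beta a 2-cycle
alpha = {0: 1, 1: 2, 2: 0}
beta = {0: 1, 1: 0}
f = {0: 0, 1: 1, 2: 0}

print(is_endomap_hom(f, alpha, beta))   # False: f(alpha(2)) = 0 but beta(f(2)) = 1
```

As far as I can tell, no choice of \(f\) would work in this particular example, since a 3-cycle can only land on cycles whose length divides 3.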
In Session 15, we established an isomorphism between maps in \(\mathcal{S}^\circlearrowright\) and maps in \(\mathcal{S}\) through the process of "evaluation at 0" and "iteration". We defined "iteration" as assigning to each \(y \in Y\) the map \(f\) given by \(f(n) = \beta^n(y)\text{.}\)
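As a sanity check, here's what that back-and-forth looks like on a small finite example (my own toy code, not the Session 15 construction itself):

```python
def iterate(beta, y):
    """'Iteration': turn an element y of Y into the sequence n |-> beta^n(y)."""
    def f(n):
        result = y
        for _ in range(n):
            result = beta[result]
        return result
    return f

def evaluate_at_0(f):
    """'Evaluation at 0': recover the element of Y from the sequence."""
    return f(0)

beta = {0: 1, 1: 2, 2: 0}          # a 3-cycle on Y = {0, 1, 2}
f = iterate(beta, 1)
print([f(n) for n in range(5)])    # [1, 2, 0, 1, 2]
print(evaluate_at_0(f) == 1)       # True: the two processes undo each other
```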
This is relevant because we now also know that having a "point in \(X\)" \(\mathbf{1} \xrightarrow{x} X\) and a "map" \(X \xrightarrow{f} Y\) necessarily gives us a "point in \(Y\)" given by \(\mathbf{1} \xrightarrow{fx} Y\text{.}\) Now that we have the addition of an endomap \(X \xrightarrow{\alpha} X\text{,}\) we can define "sequences" of points by iterating \(\alpha\) like so:
\begin{equation*}
\mathbf{1} \xrightarrow{x} X
\end{equation*}
\begin{equation*}
\mathbf{1} \xrightarrow{x} X \xrightarrow{\alpha} X
\end{equation*}
\begin{equation*}
\mathbf{1} \xrightarrow{x} X \xrightarrow{\alpha} X \xrightarrow{\alpha} X
\end{equation*}
If this sequence goes on producing new points forever without repeating, then this element essentially gives us the object \(\mathbb{N}^{\circlearrowright\sigma}\text{.}\) If the pattern is finite, then there must be some element where it repeats. Instead of simply separating our space into "dots and arrows" like we did in \(\mathcal{S}\text{,}\) in \(\mathcal{S}^\circlearrowright\) we separate into "loops and tails".
Maybe the key here is the sort of "antipodal" relationship between \(\mathbb{N}^{\circlearrowright \sigma}\) and \(\Omega\text{,}\) if we think of there being a map \(0 \leftrightarrow \infty\) that swaps the operations \(+1\) and \(-1\text{.}\) If I consider the "number of steps to stabilize" as a map \(X \xrightarrow{\gamma} \mathbb{N}\text{,}\) then a point \(\mathbf{1} \xrightarrow{x} X\) can be either a "tail", in which case \(\gamma \alpha x = \gamma x - 1\text{,}\) or a "loop", in which case \(\gamma \alpha x = \gamma x\text{.}\) Every "tail" (with the exception of \(\mathbb{N}^{\circlearrowright \sigma}\) itself) must have a "loop" that acts as a terminal object, and we can establish a sorting of those loops by length.
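Here's a rough sketch of that \(\gamma\) on a finite endomap, again using dictionaries: it walks the orbit of \(x\) until the orbit re-enters itself, then checks the tail/loop property claimed above. (On \(\mathbb{N}^{\circlearrowright \sigma}\) itself this walk would never terminate, which is exactly the exception noted in parentheses.)

```python
def gamma(alpha, x):
    """Number of steps of alpha before x first lands on its eventual loop."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = alpha[x]
    return seen.index(x)   # position where the orbit first re-enters itself

alpha = {3: 2, 2: 1, 1: 0, 0: 0}   # a three-step "tail" feeding a loop at 0
for x in alpha:
    gx, gax = gamma(alpha, x), gamma(alpha, alpha[x])
    assert gax == (gx - 1 if gx > 0 else gx)   # tails step down, loops stay put
print({x: gamma(alpha, x) for x in alpha})     # {3: 3, 2: 2, 1: 1, 0: 0}
```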
More specifically though, what I’ve been calling "tails" of the endomap are more like a "tree" structure. They might have multiple branches, unlike the example from Session 15. What matters is that we have a 1-1 correspondence between these trees and the elements of a fixed-length cycle. Trees are nice because we have an easily established notion of "depth" that maps to a natural number.
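To see that picture on a small example, here's a sketch that groups the non-loop points by the "root" element where their orbit first enters a loop; the function names are mine and this is only one of many ways to slice it:

```python
def cycle_elements(alpha):
    """Return the set of points that lie on some loop of alpha."""
    on_cycle = set()
    for x in alpha:
        seen = []
        while x not in seen:
            seen.append(x)
            x = alpha[x]
        on_cycle.update(seen[seen.index(x):])
    return on_cycle

def trees_by_root(alpha):
    """Group the non-loop points by the loop element their orbit first reaches."""
    loops = cycle_elements(alpha)
    trees = {c: [] for c in loops}
    for x in alpha:
        if x in loops:
            continue
        y = x
        while y not in loops:
            y = alpha[y]
        trees[y].append(x)
    return trees

# two branches feeding the same fixed point 0, plus a bare 2-cycle {4, 5}
alpha = {1: 0, 2: 0, 3: 1, 0: 0, 4: 5, 5: 4}
print(trees_by_root(alpha))   # {0: [1, 2, 3], 4: [], 5: []} (key order may vary)
```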
Perhaps I’m approaching this from the wrong angle. What if there exists a way to enumerate all the possible endomaps \(X^{\circlearrowright \alpha}\text{?}\)
We’ve already established that we can construct a presentation for \(X^{\circlearrowright \alpha}\) by sectioning out a set of elements as "generators" combined with a set of "relations" between those generators.
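I haven't worked out the book's recipe in code, but here's my guess at how one might extract a generating set from a finite endomap: take every element that's not in the image of \(\alpha\), then add one representative for any loop that none of those elements feeds into. Treat this as a sketch of the idea rather than the official construction:

```python
def generators(alpha):
    """A candidate generating set for X with endomap alpha: elements outside
    the image of alpha, plus one representative per loop nothing feeds into."""
    image = set(alpha.values())
    gens = [x for x in alpha if x not in image]
    # mark everything reachable from the generators found so far
    reached = set()
    for g in gens:
        x = g
        while x not in reached:
            reached.add(x)
            x = alpha[x]
    # any leftover points lie on loops nothing feeds into; pick one per loop
    for x in alpha:
        if x not in reached:
            gens.append(x)
            y = x
            while y not in reached:
                reached.add(y)
                y = alpha[y]
    return gens

alpha = {3: 2, 2: 1, 1: 0, 0: 0, 4: 5, 5: 4}   # a tail into a loop, plus a bare 2-cycle
print(generators(alpha))                       # [3, 4]
```

Here 3 would generate the whole tail plus its loop and 4 would generate the 2-cycle, with the relations recording where the orbits repeat (e.g. \(\alpha^2(4) = 4\)).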
Our object \(\mathbb{N}^{\circlearrowright \sigma}\) was defined by a single generator, \(0\text{,}\) and no relations. What would the presentation of our object \(\Omega = \boxed{\circlearrowleft 0 \leftarrow 1
\leftarrow 2 \leftarrow 3 \leftarrow ... \infty \circlearrowright}\) look like?
Let’s call this endomap(?) \(\Omega \xrightarrow{\rho} \Omega\text{.}\) There’s something close to an isomorphism between \(\mathbb{N}^{\circlearrowright \sigma}\) and \(\Omega^{\circlearrowright \rho}\text{.}\) For each \(n\) in \(\mathbb{N}\text{,}\) we have \(\sigma n = n+1\) and \(\rho n = n - 1\text{.}\) We can easily say that \(\rho \circ \sigma = 1_\mathbb{N}\text{,}\) but I’m not sure we can assert \(\sigma \circ \rho = 1_\Omega\text{.}\) The problem is that we need to preserve \(\rho \infty = \infty\text{,}\) but \(\sigma \infty\) is undefined because \(\infty \notin \mathbb{N}\text{.}\)
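Here's a tiny numerical check of that asymmetry, using math.inf as a stand-in for the point \(\infty\) of \(\Omega\) (my own modelling choice, not anything from the book):

```python
import math

INF = math.inf   # stand-in for the point "infinity" of Omega

def sigma(n):
    """Successor on N; deliberately undefined at infinity."""
    if n == INF:
        raise ValueError("sigma(infinity) is not defined: infinity is not in N")
    return n + 1

def rho(n):
    """Predecessor on Omega, with 0 and infinity as fixed points."""
    return INF if n == INF else max(n - 1, 0)

# rho after sigma really is the identity on N (checked on an initial segment)...
print(all(rho(sigma(n)) == n for n in range(1000)))   # True

# ...but sigma after rho can't even be evaluated at infinity.
try:
    sigma(rho(INF))
except ValueError as e:
    print(e)
```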
It’s almost as if \(\Omega^{\circlearrowright \rho}\) has a presentation with an infinite number of generators. To each \(n \in \mathbb{N}\) we can associate a corresponding graph that approaches the behavior of \(\Omega\) as \(n \rightarrow \infty\text{:}\)
\begin{equation*}
\rho_1 = \boxed{\circlearrowleft 0}
\end{equation*}
\begin{equation*}
\rho_2 = \boxed{\circlearrowleft 0 \leftarrow 1}
\end{equation*}
\begin{equation*}
\rho_3 = \boxed{\circlearrowleft 0 \leftarrow 1 \leftarrow 2}
\end{equation*}
In this context, it makes sense to think of \(\rho_0 = \mathbb{N}^{\circlearrowright \sigma}\) and \(\rho_\infty = \Omega^{\circlearrowright \rho}\) as the two objects that "complete" the natural numbers.
Let’s back up for a second. Suppose we have an arbitrary \(X^{\circlearrowright \alpha}\text{.}\) We can define a "section" \(X \xrightarrow{s} \{0,1\}\) in \(\mathcal{S}\) by assigning \(1\) to every "terminal object" and \(0\) otherwise. In \(\mathcal{S}^{\circlearrowright}\) those terminal objects are "loops". If it’s not a terminal object, it either has to "go on forever", which would effectively map it to \(\mathbb{N}^{\circlearrowright \sigma}\text{,}\) or it has to repeat in a cycle of some fixed length. Let’s call \(C_n\) the cycle of length \(n\) defined by the behavior \(\alpha_n(k) = (k+1) \bmod n\text{.}\) If we want, we can effectively treat the object \(C_0\) as \(\mathbb{N}^{\circlearrowright \sigma}\) and \(C_\infty\) as \(\Omega^{\circlearrowright \rho}\text{.}\)
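Here's a small sketch of those pieces in Python: the cycle \(C_n\) as a dictionary, and the "section" \(s\) read loosely as "does this point sit on a loop?" (the names cycle, on_loop, and section are all mine):

```python
def cycle(n):
    """The cycle C_n of length n: alpha_n(k) = (k + 1) mod n on {0, ..., n-1}."""
    return {k: (k + 1) % n for k in range(n)}

def on_loop(alpha, x):
    """True if iterating alpha from x eventually returns to x (x lies on a loop)."""
    y = alpha[x]
    for _ in range(len(alpha)):
        if y == x:
            return True
        y = alpha[y]
    return False

def section(alpha):
    """The 'section' s: X -> {0, 1} marking loop points with 1 and tail points with 0."""
    return {x: int(on_loop(alpha, x)) for x in alpha}

alpha = {3: 2, 2: 1, 1: 0, 0: 0}   # a three-step tail into a fixed point
print(section(alpha))              # {3: 0, 2: 0, 1: 0, 0: 1}
print(section(cycle(3)))           # {0: 1, 1: 1, 2: 1}
```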
This gives us a way of locating points of \(X\) via a cartesian product \(\mathbb{N} \times \mathbb{N}\) of a generator and a number of steps of \(\alpha\text{.}\) Each point can be represented by an ordered pair \((n,m)\text{,}\) where \(C_n\) is the cycle it eventually converges to and \(\rho_m\) is the \(m\)-step path in a tree which leads to a unique "root" element in that cycle.
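And here's a quick stab at those coordinates, computed by walking a point's orbit until it re-enters itself: the loop portion gives \(n\) and the number of steps before entering it gives \(m\). This is just my own illustration of the bookkeeping:

```python
def coordinates(alpha, x):
    """Locate x by the pair (n, m): n = length of the loop its orbit falls into,
    m = number of steps of alpha needed to reach that loop."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = alpha[x]
    m = seen.index(x)          # steps before the orbit re-enters itself
    n = len(seen) - m          # length of the loop portion
    return (n, m)

alpha = {3: 2, 2: 1, 1: 0, 0: 0, 4: 5, 5: 4}   # tail into a fixed point, plus a 2-cycle
print({x: coordinates(alpha, x) for x in alpha})
# {3: (1, 3), 2: (1, 2), 1: (1, 1), 0: (1, 0), 4: (2, 0), 5: (2, 0)}
```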
In Session 15, I started calling these different subsections of our graphs "clusters". The idea was that structure-preserving maps only exist between clusters that converge to loops of the same size. I’m not sure if I really answered this question here, but those "clusters" essentially performed the function of separating graph maps.
This exercise got me thinking about where I left off in Session 15, so I decided to spend the rest of the week exploring the "minimal categories" that were presented there with Python. You can find my code for this week here: github.com/rruff82/LS-Categories/blob/main/assets/notebooks/ls-ses15-small-cats.ipynb
I observed some interesting things while doing this. First, I realized I had to keep close tabs on the domain and codomain of each map. I was representing maps as dictionaries, so sometimes two objects would have the same key and value pairs but were technically different objects because they came from different sets.
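Here's a stripped-down illustration of that bookkeeping problem (not the notebook's actual representation): two maps with identical dictionaries that still shouldn't count as equal, because their codomains differ.

```python
class Map:
    """A map of finite sets, carrying its domain and codomain alongside its table."""
    def __init__(self, domain, codomain, table):
        assert set(table) == set(domain) and set(table.values()) <= set(codomain)
        self.domain, self.codomain, self.table = frozenset(domain), frozenset(codomain), dict(table)

    def __eq__(self, other):
        return (self.domain, self.codomain, self.table) == (other.domain, other.codomain, other.table)

f = Map({0, 1}, {0, 1}, {0: 0, 1: 1})       # identity on {0, 1}
g = Map({0, 1}, {0, 1, 2}, {0: 0, 1: 1})    # same table, but an inclusion into {0, 1, 2}

print(f.table == g.table)   # True:  the dictionaries agree
print(f == g)               # False: the maps are not the same arrow
```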
The other thing I noticed was that I seemed to be able to separate out a subset of maps that could be used to form the others through composition. It seemed that I was able to express all maps in a category as compositions of some "points", "terminators", and "loops". Since those "loops" correspond to structure-preserving maps from \(\mathbb{N}^{\circlearrowright \sigma}\text{,}\) this would seem to support my theory that this is, in fact, the object \(N\) I was looking for in this exercise.
This exercise seems to have left me with more questions than answers. It seemed odd how my minimal categories blurred the distinctions between "sets", "points", and "maps". I’m reminded of how, in Session 16, the authors used \(\mathbb{Q}\) as an extension of \(\mathbb{Z}\) in order to utilize the inverse operation. It seems like this mirrors what I was trying to accomplish by mapping between \(\mathbb{N}^{\circlearrowright \sigma}\) and \(\Omega^{\circlearrowright \rho}\text{.}\) It’s almost like I need to step out from \(\mathbb{N}\) to \(\mathbb{Z}\text{,}\) solve the problem there, then come back and verify that the solution is something that still exists in \(\mathbb{N}\text{.}\)