Section 4.11 Session 15: Objectification of Properties, Part 2

Okay, I spent a lot more time on Exercise 3 over the past week, but I think I’m starting to understand a little better. If I manage to make it through this problem this week, I’ll take that as a win!

Example 4.11.1. Exercise 3: (continued).

...evaluation at 0 and iteration...
Solution.
I want to start this time by stepping back and assigning some new names to the entities from the diagram I recreated last time.
Figure 4.11.2. Naming objects in "Iteration and evaluation defined"
My reasoning here is that labeling my objects like this will make it a little clearer what exactly it would mean for our maps to be inverses. Given the naming used above, I’d need to prove \(h \circ g = 1_A\) and \(g \circ h = 1_B\text{.}\) What makes this tricky is that the objects in \(A\) are structure-preserving \(\mathcal{S}^{\circlearrowright}\)-maps in \(\boxed{\mathbb{N}^{\circlearrowright \sigma} \rightarrow Y^{\circlearrowright \beta}}\text{,}\) while the objects in \(B\) are maps in \(\mathcal{S}\) from \(\mathbf{1} \rightarrow Y\) that don’t necessarily preserve any structure.
Perhaps what I should be doing is thinking of the maps \(\mathbb{N}^{\circlearrowright \sigma} \xrightarrow{f} Y^{\circlearrowright \beta}\) as a collection of arrows. Each arrow needs to have a source point \(n\) in \(\mathbb{N}\) and a target point \(y\) in \(Y\text{.}\) For \(f\) to preserve structure, we’d need to have \((f \circ \sigma)(n)= (\beta \circ f)(n)\) for every \(n\) in \(\mathbb{N}\text{.}\)
Our map \(g\) basically takes advantage of the fact that any map \(\mathbb{N}^{\circlearrowright \sigma} \xrightarrow{f} Y^{\circlearrowright \beta}\) must be defined on the whole domain. This means that there must be an arrow originating at the point \(0\) pointing to some target point \(y = f 0\) in \(Y\text{.}\) We can then define \(g: A \rightarrow B\) by letting \(g(f)\text{,}\) for any \(f \in A\text{,}\) be the map \(\mathbf{1} \rightarrow Y\) which sends the only element of \(\mathbf{1}\) to the point \(f 0\) in \(Y\text{.}\)
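As a quick sanity check outside the book's formalism, here's a minimal Python sketch of this idea, where a map in \(A\) is modeled as an ordinary function on the nonnegative integers and \(g\) is just evaluation at 0. The particular choices of \(Y\text{,}\) \(\beta\text{,}\) and the sample \(f\) below are my own illustrations, not part of the exercise.

```python
# Sketch: model a map f in N^sigma -> Y^beta as a plain function on
# nonnegative integers, and g as evaluation at 0.  The choices of Y,
# beta, and f below are illustrative, not from the exercise.

def g(f):
    """Send a structure-preserving map f to the point f(0) of Y."""
    return f(0)

# Illustrative dynamical system: Y = integers, beta = doubling.
beta = lambda y: 2 * y

# A structure-preserving map: f(n) = beta^n(3) = 3 * 2**n,
# so f(n + 1) = 3 * 2**(n + 1) = beta(f(n)).
f = lambda n: 3 * 2 ** n

print(g(f))  # the arrow originating at 0 targets f(0) = 3
```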
In contrast, our map \(h\) takes a map \(\mathbf{1} \xrightarrow{y} Y\) and uses it to produce a map \(\mathbb{N}^{\circlearrowright \sigma} \xrightarrow{f} Y^{\circlearrowright \beta}\text{.}\) It does this by applying \(\beta\) to \(y\) zero or more times, such that \(f(n) = \beta^n(y)\) for any \(n \in \mathbb{N}\text{.}\) In the case where \(n = 0\text{,}\) the map \(\beta\) is never applied at all. It follows that for any \(\mathbf{1} \xrightarrow{y} Y\) in \(B\text{,}\) writing \(f = h(y)\text{,}\) we’re guaranteed to have \((g \circ h)(y) = g(h(y)) = g(f) = f 0 = \beta^0 y = y\text{.}\)
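To convince myself the two directions really compose to the identity on \(B\text{,}\) here's a small Python sketch of \(h\) as iteration; the endomap \(\beta\) here is an arbitrary illustrative choice, not something fixed by the exercise.

```python
# Sketch: h sends a point y of Y to the orbit map n |-> beta^n(y).
# beta is an arbitrary illustrative endomap of Y (here, Y = integers).

beta = lambda y: y + 1

def h(y):
    def f(n):
        out = y
        for _ in range(n):   # apply beta n times; when n = 0 it is never applied
            out = beta(out)
        return out
    return f

def g(f):
    return f(0)

# g . h = 1_B: evaluating the orbit of y at 0 recovers y itself.
for y in range(5):
    assert g(h(y)) == y
```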
These two maps both produce maps, but they handle structure differently. The map \(g\) takes a map which preserves structure and uses it to produce a map which might not. The map \(h\) takes a map which may not preserve structure and produces one that does. The identity \((g \circ h)(y) = y\) basically establishes that \(g \circ h = 1_B\) in \(\mathcal{S}\text{,}\) but I think we still need to establish that each map \(h(y)\) is a valid map in \(\mathcal{S}^{\circlearrowright}\text{.}\)
For any given \(y\text{,}\) the map \(f = h(y)\) needs to satisfy \((f \circ \sigma)(n) = (\beta \circ f)(n)\) for every \(n \in \mathbb{N}\text{.}\) At \(n = 0\text{,}\) \((f \circ \sigma)(0) = f \sigma 0 = f 1\) and \((\beta \circ f)(0) = \beta f 0 = \beta y = f 1\text{.}\) Since \((f \circ \sigma)(n) = (\beta \circ f)(n)\) holds true for \(n = 0\text{,}\) we can use induction to establish that they must be the same for every ’successor’.
Since each \(n\) can be represented as \(n = \sigma^n 0\text{,}\) we can use that to make a substitution. It follows that \((f \circ \sigma)(n) = (f \circ \sigma)(\sigma^n 0) = f \sigma \sigma^n 0 = f \sigma^{n+1} 0\) and \((\beta \circ f)(n) = (\beta \circ f)(\sigma^n 0) = \beta f \sigma^n 0\text{.}\) Likewise, \((f \circ \sigma)(n+1) = f \sigma^{n+2} 0\) and \((\beta \circ f)(n+1) = \beta f \sigma^{n+1} 0\text{.}\) Our induction hypothesis tells us that \(f \sigma^{n+1} 0 = \beta f \sigma^n 0\text{,}\) and since \(f\) was defined so that \(f \sigma^m 0 = \beta^m y\) for every \(m\text{,}\) we also have \(f \sigma^{n+2} 0 = \beta^{n+2} y = \beta \beta^{n+1} y = \beta f \sigma^{n+1} 0\text{.}\) Combining these results, we can establish that \((\beta \circ f)(n+1) = \beta f \sigma^{n+1} 0 = f \sigma^{n+2} 0 = (f \circ \sigma)(n+1)\text{.}\) Having established that \((f \circ \sigma)(0) = (\beta \circ f)(0)\) and \((f \circ \sigma)(n) = (\beta \circ f)(n) \implies (f \circ \sigma)(n+1) = (\beta \circ f)(n+1)\text{,}\) \(f \circ \sigma = \beta \circ f\) follows by induction.
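The induction above can also be spot-checked numerically. This Python sketch verifies \((f \circ \sigma)(n) = (\beta \circ f)(n)\) for \(f = h(y)\) at a handful of points; the particular \(\beta\text{,}\) \(\sigma\text{,}\) and starting point are illustrative choices of mine.

```python
# Spot-check the structure-preservation condition for f = h(y):
# (f . sigma)(n) == (beta . f)(n).  beta and y are illustrative.

beta = lambda y: 3 * y + 1
sigma = lambda n: n + 1

def h(y):
    def f(n):
        out = y
        for _ in range(n):   # f(n) = beta^n(y)
            out = beta(out)
        return out
    return f

f = h(7)
for n in range(10):
    assert f(sigma(n)) == beta(f(n))
```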
Next, let’s compose these maps in the reverse order to produce the map \(h \circ g: A \rightarrow A\text{.}\) Suppose we have some \(\mathbb{N}^{\circlearrowright \sigma} \xrightarrow{f} Y^{\circlearrowright \beta}\) in \(A\text{.}\) By our definitions, \((h \circ g)(f) = h(g(f)) = h(f 0)\) is the map \(n \mapsto \beta^n(f 0)\text{.}\) Since \(f\) preserves structure, repeatedly applying \(f \circ \sigma = \beta \circ f\) gives \(f(n) = f(\sigma^n 0) = \beta^n(f 0)\) for every \(n\text{,}\) so \(h(f 0) = f\text{,}\) which implies \(h \circ g = 1_A\text{.}\) Having now established that both \(h \circ g = 1_A\) and \(g \circ h = 1_B\text{,}\) it’s safe to refer to these two maps as inverses.
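The other identity can be spot-checked the same way: starting from a structure-preserving \(f\text{,}\) \(h(g(f))\) should agree with \(f\) everywhere. The particular \(\beta\) and \(f\) below are illustrative choices of mine, with \(f(n) = 2^{2^n}\) playing the role of a structure-preserving map when \(\beta\) is squaring.

```python
# Sketch of h . g = 1_A: h(g(f)) agrees with a structure-preserving f.
# beta and f are illustrative choices, not from the exercise.

beta = lambda y: y * y   # squaring

def h(y):
    def f(n):
        out = y
        for _ in range(n):   # f(n) = beta^n(y)
            out = beta(out)
        return out
    return f

def g(f):
    return f(0)

# f(n) = 2**(2**n) satisfies f(n + 1) = f(n)**2 = beta(f(n)), with f(0) = 2.
f = lambda n: 2 ** (2 ** n)

for n in range(5):
    assert h(g(f))(n) == f(n)
```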
Q.E.D.