diff --git a/book/C0-Preface.tex b/book/C0-Preface.tex index c0ac913..3c1a543 100644 --- a/book/C0-Preface.tex +++ b/book/C0-Preface.tex @@ -11,7 +11,7 @@ \chapter{Preface}\label{chapter.0} This book is a work in progress --- including the acknowledgements below! Use at your own peril! -Categorical systems theory is an emerging field of mathematics which seeks to apply the methods of category theory to general systems theory. General systems theory is the study of systems --- ways things can be and change, and models thereof --- in full generality. The difficulty is that there doesn't seem to be a single core idea of what it means to be a ``system''. Different people have, for different purposes, come up with a vast array of different modeling techniques and definitions that could be called ``systems''. There is often little the same in the precise content of these definitions, though there are still strong, if informal, analogies to be made accross these different fields. This makes coming up with a mathematical theory of general systems tantalizing but difficult: what, after all, is a system in general? +Categorical systems theory is an emerging field of mathematics which seeks to apply the methods of category theory to general systems theory. General systems theory is the study of systems --- ways things can be and change, and models thereof --- in full generality. The difficulty is that there doesn't seem to be a single core idea of what it means to be a ``system''. Different people have, for different purposes, come up with a vast array of different modeling techniques and definitions that could be called ``systems''. There is often little the same in the precise content of these definitions, though there are still strong, if informal, analogies to be made across these different fields. This makes coming up with a mathematical theory of general systems tantalizing but difficult: what, after all, is a system in general? 
Category theory has been described as the mathematics of formal analogy making. It allows us to make analogies between fields by focusing not on the content of the objects of those fields, but on the ways that the objects of those fields relate to one another. Categorical systems theory applies this idea to general systems theory, avoiding the issue of not having a contentful definition of system by instead focusing on the ways that systems interact with each other and their environment. @@ -41,7 +41,7 @@ \chapter{Preface}\label{chapter.0} \end{itemize} \end{informal} - We will give a semi-formal\footnote{And for experts, a formal definition, though we won't fully justify it.} definition of dynamical systems doctrine in \cref{Chapter.6}. For the first five chapters of this book on the other hand, we will work within a fixed doctrine of dynamical systems which we might call the \emph{parameter-setting} doctrine. This doctrine gives a particular answer to the above questions, based around the following defintion of a \emph{system}. + We will give a semi-formal\footnote{And for experts, a formal definition, though we won't fully justify it.} definition of dynamical systems doctrine in \cref{Chapter.6}. For the first five chapters of this book on the other hand, we will work within a fixed doctrine of dynamical systems which we might call the \emph{parameter-setting} doctrine. This doctrine gives a particular answer to the above questions, based around the following definition of a \emph{system}. \begin{informal}\label{inf.dynam_sys} A \emph{dynamical system} consists of: @@ -54,7 +54,7 @@ \chapter{Preface}\label{chapter.0} state that are \emph{exposed} or \emph{output} to the environment. \end{informal} -In the first two chatpers, we will see a variety of examples of such systems, including discrete-time deterministic systems, systems of differential equations, and non-deterministic systems such as Markov decision processes.
We will also see what composition patterns can be in the parameter-setting doctrine; they can be drawn as wiring diagrams like this: +In the first two chapters, we will see a variety of examples of such systems, including discrete-time deterministic systems, systems of differential equations, and non-deterministic systems such as Markov decision processes. We will also see what composition patterns can be in the parameter-setting doctrine; they can be drawn as wiring diagrams like this: \[ \begin{tikzpicture}[oriented WD, every fit/.style={inner xsep=\bbx, inner ysep=\bby}, bb small] \node[bb={2}{2}, fill=blue!10] (X1) {}; @@ -123,7 +123,7 @@ \chapter{Preface}\label{chapter.0} it with the first one and then with the second one. \item Suppose that we have a pair of wiring patterns and compatible charts between them. If we - take a bunch of behaviors whose charts are compatable according to the first + take a bunch of behaviors whose charts are compatible according to the first wiring pattern, wire them together, and then compose with a behavior of the second chart, we get the same thing as if we compose them all with behaviors of the first chart, noted that they were compatible with the second wiring diff --git a/book/C1-.tex b/book/C1-.tex index af1f1d8..76f7124 100644 --- a/book/C1-.tex +++ b/book/C1-.tex @@ -58,7 +58,7 @@ \section{Introduction}\label{sec.chap1_intro} \end{itemize} Different people have decided on different answers to these questions for -different purposes. Here are three of the most widespread differents ways to answer those +different purposes. 
Here are three of the most widespread different ways to answer those questions: \begin{enumerate} \item We'll assume the states form a discrete set, and that if we know the @@ -113,7 +113,7 @@ \section{Introduction}\label{sec.chap1_intro} The dynamical systems we will see in this book are \emph{open} in the sense that they take in inputs from their environment and expose outputs back to their -environment. Because of this, our systems can interact with eachother. One +environment. Because of this, our systems can interact with each other. One system can take what the other system outputs as part of its input, and the other can take what the first outputs as part of its input. For example, when we have a conversation, I take what I hear from you and use it to change how I @@ -588,7 +588,7 @@ \subsection{Deterministic systems}\label{sec.deterministic_system} \begin{example}[SIR model]\label{ex.SIR_model_discrete} The set of states for a deterministic system doesn't need to be finite. The $\Sys{SIR}$ model - is an epimediological model used to study how a disease spreads through a + is an epidemiological model used to study how a disease spreads through a population. ``SIR'' stands for ``susceptible'', ``infected'', and, rather ominously, ``removed''. This model is usually presented as a system of differential equations --- what we will call a differential system --- and we will see it in that form in \cref{ex.SIR_model_diff}. @@ -714,7 +714,7 @@ \subsection{Deterministic systems}\label{sec.deterministic_system} wiring together some systems. For example, suppose we have an $\Sys{Agent}$ acting within a -$\Sys{Environment}$. The agent will take an action, and the evironment will +$\Sys{Environment}$. The agent will take an action, and the environment will respond to that action. Depending on the action taken and response given, the agent and the environment will update their states. 
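The agent--environment loop described here can be sketched concretely. Below is a rough Python sketch (not from the book; the agent and environment behaviors are invented for illustration) in which each deterministic system is an (update, expose, state) triple, and one step of the wired-together system feeds each side's exposed variable into the other's input:

```python
# Hypothetical deterministic systems, each given as an
# (update, expose, state) triple. The behaviors are invented.

def step(agent, env):
    """One step of the wired system: each side reads the other's
    exposed variable, then both update simultaneously."""
    obs = env["expose"](env["state"])      # environment's response
    act = agent["expose"](agent["state"])  # agent's action
    agent["state"] = agent["update"](agent["state"], obs)
    env["state"] = env["update"](env["state"], act)

# Agent: counts "hot" observations, asks to cool down after two of them.
agent = {
    "update": lambda s, obs: s + 1 if obs == "hot" else s,
    "expose": lambda s: "cool_down" if s >= 2 else "wait",
    "state": 0,
}
# Environment: alternates hot/cold unless told to cool down.
env = {
    "update": lambda s, act: "cold" if act == "cool_down"
              else ("cold" if s == "hot" else "hot"),
    "expose": lambda s: s,
    "state": "hot",
}

for _ in range(4):
    step(agent, env)
print(agent["state"], env["state"])
```

Note the design choice that both systems update from the other's *currently* exposed variable before either changes state; this is the simultaneous reading that the wiring diagram formalizes.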
We can model this by the following wiring diagram: @@ -756,8 +756,8 @@ \subsection{Differential systems}\label{sec.differential_system} \emph{La nature ne fait jamais des sauts} - Leibniz \end{quote} -A quirk of modeling dynamical systems as determinstic systems is that -determinstic systems lurch from one state to the next. In life, there are no +A quirk of modeling dynamical systems as deterministic systems is that +deterministic systems lurch from one state to the next. In life, there are no next moments. Time, at least at human scales and to a first approximation, flows continuously. @@ -797,7 +797,7 @@ \subsection{Differential systems}\label{sec.differential_system} The population of any predator will also change according to a birth rate and death rate. Suppose we have a similarly defined system of $\Sys{Foxes}$ -goverened whose population is governed by the differential equation +whose population is governed by the differential equation \[ \frac{df}{dt} = \const{b}_{\Sys{Foxes}} \cdot f - \const{d}_{\Sys{Foxes}} \cdot f. @@ -904,13 +904,13 @@ \subsection{Differential systems}\label{sec.differential_system} including the category $\Cat{Euc}$ of Euclidean spaces and smooth maps (\cref{def.euc_cat}). It appears that a differential system is the same thing as a deterministic system in the cartesian category $\Cat{Euc}$. But while the - $\rr^n$s occuring in $\update{S} : \rr^n \times \rr^m \to \rr^n$ look the + $\rr^n$s occurring in $\update{S} : \rr^n \times \rr^m \to \rr^n$ look the same, they are in fact playing very different roles. The $\rr^n$ on the left is playing the role of the state space, while the $\rr^n$ on the right is playing the role of the tangent space at $s$ for some state $s \in \rr^n$. The difference will be felt in \cref{chapter.3} when we study behaviors of systems: the way a trajectory is defined is different - for differential systems and determinstic systems.
For differential systems, a + for differential systems and deterministic systems. For differential systems, a trajectory will be a solution to the system of differential equations, that is, a function $s : \rr \to \rr^n$ which satisfies $$\frac{ds}{dt}(t) = \update{S}(s(t), i(t)).$$ @@ -929,7 +929,7 @@ \subsection{Differential systems}\label{sec.differential_system} \begin{example}\label{ex:lotka.volterra.model} The system of $\Sys{Rabbits}$ has $1$ state variable (the population of - rabbits), $2$ parameters (the birth and death rates of the rabbbits), and $1$ exposed + rabbits), $2$ parameters (the birth and death rates of the rabbits), and $1$ exposed variable. It exposes its whole state, so that $\expose{S} = \id$, and its update is given by \[ @@ -1470,7 +1470,7 @@ \subsection{Deterministic and differential systems as lenses} input and following the arrow with that label, and then outputting the label on the resulting node. When we wire together systems presented as transition diagrams, the dynamics then involve reading the input labels of all inner systems, moving -along all the arrows with those labels, and then outputing the labels at each +along all the arrows with those labels, and then outputting the labels at each state, possibly into the input of another system. \begin{exercise}\label{ex.wiring_transition_diagrams} @@ -1579,12 +1579,12 @@ \subsection{Deterministic and differential systems as lenses} populations can flow between cities. \end{definition} -Now, to define a mutli-city $\Sys{SIR}$ model, we need to know what cities we +Now, to define a multi-city $\Sys{SIR}$ model, we need to know what cities we are dealing with and how population flows between them. We'll call this a \emph{population flow graph}.
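Concretely, such an edge label acts on a city's $(S, I, R)$ populations compartment by compartment. Here is a small Python sketch of one flow step along a single edge (the rates and populations are invented for illustration, not taken from the book):

```python
# One population-flow step along an edge City1 -> City2, assuming each
# city's state is an (S, I, R) triple and the edge is labeled by the
# diagonal of a 3x3 flow matrix: one flow rate per compartment.
# All numbers here are invented.

def flow(source, target, diag):
    moved = tuple(rate * x for rate, x in zip(diag, source))
    new_source = tuple(x - m for x, m in zip(source, moved))
    new_target = tuple(x + m for x, m in zip(target, moved))
    return new_source, new_target

city1 = (900.0, 100.0, 0.0)   # susceptible, infected, removed
city2 = (1000.0, 0.0, 0.0)
city1, city2 = flow(city1, city2, diag=(0.1, 0.05, 0.1))
print(city1, city2)  # each compartment's total across cities is conserved
```

Because the matrix is diagonal, flow never moves people between compartments, only between cities; infection dynamics happen inside each city's own SIR system.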
\begin{definition}\label{def.population_flow_graph} - A \emph{population-flow graph} (for a mutli-city $\Sys{SIR}$ model) is a graph + A \emph{population-flow graph} (for a multi-city $\Sys{SIR}$ model) is a graph whose nodes are labeled by cities and whose edges $\Sys{City}_1 \to \Sys{City}_2$ are labeled by $3 \times 3$ real diagonal matrices $\const{Flow}_{1\to @@ -1986,7 +1986,7 @@ \subsection{Wiring diagrams as lenses in categories of arities}\label{sec:wiring Wiring diagrams are designed to express the flow of variables through the system; how they are to be copied from one port to another, how they are to be shuffled about, and (though we haven't -had need for this yet) how they are to be deleted or forgotton. +had need for this yet) how they are to be deleted or forgotten. @@ -1999,7 +1999,7 @@ \subsection{Wiring diagrams as lenses in categories of arities}\label{sec:wiring \begin{definition}\label{defn:category.of.arities} The category $\arity$ of arities is the free cartesian category generated by - a single object $\XX$. That is, $\arity$ constains an object $\XX$, called + a single object $\XX$. That is, $\arity$ contains an object $\XX$, called the \emph{generic object}, and for any finite set $I$, there is an $I$-fold power $\XX^I$ of $\XX$. The only maps are those that can be defined from the product structure by pairing and projection. @@ -2264,7 +2264,7 @@ \subsection{Wiring diagrams as lenses in categories of arities}\label{sec:wiring \end{enumerate} \end{exercise} -Ok, so the wiring diagrams correponds to the lenses in the category of arities. +Ok, so the wiring diagrams correspond to the lenses in the category of arities. But do they compose in the same way?
Composition of wiring diagrams is given by nesting: to compute $\lens{w^{\sharp}}{w} \then \lens{u^{\sharp}}{u}$, we fill in the inner box of $\lens{u^{\sharp}}{u}$ with the outer box of @@ -2623,7 +2623,7 @@ \subsection{Wiring diagrams as lenses in categories of arities}\label{sec:wiring w &= (t : \Set{Hour},\, m : \Set{a.m./p.m.} \mapsto t : \Set{Hour},\, m : \Set{a.m./p.m.}) \\ w^{\sharp} &= (t : \Set{Hour},\, m : \Set{a.m./p.m.} \mapsto t : \Set{Hour}) \end{align*} - giving us a wiring diagram in $\Cat{WD}_{\cat{T}}$. We can then interepret + giving us a wiring diagram in $\Cat{WD}_{\cat{T}}$. We can then interpret this as the lens from $\cref{ex.ClockWithDisplay}$ as the image of this wiring diagram which interprets the types $\Set{a.m./p.m.}$ and $\Set{Hour}$ as the actual sets $\{\const{a.m.},\const{p.m.}\}$ and $\{1, 2,\ldots, 12\}$ --- which is choosing the $C_{-} : \cat{T} \to \smset$ used in \cref{prop.interpret_typed_wiring_diagram}. @@ -2779,7 +2779,7 @@ \subsection{Wiring diagrams with operations as lenses in Lawvere theories}\label How do we know that the extra maps in a Lawvere theory really do come from the operations of an algebraic theory? We show that the Lawvere theory satisfies a -certain universal property: cartesian functors out of it correpond to +certain universal property: cartesian functors out of it correspond to \emph{models} of the theory. If this is the case, we say that the Lawvere theory is \emph{presented} by the algebraic theory. @@ -2845,7 +2845,7 @@ \subsection{Wiring diagrams with operations as lenses in Lawvere theories}\label able to put matrices in the beads. \end{example} -\section{Summary and Futher Reading} +\section{Summary and Further Reading} In this first chapter, we introduced deterministic and differential systems and saw how they could be composed using wiring diagrams. 
The trick is that both diff --git a/book/C2-.tex b/book/C2-.tex index 54311c4..43b49da 100644 --- a/book/C2-.tex +++ b/book/C2-.tex @@ -30,7 +30,7 @@ \chapter{Non-deterministic systems theories}\label{sec.non_deterministic_systems email-writer system. In this section, we will see a variety of non-deterministic (discrete-time) -systems theories. The kind of non-deterministism --- possibilistic, stochastic, etc. +systems theories. The kind of non-determinism --- possibilistic, stochastic, etc. --- will be encoded in a \emph{commutative monad} (\cref{def.commutative_monad}). @@ -107,7 +107,7 @@ \section{Possibilistic systems} \end{definition} \begin{remark} - While \cref{def.deterministic_system} can be interepreted in any cartesian + While \cref{def.deterministic_system} can be interpreted in any cartesian category because it only used maps and the cartesian product, \cref{def.possibilistic_system} makes use of the \emph{power set} operation $\powset$ which sends a set to its set of subsets. This can't be interpreted @@ -307,7 +307,7 @@ \section{Possibilistic systems} defined by $$f^{\powset}(X) = \bigcup_{a \in X} f(a).$$ \item $\sigma_{A, B} : \powset A \times \powset B \to \powset (A \times B)$ - is defind by + is defined by $$\sigma_{A, B}(X, Y) = \{(a, b) \mid a \in X, b \in Y\}.$$ \end{itemize} \end{proposition} @@ -515,11 +515,11 @@ \section{Stochastic systems} notion of probabilistic or \emph{stochastic} system. The notion of a stochastic system is based on the idea that there should be a -probability of a given change occuring, conditioned upon the current state. A +probability of a given change occurring, conditioned upon the current state. A useful way to formulate the notion of conditional probability is the notion of \emph{stochastic map}. A stochastic map from $A$ to $B$ is a function which takes an $a \in A$ and yields a probability distribution $p(- \mid a)$ on -elements of $B$ which we think of as likelyhoods conditioned on $a$. 
We can make +elements of $B$ which we think of as likelihoods conditioned on $a$. We can make this more precise using the notion of monad. \begin{definition}\label{def.set_of_probs} @@ -655,7 +655,7 @@ \section{Stochastic systems} \end{center} Then we can see that there is a $50\%$ chance that `` '' and a $50\%$ chance that ``,'' follows the $5$-character sequence ``to be''. Of course, such a small -sample woulnd't give us very useful statistics, but if we use the combined works +sample wouldn't give us very useful statistics, but if we use the combined works of Shakespeare, we might get a better sense of what is likely to occur next. Now we build a stochastic system $\Sys{S}$ which will generate text. We take @@ -1474,7 +1474,7 @@ \section{Monadic systems theories and the Kleisli category}\label{sec.monad_doct \begin{definition}\label{def.cost_monad} We will define a monad $\Fun{Cost}$ on the category of sets. We think of a Kleisli map $f : A \to \Fun{Cost} ( B )$ as - assiging the best-case cost of producing a $b \in B$ from a given $a \in A$. + assigning the best-case cost of producing a $b \in B$ from a given $a \in A$. For practical reasons, we assume that only finitely many $b \in B$ are possible (that is, have finite cost) to produce from an $a \in A$. @@ -1553,7 +1553,7 @@ \section{Monadic systems theories and the Kleisli category}\label{sec.monad_doct The crucial question we want to ask of this model is: how much will the project cost in the best case scenario, given a sequence of external conditions? That is, we will iterate the action of the system through the - sequence of paramters starting at + sequence of parameters starting at $\emptyset \in \State{Proj}$, and then ask the cost of $\Set{Steps} \in \State{Proj}$ at the end. \end{example} @@ -1686,7 +1686,7 @@ \section{Adding rewards to non-deterministic systems} \end{definition} \begin{exercise} - Show that the monad of $R$-valued rewards is really a commuativative monad. 
+ Show that the monad of $R$-valued rewards is really a commutative monad. That is, show that the above data satisfies each of the laws in \cref{def.commutative_monad}. Do you see where the commutativity comes into the mix? @@ -1727,7 +1727,7 @@ \section{Adding rewards to non-deterministic systems} \lambda \coloneqq R \times MA \xto{\eta_M \times \id} MR \times MA \xto{\sigma^M} M(R \times A) \] Intuitively, this takes a reward $r \in R$ and a non-deterministic $a \in MA$ -and gives us the non-determinstic pair $(r, a)$. +and gives us the non-deterministic pair $(r, a)$. \begin{proposition} Let $M$ be a commutative monad and $(R, +, 0)$ a commutative monoid. Then the @@ -1753,7 +1753,7 @@ \section{Adding rewards to non-deterministic systems} \end{itemize} \end{proposition} \begin{proof} -It is not obvious that this will satsify the monad laws, but it is a rather +It is not obvious that this will satisfy the monad laws, but it is a rather straightforward check using the laws of $M$ and $R \times -$. We will not prove this result explicitly. However, we will give a slick proof for experts. @@ -1934,7 +1934,7 @@ \section{Adding rewards to non-deterministic systems} \section{Changing the flavor of non-determinism: Monad maps}\label{sec.monad_map} -In the same way that $0$ is a number, determinstic systems are +In the same way that $0$ is a number, deterministic systems are non-deterministic systems, just with a trivial sort of non-determinism. Deterministic systems are $M$-systems for the \emph{identity monad} $\id(X) = X$. No matter what kind of non-determinism we are considering, we can always consider a deterministic system as a @@ -2136,14 +2136,14 @@ \section{Changing the flavor of non-determinism: Monad maps}\label{sec.monad_map} always commutes by the naturality of $\phi$. \cref{eqn.com_monad_map_mult} is the lower square in this diagram; so, if it commutes, then the outer square (which is \cref{eqn.com_monad_map_lift}) commutes.
On the other hand, if -\cref{eqn.com_monad_map_lift} commmutes for all $f : A \to M B$, we may take $f +\cref{eqn.com_monad_map_lift} commutes for all $f : A \to M B$, we may take $f = \id : M A \to M A$ to find that the outer square of \cref{eqn.com_monad_map_lift_helper} becomes just \cref{eqn.com_monad_map_mult}. \end{proof} \begin{proposition} - Let $\phi : M \to N$ be a commutative monad morhpisms. Then there is a + Let $\phi : M \to N$ be a commutative monad morphism. Then there is a strict symmetric monoidal functor $$\phi_{\ast} : \Cat{Kl}(M) \to \Cat{Kl}(N)$$ acting as the identity on objects and sending the Kleisli map $f : A \to MB$ @@ -2360,7 +2360,7 @@ \section{Changing the flavor of non-determinism: Monad maps}\label{sec.monad_map} \] We can imagine, as Claude Shannon did, that this source is an interlocutor communicating over a wire. Suppose we have another interlocutor who reads -the signal generated by our source and generates their own signal in repsonse: +the signal generated by our source and generates their own signal in response: \[ \begin{tikzpicture}[oriented WD, every fit/.style={inner xsep=\bbx, inner ysep=\bby}, bbx = 1cm, bby =.5cm, bb min width=1cm, bb port length=4pt, bb port sep=1, baseline=(X.center)] \node[bb={1}{1}, fill=blue!10] (X) {$\Sys{Transformer}$}; @@ -2441,7 +2441,7 @@ \section{Changing the flavor of non-determinism: Monad maps}\label{sec.monad_map} the \emph{Grothendieck construction}. This perspective constructs the category of lenses out of an \emph{indexed category} $\smctx{-} : \cat{C}\op \to \Cat{Cat}$ of objects of the cartesian category $\cat{C}$ \emph{in context}. -This construction works for \emph{any} indxed category $\cat{A} : \cat{C}\op \to +This construction works for \emph{any} indexed category $\cat{A} : \cat{C}\op \to \Cat{Cat}$, which lets us define a notion of $\cat{A}$-lenses using any indexed category.
By choosing an appropriate indexed category, we will arrive at the notion of $M$-lenses for a commutative monad $M$; this will give us the wiring @@ -2458,7 +2458,7 @@ \subsection{Indexed categories and the Grothendieck construction}\label{sec.inde An indexed category $\cat{A} : \cat{C}\op \to \Cat{Cat}$ is a family of categories $\cat{A}(C)$ that varies functorially with an object $C \in \cat{C}$ of the \emph{base category} $\cat{C}$ (see \cref{def.indexed_cat} for the full -definition). We will intepret the base category $\cat{C}$ as the category of +definition). We will interpret the base category $\cat{C}$ as the category of passforward maps, and the categories $\cat{A}(C^+)$ as the categories of passback maps that take $C^+$ as an extra argument. %% @@ -4582,7 +4582,7 @@ \section{Changing the Flavor of Non-determinism} \rho(\const{tails}) &= \frac{1}{10}\const{heads} + \frac{9}{10}\const{tails} \end{align*} -Explictly, we will compose with the wiring diagram: +Explicitly, we will compose with the wiring diagram: \[ \begin{tikzpicture}[oriented WD, every fit/.style={inner xsep=\bbx, inner ysep=\bby}, bbx = 1cm, bby =.5cm, bb min width=1cm, bb port length=0pt, bb port sep=1, baseline=(X.center)] \node[bb={0}{1}, fill=blue!10, dashed] (X) {$\phantom{\Sys{Source}}$}; @@ -4652,7 +4652,7 @@ \section{Summary and Further Reading} We then saw how the notion of lens could be generalized to any indexed category. This notion of generalized lens is due to Spivak in -\cite{spivak2019generalized}. This generalization of lens will underly our +\cite{spivak2019generalized}. This generalization of lens will underlie our formal notion of systems theory, which will be introduced in the next chapter. 
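The non-deterministic systems of this chapter can be made concrete in a few lines. Here is a rough Python sketch (the example maps are invented) of Kleisli composition for the power set monad, where a possibilistic map $A \to \powset B$ is composed with one $B \to \powset C$ by taking a union over all intermediate possibilities:

```python
# Kleisli composition for the power set monad: a sketch with invented maps.

def kleisli_compose(f, g):
    """Compose f : A -> P(B) with g : B -> P(C) to get A -> P(C)."""
    return lambda a: set().union(*(g(b) for b in f(a)))

f = lambda n: {n, n + 1}      # a number may stay put or step up
g = lambda n: {2 * n, 3 * n}  # then it may double or triple

h = kleisli_compose(f, g)
print(h(1))  # possibilities {2, 3} via 1, and {4, 6} via 2
```

Note that `set().union(*...)` with no arguments returns the empty set, so a dead end (an empty set of possibilities) correctly propagates through the composite.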
diff --git a/book/C3-.tex b/book/C3-.tex index 64d2941..385cb8e 100644 --- a/book/C3-.tex +++ b/book/C3-.tex @@ -3204,7 +3204,7 @@ \subsection{The differential systems theories} \end{example} \begin{example} -Let $\cat{C}$ be a category with pullbacks and $\cat{D} = \cat{C}$ be all maps. If we define $TC := C \times C$ and $\pi = \pi_{1} : C \times C \to C$ in the same way as in \cref{ex:cartesian.cat.display.doc}, then the systems theory $\dispdoc_{\cat{C}, T}$ is the systems theory of \emph{dependent} deterministic systems. The main difference between this systems theory and the ordinary determinstic systems theory is that what sort of input a system may take in can depend on the current exposed variable. In particular, an interface for a dependent deterministic system $\Sys{S}$ will consist of a map $v : \In{S} \to \Out{S}$ which we can think of sending each input to the output it is valid for. The update is then of the form +Let $\cat{C}$ be a category with pullbacks and $\cat{D} = \cat{C}$ be all maps. If we define $TC := C \times C$ and $\pi = \pi_{1} : C \times C \to C$ in the same way as in \cref{ex:cartesian.cat.display.doc}, then the systems theory $\dispdoc_{\cat{C}, T}$ is the systems theory of \emph{dependent} deterministic systems. The main difference between this systems theory and the ordinary deterministic systems theory is that what sort of input a system may take in can depend on the current exposed variable. In particular, an interface for a dependent deterministic system $\Sys{S}$ will consist of a map $v : \In{S} \to \Out{S}$ which we can think of as sending each input to the output it is valid for.
The update is then of the form \[ \begin{tikzcd} \State{S} \times_{\Out{S}} \In{S} \ar[rr, "\update{S}"] & & \State{S} diff --git a/book/C4-.tex b/book/C4-.tex index a9ce0bb..465fc21 100644 --- a/book/C4-.tex +++ b/book/C4-.tex @@ -267,7 +267,7 @@ \section{Introduction} \section{Composing behaviors in general}\label{sec.behaviors_general} -Before we get to this abstract defintion, we will take our time exploring the +Before we get to this abstract definition, we will take our time exploring the sorts of compositionality results one may prove quickly by working in the double category of arenas. @@ -620,7 +620,7 @@ \section{Arranging categories along two kinds of composition: Doubly indexed cat coherence issues. While our doubly indexed category of deterministic systems will satisfy this functoriality condition on the nose, we will soon see a doubly indexed category of matrices of sets for which this law only holds up - to a coherence isomorphism. Again, the issue invovles shuffling parentheses + to a coherence isomorphism. Again, the issue involves shuffling parentheses around, and we will sweep it under the rug.} \item (Horizontal Lax Functoriality) For horizontal maps $f : D_1 \to D_2$, $g : D_2 \to D_3$ and $h : D_3 \to D_4$, the compositors $\mu$ satisfy the @@ -632,7 +632,7 @@ \section{Arranging categories along two kinds of composition: Doubly indexed cat \cat{A}(D_1)$ is the identity profunctor, $\cat{A}(\id_{D_1}) = \cat{A}(D_1)$. Furthermore, $\mu_{\id_{D_1}, f}$ and $\mu_{f, \id_{D_2}}$ are equal to the isomorphisms of \cref{ex.identity_profunctor} given by the naturality of - $\cat{A}(f)$ on the left and right respectively. We may sumarize this may + $\cat{A}(f)$ on the left and right respectively. 
We may summarize this by saying that $$\mu_{\id, f} = \id_{\cat{A}(f)} = \mu_{f, \id}.$$ \end{itemize} @@ -938,8 +938,8 @@ \subsection{Double Functors} \] such that the following laws hold: \begin{itemize} - \item $F$ commutes with horizontal compostition: $F(\alpha \mid \beta) = F\alpha \mid F\beta$. - \item $F$ commutes with vertical comopsition: $F\left( \frac{\alpha}{\beta} \right) = \frac{F\alpha}{F\beta}$. + \item $F$ commutes with horizontal composition: $F(\alpha \mid \beta) = F\alpha \mid F\beta$. + \item $F$ commutes with vertical composition: $F\left( \frac{\alpha}{\beta} \right) = \frac{F\alpha}{F\beta}$. \item $F$ sends horizontal identities to horizontal identities, and vertical identities to vertical identities. \end{itemize} @@ -1686,7 +1686,7 @@ \subsection{Vertical Slice Construction: Functoriality}\label{sec.functoriality_ \] There is a single natural transformation given as the composite of this ``pasting diagram''. But, if we read it by composing vertically first, and then -composing horizontally second, we arive at $(F \then G)^{\frac{j}{k}}$, while if +composing horizontally second, we arrive at $(F \then G)^{\frac{j}{k}}$, while if we read it by composing horizontally first and then vertically second, we get the composite of $(F \then G)^j$ and $(F \then G)^k$ as desired. \item (Horizontal Functoriality) Let $f : D_1 \to D_2$ and $g : D_2 \to D_3$ be @@ -1907,12 +1907,12 @@ \section{Change of systems theory} which may be defined for any cartesian category $\cat{C}$. While we have focused so far on the case $\cat{C} = \smset$, many other cartesian categories are of interest in the study of deterministic dynamical systems.
For example, in -ergodic theory we most often use the category of measureable spaces and +ergodic theory we most often use the category of measurable spaces and measurable functions.\footnote{We most often consider maps which preserve a specific measure on a space as well, but the category of such measure preserving maps is not cartesian. Often one needs to go and twiddle these general definitions of systems theory in particular cases to suit the particular - needs of a subject.} We often assume the dynamics of the systems are not arbitary + needs of a subject.} We often assume the dynamics of the systems are not arbitrary set maps, but are furthermore continuous or differentiable; this means working in the cartesian categories of topological spaces or differentiable manifolds. \item There are also the differential systems theories @@ -2023,7 +2023,7 @@ \subsection{Definition} This is a $(\cat{B}, T_2)$-system, and this process is how we may use a change of systems theories to turn $(\cat{A}, T_1)$-systems into $(\cat{B},T_2)$-systems. -We therefore arrive at the following formal defintion of change of systems theory. +We therefore arrive at the following formal definition of change of systems theory. \begin{definition}\label{def.change_of_doctrine} Let $(\cat{A} : \cat{C} \to \Cat{Cat}, T_1)$ and $(\cat{B} : \cat{D} \to \Cat{Cat}, T_2)$ be theories of dynamical systems. A \emph{change of systems theories} @@ -2144,7 +2144,7 @@ \subsection{Definition} \cref{prop.morphism_commutative_monoid_change_doctrine} gives us a change of systems theory $\nondet_{\probset} \to \nondet_{\powset}$ which reinterprets a probabilistic system as a possibilistic one where the state $s'$ is possibly the -udpate $\phi_{\ast}\update{S}(s, i)$ of state $s$ with input $i$ just when +update $\phi_{\ast}\update{S}(s, i)$ of state $s$ with input $i$ just when the probability $\update{S}(s, i)(s')$ that $s$ will transition to $s'$ on input $i$ is non-zero.
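This change of flavor is easy to sketch concretely: the underlying monad map sends a distribution to its support, the set of outcomes with non-zero probability. A rough Python illustration (the stochastic update below is invented):

```python
# Sketch: reinterpreting a stochastic update as a possibilistic one by
# taking supports. The coin-like update below is invented.

def support(dist):
    """The monad map probset -> powset: outcomes with non-zero probability."""
    return {x for x, p in dist.items() if p > 0}

def stochastic_update(state, inp):
    # A distribution on next states, conditioned on the current state.
    if state == "heads":
        return {"heads": 0.9, "tails": 0.1}
    return {"heads": 0.5, "tails": 0.5}

def possibilistic_update(state, inp):
    # Post-compose the stochastic update with the support map.
    return support(stochastic_update(state, inp))

print(possibilistic_update("heads", None))
```

The possibilistic system forgets the probabilities but remembers exactly which transitions can occur, as described above.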
\end{example} @@ -2157,7 +2157,7 @@ \subsection{Definition} \[ ((F, \overline{F}), \id) : \determ_{\cat{C}} \to \determ_{\cat{D}} \] -from the deterministic systems theoy in $\cat{C}$ to the cartesian systems theory in $\cat{D}$. +from the deterministic systems theory in $\cat{C}$ to the cartesian systems theory in $\cat{D}$. \end{proposition} \begin{proof} We need to construct the indexed functor $(F, \overline{F})$, and then prove @@ -2195,7 +2195,7 @@ \subsection{Definition} For example, it is obvious that any deterministic dynamical system whose $\update{}$ and $\expose{}$ maps are continuous gives rise to a deterministic -dynamcial system without the constraint of continuity, simply by forgetting that +dynamical system without the constraint of continuity, simply by forgetting that the maps are continuous. We formalize this observation by applying \cref{prop.change_of_doctrine_cartesian} to the forgetful functor $\Fun{U} : \Cat{Top} \to \smset$ which sends a topological space to its underlying set of points. @@ -2209,7 +2209,7 @@ \subsection{Definition} The most interesting examples of changes of systems theory are the ones which move between different sorts of systems theory, such as from differential to deterministic. -An example of this is the Euler approximation, which takes a Eulidean +An example of this is the Euler approximation, which takes a Euclidean differential system to a deterministic system. Let's take a minute to recall the Euler method. If $\lens{u}{r} : \lens{\rr^n}{\rr^n} \fromto \lens{\rr^k}{\rr^m}$ @@ -2570,7 +2570,7 @@ \subsection{Functoriality}\label{sec:functoriality.of.sys} \end{itemize} \begin{example} -For Euler approximatation +For Euler approximation \[ \mathcal{E}_{\varepsilon} : \eucdiff_{|\textbf{Aff}} \to \determ_{\Cat{Euc}}, \] @@ -2606,7 +2606,7 @@ \subsection{Functoriality}\label{sec:functoriality.of.sys} formula: if $u(s, i) = 0$, then $\mathcal{E}_{\varepsilon}u(s, i) = s + \varepsilon \cdot u(s, i) = s$. 
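As a quick numeric check, here is a sketch (with an invented one-dimensional system, not the book's code) of the Euler approximation as described above: the differential update $u$ becomes the deterministic update $s \mapsto s + \varepsilon \cdot u(s, i)$, and any state with $u(s, i) = 0$ is a fixed point on the nose:

```python
# Euler approximation as a change of systems theory, sketched on the
# invented system ds/dt = i - s (steady state at s = i).

def euler(u, eps):
    """Turn a differential update u into a deterministic update."""
    return lambda s, i: s + eps * u(s, i)

u = lambda s, i: i - s
step = euler(u, eps=0.5)

s = 0.0
for _ in range(20):
    s = step(s, 1.0)   # iterate with constant input i = 1.0
print(s)               # approaches the steady state s = 1.0

# A steady state of the differential system is, exactly, a fixed point
# of its Euler approximation:
assert step(1.0, 1.0) == 1.0
```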
But we deduced this fact from our general definition of change of systems theory. This sort of analysis can tell us precisely - which sorts of behaviors are preserved even in situtations where it may not be + which sorts of behaviors are preserved even in situations where it may not be so obvious from looking at a defining formula. The fact that $\Cat{Sys}(\mathcal{E}_{\varepsilon})$ is a doubly indexed @@ -2632,6 +2632,6 @@ \section{Summary and Further Reading} relationships. We then saw how this construction varied as we changed systems theory. There are other examples of changes of systems theories not covered here. For example, -the Rutta-Kunge approximation can be seen as a change of systems theory; see \cite{ngotiaoco2017compositionality}. +the Runge–Kutta approximation can be seen as a change of systems theory; see \cite{ngotiaoco2017compositionality}. \end{document} diff --git a/book/C5-.tex b/book/C5-.tex index 2419e33..7011622 100644 --- a/book/C5-.tex +++ b/book/C5-.tex @@ -43,7 +43,7 @@ \section{Introduction} compositionality theorem concerning any sort of behavior in any systems theory. But the behaviors of the component systems must be compatible with -eachother: if a system $\Sys{S_1}$ has its parameters set by the exposed +each other: if a system $\Sys{S_1}$ has its parameters set by the exposed variables of a system $\Sys{S_2}$, then a behavior $\phi_1$ of $\Sys{S_1}$ will be compatible with a behavior $\phi_2$ of $\Sys{S_2}$ when $\phi_2$ is a behavior for the parameters charted by the variables exposed by $\phi_1$. @@ -294,7 +294,7 @@ \section{Steady states compose according to the laws of matrix arithmetic}\label \end{remark} Now we'll go through and define the basic operations of matrix arithmetic: -mutliplication, Kronecker product (also known as the tensor product), and +multiplication, Kronecker product (also known as the tensor product), and partial trace. 
\begin{definition}\label{def.matrix_of_sets_multiplication} @@ -443,7 +443,7 @@ \section{Steady states compose according to the laws of matrix arithmetic}\label categorical way: we'll make a \emph{doubly indexed functor}. But for now, let's just show that tensoring and partially tracing steady state -matrices correponds to taking the parallel product and wiring an input to an +matrices corresponds to taking the parallel product and wiring an input to an output, respectively, of systems. \begin{proposition}\label{prop.steady_state_matrix_parallel_tensor} @@ -465,7 +465,7 @@ \section{Steady states compose according to the laws of matrix arithmetic}\label \begin{remark} - \cref{prop.steady_state_matrix_parallel_tensor} is our motiviation for using + \cref{prop.steady_state_matrix_parallel_tensor} is our motivation for using the symbol ``$\otimes$'' for the parallel product of systems. \end{remark} @@ -609,7 +609,7 @@ \section{Steady states compose according to the laws of matrix arithmetic}\label The horizontal maps take the steady states of a system, while the vertical map on the left wires together the system with that wiring diagram, and the vertical map on the right applies that transformation of the matrix. In the next section, -we will see how this square can be interepreted as a naturality condition in a +we will see how this square can be interpreted as a naturality condition in a \emph{doubly indexed functor}. One thing to notice here is that taking the partial trace (the right vertical @@ -707,7 +707,7 @@ \section{The big theorem: representable doubly indexed functors} between them. In other words, maps $T \to X$ are edges in $X$. So we may say that $T$ represents edges. \item Suppose that $\cat{C}$ is the category of rings, and let $T = \zz[x, y]$ be - the ring of polynomials in two variables. A ring homomoprhism $f : T \to X$
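For matrices whose entries are finite sets, the arithmetic being defined here can be sketched concretely: disjoint union plays the role of sum and cartesian product the role of product, so that taking cardinalities recovers ordinary matrix arithmetic. The sketch below is our own representation (matrices as dictionaries keyed by `(row, col)`, finite index sets assumed), not notation from the book.

```python
from itertools import product

def matmul(M, N, rows, mids, cols):
    """Matrices of sets: (M.N)[i,k] is the disjoint union over j of
    M[i,j] x N[j,k]; tagging each pair with j keeps the union disjoint."""
    return {(i, k): {(j, m, n) for j in mids for m in M[i, j] for n in N[j, k]}
            for i in rows for k in cols}

def kron(M, N):
    """Kronecker (tensor) product: indices pair up and entries multiply."""
    return {((i1, i2), (k1, k2)): set(product(M[i1, k1], N[i2, k2]))
            for (i1, k1) in M for (i2, k2) in N}

# Cardinalities behave like ordinary matrix arithmetic:
M = {(0, 0): {'a'}, (0, 1): {'b'}}
N = {(0, 0): {'x'}, (1, 0): {'y', 'z'}}
P = matmul(M, N, rows=[0], mids=[0, 1], cols=[0])   # |P[0,0]| = 1*1 + 1*2 = 3
```

A partial trace could be sketched in the same style by summing the entries along a repeated row/column index; we omit it to keep the sketch short.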
A ring homomorphism $f : T \to X$ can send $x$ to any element $f(x)$ and similarly $y$ to any element $f(y)$; once it's done that, the value of $f$ on any polynomial in $x$ and $y$ must be given by @@ -765,7 +765,7 @@ \subsection{Turning lenses into matrices: Representable double Functors} \end{tikzpicture} \] In this section, we will see a general formula for taking an arbitrary lens and -turning it into a matrix. Mutliplying by the matrix will then correspond to +turning it into a matrix. Multiplying by the matrix will then correspond to wiring according to that lens. This process of turning a lens into a matrix will give us a functor $\Cat{Lens} @@ -927,7 +927,7 @@ \subsection{Turning lenses into matrices: Representable double Functors} inner input which comes from the inner output $a^+$ and outer input $b^-$. To take a concrete example, suppose that $\lens{w^{\sharp}}{w}$ were the -following wiring diagarm: +following wiring diagram: \[ \begin{tikzpicture}[oriented WD, every fit/.style={inner xsep=\bbx, inner ysep=\bby}, bbx = .3cm, bby =.3cm, bb min width=.5cm, bb port length=2pt, bb port sep=1, baseline=(S'.center)] \node[bb={2}{2}, fill=blue!10, dashed] (S) {}; @@ -980,7 +980,7 @@ \subsection{Turning lenses into matrices: Representable double Functors} By the functoriality of \cref{prop.lens_to_matrix_functor_discrete}, we can calculate the matrix of a big wiring diagram by expressing it in terms of a -series of traces, and mutliplying the resulting matrices together. This means +series of traces, and multiplying the resulting matrices together. 
This means that the process of multiplying, tensoring, and tracing matrices described by a wiring diagram is well described by the matrix we constructed in \cref{prop.lens_to_matrix_functor_discrete}, since we already know that it @@ -1055,7 +1055,7 @@ \subsection{Turning lenses into matrices: Representable double Functors} \item It sends a lens $\lens{w^{\sharp}}{w}$ to the set of squares $\beta : \lens{I}{O} \to \lens{w^{\sharp}}{w}$, indexed by their top and bottom boundaries. - \item It sends a square $\alpha$ to the map given by horizontal compostion + \item It sends a square $\alpha$ to the map given by horizontal composition $\beta \mapsto \beta \mid \alpha$. \end{itemize} @@ -1478,7 +1478,7 @@ \subsection{How behaviors of systems wire together: representable doubly indexed it with the first one and then with the second one. \item (Functorial Interchange) This asks us to suppose that we have a pair of wiring patterns and compatible charts between them (a square in $\Cat{Arena}$). The law then says that if we - take a bunch of behaviors whose charts are compatable according to the first + take a bunch of behaviors whose charts are compatible according to the first wiring pattern, wire them together, and then compose with a behavior of the second chart, we get the same thing as if we compose them all with behaviors of the first chart, noted that they were compatible with the second wiring diff --git a/book/C6-.tex b/book/C6-.tex index 81fe095..7bbe8a5 100644 --- a/book/C6-.tex +++ b/book/C6-.tex @@ -17,7 +17,7 @@ \section{Introduction} since we compose these systems by setting the parameters of some according to the exposed state variables of others. -There are many parameter-setting systems theories: deterministic (distrete, continuous, measurable), differential (Euclidean, general), non-deterministic (possibilistic, probabilistic, cost-aware, etc.). 
From each doctrine $\dd$, we constructed a doubly indexed category $\Cat{Sys}_{\dd} : \Cat{Arena}_{\dd} \to \Cat{Cat}$, indexed by the double category of arenas in the doctrine $\dd$. This doubly indexed category organized the behaviors of the systems in doctrine $\dd$ (through the charts) and the ways that systems can be composed (through the lenses). +There are many parameter-setting systems theories: deterministic (discrete, continuous, measurable), differential (Euclidean, general), non-deterministic (possibilistic, probabilistic, cost-aware, etc.). From each doctrine $\dd$, we constructed a doubly indexed category $\Cat{Sys}_{\dd} : \Cat{Arena}_{\dd} \to \Cat{Cat}$, indexed by the double category of arenas in the doctrine $\dd$. This doubly indexed category organized the behaviors of the systems in doctrine $\dd$ (through the charts) and the ways that systems can be composed (through the lenses). But composing systems through lenses is not the only way to model systems. In this section we will see two more ways of understanding what it means to be a system: the \emph{behavioral approach} to systems theory, which composes systems by sharing their exposed variables, and the \emph{diagrammatic approach} to systems theory, which composes diagrams describing systems by gluing together their exposed parts. In the behavioral approach (see \cref{sec:behavioral.approach}), systems are understood as (variable) sets of behaviors, some of which are exposed to their environment. These systems are composed by \emph{sharing} these exposed behaviors --- that is, by declaring the behaviors exposed by some systems to be the same. In the diagrammatic approach (see \cref{sec:diagram.approach}), systems are presented by diagrams formed by basic constituent parts, some of which are exposed to their environment. These systems are composed by gluing together their exposed parts.
@@ -38,7 +38,7 @@ \section{Introduction} The \emph{parameter-setting doctrine} which has been the focus of the book so far answers these questions in the following way: \begin{itemize} \item A system consists of a notion of how things can be, called the \emph{states}, and a notion of how things will change given how they are, called the \emph{dynamics}. In total, a system is a lens of the form $\lens{\update{S}}{\expose{S}} : \lens{T\State{S}}{\State{S}} \fromto \lens{\In{S}}{\Out{S}}$. - \item The dynamics of a system can invovle certain \emph{parameters}, and \emph{expose} some variables of its state. The admissible parameters can depend on the variables being exposed. In total, an interface for a system is an arena $\lens{\In{S}}{\Out{S}}$. + \item The dynamics of a system can involve certain \emph{parameters}, and \emph{expose} some variables of its state. The admissible parameters can depend on the variables being exposed. In total, an interface for a system is an arena $\lens{\In{S}}{\Out{S}}$. \item A composition pattern between interfaces says which exposed variables will be passed forward, and how the internal parameters should be set according to the external parameters and the exposed variables. That is, a composition pattern is a lens. \item Systems are composed by setting the parameters of some according to the exposed variables of others. This is accomplished by lens composition. \item A map between systems is a function of state which respects observable behavior; it affects the interfaces as a chart. @@ -192,7 +192,7 @@ \section{The Behavioral Approach to Systems Theory}\label{sec:behavioral.approac that the parameters of the original differential system $\Sys{LK}$ are considered as exposed variables of state in the behavioral approach. 
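The parameter-setting answers above, with a system as a lens $\lens{\update{S}}{\expose{S}}$ composed by feeding exposed variables into parameters, can be sketched in code for the discrete deterministic case. Everything here (the class name, the `wire` helper, the example systems) is our own illustration, not notation from the book.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class System:
    """A discrete deterministic system: update : S x I -> S, expose : S -> O."""
    update: Callable[[Any, Any], Any]
    expose: Callable[[Any], Any]

def wire(sys1, sys2, route):
    """Compose in parameter-setting style: sys2's parameter is set, via
    `route`, from sys1's exposed variable (a special case of composing
    along a lens / wiring diagram)."""
    def update(state, inp):
        s1, s2 = state
        return (sys1.update(s1, inp), sys2.update(s2, route(sys1.expose(s1))))
    return System(update, lambda state: sys2.expose(state[1]))

# Hypothetical example: a counter whose exposed state drives an accumulator.
counter = System(update=lambda s, i: s + i, expose=lambda s: s)
acc = System(update=lambda s, p: s + p, expose=lambda s: s)
composite = wire(counter, acc, route=lambda x: x)
```

The composite is again a `System` on the product state space, mirroring the fact that lens composition yields another lens.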
This is because the behavioral approach composes systems by setting -exposed variables equal to eachother, so the parameters must be considered as +exposed variables equal to each other, so the parameters must be considered as exposed variables so that they can be set equal to other variables. \end{remark} @@ -863,7 +863,7 @@ \subsection{Bubble diagrams as spans in categories of arities}\label{sec:behavio precisely the case, although we will want to restrict to a certain class of ``nice'' spans. -Before we see a formal definiton of bubble diagram, let's give an informal definition. +Before we see a formal definition of bubble diagram, let's give an informal definition. \begin{informal} A \emph{bubble diagram} is a diagram which consists of a number of \emph{inner bubbles} drawn within an \emph{outer bubble}, each with some \emph{ports}. @@ -1626,7 +1626,7 @@ \subsection{Bubble diagrams as spans in categories of arities}\label{sec:behavio \end{example} \begin{remark} - It is not as easy to separate the bubble diagrams from the undirected wiring diagrams when passing to a general essentially algebraic theory. This is fundamentally because the operations of the theory could be arbitrary, and so no longer guarentee that the diagrams really satisfy the properties that bubble diagrams should. + It is not as easy to separate the bubble diagrams from the undirected wiring diagrams when passing to a general essentially algebraic theory. This is fundamentally because the operations of the theory could be arbitrary, and so no longer guarantee that the diagrams really satisfy the properties that bubble diagrams should. \end{remark} @@ -1684,7 +1684,7 @@ \subsection{The behavioral doctrine of interval sheaves}\label{sec:behavioral.ty \item For every interval $\ell$, a set $X(\ell)$ of behaviors which may occur during the interval $\ell$. 
\item For every morphism $a : \ell' \to \ell$ of intervals, a restriction function $$b \mapsto b|_{a} : X(\ell) \to X(\ell')$$ - which selects out the part of $b$ occuring during the subinterval $\ell'$ beginning at $a$ in $\ell$. + which selects out the part of $b$ occurring during the subinterval $\ell'$ beginning at $a$ in $\ell$. \end{enumerate} This data is required to satisfy the following conditions: \begin{enumerate} @@ -1723,7 +1723,7 @@ \subsection{The behavioral doctrine of interval sheaves}\label{sec:behavioral.ty The restriction maps are given by restricting: if $a : \ell' \to \ell$, then $f|_{a} = f \circ (x \mapsto a + x)$ where $x \mapsto a + x : (0, \ell') \to (0, \ell)$ is the inclusion of $(0, \ell')$ into $(0, \ell)$ shifted over by $a$. Unit and functoriality conditions follow directly from unit and associativity of composition; the only tricky law to check is the gluing condition. We can check both parts of the gluing condition using \cref{lem:gluing.condition.two.conditions}: \begin{enumerate} \item (Separation) Suppose that $f_{1}$ and $f_{2} : (0, \ell) \to \rr$ are $n$-times continuously differentiable and that their restriction to any subintervals are equal. Since $(0, \ell)$ is open, for every $x \in (0, \ell)$ there is a strict subinterval in $(0, \ell)$ containing $x$; therefore, $f_{1}$ and $f_{2}$ are equal on this subinterval and therefore at $x$. So $f_{1} = f_{2}$. - \item (Existence) Suppose we have compatible functions $f_{a} : (0, \ell_{a}) \to \rr$ for every $a : \ell_{a} \rightsquigarrow \ell$. For any $x \in (0, \ell)$, there is a strict subinterval $a_{x} : \ell_{a_{x}} \rightsquigarrow \ell$ containing $x$ in the sense that $x \in (a_{x}, a_{x} + \ell_{a_{x}})$. We may therefore define a function $f : (0, \ell) \to \rr$ by $f(x) = f_{a_x}(x)$. 
This is well defined since if $a' : \ell' \rightsquigarrow \ell$ is any other strict subinterval containing $x$, then $x$ is also in their intersection which is a strict subinterval; by the compatibility of the functions $f_{a}$, it follows that $f_{a_{x}}(x) = f_{a'}(x)$ on this intersection. Since being $n$-times continuously differentiable is a local property and $f$ is defined to be a $n$-times continously differentiable in the neighborhood of any point, $f$ is also $n$-times continously differentiable. + \item (Existence) Suppose we have compatible functions $f_{a} : (0, \ell_{a}) \to \rr$ for every $a : \ell_{a} \rightsquigarrow \ell$. For any $x \in (0, \ell)$, there is a strict subinterval $a_{x} : \ell_{a_{x}} \rightsquigarrow \ell$ containing $x$ in the sense that $x \in (a_{x}, a_{x} + \ell_{a_{x}})$. We may therefore define a function $f : (0, \ell) \to \rr$ by $f(x) = f_{a_x}(x)$. This is well defined since if $a' : \ell' \rightsquigarrow \ell$ is any other strict subinterval containing $x$, then $x$ is also in their intersection which is a strict subinterval; by the compatibility of the functions $f_{a}$, it follows that $f_{a_{x}}(x) = f_{a'}(x)$ on this intersection. Since being $n$-times continuously differentiable is a local property and $f$ is defined to be $n$-times continuously differentiable in the neighborhood of any point, $f$ is also $n$-times continuously differentiable. \end{enumerate} We can think of the interval $\mathcal{C}^{0}$ as the set of real numbers \emph{varying continuously in time}.
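The existence argument above, defining $f(x) = f_{a_x}(x)$ using any subinterval containing $x$, is essentially an algorithm. The following Python sketch (names ours) assumes, as the compatibility hypothesis guarantees, that the local pieces agree on overlaps, so the choice of covering piece does not matter.

```python
def glue(pieces):
    """Glue compatible local functions into one global function.

    `pieces` is a list of ((lo, hi), g) with g defined on the open
    interval (lo, hi).  We assume any two pieces agree on overlaps,
    mirroring the compatibility hypothesis in the existence proof.
    """
    def f(x):
        for (lo, hi), g in pieces:
            if lo < x < hi:       # pick any piece whose interval contains x
                return g(x)
        raise ValueError(f"{x} is not covered by any piece")
    return f

# Two compatible pieces of x |-> x**2, on (0, 2) and (1, 3), glue to x**2 on (0, 3).
f = glue([((0, 2), lambda x: x ** 2), ((1, 3), lambda x: x ** 2)])
```

Because the pieces agree on the overlap $(1, 2)$, `f` is independent of which piece answers for a given `x`.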
\end{enumerate} -There is a close formal analogy between the diagrammatic and the behavioral approachs to systems theory: \cref{eqn:pushout.composition.of.systems} is the same diagram as \cref{eqn:pullback.composition.of.systems}, just with all the arrows going the other way around. This means we can use bubble diagrams to describe composites of diagrams as well! +There is a close formal analogy between the diagrammatic and the behavioral approaches to systems theory: \cref{eqn:pushout.composition.of.systems} is the same diagram as \cref{eqn:pullback.composition.of.systems}, just with all the arrows going the other way around. This means we can use bubble diagrams to describe composites of diagrams as well! \subsection{Port-plugging systems theories: Labelled graphs} -Let's work out a class of port-plugging theories to get a sense for how it feels to work within the doctrine. Most of the examples we gave above were graphs whose nodes and edges were labebelled with some sort of data. We can formalize this situation in general. +Let's work out a class of port-plugging theories to get a sense for how it feels to work within the doctrine. Most of the examples we gave above were graphs whose nodes and edges were labelled with some sort of data. We can formalize this situation in general. First, let's recall what a \emph{graph} is, for a category theorist. There are many different flavors of graph, and what category theorists tend to prefer would be called \emph{directed multi-graphs with loops} by more traditional graph theorists. We will just call them graphs. @@ -2235,7 +2235,7 @@ \subsection{Port-plugging systems theories: Labelled graphs} As you can see, taking the pushout glues together the two graphs over their shared part. \end{example} -We are interested in labelled graphs. We will give a general definition of labelled graphs in the upcoming \cref{defn:labelled.graph.general}, but for now we make the following defintion of two important special cases. 
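The pushout gluing just described can be sketched concretely in the simplest case: graphs represented as a set of nodes together with a set of (source, target) edge pairs, glued along a shared set of nodes. This is our own minimal sketch of a pushout over a discrete shared graph whose legs are literal subset inclusions; a general pushout would also rename nodes to keep the non-shared parts apart.

```python
def glue_graphs(g1, g2, shared):
    """Glue two graphs along a shared set of nodes.

    A graph is a pair (nodes, edges) with edges a set of (src, tgt) pairs.
    Under the stated disjointness assumption, the pushout is just a union."""
    (n1, e1), (n2, e2) = g1, g2
    assert shared <= n1 and shared <= n2, "shared nodes must lie in both graphs"
    assert (n1 - shared).isdisjoint(n2 - shared), "non-shared nodes must be disjoint"
    return (n1 | n2, e1 | e2)

# Gluing the paths a -> b and b -> c along {b} yields the path a -> b -> c.
path = glue_graphs(({'a', 'b'}, {('a', 'b')}),
                   ({'b', 'c'}, {('b', 'c')}),
                   {'b'})
```

Note that the glued graph contains a path from `a` to `c` that exists in neither component, an instance of new behavior appearing under composition.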
+We are interested in labelled graphs. We will give a general definition of labelled graphs in the upcoming \cref{defn:labelled.graph.general}, but for now we make the following definition of two important special cases. \begin{definition}\label{defn:labelled.graph} Let $G$ be a graph and $L$ a set of \emph{labels}. Then \begin{enumerate} @@ -2292,7 +2292,7 @@ \subsection{Port-plugging systems theories: Labelled graphs} The case of node labellings is very similar. Let $\ell : G_{0} \to L$ be a node labelling. We can then define a map $\hat{\ell} : G \to KL$ by $\hat{\ell}_{0} = \ell$ and $\hat{\ell}_{1}(e) = (\ell(s(e)), \ell(t(e)))$. Conversely, for any map $\ell : G \to KL$ we have $\ell_{0} : G_{0} \to L$. These two processes are inverse since any edge $e : a \to b$ must be sent to an edge $\ell(a) \to \ell(b)$, but there is exactly one such edge. \end{proof} -This reframing justifies us generaling the notion of labelling to allow values in any graph. +This reframing justifies us generalizing the notion of labelling to allow values in any graph. \begin{definition} Let $\mathcal{L}$ be a graph. A graph labelled in $\mathcal{L}$ is a graph $G$ together with a labelling map $\ell : G \to \mathcal{L}$. @@ -2300,7 +2300,7 @@ \subsection{Port-plugging systems theories: Labelled graphs} \end{definition} \begin{example} -Continuing \cref{ex:transition.system.labelled.graph}, we can think of a transition diagram for an $\lens{I}{O}$-system as a $KO \times EL$ labelled graph. By the universal propoerty of the product, a labelling in $KO \times EI$ is a labelling in $KO$ together with a labelling in $EI$, which is to say a node labelling in $O$ together with an edge labelling in $I$. +Continuing \cref{ex:transition.system.labelled.graph}, we can think of a transition diagram for an $\lens{I}{O}$-system as a $KO \times EI$ labelled graph.
By the universal property of the product, a labelling in $KO \times EI$ is a labelling in $KO$ together with a labelling in $EI$, which is to say a node labelling in $O$ together with an edge labelling in $I$. \end{example} We can think of a general graph $\mathcal{L}$ as giving us a system of labels with constraints. The nodes of $\mathcal{L}$ are the possible node-labels, and the edges of $\mathcal{L}$ are the possible edge labels. But an edge label is constrained to go between two node labels. Therefore, the way the edges are linked together constrains what sort of labels an edge might have given the labels its source and target have. @@ -2448,7 +2448,7 @@ \subsection{Bubble diagrams for the port-plugging doctrine} \arrow[dashed, from=4-1, to=4-4] \end{tikzcd} \] - The solid diagram is a square $\alpha$ from $D$ to the composite of $I \to L \from M$ and $M \to L' \from O$. The dashed arrows are uniquely determined by compsing $D \to I \to L$ and $D \to O \to L'$ respectively, and because the two solid squares commute, they are both equalized when composed with the inclusions into $L +_{M} L'$. Then, by \cref{lem:pushout.also.pullback}, there is a unique map $D \to M$ making the diagram commute; but this is precisely the splitting of $\alpha$ into two squares that we needed. + The solid diagram is a square $\alpha$ from $D$ to the composite of $I \to L \from M$ and $M \to L' \from O$. The dashed arrows are uniquely determined by composing $D \to I \to L$ and $D \to O \to L'$ respectively, and because the two solid squares commute, they are both equalized when composed with the inclusions into $L +_{M} L'$. Then, by \cref{lem:pushout.also.pullback}, there is a unique map $D \to M$ making the diagram commute; but this is precisely the splitting of $\alpha$ into two squares that we needed. 
\end{proof} Because of \cref{thm:bubble.diagrams.are.spanlike}, we can use \cref{thm:representable.lax.doubly.indexed.functors} to construct representable lax doubly indexed functors in the port-plugging doctrine. As an example of such a functor, consider the theory of (unlabelled) graphs and the system $\Sys{P}_{3} = \bullet \to \bullet \to \bullet \to \bullet$, with interface $\bullet \quad \bullet$ included as its endpoints, in this theory. This system $\Sys{P}_{3}$ represents paths of length $3$, and so we get a lax doubly indexed functor sending a graph with boundary to the set of paths of length $3$ from one boundary node to another in it. That this doubly indexed functor is lax and not taut reflects an important fact about the compositionality of graphs: when graphs are composed, new paths can appear which weren't possible before. @@ -2460,7 +2460,7 @@ \subsection{Further Reading in the port-plugging doctrine} There has been a lot of work done in the port-plugging doctrine, and we have hardly scratched the surface. -In its categorical guise, this doctrine was innaugurated with Brendan Fong's work on \emph{decorated cospans} in his thesis \cite{Fong:Thesis}. For examples of putting this theory to work, see \cite{baez2018compositional, baez2016compositional, baez2017compositional}.
This was later expanded by Kenny Courser and John Baez to a theory of \emph{structured cospans} in \cite{Baez.Courser:Structured.Cospans} (see \cite{Baez.Courser.Vasilakopolou:Structured.vs.Decorated} for a detailed comparison between these approaches). John Baez and his students have used these theories in a variety of settings; see for example \cite{Baez.Master:Open.Petri.Nets, Baez.Courser:Open.Markov.Processes}. diff --git a/book/C7-.tex b/book/C7-.tex index 2e0395e..d1d2a2f 100644 --- a/book/C7-.tex +++ b/book/C7-.tex @@ -629,7 +629,7 @@ \section{Bimodules}\label{sec.bimodules} In \cref{exc.tfae_c_sets} we check that $(\ema{s},\epsilon,\delta)$ really is a comonoid, that $(\alpha_1,\id)\colon\ema{s}\cof\ema{c}$ is a cofunctor, that the roundtrips between cartesian cofunctors and coalgebras are identities, and that these assignments are functorial. \item[$4\cong5$:]This is straightforward and was mentioned in \cref{def.coalgebra}. \item[$5\cong6$:]A right $\0$-module is in particular a polynomial $m\in\poly$ and a map $\rho\colon m\to m\tri\0$ such that $(m\tri\epsilon)\circ\rho=\id_m$. This implies $\rho$ is monic, which itself implies by \cref{prop.monics_in_poly} that $m$ must be constant since $m\tri\0$ is constant. This makes $m\tri\epsilon$ the identity, at which point $\rho$ must also be the identity. Conversely, for any set $M$, the corresponding constant polynomial is easily seen to make the diagrams in \eqref{eqn.bimod_right} commute. 
- \item[$5\cong7$:] By the adjunction in \cref{prop.adjoint_quadruple} and the fully faithful inclusion $\smset\to\poly$ of sets as constant polynmials, \cref{prop.ff_const_set_to_poly}, we have isomorphisms + \item[$5\cong7$:] By the adjunction in \cref{prop.adjoint_quadruple} and the fully faithful inclusion $\smset\to\poly$ of sets as constant polynomials, \cref{prop.ff_const_set_to_poly}, we have isomorphisms \[\poly(S\yon,\ema{c}\tri S\yon)\cong\smset(S,\ema{c}\tri S\yon\tri\1)=\smset(S,\ema{c}\tri S)\cong\poly(S,\ema{c}\tri S).\] One checks easily that if $S\yon\to\ema{c}\tri S\yon$ corresponds to $S\to\ema{c}\tri S$ under the above isomorphism, then one is a left module iff the other is. \item[$7\cong8$:] By \cref{prop.flipping_reps_lins} we have a natural isomorphism