\chapter{Link-Cut Trees}
\newcommand{\key}{\operatorname{key}}
\newcommand{\prio}{\operatorname{prio}}
In this chapter, we will learn about a dynamic data structure that allows us to speed up Dinic's algorithm even further: \emph{Link-Cut Trees}. This chapter is inspired by lecture notes by Richard Peng\footnote{CS7510 Graph Algorithms Fall 2019, Lecture 17 at \url{https://faculty.cc.gatech.edu/~rpeng/CS7510\_F19/}} and the presentation in a very nice book on data structures and network flows by Robert Tarjan \cite{tarjan1983data}.
\section{Overview}
\paragraph{Model.} We consider a directed graph $G=(V,E)$ that is undergoing \emph{updates} in the form of edge insertions and/or deletions. We number the different \emph{versions} of the graph $G^0, G^1, G^2, \dots$ such that $G^0$ is the initial input graph and $G^i$ is the graph after the first $i$ updates were applied. Such a graph is called a \emph{dynamic} graph.
In this lecture, we restrict ourselves to \emph{dynamic rooted forests}, that is, we assume that every $G^i$ forms a directed forest where each tree has a single root vertex that is reached by every other vertex in the tree. For simplicity, we assume that $G^0 = (V, \emptyset)$ is an empty graph.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.2]{./fig/InsertionDynamicTree_lectureDynamicTree.jpeg}
\caption{The $i^{th}$ version of $G$ is a rooted forest. Here, red vertices are roots and there are two trees. The $(i+1)^{th}$ version of $G$ differs from $G^i$ by a single edge that was inserted (the blue edge). Note that edge insertions are only valid if the tail of the edge was a root. In the right picture, the former root on the right side is turned into a normal node in the tree by the update.}
%\label{fig:my_label}
\end{figure}
\paragraph{The Interface.} Let us now describe the interface of our data structure that we call a link-cut tree. We want to support the following operations:
\begin{itemize}
\item $\textsc{Initialize}(G)$: Creates the data structure initially and returns a pointer to it. Each vertex is initialized to have an associated cost $cost(v)$ equal to $0$.
\item $\textsc{FindRoot}(v)$: Returns the root of vertex $v$.
\item $\textsc{AddCost}(v, \Delta)$: Add $\Delta$ to the cost of every vertex on the path from $v$ to the root vertex $\textsc{FindRoot}(v)$.
\item $\textsc{FindMin}(v)$: Returns the tuple $(w, cost(w))$ where $w$ is the first vertex of lowest cost on the path from $v$ to $\textsc{FindRoot}(v)$.
\item $\textsc{Link}(u,v)$: Links the two trees that contain $u$ and $v$ into a single tree by inserting the edge $(u,v)$. This assumes that $u$ and $v$ are initially in different trees and that $u$ is a root vertex.
\item $\textsc{Cut}(u,v)$: Cuts the edge $(u,v)$ from the graph which causes the subtree rooted at $u$ to become a tree and $u$ to become a root vertex. Assumes $(u,v)$ is in the current graph.
\end{itemize}
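To pin down the semantics, here is a naive reference implementation (a sketch in Python; the names and the representation are ours, not a library API) in which every operation simply walks the $v$-to-root path explicitly and hence costs $O(depth)$ time instead of the bounds we are aiming for. It is useful as a correctness baseline for the real data structure.
\begin{verbatim}
class NaiveLinkCutTree:
    """Reference implementation: every operation walks the v-to-root
    path explicitly, so it is slow (O(depth) per op) but unambiguous."""

    def __init__(self, vertices):                  # Initialize(G):
        self.parent = {v: None for v in vertices}  # empty forest,
        self.cost = {v: 0 for v in vertices}       # all costs 0

    def _root_path(self, v):              # the path from v to its root
        path = [v]
        while self.parent[path[-1]] is not None:
            path.append(self.parent[path[-1]])
        return path

    def find_root(self, v):               # FindRoot(v)
        return self._root_path(v)[-1]

    def add_cost(self, v, delta):         # AddCost(v, Delta)
        for w in self._root_path(v):
            self.cost[w] += delta

    def find_min(self, v):                # FindMin(v); on ties, min()
        path = self._root_path(v)         # keeps the first minimizer
        w = min(path, key=lambda x: self.cost[x])
        return w, self.cost[w]

    def link(self, u, v):                 # Link(u, v): u must be a root
        assert self.parent[u] is None
        self.parent[u] = v

    def cut(self, u, v):                  # Cut(u, v): (u, v) must exist
        assert self.parent[u] == v
        self.parent[u] = None
\end{verbatim}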
\paragraph{Main Result.} The following theorem is the main result of today's lecture.
\begin{theorem}\label{thm:mainTheoremLinkCutTree}
We can implement a link-cut tree such that any sequence of $m$ operations takes total expected time $O(m \log^2 n + |V|)$.
\end{theorem}
\section{Balanced Binary Search Trees: A Recap}
In \href{https://kyng.inf.ethz.ch/courses/APC21/Chapter_2.pdf}{Chapter 2 of the course Algorithms, Probability and Computing}, we introduced (balanced) binary search trees, and studied \emph{treaps}, which are a simple randomized approach to building a binary search tree with good performance, at least in expectation.
If you are not familiar with balanced binary search trees, we encourage you to read this chapter from the APC script.
That said, we will quickly recap the important properties of binary search trees and treaps.
If you're already familiar with treaps, you can skip ahead to the next section to see how we use them for building Link-Cut trees.
%
\paragraph{Binary Search Trees.} A binary search tree $\mathcal{T}$ is a data structure for keeping track of a set of \emph{items} where each item has a \emph{key} associated with it.
The data structure stores the items in a rooted binary tree with a node for each item, and maintains the property that, for each node $v$, all items in the \emph{left} subtree of $v$ have keys less than the key of $v$, and all items in the \emph{right} subtree of $v$ have keys greater than the key of $v$.
This is called the \emph{search property} of the tree, because it makes it simple to search through the tree to determine if it contains an item with a given key.
The binary search trees we studied in APC supported the following operations, while maintaining the search property:
\begin{description}
\item[insert:] We can add an item to the tree with a specified key.
\item[delete:] We can remove an item from the tree with a specified key.
\item[find:] Determine if the tree contains an item with a specified key.
\item[split:] Suppose the tree contains two items $v_1$ and $v_2$ with keys $k_1$ and $k_2$ where $k_1 < k_2$, and suppose no item in the tree has a key $k$ in the interval $(k_1,k_2)$. Then the split operation applied to these two items should split the current tree into two: one tree containing all items with keys in $(-\infty,k_1]$ and the other with all items in $[k_2,\infty)$.
\item[join:] Given two binary search trees, one with keys in the interval $(-\infty,k]$, the other with keys in the interval $(k,\infty)$, form a single binary search tree containing the items of both.
\end{description}
\paragraph{Search-Property-Preserving Tree Rotations.} In this chapter, we will use the fact that the search property still leaves considerable freedom in the shape of a binary tree. In fact, there is a classic operation that changes the tree while preserving the search property: tree rotations.
A tree rotation makes a local change to the tree by changing only a constant number of pointers around, while preserving the search property.
Given two items $v_1$ and $v_2$ such that $v_1$ is a child of $v_2$, a tree rotation applied to these two nodes will make $v_2$ the child of $v_1$, while moving their subtrees around to preserve the search property. This is shown in Figure~\ref{fig:binaryTreeRotation}.
When the rotation makes $v_2$ the right child of $v_1$ (because $v_2$ has the larger key), this is called a \emph{right rotation}, and when it makes $v_2$ the left child of $v_1$ (because $v_2$ has a smaller key), it is called a \emph{left rotation}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{./fig/TreeRotation_lectureDynamicTree.png}
\caption{Given two items $v_1$ and $v_2$ such that $v_1$ is a child of $v_2$, a tree rotation applied to these two nodes will make $v_2$ the child of $v_1$, while moving their subtrees around to preserve the search property. This figure shows a \emph{right rotation}.}
\label{fig:binaryTreeRotation}
\end{figure}
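In code, a rotation rewires only $O(1)$ pointers. The following minimal node type and rotation routine (our own sketch, reused by the later sketches in this chapter) make this concrete; note that this version does not yet maintain the $\Delta$-fields introduced later in this chapter, and making rotations \emph{field-preserving} is part of this week's exercises.
\begin{verbatim}
class Node:
    """A treap node; prio, dmin and dcost are used later on."""
    def __init__(self, vertex):
        self.vertex = vertex
        self.left = self.right = self.parent = None
        self.prio = 0
        self.dmin = self.dcost = 0

def rotate_up(v):
    """Rotate v above its parent p: a right rotation if v is a left
    child, a left rotation otherwise. The search property is kept."""
    p, g = v.parent, v.parent.parent
    if p.left is v:              # right rotation
        p.left = v.right
        if v.right is not None:
            v.right.parent = p
        v.right = p
    else:                        # left rotation
        p.right = v.left
        if v.left is not None:
            v.left.parent = p
        v.left = p
    p.parent, v.parent = v, g
    if g is not None:            # reattach v where p used to hang
        if g.left is p:
            g.left = v
        else:
            g.right = v
\end{verbatim}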
\paragraph{Balanced BSTs.} To implement the operations given above efficiently, one needs to ensure that the BST always has bounded depth (normally $O(\log n)$). This is typically achieved by performing tree rotations. In the rest of the chapter, we use a randomized strategy that ensures that the depth of the tree is $O(\log n)$ in expectation. The resulting data structure is often referred to as a \emph{treap}. We will not give the details here, but instead explicitly construct one in the next section and give a full (although brief) analysis.
\section{A Data Structure for Path Graphs}
Before we prove \Cref{thm:mainTheoremLinkCutTree} in its full generality, let us reason about implementing a link-cut tree data structure in a weaker setting: we assume that every version $G^i$ of $G$ is just a collection of rooted vertex-disjoint paths.
We will later build on the routines we develop for the path case to construct the data structure for the general tree case.
To distinguish the routines we develop for paths from those we develop for the general case, we will prefix the path-case routines with a ``P'', i.e. we denote our implementations of $\textsc{FindRoot}, \textsc{AddCost}, \textsc{FindMin}, \textsc{Link},$ and $\textsc{Cut}$ by $\textsc{PFindRoot}, \textsc{PAddCost}, \textsc{PFindMin}, \textsc{PLink},$ and $\textsc{PCut}$.
\paragraph{Representing Paths via Balanced Binary Search Trees.} It turns out that paths can be represented rather straightforwardly via balanced binary search trees, with a node/item for each vertex of the graph. For the sake of concreteness, we here use \emph{treaps} to represent paths\footnote{If you have not seen treaps before, don't worry, they are simple enough to understand from our application here.}.
Most other balanced binary search trees would also work.
Note that we now represent each rooted path with a tree, and this tree has its own root, which is usually \emph{not} the root of the path. To minimize confusion, we will refer to the root of the path as the \emph{path-root} and the root of the associated tree as the \emph{tree-root} (or \emph{subtree-root} for the root of a subtree).
In our earlier discussion of binary trees, we always assumed each item has a key. Instead, we will now let the key of each vertex correspond to its position along the path that contains it, so that the path-root (the last vertex of the path) receives the largest key.
We will make sure the treap respects the search property w.r.t. this set of keys/ordering, but we will not actually explicitly compute these keys or store them.
Note one important difference from the scenario of treaps-with-keys: when we have two paths with path-roots $r_1$ and $r_2$, we will allow ourselves to join these paths in two different ways: either so that $r_1$ or so that $r_2$ becomes the overall path-root.
Let us describe how to represent a path $P$ in $G$. First, we assign each vertex $v$ a \emph{priority} $\prio(v)$, chosen as a uniformly random integer from a large universe, say $[1, n^{100})$. We assume henceforth that $\prio(v) \neq \prio(w)$ for all $v \neq w \in V$.
Then, for each path $P$ in $G$, we store the vertices of $P$ in a binary tree, in particular a treap, $\mathcal{T}_{P}$ and enforce the following invariants:
\begin{itemize}
\item \textbf{Search-Property}: for all $v$, $left_{\mathcal{T}_{P}}(v)$ precedes $v$ on $P$ and $right_{\mathcal{T}_{P}}(v)$ appears later on $P$ than $v$. See the Figure~\ref{fig:PathAsTreap} below for an illustration.
\item \textbf{Heap-Order}: for each vertex $v \in P$, its parent $w = parent_{\mathcal{T}_{P}}(v)$ in $\mathcal{T}_{P}$ is either $NULL$ (if $v$ is the tree-root) or satisfies $\prio(v) > \prio(w)$.
\end{itemize}
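Under these two invariants, the treap for a path is uniquely determined by the priorities: the vertex of minimum priority becomes the tree-root, and the parts of the path before and after it recursively form the left and right subtrees. A small construction sketch (a hypothetical helper of ours, reusing the Node type from above):
\begin{verbatim}
import random

def build_treap(path, prio):
    """Build the unique treap for `path` (vertices in path order,
    path-root last): the vertex of minimum priority becomes the
    subtree-root, the prefix/suffix around it the left/right subtrees."""
    if not path:
        return None
    i = min(range(len(path)), key=lambda j: prio[path[j]])
    node = Node(path[i])
    node.prio = prio[path[i]]
    node.left = build_treap(path[:i], prio)
    node.right = build_treap(path[i + 1:], prio)
    for child in (node.left, node.right):
        if child is not None:
            child.parent = node
    return node

# Example: a path a -> b -> ... -> f with uniformly random priorities.
path = list("abcdef")
prio = {v: random.randrange(1, 10**9) for v in path}
root = build_treap(path, prio)
\end{verbatim}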
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{./fig/PathRepTreap_lectureDynamicTree.png}
\caption{In the upper half of the picture, the original path $P$ in $G$ is shown, along with the random numbers $\prio(v)$ for each $v$. The lower half depicts the resulting treap with vertices on the same vertical line as before.}
\label{fig:PathAsTreap}
\end{figure}
\paragraph{Depth of a Vertex in a Treap.} Let us next analyze the expected depth of a vertex $v$ in a treap $\mathcal{T}_{P}$ representing a path $P$. Let $P = \langle x_1, x_2, \dots, x_k = v, \dots, x_{|P|}\rangle$, i.e. $v$ is the $k^{th}$ vertex on the path $P$. Observe that a vertex $x_i$ with $i < k$ is an ancestor of $v$ in $\mathcal{T}_{P}$ if and only if no vertex in $\{x_{i+1}, x_{i+2}, \dots, x_k\}$ has received a smaller priority than $\prio(x_i)$. Since we sample $\prio(w)$ uniformly at random for each $w$, we have $\mathbb{P}[x_i \text{ ancestor of } v] = \frac{1}{k-i+1}$. The case $i > k$ is analogous and has $\mathbb{P}[x_i \text{ ancestor of } v] = \frac{1}{i-k+1}$. Letting $X_i$ be the indicator variable for the event that $x_i$ is an ancestor of $v$, it is straightforward to calculate the expected depth of $v$ in $\mathcal{T}_{P}$:
\[
\mathbb{E}[depth(v)] = \sum_{i \neq k} \mathbb{E}[X_i] = \sum_{i = 1}^{k-1}\frac{1}{k-i+1} + \sum_{i = k+1}^{|P|} \frac{1}{i-k+1} = H_k + H_{|P|-k+1} - 2 = O(\log |P|)
\]
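As a quick sanity check, the following hypothetical experiment estimates this expectation empirically for the special case $v = x_1$: the ancestors of $x_1$ are exactly the prefix minima of the priority sequence, so the average should come out to roughly $H_{|P|} - 1$.
\begin{verbatim}
import random

def avg_depth_of_first_vertex(n, trials=1000):
    """Estimate E[depth(x_1)] in a treap over a path with n vertices.
    x_i is an ancestor of x_1 iff prio(x_i) is a prefix minimum, so we
    only count prefix minima; the average is H_n - 1 = O(log n)."""
    total = 0
    for _ in range(trials):
        best = float("inf")
        for _ in range(n):
            p = random.random()
            if p < best:      # a new prefix minimum = one more ancestor
                best = p
                total += 1
        total -= 1            # do not count x_1 itself as an ancestor
    return total / trials
\end{verbatim}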
\paragraph{Implementing $\textsc{PFindRoot}(v)$.} From any vertex $v$, we simply follow the parent pointers in $\mathcal{T}_{P}$ until we reach the tree-root of $\mathcal{T}_{P}$. From there, we follow the $right_{\mathcal{T}_{P}}$ pointers until we reach the right-most node of the tree, which is the path-root, and return it. Since both $v$ and the path-root are at depth $O(\log n)$ in expectation, this operation runs in expected $O(\log n)$ time.
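As a code sketch over the same Node type:
\begin{verbatim}
def p_find_root(v):
    """PFindRoot: climb to the tree-root, then descend along right
    children; the right-most node of the treap is the path-root."""
    while v.parent is not None:   # up to the tree-root
        v = v.parent
    while v.right is not None:    # down to the right-most node
        v = v.right
    return v
\end{verbatim}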
\paragraph{Extra fields to help with $\textsc{PAddCost}(v, \Delta)$ and $\textsc{PFindMin}(v)$.} The key trick to do the Link-Cut tree operations efficiently is to store the change to subtrees instead of updating $cost(v)$ for each affected vertex.
To this end, we store two fields $\Delta cost(v)$ and $\Delta min(v)$ for every vertex $v$. We let $cost(v)$ be the cost of each vertex, and $mincost(v)$ denote the minimum cost of any descendant of $v$ in $\mathcal{T}_{P}$ (where we let $v$ be a descendant of itself).
A warning: the $mincost(v)$ value is the minimum cost in the treap-subtree of $v$; \emph{not} the minimum cost on the path between $v$ and the path-root, which can be different.
Note also that we do not explicitly store $cost(v)$ and $mincost(v)$; they are only represented implicitly via the two fields.
Then, we maintain for each $v$
\[
\Delta cost(v) = cost(v) - mincost(v)
\]
\[
\Delta min(v) = \begin{cases}
mincost(v) & parent_{\mathcal{T}_{P}}(v) = NULL, \\
mincost(v) - mincost(parent_{\mathcal{T}_{P}}(v)) & otherwise\end{cases}
\]
With a bit of work, we can see that with these definitions, we obtain $mincost(v) = \sum_{w \in \mathcal{T}_{P}[v]} \Delta min(w)$ where $\mathcal{T}_{P}[v]$ is the $v$-to-tree-root path in $\mathcal{T}_{P}$. We can then recover $cost(v) = mincost(v) + \Delta cost(v)$. Henceforth, we say that an operation/manipulation of the tree is \emph{field-preserving} if the fields $\Delta min(v)$ and $\Delta cost(v)$ satisfy the equations above.
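In code, recovering the implicit values is a short walk up the tree (a sketch; \texttt{dmin} and \texttt{dcost} denote $\Delta min$ and $\Delta cost$ on our Node type):
\begin{verbatim}
def mincost(v):
    """mincost(v) = sum of dmin over the v-to-tree-root path
    (the telescoping sum from the definition above)."""
    total = 0
    while v is not None:
        total += v.dmin
        v = v.parent
    return total

def cost(v):
    """cost(v) = mincost(v) + dcost(v)."""
    return mincost(v) + v.dcost
\end{verbatim}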
\paragraph{Implementing $\textsc{PAddCost}(v, \Delta)$ and $\textsc{PFindMin}(v)$ -- the easy way.}
First, we discuss implementations of $\textsc{PAddCost}$ and $\textsc{PFindMin}$ that rely on the fact that $\textsc{PLink}$ and $\textsc{PCut}$ are field-preserving.
$\textsc{PAddCost}(v, \Delta)$ is very easy: First, consider the case when $v$ has no predecessor on the path. If this is the case, we can simply find the tree-root $r$ of the tree $\mathcal{T}_{P}$ containing $v$ and increase $\Delta min(r)$ by $\Delta$ (recall that at the tree-root, $\Delta min(r) = mincost(r)$). This implicitly increases the cost of all nodes in the tree by $\Delta$.
Second, consider the case when $v$ has a predecessor $u$ in the path (we can store these explicitly, or we can search for it in the tree). Call $\textsc{PCut}(u,v)$ -- now $v$ no longer has a predecessor and we can proceed as before. Finally, call $\textsc{PLink}(u,v)$. That's it.
Implementing $\textsc{PFindMin}(v)$ is also very easy: First, consider the case when $v$ has no predecessor on the path. Again, we can find the tree-root $r$ of the tree $\mathcal{T}_{P}$ containing $v$.
By definition, $mincost(r)$ is the minimum cost across the whole tree, and hence across the path between $v$ and the path-root. We can also find the first node attaining this minimum: starting from the tree-root, go to the left child whenever it has $\Delta min = 0$; otherwise, stop if the current node has $\Delta cost = 0$ (it is the first minimizer); otherwise, go to the right child.
Now, to deal with the case where $v$ has a predecessor, we do the same as for $\textsc{PAddCost}$: find the predecessor $u$, call $\textsc{PCut}(u,v)$, then find the minimizer, and finally call $\textsc{PLink}(u,v)$.
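The descent used above can be written as follows (a sketch; it assumes the fields satisfy the field-preserving equations):
\begin{verbatim}
def find_min_node(root):
    """Return the first (left-most) vertex attaining the minimum cost.
    At each node: the left child attains the subtree minimum iff its
    dmin is 0; the node itself attains it iff its dcost is 0."""
    v = root
    while True:
        if v.left is not None and v.left.dmin == 0:
            v = v.left                # first minimizer is further left
        elif v.dcost == 0:
            return v                  # v itself is the first minimizer
        else:
            v = v.right               # minimizer lies to the right
\end{verbatim}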
\paragraph{Implementing $\textsc{PAddCost}(v, \Delta)$ and $\textsc{PFindMin}(v)$ -- the hard way.}
We can also implement $\textsc{PAddCost}(v, \Delta)$ and $\textsc{PFindMin}(v)$ without relying on $\textsc{PLink}$ and $\textsc{PCut}$, which is more efficient, but also more unpleasant\footnote{You don't have to know these more complicated approaches for the exam, but we include them for the interested reader.}.
To implement the operation $\textsc{PFindMin}(v)$ more efficiently, consider the sequence of nodes on the path in the tree $\mathcal{T}_{P}$ (not on the path $P$) between $v$ and the path-root. Let $v = x_1, x_2, \ldots, x_k = r$ be the nodes leading up to the tree-root (assuming $v$ lies to the left of the tree-root), and let $y_1, \ldots, y_l$ be the remaining nodes on the tree path down to the path-root.
Observe that the minimizer is either one of these nodes (which we can check using their fields), or it lies in the right subtree of some $x_i$ that was entered from its left child, or in a subtree of some $y_j$. So we search these subtrees for their minimizers (descending along children with $\Delta min = 0$) and pick the best one. In expectation, there are only $O(\log n)$ such subtrees.
Next, let us discuss the implementation of the operation $\textsc{PAddCost}(v, \Delta)$, given in pseudo-code below. The first for-loop walks down from the tree-root to $v$ and increases the cost of every vertex on the path from $v$ to the path-root: whenever the walk continues to the left child of $x_i$ (or reaches $v$ itself), the vertex $x_i$ lies on that path, so we add $\Delta$ to its cost via the $\Delta cost(\cdot)$ field, and we increase all costs in its right subtree wholesale by adding $\Delta$ to the $\Delta min(\cdot)$ field of the right subtree's root. It is easy to check that afterwards all costs are as desired.
\begin{algorithm}
\SetAlgoLined
\DontPrintSemicolon
\SetKwRepeat{Do}{do}{while}
Store the vertices on the tree path in $\mathcal{T}_{P}$ from $v$ to the tree-root, i.e. $\langle v = x_1, x_2, \ldots, x_k\rangle$ where $x_k$ is the tree-root of $\mathcal{T}_{P}$.\\
\For{$i = k$ down to $1$}{
\tcp{$x_i$ lies on the $v$-to-path-root path iff we continue to its left child (or $x_i = v$)}
\If{$i = 1$ OR $x_{i-1} = left_{\mathcal{T}_{P}}(x_i)$}{
$\Delta cost(x_i) \gets \Delta cost(x_i) + \Delta$\;
\If{$(r \gets right_{\mathcal{T}_{P}}(x_i)) \neq NULL$}{
$\Delta min(r) \gets \Delta min(r) + \Delta$
}
}
}
\For{$i = 1$ up to $k$}{
$\textit{minval} \gets \Delta cost(x_i)$.\\
\lForEach{$w \in \{left_{\mathcal{T}_{P}}(x_i), right_{\mathcal{T}_{P}}(x_i)\}, w \neq NULL$}{
$\textit{minval} \gets \min\{ \textit{minval} , \Delta min(w)\}$.
}
$\Delta min(x_i) \gets \Delta min(x_i) + \textit{minval}$.\\
$\Delta cost(x_i) \gets \Delta cost(x_i) - \textit{minval}$.\\
\lForEach{$w \in \{left_{\mathcal{T}_{P}}(x_i), right_{\mathcal{T}_{P}}(x_i) \}, w \neq NULL$}{
$\Delta min(w) \gets \Delta min(w) - \textit{minval} $.
}
}
\caption{$\textsc{PAddCost}(v, \Delta)$}
\end{algorithm}
After the first for-loop, the cost values are correct, but we might now be in the situation that $\Delta min(w)$ and/or $\Delta cost(w)$ are negative for some vertices $w$, and the invariants relating the $\Delta min(\cdot)$ and $\Delta cost(\cdot)$ fields to the true $cost$ and $mincost$ values may be violated. The second for-loop recovers these invariants: at each vertex on the tree path from $v$ to the tree-root, processed bottom-up, it first computes the correct minimum of the subtree again (using the helper variable $\textit{minval}$) and then adjusts its own fields and the $\Delta min(\cdot)$ fields of its children. Since we argued that $v$ is at expected depth $O(\log n)$, all of this can be done in expected $O(\log n)$ time.
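For concreteness, here is the same procedure as a Python sketch over our Node type; the first loop updates the costs, the second restores the field invariants bottom-up.
\begin{verbatim}
def p_add_cost(v, delta):
    """PAddCost, the hard way: add delta to every vertex on the path
    from v to the path-root, touching only O(depth(v)) fields."""
    xs = [v]                              # v and its ancestors, bottom-up
    while xs[-1].parent is not None:
        xs.append(xs[-1].parent)
    # First loop: walk down from the tree-root to v.
    for i in range(len(xs) - 1, -1, -1):
        x = xs[i]
        if i == 0 or xs[i - 1] is x.left:  # x lies right of (or is) v
            x.dcost += delta               # raise cost(x) ...
            if x.right is not None:
                x.right.dmin += delta      # ... and its whole right subtree
    # Second loop: recompute the subtree minima bottom-up.
    for x in xs:
        minval = x.dcost
        for w in (x.left, x.right):
            if w is not None:
                minval = min(minval, w.dmin)
        x.dmin += minval
        x.dcost -= minval
        for w in (x.left, x.right):
            if w is not None:
                w.dmin -= minval
\end{verbatim}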
\paragraph{Implementing $\textsc{PCut}(u,v)$.} Let us first assume that we have a dummy node $d_0$ with $\prio(d_0) = 0$ in the vertex set. The trick is to first treat the operation as splitting the edge $(u,v)$ into $(u,d_0)$ and $(d_0,v)$ by inserting the vertex $d_0$ between $u$ and $v$ in the tree $\mathcal{T}_{P}$ as a leaf (this is always possible: since $u$ and $v$ are adjacent in the ordering, either $u$ has no right child or $v$ has no left child). Then, we can do standard binary tree rotations to re-establish the heap-order invariant (see \Cref{fig:PCutRotation}). It is not hard to see that after $O(\log n)$ tree rotations in expectation, the heap-order is re-established and $d_0$ is the new tree-root of $\mathcal{T}_{P}$. It remains to remove $d_0$ and make $left_{\mathcal{T}_{P}}(d_0)$ and $right_{\mathcal{T}_{P}}(d_0)$ tree-roots.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.2]{./fig/PathCutOperation_lectureDynamicTree.jpeg}
\caption{For $\textsc{PCut}(u,v)$, we insert a vertex $d_0$ as a leaf of either $u$ or $v$ to formally split $(u,v)$ into $(u,d_0)$ and $(d_0,v)$ (shown on the left). While this preserves the Search-Property, it violates the Heap-Order. Thus, we use tree rotations (shown on the right) to push $d_0$ to the top of the tree $\mathcal{T}_{P}$ (the arrow between the tree rotations should point in both ways).}
\label{fig:PCutRotation}
\end{figure}
In the exercises for this week, we ask you to come up with pseudo-code for \emph{field-preserving} tree rotations that still execute each rotation in $O(1)$ time. Given this routine, we can then show that the implementation of $\textsc{PCut}(u,v)$ is field-preserving overall. Since the number of rotations is bounded by $O(\max\{depth(u), depth(v)\})$, the entire procedure runs in $O(\log n)$ expected time.
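Ignoring the $\Delta$-fields for clarity (making rotations field-preserving is exactly the exercise mentioned above), a sketch of the cut, reusing \texttt{rotate\_up} from earlier:
\begin{verbatim}
def p_cut(u, v):
    """PCut sketch: insert a dummy node with minimum priority between u
    and v, rotate it to the tree-root, and split the treap there."""
    d = Node(None)
    d.prio = 0                        # smaller than every real priority
    if u.right is None:               # u, v are in-order neighbours, so
        u.right, d.parent = d, u      # one of these two spots is free
    else:
        v.left, d.parent = d, v
    while d.parent is not None:       # restore the heap-order:
        rotate_up(d)                  # d bubbles up to the tree-root
    for half in (d.left, d.right):    # the halves become separate treaps
        if half is not None:
            half.parent = None
    return d.left, d.right
\end{verbatim}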
%To ensure that the fields are correctly adjusted, we can make the $\Delta min(\cdot)$ values of the two vertices that are rotated equal to zero while ensuring that the tree is still \emph{field-preserving} by changing the $\Delta min(\cdot)$ fields of the (at most 3) subtree-roots of the subtrees to be rotated, and adapting the $\Delta cost(\cdot)$ fields of the two vertices to be rotated. Then, after the rotation, it is not hard to see that the tree is still \emph{field-preserving} and that the procedure applied in the second for-loop of the operation $\textsc{PAddCost}(v, \Delta)$ can then just be applied to the two rotated vertices and the subtree-roots of their subtrees. This ensures that the fields are maintained correctly. All of these operations can be implemented in $O(1)$ time per tree rotation.
%Note that $\textsc{PCut}$ is basically just the treap split operation.
\paragraph{Implementing $\textsc{PLink}(u,v)$.} Implementing $\textsc{PLink}(u,v)$ can be done by reversing the process described above: we use a dummy vertex $d_0$ with priority $0$ and add it as the right child of $u$ (recall that $u$, being the path-root of its path, is the right-most node of its treap). We use tree rotations to obtain heap-order again. Now, we can make the tree-root of the treap containing $v$ the right child of $d_0$. Note that the resulting tree has both the search and the heap-order properties.
It remains to get rid of $d_0$. To achieve this, we change its priority to $\infty$ (or say $n^{100}$), and then use tree rotations to obtain heap-order again. Since $d_0$ is the only vertex that violates the heap-order, this step results in $d_0$ being rotated downwards until it is a leaf. Then we can remove $d_0$ entirely.
Again, it is not hard to see that the implementation runs in $O(\log n)$ expected time.
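A matching sketch for the link (again ignoring the cost fields); here \texttt{v\_root} denotes the tree-root of the treap containing $v$, and $u$ is the right-most node of its own treap:
\begin{verbatim}
import math

def p_link(u, v_root):
    """PLink sketch: a dummy node glues the two treaps together
    and is then rotated back down to a leaf and removed."""
    d = Node(None)
    d.prio = 0
    u.right, d.parent = d, u          # u is right-most: this spot is free
    while d.parent is not None:       # rotate d up to the tree-root
        rotate_up(d)
    d.right, v_root.parent = v_root, d    # append v's path after u's
    d.prio = math.inf                 # now d violates heap-order downwards
    while d.left is not None or d.right is not None:
        child = d.left                # rotate the smaller-priority child
        if child is None or (d.right is not None
                             and d.right.prio < child.prio):
            child = d.right
        rotate_up(child)              # ... pushing d down to a leaf
    p = d.parent                      # finally, unlink the dummy leaf
    if p.left is d:
        p.left = None
    else:
        p.right = None
\end{verbatim}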
\paragraph{Notes.} The operations $\textsc{PLink}/\textsc{PCut}$ can be implemented using almost any balanced binary search tree (in particular the ones you have seen in your first courses on data structures). Thus, it is not hard to obtain an $O(\log n)$ worst-case time bound for all operations discussed above.
\section{Implementing Trees via Paths}
We now use the result from last section as a black box to obtain \Cref{thm:mainTheoremLinkCutTree}.
\paragraph{Path Decomposition.} For each rooted tree $T$, the idea is to decompose $T$ into paths. In particular, we decompose each $T$ into a collection of vertex-disjoint paths $P_1, P_2, \dots, P_k$ such that each internal vertex $v$ in $T$ has exactly one incoming edge in some $P_i$. We call the edges on some $P_i$ \emph{solid} edges and say that the other edges are \emph{dashed}.
We maintain the paths $P_1, P_2, \dots, P_k$ using the data structure described in the last section. To avoid confusion, we use the prefix $P$ when we invoke operations of the path data structure, for example $\textsc{PFindRoot}(v)$ finds the root of $v$ in the path graph $P_i$ where $v \in P_i$.
We no longer need to think about the balanced binary trees that are used internally to represent each $P_i$. Instead, in this section, we use path-root to refer to the root of a path $P_i$ and we use tree root (or just root) to refer to a root of one of the trees in our given collection of rooted trees.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.20]{./fig/HeavyLightDecomposition_lectureDynamicTree.jpeg}
\caption{The dashed edges are the edges not on any path $P_i$. The collection of (non-empty) paths $P_1, P_2, \dots, P_k$ can be seen to be the maximal path segments of solid edges.}
\end{figure}
\paragraph{The $\textsc{Expose}(v)$ Operation.} We start by discussing the most important operation of the data structure that will be used by all other operations internally: the operation $\textsc{Expose}(v)$. This operation flips solid/dashed edges such that after the procedure the path from $v$ to its tree root in $G$ is solid (possibly as a subpath in the path collection). Below you can find an implementation of the procedure $\textsc{Expose}(v)$.
\begin{algorithm}
\SetAlgoLined
$w \gets v$.\\
\While{$(w' \gets parent_G(\textsc{PFindRoot}(w))) \neq NULL$}{
Invoke $\textsc{PCut}(z,w')$ for the solid edge $(z,w')$ incoming to $w'$.\\
$\textsc{PLink}(\textsc{PFindRoot}(w),w')$.\\
$w \gets w'$.\\
}
\caption{\textsc{Expose}(v)}
\end{algorithm}
\paragraph{Implementing Operations via $\textsc{Expose}(v)$.} We can now implement link-cut tree operations by invoking $\textsc{Expose}(v)$ and then forwarding the operation to the path data structure.
\begin{algorithm}[H]
\SetAlgoLined
$\textsc{Expose}(v)$; $\textsc{PAddCost}(v, \Delta)$
\caption{$\textsc{AddCost}(v, \Delta)$}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
$\textsc{Expose}(v)$; \Return $\textsc{PFindMin}(v)$
\caption{\textsc{FindMin}(v)}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
$parent_G(u) \gets v$\;
\lIf{$v$ has an incoming solid edge $(z,v)$}{
$\textsc{PCut}(z,v)$
}
$\textsc{PLink}(u,v)$
\caption{\textsc{Link}(u,v)}
\end{algorithm}
\begin{algorithm}[H]
\SetAlgoLined
$\textsc{Expose}(u)$; $\textsc{PCut}(u,v)$\;
\lIf{$v$ has another incoming edge $(z,v)$}{$\textsc{PLink}(z,v)$}
\caption{\textsc{Cut}(u,v)}
\end{algorithm}
\paragraph{Analysis.} All of the operations above can be implemented using at most a single $\textsc{Expose}(\cdot)$ operation plus $O(1)$ operations on paths. Since path operations can be executed efficiently, our main task is to bound the run-time of $\textsc{Expose}(\cdot)$. More precisely, since each iteration of the while-loop in $\textsc{Expose}(\cdot)$ runs in expected $O(\log n)$ time, it suffices to bound the total number of while-loop iterations over all invocations of $\textsc{Expose}(\cdot)$.
To this end, we introduce a dichotomy over the edges of $G$. We let $parent_G(v)$ denote the unique parent of $v$ in $G$ and let $size_G(v)$ denote the number of vertices in the subtree rooted at $v$ (including $v$).
\begin{definition}
We say that an edge $(u,v)$ is \emph{heavy} if $size_G(u) > size_G(v)/2$. Otherwise, we say $(u,v)$ is \emph{light}.
\end{definition}
It can now be seen that the number of \emph{light} edges on the $v$-to-root path for any $v$ is at most $\lg n$: every time we follow a light edge $(w,w')$, i.e. an edge with $size_G(w') \geq 2\, size_G(w)$, the size of the current subtree at least doubles, so after following more than $\lg n$ such edges we would have more than $2^{\lg n} = n$ vertices in the graph, a contradiction.
Thus, when $\textsc{Expose}(\cdot)$ runs for many iterations, it must turn many \emph{heavy} edges \emph{solid}. Note also that each vertex has at most one incoming heavy edge (two heavy children would together exceed the size of the parent's subtree), so when an iteration makes a heavy edge solid, the solid edge it displaces (entering the same vertex) must be light.
\begin{claim}
Each update can only increase the number of dashed, heavy edges by $O(\log n)$.
\end{claim}
\begin{proof}
First, observe that each invocation of $\textsc{Expose}(\cdot)$ turns at most $\lg n$ heavy edges from solid to dashed: a heavy solid edge $(z,w')$ only becomes dashed when the exposed path enters $w'$ via a different edge, and since $w'$ has at most one incoming heavy edge, this different edge must be light; there are at most $\lg n$ light edges on the exposed path.
The only other operations that can create additional dashed heavy edges are $\textsc{Link}(u,v)$ and $\textsc{Cut}(u,v)$, since they can toggle edges between heavy and light. For $\textsc{Link}(u,v)$, we observe that only the vertices on the $v$-to-root path increase their sizes. Since there are at most $\lg n$ light edges on this path that can turn heavy, this increases the number of dashed heavy edges by at most $\lg n$.
The case of $\textsc{Cut}(u,v)$ is almost analogous: only the vertices on the $v$-to-root path decrease their sizes, which can cause heavy edges on this path to become light; in exchange, an edge from a sibling subtree of such a vertex might become heavy. But there can be at most $\lg n$ such new heavy edges: if a path vertex gains a new heavy incoming edge, more than half of its subtree lies in the corresponding sibling subtree, so the subtree sizes of the path vertices gaining new heavy edges more than double as we move up the path; with more than $\lg n$ of them, the size of the tree would exceed $n$, a contradiction.
\end{proof}
Thus, after $m$ updates, we have created at most $O(m \log n)$ dashed heavy edges. Each iteration of $\textsc{Expose}(\cdot)$ either traverses a dashed light edge (at most $\lg n$ of them per invocation) or consumes a dashed heavy edge by turning it solid.
We conclude that after $m$ updates, the while-loop in $\textsc{Expose}(\cdot)$ runs for at most $O(m \log n)$ iterations, \emph{in total}, i.e. summed across the updates so far. Each iteration can be implemented in $O(\log n)$ expected time. This dominates the total running time and proves \Cref{thm:mainTheoremLinkCutTree}.
\section{Fast Blocking Flow via Dynamic Trees}
Recall from \Cref{sec:findBlockingFlows} that we can compute a blocking flow in a level graph $L$ from a vertex $s$ to $t$ by repeatedly running $\textsc{DFS}(s)$: whenever an $s$-$t$ path is found, we route flow along it; once no such path exists, we have found a blocking flow.
We can now speed up this procedure by storing the DFS-tree explicitly as a dynamic tree. To simplify the exposition, we transform $L$ to obtain a graph $\textsc{Transform}(L)$ that has capacities on vertices instead of edges. To obtain $\textsc{Transform}(L)$, we simply split each edge in $L$ and assign the edge capacity to the mid-point vertex, while assigning capacity $\infty$ to all vertices that were already in $L$. This yields an equivalent flow problem with at most $O(m)$ vertices and edges; a code sketch follows the figure below.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.2]{./fig/TransformToVertCaps_lectureDynamicTree.jpeg}
\caption{Each edge $(u,v)$ with capacity $c(u,v)$ is split into two edges $(u,m)$ and $(m,v)$. The capacity is then on the vertex $m$.}
%\label{fig:my_label}
\end{figure}
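A sketch of $\textsc{Transform}$ (representation assumptions ours: edges as a list of pairs, capacities as a dictionary):
\begin{verbatim}
def transform(edges, cap):
    """Split each edge (u, v) of L at a fresh mid-point vertex m that
    carries the edge capacity c(u, v); original vertices get capacity
    infinity. The result is an equivalent vertex-capacitated instance."""
    INF = float("inf")
    vertex_cap = {}
    new_edges = []
    for (u, v) in edges:
        vertex_cap.setdefault(u, INF)
        vertex_cap.setdefault(v, INF)
        m = ("mid", u, v)             # a fresh vertex naming the split edge
        vertex_cap[m] = cap[(u, v)]
        new_edges.append((u, m))
        new_edges.append((m, v))
    return new_edges, vertex_cap
\end{verbatim}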
Finally, we give the new pseudo-code for the blocking flow procedure below.
\begin{algorithm}[H]
\SetAlgoLined
$H \leftarrow \textsc{Transform}(L)$\;
$\textsc{LC-Tree} \gets \textsc{Initialize}(H)$\;
\While{$s \in H$}{
$u \gets \textsc{LC-Tree}.\textsc{FindRoot}(s)$\;
\eIf{there is an edge $(u,v) \in H$}{
$\textsc{LC-Tree}.\textsc{Link}(u,v)$\;
\If{$v = t$}{
$(w, c) \gets \textsc{LC-Tree}.\textsc{FindMin}(s)$\;
$\textsc{LC-Tree}.\textsc{AddCost}(s, - c)$\;
Remove $w$ and all its incident edges from $H$ and $\textsc{LC-Tree}$ (via $\textsc{Cut}(\cdot)$).
}
}{
Remove $u$ and all its incident edges from $H$ and $\textsc{LC-Tree}$ (via $\textsc{Cut}(\cdot)$).
}
}
Construct $\ff$ by setting, for each edge $(u,v)$ of $L$ with mid-point $m$ in $\textsc{Transform}(L)$, the flow equal to $c(m)$ minus the cost of $m$ just before it was removed from $H$ (or minus its final cost, if $m$ was never removed).
\caption{\textsc{FindBlockingFlow}(s, t, L)}
\end{algorithm}
\begin{claim}
The running time of $\textsc{FindBlockingFlow}(s, t, L)$ is $O(m \log^2 n + |V|)$.
\end{claim}
\begin{proof}
Each edge $(u,v)$ in the graph $\textsc{Transform}(L)$ enters the link-cut tree at most once (we only invoke $\textsc{Cut}(u,v)$ when we delete $(u,v)$ from $H$).
Next, observe that the first $\mathbf{if}$-case requires $O(1 + \#edgesDeletedFromH)$ many tree operations, and so does the $\mathbf{else}$-case.
Since each edge is linked at most once, each edge is deleted from $H$ at most once, and each vertex is removed at most once, there are $O(m)$ while-loop iterations and a total of $O(m)$ tree operations over all iterations. Since each link-cut tree operation takes amortized expected time $O(\log^2 n)$, we obtain the bound on the total running time.
\end{proof}
The correctness of this algorithm follows almost immediately using that the level graph $L$ (and therefore $\textsc{Transform}(L)$) is an acyclic graph.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: