% thesis-ch04-ibid.tex
\chapter{Incremental Bidirectional Search}
\label{chap:ibid}
In Chapter~\ref{chap:lazysp},
we introduced the Lazy Shortest Path (LazySP) algorithm,
which addresses domains with expensive edge weight functions
by interleaving the evaluation phase with a sequence of
search queries using an existing pathfinding algorithm.
Because this inner search is conducted many times,
its efficiency is paramount.
Most approaches for reducing the computational cost of pathfinding
attempt to focus the search on a smaller subset of the graph.
We consider three classes of such techniques
(Figure~\ref{fig:ibid:intro-focus}):
\begin{marginfigure}[5cm]%
\centering%
\subfloat[Bidirectional search.
A path is found near the intersection of the two searches.]{%
\centering %
\includegraphics{build/ibid-intro-focus-bidirectional} %
}%
\subfloat[Heuristic search.
The search is biased towards the destination vertex.]{%
\centering %
\includegraphics{build/ibid-intro-focus-heuristic} %
}%
\subfloat[Incremental search.
After a straight-line path is found for the first planning episode,
a new obstacle appears; only the shaded region must be
recomputed to find the new shortest path.]{%
\centering %
\includegraphics{build/ibid-intro-focus-incremental} %
}%
\vspace{0.2cm}
\caption{Illustrations of the three focusing techniques considered
on a spatial pathfinding problem.}%
\label{fig:ibid:intro-focus}
\end{marginfigure}
\begin{enumerate}
\item \emph{Bidirectional Search} -- A bidirectional algorithm
conducts two concurrent searches,
one from the start vertex $s$,
and the other from the destination vertex $t$.
The depth to which each search must descend is typically
a factor of two shallower than that of a unidirectional search.
Such searches are therefore well-suited to graphs with larger
branching factors.
For example,
for a roadmap graph,
the number of vertices is polynomial in the depth
and exponential in the dimensionality of the space.
\item \emph{Heuristic Search} -- A heuristic-informed algorithm
exploits a destination-directed heuristic function over vertices to bias
exploration in the direction of the destination vertex.
A strong and admissible such heuristic can drastically speed the
search by reducing the number of vertices that must be expanded.
\item \emph{Incremental Search} -- An incremental algorithm
is applicable to the dynamic pathfinding problem in which
a sequence of search episodes is conducted,
with the edge weight function changing (partially) between episodes.
An algorithm is termed \emph{incremental} because
it incrementally updates only the portions of its data structures
that are affected by the edge weight changes.
\marginnote{Note that vertex/edge insertions and deletions
can be modeled as edge weight changes via infinite weights.}
\end{enumerate}
The principal contribution of this chapter is IBiD,
an algorithm which combines these three techniques into a single
algorithm
(Table~\ref{tab:ibid:alg-overview}).
While originally motivated for use with LazySP,
IBiD is broadly applicable to incremental search problems.
\paragraph{Chapter outline.}
Finding shortest paths on graphs is a very extensively studied problem.
This chapter begins with a comprehensive review of
methods which solve shortest path problems by computing
distance functions.
After defining the problem in Section~\ref{subsec:ibid-probdef},
we review distance functions and unidirectional methods
in Section~\ref{sec:ibid:distance-functions}.
Section~\ref{sec:ibid:bidirectional} reviews
bidirectional search methods,
and Section~\ref{sec:ibid:incremental} reviews incremental search.
Section~\ref{sef:ibid:ibid} introduces IBiD,
an algorithm which combines bidirectional and incremental search.
In Section~\ref{sec:ibid:heuristic},
we review heuristic search methods,
and discuss a heuristic-informed generalization of IBiD.
The chapter concludes with experimental results and implementation
notes.
\begin{table}
\centering
\begin{tabular}{ccc}
\toprule
& Unidirectional & Bidirectional \\
\midrule
\addlinespace[0.2em]
Complete
& Dijkstra \citep{dijkstra1959anote}
& Bidirectional Dijkstra \citep{luby1989bidijk} \\
\addlinespace[-0.2em]
\emph{(Heuristic)}
& \emph{A* \citep{hart1968astar}}
& \emph{Bidirectional A* \citep{ikeda1994betterroutes}} \\
\addlinespace[0.3em]
Incremental
& DynamicSWSF-FP \citep{ramalingam1996dynamicswsffp}
& {IBiD} \\
\addlinespace[-0.2em]
\emph{(Heuristic)}
& \emph{Lifelong Planning A* \citep{koenig2004lpastar}}
& \emph{Heuristic IBiD} \\
\addlinespace[0.2em]
\bottomrule
\end{tabular}
\caption{
IBiD generalizes both the heuristic-informed
bidirectional Dijkstra's search \citep{goldberg2005spexternalmemory}
and DynamicSWSF-FP \citep{ramalingam1996dynamicswsffp}.
There are a great many algorithms that we could place in each cell;
we provide only a representative choice in each.}
\label{tab:ibid:alg-overview}
\end{table}
\section{Problem Definition}
\label{subsec:ibid-probdef}
The \emph{shortest path problem} on graphs has been extensively
studied over the past six decades.
Consider a directed graph $G = (V,E)$ and accompanying edge weight
function $w : E \rightarrow \mathbb{R}$.
Note that we allow graphs with multiple edges connecting any pair
of vertices,
as well as graphs with edges to and from the same vertex.
The length of a path is equal to the sum of the weights of its
constituent edges.
\marginnote{%
The single-pair problem is also called the \emph{two-terminal} or
\emph{point-to-point} problem.}
We consider the \emph{single-pair shortest path} (SPSP) problem,
in which a path of minimal length is sought
between distinct start and destination vertices
$s,t \in V$.
(We can also handle planning problems with multiple
start/destination configurations as an SPSP problem
by introducing virtual vertices and edges connecting to each candidate
start/goal.)
Our review is applicable to problems where $w$ is everywhere finite.
The algorithms that we consider do not distinguish between non-existent
and infinite-weight edges,
and so will return that no path exists in the case where shortest
paths contain infinite-weight edges.
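To make the problem setting concrete, the following is a minimal sketch of how such a weighted directed graph and its path lengths might be represented. The vertex names and weights are illustrative only; the adjacency-list representation permits parallel edges and self-loops, as the problem definition allows.

```python
# A directed graph as an adjacency list of (successor, weight) pairs.
# Parallel edges and self-loops are permitted.
graph = {
    's': [('a', 2.0), ('b', 5.0)],
    'a': [('b', 1.0), ('t', 6.0)],
    'b': [('t', 1.0)],
    't': [],
}

def path_length(path, graph):
    """Length of a path: the sum of its constituent edge weights.

    For simplicity a path is given as a vertex sequence, taking the
    cheapest parallel edge between each consecutive vertex pair.
    """
    total = 0.0
    for u, v in zip(path, path[1:]):
        total += min(w for (x, w) in graph[u] if x == v)
    return total
```

For instance, the path $s \rightarrow a \rightarrow b \rightarrow t$ in this toy graph has length $2 + 1 + 1 = 4$.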
\paragraph{An example problem.}
We will carry forward an illustrative example problem from
the public dataset of the 9th DIMACS Implementation Challenge
\citep{demetrescuetal2006dimacs9}
(Northeast USA)
comprising an approximate road network,
using transit time as the edge weight function
(Figure~\ref{fig:ibid:example-intro}).
In this way,
a shortest path between a pair of start and destination locations
minimizes the total transit time between them.
More details about the graph are given in the experimental section
in Section~\ref{sec:ibid:experiments}.
The graph is sufficiently large that minimizing search computation
is important,
and the available transit time heuristic is broadly useful
but not perfectly strong.
The road routing domain is of great interest,
and it is commonly used for benchmarking and easy to visualize.
\begin{marginfigure}%
\centering%
\begin{tikzpicture}
\tikzset{>=latex} % arrow heads
\node[inner sep=0pt,anchor=south west] {%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-intro.png}};
\coordinate (s) at (1.75,0.9);
\coordinate (t) at (3.93,2.8);
\node (slab) at (2.5,0.6) {$s$};
\node (tlab) at (4.0,1.5) {$t$};
\draw[->,thick] (slab) -- (s);
\draw[->,thick] (tlab) -- (t);
\end{tikzpicture}%
\caption{A graph of the Northeast USA from the 9th DIMACS
Implementation Challenge
comprises 1,524,453 vertices and 3,868,020 directed edges.
A shortest path problem from a start $s$ in New Jersey
to a destination $t$ outside Boston
will be used as an example.}%
\label{fig:ibid:example-intro}%
\end{marginfigure}
\paragraph{Problem Settings.}
The single-pair problem has been extensively studied.
There are techniques that are particular to memory-constrained
settings \citep{kaindl1997biheurreconsidered}
or to settings where pre-computation is available
\citep{goldberg2007pointtopoint}.
While we do not focus on such settings,
the algorithms we propose may be complementary to these techniques.
\section{Review of Pathfinding with Distance Functions}
\label{sec:ibid:distance-functions}
This section contains a unified presentation of unidirectional,
bidirectional, and incremental search strategies
by examining the properties of the distance functions that they
maintain.
These properties and invariants can be established for arbitrary
distance function approximations.
Examination of these properties then informs the development of
algorithms which calculate them,
which we defer to Section~\ref{sef:ibid:ibid}.
While much of this section summarizes prior work,
the presentation of the bidirectional termination condition
(Theorem~\ref{thm:ibid-bidir-sound}
in Section~\ref{sec:ibid:bidirectional})
in particular
is formulated to enable the novel theorems
presented in Section~\ref{sef:ibid:ibid}.
\subsection{Shortest Paths via the Start Distance Function}
\begin{marginfigure}%
\centering%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-dijkstraall.png}%
\caption{The distance function from the start vertex.}%
\label{fig:ibid:example-distance-all}%
\end{marginfigure}
The pioneering pathfinding algorithms of the late 1950s address
a generalization of the SPSP problem called
the \emph{single-source} problem,
where shortest paths are calculated from the start vertex $s$
to all vertices on the graph.
They proceed by calculating the \emph{start distance function}
$d^* : V \rightarrow \mathbb{R}$,
which gives the length of the shortest path from $s$
to each vertex $v$.
In other words:
\marginnote{Once the distance function $d^*$ is computed,
a shortest path to any destination $t$ can be generated trivially
by walking backwards to $s$ guided by $d^*$.}
\begin{equation}
d^*(v) = \min_{p \in P_{sv}} \mbox{len}(p,w),
\label{eqn:ibid-distance-function-global}
\end{equation}
where $P_{sv}$ is the set of all paths from $s$ to $v$,
and $\mbox{len}$ is given by (\ref{eqn:lazysp:len-definition})
as the sum of the path's constituent weights
with respect to the edge weight function $w$.
Where no paths to $v$ exist,
we take $d^*(v) = \infty$.
Note that $d^*$ is only well-defined on graphs with no negative-length
cycles reachable from $s$.
Importantly, $d^*$ can also be characterized locally by
\begin{equation}
d^*(v) =
\left\{ \begin{array}{cl}
0 & \mbox{if } v = s \\
\displaystyle\min_{u \in \mbox{\scriptsize Pred}(v)} d^*(u) + w(e_{uv}) & \mbox{otherwise,} \\
\end{array} \right.
\label{eqn:distance-function-char}
\end{equation}
where $\mbox{Pred}(v)$ yields the predecessor vertices of $v$,
and a vertex $v \neq s$ with no predecessors takes $d^*(v) = \infty$.
\marginnote{Note that
while the distance function $d^*$ necessarily satisfies
the equations (\ref{eqn:distance-function-char}),
they are not generally a sufficient condition;
if a reachable cycle of zero length exists,
(\ref{eqn:distance-function-char}) will not have a unique solution.}
The distance function is akin to the \emph{value function}
in more general decision problems addressed by dynamic programming.
The equations (\ref{eqn:distance-function-char})
are the \emph{Bellman equations} \citep{bellman1958routing},
which rely on the principle of optimality.
This characterization also follows implicitly from early results for
the all-pairs problem
\citep{shimbel1955communicationnets, beckmann1955transportation}.
Note that while (\ref{eqn:distance-function-char})
is a necessary condition of $d^*$,
it is not sufficient in general.
In particular,
if $w$ has cycles of length zero,
then even though $d^*$ is well-defined,
(\ref{eqn:distance-function-char}) admits an infinite number
of incorrect solutions for $d$.
\paragraph{Reconstructing a Shortest Path from the Distance Function.}
Calculating the start distance function is only useful for solving
the SPSP problem if it can be used to efficiently determine
a shortest path.
We can show that such a path can be reconstructed by starting at
the destination $t$ and progressively prepending the predecessor
edge $e_{uv}$ (and vertex $u$)
which locally minimizes $d^*(u) + w(e_{uv})$.
Any path constructed in this way is guaranteed to be a shortest path,
and this process is guaranteed to terminate if the graph
contains no zero-length cycles.
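The reconstruction process above can be sketched as follows. This is an illustrative sketch only, assuming the same adjacency-list representation as before; the predecessor map is built on the fly here, though an implementation would typically store it alongside the graph.

```python
def reconstruct_path(graph, d, s, t):
    """Walk backwards from t to s guided by the distance function d.

    At each step, prepend the predecessor u (and edge e_uv) that
    locally minimizes d(u) + w(e_uv).  Terminates provided the graph
    contains no zero-length cycles.
    """
    # Build the predecessor map from the successor adjacency lists.
    pred = {v: [] for v in graph}
    for u in graph:
        for v, w in graph[u]:
            pred[v].append((u, w))
    path = [t]
    while path[0] != s:
        u, _ = min(pred[path[0]], key=lambda uw: d[uw[0]] + uw[1])
        path.insert(0, u)
    return path
```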
\subsection{Approximating $d^*$ via Tensioned Estimates}
\label{subsec:ibid-tension}
How can we compute $d^*$ efficiently over the graph?
Consider an approximation function $d$
which satisfies the following four properties:%
\begin{subequations}%
\begin{eqnarray}
& d^*(v) \leq d(v) & \forall v
\label{eqn:ibid-relaxation-props-nounder} \\
& d(v) = 0 & v = s
\label{eqn:ibid-relaxation-props-ds0} \\
& \displaystyle\min_{u \in \mbox{\scriptsize Pred}(v)}
d(u) + w(e_{uv}) \leq d(v)
& v \neq s
\label{eqn:ibid-relaxation-props-nottoogood} \\
& d(u) + w(e_{uv}) \geq d(v) & \forall e_{uv}
\label{eqn:ibid-relaxation-props-tens}
\end{eqnarray}%
\label{eqn:ibid-relaxation-props}%
\end{subequations}%
Conditions (\ref{eqn:ibid-relaxation-props-ds0} --
\ref{eqn:ibid-relaxation-props-tens})
follow directly from the local characterization
(\ref{eqn:distance-function-char});
in particular,
the case where $v \neq s$ has been split into the two
equivalent inequalities (\ref{eqn:ibid-relaxation-props-nottoogood})
and (\ref{eqn:ibid-relaxation-props-tens}).
In contrast,
for now we take the global inequality
(\ref{eqn:ibid-relaxation-props-nounder}) on faith;
we will later revisit the implications of relying on this assumption.
We can show that any estimate $d$ satisfying these properties
is the unique distance function $d^*$
via Theorem~\ref{thm:ibid-relaxation-notension}.
\begin{theorem}
\marginnote{Proofs for all theorems in this chapter are located in
Appendix~\ref{chap:appendix-ibid-proofs}.}
If $d: V \rightarrow \mathbb{R}$
satisfies (\ref{eqn:ibid-relaxation-props}),
then $d = d^*$.
\label{thm:ibid-relaxation-notension}
\end{theorem}
The principal method for arriving at an approximation
which satisfies (\ref{eqn:ibid-relaxation-props})
is via \emph{tensioned estimates}.
Consider an arbitrary approximation $d$ which satisfies only
(\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-nottoogood}),
and consider the following edge labeling:
\begin{equation}
\mbox{edge } e_{uv} \mbox{ is \emph{tensioned}}
\;\;\mbox{iff}\;\;
d(u) + w(e_{uv}) < d(v).
\label{eqn:ibid-relaxation-tensioned}
\end{equation}
Tensioned edges are therefore those that violate
(\ref{eqn:ibid-relaxation-props-tens}).
A restatement of Theorem~\ref{thm:ibid-relaxation-notension}
is that an approximation $d$
satisfying (\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-nottoogood})
with no tensioned edges is everywhere correct.
\paragraph{Edge Relaxation.}
How can we arrive at an approximation
satisfying Theorem~\ref{thm:ibid-relaxation-notension}?
The principal technique treats the properties
(\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-nottoogood}) as invariants.
An initial approximation $d$ is chosen for which the invariants
trivially hold,
such as $d(v) = \infty \;\forall v \neq s$,
which will generally have many edges in tension.
The tensioned approximation $d$ is then iteratively improved
via \emph{edge relaxation}
as described by Ford \citep{ford1955networkflowtheory},
wherein a tensioned edge $e_{uv}$ is selected and relaxed
by setting $d(v) \leftarrow d(u) + w(e_{uv})$.
It can be shown that applying this process arbitrarily
maintains invariants
(\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-nottoogood}).
It can be further shown that for a finite graph,
the number of edge relaxations needed is also finite.
The well-known Bellman-Ford method
\citep{shimbel1955communicationnets, bellman1958routing,
moore1959spmaze}
cycles through all edges repeatedly,
relaxing all tensioned edges found
(at most $|V|-1$ repetitions are sufficient for convergence).
Note that this does not place any requirements on $w$
(other than that $d^*$ must exist, so there must not be
any negative-length cycles reachable from $s$).
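The Bellman-Ford method described above can be sketched as follows; this is a minimal illustration of repeated edge relaxation over the adjacency-list representation used earlier, not an optimized implementation.

```python
import math

def bellman_ford(graph, s):
    """Compute the start distance function d* by edge relaxation.

    The invariants hold throughout: d starts at infinity everywhere
    except d(s) = 0, and each relaxation only ever lowers an
    overestimate.  Assumes no negative-length cycle is reachable.
    """
    d = {v: math.inf for v in graph}
    d[s] = 0.0
    # At most |V|-1 sweeps over all edges suffice for convergence.
    for _ in range(len(graph) - 1):
        any_tensioned = False
        for u in graph:
            for v, w in graph[u]:
                if d[u] + w < d[v]:      # edge e_uv is tensioned
                    d[v] = d[u] + w      # relax it
                    any_tensioned = True
        if not any_tensioned:
            break                        # no tensioned edges: d = d*
    return d
```

Because no ordering is imposed within a sweep, negative edge weights are handled correctly, at the cost of potentially relaxing each edge many times.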
\begin{marginfigure}
\centering
\begin{tikzpicture}
\tikzset{>=latex} % arrow heads
\begin{scope}[shift={(0,0)}]
\node[fill=black,circle,inner sep=1.2pt] (a) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (c) at (3.0,0) {};
\draw[->,densely dashed] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->,densely dashed] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\node[above=-0.00cm of a] {$a$};
\node[above=-0.00cm of b] {$b$};
\node[above=-0.00cm of c] {$c$};
\node[below=0.05cm of a] {$d=0$};
\node[below=0.05cm of b] {$d=2$};
\node[below=0.05cm of c] {$d=4$};
\end{scope}
\begin{scope}[shift={(0,-1)}]
\node[fill=black,circle,inner sep=1.2pt] (a) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (c) at (3.0,0) {};
\draw[->,densely dashed] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[line width=0.10cm,black!10] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\node[below=0.05cm of a] {$d=0$};
\node[below=0.05cm of b] {$d=2$};
\node[below=0.05cm of c] {$d=3$};
\end{scope}
\begin{scope}[shift={(0,-2)}]
\node[fill=black,circle,inner sep=1.2pt] (a) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (c) at (3.0,0) {};
\draw[line width=0.10cm,black!10] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->,densely dashed] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\node[below=0.05cm of a] {$d=0$};
\node[below=0.05cm of b] {$d=1$};
\node[below=0.05cm of c] {$d=3$};
\end{scope}
\begin{scope}[shift={(0,-3)}]
\node[fill=black,circle,inner sep=1.2pt] (a) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (c) at (3.0,0) {};
\draw[->] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[line width=0.10cm,black!10] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->] (b) -- (c) node[midway,fill=white,circle,inner sep=1pt] {1};
\node[below=0.05cm of a] {$d=0$};
\node[below=0.05cm of b] {$d=1$};
\node[below=0.05cm of c] {$d=2$};
\end{scope}
\end{tikzpicture}
\caption{Ordering problems.
Consider the vertices $a \rightarrow b \rightarrow c$,
with edges $e_{ab}$ and $e_{bc}$ both in tension;
if $e_{bc}$ is relaxed before $e_{ab}$,
then $e_{bc}$ will need to be relaxed a second time.}
\label{fig:ibid:bellman-ford-repetitions}
\end{marginfigure}
\paragraph{Approximation Soundness.}
The need for multiple cycles of Bellman-Ford stems from the fact
that each edge may need to be relaxed several times.
This occurs because relaxing an edge changes the
$d$-value of the destination vertex,
which may newly tension downstream edges
(see Figure~\ref{fig:ibid:bellman-ford-repetitions}).
We can exploit this intuition by ordering relaxations from start to destination
in the special case where $w \geq 0$
(note that this requirement is stronger than requiring no reachable
negative-length cycles).
We can then show that our approximation $d$
is \emph{sound} for a subset of vertices
as described by Theorem~\ref{thm:ibid-relaxation-sound}.
\begin{marginfigure}
\centering
\includegraphics{build/ibid-dijkstra-trust}
\caption{Tensioned edge trust region
for $w \geq 0$.
Contours are of the current estimate $d$.
Currently tensioned edges are bold and dotted.}
\end{marginfigure}
\begin{theorem}
Consider $d: V \rightarrow \mathbb{R}$
satisfying (\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-ds0}),
and let $D$ be the smallest value $d(u)$
among all tensioned edges $e_{uv}$
(or $\infty$ if no such edges exist).
If $w \geq 0$,
any vertex $x$ with $d(x) \leq D$
has $d(x) = d^*(x)$.
\label{thm:ibid-relaxation-sound}
\end{theorem}
As a result,
a given value $D$ creates a region of vertices
with values $d(x) \leq D$ that are known to be
accurate.
This confers two distinct advantages when designing an algorithm:
an efficient relaxation ordering,
and an early termination condition for single-pair problems.
\paragraph{Efficient Relaxation Ordering.}
By Theorem~\ref{thm:ibid-relaxation-sound},
all tensioned edges $e_{uv}$ with $d(u) = D$
(of which there must be at least one if any edges are tensioned)
can be relaxed immediately,
and will never be retensioned.
This is exactly the order imposed by the OPEN list in Dijkstra's
algorithm \citep{dijkstra1959anote}.
\begin{marginfigure}%
\centering%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-dijkstra.png}%
\caption{Dijkstra's algorithm computes the start distance function
$d^*$ to solve the example shortest path problem.
Darker vertices have smaller $d$-values.
The algorithm stops upon reaching the destination vertex $t$
after expanding 1,290,820 vertices.}%
\label{fig:ibid:example-distance}%
\end{marginfigure}
\paragraph{Early Termination.}
The soundness result shows that once the destination
vertex $t$ satisfies $d(t) \leq D$,
it has the correct start distance.
Since we are only interested in reconstructing a shortest path to $t$,
we are interested in terminating computation of the distance
function as early as possible.
However,
while Theorem~\ref{thm:ibid-relaxation-sound} demonstrates
that $d(t) = d^*(t)$,
it is not by itself sufficient
to demonstrate that such a shortest path to $t$ can be reconstructed.
An illustration of such a problem case is given in
Figure~\ref{fig:ibid:relaxation-completeness-issue}.
This requires the addition of
Theorem~\ref{thm:ibid-relaxation-reconstruct} below.
Notably,
the proof for Theorem~\ref{thm:ibid-relaxation-reconstruct}
relies on (\ref{eqn:ibid-relaxation-props-nottoogood}).
\begin{marginfigure}
\centering
\begin{tikzpicture}
\tikzset{>=latex} % arrow heads
\node[fill=black,circle,inner sep=1.2pt] (s) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (a) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (1.5,1.2) {};
\node[fill=black,circle,inner sep=1.2pt] (x) at (3.0,0) {};
\draw[->,densely dashed] (s) -- (a) node[midway,fill=white,circle,inner sep=1pt] {0};
\draw[->] (a) -- (x) node[midway,fill=white,circle,inner sep=1pt] {0};
\draw[->] (s) -- (b) node[midway,fill=white,circle,inner sep=1pt] {1};
\draw[->] (b) -- (x) node[midway,fill=white,circle,inner sep=1pt] {1};
\node[above=0.05cm of s] {$s$};
\node[above=0.05cm of a] {$a$};
\node[below=0.05cm of b] {$b$};
\node[above=0.05cm of x] {$x$};
\node[below=0.05cm of s] {$d=0$};
\node[below=0.05cm of a] {$d=3$};
\node[above=0.35cm of b] {$d=1$};
\node[below=0.05cm of x] {$d=0$};
\node[below=0.35cm of s] {$d^*=0$};
\node[below=0.35cm of a] {$d^*=0$};
\node[above=0.05cm of b] {$d^*=1$};
\node[below=0.35cm of x] {$d^*=0$};
\end{tikzpicture}
\caption{Problem case for pathfinding with distance functions
in cases where invariant (\ref{eqn:ibid-relaxation-props-nottoogood})
does not hold.
Here, $d$ satisfies (\ref{eqn:ibid-relaxation-props-nounder})
and (\ref{eqn:ibid-relaxation-props-ds0}),
with edge $e_{sa}$ tensioned, so $D = 0$.
While the approximation $d$ is \emph{sound} at $x$
via Theorem~\ref{thm:ibid-relaxation-sound}
(i.e. $d(x)$ is correct),
the path reconstructed backwards from $x$ is not a shortest path.}
\label{fig:ibid:relaxation-completeness-issue}
\end{marginfigure}
\begin{theorem}
Consider $d: V \rightarrow \mathbb{R}$
satisfying (\ref{eqn:ibid-relaxation-props-nounder} --
\ref{eqn:ibid-relaxation-props-nottoogood}),
and let $D$ be the smallest value $d(u)$
among all tensioned edges $e_{uv}$
(or $\infty$ if no such edges exist).
If $w \geq 0$,
for any vertex $x$ with $d(x) \leq D$,
a path reconstructed backwards from $x$ by iteratively selecting a
predecessor minimizing $d(u) + w(e_{uv})$ until $s$ is reached
is a shortest path from $s$ to $x$ of length $d^*(x)$.
\label{thm:ibid-relaxation-reconstruct}
\end{theorem}
Armed with Theorems~\ref{thm:ibid-relaxation-sound}
and~\ref{thm:ibid-relaxation-reconstruct},
we can terminate edge relaxation early
and reconstruct a shortest path from $s$ to $t$.
This algorithm is listed as ``Dijk''
(Dijkstra's algorithm)
in the results of Figure~\ref{fig:ibid:road-ne-stats}.
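Putting the ordering and early-termination results together yields the familiar single-pair form of Dijkstra's algorithm. The following is a minimal sketch over the adjacency-list representation used earlier, using a priority queue with lazy deletion of stale entries.

```python
import heapq
import math

def dijkstra_single_pair(graph, s, t):
    """Dijkstra's algorithm with early termination at the destination.

    The OPEN list orders relaxations by d-value, so once t is popped
    its distance is sound and the search may stop.  Requires w >= 0.
    Returns d*(t), or infinity if t is unreachable.
    """
    d = {v: math.inf for v in graph}
    d[s] = 0.0
    open_list = [(0.0, s)]
    while open_list:
        du, u = heapq.heappop(open_list)
        if du > d[u]:
            continue           # stale queue entry; skip it
        if u == t:
            return du          # early termination: d(t) <= D holds
        for v, w in graph[u]:
            if du + w < d[v]:  # edge e_uv is tensioned
                d[v] = du + w  # relax it
                heapq.heappush(open_list, (d[v], v))
    return math.inf
```

A shortest path itself can then be reconstructed backwards from $t$ as described earlier.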
\subsection{Bidirectional Search}
\label{sec:ibid:bidirectional}
A prominent technique for minimizing pathfinding computation for
single-pair problems
is bidirectional search
(also called ``doubletree'' search \citep{doran1966doubletree}).
In a bidirectional algorithm,
the distance $d_t$ to the destination is calculated in a growing region
around the destination vertex $t$
concurrently with the conventional start distance $d_s$ around $s$
(Figure~\ref{fig:ibid:example-bidirectional}).
The destination distance function $d_t$,
yielding the distance of a shortest path from each vertex $u$ to $t$,
obeys a definition and local characterization complementary to those of $d_s$,
with vertex predecessors replaced with successors.
Approaches such as edge relaxation (Section~\ref{subsec:ibid-tension})
can therefore be used to calculate $d_t$ using a region around $t$
within which shortest paths can be reconstructed.
Loosely speaking,
the search can terminate with a shortest path
once the two regions intersect.
Each search need only descend to a depth a factor of two
shallower than a unidirectional search.
Therefore, problems where the graph density grows quickly with
the search depth are particularly well-suited to bidirectional
algorithms.
For example,
for a roadmap graph embedded in an ambient Euclidean space,
the number of expanded vertices in a ball is polynomial in the depth
and exponential in the dimensionality of the space.
It has also been empirically established that bidirectional approaches
are beneficial for instances in which many vertices around
both the start and destination vertices are costly
(e.g. due to obstacles).
This may be because the number of vertices that need to be expanded
in these regions does not grow quickly with the search depth.
\begin{marginfigure}%
\centering%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-bidijkstra.png}%
\caption{The bidirectional Dijkstra algorithm
computes $d_s$ around the start vertex
and $d_t$ around the destination vertex.
Darker vertices have smaller $d$-values in their respective
regions.
The algorithm terminates after expanding a total of
1,178,200 vertices using distance to balance expansions.}%
\label{fig:ibid:example-bidirectional}%
\end{marginfigure}
The first bidirectional algorithm
was proposed by Dantzig \citep{dantzig1963linearprogramming},
and the first precisely described algorithm was presented by
Nicholson \citep{nicholson1966shortest},
and the similar Bi-Directional Shortest Path Algorithm (BSPA)
was analyzed by Pohl \citep{pohl1971bidirectional}.
Implementation of a sound and efficient algorithm
turns on when and how to terminate with a shortest path.
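The overall shape of such an algorithm can be sketched as follows. This is an illustrative sketch only: it balances the two searches by smallest queue key, assumes a precomputed predecessor adjacency list for the reverse search, and uses one widely used correct stopping rule (stop once the two frontier keys together cannot improve the best path found through a meeting edge); the subtleties behind this condition are discussed next.

```python
import heapq
import math

def bidirectional_dijkstra(succ, pred, s, t):
    """Bidirectional Dijkstra sketch (requires w >= 0).

    Two searches run concurrently: forward from s over successors,
    reverse from t over predecessors.  mu tracks the best s-t path
    length found through any edge joining the two searches.
    Returns the shortest path length (infinity if unreachable).
    """
    d = {0: {v: math.inf for v in succ}, 1: {v: math.inf for v in succ}}
    d[0][s], d[1][t] = 0.0, 0.0
    queues = {0: [(0.0, s)], 1: [(0.0, t)]}
    adj = {0: succ, 1: pred}
    mu = math.inf
    while queues[0] and queues[1]:
        # Terminate: no remaining pair of frontiers can beat mu.
        if queues[0][0][0] + queues[1][0][0] >= mu:
            break
        side = 0 if queues[0][0][0] <= queues[1][0][0] else 1
        du, u = heapq.heappop(queues[side])
        if du > d[side][u]:
            continue                     # stale queue entry
        for v, w in adj[side][u]:
            if du + w < d[side][v]:      # tensioned in this direction
                d[side][v] = du + w
                heapq.heappush(queues[side], (du + w, v))
            # Every scanned edge is a candidate meeting edge.
            mu = min(mu, d[side][u] + w + d[1 - side][v])
    return mu
```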
\paragraph{A Correct Termination Condition.}
%\label{sec:ibid:bidirectional-termination}
What happens upon an encounter between the forward and reverse searches?
Consider running each search until the first vertex is found
that satisfies Theorem~\ref{thm:ibid-relaxation-sound} in both
directions
(that is, the first vertex $x$ for which
$d_s(x) \leq D_s$ and $d_t(x) \leq D_t$).
While Theorem~\ref{thm:ibid-relaxation-sound} correctly demonstrates
that the values $d_s(x)$ and $d_t(x)$ are correct
(and Theorem~\ref{thm:ibid-relaxation-reconstruct} similarly
demonstrates that a shortest path can be reconstructed from $s$ to $x$
and also from $x$ to $t$),
this is not sufficient to demonstrate that the shortest path
actually passes through $x$.
See Figure~\ref{fig:ibid:bidirectional-termination-issue} for a
counter-example that illustrates this point.
\begin{marginfigure}
\centering
\begin{tikzpicture}
\tikzset{>=latex} % arrow heads
\node[fill=black,circle,inner sep=1.2pt] (s) at (0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (a) at (1.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (b) at (3.0,0) {};
\node[fill=black,circle,inner sep=1.2pt] (t) at (4.5,0) {};
\node[fill=black,circle,inner sep=1.2pt] (c) at (2.25,0.8) {};
\draw[->] (s) -- (a) node[midway,fill=white,circle,inner sep=1pt] {3};
\draw[->] (a) -- (b) node[midway,fill=white,circle,inner sep=1pt] {3};
\draw[->] (b) -- (t) node[midway,fill=white,circle,inner sep=1pt] {3};
\draw[->] (a) -- (c) node[midway,fill=white,circle,inner sep=1pt] {2};
\draw[->] (c) -- (b) node[midway,fill=white,circle,inner sep=1pt] {2};
\node[below=0.05cm of s] {$s$};
\node[below=0.05cm of a] {$a$};
\node[below=0.05cm of b] {$b$};
\node[below=0.05cm of t] {$t$};
\node[above=0.05cm of c] {$c$};
\end{tikzpicture}
\caption{Simple illustration of a problem case for terminating
a bidirectional search.
With a balanced distance criterion,
$c$ will be the first vertex expanded in both directions,
but it does not lie on the shortest path.}
\label{fig:ibid:bidirectional-termination-issue}
\end{marginfigure}
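The figure's counter-example can be checked numerically. The sketch below, assuming a standard textbook Dijkstra implementation (the function name and graph encoding are illustrative, not from this chapter's code), computes $d_s$ and $d_t$ exactly and confirms that $c$ is reached at distance 5 from both directions, yet the path through $c$ is longer than the true shortest path.

```python
import heapq

def dijkstra(adj, source):
    """Plain Dijkstra's algorithm: exact distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# The counter-example graph: edges and weights from the figure.
fwd = {"s": [("a", 3)], "a": [("b", 3), ("c", 2)],
       "c": [("b", 2)], "b": [("t", 3)]}
# Reversed edges for the backward search from t.
rev = {"t": [("b", 3)], "b": [("a", 3), ("c", 2)], "c": [("a", 2)]}

d_s = dijkstra(fwd, "s")
d_t = dijkstra(rev, "t")

# c is the first vertex settled by both searches (d_s = d_t = 5),
# but the path through c has length 10, not the optimal 9.
print(d_s["c"] + d_t["c"], d_s["t"])  # 10.0 9.0
```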
Importantly, it is necessary to consider the \emph{edges} connecting
the two distance function approximations.
A correct termination condition is surprisingly subtle,%
\marginnote{There were early incorrect attempts at a sound
termination condition
\citep{berge1965programminggamestransportation}.}
with several correct variations proposed
\citep{nicholson1966shortest, dreyfus1969appraisalsp,
pohl1969bidirectional, goldberg2005spexternalmemory}.
What is necessary is a bidirectional equivalent to the
completeness-based termination condition
from Theorem~\ref{thm:ibid-relaxation-reconstruct}.
Suppose $d_s$ and $d_t$ are approximations to $d^*_s$ and $d_t^*$,
respectively,
with each satisfying (\ref{eqn:ibid-relaxation-props}).
Let $D_s$ be the minimum $u$-value among all tensioned edges in $d_s$
(and likewise for $d_t$).
Then we can establish the following proof of a correct termination
condition:
\marginnote{This treatment of the bidirectional termination condition
is presented differently than in most related work because it will
help us formulate an incremental version
in Section~\ref{sec:ibid:ibid}.}
\begin{theorem}
Define $E_{\ms{conn}}$ as the set of all edges $e_{uv}$ such that
$d_s(u) \leq D_s$ and $d_t(v) \leq D_t$,
and define $\ell_e$ such that $\ell_e(e_{uv}) = d_s(u) + w(e_{uv}) + d_t(v)$.
If $w \geq 0$,
$s \neq t$,
$E_{\ms{conn}}$ is non-empty,
and $e^*_{uv}$ minimizes $\ell_e$
among $E_{\ms{conn}}$ with $\ell_e(e^*_{uv}) \leq D_s + D_t$,
then $\ell_e(e^*_{uv})$ is the length of the shortest path,
and $e^*_{uv}$ lies on one such path.
\label{thm:ibid-bidir-sound}
\end{theorem}
\paragraph{Designing an Efficient Bidirectional Algorithm.}
In the case where the approximations $d_s$ and $d_t$ are improved
via edge relaxation,
since $w \geq 0$,
by Theorem~\ref{thm:ibid-relaxation-sound}
once an edge $e_{uv}$ becomes included in
$E_{\ms{conn}}$,
its length value $\ell_e(e_{uv})$ will not change.
Therefore,
it is sufficient for a relaxation algorithm to consider only edges
newly added to $E_{\ms{conn}}$ at each iteration,
and keep track of the best edge $e^*_{uv}$
with its value $\ell_e(e^*_{uv})$ found so far.
Note also that Theorem~\ref{thm:ibid-relaxation-reconstruct}
allows a shortest path to be constructed from $e^*_{uv}$
by walking backwards from $u$ to $s$,
and forwards from $v$ to $t$.
This algorithm is listed as ``BiDijk''
(bidirectional Dijkstra's algorithm)
in the results of Figure~\ref{fig:ibid:road-ne-stats}.
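A minimal sketch of such a bidirectional algorithm follows, assuming a heap-based Dijkstra in each direction (function and variable names are illustrative). It tracks the best connecting-edge value $\mu = \min \ell_e(e_{uv})$ over edges scanned so far, and terminates once the two minimum queue keys sum to at least $\mu$, mirroring the condition $\ell_e(e^*_{uv}) \leq D_s + D_t$.

```python
import heapq

def bidirectional_dijkstra(adj, radj, s, t):
    """Sketch of bidirectional Dijkstra with an edge-based
    termination test; returns the shortest-path length."""
    INF = float("inf")
    d = {0: {s: 0.0}, 1: {t: 0.0}}       # forward / backward estimates
    pq = {0: [(0.0, s)], 1: [(0.0, t)]}  # open queues
    graph = {0: adj, 1: radj}
    settled = {0: set(), 1: set()}
    mu = INF                             # best connecting-edge length so far
    while pq[0] and pq[1]:
        # Terminate when D_s + D_t >= mu: the recorded edge
        # closes a provably shortest path.
        if pq[0][0][0] + pq[1][0][0] >= mu:
            break
        # Balance by distance: expand the side with the smaller key.
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1
        dist, u = heapq.heappop(pq[side])
        if u in settled[side]:
            continue  # stale queue entry
        settled[side].add(u)
        for v, w in graph[side].get(u, []):
            if dist + w < d[side].get(v, INF):
                d[side][v] = dist + w
                heapq.heappush(pq[side], (dist + w, v))
            # If v has a finite estimate in the other direction,
            # edge (u, v) connects the two searches.
            if v in d[1 - side]:
                mu = min(mu, dist + w + d[1 - side][v])
    return mu

fwd = {"s": [("a", 3)], "a": [("b", 3), ("c", 2)],
       "c": [("b", 2)], "b": [("t", 3)]}
rev = {"t": [("b", 3)], "b": [("a", 3), ("c", 2)], "c": [("a", 2)]}
print(bidirectional_dijkstra(fwd, rev, "s", "t"))  # 9.0
```

Note that $\mu$ may be updated using a tentative estimate from the other direction; any finite $d$-value is the length of an actual path, so the sum remains a valid upper bound on the shortest-path length.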
%\subsection{Balancing Directions}
%The general bidirectional algorithm leaves open the strategy
%used to balance the progression of the two search directions.
%Options based on alternating \citep{dantzig1963linearprogramming},
%or selecting the direction with the smaller {\sc Open} distance
%\citep{nicholson1966shortest}
%or {\sc Open} (and finite) set cardinality
%\citep{pohl1969bidirectional}
%have been proposed.
%Note that while some literature asserts that
%these can be interleaved arbitrarily,
%termination of the algorithm requires that each direction
%be expanded at least once.
%Our example problems use the balanced distance criterion.
\subsection{Incremental Search for Dynamic Problems}
\label{sec:ibid:incremental}
The shortest path on a graph is of course intimately tied to
the edge weight function $w$.
If the weight function changes from $w^{(1)}$ to $w^{(2)}$,
a shortest path $p^{(1)}$ w.r.t. $w^{(1)}$
will generally no longer be a shortest path w.r.t. $w^{(2)}$,
even if changes are small and localized.
For example,
if the weight of an edge on $p^{(1)}$ increases,
or if the weight of some edge not on $p^{(1)}$ decreases,
then some other path $p^{(2)}$ may become shorter than $p^{(1)}$.
The \emph{dynamic shortest-path problem}
considers finding a shortest path
for each of a sequence of input edge weight functions.
Figure~\ref{fig:ibid:example-incremental}
shows an incremental algorithm finding a shortest path quickly
for a subsequent planning episode.
\begin{marginfigure}%
\centering%
\subfloat[Initial episode]{%
\centering%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-incuni-0.png}%
}
\subfloat[Subsequent episode]{%
\centering%
\includegraphics[width=5cm]{figs/incbi-road-ne/singleshot/example-incuni-1.png}%
}%
\caption{Initial episode: 1,287,897 expansions.
Subsequent episode: 391,122 expansions.}%
\label{fig:ibid:example-incremental}%
\end{marginfigure}
We concern ourselves with the \emph{fully dynamic} case,
in which arbitrary edge weight changes are allowed.
As mentioned in Section~\ref{subsec:ibid-probdef},
for our pathfinding problems,
we treat edges with infinite weight as equivalent to non-existent
edges.
Therefore,
deleting or inserting an edge can be represented by
setting its edge weight to or from infinity, respectively.
We include here a brief review of the approach underlying common
algorithms for the dynamic pathfinding problem.
The dynamic problem has been studied extensively.
Early results demonstrated properties of optimal spanning trees
under certain types of update operations
\citep{spirapan1975updatingsp}.
Early algorithms considered restricted problems such as
the insert-only problem
with constant edge weights \citep{linchang1991dynamicsp},
or restricted settings such as planar graphs
\citep{klein1998planardynamicshort}.
More general algorithms followed
\citep{ramalingam1996dynamicswsffp,
frigioni1996dynamicsinglesource,
frigioni2000dynamicsp};
we describe and generalize the DynamicSWSF-FP algorithm
of Ramalingam and Reps below.
\marginnote{We defer for now discussing the problem of integrating
heuristic functions with incremental search;
see Section~\ref{subsec:ibid:heuristic-incremental}
for that discussion.}
A review of more general dynamic problems on graphs is available
\citep{eppstein1999dynamic},
including for the more general all-pairs problem
\citep{demetrescu2010dynamic}.
\paragraph{Problem Definition.}
Consider a directed graph $G = (V,E)$ as described in
Section~\ref{subsec:ibid-probdef}.
Consider a sequence of pathfinding episodes
with different edge weight functions $w^{(1)}, w^{(2)}, \dots$
with $w^{(i)} : E \rightarrow \mathbb{R}$.
The dynamic single-pair shortest-path (SPSP) problem
entails finding shortest paths $p^{(1)}, p^{(2)}, \dots$
between fixed start and destination vertices $s,t \in V$
for each episode.
\paragraph{Approaches to the Dynamic Problem.}
The simplest class of solutions to the dynamic SPSP problem
entails running a conventional SPSP algorithm to compute a solution
path for episode in turn.
Consider, for example, applying the edge relaxation approach
from Section~\ref{subsec:ibid-tension}
to compute the start distance function $d_s^{(i)}$ from scratch
for each episode's edge weight function $w^{(i)}$.
Investigating this approach more closely reveals an opportunity for
a more efficient algorithm.
If the changes in the weight function between each episode affect
only a subset of the edges,
then it is often the case that the value of the distance function
computed does not change for a large fraction of the vertices in the
graph.
In the face of a change in weight function
$w^{(i)} \rightarrow w^{(i+1)}$,
it is the objective of an \emph{incremental} approach
to adapt a previously computed estimate $d_s^{(i)}$
to a new one $d_s^{(i+1)}$ with a minimal amount of additional
computation.
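The observation that most distance values survive a localized weight change can be illustrated with a toy example, assuming a standard from-scratch Dijkstra (names here are illustrative):

```python
import heapq

def dijkstra(adj, s):
    """From-scratch Dijkstra: exact distances from s."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Episode 1: a chain 0 -> 1 -> ... -> 9 with unit weights.
adj1 = {i: [(i + 1, 1.0)] for i in range(9)}
d1 = dijkstra(adj1, 0)

# Episode 2: only the weight of the final edge (8 -> 9) changes.
adj2 = {i: [(i + 1, 1.0)] for i in range(9)}
adj2[8] = [(9, 5.0)]
d2 = dijkstra(adj2, 0)

# Recomputing from scratch touched every vertex, yet the distance
# function changed at only one of them.
changed = [v for v in d1 if d1[v] != d2[v]]
print(changed)  # [9]
```

An incremental algorithm aims to spend work proportional to the changed region rather than to the whole graph.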
\paragraph{Inapplicability of the Tensioned Estimates Approach.}
The tensioned estimates approach of Section~\ref{subsec:ibid-tension}
made use of two clever devices to allow its approximation $d$ to be
iteratively improved to the true distance function $d^*$.
First, it relied on a global invariant
(\ref{eqn:ibid-relaxation-props-nounder})
to ensure that the approximation was never under-consistent.
Second, it relied on a decomposition of the local characterization
(\ref{eqn:distance-function-char})
into two inequalities
(\ref{eqn:ibid-relaxation-props-nottoogood})
and (\ref{eqn:ibid-relaxation-props-tens}),
the former treated as a second invariant,
and the latter represented not as a constraint but as a labeling
of tensioned edges to be iteratively relaxed.
Unfortunately,
the incremental setting precludes this approach.
Consider a new episode in which the estimate $d^{(i+1)}$
is initialized with the preceding estimate $d^{(i)}$.
Since the weight function $w^{(i+1)}$ has changed,
there is no guarantee that either of the invariants
(\ref{eqn:ibid-relaxation-props-nounder})
or (\ref{eqn:ibid-relaxation-props-nottoogood}) still hold.
In particular,
if edge weights increase,
it is common for one or both invariants to be violated,
and we can no longer rely on Theorems~\ref{thm:ibid-relaxation-sound}
or~\ref{thm:ibid-relaxation-reconstruct} to generate sound
shortest paths.
\paragraph{A New Approach.}
The key idea underlying the incremental approach,
and the DynamicSWSF-FP \citep{ramalingam1996dynamicswsffp} algorithm,
is to restate the local characterization (\ref{eqn:distance-function-char})
in a different way.
In particular:
\begin{subequations}%
\begin{eqnarray}
& r(v) =
\left\{ \begin{array}{cl}
0 & \mbox{if } v = s \\
\displaystyle\min_{u \in \mbox{\scriptsize Pred}(v)} d(u) + w(e_{uv}) & \mbox{otherwise} \\
\end{array} \right.
& \label{eqn:ibid-inc-props-rvalue} \\
& d(v) = r(v)
& \label{eqn:ibid-inc-props-consistent}
\end{eqnarray}%
\label{eqn:ibid-inc-props}%
\end{subequations}%
(Here, $r(v)$ represents the ``right-hand side'' of
the local characterization).
It is easy to see that the two conditions (\ref{eqn:ibid-inc-props})
taken together directly imply (\ref{eqn:distance-function-char}).
The motivation behind this decomposition is that the former
(\ref{eqn:ibid-inc-props-rvalue}) can be held as an invariant,
while the latter (\ref{eqn:ibid-inc-props-consistent})
can be established iteratively.
Prior work adopts the following labeling for each vertex $v$:
\marginnote{This meaning of consistency for vertices in incremental
search is distinct from consistency of heuristic or potential functions.}
if $d(v) = r(v)$, the vertex is \emph{consistent},
whereas inconsistent vertices are either
\emph{under-consistent} if $d(v) < r(v)$
or \emph{over-consistent} if $d(v) > r(v)$.
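The $r$-value computation and this labeling can be sketched directly from the definitions (a minimal illustration; the function names and graph encoding are hypothetical, not the DynamicSWSF-FP implementation):

```python
def rhs(d, pred, w, s, v):
    """r(v): the right-hand side of the local characterization,
    0 at the start vertex and min over predecessors otherwise."""
    if v == s:
        return 0.0
    return min((d[u] + w[(u, v)] for u in pred[v]),
               default=float("inf"))

def label(d_v, r_v):
    """Classify a vertex by comparing d(v) against r(v)."""
    if d_v == r_v:
        return "consistent"
    return "under-consistent" if d_v < r_v else "over-consistent"

# Tiny example: a single edge s -> a whose weight is now 2.
# Suppose a previous episode left the stale estimate d(a) = 1.
d = {"s": 0.0, "a": 1.0}
pred = {"a": ["s"]}
w = {("s", "a"): 2.0}

r_a = rhs(d, pred, w, "s", "a")   # r(a) = d(s) + w = 2.0
print(label(d["a"], r_a))          # under-consistent
```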
Importantly,
the inapplicability of the global invariant
(\ref{eqn:ibid-relaxation-props-nounder}) between episodes