Sunday, 23 June 2013

Shortest Path Problem


Everyone's favorite way to explain dynamic programming seems to be by example, and one of the most common examples is a simple shortest path problem. We shall be no different.
Consider the directed, weighted graph of Figure 1. In order to find the shortest path from node s to node t, we could use enumeration.


Figure 1 - A Network Path Problem
This would involve examining the following six sequences of vertices.

s → a → c → f → t
s → a → d → f → t
s → a → d → g → t
s → b → e → g → t
s → b → d → g → t
s → b → d → f → t

After some computation we would discover that the shortest path was the one that went through a, c, and f. But we did have to consider all of the possible paths, and we did not find the answer until the very end of our search. We also had to do 24 additions in order to find the length of each path.
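As a concrete baseline, here is a minimal sketch of the enumeration approach in Python. The edge weights are assumptions chosen to be consistent with the costs quoted in the text (s → a costs 1, f → t costs 5, g → t costs 2, and so on); Figure 1 itself is not reproduced here, so treat the numbers as illustrative.

```python
# Brute-force enumeration: sum the edges of every s-to-t path separately.
# The weights below are assumed values consistent with the text, not Figure 1 itself.
weight = {
    ('s', 'a'): 1, ('s', 'b'): 9,
    ('a', 'c'): 3, ('a', 'd'): 1,
    ('b', 'd'): 1, ('b', 'e'): 2,
    ('c', 'f'): 2, ('d', 'f'): 6, ('d', 'g'): 8, ('e', 'g'): 3,
    ('f', 't'): 5, ('g', 't'): 2,
}

paths = [
    list('sacft'), list('sadft'), list('sadgt'),
    list('sbegt'), list('sbdgt'), list('sbdft'),
]

def length(path):
    # Nothing is shared between paths, so every edge is re-added each time.
    return sum(weight[(u, v)] for u, v in zip(path, path[1:]))

best = min(paths, key=length)
print(best, length(best))   # ['s', 'a', 'c', 'f', 't'] 11 with these assumed weights
```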

Other methods that involved building paths from s to t were developed by Dijkstra, Lee, and Moore. All three of these were quite similar and in essence involved successively examining the remaining vertex that is closest to s. Thus we note that s → a costs 1, s → a → d costs 2, s → a → c costs 4, and so forth until the shortest path to t is found. This computation involves 12 additions to sum all of the path lengths, plus the overhead needed to determine at each stage which remaining node is closest to s. The method does, however, have the attractive feature of determining shortest paths from s to all of the nodes, and it is far better than enumerating all of the possible paths.
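A minimal sketch of this closest-vertex-first idea (essentially Dijkstra's algorithm with a priority queue) might look as follows; the edge weights are again assumed values consistent with the costs quoted in the text, not a transcription of Figure 1.

```python
import heapq

def closest_first(graph, source):
    """Repeatedly settle the remaining vertex closest to the source,
    as in the Dijkstra/Lee/Moore approach described above.
    `graph` maps a vertex to a dict of {neighbor: edge weight}."""
    dist = {source: 0}
    prev = {}                                # predecessor on the best known path
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)                       # u is the closest unsettled vertex
        for v, w in graph.get(u, {}).items():
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

# Assumed weights, consistent with the costs quoted in the text.
graph = {
    's': {'a': 1, 'b': 9},
    'a': {'c': 3, 'd': 1},
    'b': {'d': 1, 'e': 2},
    'c': {'f': 2},
    'd': {'f': 6, 'g': 8},
    'e': {'g': 3},
    'f': {'t': 5},
    'g': {'t': 2},
}
dist, prev = closest_first(graph, 's')
print(dist['a'], dist['d'], dist['c'], dist['t'])   # 1 2 4 11 with these weights
```

The heap is what handles the overhead of deciding, at each stage, which remaining node is closest to s.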
The graph in figure 1 is called a layered network because it has five distinct zones of vertices:
{s}, {a, b}, {c, d, e}, {f, g}, and {t},
and if we wish to find the shortest path from node s to node t we must pass through one vertex from each zone. But, rather than go from s to t as before, we shall branch backwards from t going from zone to zone until we reach s.

In order to get to node t we must have come from either f or g. The costs involved are 5 for f → t and 2 for the edge g → t. Backing up one zone, we find that to reach nodes f or g, we had to come directly from c, d, or e. In order to go to t from d there is a choice, namely through f or g. The path through g is shorter, so we select that. The shortest paths from nodes in these two zones to t are shown in Table 1. The way to read this chart is to follow the ‘next’ links. For example, to go from d to t, we go from d to the next node under d, namely g. Then we look at the next node under g, which is t. Thus the shortest path from d to t is d → g → t and its cost is 10.

Table 1 - Distances to node t
At this point we know the shortest paths to t from the zone containing nodes f and g as well as from the one containing c, d, and e. In turn we find that the shortest path from b to t is through e rather than d, and the shortest path to t from a goes through c. The entries in Table 2 complete the task of finding a path from s to t.


Table 2 - Paths to t in a network
We should note several things at this point. First of all, we not only solved a single path problem, but the all-sources, single-destination path problem for a network of this type. Thus we got more for our effort than we initially wanted, just like the popular shortest path algorithms presented earlier. The computation involved was 12 additions, with no additional overhead to keep track of intermediate results.
Most of all, we used the solutions of subproblems to solve longer path problems. We progressed from zone to zone making decisions based upon our results from the previous zone. We were able to do this because of the following important fact.
Every portion of a shortest path is itself a shortest path.
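Here is a small sketch of that zone-by-zone backward computation. The zones and edge weights are assumptions chosen to agree with the distances quoted above (g → t = 2, f → t = 5, d → g → t = 10, and so on), since Figure 1 is not reproduced here.

```python
# Zones of the layered network, listed from s to t.
zones = [['s'], ['a', 'b'], ['c', 'd', 'e'], ['f', 'g'], ['t']]

# Assumed edge weights, consistent with the distances quoted in the text.
weight = {
    ('s', 'a'): 1, ('s', 'b'): 9,
    ('a', 'c'): 3, ('a', 'd'): 1,
    ('b', 'd'): 1, ('b', 'e'): 2,
    ('c', 'f'): 2, ('d', 'f'): 6, ('d', 'g'): 8, ('e', 'g'): 3,
    ('f', 't'): 5, ('g', 't'): 2,
}

cost = {'t': 0}      # shortest distance from each vertex to t
nxt = {'t': None}    # the 'next' links of Tables 1 and 2

# Branch backwards from t, filling in one zone at a time.
for zone, next_zone in zip(reversed(zones[:-1]), reversed(zones[1:])):
    for u in zone:
        cost[u], nxt[u] = min((weight[(u, v)] + cost[v], v)
                              for v in next_zone if (u, v) in weight)

# Read off a path by following the 'next' links, as described for Table 1.
node, path = 's', []
while node is not None:
    path.append(node)
    node = nxt[node]
print(cost['s'], path)   # 11 ['s', 'a', 'c', 'f', 't'] with these assumed weights
```

Each edge takes part in exactly one addition, which is where the count of 12 additions comes from.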
Suppose the graph in Figure 1 was an undirected graph rather than a layered network. We could still use subpaths to build a solution to the all-sources, single-destination path problem using methods similar to those of Dijkstra, Lee, or Moore that build paths out of subpaths. We still branch backward, but instead of filling in complete zones at each step, we add the node closest to our completed paths at each step.

For example, the closest node to t is g. Next come e and f which are 5 away from t along the paths e → g → t and f → t. Then we add the vertices labeled b and c that are 7 units away from t along b → e → g → t and c → f → t. The complete chart of paths to t is shown in Table 3.

Table 3 - Paths to node t in a graph

Again we were able to take advantage of the fact that every shortest path was made up of shortest subpaths. Another nice thing about our solution is that we only looked at each edge of the graph once. Compare this to the situation where one might enumerate and examine all of the paths through the graph. For a complete graph this means only O(n²) computational steps instead of an exponential number. And, since we were able to use information again and again, we saved time.
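A sketch of this variant, using the same assumed weights as the earlier sketches but treating every edge as undirected, could grow the set of finished vertices outward from t and record the 'next' link of Table 3 for each vertex as it is added:

```python
import heapq

# Same assumed weights as before, now treated as undirected edges.
edges = {
    ('s', 'a'): 1, ('s', 'b'): 9,
    ('a', 'c'): 3, ('a', 'd'): 1,
    ('b', 'd'): 1, ('b', 'e'): 2,
    ('c', 'f'): 2, ('d', 'f'): 6, ('d', 'g'): 8, ('e', 'g'): 3,
    ('f', 't'): 5, ('g', 't'): 2,
}
adj = {}
for (u, v), w in edges.items():
    adj.setdefault(u, {})[v] = w
    adj.setdefault(v, {})[u] = w

dist, nxt = {'t': 0}, {'t': None}
heap = [(0, 't')]
done = set()
while heap:
    d, u = heapq.heappop(heap)
    if u in done:
        continue
    done.add(u)                        # u is now the closest unfinished vertex to t
    for v, w in adj[u].items():
        if v not in dist or d + w < dist[v]:
            dist[v] = d + w
            nxt[v] = u                 # to head toward t from v, go to u next
            heapq.heappush(heap, (d + w, v))

# With these assumed weights, g enters first at distance 2, then e and f at 5,
# then b and c at 7, matching the order described in the text.
print(dist)
print(nxt)   # the 'next' links, one row per vertex, as in Table 3
```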
The other rather famous path problem is the all pairs shortest path problem, which is closely related to transitive closure. This also is solved by filling in a chart, except this chart will be of size O(n²) since there are exactly that many paths to find.
We begin by jotting down the distances between all of the pairs of vertices along paths that do not go through any other nodes. Let us take Figure 1 and turn it into an undirected graph. For this example, the shortest paths which go directly from one node to another and do not pass through other nodes appear in Figure 4.


Figure 4 – Short Paths Going Through No Nodes

These are, of course, just the weights from the graph of Figure 1. We can build from this and produce a table that gives all of the shortest paths that go through either no vertices or vertex s. This only changes the table in Figure 4 by adding a path between a and b of length 10. The next step is to allow paths that can pass through vertices s and a. If we continue on in this manner, things get interesting when we can go through nodes {s, a, b, c, d}. Now it is possible to go from a to b in two ways: a → s → b, a path with length 10, or a → d → b, a path with length 2.
What we did was to compare the shortest path from a to b which went only through {s, a, b, c} with one which went from a to d and then from d to b. We continue in this manner until we know the shortest paths going from any vertex to another passing through any of the nodes in the graph.
Here is the general method due to Floyd and Warshall. We first define subsets of the vertices in a graph as

A0 = ∅, A1 = {v1}, …, Ai = {v1, …, vi}, …, An = {v1, …, vn}.
Let us now define d(Ai, vj, vk) as the cost of a shortest path from vj to vk whose intermediate vertices all lie in the set Ai. Then the following equation provides this value for the next subset of vertices.
d(Ai+1, vj, vk) = minimum[ d(Ai, vj, vk), d(Ai, vj, vi+1) + d(Ai, vi+1, vk) ]

In other words, the shortest path is either one of length d(Ai, vj, vk), which does not go through vi+1, or one of length d(Ai, vj, vi+1) + d(Ai, vi+1, vk), which does go through vertex vi+1. This recursive computing procedure allows us to find all shortest paths connecting any two vertices in O(n³) steps, compared to the possibly exponential number of steps necessary if all paths were enumerated.
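The recurrence translates almost directly into three nested loops. In the sketch below the vertices are numbered so that Ai corresponds to allowing intermediate vertices 0 through i-1; the numbering and the weights are the same assumed values used in the earlier sketches, treated as undirected edges as in the example above.

```python
INF = float('inf')

def all_pairs(n, edges):
    """Floyd-Warshall: d[j][k] holds the shortest j-to-k distance using only
    the intermediate vertices allowed so far (the sets A0, A1, ..., An)."""
    # Start with the direct-edge distances, as in the table of Figure 4.
    d = [[0 if j == k else INF for k in range(n)] for j in range(n)]
    for (j, k), w in edges.items():
        d[j][k] = min(d[j][k], w)
        d[k][j] = min(d[k][j], w)          # undirected graph
    for i in range(n):                     # now also allow vertex i in between
        for j in range(n):
            for k in range(n):
                # Either avoid vertex i, or go j -> i -> k through it.
                d[j][k] = min(d[j][k], d[j][i] + d[i][k])
    return d

# Assumed numbering and weights: s=0, a=1, b=2, c=3, d=4, e=5, f=6, g=7, t=8.
edges = {
    (0, 1): 1, (0, 2): 9, (1, 3): 3, (1, 4): 1, (2, 4): 1, (2, 5): 2,
    (3, 6): 2, (4, 6): 6, (4, 7): 8, (5, 7): 3, (6, 8): 5, (7, 8): 2,
}
d = all_pairs(9, edges)
print(d[1][2])   # a to b: 2, going through d, as in the example above
print(d[0][8])   # s to t over the undirected graph: 10 with these assumed weights
```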
