Modified Held-Karp Time Complexity

Now that we have an established cost function, we can attempt to evaluate the algorithm's running time in terms of it. To reiterate:

Keeping this expressed in terms of the cost function, we can create a table to see how this recurrence relation grows as $n$ increases.

$n$: 0, 1, 2, 3, 4, 5

The working for this table is shown below, but you can easily keep going to follow the pattern for higher values of $n$ (a short script that reproduces the tabulation follows the working):

$n = 0$:

$n = 1$:

$n = 2$:

$n = 3$:

$n = 4$:

$n = 5$:
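If you would rather reproduce this tabulation programmatically than by hand, a minimal sketch is below. The recurrence used here is purely a hypothetical stand-in of the form $T(n) = n \cdot T(n-1) + 1$ with $T(0) = 1$; swap in whatever base case and step your own expansion of the cost function produced.

```python
# Tabulate a recurrence T(n) for small n.
# NOTE: the base case and step below are hypothetical stand-ins, not
# necessarily the exact recurrence derived in the text -- replace them
# with your own coefficients.

def tabulate(max_n):
    values = []
    t = 1                      # hypothetical base case T(0) = 1
    for n in range(max_n + 1):
        if n > 0:
            t = n * t + 1      # hypothetical step T(n) = n * T(n-1) + 1
        values.append(t)
    return values

if __name__ == "__main__":
    for n, t in enumerate(tabulate(5)):
        print(f"n = {n}: T(n) = {t}")
```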

Recurrence Relation

Just looking at the coefficients for a second, we have the following recurrence relation:

It is easy to see that this recurrence relation implies that the running time for the algorithm is factorial. After all, the recurrence relation for $n!$ itself is $n! = n \cdot (n-1)!$.
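To make that slightly more precise: assuming (as the coefficient pattern above suggests) that each term of the recurrence is at least $n$ times the previous one, with $T(n)$ standing for the operation count, unrolling gives an immediate factorial lower bound:

$$T(n) \;\ge\; n\,T(n-1) \;\ge\; n(n-1)\,T(n-2) \;\ge\; \cdots \;\ge\; n! \cdot T(0),$$

so under that assumption $T(n) = \Omega(n!)$.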

Attempting to Find an Explicit Formula

Now clearly it is of interest to solve this recurrence relation and find a non-recursive formula, and here we run into a bit of an issue. If the relation were a linear recurrence with constant coefficients or a typical divide-and-conquer recurrence, it would likely be solvable by well-known methods such as telescoping or the Master Theorem, but this is not the case.

Theorem 1

While trying to find a way to solve this recurrence relation, I arrived at the conjecture stated below, so let us try to prove it.

For $n \geq 0$, the number of operations used to solve an $n$-sized visit set TSP by the above algorithm (ignoring the cost function) satisfies the formula:

First let us work with the RHS to rearrange it a bit into a more convenient form:

Base Case

When $n = 0$, the base case of the recurrence relation fixes the value directly. The above formula evaluates to that same value at $n = 0$, so the base case is true.

Induction Step

Pick an arbitrary $n \geq 1$. Assume that the theorem holds for any TSP with a visit set of size $n - 1$. Thus, it is assumed that the conjectured formula holds when evaluated at $n - 1$.

Proof by induction requires showing that the formula also holds for a visit set of size $n$.

Next, we can combine the recurrence above with the induction hypothesis as follows:

(from recurrence relation)

Thus, the theorem holds by the principle of mathematical induction.
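As a sanity check alongside the induction, the conjectured closed form can also be compared against the recurrence numerically for small $n$. The sketch below uses hypothetical stand-ins (the step $T(n) = n\,T(n-1) + 1$ and the candidate formula $\sum_{k=0}^{n} n!/k!$); replace both with your own expressions.

```python
# Numerically compare a recurrence with a conjectured closed form.
# Both the recurrence step and the candidate formula are hypothetical
# stand-ins used purely for illustration.
from math import factorial

def recurrence(n):
    t = 1                      # hypothetical base case T(0) = 1
    for i in range(1, n + 1):
        t = i * t + 1          # hypothetical step T(i) = i * T(i-1) + 1
    return t

def closed_form(n):
    # candidate explicit formula: sum of n!/k! for k = 0..n
    return sum(factorial(n) // factorial(k) for k in range(n + 1))

if __name__ == "__main__":
    for n in range(10):
        assert recurrence(n) == closed_form(n), f"mismatch at n = {n}"
    print("recurrence and candidate closed form agree for n = 0..9")
```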

Theorem 2

Looking all over the web for this, the only place I could find any reference to this sequence is here, which provides us with a relation for the coefficients. That relation can be rearranged into the closed form stated below, but just to be sure that it works for every case, we should probably prove it too.

For $n \geq 1$, the number of operations used to solve an $n$-sized visit set TSP by the above algorithm (ignoring the cost function) satisfies the formula:

Case 1

This is the case where $n = 1$. As seen in the table above, the recurrence gives a value at $n = 1$, and the proposed formula predicts exactly that value. Thus, the base case holds.

Case 2

This is the case where $n \geq 2$. Because of the floor function, if it can be shown that the following difference is small enough, it will probably be possible to prove that this case works as well.

Lemma 1

When $n \geq 2$, the following must be true:

This sum looks like it might be related to the power series for $e^x$ at $x = 1$. We already know the power series for $e^x$, a proof of which can be found here:
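For completeness (since the linked proof is not reproduced here), the standard expansion being referred to is:

$$e^{x} \;=\; \sum_{k=0}^{\infty} \frac{x^{k}}{k!}, \qquad \text{so at } x = 1: \quad e \;=\; \sum_{k=0}^{\infty} \frac{1}{k!}.$$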

It therefore follows that:

Since we know the explicit formula from the first theorem, we can substitute both the power series for $e$ and that formula into our definition of the difference:

(by definition)

(power series for $e$)

The lemma is true.

Lemma 2

When $n \geq 2$, it is true that:

This is easily proven using the first lemma:

(Lemma 1)

The upper bound above is an infinite geometric series with common ratio strictly less than 1, so the usual formula $\frac{a}{1-r}$ for its sum can be used:
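Writing the geometric-series step out explicitly, with first term $a$ and common ratio $r$ (the exact values depend on how the tail was bounded above, so treat the concrete ratio below purely as an illustration):

$$a + ar + ar^{2} + \cdots \;=\; \frac{a}{1-r} \qquad (|r| < 1).$$

For example, if the bounding series had first term and ratio both equal to $\frac{1}{n+1}$, its sum would be $\frac{1/(n+1)}{1 - 1/(n+1)} = \frac{1}{n}$, which is at most $\frac{1}{2}$ once $n \geq 2$.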

The lemma is true.

Lemma 3

If $n \geq 2$, the difference from Lemma 1 must lie strictly between 0 and 1.

From Lemma 1, it is clear that this difference is positive.

Then, by Lemma 2, the difference is also bounded above by a quantity that is strictly less than 1 for $n \geq 2$.

The lemma is true.

Conclusion

Thus, the proof for this theorem is complete for the case $n \geq 2$:

By the definition of the floor function, the floored value differs from its argument by less than 1. Since the recurrence relation is set up so that it always produces an integer, and by Lemma 3 the difference lies strictly between 0 and 1, it must hold that the floored closed form equals the value given by the recurrence.
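The floor-function fact being relied on in this conclusion is elementary but worth stating: if $m$ is an integer and $x$ is a real number with $0 < x - m < 1$, then

$$m < x < m + 1 \quad\Longrightarrow\quad \lfloor x \rfloor = m,$$

which is exactly why an integer-valued recurrence together with a difference strictly between 0 and 1 pins the floored expression down to the recurrence's value.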

Time Complexity

Now that we have proved this works for the coefficients of the cost function, we have an explicit formula for the number of operations. The floor function here just absorbs the small fractional difference so that we get an integer output. Subbing in our known time complexity for the cost function, we get a final Big O for the original implementation of our modified Held-Karp with no caching of its own Dijkstra outputs. Note that it should already have been obvious from the recurrence relation itself that this algorithm runs in factorial time, even before finding an explicit formula.
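One small supporting observation, stated under the assumption that the closed form really is of the shape $\lfloor c \cdot n! \rfloor$ for some constant $c$ (as the floor-and-constant structure above suggests): such an expression sits within a constant factor of $n!$, so multiplying it by the per-operation cost immediately gives a factorial-order Big O.

$$c \cdot n! - 1 \;<\; \lfloor c \cdot n! \rfloor \;\le\; c \cdot n! \quad\Longrightarrow\quad \lfloor c \cdot n! \rfloor = \Theta(n!).$$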

We have already verified that this is correct given that the recurrence relation is correct, but we can also check it against general intuition. If we look back at Part 1, we can get the time taken to run the unoptimised modified Held-Karp on our data for different values of $n$. The time taken per operation should be a constant for any particular predefined graph, meaning that if our Big O time complexity is correct, then the measured time divided by the predicted operation count should stay roughly constant.1

$n$: 5, 6, 7, 8, 9

As we can see, this proportionality is fairly constant, so it would probably be safe to assume that the worst-case time complexity for the unoptimised modified Held-Karp algorithm is the factorial-order bound derived above, or at least something pretty close to it.
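If you want to repeat the proportionality check yourself, a small sketch is below. The timings dictionary is a hypothetical placeholder (the real values are in Part 1), and $n!$ is used only as a factorial-order proxy for the predicted operation count; the point is simply that time divided by the predicted count should stay roughly flat if the complexity estimate is right.

```python
# Check whether measured runtimes grow like the predicted operation count.
# The timing values below are hypothetical placeholders -- substitute the
# execution times actually recorded in Part 1.
from math import factorial

timings = {5: 0.0123, 6: 0.0741, 7: 0.52, 8: 4.2, 9: 38.0}  # seconds, hypothetical

def predicted_ops(n):
    # Factorial-order proxy for the predicted operation count.
    return factorial(n)

for n, seconds in sorted(timings.items()):
    ratio = seconds / predicted_ops(n)
    print(f"n = {n}: time / predicted ops = {ratio:.3e}")
```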

Footnotes

  1. Note that this ratio would be rather unreliable due to the decimal inaccuracy of my recorded execution times (4dp).