


Question

It's kind of a weird question to ask, but I have been asking myself this: is there a limit to the number of ways to find a solution to a particular problem?

Take the simplest problem: sorting a sequence of numbers.

Now, there are a lot of algorithms to sort. Is there a limit to such algorithms? Technically, you can always flip a bit and flip it back again, so in that sense there is an infinite number of ways.

But still, for a problem as simple as sorting, surely there must be a finite number of meaningfully different ways to solve it.

In computer science we have computability theory, which tells us whether a problem is solvable, and complexity theory, which tells us the time and space needed to solve it. Similarly, is there something that can tell us the number of ways a particular problem can be solved?

The motivation for this question is that I'm amazed by how many sorting algorithms we have. Such a simple problem, and there are so many solutions!

Note: I asked this on Stack Overflow. It seems we first need to define precisely how we would differentiate between two algorithms. Let's say we do it by somehow recognizing steps that are useless; that way we can tell whether two algorithms are doing the same thing. What then? Take the example of sorting: all you can do is move one number from one index to another. With just this operation, surely there must be a finite number of algorithms that solve the sorting problem.
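To make the restricted operation set concrete, here is a sketch in Python (my choice of language; the question doesn't name one) of sorting where the only thing the algorithm is allowed to do is move one number from one index to another:

```python
def insertion_sort_by_moves(xs):
    """Sort a list using a single primitive: move the element at one
    index to another index (a pop followed by an insert)."""
    xs = list(xs)
    for i in range(1, len(xs)):
        v = xs[i]
        # find where xs[i] belongs within the already-sorted prefix
        j = 0
        while j < i and xs[j] <= v:
            j += 1
        # the single allowed operation: move index i to index j
        xs.insert(j, xs.pop(i))
    return xs
```

Even with this one primitive, as the answer below argues, nothing stops you from inserting extra moves that cancel each other out, which is where the counting problem starts.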

Explanation / Answer

As far as I can see, there are always infinitely many ways of doing something, even if you try to define and disallow redundant steps. Let's take the example of sorting. Even if you restrict to the operations of comparing and swapping numbers, there are infinitely many ways of sorting a list: quicksort the list; swap the first two numbers and then quicksort; swap the first two numbers twice and then quicksort; ...; swap the first two numbers n times, then quicksort; ...
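That infinite family is easy to exhibit directly. A sketch in Python (again, my choice of language): one parameterized program per natural number n, each syntactically different, each computing exactly the same sorting function:

```python
def sort_after_n_swaps(xs, n):
    """Swap the first two elements n times, then sort.
    Every choice of n gives a 'different' algorithm, yet all of them
    compute exactly the same function as plain sorting."""
    xs = list(xs)
    for _ in range(n):
        if len(xs) >= 2:
            xs[0], xs[1] = xs[1], xs[0]
    return sorted(xs)

# every member of this infinite family agrees with sorted():
data = [5, 3, 8, 1]
assert all(sort_after_n_swaps(data, n) == sorted(data) for n in range(10))
```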

OK, so you try to disallow that as "wasting time". But then there are still infinitely many ways: quicksort; mergesort if the list has length two, otherwise quicksort; mergesort if the list has length three, otherwise quicksort; ...
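The second family can be sketched the same way. Assuming textbook definitions of quicksort and mergesort, each choice of the length threshold k below gives a program with no wasted steps, yet all of them compute the same function:

```python
def quicksort(xs):
    """Textbook quicksort on a copy of the input."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def mergesort(xs):
    """Textbook mergesort on a copy of the input."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

def hybrid(xs, k):
    """Mergesort if the list has length k, otherwise quicksort:
    a distinct program for every k, but every step it takes is useful,
    and all of them sort correctly."""
    return mergesort(xs) if len(xs) == k else quicksort(xs)
```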

OK, so you try to disallow that on the grounds that quicksort and mergesort always produce the same answer, so the test on the length of the list is redundant. But this is too coarse a notion of equivalence, since now there's exactly one algorithm for every problem.