
I'm having trouble finding the complexity of recursive methods. I have an algorithm that sorts the elements of an array in ascending order. Basically, what I did is write down each step in the algorithm and the best/worst case number of executions, then took the sum of each case and found Big-O/Big-Omega. But I'm not sure how to count the recursive call: do I put down the number of times it is called inside the method, or the number of times it is called in total (which may vary)?

So suppose I have an array A = [5, 4, 3, 2, 1] (the worst case, if I'm not mistaken). I start by going through the array once in the first while loop (see algorithm below), then again backwards in the second while loop, and then comes the recursive call. In total, my method is called once (the original call), then a second time, and then a third time (which does not enter the if-statement). So that's 3 calls for an array of n = 5 elements, but inside the method itself the recursive call appears only once. I'm so confused! :S

Also, what is the difference when looking at time complexity vs space complexity? Any tips/advice would be helpful.

Thanks!

Here is the given algorithm:

Algorithm MyAlgorithm(A, n)
    Input: Array A of n integers
    Output: Possibly modified array A
        done ← true
        j ← 0
        while j ≤ n - 2 do
            if A[j] > A[j + 1] then
                swap(A[j], A[j + 1])
                done ← false
            j ← j + 1
        end while
        j ← n - 1
        while j ≥ 1 do
            if A[j] < A[j - 1] then
                swap(A[j - 1], A[j])
                done ← false
            j ← j - 1
        end while
        if ¬done then
            MyAlgorithm(A, n)
        else
            return A

And here is my solution:

Statement                Worst Case         Best Case
------------------------------------------------------------------
done = true                  1                  1
j = 0                        1                  1
j <= n-2                     n                  n
A[j] > A[j+1]                n-1                n-1
swap(A[j], A[j+1])           n-1                0
done = false                 n-1                0
j = j + 1                    n-1                n-1
j = n - 1                    1                  1
j >= 1                       n-1                n-1
A[j] < A[j-1]                n-1                n-1
swap(A[j-1], A[j])           n-1                0
done = false                 n-1                0
j = j - 1                    n-1                n-1
if ¬done                     1                  1
MyAlgorithm(A, n)            1                  0
return A                     1                  1
------------------------------------------------------------------
Total:                       10n-2                6n
Complexity:                  f(n) is O(n)         f(n) is Omega(n)

Also, this is my first post here on Stack Overflow, so I'm not sure if I formatted everything correctly.

Comments:

  • Your algorithm, as written, does not make it clear in what way sorting A has been made simpler prior to the recursive call. One big clue that your analysis (or the algorithm) is wrong is that you have determined that your comparison-based sort runs in worst-case O(n) time, something that is provably wrong. Commented Jan 30, 2014 at 19:24
  • MyAlgorithm(A, n) = O(1) is definitely wrong. Also, you are missing the proof that the family of inputs you examine (descendingly sorted sequences) is actually the worst case. Commented Jan 30, 2014 at 19:32
  • The algorithm was given, I did not write it myself. Also, Niklas, I did not write O(1), I found O(n). Commented Jan 30, 2014 at 19:44
  • One incorrect way to post a question on SO is to put URGENT in the title. Commented Jan 30, 2014 at 20:02
  • By the way, a recursive bubble sort is a bad idea: if your language doesn't optimize for tail recursion you'll get a stack overflow on any decent-size array (if this weren't just pseudo-code). Commented Jan 30, 2014 at 21:33

1 Answer

It looks like this algorithm is a variation on bubble sort that makes alternating forward and backward passes (sometimes called a cocktail or bidirectional bubble sort). Assuming it works correctly, it should have a performance of O(n^2).

To analyze the performance, you should note that the body of the procedure (absent the recursion) takes O(n), so the total time taken by the algorithm is O(R n), where R is the number of times the recursion is called before it finishes. Since each bubble pass should leave at least one element at a final, sorted location, R<=n/2, therefore the overall algorithm is O(n^2) worst case.
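To make that concrete, here is a direct Python translation of the pseudocode (Python is my choice for illustration; the original is pseudocode) that also counts how many times the procedure is invoked. On the reversed array from the question it runs exactly 3 times, consistent with the R <= n/2 (plus one final no-swap pass) reasoning above:

```python
def my_algorithm(a):
    """Recursive translation of MyAlgorithm: one forward and one backward
    bubble pass, then a tail-recursive call if any swap occurred.
    Returns the total number of times the procedure was invoked."""
    calls = 1
    n = len(a)
    done = True
    # forward pass: bubbles the largest unsorted element toward the end
    for j in range(n - 1):
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            done = False
    # backward pass: bubbles the smallest unsorted element toward the front
    for j in range(n - 1, 0, -1):
        if a[j] < a[j - 1]:
            a[j - 1], a[j] = a[j], a[j - 1]
            done = False
    if not done:
        calls += my_algorithm(a)
    return calls

a = [5, 4, 3, 2, 1]
calls = my_algorithm(a)
# for this input the procedure runs 3 times in total, and a ends up sorted
```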

Unfortunately, the way recursion is used in your algorithm is not particularly useful for determining its performance: you could easily replace the recursion with an outer while loop around the two bubble passes which make up the rest of the procedure body (which might have avoided most of your confusion...).
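As a sketch of that rewrite (again in Python, for illustration only), the tail recursion becomes an outer while loop and the behavior is unchanged:

```python
def my_algorithm_iterative(a):
    """Same two bubble passes as MyAlgorithm, with the tail-recursive
    call rewritten as an outer while loop."""
    n = len(a)
    done = False
    while not done:  # this loop replaces the recursive call
        done = True
        # forward pass
        for j in range(n - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                done = False
        # backward pass
        for j in range(n - 1, 0, -1):
            if a[j] < a[j - 1]:
                a[j - 1], a[j] = a[j], a[j - 1]
                done = False
    return a
```

Seen this way, it is just an O(n) body repeated at most O(n) times, which is the usual way a nested-loop O(n^2) sort is analyzed.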

Algorithms for which a recursive analysis is useful typically have some kind of divide-and-conquer structure, where the recursive procedure calls solve a smaller sub-problem. This is conspicuously lacking in your algorithm: the recursive call is always the same size as the original.
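For contrast, here is a minimal merge sort (an illustration, not part of the question) where each recursive call really does operate on a smaller sub-problem, so a recurrence such as T(n) = 2T(n/2) + O(n) applies and gives O(n log n):

```python
def merge_sort(a):
    """Divide-and-conquer sort: recurse on two halves, then merge them."""
    # base case: 0 or 1 elements is already sorted
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # strictly smaller sub-problem
    right = merge_sort(a[mid:])  # strictly smaller sub-problem
    # merge the two sorted halves in linear time
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```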


4 Comments

+1 for agreeing that this algorithm is worthless for recursion because it doesn't demonstrate a divide-and-conquer technique. It should perhaps call itself recursively on Array, A, from Start+1 to End-1 until Start >= End because each call looks to guarantee placement of 2 additional sorted items. If that were the case you could solve the recurrence as F(n) = F(n-2) + 2n or something
Thanks for your answers! I don't think this algorithm was meant to be efficient, it's just meant for practice. There are more questions associated with this algorithm which I did not include, inquiring if the algorithm can be improved.
One more thing, would that mean that everything I put in my Worst Case column would have to be multiplied by the number of recursive calls (i.e. n/2)?
It probably makes more sense to multiply everything afterwards, after you've summed it up. That is: compute the complexity of the procedure minus the recursion, then use that as a component when reasoning about the overall performance.
