
I have come up with the following algorithm for a task. It is primarily the for loop in the last function that is making me unsure.

function TOP-DOWN-N(C, n, p)
  let N[1...n, 1...C] be a new array initialized to -1 for all indices
  return N(C, n)

function N(C, n)
  if C = 0 then
    return 1
  elif C < 0 or n < 1 then
    return 0
  elif N[C, n] >= 0 then
    return N[C, n]
  else
    r = 0
    for i = 0 to n - 1 do
      r += N(C - p[n - i], n - (i + 1))
    N[C, n] = r
    return r
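In case a runnable version helps: here is a Python sketch of my reading of the recurrence (that it counts subsets of p summing exactly to C, where each call picks the highest-index element used and recurses on the prefix before it). The name count_subsets and the 0-based indexing are my own.

```python
from functools import lru_cache

def count_subsets(C, p):
    """Memoized sketch of the pseudocode above (assumed intent: count
    subsets of p summing exactly to C; p is 0-indexed here)."""
    @lru_cache(maxsize=None)
    def N(c, n):
        # n = number of elements of p still available, i.e. p[0..n-1]
        if c == 0:
            return 1          # the empty subset completes the sum
        if c < 0 or n < 1:
            return 0          # overshot, or no elements left
        # pick p[j] as the highest-index element used, then recurse
        # on the prefix p[0..j-1]
        return sum(N(c - p[j], j) for j in range(n))
    return N(C, len(p))
```

For example, count_subsets(3, [1, 2, 3]) returns 2, for the subsets {3} and {1, 2}.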
  • As for the analysis, you're correct that at most O(nC) calls can possibly take above O(1) time, but you still have to count the ones that take O(1) time because there could be a lot of them. Commented Mar 2, 2020 at 20:44
  • The ones that take O(1) time will not change the fact that the runtime of the algorithm is O(nC), correct? I will type it out in pseudocode now. Commented Mar 2, 2020 at 20:52
  • If there are O(nC) calls where the loop happens, and each of them has a loop with O(n) iterations, then the total complexity should be O(n^2 C), shouldn't it? That's assuming the loop body takes constant time, which is not strictly true but is a valid simplification because the recursive calls made from within the loop are already accounted for. But nonetheless, the loop itself does O(n) function calls and O(n) additions. Commented Mar 2, 2020 at 20:55
  • You might be correct. What is throwing me off is that if a function call terminates because N[C,n] >= 0, then it will not reach the for loop. Commented Mar 2, 2020 at 21:15
  • For the N[C,n] >= 0 condition to be true, it must have been false at some point in the past with the same values of C and n, otherwise that array cell would never have been written to. Commented Mar 2, 2020 at 21:16

1 Answer


Let's ignore the fact that this algorithm is implemented recursively. In general, if a dynamic programming algorithm is building an array of N results, and computing each result requires using the values of k other results from that array, then its time complexity is in Ω(Nk), where Ω indicates a lower bound. This should be clear: it takes Ω(k) time to use k values to compute a result, and you have to do that N times.

From the other side, if the computation doesn't do anything asymptotically more time-consuming than reading k values from the array, then O(Nk) is also an upper bound, so the time complexity is Θ(Nk).
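As a toy illustration of that counting argument (the shape of the DP in general, not this specific problem): filling a table of N results where each result reads k earlier entries performs exactly Nk reads.

```python
def dp_reads(num_results, k):
    """Toy DP (not the asker's algorithm): fill num_results table
    entries, each computed from the k previous entries, and count
    how many table reads that takes."""
    table = [1] * k  # k seed values so the first result has k predecessors
    reads = 0
    for _ in range(num_results):
        total = 0
        for j in range(1, k + 1):
            total += table[-j]  # read one of the k previous entries
            reads += 1
        table.append(total)
    return reads  # exactly num_results * k, matching the Theta(Nk) bound
```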


So, by that logic we should expect that the time complexity of your algorithm is Θ(n^2 C), because it builds an array of size nC, computing each result uses Θ(n) other results from that array, and that computation is not dominated by something else.

However, your algorithm has an advantage over an iterative implementation because it doesn't necessarily compute every result in the array. For example, if the number 1 isn't in the array p then your algorithm won't compute N(C-1, n') for any n'; and if the numbers in p are all greater than or equal to C, then the loop is only executed once and the running time is dominated by having to initialize the array of size nC.

It follows that Θ(n^2 C) is the worst-case time complexity, and the best-case time complexity is Θ(nC).
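One way to see the best-case behaviour concretely (a sketch with a hypothetical helper name, using the same subset-sum reading of the recurrence as in the question): instrument the memo and count how many of the nC cells are actually filled.

```python
def touched_states(C, p):
    """Hypothetical instrumentation: run the memoized recursion and
    return how many distinct (c, n) memo cells were actually filled,
    out of the len(p) * C cells an iterative version would compute."""
    memo = {}
    def N(c, n):
        if c == 0:
            return 1
        if c < 0 or n < 1:
            return 0
        if (c, n) not in memo:
            memo[(c, n)] = sum(N(c - p[j], j) for j in range(n))
        return memo[(c, n)]
    N(C, len(p))
    return len(memo)
```

With p = [5, 5, 5] and C = 5, every loop term hits a base case immediately, so only one of the 15 possible cells is ever filled.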
