I was thinking about a recursive algorithm (it's a theoretical question, so the programming language doesn't matter). It consists of finding the minimum of a set of numbers.
My idea is this: let "n" be the number of elements in the set, and rearrange the set as:
- (a, (b, c, ..., z) ).
The function moves from left to right, and the first element is assumed to be the minimum in the first phase (it is, of course, the 0-th element, a). The next steps are defined as follows:
- (a, min(b, c, ..., z)): check whether a is still the minimum or whether b should be taken as the new minimum; then (a or b, min(c, d, ..., z)): another check; then (a or b or c, min(d, e, ..., z)): another check; and so on.
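Just to make the decomposition concrete, here is how I picture it on a small example set (Python is used purely as illustration, since the language doesn't matter; the built-in min stands in for the "minimum of the rest" part):

    # Checking the decomposition idea on a concrete set; Python's built-in min
    # plays the role of "minimum of the remaining elements".
    s = [7, 3, 9, 1, 5]                        # the set (a, b, c, d, e)
    step1 = min(s[0], min(s[1:]))              # (a, min(b, c, d, e))
    step2 = min(min(s[0], s[1]), min(s[2:]))   # (a or b, min(c, d, e))
    print(step1, step2, min(s))                # all three print 1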
I think the theoretical pseudocode may be as follows:
f(i-th element) {
    // base case
    if I've reached the last element, assume it's a possible minimum (at that point the only comparison left is whether y < z), and return a value so the recursive calls stop.
    // inductive step
    if (i-th element < f(next element)) {
        /* just assume the current element is the current minimum */
    }
}
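To see whether I'm on the right track, I also tried to turn this into an actual function (again, Python only as an example language; the name f and the extra index parameter i are just my own choices). I'm not sure the base case here is formalized the right way:

    # My attempt at a concrete version of the pseudocode above; the index i
    # marks the element currently being looked at.
    def f(s, i=0):
        # base case: i is the last position, so there is nothing left to
        # compare and s[i] is the only remaining candidate for the minimum
        if i == len(s) - 1:
            return s[i]
        # inductive step: compare the i-th element with the minimum of the rest
        rest = f(s, i + 1)
        return s[i] if s[i] < rest else rest

    print(f([7, 3, 9, 1, 5]))   # prints 1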
The base case is the part I'm having trouble with; I'm not sure how to formalize it properly. I think I've understood the basic idea behind it: it's basically what I've written above, right?
Does what I've written so far make sense? Sorry if it's not clear, but I'm a beginner studying recursion for the first time, and I personally find it confusing, so I've tried my best to explain it. If it's still unclear, let me know and I'll try to explain it with different words.