I've seen a lot of different implementations of this algorithm, but I'm wondering if there are ways to improve its efficiency beyond just making the search binary. I've designed this particular version so that the edges and midpoint of the array/list are checked immediately for the key being searched for, so as to avoid looping through the search when the key you're looking for is just the first, middle, or last element.
def searchRB(the_array, the_key, imin, imax):
    print("searching")
    found = False
    # Give up if the range is empty or the key lies outside the values at its edges.
    if imin > imax or the_key < the_array[imin] or the_key > the_array[imax]:
        return found
    else:
        imid = imin + ((imax - imin) // 2)
        # Check the midpoint and both endpoints of the current range before recursing.
        if the_array[imid] == the_key or the_array[imin] == the_key or the_array[imax] == the_key:
            found = True
            return found
        elif the_array[imid] > the_key:
            return searchRB(the_array, the_key, imin, imid - 1)
        elif the_array[imid] < the_key:
            return searchRB(the_array, the_key, imid + 1, imax)
        else:
            return found
For example, if you were looking for the number 1 in a list of 1-100, this version would find it on the first call, unlike some other implementations.
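Concretely, a quick sketch of what I mean, using the function above on a sorted list with 0 and len(nums) - 1 as the initial bounds:

nums = list(range(1, 101))
print(searchRB(nums, 1, 0, len(nums) - 1))    # True: left edge, found on the first call
print(searchRB(nums, 100, 0, len(nums) - 1))  # True: right edge, also found immediately
print(searchRB(nums, 50, 0, len(nums) - 1))   # True: midpoint of the first range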
However, I'm not sure whether this actually improves efficiency at all (except in certain edge cases), or whether checking the first, middle, and last values in the list/array actually becomes detrimental once you recurse and have to check those three values on every call.
Is this a good or a bad implementation of this type of algorithm, or am I just splitting hairs?
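A rough way I imagine measuring it (only a sketch; I'd remove the print("searching") line from searchRB first, since the I/O would swamp the timing) would be to compare it against a plain recursive binary search:

import random
import timeit

def plain_binary_search(the_array, the_key, imin, imax):
    # Standard recursive binary search with no extra endpoint checks.
    if imin > imax:
        return False
    imid = imin + ((imax - imin) // 2)
    if the_array[imid] == the_key:
        return True
    elif the_array[imid] > the_key:
        return plain_binary_search(the_array, the_key, imin, imid - 1)
    else:
        return plain_binary_search(the_array, the_key, imid + 1, imax)

nums = list(range(10**6))
keys = [random.randrange(len(nums)) for _ in range(1000)]

# Assumes searchRB from above, with its print("searching") line removed.
for fn in (searchRB, plain_binary_search):
    t = timeit.timeit(lambda: [fn(nums, k, 0, len(nums) - 1) for k in keys], number=10)
    print(fn.__name__, t)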
In the limit (as the_array gets large), recognizing the endpoints quickly doesn't benefit you: the number of possible endpoints is quite small compared to the number of elements, so you waste extra time on the average element checking for a rare case. Also, the found variable isn't useful: just return False, True and False (respectively) where you now return found, and delete the two lines found = <bool-const>.
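For example, the same function with found removed might look like this (just a sketch; the search logic is otherwise unchanged):

def searchRB(the_array, the_key, imin, imax):
    print("searching")
    # Empty range, or key outside the values at the edges: it cannot be present.
    if imin > imax or the_key < the_array[imin] or the_key > the_array[imax]:
        return False
    imid = imin + ((imax - imin) // 2)
    # Same edge/midpoint check as before, returning the booleans directly.
    if the_array[imid] == the_key or the_array[imin] == the_key or the_array[imax] == the_key:
        return True
    elif the_array[imid] > the_key:
        return searchRB(the_array, the_key, imin, imid - 1)
    elif the_array[imid] < the_key:
        return searchRB(the_array, the_key, imid + 1, imax)
    else:
        return False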