I have recently started learning programming (in Python). I have two pieces of code that use while loops:
a = 100000000

# piece of code 1
while a > 0:
    a -= 10
print("done")

a = 100000000  # reset so piece 2 starts from the same value

# piece of code 2
while True:
    a -= 10
    if a <= 0:
        print("done")
        break
Both are functionally equivalent, i.e. they perform essentially the same task. Out of curiosity, I measured how long each version takes to run using the time module. The results were:
piece of code 1: 0.99 s
piece of code 2: 0.89 s
They show essentially the same performance, although piece of code 2 was slightly faster. That is fine, since the difference is basically irrelevant even for very large numbers. However, this is somewhat unexpected to me, as I believed the first while loop executes fewer operations per iteration. Could someone please explain why the second piece of code is slightly faster?
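As a side note on methodology, the standard timeit module is a more reliable way to compare two snippets than timing them by hand with time, since it lets you repeat runs and take the best. A minimal sketch (the function names are my own, and a smaller value of a is used here so the demo finishes quickly):

```python
import timeit

def loop_gt():
    a = 10_000_000  # smaller than in the question so the demo runs quickly
    while a > 0:
        a -= 10

def loop_le():
    a = 10_000_000
    while True:
        a -= 10
        if a <= 0:
            break

# best of 3 runs each, to reduce the influence of background noise
t1 = min(timeit.repeat(loop_gt, number=1, repeat=3))
t2 = min(timeit.repeat(loop_le, number=1, repeat=3))
print(f"piece 1: {t1:.3f} s, piece 2: {t2:.3f} s")
```

Taking the minimum over several repeats filters out one-off slowdowns caused by other processes, which can easily account for a 0.1 s difference.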
The first loop evaluates > just as many times as the second one evaluates <=. Unless measurement uncertainty is the explanation, I could imagine that > is implemented in terms of <=. I re-ran the timing with a different value of a and got similar results. But in any case, when the results are always within the same order of magnitude, there's nothing really interesting happening here.
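One way to see that the two loops do comparable work per iteration is to inspect their bytecode with the standard dis module. A sketch (the function names loop_gt and loop_le are my own):

```python
import dis

def loop_gt(a):
    # piece of code 1: condition tested in the while header
    while a > 0:
        a -= 10

def loop_le(a):
    # piece of code 2: condition tested by an explicit if/break
    while True:
        a -= 10
        if a <= 0:
            break

# both compile to the same COMPARE_OP instruction,
# just with a different operator (> vs <=)
dis.dis(loop_gt)
dis.dis(loop_le)
```

In the disassembly, each executed iteration performs one subtraction, one comparison, and one conditional jump in both versions, which is consistent with the near-identical timings.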