I am writing some code in Python to find the first natural number that does not appear in the first billion digits of pi. Here's what I've written:
import datetime
from tqdm import tqdm

def findpi(end):
    notin = []  # every number not found in the digits
    stop = 0    # the first number that was not found
    s1 = datetime.datetime.now()
    with open(r"C:\Users\shamm\Desktop\Text Documents\1 BILLION Digits of pi.txt", 'rt') as p:
        pi = p.read()  # read() already returns a str
    for i in tqdm(range(1, end + 1)):
        if str(i) not in pi:
            if notin == []:
                stop = i
            notin.append(i)
    s2 = datetime.datetime.now()
    tdelta = s2 - s1
    ts = tdelta.total_seconds()
    return [notin, stop, ts]

pi = findpi(1000000)
print("Not in:", pi[0])
print("Last:", pi[1])
print("Time taken:", pi[2])
It works fine for small numbers, but when I tried it on the first million natural numbers, the code hit a sudden slowdown. For the first 10 seconds it runs at about 10k iterations/s, then it drops to 1k iterations/s. I tried a larger input of 10 million and the same thing happens: it holds 10k it/s for only about 10 seconds, and after 30+ minutes of running it is down to 100 it/s.
Is there some bottleneck that keeps it from using more memory or processing power, or is something wrong in my code?
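To illustrate what I think is happening: a longer pattern is, on average, found much deeper in a random digit string, so each `in` check has to scan further and further before it hits a match, and a pattern that never occurs forces a scan of the whole string. A rough experiment (the string here is just a stand-in for the pi file, and the timings are only illustrative):

import random
import timeit

# 10 MB of random digits as a stand-in for the digits of pi
s = ''.join(random.choices('0123456789', k=10_000_000))

# the expected depth of the first match grows roughly 10x per extra digit
for pattern in ('123', '12345', '1234567'):
    t = timeit.timeit(lambda: pattern in s, number=100)
    print(pattern, t)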
Edit: it seems the cause is the ever-increasing length of the numbers being searched for. How can I optimize the search so that it does not slow down with every additional digit?
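One idea I'm considering (a rough sketch, not tested at full scale; build_tables and findpi_fast are just placeholder names, and it assumes pi contains only digit characters, so any leading "3." would have to be stripped first): instead of scanning the billion digits once per number, scan them once in total, recording every window of k digits that occurs for each k up to len(str(end)), and then answer all the membership queries from those tables.

import numpy as np

def build_tables(pi, max_k, chunk=10_000_000):
    # one pass over the digits, marking every window of length 1..max_k
    # (assumes pi contains digit characters only)
    digits = np.frombuffer(pi.encode('ascii'), dtype=np.uint8) - ord('0')
    tables = {k: np.zeros(10**k, dtype=bool) for k in range(1, max_k + 1)}
    n = len(digits)
    for start in range(0, n, chunk):
        # overlap chunks by max_k - 1 digits so no window is split in two
        block = digits[start:min(start + chunk + max_k - 1, n)].astype(np.int64)
        for k, seen in tables.items():
            if len(block) < k:
                continue
            # vals[m] = integer value of the k digits starting at block[m]
            vals = np.zeros(len(block) - k + 1, dtype=np.int64)
            for j in range(k):
                vals = vals * 10 + block[j:j + len(vals)]
            seen[vals] = True
    return tables

def findpi_fast(pi, end):
    tables = build_tables(pi, len(str(end)))
    # i is missing iff its digit string never occurs as a window
    return [i for i in range(1, end + 1) if not tables[len(str(i))][i]]

The work then becomes a fixed number of vectorized passes over the digits (k passes per window length) rather than one substring scan per number, so it no longer slows down as the numbers grow. The largest table, for 8-digit windows, is 10**8 booleans, about 100 MB, on top of the digit string itself.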