So every second I am making a bunch of requests to website X, as of now with the standard urllib packages like so (the request returns JSON):
import urllib.request
import json
import threading, time

def makerequests():
    request = urllib.request.Request('http://www.X.com/Y')
    while True:
        time.sleep(0.2)
        response = urllib.request.urlopen(request)
        data = json.loads(response.read().decode('utf-8'))

# start four threads, each polling in its own loop
for i in range(4):
    t = threading.Thread(target=makerequests)
    t.start()
However, because I'm making so many requests, after about 500 requests the website returns HTTPError 429: Too Many Requests. I was thinking it might help if I re-use the initial TCP connection, but I noticed it is not possible to do this with the urllib packages.
So I did some googling and discovered that the following packages might help:
Requests, http.client, socket?
So I have a question: which one is best suited for my situation, and can someone show an example of one of them (for Python 3)?
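For context, here is roughly what I was imagining with http.client: keep one connection open and send every request over it instead of opening a new socket each time. This is just a sketch based on the docs, reusing the placeholder host and path from my code above; I haven't verified it against site X.

import http.client
import json

# One persistent connection, reused for each request
# (assuming the server keeps it alive)
conn = http.client.HTTPConnection('www.X.com')

conn.request('GET', '/Y')
response = conn.getresponse()
# the response must be fully read before the connection can be reused
data = json.loads(response.read().decode('utf-8'))

# second request over the same TCP connection
conn.request('GET', '/Y')
data = json.loads(conn.getresponse().read().decode('utf-8'))

conn.close()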
requests is (probably) the best - it handles keep-alive automatically. What might actually help, though, is to make fewer requests.
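As a rough sketch of both points, using the placeholder URL from your question: give each thread its own requests.Session (a Session pools and reuses the underlying TCP connection), and when the server answers 429, back off instead of hammering it. The Retry-After handling and the 5-second fallback below are assumptions; adjust them to whatever the site actually sends.

import threading
import time

import requests

URL = 'http://www.X.com/Y'  # placeholder URL from the question

def makerequests():
    # Each thread gets its own Session; the Session reuses the
    # underlying TCP connection (keep-alive) automatically.
    session = requests.Session()
    while True:
        time.sleep(0.2)
        response = session.get(URL)
        if response.status_code == 429:
            # Back off when rate-limited; the 5-second fallback is arbitrary.
            retry_after = response.headers.get('Retry-After', '5')
            time.sleep(int(retry_after) if retry_after.isdigit() else 5)
            continue
        data = response.json()  # requests parses the JSON body for you

for i in range(4):
    threading.Thread(target=makerequests).start()

A Session isn't guaranteed to be thread-safe, which is why each thread builds its own here; but the real fix for the 429s is respecting the server's rate limit, i.e. making fewer requests.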