I wrote code in Python using the 'requests' and 'BeautifulSoup' libraries to scrape text data from the first 100 sites returned by Google. It works fine on most sites, but it throws errors on sites that respond slowly or don't respond at all. I am getting this error:
raise MaxRetryError(_pool, url, error or ResponseError(cause)) requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='www.lfpress.com', port=80): Max retries exceeded with url: /2015/11/06/fair-with-a-flare-samosas-made-easy (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
Am I supposed to change something in how I call the requests API? Or do I need to use a proxy? How can I skip such a site and move on to the next one? Right now the error stops my execution.
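For context, here is a minimal sketch of the kind of error handling I think I need: wrapping each request in a try/except for `requests.exceptions.RequestException` (the base class that covers `ConnectionError`, which wraps the `MaxRetryError` above, as well as `Timeout` and `HTTPError`), plus a `timeout` so slow servers don't hang the loop. The URL list and the `fetch` helper are hypothetical placeholders, not my actual code:

```python
import requests

def fetch(url, timeout=10):
    """Fetch a page and return its text, or None if the site is slow or unreachable."""
    try:
        # timeout caps how long we wait on a slow or unresponsive server
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
    except requests.exceptions.RequestException as exc:
        # RequestException covers ConnectionError (which wraps MaxRetryError),
        # Timeout, HTTPError, and the other request failures.
        print(f"Skipping {url}: {exc}")
        return None
    return response.text

# Hypothetical list of search-result URLs; failures are skipped, not fatal.
for url in ["http://nonexistent-host.invalid/"]:
    html = fetch(url)
    if html is None:
        continue
    # ... parse `html` with BeautifulSoup here ...
```

Is catching `RequestException` like this the right approach, or should something be configured inside requests itself (e.g. its retry behavior)?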