When I use this code, I get a timeout exception after a while.
import csv

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.implicitly_wait(100)

def csv_url_reader(url_obj):
    reader = csv.DictReader(url_obj, delimiter=',')
    for line in reader:
        url = line["URL"]
        driver = webdriver.Firefox()  # a new browser is opened for every URL
        driver.get(url)
        try:
            title = WebDriverWait(driver, 100).until(
                EC.presence_of_element_located((By.CLASS_NAME, "some class name with title"))
            ).text
        finally:
            driver.close()
            driver.quit()
        print("Title is " + title)

if __name__ == "__main__":
    with open("url.csv") as url_obj:
        csv_url_reader(url_obj)
The CSV file contains about 3 thousand links, and after processing roughly two hundred of them it raises this error. How can I get around this error? Can I restart the script from the last processed link?
Use a try/except block, then either ignore the timeout or just log it somewhere, and continue with the next link. I'm assuming it's driver.get(url) that times out, so include it in your try/except block.
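For example (a minimal sketch, assuming the timeout comes from driver.get(url) or the WebDriverWait call; the failed_urls.log file name and the single shared driver are my additions, not part of your original code):

import csv
import logging

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Assumed log file name; any logging destination works.
logging.basicConfig(filename="failed_urls.log", level=logging.WARNING)

def csv_url_reader(url_obj, driver):
    reader = csv.DictReader(url_obj, delimiter=',')
    for line in reader:
        url = line["URL"]
        try:
            driver.get(url)
            title = WebDriverWait(driver, 100).until(
                EC.presence_of_element_located(
                    (By.CLASS_NAME, "some class name with title"))
            ).text
            print("Title is " + title)
        except TimeoutException:
            # Log the offending URL and move on to the next one
            # instead of letting the exception kill the whole run.
            logging.warning("Timed out on %s, skipping", url)
            continue

if __name__ == "__main__":
    driver = webdriver.Firefox()
    try:
        with open("url.csv") as url_obj:
            csv_url_reader(url_obj, driver)
    finally:
        driver.quit()

The log of skipped URLs also gives you a simple way to re-run only the links that failed, rather than restarting the whole script from the beginning.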