I am trying to use a Python script on a Pi and on Windows to pull CSV data hosted on an internal site. After pulling the data for a while (on both Linux and Windows) there seems to be a glitch where it doesn't process (which I'm trying to figure out or work around), and my script fails with IndexError: list index out of range. I'm assuming this is because my script can't access the HTTP site at that moment; however, if I rerun the application right after it fails, it's fine and will run again, then fail after a random amount of time, usually around 2 hours.
Here is a snippet of the code I'm using. All I want from the CSV is the last line of data:
import csv
import time
import math
import subprocess
import datetime
import requests
from contextlib import closing

def exec_code():
    url = "http://xx.xx.xx.xx/daily.csv"
    l = []
    with closing(requests.get(url, stream=True)) as r:
        reader = csv.reader(r.iter_lines(), delimiter=',', quotechar='"')
        # Iterates the file line by line, which is memory efficient in case the csv is huge
        for index, line in enumerate(reader):
            if index < 0:  # removes header if 1 adds header
                l.append(line)
            if index > 1:  # means the file has at least 3 lines
                l.append(line)
    # creates variables
    for row in l:
        Time = row[0]
        Temp = row[5]

if __name__ == '__main__':
    while True:
        exec_code()
        time.sleep(60)
Is there a way, instead of the application crashing when there are no indexes, to sleep for 5 seconds and then try again?
thank you
Time = row[0] and Temp = row[5] are probably the issue. Check the number of elements in row before indexing into it.
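A minimal sketch of what that could look like, combining the length check with a retry on failure. This assumes Python 3 (where iter_lines() yields bytes, so the lines need decoding before csv.reader sees them) and a CSV with a header row followed by data rows; the placeholder IP and the column positions (0 for time, 5 for temperature) are taken from the question, and the function names here are my own:

```python
import csv
import time
import requests
from contextlib import closing

URL = "http://xx.xx.xx.xx/daily.csv"  # placeholder host from the question

def last_data_row(rows):
    """Return the last non-empty row after the header, or None if absent."""
    rows = [r for r in rows if r]  # drop blank lines
    if len(rows) < 2:              # header only, or empty response
        return None
    return rows[-1]

def fetch_last_row(url):
    """Stream the CSV and return its last data row (raises on network errors)."""
    with closing(requests.get(url, stream=True, timeout=30)) as r:
        r.raise_for_status()
        # iter_lines() yields bytes in Python 3, so decode before parsing
        reader = csv.reader(
            (line.decode("utf-8") for line in r.iter_lines()),
            delimiter=",", quotechar='"',
        )
        return last_data_row(list(reader))

def poll_forever():
    while True:
        try:
            row = fetch_last_row(URL)
        except requests.RequestException:
            row = None                  # network hiccup: retry shortly
        if row is not None and len(row) >= 6:
            time_val, temp = row[0], row[5]
            time.sleep(60)              # normal polling interval
        else:
            time.sleep(5)               # short back-off before retrying
```

The key points are the len(row) >= 6 guard before touching row[5], so a short or empty response can never raise IndexError, and catching requests.RequestException so a dropped connection falls into the 5-second retry path instead of crashing the loop.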