I have a CSV file with the following sample values in one column:
3/12/1970
3/1/1942
10/20/1945 10/20/1945
10/27/1960
10/5/1952
I bring it into pandas with df = pd.read_csv(filename).
I know there are rows with doubled dates, as shown above. The dtype of this column in pandas is object. When I try to convert the column to datetime, every row with this double-date issue raises an error, and I have to find and edit them in the CSV one by one. So I have tried the following to clean out all such rows in my 50K-row file:
df[col] = df[col].str.strip()
df[col] = df[col].str[:10]
Neither of these affects the double dates at all.
I also tried calculating the length of each value in the column and dropping the date values where the length exceeds 10. Still, the double-date rows remain.
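For what it's worth, here is a minimal sketch of the length-based detection on a toy frame (the column name "date" and the sample values are assumptions, mirroring the CSV excerpt); a single M/D/YYYY date is at most 10 characters, so anything longer is a candidate double date:

```python
import pandas as pd

# Toy column mirroring the CSV excerpt, with one doubled date
df = pd.DataFrame({"date": ["3/12/1970", "10/20/1945 10/20/1945", "10/5/1952"]})

# A single M/D/YYYY date is at most 10 characters long,
# so str.len() > 10 flags the doubled values
mask = df["date"].str.len() > 10
print(df.loc[mask, "date"].tolist())  # → ['10/20/1945 10/20/1945']
```

If this mask comes back empty on the real data, that usually means the column is not actually string-typed, or the separator between the two dates is something other than a plain space.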
I have also tried the following to locate one of these rows for inspection, but it returns 0 rows:
bad_dates = df[df[col].str.contains('10/20/1945')]
So, any creative ideas for cleaning these double dates? (It affects roughly one hundred values, randomly distributed through the column.)
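One possible reason the str.contains lookup finds nothing is missing values or hidden characters in the column; as a hedged sketch (column name "date" and the NaN value are assumptions), passing na=False keeps NaN out of the boolean mask, and regex=False treats the pattern as a literal string:

```python
import pandas as pd

# Toy column with a doubled date and a missing value
df = pd.DataFrame({"date": ["3/12/1970", "10/20/1945 10/20/1945", None]})

# na=False maps missing values to False instead of propagating NaN
# into the mask; regex=False matches the pattern as a literal string
bad_dates = df[df["date"].str.contains("10/20/1945", na=False, regex=False)]
print(len(bad_dates))  # → 1
```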
Is col the correct column name? Anyway, using .str[:10] may not be the best solution for you, since months and days can have one or two digits. Maybe you can try split(' ') or a regex (here is an example: stackoverflow.com/questions/46064162/…).
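The split idea suggested above can be sketched like this (the column name "date" is an assumption): splitting on whitespace and keeping the first token leaves single dates untouched and trims doubled ones, after which pd.to_datetime with errors="coerce" turns anything still unparseable into NaT for later inspection:

```python
import pandas as pd

df = pd.DataFrame({"date": ["3/12/1970", "10/20/1945 10/20/1945", "10/27/1960"]})

# Keep only the first whitespace-separated token;
# values containing a single date pass through unchanged
df["date"] = df["date"].str.split().str[0]

# errors="coerce" converts any leftover bad value to NaT
# instead of raising, so the rows can be inspected afterwards
df["date"] = pd.to_datetime(df["date"], format="%m/%d/%Y", errors="coerce")
print(df["date"].dtype)  # → datetime64[ns]
```

Any rows that still fail to parse can then be found with df[df["date"].isna()].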