I am working on a data cleansing project where I have to clean multiple fields of a pandas DataFrame. Mostly I am writing regular expressions and simple functions. Examples below:
def func1(s):
    # Keep only single alphabetic characters; everything else falls through to None.
    s = str(s)
    s = s.replace(' ', '')
    if len(s) > 0 and s != '0':
        if s.isalpha() and len(s) < 2:
            return s
from string import whitespace

def func2(s):
    # Keep alphanumeric or numeric strings other than '0', after removing
    # internal spaces and stripping punctuation from both ends.
    s = str(s)
    s = s.replace(' ', '')
    s = s.strip(whitespace + ',-/\\')
    if s != '0':
        if s.isalnum() or s.isdigit():
            return s
def func3(s):
    # Keep purely numeric strings other than '0'; otherwise return None.
    s = str(s)
    if s.isdigit() and s != '0':
        return s
    else:
        return None
def func4(s):
    # Row-wise check across columns: return 'k' when 'j' is alphabetic,
    # 'k' is numeric, and 'l' is None.
    if str(s['j']).isalpha() and str(s['k']).isdigit() and s['l'] is None:
        return s['k']
And I am calling them like this:
x['a'] = x['b'].apply(lambda v: func1(v) if pd.notnull(v) else v)
x['c'] = x['d'].apply(lambda v: func2(v) if pd.notnull(v) else v)
x['e'] = x['f'].apply(lambda v: func3(v) if pd.notnull(v) else v)
x['g'] = x.apply(func4, axis=1)
Everything works, but I have written nearly 50 such functions and my dataset has more than 10 million records, so the script runs for hours. If my understanding is correct, the functions are applied row-wise, meaning each function is called once per row, and that is what makes processing so slow. Is there a way to optimise this? How can I approach it in a better way, perhaps without apply? Thanks.
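For example, would a vectorised rewrite along these lines be the right direction? This is only a sketch of func3, using the pandas .str accessor on the same x, 'e' and 'f' columns as in my calls above:

import pandas as pd

# Sketch of a vectorised func3: keep purely numeric strings other than '0'.
# Non-matching values (including NaN inputs, which become the string 'nan'
# after astype) are set to NaN by .where(), rather than None as in the original.
s = x['f'].astype(str)
x['e'] = s.where(s.str.isdigit() & (s != '0'))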
Sample dataset:-
                                  Name    f  j     b
339043                   Moir Point RD    3  0   NaN
 21880  Fisher-Point Drive Freemans Ba    6  0   NaN
457170               Whakamoenga Point   29  0   NaN
318399             Motukaraka Point RD    0  0   NaN
274047    Apirana Avenue Point England  360  0   366
207588            Hobsonville Point RD  127  0   NaN
747136                    Dog Point RD  130  0   NaN
325704        Aroha Road Te Arai Point   36  0   NaN
291888               One Tree Point RD  960  0   NaN
207954            Hobsonville Point RD  160  0  205D
248410       Huia Road Point Chevalier  106  0   NaN
Comments:

- You could use the multiprocessing module and run your job in as many threads as your processor has.
- Your functions return None if the condition is not met. Is this your intention?
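For what it is worth, here is a minimal sketch of what I take the multiprocessing suggestion to mean (it uses separate processes rather than threads; the chunk count of 8 is an arbitrary stand-in for the core count, and the frame x and the functions above are assumed to exist):

import numpy as np
import pandas as pd
from multiprocessing import Pool

def clean_chunk(chunk):
    # Run the existing row-wise cleaning on one slice of the frame.
    chunk = chunk.copy()
    chunk['e'] = chunk['f'].apply(lambda v: func3(v) if pd.notnull(v) else v)
    return chunk

if __name__ == '__main__':
    chunks = np.array_split(x, 8)   # one chunk per worker
    with Pool(processes=8) as pool:
        x = pd.concat(pool.map(clean_chunk, chunks))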