Pandas has the isnull() and fillna() methods for finding and replacing NaN values in DataFrames. I have a dataset with mostly string-typed columns, but some of those columns have a few floating point values scattered in them. Are there equivalent methods in Pandas for finding and replacing these?
So if I have a DataFrame like this:
In [60]: df1=pd.DataFrame([[1.0,'foo'],[2.0,1.0],[float('NaN'),'bar'],[4.0,0.0],[5.0,'baz']],columns=['fval','sval'])
In [61]: df1
Out[61]:
   fval sval
0   1.0  foo
1   2.0    1
2   NaN  bar
3   4.0    0
4   5.0  baz
In [63]: df1.isnull()
Out[63]:
    fval   sval
0  False  False
1  False  False
2   True  False
3  False  False
4  False  False
...I can replace the NaN values in the 'fval' column like this:
In [64]: df1.fillna(2.5)
Out[64]:
   fval sval
0   1.0  foo
1   2.0    1
2   2.5  bar
3   4.0    0
4   5.0  baz
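(Incidentally, as far as I can tell fillna also takes a dict keyed by column name, which keeps the fill confined to 'fval'; this does the same as the call above:)

# fill NaN only in 'fval'; other columns are left alone
df1.fillna({'fval': 2.5})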
Is there a convenient method in Pandas to replace the 0 and 1 values in the 'sval' column with, say, 'na'? And is there an equivalent of isnull() for such out-of-place values?
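To make the second question concrete, this is the kind of mask I am after; it is just a hand-rolled sketch using apply and isinstance, and I am hoping something more built-in exists:

# True wherever a cell in the string column is not actually a string
mask = df1['sval'].apply(lambda v: not isinstance(v, str))
# gives: False, True, False, True, False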
You can convert those stray 0 and 1 values to np.nan using df.replace, after which isnull() and fillna() treat them like any other missing values.
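A minimal sketch of that approach, reusing the df1 from the question (the 'na' fill value is just the one suggested there):

import numpy as np
import pandas as pd

df1 = pd.DataFrame([[1.0, 'foo'], [2.0, 1.0], [float('NaN'), 'bar'],
                    [4.0, 0.0], [5.0, 'baz']], columns=['fval', 'sval'])

# turn the stray numeric values in 'sval' into NaN...
df1['sval'] = df1['sval'].replace([0.0, 1.0], np.nan)

# ...then fill them like any other missing value
df1['sval'] = df1['sval'].fillna('na')

Replacing with 'na' directly, df1['sval'].replace([0.0, 1.0], 'na'), works too; going through np.nan just lets isnull() and fillna() see these cells as ordinary missing data.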