Given a data frame df like this:

id_      val     
11111    12
12003    22
88763    19
43721    77
...

I wish to add a column diff to df, where each row equals the val in that row minus the diff in the previous row, multiplied by 0.4, plus the diff in the previous row:

diff = (val - diff_previousDay) * 0.4 + diff_previousDay

And the diff in the first row equals val * 0.4 in that row. That is, the expected df should be:

id_      val     diff   
11111    12      4.8
12003    22      11.68
88763    19      14.608
43721    77      ...
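The recurrence above can be checked by hand with a few lines of plain Python (a sketch of my own, not part of the original question; the numbers come from the expected table):

```python
# Worked check of the recurrence:
#   diff[0] = val[0] * 0.4
#   diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
vals = [12, 22, 19, 77]
diffs = [vals[0] * 0.4]
for v in vals[1:]:
    prev = diffs[-1]
    diffs.append((v - prev) * 0.4 + prev)
print([round(d, 4) for d in diffs])  # [4.8, 11.68, 14.608, 39.5648]
```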

And I have tried:

mul = 0.4
df['diff'] = df.apply(lambda row: (row['val'] - df.loc[row.name, 'diff']) * mul + df.loc[row.name, 'diff'] if int(row.name) > 0 else row['val'] * mul, axis=1) 

But got such as error:

TypeError: ("unsupported operand type(s) for -: 'float' and 'NoneType'", 'occurred at index 1')

Do you know how to solve this problem? Thank you in advance!

  • You could use itertuples or iterrows. Commented Jun 24, 2016 at 8:24

4 Answers


You can use:

df.loc[0, 'diff'] = df.loc[0, 'val'] * 0.4

for i in range(1, len(df)):
    df.loc[i, 'diff'] = (df.loc[i, 'val'] - df.loc[i-1, 'diff']) * 0.4 + df.loc[i-1, 'diff']

print(df)
     id_  val     diff
0  11111   12   4.8000
1  12003   22  11.6800
2  88763   19  14.6080
3  43721   77  39.5648

The iterative nature of the calculation, where each step depends on the result of the previous one, makes vectorization difficult. You could use apply with a function that performs the same calculation as the loop, but behind the scenes this would still be a loop.
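That said, this particular recurrence can be rewritten as diff[i] = 0.6 * diff[i-1] + 0.4 * val[i], which is exactly an exponentially weighted mean with alpha=0.4, provided the first value is pre-scaled so that diff[0] = 0.4 * val[0]. A hedged sketch of that workaround (my own addition, using pandas' ewm, not part of the original answer):

```python
import pandas as pd

# The recurrence diff[i] = 0.6 * diff[i-1] + 0.4 * val[i] matches
# Series.ewm(alpha=0.4, adjust=False).mean(), whose recursion is
# y[0] = x[0]; y[i] = (1 - alpha) * y[i-1] + alpha * x[i].
# Pre-scaling the first element seeds y[0] = val[0] * 0.4.
df = pd.DataFrame({'id_': [11111, 12003, 88763, 43721],
                   'val': [12, 22, 19, 77]})
s = df['val'].astype(float)
s.iloc[0] *= 0.4                      # seed: diff[0] = val[0] * 0.4
df['diff'] = s.ewm(alpha=0.4, adjust=False).mean()
print([round(x, 4) for x in df['diff']])  # [4.8, 11.68, 14.608, 39.5648]
```

This runs the recursion in compiled code rather than a Python-level loop, which matters once the frame gets large.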


5 Comments

I think it is the only solution, given that the vectorization method is not available. But surprisingly the speed is quite fast :)
I'm sorry @user5779223, but it's not fast! I have a 1.7M rows × 11 columns dataset, which I need to groupby on a column with about 80k distinct values, and apply this kind of running aggregate (with a little if inside). cumsum and cumcount run in 800 and 300 microseconds respectively. The applied callback, doing iterrows on the GroupBy DataFrame, runs in 4 minutes. I'm currently checking if numba can help me out here.
@TomaszGandor asking 1 year late, but did numba work for you? I have 70M rows and I am attempting to generate new variables based on recursive values and conditional statements. I want to know how I can speed up the process as much as possible.
@Turtle - I don't remember how it ended ;) Faced with this today, I'd install modin (modin.readthedocs.io/en/latest/using_modin.html) and check if it helps.
@TomaszGandor oh, I might have a look at it for later projects. I tried numba on big nested loops and the time was reduced to less than half the execution time of the regular Python code. Converting all inputs from pandas to numpy was a bit annoying though.

Recursive functions are not easily vectorisable. However, you can optimize your algorithm with numba. This should be preferable to a regular loop.

import numpy as np
from numba import jit

@jit(nopython=True)
def foo(val):
    diff = np.zeros(val.shape)
    diff[0] = val[0] * 0.4
    for i in range(1, diff.shape[0]):
        diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
    return diff

df['diff'] = foo(df['val'].values)

print(df)

     id_  val     diff
0  11111   12   4.8000
1  12003   22  11.6800
2  88763   19  14.6080
3  43721   77  39.5648



If you are using apply in pandas, you should not reference the dataframe again inside the lambda function.

In all cases, the object you work with inside the lambda should be 'row'.

2 Comments

But how can I extract data from the row before the current one?
You can't in an apply with axis=1. Each row is treated as an isolated data structure, and the order of the rows is not important. If you want to use a previous value, you can create a new column using .shift() and then apply across the new row, subtracting within the row.
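A short sketch of the .shift() idea from the comment above (my own illustration, with made-up column names). Note it only helps when a row depends on a previous input value; it cannot express the question's recursion, where each row depends on a previously computed diff:

```python
import pandas as pd

# .shift() exposes the previous row's value as a regular column,
# so a row-wise apply (or plain column arithmetic) can see it.
df = pd.DataFrame({'val': [12, 22, 19, 77]})
df['prev_val'] = df['val'].shift()        # NaN in the first row
df['delta'] = df['val'] - df['prev_val']  # row-wise difference
print(df['delta'].tolist())
```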

I just want to add another alternative to jezrael's answer. My answer is similar, but I found it to be much faster:

import pandas as pd

def calc_diff(val: pd.Series) -> pd.Series:
    diff = pd.Series(0.0, index=range(len(val)))
    diff[0] = val[0] * 0.4
    for i in range(1, len(val)):
        diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
    return diff

df['diff'] = calc_diff(df['val'])

I tested using 10,000 rows of random numbers and the result is 194 ms vs 4 s for jezrael's method.

