I have some real estate data and I would like to efficiently calculate the time delta since the last sale date for each property. The solution needs to scale because I have over 2 million rows, and my current approach takes days to run on the full dataframe. Here is what I have implemented so far; is there a faster way to do this?
import pandas as pd
import numpy as np
import datetime

pd.set_option('display.max_columns', 5)
## Make some dummy data
data_dict = dict(
    ADDRESS=[
        '123 Main Street', '123 Apple Street', '123 Orange Street', '123 Pineapple Street', '123 Pear Street',
        '123 Main Street', '123 Apple Street', '123 Orange Street', '123 Pineapple Street', '123 Pear Street',
        '123 Main Street', '123 Apple Street', '123 Orange Street', '123 Pineapple Street', '123 Pear Street',
    ],
    SALE_DATE=[
        '2002-01-01', '2006-01-01', '2009-01-01', '2011-01-01', '2012-01-01',
        '2013-01-01', '2012-01-01', '2012-01-01', '2012-01-01', '2014-01-01',
        '2016-01-01', '2018-06-01', '2017-01-01', '2017-01-01', '2019-01-01',
    ],
)
# format as a pandas df
sale_data = pd.DataFrame(data_dict)
sale_data['SALE_DATE'] = pd.to_datetime(sale_data['SALE_DATE'])
# instantiate a df that we will append our results to
master_df = pd.DataFrame()
# loop through each address to get the days since the last sale and an expected future sale date
for address in sale_data.ADDRESS.drop_duplicates():
    df_slice = sale_data[sale_data.ADDRESS == address].sort_values(by='SALE_DATE')
    # gap between consecutive sale dates; the first sale per address has no predecessor
    df_slice['days_since_last_sale'] = df_slice['SALE_DATE'].diff().dt.days
    # treat non-positive gaps (duplicate or out-of-order records) as missing
    df_slice.loc[df_slice['days_since_last_sale'] <= 0, 'days_since_last_sale'] = np.nan
    df_slice['years_since_last_sale'] = df_slice['days_since_last_sale'] / 365
    # project the next sale as today plus the average gap for this address
    days_average = df_slice['days_since_last_sale'].mean()
    df_slice['next_sale'] = datetime.datetime.today() + datetime.timedelta(days=days_average)
    master_df = pd.concat([df_slice, master_df], axis=0)
print(len(master_df))
print('_________________________________________________________________________________')
print(master_df)
Instead of filtering df['column'] == this_value for all values in the df, you might as well try df.groupby() and apply some aggregations. I was working on an answer but you already have two good ones.
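As the comment suggests, the whole per-address loop can be replaced with vectorized groupby operations. Here is a minimal sketch of that approach; it reuses the sale_data frame built above and mirrors the original logic, including the 365-day year and the next_sale projection (pd.Timestamp.today() stands in for datetime.datetime.today()):

# sort once so that rows within each address are in sale-date order
sale_data = sale_data.sort_values(['ADDRESS', 'SALE_DATE'])

# gap in days between consecutive sales per address; the first sale of each address is NaN
gaps = sale_data.groupby('ADDRESS')['SALE_DATE'].diff().dt.days
sale_data['days_since_last_sale'] = gaps.where(gaps > 0)  # non-positive gaps become NaN
sale_data['years_since_last_sale'] = sale_data['days_since_last_sale'] / 365

# broadcast the per-address average gap back to every row, then project the next sale
avg_gap = sale_data.groupby('ADDRESS')['days_since_last_sale'].transform('mean')
sale_data['next_sale'] = pd.Timestamp.today() + pd.to_timedelta(avg_gap, unit='D')

Every step here is a single vectorized pass over the frame rather than one filter-copy-concat per address, so on a couple of million rows this should run in seconds rather than days.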