So far my approach to the task described in the title is quite straightforward, yet it seems somewhat inefficient/unpythonic. An example of what I usually do is as follows:
The original pandas DataFrame df has 6 columns: 'open', 'high', 'low', 'close', 'volume', 'new dt'.
import pandas as pd

# Group rows by the 'new dt' column
df_gb = df.groupby('new dt')

# Apply a different aggregation function to each column
arr_high = df_gb['high'].max()
arr_low = df_gb['low'].min()
arr_open = df_gb['open'].first()
arr_close = df_gb['close'].last()
arr_volume = df_gb['volume'].sum()

# Combine the aggregated Series back into a single DataFrame
df2 = pd.concat([arr_open,
                 arr_high,
                 arr_low,
                 arr_close,
                 arr_volume], axis='columns')
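For reference, here is a minimal, self-contained sketch of the same pattern so it can be run directly; the sample values and dates below are made up purely for illustration and are not my real data:

import pandas as pd

# Toy OHLCV data with a 'new dt' grouping key (values are invented)
df = pd.DataFrame({
    'open':   [10.0, 10.5, 11.0, 11.2],
    'high':   [10.6, 11.1, 11.4, 11.5],
    'low':    [ 9.9, 10.4, 10.8, 11.0],
    'close':  [10.5, 11.0, 11.2, 11.4],
    'volume': [100, 150, 120, 80],
    'new dt': ['2020-01-01', '2020-01-01', '2020-01-02', '2020-01-02'],
})

df_gb = df.groupby('new dt')
df2 = pd.concat([df_gb['open'].first(),
                 df_gb['high'].max(),
                 df_gb['low'].min(),
                 df_gb['close'].last(),
                 df_gb['volume'].sum()], axis='columns')
print(df2)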
This may look efficient enough at first glance, but when I have 20 different functions to apply to 20 different columns, it quickly becomes unpythonic and inefficient.
Is there any way to make it more efficient/pythonic? Thank you in advance.