I need to subset a particular column from a DataFrame of simulated stock prices and find its mean.
The variables previously defined are:
T = 1
dt = 1/1000
which makes T/dt equal to 1000.0 (a float, since / performs true division).
Now, directly indexing the DataFrame with this value throws an error:
StockPrice[T/dt].mean() -> KeyError
However, casting the index to int first works fine:
StockPrice[int(T/dt)].mean()
So I am trying to understand the standard practice when subsetting DataFrames using variables that evaluate to integer values but carry a float datatype. Should we cast them to int before using them, or is there an alternative?
The traceback ends with:

   1974         # get column
   1975         if self.columns.is_unique:
   1976             return self._get_item_cache(key)
   1977
   1978         # duplicate columns & possible reduce dimensionality

KeyError: 1000.0
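For reference, here is a minimal sketch that reproduces the behavior. The DataFrame name and shape are assumptions (any frame with integer column labels will do); it also shows why round() can be a safer cast than int() when the ratio is not exactly representable in floating point:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the simulated-price DataFrame:
# integer column labels 0..1000 (shape is an assumption).
StockPrice = pd.DataFrame(np.random.default_rng(0).random((5, 1001)))

T = 1
dt = 1 / 1000
key = T / dt                  # 1000.0 -- a float, because / is true division

# StockPrice[key] would raise KeyError: 1000.0, because the float key
# does not match the integer column label 1000.
mean_price = StockPrice[int(key)].mean()   # int(1000.0) == 1000 -> works

# int() truncates toward zero, so it can be off by one when the division
# result lands just below the intended integer, e.g. 0.3 / 0.1 is
# 2.9999999999999996, so int(0.3 / 0.1) == 2 while round(0.3 / 0.1) == 3.
safe_key = round(T / dt)
assert StockPrice[safe_key].mean() == mean_price
```

Whether you use int() or round(), the point is the same: pandas column lookup is by exact label, and a float 1000.0 key is not the integer label 1000.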