The most straightforward idea I can think of is converting a groupby to a dict.
def df_to_dict_groupby(df, column):
    # Map each unique value in `column` to its sub-DataFrame,
    # dropping the grouping column itself from each value
    return {key: value.drop(columns=[column]) for key, value in df.groupby(column)}
Benchmark:
import pandas as pd
csv_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
# using the attribute information as the column names
col_names = ['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width','Class']
iris = pd.read_csv(csv_url, names=col_names)
# Create a dataframe with 15,000 rows (150 iris rows x 100)
iris_big = pd.concat([iris] * 100)
%timeit as_dict = df_to_dict_original(iris, "Class")
# 1.13 ms ± 15.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit as_dict = df_to_dict_groupby(iris, "Class")
# 1.18 ms ± 13.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit as_dict = df_to_dict_original(iris_big, "Class")
# 7.73 ms ± 152 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit as_dict = df_to_dict_groupby(iris_big, "Class")
# 2.82 ms ± 8.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
On the small dataframe it is about the same speed as your original idea, but on the big one it is roughly 2x faster. Your mileage may vary: the results depend on the cardinality of the grouping column and the size of the dataframe.
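To make the behavior concrete, here is a minimal, self-contained example (using a small hypothetical dataframe rather than the iris data) showing that each group key maps to its own sub-DataFrame with the grouping column removed:

```python
import pandas as pd

def df_to_dict_groupby(df, column):
    # Map each unique value in `column` to its sub-DataFrame,
    # dropping the grouping column itself from each value
    return {key: value.drop(columns=[column]) for key, value in df.groupby(column)}

df = pd.DataFrame({"Class": ["a", "a", "b"], "x": [1, 2, 3]})
as_dict = df_to_dict_groupby(df, "Class")

print(sorted(as_dict))              # ['a', 'b']
print(list(as_dict["a"].columns))   # ['x'] -- "Class" has been dropped
print(as_dict["a"]["x"].tolist())   # [1, 2]
```

Note that the sub-DataFrames keep their original row index; add `.reset_index(drop=True)` inside the comprehension if you want each group re-indexed from 0.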