You can reconstruct the item columns with df['items'].values.tolist() and a join. I went this direction because it's faster than apply, and given the size of your data, that matters.
df.drop(columns='items').join(
pd.DataFrame(df['items'].values.tolist(), df.index).rename(
columns=lambda x: 'item_{}'.format(x + 1)
)
)
user item_1 item_2 item_3
0 1 product1 product2 product3
1 2 product5 product7 product2
2 3 product1 product4 product5
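For reference, the output above can be reproduced with a small frame rebuilt from that table (a sketch; the exact construction of the original df is an assumption):

```python
import pandas as pd

# Hypothetical reconstruction of the input frame from the output shown
df = pd.DataFrame({
    'user': [1, 2, 3],
    'items': [
        ['product1', 'product2', 'product3'],
        ['product5', 'product7', 'product2'],
        ['product1', 'product4', 'product5'],
    ],
})

# Turn each list into its own column, keeping the original index
out = df.drop(columns='items').join(
    pd.DataFrame(df['items'].values.tolist(), df.index).rename(
        columns=lambda x: 'item_{}'.format(x + 1)
    )
)
```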
We can shave a bit more time off with
items_array = np.array(df['items'].values.tolist())
cols = np.char.add(
'item_', np.arange(1, items_array.shape[1] + 1).astype(str)
)
pd.DataFrame(
np.column_stack([df['user'].values, items_array]),
columns=np.append('user', cols)
)
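On the same hypothetical three-row frame, the NumPy route looks like this. One caveat worth knowing: np.column_stack upcasts the mixed int/string input to a single string dtype, so user comes back as strings rather than ints:

```python
import numpy as np
import pandas as pd

# Hypothetical sample frame (same shape as the data in the question)
df = pd.DataFrame({
    'user': [1, 2, 3],
    'items': [
        ['product1', 'product2', 'product3'],
        ['product5', 'product7', 'product2'],
        ['product1', 'product4', 'product5'],
    ],
})

items_array = np.array(df['items'].values.tolist())
cols = np.char.add('item_', np.arange(1, items_array.shape[1] + 1).astype(str))
out = pd.DataFrame(
    np.column_stack([df['user'].values, items_array]),
    columns=np.append('user', cols),
)
# column_stack produced one string array, so every value here is a string
```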
Timing
%timeit df[['user']].join(df['items'].apply(pd.Series).add_prefix('item_'))
%timeit df.drop(columns='items').join(pd.DataFrame(df['items'].values.tolist(), df.index).rename(columns=lambda x: 'item_{}'.format(x + 1)))
1000 loops, best of 3: 1.8 ms per loop
1000 loops, best of 3: 1.34 ms per loop
%%timeit
items_array = np.array(df['items'].values.tolist())
cols = np.char.add(
'item_', np.arange(1, items_array.shape[1] + 1).astype(str)
)
pd.DataFrame(
np.column_stack([df['user'].values, items_array]),
columns=np.append('user', cols)
)
10000 loops, best of 3: 188 µs per loop
Larger data
n = 20000
items = ['A%s' % i for i in range(1000)]
df = pd.DataFrame(dict(
user=np.arange(n),
items=np.random.choice(items, (n, 100)).tolist()
))
%timeit df[['user']].join(df['items'].apply(pd.Series).add_prefix('item_'))
%timeit df.drop(columns='items').join(pd.DataFrame(df['items'].values.tolist(), df.index).rename(columns=lambda x: 'item_{}'.format(x + 1)))
1 loop, best of 3: 3.22 s per loop
1 loop, best of 3: 492 ms per loop
%%timeit
items_array = np.array(df['items'].values.tolist())
cols = np.char.add(
'item_', np.arange(1, items_array.shape[1] + 1).astype(str)
)
pd.DataFrame(
np.column_stack([df['user'].values, items_array]),
columns=np.append('user', cols)
)
1 loop, best of 3: 389 ms per loop