If you need something fast, np.bincount can be a good alternative to a Pandas groupby.
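For reference, here's a small frame consistent with the outputs below; the exact values are my assumption, since the question's data isn't reproduced here. Column 'a' holds non-negative integer group labels (which is what np.bincount requires) and 'b' holds the values being thresholded.
import numpy as np
import pandas as pd

# Assumed sample data, chosen to reproduce the outputs shown below.
df = pd.DataFrame({'a': [0, 0, 1, 1, 1, 2, 2],
                   'b': [25, 30, 22, 5, 10, 50, 15]})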
np.bincount(df.loc[df.b > 20, 'a']) / np.bincount(df.a)
which returns the fraction of rows with b > 20 for each group label:
array([ 1.        ,  0.33333333,  0.5       ])
Or if you want to map the output back onto a Series over the original rows, you can follow up with take.
pd.Series((np.bincount(df.loc[df.b > 20, 'a']) / np.bincount(df.a)).take(df.a))
# 0 1.000000
# 1 1.000000
# 2 0.333333
# 3 0.333333
# 4 0.333333
# 5 0.500000
# 6 0.500000
# dtype: float64
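One caveat worth noting: if the highest group label has no row with b > 20, the numerator bincount comes out shorter than the denominator and the division raises a broadcast error. NumPy's minlength argument pads both counts to a common length; a minimal sketch, assuming the labels start at 0:
# Pad both counts so groups with no qualifying rows divide out to 0.0
# instead of the two arrays mismatching in length.
n = df.a.max() + 1
ratios = np.bincount(df.loc[df.b > 20, 'a'], minlength=n) / np.bincount(df.a, minlength=n)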
In either case, this seems to be quite fast.
Smaller case: provided dataset
groupby approach from MaxU
%timeit df.groupby('a')['b'].transform(lambda x: x.gt(20).mean())
2.51 ms ± 65.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
np.bincount approach
%timeit pd.Series((np.bincount(df.loc[df.b > 20, 'a']) / np.bincount(df.a)).take(df.a))
271 µs ± 5.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Larger case: generated dataset
df = pd.DataFrame({'a': np.random.randint(0, 10, 100000),
                   'b': np.random.randint(0, 100, 100000)}).sort_values('a')
groupby approach from MaxU
%timeit df.groupby('a')['b'].transform(lambda x: x.gt(20).mean())
11.3 ms ± 40.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
np.bincount approach
%timeit pd.Series((np.bincount(df.loc[df.b > 20, 'a']) / np.bincount(df.a)).take(df.a))
1.56 ms ± 5.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
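As a quick sanity check that the two approaches agree on the generated frame (the comparison is positional, since pd.Series(...) above gets a fresh RangeIndex while transform keeps df's shuffled index):
expected = df.groupby('a')['b'].transform(lambda x: x.gt(20).mean())
result = pd.Series((np.bincount(df.loc[df.b > 20, 'a']) / np.bincount(df.a)).take(df.a))
assert np.allclose(result.to_numpy(), expected.to_numpy())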