If your conditions are built from the basic comparison operators (<, <=, ==, !=, >, >=), we can do this more efficiently with getattr. We use .str.extract to parse each condition, separating the comparator from the value, and a dictionary to map each comparator to the corresponding Series comparison method, which we then call once per unique comparator in a simple groupby.
import pandas as pd

# Sample data (reconstructed from the outputs shown below).
df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 1],
                   'Val': [5, 15, 20, 25, 26, 10],
                   'Cond': ['>10', '>10', '==20', '<=25', '<=25', '!=10']})
print(df)
#   ID  Val  Cond
#0   1    5   >10
#1   1   15   >10
#2   1   20  ==20
#3   1   25  <=25
#4   1   26  <=25
#5   1   10  !=10
# All operations we might have.
d = {'>': 'gt', '<': 'lt', '>=': 'ge', '<=': 'le', '==': 'eq', '!=': 'ne'}
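# For context (a small illustrative check, not part of the original answer):
# each mapped name is a pandas Series comparison method, so
# getattr(s, 'gt')(10) is equivalent to s > 10.
s = pd.Series([5, 15, 20])
getattr(s, d['>'])(10).tolist()   # [False, True, True]
(s > 10).tolist()                 # [False, True, True]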
# Create a DataFrame with the LHS value, the comparator ('cond') and the RHS value ('comp')
tmp = pd.concat([df['Val'],
                 df['Cond'].str.extract(r'(.*?)(\d+)').rename(columns={0: 'cond', 1: 'comp'})],
                axis=1)
tmp[['Val', 'comp']] = tmp[['Val', 'comp']].apply(pd.to_numeric)
#   Val cond  comp
#0    5    >    10
#1   15    >    10
#2   20   ==    20
#3   25   <=    25
#4   26   <=    25
#5   10   !=    10
# Aligns on the row index when assigned back to df
df['Result'] = pd.concat([getattr(gp['Val'], d[idx])(gp['comp'])
                          for idx, gp in tmp.groupby('cond')])
#   ID  Val  Cond  Result
#0   1    5   >10   False
#1   1   15   >10    True
#2   1   20  ==20    True
#3   1   25  <=25    True
#4   1   26  <=25   False
#5   1   10  !=10   False
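If you need this in more than one place, the same steps can be wrapped in a small helper. This is only a sketch; the function name apply_conditions and its parameters are my own, not from the code above.

def apply_conditions(df, val_col='Val', cond_col='Cond'):
    # Split each condition into its comparator and numeric RHS.
    parts = df[cond_col].str.extract(r'(.*?)(\d+)')
    parts.columns = ['cond', 'comp']
    parts['comp'] = pd.to_numeric(parts['comp'])
    # Call the matching Series method once per comparator group;
    # concat re-aligns the pieces on the original index.
    return pd.concat([getattr(df.loc[gp.index, val_col], d[op])(gp['comp'])
                      for op, gp in parts.groupby('cond')])

df['Result'] = apply_conditions(df)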
A simple, but inefficient and dangerous, alternative is to build a string of the condition for each row and eval it. eval is dangerous because it will execute arbitrary code, so only use it if you fully trust and know the data.
df['Result'] = df.apply(lambda x: eval(str(x.Val) + x.Cond), axis=1)
#   ID  Val  Cond  Result
#0   1    5   >10   False
#1   1   15   >10    True
#2   1   20  ==20    True
#3   1   25  <=25    True
#4   1   26  <=25   False
#5   1   10  !=10   False
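If you do want a row-wise version but without eval's risks, the same idea works with the operator module instead of string evaluation. A minimal sketch (still as slow as any apply over rows); the check function is my own name, not from the answer:

import re
import operator

ops = {'>': operator.gt, '<': operator.lt, '>=': operator.ge,
       '<=': operator.le, '==': operator.eq, '!=': operator.ne}

def check(row):
    # Split e.g. '>10' into the comparator '>' and the value '10'.
    op, rhs = re.match(r'(.*?)(\d+)', row.Cond).groups()
    return ops[op](row.Val, int(rhs))

df['Result'] = df.apply(check, axis=1)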