Given a table vp with a bigint column named timestamp, and a btree index on that column, why does Postgres ignore the index and run a sequential scan when the column is compared against a floating-point value, even though an index scan would produce identical results?
Integer comparison:
SELECT * FROM vp WHERE vp.timestamp > 1470752584 takes 48 ms:
Index Scan using vp_ts_idx on vp (cost=0.57..257.87 rows=2381 width=57) (actual time=0.014..38.669 rows=80323 loops=1)
Index Cond: ("timestamp" > 1470752584)
Total runtime: 48.322 ms
Numeric comparison:
SELECT * FROM vp WHERE vp.timestamp > 1470752584.1 takes 103 seconds because it ignores vp_ts_idx and performs a seq scan of the entire table:
Seq Scan on vp (cost=0.00..7378353.16 rows=95403915 width=57) (actual time=62625.420..103122.701 rows=98240 loops=1)
Filter: (("timestamp")::numeric > 1470752584.1)
Rows Removed by Filter: 285945491
Total runtime: 103134.333 ms
Context: A query for recent vehicle positions compared timestamp with EXTRACT(EPOCH FROM NOW()) - %s, where %s was the desired number of seconds, without explicitly casting the result to bigint. The workaround is to use CAST(EXTRACT(EPOCH FROM NOW()) - %s AS bigint).
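For reference, a minimal sketch of the workaround against the vp table above, with the %s parameter filled in with a placeholder value of 300 seconds for illustration:

```sql
-- Without the cast, the planner has no bigint > numeric operator, so it
-- casts the "timestamp" column itself to numeric. The btree index is on
-- the bigint value, not on the numeric cast, so it cannot be used and
-- the query degrades to a seq scan.
--
-- Casting the comparison expression to bigint keeps both sides in the
-- indexed type, so the index qualifies:
SELECT *
FROM vp
WHERE vp.timestamp > CAST(EXTRACT(EPOCH FROM NOW()) - 300 AS bigint);

-- Equivalent shorthand using the Postgres cast operator:
-- WHERE vp.timestamp > (EXTRACT(EPOCH FROM NOW()) - 300)::bigint
```

Running EXPLAIN on either form should show an Index Scan using vp_ts_idx, matching the fast plan from the integer-literal query above.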
Why doesn't the query planner do this automatically when the column type is bigint? Is this a bug, or am I not considering some edge case where this behavior would be useful?