I have a table:
CREATE TABLE users (
id BIGSERIAL PRIMARY KEY,
first_name varchar(255) NOT NULL,
last_name varchar(255) NOT NULL,
cell_id BIGINT
...
)
cell_id is a uint64 from S2 and can represent any position on the planet.
(See the S2 geometry documentation for a good description of cell ids.)
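One wrinkle worth noting (my own sketch, not something from the S2 docs): S2 cell ids are unsigned 64-bit values, while BIGINT is signed, so ids above 2^63-1 won't fit via a plain numeric cast. One workaround is to reinterpret the raw 64 bits as a signed value on the way in (the name and cell id below are made up):

```sql
-- Sketch: store the unsigned 64-bit cell id's bit pattern as a signed BIGINT.
-- Equality lookups still work, since the bit pattern stays unique;
-- only the numeric ordering changes for ids with the high bit set.
INSERT INTO users (first_name, last_name, cell_id)
VALUES ('Jane', 'Doe', x'89c25c0000000000'::bit(64)::bigint);
```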
Now I'd like to have an index on cell_id mostly to use an equality operator.
CREATE INDEX user_position ON users (cell_id);
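As a sanity check (a sketch; the cell id value is made up), I can see whether the planner actually picks this index up with EXPLAIN:

```sql
-- I'd expect an Index Scan (or Bitmap Index Scan) on user_position here,
-- at least once the table is large enough that a seq scan isn't cheaper.
EXPLAIN SELECT * FROM users WHERE cell_id = 1234567890;
```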
But now I'm afraid that this index is going to be updated and queried so heavily that it will eventually lead to deadlocks.
So I've had the idea of doing something like this:
CREATE INDEX user_position_even ON users (cell_id) WHERE id % 2 = 0;
CREATE INDEX user_position_odd ON users (cell_id) WHERE id % 2 = 1;
and maybe even add more such partial indexes to make each one sparser.
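To illustrate what I'm hoping for (a sketch; as far as I understand, the planner only considers a partial index when the query's WHERE clause implies the index predicate):

```sql
-- Should be able to use user_position_even: the predicate id % 2 = 0
-- appears verbatim in the query, so the planner can match it.
EXPLAIN SELECT * FROM users WHERE cell_id = 1234 AND id % 2 = 0;

-- Plain equality on cell_id implies neither predicate, so as far as I can
-- tell neither partial index would be usable for this query.
EXPLAIN SELECT * FROM users WHERE cell_id = 1234;
```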
Now I have some questions:
- Will Postgres use both indexes when I do a query?
- Would this help to keep performance up?
- Was my first concern wrong?
- Should I just have different tables instead of indexes?
- Would there be a better way of doing this?
Comments:
- Any query that doesn't use the id column (f.ex. one that only queries the users table by location, aka. cell_id) won't use any of your "clustered" indexes.
- Indexes on their own won't cause deadlocks; usually multiple tables are involved in deadlocks. For an unsigned 64-bit type, see the pguint extension.
- FROM users WHERE cell_id = $1 AND id % 2 = 0 will certainly use that index; WHERE cell_id = $1 AND id = $2 could use one of those indexes; but WHERE cell_id = $1 alone won't use any of those indexes.