I have thousands of tables with the same structure, and I need to add an index to each one.
To simplify the work, I wrote a function like this:
CREATE OR REPLACE FUNCTION createIndexTD(
    IN  tbl_name information_schema.sql_identifier,
    OUT result   BOOLEAN)
LANGUAGE plpgsql AS
$func$
BEGIN
    -- %I quotes identifiers safely (unlike %s)
    EXECUTE format(
        'CREATE INDEX %I ON %I USING BRIN(trade_day, ABS(EXTRACT(EPOCH FROM (tick_time - brok_time))));',
        'idx_' || tbl_name || '_td_time', tbl_name);
    result := TRUE;
END
$func$;
Then I call it from a query like this:
SELECT createIndexTD("table_name")
FROM information_schema."tables"
WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
AND "table_name" LIKE 'ticks%';
It works correctly, but it uses only one core of my CPU, even though the machine has 12 cores (24 threads) and I have configured 12 parallel workers in my PostgreSQL instance.
The amount of data is huge.
Is there a way to make this function run in parallel?
In other words, how can I add indexes on multiple different tables concurrently?
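What I have in mind is something like driving several client sessions at once, each issuing CREATE INDEX on a different table. Below is a minimal sketch in Python, assuming the psycopg2 driver; the connection string, worker count, and `main()` wiring are placeholders for my setup, not tested code:

```python
# Sketch only: psycopg2, the DSN, and the worker count are assumptions.
from concurrent.futures import ThreadPoolExecutor

DSN = "dbname=mydb user=me"   # hypothetical connection string
WORKERS = 12                  # one session per core

def build_index_sql(tbl_name: str) -> str:
    """Mirror the statement built by createIndexTD()."""
    return (
        f"CREATE INDEX idx_{tbl_name}_td_time ON {tbl_name} "
        f"USING BRIN(trade_day, "
        f"ABS(EXTRACT(EPOCH FROM (tick_time - brok_time))));"
    )

def create_index(tbl_name: str) -> str:
    """Open a dedicated session and build one index in it."""
    import psycopg2  # assumed driver
    conn = psycopg2.connect(DSN)
    try:
        conn.autocommit = True
        with conn.cursor() as cur:
            cur.execute(build_index_sql(tbl_name))
    finally:
        conn.close()
    return tbl_name

def main() -> None:
    import psycopg2
    # Fetch the table list once, then fan out over WORKERS sessions.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT table_name FROM information_schema.tables "
            "WHERE table_schema = 'public' AND table_type = 'BASE TABLE' "
            "AND table_name LIKE 'ticks%'"
        )
        tables = [row[0] for row in cur.fetchall()]
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for done in pool.map(create_index, tables):
            print("indexed", done)

# Run main() against your own database; it is not invoked here.
```

Each worker holds its own connection, so each CREATE INDEX runs in its own backend and the builds proceed concurrently on the server.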
Thanks!
------------------------- Updated ---------------------------------
Following the hint from John K.N., I added PARALLEL SAFE and COST 2000 to my function and tried again.
But it looks like only one PostgreSQL worker is still doing the job (judging by the output of the Linux top command).
Then I edited postgresql.conf, set force_parallel_mode to on, and restarted PostgreSQL.
This time, PostgreSQL 11 refused to run my function and reported:
ERROR: cannot execute CREATE INDEX during a parallel operation.