SELECT table_1.time, table_1.spm1, table_2.spm2 FROM table_1
INNER JOIN table_2 ON table_1.time = table_2.time
INNER JOIN table_3 ON table_1.time = table_3.time
...;
I am using the above query syntax to fetch all rows from multiple tables, joining the value columns from the different tables on their time values. However, as the number of rows per table and the number of tables grow, performance degrades badly. Is there any way to optimize the query? Each table will hold roughly 0.1-1 million rows.
I've heard terms like indexing, partitioning, and SSDs, but I'm really a novice with Postgres and not sure which one to look into. Can anyone suggest query syntax that performs better than what I currently have, or give some detailed advice on restructuring my database?
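For illustration, "indexing" here usually means a B-tree index on each table's join column, so the planner can match rows by `time` without scanning whole tables. A minimal sketch, assuming the tables are named after the SPM tables shown below:

```sql
-- Hedged sketch: index the join column of each table.
-- (If time were the primary key, Postgres would create such an index automatically.)
CREATE INDEX IF NOT EXISTS idx_spm1_time ON spm1 (time);
CREATE INDEX IF NOT EXISTS idx_spm2_time ON spm2 (time);
-- ...repeat for each of the ~30 tables.
```

Note that indexes mainly help selective lookups and merge/hash joins; a query that returns every row will still have to read every row.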
Edit: Fetching all the data happens only once, when the page loads; I'm loading everything present in the DB to visualize plots. After the initial plot is generated, the page queries only the last row of each table to update the plots. The table structures are very simple.
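Since each refresh only needs the newest row per table, that part can be a cheap indexed lookup rather than a scan. A sketch, assuming an index on `time` exists (table and column names follow the SPM tables below):

```sql
-- Fetch only the most recent reading from one table.
-- With an index on time, this is a backward index scan of a single entry.
SELECT time, spm1
FROM spm1
ORDER BY time DESC
LIMIT 1;
```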
Table 1: SPM1

        time         | spm1
---------------------+-------
 2018-09-05 22:23:52 | 43.21

Table 2: SPM2

        time         | spm2
---------------------+-------
 2018-09-05 22:23:52 | 43.21
... and there are about 30 tables like these.
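One structural option (an assumption on my part, not something the question requires) is to store all ~30 series in a single long-format table, which removes the 30-way join from the initial page load entirely:

```sql
-- Hypothetical long-format layout: one row per (time, sensor) reading.
CREATE TABLE readings (
    time   timestamp NOT NULL,
    sensor text      NOT NULL,  -- e.g. 'spm1', 'spm2', ...
    value  numeric   NOT NULL,
    PRIMARY KEY (sensor, time)  -- also serves as the lookup index
);

-- The initial page load becomes one pass over one table:
SELECT time, sensor, value FROM readings ORDER BY time;
```

The trade-off is that the client (or a pivot query) has to regroup rows by sensor, but inserts, "latest value" lookups, and adding a 31st sensor all become simpler.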
Thanks,
Comment: do the tables really need separate time columns? The logic of the join implies they are 100% equal (`JOIN table_2 ON table_1.time = table_2.time`, etc.). Regarding performance, creating indexes on filtered columns will help if you restrict the dataset, as @Schwern suggested.
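To check whether an index is actually being used (or where the time goes in the join), Postgres can show the chosen plan with `EXPLAIN ANALYZE`; a sketch using the SPM table names from the question:

```sql
-- Prints the executed plan with real row counts and timings.
-- Look for "Index Scan" vs. "Seq Scan" nodes in the output.
EXPLAIN ANALYZE
SELECT spm1.time, spm1.spm1, spm2.spm2
FROM spm1
INNER JOIN spm2 ON spm1.time = spm2.time;
```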