
Questions tagged [postgresql-performance]

Performance issues with PostgreSQL queries

0 votes
1 answer
50 views

I just discovered that logical replication doesn't work between different schemas, like you cannot publish schema1.table1 in server1 to schema2.table1 in server2. In my setup, I have multiple servers ...
sophia
1 vote
1 answer
111 views

We've been tasked to restore one large database on a daily basis for analytics purposes. Here is something that we cannot change: the vendor provides a database dump file on S3 daily; we cannot ...
Rianto Wahyudi
1 vote
1 answer
103 views

I have the following table partitioned by projects. CREATE TABLE IF NOT EXISTS entries ( id bigint NOT NULL, status_id bigint NOT NULL, group_id bigint NOT NULL, project_id bigint ...
Josh • 13
0 votes
1 answer
109 views

We're running PostgreSQL v11.7 (upgrade is already planned) on Windows and seem to be hitting a rare dynamic shared memory bug #15749 which results in the error message: FATAL: cannot unpin a segment ...
andrews • 258
0 votes
2 answers
110 views

In PostgreSQL 15, there is a large non-partitioned table (~2.4 TB) where VACUUM has stopped completing in an acceptable time and has been stuck in the vacuuming indexes phase for 4 days. ...
Fahrenheit
1 vote
1 answer
94 views

I’m working with PostgreSQL 15 and need to move rows from TableA into TableB_dedup. TableB_dedup has a primary key that should "eat" duplicate rows — effectively performing deduplication ...
Fahrenheit
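The "primary key eats duplicates" move described above is typically done with `INSERT ... ON CONFLICT DO NOTHING` followed by deleting the source rows. A minimal sketch, using a hypothetical two-column schema and SQLite (via Python) so it runs anywhere — PostgreSQL accepts the same upsert clause:

```python
import sqlite3

# Hypothetical schema; in PostgreSQL the move is
#   INSERT INTO tableb_dedup SELECT * FROM tablea ON CONFLICT (id) DO NOTHING;
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tablea (id INTEGER, payload TEXT);
    CREATE TABLE tableb_dedup (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO tablea VALUES (1, 'a'), (1, 'a-dup'), (2, 'b');
""")
# WHERE true disambiguates the upsert clause when inserting from a SELECT
con.execute("""
    INSERT INTO tableb_dedup
    SELECT id, payload FROM tablea WHERE true
    ON CONFLICT (id) DO NOTHING
""")
con.execute("DELETE FROM tablea")  # the "move": source rows are gone
rows = con.execute("SELECT id, payload FROM tableb_dedup ORDER BY id").fetchall()
print(rows)  # first row per id wins: [(1, 'a'), (2, 'b')]
```

Which duplicate survives is insertion order, so add an `ORDER BY` to the `SELECT` if a specific row should win.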
0 votes
2 answers
90 views

In PostgreSQL 16.9 I have a table Time (duration, resourceId, date, companyId) representing timesheet entries and table Resources (id, name); I want to list sum of Time durations per week and employee ...
Lukas Macha
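The per-week, per-employee sum above is a plain join-and-group-by; in PostgreSQL the week bucket is usually `date_trunc('week', t.date)`. A sketch with made-up sample rows, run against SQLite (via Python) for portability, where `strftime('%Y-%W', ...)` plays the same bucketing role:

```python
import sqlite3

# Hypothetical tables modeled on the question's Time/Resources schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE resources (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE time_entries (duration REAL, resourceId INTEGER, date TEXT, companyId INTEGER);
    INSERT INTO resources VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO time_entries VALUES
        (2.0, 1, '2024-01-01', 10),
        (3.0, 1, '2024-01-02', 10),
        (1.5, 2, '2024-01-01', 10);
""")
rows = con.execute("""
    SELECT r.name, strftime('%Y-%W', t.date) AS week, SUM(t.duration) AS total
    FROM time_entries t JOIN resources r ON r.id = t.resourceId
    GROUP BY r.name, week
    ORDER BY r.name
""").fetchall()
print(rows)
```

In PostgreSQL, replace the `strftime` expression with `date_trunc('week', t.date)` and add a `WHERE companyId = ...` filter as needed.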
0 votes
0 answers
47 views

I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance. Original setup: I have two tables: tasks (parent) with ~65M rows and records (child) with ~200M ...
Cowabunga • 145
1 vote
0 answers
44 views

My question is about PostgreSQL. I found similar questions for MS SQL server but I don't know if the answers apply here. My table looks like this: scores ====== | ID | UserID | ValidFrom | ValidUntil ...
MrSnrub • 181
4 votes
1 answer
205 views

I'm using PostgreSQL 14.17. My database schema has two tables: Table "public.log_records" Column | Type ...
willn-cpx
0 votes
1 answer
108 views

I saw some log entries that indicated transaction time outliers of up to 10s at times, where transaction times are typically below 1s. To get a view of how often this happens, is there a way to get ...
nsandersen
1 vote
1 answer
123 views

I am using PostgreSQL 14.17. I am trying to debug a query planner failure in a bigger query, but I think I've narrowed down the problem to a self-join on a join table: SELECT t2.item_id FROM ...
Felipe • 317
0 votes
0 answers
57 views

I am getting ERROR: invalid memory alloc request size 2727388320 after I tried vacuuming in EnterpriseDB. It is getting bigger and bigger. How can I solve this issue now?
MD Nasirul Islam
0 votes
2 answers
115 views

I’m facing a strange issue with PostgreSQL query performance. The same query runs in under 1 second during certain periods, but takes around 20 seconds at other times. However, the data volume and ...
Suruthi Sundararajan
0 votes
1 answer
54 views

I am modeling my star schema. So, I want to create my dimension table for the customer, and the primary key from the raw table is a varchar. Is it advisable to make a surrogate key that will stand as ...
Chidinma Okeh
2 votes
0 answers
29 views

I have a large temporal table (>20 million rows) that contains data over time on about 15k unique objects. I can't figure out how to efficiently query "get the most recent record for each ...
hobbes274
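The "most recent record per object" query above is the classic greatest-n-per-group problem; PostgreSQL's idiomatic answer is `SELECT DISTINCT ON (object_id) ... ORDER BY object_id, recorded_at DESC`, backed by an index on `(object_id, recorded_at DESC)`. A portable sketch using `row_number()` instead (hypothetical schema, SQLite via Python so it runs anywhere):

```python
import sqlite3

# Hypothetical temporal table: one row per (object, timestamp).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (object_id INTEGER, recorded_at TEXT, value REAL);
    INSERT INTO readings VALUES
        (1, '2024-01-01', 10.0),
        (1, '2024-06-01', 20.0),
        (2, '2024-03-01', 30.0);
""")
rows = con.execute("""
    SELECT object_id, recorded_at, value FROM (
        SELECT r.*, ROW_NUMBER() OVER (
            PARTITION BY object_id ORDER BY recorded_at DESC) AS rn
        FROM readings r
    ) WHERE rn = 1 ORDER BY object_id
""").fetchall()
print(rows)  # latest row per object: [(1, '2024-06-01', 20.0), (2, '2024-03-01', 30.0)]
```

With only ~15k distinct objects against 20M+ rows, a loose index scan (a recursive CTE skipping from one `object_id` to the next) is often faster than either form above.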
1 vote
0 answers
96 views

I have a table with around 400k rows. With the current autovacuum configuration, the table is automatically vacuumed and analyzed up to 4 times daily. There are usually a couple of hours between the ...
Ezenwa • 23
0 votes
1 answer
59 views

We have a source table called events containing 500 GB of data for various customers. We have a column in this events table which holds hex values. The source table has foreign references to other tables ...
user3747198
0 votes
1 answer
55 views

How long should a pg_basebackup of around 500 GB usually take? Currently it takes about 7 hrs. What could be done to speed up this process? We create the backup locally and move it ...
Johannes
1 vote
0 answers
69 views

Environment: Database: AWS Aurora PostgreSQL ORM: SQLAlchemy API Framework: Python FastAPI Issue: I'm experiencing significant query performance degradation when my API receives concurrent requests. ...
Abhishek Tyagi
2 votes
1 answer
366 views

I have a SQL statement that runs slowly the first time: the first run takes more than 50 s, while the second takes only 6 s. How can I reproduce my observations with the first run of the ...
Dolphin • 937
1 vote
0 answers
61 views

I have a table that contains accumulated sales of many products over time. The schema is as follows: CREATE TABLE IF NOT EXISTS public.accumulated_sales ( id bigint NOT NULL, sale_date ...
juvian • 111
0 votes
1 answer
127 views

I have the following tables: CREATE TABLE IF NOT EXISTS users ( id NUMERIC(20, 0) NOT NULL DEFAULT NEXTVAL('users_sequence') PRIMARY KEY, list_id ...
Hasan Can Saral
4 votes
0 answers
324 views

Most of the queries got slower after upgrading our Postgres from version 15 to 17 using pg_upgrade. I reconfirmed that "vacuum, analyze" were all taken care of. To debug, instead of upgrade, I ...
Sajith P Shetty
6 votes
2 answers
422 views

I have the following table (in PostgreSQL 14.6): create table waste_trajectory ( id uuid default uuid_generate_v4() not null primary key, observation_id uuid not null, ...
6006604 • 173
2 votes
1 answer
82 views

I've upgraded the OS on PG database server from Win Server 2012r2 Standard to 2019 Standard. Nothing else was changed. Now I see write throughput deteriorates greatly during peak update times with ...
sevzas • 375
1 vote
0 answers
120 views

My web server is deployed on Kubernetes with horizontal pod scaling and a separate, non-auto-scaling PostgreSQL service which runs both a master and a read-only replica node, with high availability ...
Alechko • 229
2 votes
0 answers
81 views

I'm using a PG13 server as a feature DB. There is no other job on the server but this PG instance and a GPU-based machine learning process. There are no online transactions, and data loss or ...
Leon • 413
1 vote
1 answer
190 views

We are using PostgreSQL 13 as our core server, and encountered a performance bottleneck. The hardware includes 2 CPUs (AMD EPYC 9754, 128 cores / 256 threads each), 128 GB memory, hardware RAID0 ...
Leon • 413
0 votes
1 answer
103 views

I have PostgreSQL 15 running in a Kubernetes Pod using the Zalando Spilo image, managed by Patroni. The container has a memory limit of 16 GB, and the database size is 40 GB (data dir on disk). When I ...
ALZ • 171
0 votes
2 answers
36 views

As you know, for best performance on a PostgreSQL database server it is recommended to put the data directory (data_directory) and the pg_wal directory on separate partitions. Therefore, how do I set data_directory as /...
Siyavus
2 votes
1 answer
233 views

PostgreSQL - 13.15. I have a table that is 7 TB in size. There is an index - CREATE INDEX mytable_cmp_ts ON mytable USING btree (campaign_id, created_at); I can't try explain analyze for the ...
Jayadevan • 1,051
0 votes
1 answer
96 views

We are running Postgres 14.12 in RDS and experience very slow I/O reads, around 30 MB/s on index scans. We can't figure out what might be the cause of it. Any ideas on what we should/could check? ...
edena • 1
4 votes
2 answers
118 views

Purely relational databases (SQL) seem to suffer nowadays from the lack of a good solution for searching/indexing hierarchies. Here is a systematic review of the subject: https://stackoverflow.com/q/...
Peter Krauss
0 votes
1 answer
45 views

We have a PostgreSQL (w/TimescaleDB extension) running on a Ubuntu server where load has steadily increased over the year. We'd like to keep load under control. For CPU, or load, we'd like to ...
Mikko Ohtamaa
0 votes
0 answers
13 views

For the below diagram, the query attempts to find all classes where a person is either a teacher or a student. For a large dataset, searching for just teacherId or studentId (by commenting out either ...
AVIDeveloper
0 votes
3 answers
226 views

My postgresql instance keeps throwing this error and it temporarily goes away after I restart the database. Here's the logs: ERROR: parallel worker failed to initialize 2025-02-25 21:00:51.586 UTC [...
Starbody
0 votes
1 answer
65 views

A request like that is automated by Postgres. Is it OK for it to take 27 minutes? It seems it will never end: VACUUM pg_toast.pg_toast_423351
Slim • 291
0 votes
1 answer
63 views

Let's pretend that I have 1,000,000 rows to insert data and I don't care about data loss. How much faster would 1 commit after 1,000,000 rows be rather than 1 commit every 100,000 rows? Same question ...
Chicken Sandwich No Pickles
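The trade-off in the question above is commit frequency: each COMMIT forces a durability sync, so fewer commits means less per-row overhead (at the cost of a longer window of uncommitted work). A minimal sketch of the two loading patterns, with a made-up table and SQLite (via Python) standing in for PostgreSQL:

```python
import sqlite3

# Hypothetical bulk-load table; row counts shrunk for a quick run.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bulk (id INTEGER, payload TEXT)")
rows = [(i, "x") for i in range(100_000)]

# Variant 1: a single commit after all rows.
with con:                                   # one transaction
    con.executemany("INSERT INTO bulk VALUES (?, ?)", rows)

# Variant 2: a commit every 10,000 rows.
for start in range(0, len(rows), 10_000):
    with con:                               # one transaction per batch
        con.executemany("INSERT INTO bulk VALUES (?, ?)", rows[start:start + 10_000])

count = con.execute("SELECT COUNT(*) FROM bulk").fetchone()[0]
print(count)  # 200000: both variants loaded all 100k rows
```

For PostgreSQL specifically, `COPY` (or turning off `synchronous_commit` for the session, given the stated tolerance for data loss) usually dwarfs the difference between these two batch sizes.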
0 votes
1 answer
92 views

I'm trying to optimize a somewhat convoluted report-type query that's performed over two large tables and 3 smaller ones. Part of the filtering on the two large tables is done on a varchar nullable (...
bqback • 107
0 votes
1 answer
112 views

Will a hash index make this query faster? LIKE 'abc%' Hash indices can speed up point queries.
Marlon Brando
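On the hash-index question above: hash indexes serve only equality (`=`), while `LIKE 'abc%'` is really a range predicate, so only a btree (declared with `text_pattern_ops` when the collation is not C) can serve it. A small self-check of the range rewrite a btree exploits, over byte-wise ASCII comparison:

```python
# Prefix LIKE is equivalent to a half-open range: 'abc' <= s < 'abd'.
# This byte-wise equivalence is why a btree helps and a hash index cannot.
def like_prefix(s: str, prefix: str) -> bool:
    return s.startswith(prefix)                      # s LIKE 'abc%'

def btree_range(s: str, prefix: str) -> bool:
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)   # 'abc' -> 'abd'
    return prefix <= s < upper

words = ["abc", "abcd", "abd", "ab", "abz"]
assert [like_prefix(w, "abc") for w in words] == [btree_range(w, "abc") for w in words]
print("prefix LIKE matches the range scan on these samples")
```

Under non-C collations string comparison is no longer byte-wise, which is exactly why PostgreSQL needs `text_pattern_ops` (or a C-collated column) for the index to be used.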
0 votes
0 answers
45 views

We are using Postgres 10 with standby replication (read replica in recovery mode). For the same query, we observe significantly higher planning time on the replica compared to the primary. Execution ...
Aftab • 1
0 votes
2 answers
149 views

My table is like this: CREATE TABLE IF NOT EXISTS public.ticks_a_2507 ( tick_time timestamp(6) with time zone NOT NULL, tick_nano smallint NOT NULL, trade_day date NOT NULL, -- other ...
Leon • 413
2 votes
0 answers
64 views

I am investigating a performance issue in our postgres DB. The basic thing I was able to improve is to use && comparison instead of IN to achieve a 5-10 times faster execution of a query. The ...
chbi • 21
0 votes
1 answer
2k views

After upgrading the Postgres server from version 15 to version 17 running on RHEL 9, a few of the SQL statements changed plan and are running pretty slowly. I am basically an Oracle DBA and am thinking to perform the ...
Naveed Iftikhar
1 vote
1 answer
732 views

I am inserting data from the tmp_details table into the details table using an INSERT query in chunks. I am also modifying the data while inserting it. The query is taking a lot of time to execute. My ...
Purushottam Nawale
0 votes
1 answer
385 views

I have a long-running bug where some larger queries sometimes run much, much longer due to being stuck on wait_event MessageQueueSend. The difference can be anything from <100% to 1000s of percent when ...
user20061
0 votes
1 answer
321 views

I used to update a very large table using UPDATE queries, but they were taking too long to execute. To improve performance, I switched to using the CREATE TABLE approach and adding indexes to update ...
Purushottam Nawale
0 votes
2 answers
377 views

I'm working with a table that contains approximately 70 million records. I need to create a primary key and several indexes on this table. The SQL queries I'm using are as follows: BEGIN; ALTER TABLE ...
Purushottam Nawale
2 votes
0 answers
98 views

PostgreSQL 14.12 on Linux. I am encountering a weird situation where the same query gets 250x faster when minimally arranging my query into a subquery. This is even though the query plans are pretty ...
nh2 • 121