PostgreSQL 18 enables data checksums by default when initializing a new cluster. From the documentation:
By default, data pages are protected by checksums, but this can optionally be disabled for a cluster. When enabled, each data page includes a checksum that is updated when the page is written and verified each time the page is read. Only data pages are protected by checksums; internal data structures and temporary files are not.
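The mechanism described in the quote can be sketched in a few lines: compute a checksum when a page is written, verify it when the page is read, and fail loudly on a mismatch. This is only a conceptual illustration using CRC-32; PostgreSQL actually uses its own FNV-derived page checksum, stored in the page header and seeded with the block number.

```python
import zlib

PAGE_SIZE = 8192  # PostgreSQL's default block size


def write_page(payload: bytes) -> bytes:
    """Pad the payload to page size and prepend a checksum (concept only)."""
    page = payload.ljust(PAGE_SIZE - 4, b"\x00")
    checksum = zlib.crc32(page).to_bytes(4, "big")
    return checksum + page


def read_page(stored: bytes) -> bytes:
    """Verify the stored checksum before handing the page back."""
    checksum, page = stored[:4], stored[4:]
    if zlib.crc32(page).to_bytes(4, "big") != checksum:
        raise IOError("page checksum mismatch: corruption detected")
    return page


page = write_page(b"row data")
assert read_page(page) == b"row data".ljust(PAGE_SIZE - 4, b"\x00")

# Flip one bit, as a failing disk or controller might:
corrupted = bytearray(page)
corrupted[100] ^= 0x01
try:
    read_page(bytes(corrupted))
except IOError as e:
    print(e)  # page checksum mismatch: corruption detected
```

Note that, as in PostgreSQL, corruption is only detected when the page is actually read back, which is one reason tools that scan every page (see below) exist.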
See also this blog post from a company that offers third-level PostgreSQL support:
With release of PostgreSQL 18, the community decided to turn on data‑checksums by default – a major step toward early detection of these failures.
(credativ.de blog, 2025-11-03)
This indicates that whatever reasons there were in the past for keeping checksums disabled (perceived risk, performance implications, overhead) have since been re-evaluated and resolved by the development team.
The mailing-list discussion of the patch that changed the default sheds some light on the earlier reasoning:
There was some hesitation years ago when
this feature was first added, leading to the current situation where the
default is off. However, many years later, there is wide consensus that
this is an extraordinarily safe, desirable setting. Indeed, most (if not
all) of the major commercial and open source Postgres systems currently
turn this on by default.
I think the last time we discussed this the consensus was that
computational overhead of computing the checksums is pretty small for
most systems (so the above change seems warranted regardless of whether
we switch the default), but turning on wal_compression also turns on
wal_log_hints, which can increase WAL by quite a lot.
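For context, the overhead discussed in the quote involves these (real) settings; the values below are illustrative, not recommendations:

```ini
# postgresql.conf (illustrative values)
wal_compression = lz4   # compress full-page images written to WAL
wal_log_hints = on      # WAL-log hint-bit changes even without checksums;
                        # with data checksums enabled, this behavior is
                        # effectively implied, which is where the extra
                        # WAL volume comes from
```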
Depending on your storage stack, PostgreSQL's checksums are either a game changer or superfluous. For example, if your database lives on a filesystem that already checksums data, such as Btrfs or ZFS, PostgreSQL's own checksums add little. In practice, though, PostgreSQL performance tends to suffer on copy-on-write filesystems like these, so something else is usually used underneath.
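If you are unsure what your own cluster does, a few real commands can tell you. A hedged sketch, guarded so it degrades gracefully when the tools or the `PGDATA` environment variable are absent:

```shell
# Check whether a cluster uses data checksums.
if command -v pg_controldata >/dev/null 2>&1 && [ -n "${PGDATA:-}" ]; then
    # Offline: the control file records the checksum version (0 = disabled).
    checksum_info=$(pg_controldata "$PGDATA" | grep -i checksum)
else
    checksum_info="pg_controldata or PGDATA not available here"
fi
echo "$checksum_info"

# Other options (not run here):
#   pg_checksums --check -D "$PGDATA"   # offline, verifies every page
#   psql -Atc "SHOW data_checksums;"    # online, prints "on" or "off"
```

On PostgreSQL 18, opting out at cluster creation time should look like `initdb --no-data-checksums -D "$PGDATA"` (flag name per the 18 release notes; verify against your version's initdb).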
Even if you run some very expensive enterprise storage whose vendor promises internal checksumming and redundancy, it is still a black box: likely full of complexity, code of unknown quality, with a non-zero probability of bugs, and with no way to tell how much of the price goes into marketing, sales bonuses and the C-suite's Porsches, Lambos and yachts rather than into QA and thorough engineering. Additional checksumming at a higher layer is therefore defense in depth.
See also: