Questions tagged [compression]
The process of encoding data so that it uses fewer bits than the original representation.
183 questions
5
votes
1
answer
180
views
Will unchanged database extents result in identical SQL backup bits on disk? What about with backup compression enabled?
(I'm trying to predict long-term SQL backup storage needs and if/where to enable compression, on a storage appliance with de-duplication. I know the best answer is "test both and see", which ...
1
vote
0
answers
27
views
Is it possible to change compression settings in CouchDB?
I'm experiencing some issues with Apache CouchDB's database compression (version 3.3.3). Sometimes the process gets stuck midway, which causes significant difficulties. I'd like to know if it's ...
3
votes
1
answer
319
views
What compression should be used in MariaDB?
MariaDB offers multiple compression methods: bzip2, lz4, lzma, lzo and snappy.
Why are there so many? Which one is recommended? Why isn't zstd an option?
If one compression method is adopted ...
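For orientation, a minimal sketch of how an algorithm is selected for InnoDB page compression in MariaDB; the table name is hypothetical and the chosen algorithm must be compiled into the server:
-- Pick the algorithm used when pages are compressed (MariaDB).
-- Valid values include none, zlib, lz4, lzo, lzma, bzip2, snappy.
SET GLOBAL innodb_compression_algorithm = 'lz4';
-- Enable page compression on one table (hypothetical name).
ALTER TABLE sensor_log PAGE_COMPRESSED = 1;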
0
votes
3
answers
561
views
How to backup without compression?
When I take a differential backup of my database without specifying compression,
it still gets compressed.
What if I explicitly want no compression?
BACKUP DATABASE MyDatabase TO DISK='\\myserver\SQLBackups$...
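If the instance-level default backup compression default is ON, the option can be overridden per backup; a hedged sketch with a hypothetical path:
-- State the option explicitly to override the instance default.
BACKUP DATABASE MyDatabase
TO DISK = N'\\myserver\SQLBackups$\MyDatabase.bak'  -- hypothetical share
WITH NO_COMPRESSION;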
1
vote
0
answers
22
views
Cassandra compaction problem
We are currently using Cassandra version 1.2.14 and mirroring across 3 IPs. We have 29TB of storage space, with approximately 24TB of data currently stored. I have a few questions:
During internal ...
-1
votes
2
answers
175
views
Is it possible to compress or partition an existing huge table in place in MariaDB?
I have a MariaDB database set up for logging experiment data. In one of the tables, I store huge raw images in every row. With a few million rows, each containing three 512*512 px images, I run out of disk ...
4
votes
1
answer
614
views
Does compressing indexes reduce the size of already compressed backups?
I use backup compression everywhere. Recently, I discovered that my backups were getting a little too big. I decided to fix this by compressing the indexes in the database (typically PAGE compression)....
1
vote
0
answers
44
views
Master - Slave - Galera cluster
The current setup consists of two nodes, a Master-Slave setup. No encryption, no compression. I would love to switch this to a Galera Cluster, with encryption, compression and, to make things ...
3
votes
1
answer
249
views
Should tiny dimension tables be considered for row or page compression on servers with ample CPU room?
An old Microsoft paper says to consider using ROW compression by default if you have lots of CPU room (emphasis mine).
If row compression results in space savings and the system can accommodate a 10 ...
1
vote
1
answer
46
views
GridDB v5.6 Compression Type (ZSTD). Querying much faster?
GridDB 5.6 has a new compression method that I wanted to test. I made a simple test where I ingested X rows and compared the new compression method against the old compression available prior to 5....
2
votes
1
answer
250
views
Does enabling backup compression in a Back Up Database Task enable checksums, no matter what setting you pick for them?
Observations
The documentation says
To enable backup checksums in a BACKUP (Transact-SQL) statement, specify the WITH CHECKSUM option. To disable backup checksums, specify the WITH NO_CHECKSUM option....
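For comparison with whatever the maintenance-plan task emits, the two options are independent in plain T-SQL; a minimal hedged sketch (database name and path hypothetical):
-- Compression on, checksums explicitly off: valid to combine.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, NO_CHECKSUM;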
0
votes
1
answer
98
views
Best database solution for high volume tables / data indexing (snapshots) [PostgreSQL]
My problem at the moment is that I need to index/save snapshots of some data every x minutes, so every x minutes I am inserting around 500k new rows into a table, each of which represents a ...
1
vote
2
answers
1k
views
Does LZ4 compression have an effect on JSONB columns?
After doing some experimentation, it appears as though COMPRESSION lz4 doesn't have any effect on JSONB columns (whereas it does for JSON, and TEXT). Is that indeed the case? If so, why is that?
I ...
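A minimal PostgreSQL 14+ sketch for switching and inspecting per-column compression; table and column names are hypothetical. Note that pg_column_compression() returns NULL for values too small to be compressed or TOASTed, which can look like "no effect":
ALTER TABLE docs ALTER COLUMN payload SET COMPRESSION lz4;
-- Only newly written, sufficiently large values use the new method.
SELECT pg_column_compression(payload) FROM docs LIMIT 5;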
0
votes
1
answer
239
views
Compressing partitioned tables in Oracle 19c
I am compressing partitioned tables.
Before the partitioned tables, I tried normal tables with the following steps:
DBMS_REDEFINITION.START_REDEF_TABLE(
uname => 'USER',
orig_table => '...
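For context, a hedged sketch of the full online-redefinition flow into a compressed interim table; every name here is hypothetical, and the interim table must be created beforehand (e.g. with ROW STORE COMPRESS ADVANCED):
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'USER',
    orig_table   => 'MY_TABLE',      -- hypothetical source table
    int_table    => 'MY_TABLE_INT',  -- pre-created compressed interim table
    options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('USER', 'MY_TABLE', 'MY_TABLE_INT');
END;
/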
0
votes
1
answer
152
views
How does the arithmetic for how many bytes are saved by row-compressing a datetime add up?
Everything that follows is quoted from the datetime row here. I have broken the quote to give commentary on my understanding.
Uses the integer data representation by using two 4-byte integers. The ...
0
votes
1
answer
253
views
Why does data compression not affect backups?
The documentation is very clear on this point
Compression doesn't affect backup and restore.
but how is that possible? Surely, if my table is smaller (due to compression), then that should have some ...
0
votes
1
answer
158
views
Will NVARCHAR(max) columns benefit from row compression?
I have a table that I believe owes most of its size to a huge NVARCHAR(max) column. I do not know how to test where most of its size comes from. Regardless, would such a table benefit from row ...
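One way to test before rebuilding, hedged and with hypothetical names: sp_estimate_data_compression_savings projects the new size. Keep in mind that ROW and PAGE compression do not touch off-row LOB data, which is where most of an NVARCHAR(max) column's bytes usually live:
-- Estimate the effect of ROW compression without changing anything.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'MyTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'ROW';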
0
votes
1
answer
193
views
Need help with SQL Server data compression, hoping to compress a large table further
We have made good progress following our previous question about SQL Server data compression. For one of the tables, we compressed it from 20 GB to 2 GB, so this proves that the best compression ratio ...
0
votes
1
answer
55
views
How to compose SQL Server commands for applying and undoing compression
The difference between simple and detailed WITH clauses:
In YouTube tutorial SQL Server Tutorial 20: Table, Index and Row Compression, the example at time 5:43 contains a simple WITH clause of the ...
-1
votes
1
answer
111
views
How to undo row-level or page-level compression in SQL Server, with a SQL command and/or the Management Studio GUI?
We want to compress a table, using SQL statements like the samples below. In case any issue happens after the compression and we need to roll back to the last good point, how do we do it?
...
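A hedged sketch of the rollback side (object names hypothetical); SSMS exposes the same action under Storage > Manage Compression:
-- Remove compression from the heap or clustered index.
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = NONE);
-- Remove compression from one nonclustered index.
ALTER INDEX IX_MyTable_Col ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = NONE);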
0
votes
1
answer
308
views
Compressing a json-shaped postgres response during transport
We have an application with a complex SQL code generation scheme that outputs queries of the form:
SELECT
coalesce(json_agg("foo"), '[]') AS "foo"
FROM
(
SELECT
...
4
votes
1
answer
953
views
How does the XML_COMPRESSION option work?
XML_COMPRESSION has recently gone GA in Azure SQL Database
I've been trying to find some details about how it works, to understand the pros and cons, and so far have not found any specifics.
Trying the below ...
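For reference, the documented surface is small; a minimal sketch with hypothetical names:
-- Enable XML compression at table creation.
CREATE TABLE dbo.XmlDocs
(
    Id  int IDENTITY PRIMARY KEY,
    Doc xml
) WITH (XML_COMPRESSION = ON);

-- Or retrofit an existing table.
ALTER TABLE dbo.XmlDocs REBUILD WITH (XML_COMPRESSION = ON);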
-2
votes
1
answer
123
views
Is there any database that uses LZMA?
According to this article, LZMA achieves a great compression ratio. I think an LZMA-based database would be a good choice if I don't care about compression speed, so is there any database that uses it?
0
votes
0
answers
110
views
pg_dump significant size difference between dumps
So we've been doing some temporary backups on Windows machines and came across something interesting.
Doing plain-format dumps with no compression, blobs included,
we get significant size differences ...
0
votes
1
answer
251
views
Mariabackup, Xtrabackup: how do they handle InnoDB page compression, i.e., sparse files?
The InnoDB engine, when using page compression, depends heavily on file-system sparse-file support. See InnoDB Page Compression.
This raises the question: how will mariabackup/xtrabackup handle those InnoDB ...
1
vote
3
answers
814
views
MariaDB column compression and binlog
I have a table with a big text column, approx. 3 MB per row; the table uses the InnoDB storage engine. The binlog size is an issue;
I have the binlog turned on and configured with format=row.
I do know about ...
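MariaDB's column compression (distinct from the InnoDB row/page formats) is declared per column; a hedged sketch with hypothetical names:
-- Compress one large text column at the storage layer (MariaDB 10.3+).
ALTER TABLE event_log MODIFY big_text LONGTEXT COMPRESSED;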
0
votes
1
answer
347
views
What is the best option to store XML in SQL Server 2016?
On SQL Server 2016 (13.0.6300.2) we have one table that has two XML columns and around 1,500,000 rows. The table is around 150 GB. What is the best option to compress this table? I was checking ...
0
votes
1
answer
969
views
PostgreSQL 14 LZ4 not showing in pg_column_compression; not working with COPY import command
I have set up two test systems for primary/standby replication. On the bigger VM with more CPU power I set default_toast_compression = lz4 and wal_compression = on. When I created tables on that VM I ...
6
votes
2
answers
3k
views
Disable TOAST compression for all columns
I am running PostgreSQL on compressed ZFS file system. One tip mentioned is to disable PostgreSQL's inline TOAST compression because ZFS can compress data better. This can be done by setting column ...
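The per-column knob is the storage mode; a minimal sketch, names hypothetical:
-- EXTERNAL keeps out-of-line storage but skips TOAST compression,
-- leaving compression to ZFS.
ALTER TABLE big_table ALTER COLUMN payload SET STORAGE EXTERNAL;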
0
votes
0
answers
263
views
Backup with Compression of large TDE databases
I'm looking for help with backup compression for TDE databases. I'm working on SQL Server 2016 (SP3-CU1) and am aware that the MaxTransferSize must be specified as over 64K. A few of these DBs are over ...
1
vote
2
answers
2k
views
Is my data compressed?
Let's say I want to implement basic compression for my table. I know it can be done in two steps, for example:
alter table MYTABLE compress;
alter table MYTABLE move;
Is there a way to check that ...
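One way to check, hedged: the Oracle data dictionary records the segment attribute, though existing rows are only rewritten by the MOVE step:
SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name = 'MYTABLE';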
1
vote
1
answer
1k
views
How to decrypt zipped data in MSSQL table
We're trying to import (reverse engineer) some data from someone else's MS SQL database, without any vendor support.
In the past the data has been stored in either plain text or RTF so easy to extract....
3
votes
1
answer
760
views
TDE Backups Won't Compress with NOINIT
We run weekly backups of a TDE-enabled database using a command in the following format:
BACKUP DATABASE [DBName] TO DISK = N'C:\Temp\DBName.bak' WITH
FORMAT,
INIT,
MEDIANAME = N'DBName ...
2
votes
4
answers
2k
views
Tuning InnoDB for a write-intensive machine
I have MariaDB 10.5 on my desktop with multiple disks (SSD and HDD) for write-intensive projects. Writing to a single table is fast and the percentage of dirty pages remains close to zero with 1000-...
0
votes
0
answers
75
views
Working with index-key sizes which are close to the DBMS's index-key size limit
I might need to work with relatively large (non-clustered) index keys (between 1 KB and 4 KB). I found that Spanner and DB2 support an 8 KB index-key size, MSSQL allows 1.7 KB, and Postgres around 2.6 KB (1/3 ...
1
vote
1
answer
791
views
Does SQL Server 2019 automatically decompress data when it is queried by an application? [duplicate]
We are using SQL Server Express 2019 and have inevitably run into the 15 GB cap. I now have to figure out a way to quickly compress data since we cannot buy a license at the moment.
At the moment it is ...
0
votes
1
answer
149
views
Space estimates for each table if compression is removed
I am trying to gather space estimates if compression is removed from tables in a database. I started with a temp table and generated statements to execute sp_estimate_data_compression_savings as ...
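A hedged sketch of generating those statements from the catalog; passing 'NONE' asks the procedure to estimate the decompressed size (schema filter hypothetical):
-- Emit one estimate call per user table.
SELECT 'EXEC sp_estimate_data_compression_savings '''
       + s.name + ''', ''' + t.name + ''', NULL, NULL, ''NONE'';'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;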
1
vote
1
answer
674
views
How to backup clickhouse over SSH?
In PostgreSQL, I usually run this command to back up and compress (since my country has really low bandwidth) from server to local:
mkdir -p tmp/backup
ssh sshuser@dbserver -p 22 "cd /tmp; ...
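Following the same pattern, a hedged shell sketch for ClickHouse; host, database, and table names are hypothetical, and clickhouse-client must be available on the server:
ssh sshuser@dbserver -p 22 \
  "clickhouse-client --query='SELECT * FROM mydb.mytable FORMAT Native' | gzip -c" \
  > mytable.native.gz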
0
votes
0
answers
45
views
Database Page Compression and ColumnStore Archive - is data compressed when in buffer pool and when sent to client? [duplicate]
One of the benefits of Page Compression or ColumnStore Archive is reduced storage space requirements.
Here I have two questions:
When compressed data is read from disk to buffer pool, in the buffer pool ...
3
votes
2
answers
1k
views
Test to see when table is candidate for compression (row or page)
Does anyone know of a tool, like the page https://columnscore.com/, with which one can determine whether a table is a good candidate for row or page compression?
Also I am trying to understand the benefits of ...
1
vote
1
answer
526
views
Slow disk with InnoDB page compression
I have a write-intensive MariaDB with both NVMe SSD and HDD disks. I recently enabled page compression (innodb_compression_default=ON). I encountered two problems:
The database gets slow after a ...
3
votes
1
answer
2k
views
ALTER innoDB table from row to page compression
My tables have been created with InnoDB row compression (ENGINE=InnoDB ROW_FORMAT=COMPRESSED). Now I am changing them to page compression. According to the official documentation of MariaDB, enabling ...
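The switch itself is one statement; a hedged sketch with a hypothetical table name:
-- Drop ROW_FORMAT=COMPRESSED and enable page compression instead.
ALTER TABLE my_table ROW_FORMAT=DYNAMIC PAGE_COMPRESSED=1;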
2
votes
2
answers
1k
views
Is it safe to compress backups for databases with TDE enabled?
I've been reading about this for a long time, and it seems it's not safe to compress a backup when the database has TDE enabled.
Is anyone experiencing errors during restore with compressed backups ...
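Since SQL Server 2016, compressed backups of TDE-enabled databases work once MAXTRANSFERSIZE exceeds 64 KB; a hedged sketch, database name and path hypothetical:
BACKUP DATABASE TdeDb
TO DISK = N'D:\Backups\TdeDb.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 1048576, CHECKSUM;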
1
vote
1
answer
3k
views
PostgreSQL backup + gzip
We have a need to send a PostgreSQL DB backup to a CRM developer.
We send them and get feedback that the backup can't be "unzipped", as the archive is broken.
The backup script snippet looks like ...
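A broken archive often traces back to a pipeline that swallowed an error; a hedged shell sketch that fails loudly and verifies the result (connection details hypothetical):
set -o pipefail
pg_dump -h dbhost -U dbuser -d mydb | gzip -c > mydb.sql.gz
gzip -t mydb.sql.gz   # non-zero exit means the archive is corrupt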
0
votes
2
answers
774
views
Table compression on 500GB table using Enterprise edition with online=on
I am using SQL Server 2016 Enterprise Edition and we have several tables that need to be compressed to save storage and backup space. Right now they are each around 500 GB and I am trying to get some space ...
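On Enterprise Edition the rebuild can stay online; a minimal hedged sketch, object names hypothetical:
ALTER INDEX PK_BigTable ON dbo.BigTable
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON, SORT_IN_TEMPDB = ON);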
0
votes
1
answer
492
views
Dynamic partitions with page compressed index
I created a table with partitions by following this SQL Server 2012 partitioned index document.
I created partitions monthly based on the Date_Id column:
CREATE PARTITION FUNCTION Fnc_Prt_Fact_Sales (INT)
...
0
votes
1
answer
792
views
Question on compression for very large tables in SQL Server
I am confused about how the compression feature works in SQL Server:
For some quite large tables, ones over a TB, we recently implemented PAGE-level compression.
Example of how it was done:
...
0
votes
2
answers
207
views
How to plan bringing the space used by databases down as low as possible
Per our security policies and guidelines, when a database grows too big to meet the requirement that it can be restored within a given RTO, we need to plan on achieving the same:
Based on your ...
0
votes
1
answer
247
views
SQL Server 2016 compress function details - external decompression
According to the SQL Server 2016 docs, the COMPRESS and DECOMPRESS functions are just a black box: you put the data in and, after some magic, it gets compressed or decompressed. The problem is that I need to find ...
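The docs do name the algorithm: COMPRESS emits a standard GZIP stream, so the payload round-trips outside the engine too; a small T-SQL sketch:
-- The VARBINARY output is plain GZIP; gunzip/zlib can decompress
-- it externally.
SELECT CAST(DECOMPRESS(COMPRESS(N'some payload')) AS nvarchar(max)) AS roundtrip;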
1
vote
1
answer
190
views
COMPRESS TABLESPACE 11g
I have a quite big Data Pump dump file and tried to load it into a compressed tablespace created with "DEFAULT COMPRESS FOR OLTP" because of a lack of free space. Anyway, when I look at ...