
Questions tagged [compression]

The name given to the process of encoding data so that it uses fewer bits than the original representation.

5 votes
1 answer
180 views

I'm trying to predict long-term SQL backup storage needs and if/where to enable compression, on a storage appliance with de-duplication. I know the best answer is "test both and see", which ...
BradC
  • 10.1k
1 vote
0 answers
27 views

I'm experiencing some issues with Apache CouchDB's database compression (version 3.3.3). Sometimes the process gets stuck midway, which causes significant difficulties. I'd like to know if it's ...
Sergio
  • 11
3 votes
1 answer
319 views

MariaDB offers multiple compression methods: bzip2, lz4, lzma, lzo and snappy. Why are there so many? Which one is recommended? Why isn't zstd an option? If one compression method is adopted ...
Otto
  • 479
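
As a point of reference, MariaDB's InnoDB page compression algorithm is selected via a server variable, and a table option turns it on per table. A minimal sketch, assuming the chosen library is compiled into the server; the table name is hypothetical:

    -- Choose the page compression algorithm server-wide (zlib is the default)
    SET GLOBAL innodb_compression_algorithm = 'lz4';

    -- Enable page compression on a hypothetical table
    ALTER TABLE orders PAGE_COMPRESSED=1;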
0 votes
3 answers
561 views

When I take a diff backup of my database without specifying compression, it still compresses. What if I explicitly want no compression? BACKUP DATABASE MyDatabase TO DISK='\\myserver\SQLBackups$...
Marcello Miorelli
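
For reference, the BACKUP statement accepts an explicit NO_COMPRESSION option, which overrides the instance-level backup-compression default. A minimal sketch; the path below is hypothetical rather than the truncated one above:

    BACKUP DATABASE MyDatabase
    TO DISK = N'\\myserver\SQLBackups\MyDatabase_diff.bak'  -- hypothetical path
    WITH DIFFERENTIAL, NO_COMPRESSION;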
1 vote
0 answers
22 views

We are currently using Cassandra version 1.2.14 and mirroring across 3 IPs. We have 29TB of storage space, with approximately 24TB of data currently stored. I have a few questions: During internal ...
cassandra beginer
-1 votes
2 answers
175 views

I have a mariadb database set up for logging experiment data. In one of the tables, I store huge raw images in every row. With a few million rows each containing 3 512*512px images, I run out of disk ...
hz lin
  • 1
4 votes
1 answer
614 views

I use backup compression everywhere. Recently, I discovered that my backups were getting a little too big. I decided to fix this by compressing the indexes in the database (typically PAGE compression)....
J. Mini
  • 1,362
1 vote
0 answers
44 views

The current setup consists of two nodes, a Master-Slave setup. No encryption, no compression. I would love to switch this to a Galera Cluster, with encryption, compression and, to make things ...
Silviu Bajenaru Marcu
3 votes
1 answer
249 views

An old Microsoft paper says to consider using ROW compression by default if you have lots of CPU room (emphasis mine). If row compression results in space savings and the system can accommodate a 10 ...
J. Mini
  • 1,362
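
For context, row compression is applied with a rebuild of the table and/or its indexes. A minimal sketch with hypothetical object names:

    -- Apply ROW compression to the heap or clustered index
    ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = ROW);

    -- Apply it to all nonclustered indexes as well
    ALTER INDEX ALL ON dbo.FactSales REBUILD WITH (DATA_COMPRESSION = ROW);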
1 vote
1 answer
46 views

GridDB 5.6 has a new compression method that I wanted to test. I made a simple test where I ingested X rows and compared the new compression method against the old compression available prior to 5....
L. Connell
2 votes
1 answer
250 views

Observations: The documentation says, "To enable backup checksums in a BACKUP (Transact-SQL) statement, specify the WITH CHECKSUM option. To disable backup checksums, specify the WITH NO_CHECKSUM option." ...
J. Mini
  • 1,362
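
For reference, the two options quoted from the documentation appear in a backup command like this; the database name and path are hypothetical:

    -- Verify page checksums and generate a backup checksum while writing the backup
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH CHECKSUM;

    -- Explicitly skip checksum generation
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH NO_CHECKSUM;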
0 votes
1 answer
98 views

My problem at the moment is that I need to index/save snapshots of some data every x minutes, so every x minutes I am inserting around 500k new rows into a table, where each one represents a ...
Ala
  • 1
1 vote
2 answers
1k views

After doing some experimentation, it appears as though COMPRESSION lz4 doesn't have any effect on JSONB columns (whereas it does for JSON, and TEXT). Is that indeed the case? If so, why is that? I ...
user295232
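
One way to check what is actually happening, assuming PostgreSQL 14 or later and hypothetical table/column names, is to set the column's compression method and then inspect stored values:

    -- Only newly written values are affected; existing rows keep their old compression
    ALTER TABLE events ALTER COLUMN payload SET COMPRESSION lz4;

    -- Returns 'lz4', 'pglz', or NULL when a value was not compressed (e.g. too small to be TOASTed)
    SELECT pg_column_compression(payload) AS method, count(*)
    FROM events
    GROUP BY 1;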
0 votes
1 answer
239 views

I am compressing partitioned tables. Before partitioned tables, I tried normal tables with the following steps: DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'USER', orig_table => '...
datascinalyst
0 votes
1 answer
152 views

Everything that follows is quoted from the datetime row here. I have broken the quote to give commentary on my understanding. Uses the integer data representation by using two 4-byte integers. The ...
J. Mini
  • 1,362
0 votes
1 answer
253 views

The documentation is very clear on this point: "Compression doesn't affect backup and restore." But how is that possible? Surely, if my table is smaller (due to compression), then that should have some ...
J. Mini
  • 1,362
0 votes
1 answer
158 views

I have a table that I believe owes most of its size to a huge NVARCHAR(max) column. I do not know how to test where most of its size comes from. Regardless, would such a table benefit from row ...
J. Mini
  • 1,362
0 votes
1 answer
193 views

We have made good progress following our previous question about SQL Server data compression. For one of the tables, we compressed it from 20 GB to 2 GB, so this proves that the best compression ratio ...
James
  • 149
0 votes
1 answer
55 views

The difference between simple and detailed WITH clauses: In YouTube tutorial SQL Server Tutorial 20: Table, Index and Row Compression, the example at time 5:43 contains a simple WITH clause of the ...
James
  • 149
-1 votes
1 answer
111 views

We want to compress a table, using SQL statements like the samples below. And, just in case any issue happens after the compression and we need to roll back to the last good point, how do we do it? ...
James
  • 149
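
If rolling back only means returning to the uncompressed state (rather than restoring a backup), the same rebuild syntax works in reverse. A sketch with hypothetical object names:

    -- Remove compression from the table and all of its indexes
    ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = NONE);
    ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (DATA_COMPRESSION = NONE);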
0 votes
1 answer
308 views

We have an application that has a complex sql code generation scheme, that outputs queries of the form: SELECT coalesce(json_agg("foo"), '[]') AS "foo" FROM ( SELECT ...
jberryman
  • 481
4 votes
1 answer
953 views

XML_COMPRESSION has recently gone GA in Azure SQL Database. I've been trying to find some details about how it works, to understand the pros and cons, and so far have not found any specifics. Trying the below ...
Martin Smith
  • 88.7k
-2 votes
1 answer
123 views

According to this article, lzma achieves a great compression ratio. I think an lzma-based database is a good choice if I don't care about compression speed, so is there any database that uses it?
destination
0 votes
0 answers
110 views

So we've been doing some temporary backups on Windows machines and came across something interesting. Doing plain formatted dumps with no compression and blobs, we get significant size differences ...
mrxaxen
0 votes
1 answer
251 views

The InnoDB engine, when using page compression, depends heavily on file system sparse file support (InnoDB Page Compression). This raises the question: how will mariabackup/xtrabackup handle those InnoDB ...
g.pickardou
1 vote
3 answers
814 views

I have a table with a big text column, approx 3M per row; the table uses the InnoDB storage engine. The binlog size is an issue; I have binlog turned on and configured to format:row. I do know about ...
g.pickardou
0 votes
1 answer
347 views

On SQL Server 2016 (13.0.6300.2) we have one table that has two XML columns and around 1 500 000 rows. The size of the table is around 150 GB. What is the best option to compress this table? I was checking ...
adam.g
  • 487
0 votes
1 answer
969 views

I have set up two test systems for primary/standby replication. On the bigger VM with more CPU power I set default_toast_compression = lz4 and wal_compression=on. When I created tables on that VM I ...
ultimo_frogman
6 votes
2 answers
3k views

I am running PostgreSQL on compressed ZFS file system. One tip mentioned is to disable PostgreSQL's inline TOAST compression because ZFS can compress data better. This can be done by setting column ...
Mikko Ohtamaa
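
The per-column approach mentioned looks roughly like this; table and column names are hypothetical. EXTERNAL stores values out of line without TOAST compression, leaving compression to ZFS:

    ALTER TABLE measurements ALTER COLUMN payload SET STORAGE EXTERNAL;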
0 votes
0 answers
263 views

I'm looking for help with Backup Compression for TDE databases. I'm working with SQL 2016 (SP3-CU1) and am aware that MaxTransferSize must be specified as over 64K. A few of these DBs are over ...
Rebekah W
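
For reference, the MAXTRANSFERSIZE requirement translates into a backup command along these lines; names and path are hypothetical, and 131072 bytes (128 KB) is above the 64 KB threshold:

    BACKUP DATABASE MyTdeDatabase
    TO DISK = N'E:\Backups\MyTdeDatabase.bak'
    WITH COMPRESSION, MAXTRANSFERSIZE = 131072;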
1 vote
2 answers
2k views

Let's say I want to implement basic compression for my table. I know it can be done in two steps, for example: alter table MYTABLE compress; alter table MYTABLE move; Is there a way to check that ...
elfcheg
  • 167
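
One way to verify the result afterwards, assuming a non-partitioned table with a hypothetical name, is to query the data dictionary:

    -- COMPRESSION reports ENABLED/DISABLED; COMPRESS_FOR reports the compression type
    SELECT table_name, compression, compress_for
    FROM   user_tables
    WHERE  table_name = 'MYTABLE';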
1 vote
1 answer
1k views

We're trying to import (reverse engineer) some data from someone else's MS SQL database, without any vendor support. In the past the data has been stored in either plain text or RTF, so it was easy to extract....
Delphinus
3 votes
1 answer
760 views

We run weekly backups of a TDE-enabled database using a command in the following format: BACKUP DATABASE [DBName] TO DISK = N'C:\Temp\DBName.bak' WITH FORMAT, INIT, MEDIANAME = N'DBName ...
Tanner Jotblad
2 votes
4 answers
2k views

I have MariaDB 10.5 on my desktop with multiple disks (SSD and HDD) for write-intensive projects. Writing to a single table is fast and the percentage of dirty pages remains close to zero with 1000-...
Googlebot
  • 4,551
0 votes
0 answers
75 views

I might need to work with relatively large (non-clustered) index keys (between 1KB and 4KB). I found that Spanner and DB2 support an 8KB index-key size, MSSQL allows 1.7KB and Postgres around 2.6KB (1/3 ...
user5721565
1 vote
1 answer
791 views

We are using SQL Server Express 2019 and have inevitably run into the 15GB cap. I now have to figure out a way to quickly compress data since we cannot buy a license at the moment. At the moment it is ...
Ivan Snyman
0 votes
1 answer
149 views

I am trying to gather space estimates if compression is removed from tables in a database. I started with a temp table and generated statements to execute sp_estimate_data_compression_savings as ...
Patrick
  • 698
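
A minimal sketch of that call for a single, hypothetical table, estimating its size with compression removed:

    EXEC sp_estimate_data_compression_savings
         @schema_name      = N'dbo',
         @object_name      = N'MyCompressedTable',
         @index_id         = NULL,   -- all indexes
         @partition_number = NULL,   -- all partitions
         @data_compression = N'NONE';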
1 vote
1 answer
674 views

In PostgreSQL, I usually run this command to backup and compress (since my country has really low bandwidth) from server to local: mkdir -p tmp/backup ssh sshuser@dbserver -p 22 "cd /tmp; ...
Kokizzu
  • 1,403
0 votes
0 answers
45 views

One of the benefits of Page Compression or ColumnStore Archive is reduced storage space requirements. Here I have two questions: when compressed data is read from disk into the buffer pool, in the buffer pool ...
Aleksey Vitsko
3 votes
2 answers
1k views

Does anyone know a tool like this page https://columnscore.com/ where one can determine if a table is a good candidate for row or page compression? Also I am trying to understand the benefits of ...
xhr489
  • 827
1 vote
1 answer
526 views

I have a write-intensive MariaDB with both NVMe SSD and HDD disks. I recently enabled page compression (innodb_compression_default=ON). I encountered two problems: The database gets slow after a ...
Googlebot
  • 4,551
3 votes
1 answer
2k views

My tables have been created with InnoDB row compression (ENGINE=InnoDB ROW_FORMAT=COMPRESSED). Now I am changing them to page compression. According to the official documentation of MariaDB, enabling ...
Googlebot
  • 4,551
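
The conversion itself is a single table rebuild; the table name below is hypothetical. Because ROW_FORMAT=COMPRESSED and page compression are mutually exclusive, the row format has to change in the same statement:

    ALTER TABLE big_table ROW_FORMAT=DYNAMIC PAGE_COMPRESSED=1;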
2 votes
2 answers
1k views

I've been reading about this for a long time, and it seems it's not safe to compress a backup when the database has TDE enabled. Is anyone experiencing errors during restore with compressed backups ...
Racer SQL
  • 7,558
1 vote
1 answer
3k views

We have a need to send a PostgreSQL DB backup to a CRM developer. We send them and get feedback that the backup can't be "unzipped", as the archive is broken. The backup script snippet looks like ...
Michał Lipok
0 votes
2 answers
774 views

I am using MS SQL2016 enterprise edition and we have several tables that need to be compressed to save memory and backup space. Right now they are each around 500GB and I am trying to get some space ...
tbear58203
0 votes
1 answer
492 views

I created a table with partitions by following this SQL Server 2012 partitioned index document. I created partitions monthly based on Date_Id column; CREATE PARTITION FUNCTION Fnc_Prt_Fact_Sales (INT) ...
rkapukaya
0 votes
1 answer
792 views

I am confused about how this compression feature works in SQL Server: for quite a few large tables, ones over a TB, we recently implemented PAGE level compression. Example of how it was done: ...
BeginnerDBA
  • 2,230
0 votes
2 answers
207 views

Per our security guidelines, when a database grows so big that it cannot meet the requirement of being restored within a given RTO, we need to plan on achieving the same: Based on your ...
BeginnerDBA
  • 2,230
0 votes
1 answer
247 views

According to the SQL Server 2016 docs, the COMPRESS and DECOMPRESS functions are just a black box - you put the data in and after some magic it gets compressed or decompressed. The problem is that I need to find ...
Paweł Sopel
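
For reference, COMPRESS and DECOMPRESS are documented as using the GZip algorithm and returning VARBINARY(MAX); a minimal round-trip sketch:

    DECLARE @blob VARBINARY(MAX) = COMPRESS(N'some long repetitive text ...');

    SELECT DATALENGTH(@blob)                        AS compressed_bytes,
           CAST(DECOMPRESS(@blob) AS NVARCHAR(MAX)) AS round_trip;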
1 vote
1 answer
190 views

I have a quite big Data Pump dump file and tried to load it into a compressed tablespace created with "DEFAULT COMPRESS FOR OLTP", because of a lack of free space. Anyway, when I look at ...
Mikhail Aksenov