
I am running 5.5.60-MariaDB and I need to load a database onto the server. The .sql dump file is located on a local SSD and contains ~700 million rows, with four indexed columns. Everything is included in the dump file.

My question is: is it normal that the import has already been running for 24 hours and the database is still not up? The server is not busy with anything else, and has 16 cores and 125 GB of RAM.

The command I am using is:

mysql -u root myDB < database_dump.sql

My configuration file is as follows:

[mysqld]
datadir=/home/ssd/mysql_datadir
tmpdir=/home/ssd/mysql_tmdir
socket=/var/lib/mysql/mysql.sock
innodb_buffer_pool_size=4GB

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

# include all files from the config directory
!includedir /etc/my.cnf.d

Both the tmp and data directories are on the same SSD, and so is the SQL dump. I do not know whether this is a normal amount of time to expect, or whether I should change something in my settings; that is why I am reaching out. The engine of the DB being loaded is InnoDB.

Thanks!

  • If you look in the datadir /home/ssd/mysql_datadir can you see files and are they growing in size? Also try logging into the database and running SHOW PROCESSLIST to see a list of currently running queries. Commented Aug 15, 2019 at 12:34
  • Yes they do! And in the process list, I can see inserts being executed... My question is maybe more general; I mean, whether this is a reasonable amount of time given the size of the MySQL dump (157 GB) and the fact that the indexing is included in the dump file. Or should it have already finished? Another issue is that if I try to do USE MYDB it hangs and I cannot log into it (I assume that is because it is being used at the moment). But if I could log in, I could e.g. count the rows and see how many have been uploaded so far... Is there a way around this? Commented Aug 15, 2019 at 12:40
  • That's good news :) The dump file won't contain the actual indexes, just the information to recreate them. The indexes are being rebuilt as the data is inserted into your tables. It's hard to say what speed and time is reasonable, but if the RAM and CPU levels look fairly low it could point to an IO bottleneck. Backup and restore operations are probably the most intense things your database will have to do, so whatever time it takes now is a benchmark for future comparison. From the Linux command line run iostat -d 2 to see how your IO is currently performing. Commented Aug 15, 2019 at 12:49
  • Many thanks @KevH. I have not worked with such DB sizes before, so I wasn't sure if I was doing something wrong... I ran the command but what should I be looking for? Commented Aug 15, 2019 at 12:52
  • 1
    It takes a lot of time to shovel 157GB of data. Also (depending on the parameters used in the dump), after it finishes INSERTing, it may spend hours more building the indexes. Commented Aug 15, 2019 at 17:37

1 Answer


If you are unsure whether anything is happening during the restore, here are a few ways to check for activity:

Using top

From the command line run top. It will show you what's happening on the system right now. You can then filter the list to show only MySQL processes by pressing o and typing COMMAND=mysql.
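
If your build of top doesn't support interactive filtering, an alternative (a sketch, assuming the server process is named mysqld) is to point top at the process directly:

top -p $(pidof mysqld)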

If RAM and CPU usage look fairly low, it may indicate that your system's IO is the bottleneck for the restore. In the top header you'll see the wa value, which is the percentage of time the CPU spends waiting for I/O to complete.
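
You can also watch that figure over time with vmstat (the trailing 2 is the refresh interval in seconds):

vmstat 2

A persistently high wa column alongside low us and sy values suggests the disk, not the CPU, is the limiting factor.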

You could interrogate the IO subsystem further using the Linux iostat command. You may be hitting IO bottlenecks but, unless you plan on doing this regularly, this import isn't going to be typical system usage, so don't read too much into IO performance from it alone. You're importing a 157 GB file - that's a lot of data to read, insert and build indexes for!
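
For example, the extended per-device view refreshed every two seconds (iostat comes with the sysstat package if it isn't already installed):

iostat -dx 2

Keep an eye on the %util and await columns; %util pinned near 100% with a rising await usually means the device is saturated and the import is I/O bound.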

MySQL client

Log into the server using mysql. Run SHOW PROCESSLIST to see a list of currently running queries. You should see the inserts taking place and changing on each execution of the query.
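
For example, from the mysql prompt:

SHOW FULL PROCESSLIST;

The FULL keyword stops the Info column being truncated, so you can see the whole statement. If USE myDB hangs while the import is running, try starting the client with auto-rehash disabled; -A skips reading table and column metadata for tab completion, which is often what blocks during a heavy import:

mysql -A -u root myDB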

Filesystem

Take a look in the /home/ssd/mysql_datadir directory to see whether the physical database files are growing as the data is inserted. The command below lists the directory contents every two seconds; the files should be growing steadily as the data imports.

watch -n 2 ls -la /home/ssd/mysql_datadir/

ibdata1 will hold the data for your database. ib_logfile0 and ib_logfile1 are used by MySQL/MariaDB to help ensure data consistency after an unclean shutdown.
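
Whether the rows land in ibdata1 or in per-table files depends on the innodb_file_per_table setting, which you can check from any session:

SELECT @@innodb_file_per_table;

If it returns 1, look for growing myDB/tablename.ibd files in the datadir instead of ibdata1.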

It's a bit too late to restart your import, but in future you can use a small utility called pipe viewer (pv) to see how far through the backup file the import has got, by piping the import through it.

pv database_dump.sql | mysql -u root myDB

It will give you a progress bar, show the read rate of the file, and give an estimated completion time. Bear in mind this only measures the reading of the file, not how long MariaDB takes to process it.
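
As a further tweak for a trusted dump, you can relax constraint checking in the session doing the load. This is just a sketch using standard MySQL/MariaDB session variables; it skips unique and foreign key validation, so only use it on a dump you know is consistent:

( echo "SET unique_checks=0; SET foreign_key_checks=0;"; pv database_dump.sql ) | mysql -u root myDB

Both variables are session-scoped, so they revert automatically when the import connection closes.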

I hope that helps!

  • Excellent feedback! Many thanks! Question: the size of ibdata1 is not growing, YET its timestamp keeps changing. Not sure what that means... Also, is it possible to kill the process, turn indexing off, restore the DB from the dump, and then re-create the indexes afterwards? Commented Aug 15, 2019 at 14:30
  • @bioplanet The page here will tell you more about why you're seeing the behaviour you describe. According to the mysqldump reference page if the backup file was created with the --disable-keys flag set then the import should go faster as it will leave the index creation until after the data is loaded. That page shows you what to look for in the backup file to see if that is the case or not. If you're happy with the answer please mark it as correct! Thanks! Commented Aug 15, 2019 at 15:08
  • Depending on the setting of innodb_file_per_table, either ibdata1 will grow, or a dbname/tablename.ibd file will be growing. Commented Aug 15, 2019 at 17:38
  • Log in as root and run SELECT @@innodb_file_per_table; to display the setting. If it is 1 (ON), each table gets its own tablename.ibd file, which the OS will list as each table load completes. If it is 0 (OFF), only ibdata1 will grow. Commented Aug 22, 2019 at 23:34
