Greetings Support Community,
I have more than 10 million files that I am trying to load into a MySQL database using the following script:
WORKING_DIR=/tmp
FILE1="*test*"
timestamp_format="%Y-%m-%d %H:%i:%s.%x"
for i in ${WORKING_DIR}/${FILE1}
do
    if [ -f "$i" ]; then
        mysql -uroot -ptest my_database --local-infile=1 <<-SQL
SET sql_log_bin=0;
LOAD DATA LOCAL INFILE '${i}' INTO TABLE my_table
  FIELDS TERMINATED BY ','
  OPTIONALLY ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES
  (id, transaction_id, app_id, sub_id);
SQL
    fi
done
It's an extremely slow process. After about 24 hours, I've only been able to load about 2 million records (each file contains one record), so at this rate it will take about 5 days to finish. Is there a faster way of doing this? For example, should I concatenate the files before processing?
Any suggestion to improve loading this data into MySQL would be greatly appreciated.
Thanks!
/tmp, Linux.

IGNORE 1 LINES - does that mean that each file has a header row? If you concatenate the files, you may need to remove the header row.
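If it does have a header row, a rough sketch of the concatenate-then-load idea could look like the script below. This is only a sketch, not a drop-in fix: it assumes the same /tmp/*test* files, table, columns, and credentials as your script, that each file is one header line followed by one data line ending in a newline, and GNU tail/find. The combined file path (/var/tmp/combined.csv) is just an example name.

WORKING_DIR=/tmp
COMBINED=/var/tmp/combined.csv   # kept outside ${WORKING_DIR} so the *test* glob can't pick it up

# Strip each file's header line (tail -n +2) and append the remaining data
# rows into one combined CSV. find with "-exec ... {} +" avoids expanding a
# glob of 10 million+ filenames in the shell; -q suppresses tail's own
# "==> file <==" headers when it is given multiple files at once.
find "${WORKING_DIR}" -maxdepth 1 -type f -name '*test*' \
    -exec tail -q -n +2 {} + > "${COMBINED}"

# One LOAD DATA call for the combined file instead of one mysql client
# process per file. IGNORE 1 LINES is dropped because the headers are
# already stripped above.
mysql -uroot -ptest my_database --local-infile=1 <<-SQL
SET sql_log_bin=0;
LOAD DATA LOCAL INFILE '${COMBINED}' INTO TABLE my_table
  FIELDS TERMINATED BY ','
  OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (id, transaction_id, app_id, sub_id);
SQL

The main saving is that you start one mysql client and run one LOAD DATA statement instead of roughly 10 million of each; the per-file connection and process startup is almost certainly where your 24 hours are going.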