The fastest way
I've used the solution below to import a 100GB database (not a typo) of almost 300 million records in less than 10 hours, so it's safe to say it will rip through the average SQL dump in no time.
First, make sure your database is making the most of your hardware - most out-of-the-box MySQL installations are configured to be as lightweight (and therefore slow) as possible. Assuming your database uses the InnoDB engine (which it does if you're using XAMPP), edit the following variables in your my.ini file (my.cnf on Linux/macOS):
innodb_buffer_pool_size=8G # set to 50-75% of your system's RAM, depending on how much you need to keep free
innodb_log_file_size=2G # set to 25% of the buffer pool size
innodb_log_buffer_size=8M
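The percentages above translate into concrete values with a quick shell calculation. A sketch only - the 16 GB RAM figure and the 60% choice are placeholders, substitute your own machine's numbers:

```shell
# Sizing rule from above: buffer pool at 50-75% of RAM (60% chosen here),
# log file at 25% of the buffer pool. RAM_GB=16 is a placeholder.
RAM_GB=16
BUFFER_POOL_GB=$(( RAM_GB * 60 / 100 ))
LOG_FILE_GB=$(( BUFFER_POOL_GB * 25 / 100 ))
echo "innodb_buffer_pool_size=${BUFFER_POOL_GB}G"
echo "innodb_log_file_size=${LOG_FILE_GB}G"
```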
If you're using another database engine then you'll need to find the equivalent configuration options and tune them accordingly.
Once you've made those configuration changes, restart the MySQL server and run the following from the command line:
mysql -u root -e "CREATE DATABASE IF NOT EXISTS <database_name>; USE <database_name>; \
SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0; SET AUTOCOMMIT = 0; \
source <file_name>; SET UNIQUE_CHECKS = 1; SET FOREIGN_KEY_CHECKS = 1; COMMIT;"
Although this uses the mysql CLI tool, most of what it runs is plain SQL. Note, however, that source is a mysql client command rather than SQL, so if you use your database tool's SQL tab instead (eg. phpMyAdmin), paste the contents of the dump file between the SET statements in place of the source line.
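If you'd rather import with plain shell redirection (mysql dbname < file.sql), you can get the same speed-up by wrapping the dump in the SET statements first. A sketch, where dump.sql, import.sql and mydb are placeholder names:

```shell
# Create a tiny stand-in dump so this sketch runs end to end;
# in practice dump.sql is your real export.
echo 'CREATE TABLE IF NOT EXISTS t (id INT);' > dump.sql

# Wrap the dump in the same speed-up statements as the one-liner above.
{
  echo 'SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0; SET AUTOCOMMIT = 0;'
  cat dump.sql
  echo 'SET UNIQUE_CHECKS = 1; SET FOREIGN_KEY_CHECKS = 1; COMMIT;'
} > import.sql

# Then import it with plain redirection (placeholder database name):
# mysql -u root mydb < import.sql
```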
database < file.sql does not look like any command to me, and if you see some syntax errors, please share them