So I have a little bash script:
for store in {4..80}
do
    for page in {1..200}
    do
        curl 'https://website.com/shop.php?store_id='$store'&page='$page'&PHPSESSID=sessionID' -O
    done
done
The script works, but it downloads all 200 pages of every store from 4 to 80 one after another, which takes a lot of time. (Notice the bash variables store and page in the curl URL.)
My goal is to run multiple curl requests simultaneously across the store/page combinations, instead of working through them one at a time, to save time.
Is this possible?
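
For reference, the simplest way to make the requests run concurrently is to background each curl call with & and wait at the end. A minimal sketch, keeping the same URL and PHPSESSID placeholder, could look like the following (the per-store/per-page output file names via -o are an assumption, chosen so concurrent downloads don't overwrite each other) — but note the caveat in the comment below about how many processes this launches:

for store in {4..80}; do
    for page in {1..200}; do
        # & backgrounds each curl call, so the loop moves on immediately
        curl "https://website.com/shop.php?store_id=${store}&page=${page}&PHPSESSID=sessionID" \
             -o "store_${store}_page_${page}.html" &
    done
done
wait   # block until every backgrounded download has finished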
Simply adding & to your curl command? That will spawn upwards of 15K+ concurrent curl calls (in reality your OS will probably choke/periodically hang, and/or some calls will complete before you get to the end of the 15K+ calls); even if your OS can juggle 15K+ concurrent curl calls, chances are your network and/or disk are going to turn into bottlenecks (ie, excessive thrashing will degrade overall performance); so you'll want to look at putting a limit on the number of concurrent curl calls you have outstanding; (google) search on "bash limit number of jobs".
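
Along those lines, here is a minimal sketch of a bash-level job limit; the max_jobs value and the per-page output file names are assumptions to tune for your own network and disk, and wait -n requires bash 4.3 or newer:

max_jobs=10   # assumed cap on concurrent downloads; adjust as needed

for store in {4..80}; do
    for page in {1..200}; do
        # if max_jobs curl processes are already running, wait for one to finish
        while (( $(jobs -rp | wc -l) >= max_jobs )); do
            wait -n   # waits for any one background job to exit (bash 4.3+)
        done
        curl "https://website.com/shop.php?store_id=${store}&page=${page}&PHPSESSID=sessionID" \
             -o "store_${store}_page_${page}.html" &
    done
done
wait   # let the last batch of downloads finish

An alternative to hand-rolling the limit is to print the URLs and feed them to xargs -P N or GNU parallel, which handle the throttling for you.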