I have a simple script that pulls SMART data from a series of hard drives and writes it to a timestamped log file, which is later parsed for relevant data.
filename="filename$( date '+%Y_%m_%d_%H%M' ).txt"
for i in {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p}
do
smartctl -a /dev/sd$i >> /path/to/location/$filename
done
Since this takes several seconds to run, I would like to parallelize it. I've tried appending an '&' to the single line in the loop, but that causes the text file to be written haphazardly as sections finish, rather than sequentially and in a readable order. Is there a way to fork this into separate processes for each drive and then pipe the output back into an orderly text file?
Also, I assume setting the filename variable will have to move inside the for loop so the forks can access it. That causes a problem, however, if the script runs long enough to roll over into a new minute (or two): the output then becomes sequentially datestamped fragments rather than one contiguous file.
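One common pattern for this: fork each smartctl into the background but have every worker write to its own temp file, then concatenate the files in drive order after a single wait. The filename variable is set once, before any forking, so the timestamp cannot roll over mid-run. This is a sketch, not a drop-in script: fake_smartctl is a hypothetical stand-in for "smartctl -a /dev/sd$1" so the example runs without root or real disks, and the drive list is shortened for illustration.

```shell
#!/bin/sh
# Sketch: per-drive temp files + wait, then ordered concatenation.
# fake_smartctl stands in for "smartctl -a /dev/sd$1" (assumption:
# you would substitute the real call on a machine with the drives).

outdir=$(mktemp -d) || exit 1
filename="report$(date '+%Y_%m_%d_%H%M').txt"  # set once, before forking

fake_smartctl() {
    case $1 in a) sleep 0.3 ;; b) sleep 0.1 ;; esac  # a finishes last
    echo "SMART report for /dev/sd$1"
}

# Fork one worker per drive; each writes only to its own file.
for i in a b c; do
    fake_smartctl "$i" > "$outdir/sd$i" &
done
wait  # block until every background worker has exited

# Concatenate in list order: the combined log is deterministic even
# though the workers completed out of order.
for i in a b c; do
    cat "$outdir/sd$i"
done > "$outdir/$filename"

combined=$(cat "$outdir/$filename")
printf '%s\n' "$combined"
rm -rf "$outdir"
```

Because each fork inherits a private copy of $filename from the parent shell, there is no need to set it inside the loop at all; the workers never even use it here, only the final concatenation does.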
Why {a,b,c,...} when a b c ... is both shorter and 100% portable? Don't write bash scripts. Write shell scripts.
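To illustrate the comment's point: brace expansion is a bash extension, and a strict POSIX /bin/sh passes {a,b,c} through as a literal word, whereas a plain word list behaves the same in every shell. A minimal sketch:

```shell
#!/bin/sh
# A plain word list is POSIX-portable; {a,b,c,...} is a bash-ism that a
# strict /bin/sh would leave unexpanded as the literal string "{a,b,c}".
drives=$(for i in a b c d; do printf '/dev/sd%s\n' "$i"; done)
printf '%s\n' "$drives"
```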