The performance bottleneck of your script most likely comes from writing to 3 files at the same time, which causes heavy fragmentation between the files and hence a lot of overhead.
So instead of writing to the 3 files as you read through the lines, buffer up a million lines (which should take less than 1GB of memory), then write the 3 million words to the output files one file at a time. This produces far less file fragmentation:
def write_words(words, *files):
    # Write the buffered words to the output files one file at a time,
    # so writes to the 3 files are not interleaved.
    for i, file in enumerate(files):
        for word in words:
            file.write(word[i] + '\n')

words = []
with open('input.txt', 'r') as f, open('words1.txt', 'w') as out1, open('words2.txt', 'w') as out2, open('words3.txt', 'w') as out3:
    for count, line in enumerate(f, 1):
        words.append(line.rstrip().split(','))
        if count % 1000000 == 0:
            # Flush the buffer every million lines.
            write_words(words, out1, out2, out3)
            words = []
    # Flush whatever is left in the buffer.
    write_words(words, out1, out2, out3)
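
If you want to cut down on the number of individual write() calls as well, a variant (a sketch, assuming the same input.txt with 3 comma-separated fields per line and the same output file names) is to buffer each column in its own list with the newlines already attached, and flush each list with a single writelines() call:

def flush_columns(columns, files):
    # Write each buffered column to its file with one writelines() call,
    # then empty the buffer for the next batch.
    for col, out in zip(columns, files):
        out.writelines(col)
        col.clear()

columns = ([], [], [])
with open('input.txt', 'r') as f, open('words1.txt', 'w') as out1, open('words2.txt', 'w') as out2, open('words3.txt', 'w') as out3:
    outs = (out1, out2, out3)
    for count, line in enumerate(f, 1):
        # Append the newline here so writelines() can be used as-is later.
        for col, word in zip(columns, line.rstrip().split(',')):
            col.append(word + '\n')
        if count % 1000000 == 0:
            flush_columns(columns, outs)
    # Flush whatever is left in the buffers.
    flush_columns(columns, outs)

The memory use is about the same as the buffered version above; the main difference is one writelines() call per file per batch instead of a write() call per word.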