Here is a similar example that uses multiprocessing instead of multithreading (for documentation, see the official docs). Multiprocessing works very similarly to multithreading, but it sidesteps the Global Interpreter Lock, so your script can actually run several processes at the same time, potentially making better use of multiple CPU cores.
import multiprocessing as mp

def my_function(*args):
    print("Arguments: {0}".format(args))

class MyProcess(mp.Process):
    def __init__(self, target, args):
        mp.Process.__init__(self, target=target, args=args)

def main():
    a1 = MyProcess(target=my_function, args=("1st Process...",))
    a2 = MyProcess(target=my_function, args=("2nd Process...",))
    a3 = MyProcess(target=my_function, args=("3rd Process...",))
    a4 = MyProcess(target=my_function, args=("4th Process...",))
    proclist = [a1, a2, a3, a4]
    for proc in proclist:   # launch all four worker processes
        proc.start()
    for proc in proclist:   # wait for every process to finish
        proc.join()

if __name__ == '__main__':
    main()
Output:
Arguments: ('1st Process...',)
Arguments: ('2nd Process...',)
Arguments: ('3rd Process...',)
Arguments: ('4th Process...',)
While these arrived in what appears to be a fixed order, if my_function did work that takes a non-deterministic amount of time, the lines would arrive in whatever order the processes finish. Just replace the body of my_function with your own code and you should be set. (Note: this example uses Python 2, but it works in Python 3 with little or no modification; if you're on Python 3, you should also look into concurrent.futures.)
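To illustrate the concurrent.futures suggestion, here is a minimal sketch of the same four-task example using ProcessPoolExecutor from the Python 3 standard library. It assumes the same my_function as above, except that it returns its message instead of printing it, so the results can be collected in submission order:

```python
from concurrent.futures import ProcessPoolExecutor

def my_function(*args):
    # Same idea as above, but return the string so the parent can collect it
    return "Arguments: {0}".format(args)

def main():
    # The executor creates and manages the worker processes for you;
    # no Process subclass or manual start()/join() calls are needed.
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(my_function, "{0} Process...".format(n))
                   for n in ("1st", "2nd", "3rd", "4th")]
        for future in futures:
            # result() blocks until that particular task has finished
            print(future.result())

if __name__ == '__main__':
    main()
```

Because the futures are iterated in the order they were submitted, the output here is deterministic even though the tasks run in parallel; use concurrent.futures.as_completed if you want results as they finish instead.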