
I am running an executable using the subprocess.Popen API.

I have also set a timeout in the communicate call, since the executable could hang or take too long. The problem is that on a heavily loaded server the run seems to time out often. I suspect this is because the process does not get enough CPU time, as I run multiple processes in parallel.

I could increase the timeout, but that is a slippery slope: the machine might become even more loaded (it's used as a Jenkins server).

process = subprocess.Popen(cmd, stdout=log, stderr=log, ...)
process.communicate(timeout=timeout)

Is there a way I can refactor this to instead measure CPU time given to the process and timeout based on that?

I've seen questions suggesting timeit.default_timer(), but I'm not sure that will work for me.
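For context, timeit.default_timer() measures wall-clock time, which is what communicate's timeout already uses, so it cannot distinguish "hung" from "starved of CPU". A minimal sketch contrasting the two on Unix, using os.times() to read the accumulated CPU time of reaped child processes:

```python
import os
import subprocess
import timeit

start = timeit.default_timer()
subprocess.run(["sleep", "1"])            # child that waits, consuming no CPU
wall = timeit.default_timer() - start     # roughly 1 s of wall-clock time

# os.times() reports the accumulated CPU time of waited-for children (Unix)
t = os.times()
child_cpu = t.children_user + t.children_system  # near zero for sleep
```

Note that os.times() only accounts for children after they have been waited for, so it cannot by itself drive a timeout on a still-running process.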

  • In that case, don't use communicate; wait for the process to complete with a poll loop instead, and in that loop monitor its CPU usage with the psutil module and decide accordingly. Commented Sep 8, 2020 at 8:50
  • How much load do you realistically expect on the machine? Is it so overloaded that runtime is significantly affected compared to the safety margins of your timeouts? If all programs are slower by a factor of 3 or more due to load, then a single program taking too long is the least of your problems; the machine is massively undersized in that case. Commented Sep 8, 2020 at 8:53
  • Does this answer your question? Python - get process names,CPU,Mem Usage and Peak Mem Usage in windows Commented Sep 10, 2020 at 7:40
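The poll-loop idea from the first comment can be sketched without third-party dependencies by reading /proc/&lt;pid&gt;/stat on Linux (psutil's Process.cpu_times() would do the same thing portably). The helper names run_with_cpu_timeout and cpu_seconds are hypothetical; this is a sketch under the assumption of a Linux host:

```python
import os
import subprocess
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel clock ticks per second

def cpu_seconds(pid):
    """User + system CPU time consumed by a process, via /proc (Linux only)."""
    with open(f"/proc/{pid}/stat") as f:
        # split after the parenthesised command name; utime/stime are
        # stat fields 14 and 15, i.e. indices 11 and 12 of the remainder
        fields = f.read().rsplit(")", 1)[1].split()
    return (int(fields[11]) + int(fields[12])) / CLK_TCK

def run_with_cpu_timeout(cmd, cpu_limit, poll_interval=0.2, log=None):
    """Kill cmd once it has consumed more than cpu_limit seconds of CPU."""
    proc = subprocess.Popen(cmd, stdout=log, stderr=log)
    while proc.poll() is None:
        try:
            used = cpu_seconds(proc.pid)
        except FileNotFoundError:     # exited between poll() and the read
            break
        if used > cpu_limit:
            proc.kill()
            proc.wait()
            raise subprocess.TimeoutExpired(cmd, cpu_limit)
        time.sleep(poll_interval)
    return proc.wait()
```

With this, a process that is merely waiting (blocked on I/O, or starved by the scheduler on a loaded box) accrues almost no CPU time and is left alone, while a spinning process hits the limit regardless of machine load. You would likely still keep a generous wall-clock ceiling as a backstop for processes that hang while idle.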

