
I have an application that consists of multiple Python scripts. Some of these scripts call C code. The application is now running much slower than it was, so I would like to profile it to see where the problem lies. Is there a tool, software package, or other way to profile such an application? I'm looking for something that will follow the Python code into the C code and profile those calls as well.

Note 1: I am well aware of the standard Python profiling tools. I'm specifically looking here for combined Python/C profiling.

Note 2: the Python modules are calling C code using ctypes (see http://docs.python.org/library/ctypes.html for details).

Thanks!

  • "slower than it was" ? So Why do you have to change it? Commented Oct 29, 2010 at 10:53
  • 1
    @joni: Code can change for many different reasons. Also, it might have slowed down without any code changes (heavier workloads, busier server, network issues, etc.). Commented Oct 29, 2010 at 10:57
  • @joni: since I am making regular changes to this application, both in Python and in C, I want to figure out now what change caused the code to run slower. Commented Oct 29, 2010 at 11:01
  • Do we need the tags profiling, profilingtools, and profiling-tools? Last 2 removed. Commented Oct 29, 2010 at 11:11

2 Answers


Stackshots work. Since you have combined Python and C, you can handle them separately. For Python, you can hit Ctrl-C while it's being slow to examine the stack. Do this several times; that will expose anything you can fix in the Python code. For the C code, run the whole thing under a debugger like GDB and hit Ctrl-C to get a stack trace in C. Several of those will expose anything you can fix in the C code. I'm told OProfile can also do this. (Another way is to use lsstack if it is available.)
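If interrupting with Ctrl-C is inconvenient (for instance, some code catches KeyboardInterrupt), a small signal handler can print a stack snapshot on demand without stopping the program. This is only a sketch of the idea for Unix-like systems, not something from the original answer:

    import signal
    import traceback

    def dump_stack(signum, frame):
        # Print the Python call stack at the moment the signal arrived.
        traceback.print_stack(frame)

    # SIGQUIT is usually Ctrl-\ in a Unix terminal; Ctrl-C (SIGINT) keeps its
    # normal KeyboardInterrupt behaviour, so both kinds of snapshot stay available.
    signal.signal(signal.SIGQUIT, dump_stack)

Send the chosen signal a few times while the program feels slow and look for the frames that keep appearing; those are the lines worth a closer look.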

This is a little-known method that works on this principle: Suppose you have an infinite loop or a nearly infinite loop. How would you find it? You would halt the program and see what it was doing, right? Suppose the program only took twice as long as necessary. Each time you halted it, the chance that you would catch it doing the unnecessary thing is 50%. So all you have to do is halt it a number of times. As soon as you see it doing something that could be improved, on as few as 2 samples, you know you can fix that for a healthy speedup. Then you can repeat it to get the next problem. Measuring is not the point. Catching things you can improve is the point.
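To put rough numbers on that argument (my own illustration, with an assumed wasted-time fraction): if a removable activity accounts for a fraction f of the run time, each halt catches it with probability f, so the chance of seeing it at least once in n halts is 1 - (1 - f)**n.

    # Chance of catching an activity that wastes half the run time (f = 0.5 is assumed).
    f = 0.5
    for n in (2, 5, 10):
        p_seen = 1 - (1 - f) ** n
        print(f"{n} halts: seen at least once with probability {p_seen:.0%}")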


1 Comment

Interesting idea. I'll try that out this weekend. Thx.

Profiling the combination would be pretty hard, but you can use some of the standard profilers such as Valgrind, gprof, or even OProfile (although I never managed to get meaningful output out of it).
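One way this can work in practice (a sketch under assumptions; the library and function names are made up): compile the C code with debugging symbols so the profiler can name its functions, load it through ctypes as usual, and run the whole interpreter under the profiler, for example with Valgrind's callgrind tool.

    # myscript.py -- hypothetical example; libwork.so and heavy_function are placeholders.
    # Build the C side with symbols first, e.g.:
    #     gcc -g -O2 -shared -fPIC -o libwork.so work.c
    import ctypes

    libwork = ctypes.CDLL("./libwork.so")
    libwork.heavy_function.argtypes = [ctypes.c_int]
    libwork.heavy_function.restype = ctypes.c_int

    # Time spent inside the C call gets attributed to the library's symbols.
    print(libwork.heavy_function(1000))

Running it as valgrind --tool=callgrind python myscript.py and opening the resulting callgrind.out.* file in KCachegrind should then show time for both the interpreter internals and the C library's functions.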

3 Comments

Profilers that you can't get anything meaningful out of usually suffer from one or more of the problems listed here: stackoverflow.com/questions/1777556/alternatives-to-gprof/…
The question linked to by @MikeDunlavey has been removed.
@kynan: Yeah. I guess it's controversial (though it shouldn't be). When you get enough rep you can see it.
