
I use Node.js to send an HTTP request, and I need to measure how long it took:

start = getTime()
http.send(function(data) {end=getTime()})

If I call getTime inside the HTTP response callback, there is a risk that my callback is not called immediately when the response comes back, because other events are ahead of it in the queue. A similar risk exists if I use regular synchronous Java or C# code for this task, since another thread may get scheduled before mine.

start = getTime()
http.send()
end=getTime()

How does Node.js compare to other (synchronous) platforms here: does it make my chances of an accurate measurement better or worse?

  • You can't do this without low-level (C++) hooks directly into the event loop. Commented Dec 3, 2011 at 4:36
  • Do you mean I need to implement the http module myself, or is there some extensibility mechanism I can use? Commented Dec 3, 2011 at 10:11
  • I mean you want to know exactly when the raw TCP packet comes back. There's no way to do this other than to wait for your http handler to be called. There's only a small amount of latency between the two (an insignificant amount compared to the network traffic time). Commented Dec 3, 2011 at 12:53
  • Do you need to measure it for every single request or build up a profile of average request length? Since Raynos is right that the work involved to know for sure is heavy, you could more easily run tests or some of your traffic through a proxy to measure send/response times. For example, I use Charles to measure RTT when testing locally: charlesproxy.com Commented Jul 7, 2013 at 18:21

1 Answer


Great observations!

Theory:

If you are performing micro-benchmarking, there are a number of considerations that can skew the measurements:

  1. Other events in the event loop that are ready to fire along with the HTTP send in question, and get executed sequentially before the send gets a chance - Node specific.

  2. Thread / process switching, which can happen at any time within the span of the send operation - generic.

  3. The kernel's I/O buffers being limited in size, causing arbitrary delays - OS / workload / system load specific.

  4. Latency incurred in gathering the system time - language / runtime specific.

  5. Chunking / buffering of data - socket / HTTP implementation specific.
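Item 4 can be reduced in modern Node by using the monotonic high-resolution clock instead of `Date.now()` (a sketch; the squaring loop is just stand-in work of my own):

```javascript
// Date.now() has millisecond granularity; process.hrtime.bigint() is a
// monotonic nanosecond counter, better suited to timing short operations.
const t0 = process.hrtime.bigint();
const work = Array.from({ length: 1000 }, (_, i) => i * i); // some CPU-bound work
const t1 = process.hrtime.bigint();

const elapsedNs = t1 - t0;              // BigInt nanoseconds
const elapsedMs = Number(elapsedNs) / 1e6;
console.log(`elapsed: ${elapsedMs.toFixed(3)} ms`);
```

Being monotonic, `process.hrtime.bigint()` is also immune to wall-clock adjustments (NTP, DST) that can silently corrupt `Date.now()`-based deltas.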

Practice:

Node suffers from (1), while a dedicated Java / C# thread does not have this issue. But since Node implements an event-driven, non-blocking I/O model, other events do not block; they are simply placed into the event queue. Only the ones that are ready will fire, and the latency they introduce is a function of how much I/O work they have to carry out and any CPU-bound actions performed in their associated callbacks. In practice this becomes negligible and evens out in the comparison, because the effects of items (2) to (5) are more visible. In addition, writes are generally non-blocking, which means they are carried out without waiting for the next loop iteration. And finally, when the write is carried out, the callback is issued in-line and sequentially; there is no yielding to another event in between.

In short, if you compare a dedicated Java thread doing blocking I/O with Node code, the Java measurements will look good, but in large-scale applications the cost of thread context switching will offset this gain, and Node's performance will stand out.

Hope this helps.
