
Sometimes we need to benchmark two similar algorithms, but if we execute them one after another in the same process, CPU/JIT optimization may affect the results (as far as I know).

I created three benchmark files; one of them contains the other two benchmark tasks.

// bench.js

const times = 1e7 // iteration count (any large value)

console.time('bench1')
for (let i = 0; i < times; i++) {
    // do some things
}
console.timeEnd('bench1')

console.time('bench2')
for (let i = 0; i < times; i++) {
    // do some things
}
console.timeEnd('bench2')

// bench1.js

const times = 1e7 // iteration count (any large value)

console.time('bench1')
for (let i = 0; i < times; i++) {
    // do some things
}
console.timeEnd('bench1')

// bench2.js

const times = 1e7 // iteration count (any large value)

console.time('bench2')
for (let i = 0; i < times; i++) {
    // do some things
}
console.timeEnd('bench2')

In fact, all of these tasks are the same. In other words, I expected to get similar benchmark results from bench1 and bench2.

But when executing bench.js, I found that the task which runs later usually takes less time.

Then I executed bench1.js, and executed bench2.js after a while. I got similar results from them, which is what I expected.


Results on my machine:

> node .\benchmark\bench.js

bench1: 96.419ms
bench2: 41.822ms

> node .\benchmark\bench1.js

bench1: 96.293ms

> node .\benchmark\bench2.js

bench2: 97.805ms

As far as I know, I think this is because of CPU optimization.

So, how can I avoid these factors in practice? Or is my speculation wrong?

  • Instead of avoiding CPU optimization, most benchmarking code tries to test only CPU-optimized code. After all, it is the optimized run that is important. They do this by running the benchmark a few times and discarding the results before running the real benchmark. This is called "warmup" (engineering.appfolio.com/appfolio-engineering/2017/5/2/…) Commented Aug 24, 2022 at 8:34
  • @slebetman I have tried "warmup". My practice is to ignore the first quintile of benchmark runs. The impact seems reduced, but it still exists. Maybe I should ignore more runs? Commented Aug 25, 2022 at 7:33
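
The warmup approach described in the comment above can be sketched as follows. This is a minimal illustration, not code from the question: the `work` function, `WARMUP_RUNS`, and `TIMES` are assumed names and values standing in for the real workload.

```javascript
// Sketch of a warmup phase before timing. `work`, WARMUP_RUNS, and
// TIMES are illustrative assumptions, not from the question.
function work(times) {
    let sum = 0
    for (let i = 0; i < times; i++) {
        sum += i % 7 // stand-in for the real workload
    }
    return sum
}

const WARMUP_RUNS = 20
const TIMES = 1e6

// Warmup: run and discard the results so the JIT reaches a steady state.
for (let r = 0; r < WARMUP_RUNS; r++) {
    work(TIMES)
}

// Timed run: measure only after warmup.
const start = process.hrtime.bigint()
work(TIMES)
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6
console.log(`warmed-up run: ${elapsedMs.toFixed(3)}ms`)
```

Running each warmed-up measurement several times and reporting the median rather than a single number further reduces noise.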

1 Answer


Use a benchmarking tool such as hyperfine to run your bench1.js and bench2.js.

It will average out outliers etc.

You can also run with the --jitless flag to tell Node to do less optimization (see node --v8-options for more), but that's apparently not a good idea.
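
For example, a hyperfine invocation could look like this (a sketch assuming hyperfine is installed and the scripts live under benchmark/ as in the question's output):

```shell
# Run each script in a fresh Node process many times; --warmup discards
# the first runs, and hyperfine reports mean ± standard deviation.
hyperfine --warmup 3 'node benchmark/bench1.js' 'node benchmark/bench2.js'
```

Because each script runs in its own fresh process, neither benchmark can benefit from JIT state left behind by the other, which sidesteps the ordering effect in bench.js.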


3 Comments

You can also run with --jitless - But don't. Unless you really know what you're doing, and perhaps are trying to benchmark the cold-startup performance of something. See an answer from a V8 developer (jmk) on Which nodejs v8 flags for benchmarking?
@PeterCordes Great point and link, thanks :)
I'm using a benchmarking library called benchmarkjs now, but I still don't know how the benchmarking library resolves this problem.
