Sometimes we need to benchmark two similar algorithms, but if we execute them back to back in the same process, CPU optimization may affect the result (as far as I know).
I created three benchmark files: one of them contains both benchmarking tasks, and the other two contain one task each.
// bench.js
const times = 1e6 // iteration count (example value; omitted in my original snippet)

console.time('bench1')
for (let i = 0; i < times; i++) {
  // do some things
}
console.timeEnd('bench1')

console.time('bench2')
for (let i = 0; i < times; i++) {
  // do some things
}
console.timeEnd('bench2')
// bench1.js
const times = 1e6 // iteration count (example value)

console.time('bench1')
for (let i = 0; i < times; i++) {
  // do some things
}
console.timeEnd('bench1')
// bench2.js
const times = 1e6 // iteration count (example value)

console.time('bench2')
for (let i = 0; i < times; i++) {
  // do some things
}
console.timeEnd('bench2')
In fact, all these tasks are the same. In other words, I expected to get similar benchmarking results from bench1 and bench2.
But when executing bench.js, I found that the task which runs later takes less time most of the time.
Then I executed bench1.js, and executed bench2.js after a while (a small runner could script this; see the sketch after the results below). I got similar results from them. That's expected.
Results on my machine:
> node .\benchmark\bench.js
bench1: 96.419ms
bench2: 41.822ms
> node .\benchmark\bench1.js
bench1: 96.293ms
> node .\benchmark\bench2.js
bench2: 97.805ms
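For reference, running each file in its own process (which is what gives the consistent numbers above) could be scripted like this. This is just a minimal sketch; run-benchmarks.js is a hypothetical file name, assuming the benchmark/ layout above:

// run-benchmarks.js (hypothetical runner, not part of my actual setup)
// Runs each benchmark file in a fresh Node.js process so that one task's
// warm-up cannot influence the other task's timing.
const { execFileSync } = require('child_process')
const path = require('path')

for (const file of ['bench1.js', 'bench2.js']) {
  // stdio: 'inherit' forwards the child's console.time output to this terminal
  execFileSync(process.execPath, [path.join(__dirname, file)], { stdio: 'inherit' })
}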
As far as I know, I think it is because of CPU optimization.
So, how can I avoid these factors in practice? Or is my speculation wrong?
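For example, would an unmeasured warm-up pass like this help? This is a hypothetical sketch; task1 and task2 just stand in for the real work, and times is an example value:

// bench-warmup.js (hypothetical sketch)
const times = 1e6 // iteration count (example value)

function task1() {
  for (let i = 0; i < times; i++) {
    // do some things
  }
}

function task2() {
  for (let i = 0; i < times; i++) {
    // do some things
  }
}

// unmeasured warm-up passes before any timing starts
task1()
task2()

console.time('bench1')
task1()
console.timeEnd('bench1')

console.time('bench2')
task2()
console.timeEnd('bench2')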