
I was tuning my library goodcore and set up some performance tests to compare it against the native array functions. I then ran them against Edge, FF, Chrome and Node 10.9 on my laptop. As expected, my lib had mixed results, but what was more interesting was that the difference between browsers was sometimes 30x between best and worst, and the gap varied from operation to operation.

The arrays used are 10,000 elements long, filled with random ints between 0 and 100,000.
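For reference, a setup along these lines produces such an array (a minimal sketch; the variable names are illustrative, not from the actual test code):

```js
// Sketch of the test data described above: a dense array of
// 10,000 random integers in the range [0, 100000).
const SIZE = 10000;
const arr = Array.from({ length: SIZE }, () =>
  Math.floor(Math.random() * 100000)
);
```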

EDIT: Versions:

  • Chrome: 68.0.3440.106
  • FF: 62.0
  • Edge: 41.16299.371.0
  • Node: 10.9

Here are my results (only for native operations):

EDIT: now with correct values, and also with custom (non-native) algorithms included

[Graphs: benchmark results for native operations and custom algorithms]

The data shows ops/sec as measured by Benchmark.js.
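For context, ops/sec figures like these come from a Benchmark.js suite of roughly this shape (a sketch, not the actual test code; `arr` is the test array from above):

```js
// Minimal Benchmark.js suite; the 'cycle' handler prints the
// ops/sec figure for each case, e.g.
// "indexOf x 1,234,567 ops/sec ±0.50% (90 runs sampled)".
const Benchmark = require('benchmark'); // global in browser builds

new Benchmark.Suite()
  .add('indexOf', () => arr.indexOf(99999))
  .add('forEach', () => {
    let sum = 0;
    arr.forEach((x) => { sum += x; });
    return sum;
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .run();
```

Running the same suite under Node and in each browser is what produces the per-environment numbers.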


Is this due to data structure implementation or micro-optimizations?


1 Answer


Is this due to data structure implementation or micro-optimizations?

Yes.

Longer answer: probably both, but the only way to answer this for sure is to look at each browser's implementation in detail.

The larger differences that you've measured in particular look like they might be due to fundamentally different choices of data structures under the hood; however, even with the same basic data structure, the efficiency of the rest of the implementation can make a huge difference (I've seen 10x to 100x).

Also, IMHO your results are somewhat suspicious: Chrome and Node use the same V8 engine and should have very similar performance. Results like "indexOf" or "splice(remove 1)", where you're seeing a ~10x difference between what should be the ~same result, indicate that something might be wrong in your benchmarks. And if those two results can't be trusted, then why would you have any more confidence in your Edge/Firefox results?
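One way to sanity-check an outlier like that is to time the operation by hand, outside of Benchmark.js, in both environments (a rough sketch; `arr` as above):

```js
// Rough manual cross-check: time N repetitions with performance.now().
// In Node, `performance` comes from the built-in perf_hooks module;
// in browsers it is available as a global.
const { performance } = require('perf_hooks');

function roughOpsPerSec(fn, iterations = 10000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const seconds = (performance.now() - start) / 1000;
  return Math.round(iterations / seconds);
}

console.log('indexOf:', roughOpsPerSec(() => arr.indexOf(99999)));
```

If the manual numbers for Chrome and Node also disagree by ~10x, the difference is real; if they agree, the harness is the likely culprit.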

Speaking of benchmark quality: using only one type of array (only one size, only one type of contents, always dense) is another reason why your results probably don't reflect the full story, so be careful about drawing conclusions from this; a few variants worth adding are sketched below.
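Engines like V8 internally distinguish several "elements kinds" (packed small integers, packed doubles, mixed contents, holey arrays), and the fast paths differ between them. A sketch of array variants that would exercise those different representations (illustrative only, not from the original tests):

```js
// Array variants that typically land in different internal
// representations ("elements kinds" in V8 terminology):
const n = 10000;
const smallInts = Array.from({ length: n }, () => (Math.random() * 100000) | 0); // packed small ints
const doubles   = Array.from({ length: n }, () => Math.random() * 100000);       // packed doubles
const mixed     = Array.from({ length: n }, (_, i) => (i % 2 ? 'x' : i));        // mixed contents
const sparse    = new Array(n);                                                  // holey array
sparse[0] = 1;
sparse[n - 1] = 2;
```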

Why is there such a big performance difference?

Because making the Array built-in methods fast is a ton of engineering effort. Each browser's engineering team is doing their best to spend the time they have on the functionality they think matters the most. The result is that you'll see varying degrees of optimization in the various implementations.

If there are differences in chosen data structures under the hood (I don't know), then those are typically tradeoffs: one choice might be faster at X but slower at Y than another choice; or one might be faster but consume more memory; etc.


1 Comment

Thank you. That was a miss on my side. When I looked at the real data, a zero had been lost while transferring it to the Excel sheet (this happened in two other places as well). Node and Chrome are now very close, and any small variation can be attributed to the different versions of the engine used. In the tests I only run with random ints, but the arrays are reasonably large, and I assign the values so as to avoid the whole loop being optimized away. I believe the values are relevant and interesting. And the splice, reverse and forEach results indicate that there is more to be gained.
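For readers wondering what "assigning the values" refers to: a common pattern is to accumulate results into a sink that stays observable after the run, so the engine cannot eliminate the loop body as dead code (a sketch; names are illustrative):

```js
// Sink pattern: the benchmark writes into `sink`, and `sink` is read
// after the run, so the engine must actually execute the loop body.
let sink = 0;

function forEachSum() {
  arr.forEach((x) => { sink += x; });
}

// ... run the benchmark on forEachSum, then:
console.log(sink); // reading the sink keeps it observable
```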
