
I've noticed through some tests that native JavaScript functions are often much slower than a simple implementation. What is the reason behind that?

5 Comments
  • Probably because it has to establish the context before executing the code. Commented Jan 10, 2014 at 6:52
  • In my case (FF26), in concat-native-vs-implemented the native is faster than the implementation. Anyway, interesting point. I never thought of this and assumed native would always be better. Commented Jan 10, 2014 at 7:31
  • Just tested it on my FF27 and for me the native code is also faster. Commented Feb 13, 2014 at 8:39
  • @Viclib, are you not satisfied with my answer? :) Commented Feb 15, 2014 at 1:15
  • To make your map function around twice as fast in Chrome, just say how big the array needs to be: jsperf.com/map-native-vs-implemented/9. Maybe this doesn't hold true for arrays of all lengths, but interesting nonetheless. Commented Mar 8, 2015 at 13:39

1 Answer


After looking at ECMA-262, it seems that the native implementations just do more in terms of error handling and features than the simple hand-rolled implementations.

For instance, check out the polyfill implementation of map on MDN: Array.prototype.map(). It's based on the same algorithm specified in ECMA-262. Updating your example to use this algorithm now makes the native implementation faster, although just slightly: map-native-vs-implemented.
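To make the difference concrete, here is a minimal sketch (my own simplification for illustration, not the exact MDN polyfill) contrasting a naive map with a spec-style map. The names naiveMap and specStyleMap are just illustrative; the argument validation, the ToUint32 conversion of length, and the per-iteration hole check are the kinds of work the ECMA-262 algorithm requires that a quick hand-rolled loop skips:

    function naiveMap(arr, fn) {
      var result = [];
      for (var i = 0; i < arr.length; i++) {
        result.push(fn(arr[i]));
      }
      return result;
    }

    function specStyleMap(arr, fn, thisArg) {
      if (arr == null) throw new TypeError('this is null or not defined');
      if (typeof fn !== 'function') throw new TypeError(fn + ' is not a function');
      var O = Object(arr);
      var len = O.length >>> 0;              // ToUint32(length), as in the spec
      var result = new Array(len);
      for (var k = 0; k < len; k++) {
        if (k in O) {                        // HasProperty check: skips holes in sparse arrays
          result[k] = fn.call(thisArg, O[k], k, O);
        }
      }
      return result;
    }

    naiveMap([1, 2, 3], function (x) { return x * 2; });      // [2, 4, 6]
    specStyleMap([1, 2, 3], function (x) { return x * 2; });  // [2, 4, 6]

Both produce the same result here, but the second version pays for the argument validation on every call and the hole check on every iteration.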

Also, map might not be the best example to test since it's bouncing back and forth between native code and the provided lambda function.

I would have expected better performance from the native concat function. Nevertheless, ECMA-262 shows that it, too, simply does more. The algorithm in section 15.4.4.4 handles some extra cases, for instance combining multiple arguments where some are arrays and some are other types:

[1, 2, 3].concat([4, 5, 6], "seven", 8, [9, 10]);

returns

[1, 2, 3, 4, 5, 6, "seven", 8, 9, 10]
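As a rough sketch (again my own simplification of the 15.4.4.4 algorithm, not the engine's actual code), supporting that call means concat cannot just splice two arrays together; it has to walk a list of items and test each one:

    // Naive version: assumes exactly two plain arrays.
    function naiveConcat(a, b) {
      var result = a.slice();
      for (var i = 0; i < b.length; i++) {
        result.push(b[i]);
      }
      return result;
    }

    // Spec-style version: the this value plus every argument form one list of items;
    // array items are flattened one level, everything else is appended as-is.
    function specStyleConcat(target) {
      var items = [target];
      for (var a = 1; a < arguments.length; a++) items.push(arguments[a]);
      var result = [];
      var n = 0;
      for (var i = 0; i < items.length; i++) {
        var item = items[i];
        if (Array.isArray(item)) {
          for (var k = 0; k < item.length; k++) {
            if (k in item) result[n] = item[k];   // preserve holes, like the spec does
            n++;
          }
        } else {
          result[n++] = item;
        }
      }
      result.length = n;
      return result;
    }

    specStyleConcat([1, 2, 3], [4, 5, 6], "seven", 8, [9, 10]);
    // [1, 2, 3, 4, 5, 6, "seven", 8, 9, 10]

The per-item Array.isArray test and the element-by-element copy are exactly the kind of extra bookkeeping the simple two-array version never pays for.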

Finally, it's important to note that these are pretty basic algorithms. When running them on huge data sets, or thousands of times consecutively, one may appear significantly faster than the other. Yet performing even a couple of extra safety checks per iteration, over thousands of iterations, can make one algorithm significantly slower than one that skips those checks. Count the computational operations: if the extra error handling and features double the work done inside the loop, it's only natural that it will be slower.
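If you want to see this outside jsPerf, a rough (and admittedly crude) way is to time both versions over a large array in the console; the absolute numbers are meaningless and vary by engine, but the relative gap illustrates the point:

    var data = [];
    for (var i = 0; i < 100000; i++) data.push(i);

    console.time('native map');
    for (var r = 0; r < 100; r++) {
      data.map(function (x) { return x * 2; });
    }
    console.timeEnd('native map');

    console.time('plain loop');
    for (var s = 0; s < 100; s++) {
      var out = new Array(data.length);
      for (var j = 0; j < data.length; j++) out[j] = data[j] * 2;
    }
    console.timeEnd('plain loop');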


3 Comments

If you look at your jsPerf results, in most cases the implementation is still faster than the native method, so maybe this is not the reason. :P
@Derek朕會功夫 Hi Derek, thanks for checking. It seems that according to ECMA-262 the algorithm should use Object.defineProperty; however, the polyfill I had used from MDN had a comment saying it did not, for compatibility reasons. After updating the jsPerf example to use Object.defineProperty, it is now significantly slower. :)
Another jsPerf link that bit the dust :(
