
When comparing the operation

var fat_cats = cats.slice()

to

var fat_cats = new Array(cats.length)

the performance differences are confusing.

In Firefox and Chrome, new Array is slower (when it should be faster: it's just allocating an empty array, not iterating over anything).

Whereas in IE8, new Array is faster (which is just confusing).

Any explanation appreciated.
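To make the comparison concrete, here is a minimal, self-contained sketch of the two operations being benchmarked. The `cats` array here is a hypothetical stand-in (the original question does not show how it is populated); the point is only the difference between copying via `.slice()` and pre-allocating with `new Array(length)`.

```javascript
// Hypothetical setup: a large numeric array standing in for `cats`.
var cats = [];
for (var i = 0; i < 10000; i++) {
  cats.push(i);
}

// Option 1: copy the array with slice() — produces a filled copy.
var fat_cats_slice = cats.slice();

// Option 2: pre-allocate an empty array of the same length —
// same length, but every slot is still empty (holes, not values).
var fat_cats_alloc = new Array(cats.length);

console.log(fat_cats_slice.length); // 10000
console.log(fat_cats_alloc.length); // 10000
console.log(fat_cats_slice[0]);     // 0
console.log(0 in fat_cats_alloc);   // false — slot 0 is a hole
```

Note that the two are not equivalent: `slice()` copies every element, while `new Array(n)` only sets the length, which is why the question expects the latter to be faster.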

benchmark

  • @MahmoudAl-Qudsi either way you're overwriting the array with those values Commented May 6, 2012 at 11:47
  • I don't think the test case is valid, as you're introducing some cache locality issues. I think you should stick to j[i] = a[i] / 255; at all times, instead of switching between it and j[i] = j[i] / 255;, which might (naturally?) be faster due to cache locality. EDIT - changed the test case to be more valid, still same results: jsperf.com/mapclone/8 Commented May 6, 2012 at 11:50
  • In your "Clone empty and loop" benchmark you have var j = new Array(a.Length) instead of a.length so j is getting var j = new Array(undefined). If you fix that it's still slower, but by a much smaller margin. yours vs bugfix Commented May 6, 2012 at 12:48
  • It seems to be an optimization for large arrays in the V8 source. Updated my answer. Commented May 6, 2012 at 12:50

1 Answer


Figured it out by looking at the source code for V8's array functions.

If an array has more than 1000 elements and .slice is called, a function called SmartSlice is used, versus the SimpleSlice function used otherwise.

SimpleSlice is implemented as a for loop copy (exactly the same as the code in the array copy test case). SmartSlice, on the other hand, uses sparse arrays to represent the data.
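For illustration, a SimpleSlice-style copy amounts to the following plain for loop. This is a sketch of the idea, not the actual V8 source (which is written against internal runtime primitives):

```javascript
// Rough sketch of a SimpleSlice-style copy: allocate a destination
// array and copy each element over with an index loop.
function simpleSliceCopy(src) {
  var dst = new Array(src.length);
  for (var i = 0; i < src.length; i++) {
    dst[i] = src[i];
  }
  return dst;
}

var a = [1, 2, 3];
var b = simpleSliceCopy(a);
console.log(b);       // [1, 2, 3]
console.log(b !== a); // true — a distinct array, not an alias
```

This is essentially the same work the hand-written copy loop in the benchmark does, which is why the two perform identically below the threshold.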

In a test case where the number of elements is dropped from 10,000 to below 1000, the two perform identically (within the margin of error), whereas in a better-controlled test case with less variation and more than 1000 elements, the SmartSlice method is ~36% faster than the naïve copy.
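The threshold effect can be probed with a crude micro-benchmark along these lines. This is a hypothetical sketch (function name and sizes are mine), and absolute timings will vary wildly by engine and machine; jsPerf-style harnesses are far more reliable than this:

```javascript
// Time how long it takes to call slice() repeatedly on an
// n-element array. Returns elapsed milliseconds.
function timeSlice(n, iterations) {
  var a = new Array(n);
  for (var i = 0; i < n; i++) {
    a[i] = i;
  }
  var start = Date.now();
  for (var j = 0; j < iterations; j++) {
    a.slice();
  }
  return Date.now() - start;
}

// Just below vs. well above the 1000-element threshold.
var small = timeSlice(999, 10000);
var large = timeSlice(10000, 10000);
console.log("below threshold:", small + "ms");
console.log("above threshold:", large + "ms");
```

On a V8 build with the SmartSlice optimization, the per-element cost above the threshold should be lower than a naïve hand-written copy loop would suggest.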


While this explains the V8 side of things, I do not know why Firefox is also slower on new arrays than on sliced arrays, even at smaller sizes, unless it has a similar optimization in place (perhaps for all slice calls).

EDIT

This kept bugging me, so I downloaded the Firefox source and checked out js/src/jsarray.cpp!array_slice, and Firefox does have a similar optimization: the result of .slice is a DenseCopiedArray or DenseAllocatedArray, which is apparently similar to the V8 sparse array.


8 Comments

.slice() is used to copy an array, so it would create a new array.
No, that's my point. I think it doesn't create a new array until the first write, so they're just aliases for the same object until then?
Good work with the global variable :) It's surprising that this makes such a big difference.
@RobW I don't think it's COW actually, I'm doing more tests. But your writes aren't making a difference since they're after the writing anyway. It would have to be done after the slice and before anything else.
@Rob W: What's your point? The whole array will have been copied by then anyway.
