Jon Linnell

How slow is the Spread operator in JavaScript?

17 August 2022
js

I was challenged on my use of the Spread operator in the return value of a reducer today. The colleague in question raised a concern over performance when spreading a potentially unknown number of elements, simply to add an element to that array.

Being the professional, mature adult that I am, I immediately set out to prove them wrong.

It was, of course, me who was wrong.

TL;DR

It gets dramatically slower as the number of elements grows.

Array.concat() is significantly more performant at scale than a spread-merge. Keep reading to find out how I tested this, and what else I covered along the way.

What is the Spread operator?

It iterates iterables.

What?

Basically, it expands an object (such as an array) into an available space.

Some (highly contrived) examples:

// 1. Shallow-cloning an array:
const arr = [1, 2, 3, 4];

const clonedArr = [...arr];
// clonedArr now contains all the elements of arr, but is a brand new array.

// 2. Creating an object that inherits some but not all properties from another:
const person = {
  name: "Jon",
  age: 30,
  city: "London",
};

const personAfterHouseMove = {
  ...person,
  city: "Manchester",
};
// => { name: 'Jon', age: 30, city: 'Manchester' }

// 3. Expanding an array into the arguments of a function call:
const args = ["20", 10];

const number = parseInt(...args);
// => 20
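
And because it iterates iterables, spread isn't limited to arrays. Strings and Sets work too:

// Spreading a string yields its characters:
const chars = [..."hello"];
// => [ 'h', 'e', 'l', 'l', 'o' ]

// Spreading a Set is a handy one-liner for de-duplicating an array:
const unique = [...new Set([1, 1, 2, 3])];
// => [ 1, 2, 3 ]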

We also use it a lot for conditional merges in reducer return values. If you've ever written a Redux reducer that replaces an object in an array of objects by a specific property, this will look very familiar to you:

const people = [
  { id: 1, name: "Safi" },
  { id: 2, name: "Francis" },
  { id: 3, name: "Sam" },
];

return [...people.filter((person) => person.id !== 2), { id: 2, name: "Fran" }];

// => [
//      { id: 1, name: "Safi" },
//      { id: 3, name: "Sam" },
//      { id: 2, name: "Fran" },
//    ];

In this example, we're returning a new array containing every item in people except the one with id 2, plus a new item.

So how slow is it?

Let's do some science and find out.

In this experiment, our aim is to combine two arrays of equal length into a new array.

To test this, we'll create the arrays and populate them with some random dummy data, start a stopwatch, then call a function that merges them. When that function has returned, we stop the stopwatch.

Here's the code we'll use:

const arraySize = 64;

const randomString = () => (Math.random() * 1.2e17).toString(16);

// A little helper function to make the result of hrtime() a bit nicer to look at.
// hrtime() returns [seconds, nanoseconds], so convert both parts to milliseconds.
const hrtimeToMilliseconds = ([seconds, nanoseconds]) => seconds * 1e3 + nanoseconds / 1e6;

function mergeArrays(a, b) {
  // Our implementation will go here
  // ...
}

// Create two arrays of size n (64 to start with) populated with random strings
const itemsA = [...new Array(arraySize)].map(randomString);
const itemsB = [...new Array(arraySize)].map(randomString);

// GO!
const start = process.hrtime();

mergeArrays(itemsA, itemsB);

const end = process.hrtime();

console.log(`${(hrtimeToMilliseconds(end) - hrtimeToMilliseconds(start)).toPrecision(4)}ms`);
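
Incidentally, if you're on a newer Node version, process.hrtime.bigint() sidesteps the tuple-conversion dance entirely. A minimal equivalent of the timing section above:

// hrtime.bigint() returns a single nanosecond count as a BigInt.
const startNs = process.hrtime.bigint();

mergeArrays(itemsA, itemsB);

const endNs = process.hrtime.bigint();

// Convert the nanosecond delta to milliseconds for display.
console.log(`${(Number(endNs - startNs) / 1e6).toPrecision(4)}ms`);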

Control case

First, let's establish a control case: how long does it take mergeArrays to simply return all its arguments?

function mergeArrays(a, b) {
  return [a, b];
}

Result: 0.03125ms

So that's our control case, less than a tenth of a millisecond to return the values without operating on them.

Spread

Let's try the spread operator first:

function mergeArrays(a, b) {
  return [...a, ...b];
}

Result: 0.06892ms

Clearly some computation going on there, but nothing indicating you should go away and refactor all your spread-merges just yet.

Array.concat()

This method on the Array prototype does what you'd expect: it appends another array onto the end of an array, returning a new array.

You can read up on it on MDN.

function mergeArrays(a, b) {
  return a.concat(b);
}

Result: 0.05266ms

Some folks will prefer calling the concat method on an empty array literal:

function mergeArrays(a, b) {
  return [].concat(a, b);
}

Result: 0.05276ms

This is perfectly valid and probably preferable, especially given there's basically no performance penalty.

One thing to consider: in TypeScript, the compiler doesn't know what kind of array you're defining, and will infer a type of never[] for the empty array literal. This will cause it to trip over whatever you're concat'ing in. A solution is to lead the compiler by the nose:

function mergeArrays<T>(a: T[], b: T[]): T[] {
  return ([] as T[]).concat(a, b);
}

Hang on a second...

This is nowhere near dramatic enough. If performance issues begin to reveal themselves at scale, then we need to adjust our methodology and crank up those numbers to get some actual results.

const arraySize = 1e6; // 1,000,000.

That'll do it.

So now we're trying to join two arrays of a million elements each together. Let's start the test over.

Spread (1 million)

Result: 63.34ms

Now we're talking — slow code!

63ms is fairly slow. I'd start to get nervous at this scale, given we're working in the region of millions of elements here.

Array.concat() (1 million)

Result: 9.833ms

As different as day and night.

Both methods (calling concat on one of the target arrays, and calling it on an empty Array literal) yielded the same result.

That feels like a pretty clear winner to me, but, for the sake of argument, let's give a few other methods a go:

The good ol' for loop (1 million)

In this example, we use an incrementing index to reference each element of the arrays and push it into the result array.

I toyed with using var in this example, for the true early JavaScript experience, but I just couldn't bring myself to do it.

function mergeArrays(a, b) {
  const array = [];

  for (let i = 0; i < a.length; i++) array.push(a[i]);

  for (let i = 0; i < b.length; i++) array.push(b[i]);

  return array;
}

Result: 73.57ms

The slowest yet. There are better ways of iterating arrays than this, so let's keep this one in the history books and not in our Git diffs.

for..of iteration (1 million)

Another for loop, but this time we use the arrays' built-in iterators to yield each value, which we then push into the result array.

function mergeArrays(a, b) {
  const array = [];

  for (const item of a) array.push(item);

  for (const item of b) array.push(item);

  return array;
}

Result: 76.94ms

Even slower.

Using the for..of method of iteration is extremely useful in some circumstances, but this isn't one of them.
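
Where it does shine, for what it's worth, is in consuming iterables that aren't plain arrays. A quick illustrative sketch of my own, nothing to do with the benchmark:

// Iterating a Map yields [key, value] pairs we can destructure directly.
const capitals = new Map([
  ["England", "London"],
  ["France", "Paris"],
]);

for (const [country, capital] of capitals) {
  console.log(`${capital} is the capital of ${country}`);
}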

Array.reduce() (silly)

For the sake of argument (and to see if we can get into triple-digit milliseconds), let's implement a wildly inefficient reducer that merges these arrays one element at a time, spreading the previous results as we go to create new arrays on each iteration.

function mergeArrays(...arrays) { // note the rest parameter here, the same ... syntax as spread
  return arrays.reduce(
    (result, currentArray) =>
      currentArray.reduce(
        (innerResult, innerElement) => [...innerResult, innerElement],
        result
      ),
    []
  );
}

Result: FAIL

Our first failure! The limiting factor of my ever-waning patience caused this experiment to conclude without conclusion; the function never returned, and I killed the process after a minute or two.

I dropped the element count down to 100,000, still nothing (within the bounds of my patience).

At 10,000 elements per array, the function clocked in at an absolutely glacial 237ms. No great mystery why: spreading the accumulator on every iteration re-copies everything that came before it, making the whole merge quadratic.

The Finals

With the heats over, we have two competitors ready to go head-to-head.

Array.concat() has emerged as the best alternative to the all-too-easy spread-merge, but since we're examining the effect at scale, let's really dial up those numbers.

I'll try array lengths of 10 through to 10,000,000 and compare the two.
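
Here's a minimal sketch of the harness for the sweep (randomString is the helper from earlier; the exact script I ran was a little more elaborate):

const randomString = () => (Math.random() * 1.2e17).toString(16);

const sizes = [10, 100, 1e3, 1e4, 1e5, 1e6, 1e7];

const methods = {
  "Spread-merge": (a, b) => [...a, ...b],
  "Array.concat()": (a, b) => a.concat(b),
};

for (const size of sizes) {
  // Fresh arrays for each size, so earlier runs can't interfere.
  const itemsA = [...new Array(size)].map(randomString);
  const itemsB = [...new Array(size)].map(randomString);

  for (const [name, merge] of Object.entries(methods)) {
    const start = process.hrtime.bigint();
    merge(itemsA, itemsB);
    const end = process.hrtime.bigint();

    console.log(`${name} @ ${size}: ${(Number(end - start) / 1e6).toPrecision(4)}ms`);
  }
}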

Numbers are boring, so here's a graph of the results:

[Graph: a line graph comparing the performance of Array.concat() and a spread-merge. The vertical axis is the number of records (from 10 to 10,000,000); the horizontal axis is the milliseconds elapsed while merging the arrays using the relevant method. Array.concat() is fastest by a wide margin: it merged 20,000,000 elements in 99.69ms, where a spread-merge took 465ms.]

Oh, have the numbers anyway:

[items]         10    100   1000  10000  1e5    1e6    1e7
Spread-merge    0.03  0.03  0.09  1.00   11.75  64.09  465.20 [ms]
Array.concat()  0.03  0.03  0.03  0.19   0.94   9.84   99.69  [ms]

Array.concat() wins by a mile. 🥇

Both methods are level-pegging up to around 1000 elements, at which point a spread-merge begins to lag behind. But the real problems come when we get into five digits.

By eight digits, the difference is enormous; Array.concat() merged 20,000,000 items into a single array 365ms faster than a spread-merge.

Why is it so much faster?

This is down to how these two methods process data under the hood.

I can't say for certain, and I'll be damned if I'm going to do any research that involves reading the native C++ implementation of Array prototype functions.

My semi-educated guess, given the disparity in timings we see, is that the spread operator is iterating one-by-one through each element, assigning each one to a new space in memory in sequence.

Array.concat(), however, I would expect to do some lower-level memory manipulation to duplicate and stack the arrays next to each other. That would explain why its timings grow only slightly as the element count increases; the number of elements matters to an extent, but isn't as big a dent in performance as iterating through every one of them.

Conclusion

A spread-merge is probably fine when you're sure you're dealing with no more than a few thousand items, but if you want to make sure your application scales beyond that, give Array.concat() a try.
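
To bring it back to where we started, the reducer return value from earlier refactors neatly, since concat happily accepts plain values as well as arrays:

// Before: spread-merge
return [...people.filter((person) => person.id !== 2), { id: 2, name: "Fran" }];

// After: Array.concat(), same result, scales far better
return people
  .filter((person) => person.id !== 2)
  .concat({ id: 2, name: "Fran" });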


This article was written by Jon Linnell, a software engineer based in London, England.