#tech
#javascript
#analysis
#benchmarks
6 min read

Performance of JavaScript .forEach, .map and .reduce vs for and for..of

Andriy Obrizan

Once, we interviewed a candidate for a mid-level developer position who couldn’t answer a simple question involving a while loop. He explained that he writes only declarative code and that there’s no point in imperative programming anymore. While we partially agree, it got us thinking: “should programmers always prefer the .map, .reduce, and .forEach Array methods over plain loops in JavaScript?”

The declarative programming style is very expressive, easier to write, and far more readable. It’s the better choice 99% of the time, but not when performance matters: loops are usually three or more times faster than their declarative counterparts. That difference is negligible in most applications. Still, when processing large amounts of data in a business intelligence app, video processing, scientific computing, or a game engine, it has a massive effect on overall performance.

We’ve prepared some tests to show it. All the code is available on GitHub. Feel free to play around with it.

We would love your feedback and contributions!

About the tests

The test application uses the benchmark library to get statistically significant results. Input for the tests was an array of one million objects with the structure { a: number; b: number; r: number }. Here’s the code that generates this array:

function generateTestArray() {
  const result = [];
  for (let i = 0; i < 1000000; ++i) {
    result.push({
      a: i,
      b: i / 2,
      r: 0,
    });
  }
  return result;
}

We used a Lenovo T480s laptop with an Intel Core i5-8250U CPU and 16 GB of RAM, running Ubuntu 20.04 LTS with Node v14.16.0, to get the results.

Array.forEach vs for and for..of

Operations per second, higher is better

This test calculates the sum of a and b for every array element and stores it to r:

array.forEach((x) => {
  x.r = x.a + x.b;
});
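For comparison, the loop counterparts measured in this benchmark look roughly like the sketch below (the exact benchmark code lives in the repo; here a tiny stand-in array replaces the million-element input):

```javascript
// A small stand-in for the million-element test array.
const array = [
  { a: 0, b: 0, r: 0 },
  { a: 1, b: 0.5, r: 0 },
  { a: 2, b: 1, r: 0 },
];

// Classic for loop: store a + b into r for each element.
for (let i = 0; i < array.length; ++i) {
  array[i].r = array[i].a + array[i].b;
}

// for..of: the same work, iterating over elements directly.
for (const x of array) {
  x.r = x.a + x.b;
}
```

All three variants mutate the same objects in place, so the work per element is identical; only the iteration mechanism differs.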

We deliberately included the r field when generating the array to avoid changing the object shape later, since that would affect the benchmarks.

Even with these simple tests, loops are almost three times faster. The for..of loop is slightly ahead of the rest, but the difference is not significant. Micro-optimizations of the for loop that help in some other languages, like caching the array length or storing the current element in a temporary variable for repeated access, had zero effect in JavaScript running on V8. V8 probably already performs them under the hood.

Since .forEach isn’t that different from the for..of loop, we don’t see much sense in using it over a traditional loop in most cases. It’s worth using only when you already have a function to invoke on every array element. In that case, it’s a one-liner with zero performance degradation:

array.forEach(func);
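A quick sketch of that case, with a hypothetical pre-existing handler function:

```javascript
// Hypothetical pre-existing handler, reused elsewhere in the codebase.
function accumulate(x) {
  x.r = x.a + x.b;
}

const items = [{ a: 2, b: 3, r: 0 }];

// Passing the function reference directly — no extra wrapper arrow needed.
items.forEach(accumulate);
```

Since no extra closure is created per call site, this reads as cleanly as the loop while reusing existing code.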

Array.map vs for vs for..of

Operations per second, higher is better

These tests map the array to a new array containing a + b for each element:

return array.map((x) => x.a + x.b);

Loops are also much faster here. The for..of version creates an empty array and pushes every new element:

const result = [];
for (const { a, b } of array) {
  result.push(a + b);
}
return result;

That’s not the optimal approach, since the array is dynamically re-allocated and moved under the hood. The for version pre-allocates an array of the target size and sets every element by index:

const result = new Array(array.length);
for (let i = 0; i < array.length; ++i) {
  result[i] = array[i].a + array[i].b;
}
return result;

Here, we also tested whether destructuring affects performance. With .map, the benchmarks were identical, and with for..of the results differ so little that it might just be benchmark noise.
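The two for..of variants we compared can be sketched like this (a minimal illustration, not the benchmark code itself):

```javascript
const array = [{ a: 1, b: 2 }, { a: 3, b: 4 }];

// Variant without destructuring: fields accessed through the element.
const plain = [];
for (const x of array) {
  plain.push(x.a + x.b);
}

// Variant with destructuring, as used in the benchmark above.
const destructured = [];
for (const { a, b } of array) {
  destructured.push(a + b);
}
```

Both produce the same result; the destructured form just reads a little cleaner when several fields are used.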

Array.reduce vs for and for..of

Operations per second, higher is better

Here we just calculate the sum of a and b for the whole array:

return array.reduce((p, x) => p + x.a + x.b, 0);

Both for and for..of are 3.5 times faster than .reduce. However, the loops are much more verbose:

let result = 0;
for (const { a, b } of array) {
  result += a + b;
}
return result;
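The classic indexed for version of the same sum looks roughly like this (a sketch; the benchmarked version is in the repo):

```javascript
const array = [{ a: 1, b: 2 }, { a: 3, b: 4 }];

// Indexed for loop, equivalent to the for..of sum above.
let result = 0;
for (let i = 0; i < array.length; ++i) {
  result += array[i].a + array[i].b;
}
```

It performs the same as for..of, just with a bit more index bookkeeping.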

Writing that many lines of code for a simple sum needs a strong justification, so unless performance is truly critical, .reduce is the better choice. The tests again showed no difference between the two loops.

Conclusion

The benchmarks confirmed that imperative programming with loops delivers better performance than the convenient Array methods. Invoking a callback function is not free, and the cost adds up over big arrays. For code more complex than a simple sum, however, the relative difference shrinks, as the calculations themselves take more of the time.

Imperative code is a lot more verbose in most cases. Five lines of code for a simple sum are too much when .reduce is a one-liner. On the other hand, .forEach is almost the same as for or for..of, only slower. There’s not much performance difference between the two loops, and you can use whichever better fits the algorithm.

Unlike in AssemblyScript, micro-optimizations of the for loop don’t make sense for arrays in JavaScript. V8 already does a great job and probably even eliminates the boundary checks as well.

Pre-allocating an array of known length is much faster than relying on dynamic growth with push. We’ve also confirmed that destructuring is free and should be used whenever it’s convenient.

Good developers should know how the code works and choose the best solution in every situation. Declarative programming, with its simplicity, wins most of the time. Writing lower-level code only makes sense in two cases:

  • to optimize the bottlenecks found by extensive profiling
  • for obviously performance-critical code

Remember, premature optimization is the root of all evil.

Feel free to clone the GitHub repo and play around with the benchmarks.