Comparing React.js performance vs. native DOM

React.js is a promising new library for JavaScript view component development. A similar approach is said to be leveraged in the upcoming Angular 2 release. Mike Hostetler has given a nice introduction to the technology from the perspective of a server-side developer. In this article, I will compare the performance of React's virtual DOM rendering approach against native DOM manipulation in JavaScript.

My personal interest in React comes from a background in Java and Groovy applications, which would have RIA-style front ends implemented in Adobe Flex. Notwithstanding the well-known pitfalls of the Flash Player runtime, it was actually quite a nice environment for building the UIs of interactive business applications. With Flash Player deprecated in modern computing, JavaScript is now the viable alternative for interactive web application development. From this perspective, React seems like a good fit: you can define components as class-like JS constructs and use the JSX compiler to bind an XML-based view layout to your JS logic. Another potential benefit of React is its virtual DOM implementation, which is supposed to skip re-rendering DOM elements that have not changed.

While working on some personal experiments with React, I surprisingly hit a performance bottleneck. Using a third-party table component to display paginated results, I was able to quickly bring the mobile Chrome browser on my Android phone (Moto X, 1st gen.) to its knees, literally crashing the browser after a few pages of rendered data. That did not feel like a performance boon to me, and after surfing the interwebs, I found anecdotal evidence of similar performance concerns around React. With that in mind, I started work on an alternative implementation using native DOM manipulation in JS, which I did find to perform better than the React version. It was not a one-to-one comparison, though, and I had not pursued any React-specific optimizations. So I am writing this post to create a more objective performance comparison between a React-based UI and one written with native DOM operations. My demo source code can be found on GitHub:

To get the ball rolling, I set up a small portal page to launch the DOM and React example apps:

[Screenshot: portal page for launching the DOM and React example apps]
For the purposes of this evaluation, I wanted to create a simple client app where the user can 'infinitely scroll' down through a set of results. I simulated this by following a simple pattern:

1) App starts up; I track the initial start time
2) User 'scrolls' to the bottom of the page
3) App fetches the next set of results from the server
4) Server sends results to the client, which handles rendering
5) Track execution time(s)
6) Delay a moment (I defaulted to 750ms)
7) Start the process over, until 'maxRows' is exceeded
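The steps above can be sketched as a small driver loop. This is a hypothetical reconstruction, not the actual common lib; the function and option names here are my own:

```javascript
// Hypothetical sketch of the benchmark loop described above.
function runBenchmark(fetchPage, render, opts) {
  var startTime = Date.now();             // 1) track the initial start time
  var rows = [];
  var timings = [];

  function step() {
    var t0 = Date.now();
    var pageSet = fetchPage(rows.length); // 3) fetch next set of results
    rows = rows.concat(pageSet);
    render(pageSet, rows);                // 4) client handles rendering
    timings.push(Date.now() - t0);        // 5) track execution time
    if (rows.length < opts.maxRows) {
      setTimeout(step, opts.delayMs);     // 6/7) delay, then start over
    } else if (opts.onDone) {
      opts.onDone({ totalMs: Date.now() - startTime, timings: timings });
    }
  }
  step();
}
```

The key point is that each iteration measures only its own render work, while the overall wall-clock time includes the deliberate delays.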

I created a basic common library that drives both apps, consolidating as much as possible into the common lib so that the two client apps could focus on the task at hand.

Each client invokes the following code, which sets things in motion, tracking the start time and exposing a callback function which can handle the simulated results from a server in the ‘pageSet’ param:

//DOM version:
lib.runApp(function(pageSet, allRows){
   // simulate updating model with server results, after a scroll
   // (the DOM app appends the new pageSet rows to the table body here)
});

//React version:
lib.runApp(function(pageSet, allRows){
   // simulate updating model with server results, after a scroll
   _app.state.rows = allRows;
});


I wanted the React demo to be performant out of the box, so I picked up the React CommonJS example as a quick place to start, and my code is precompiled so there should be no runtime penalty for interpreting JSX tags. The React app I created uses an InfiniteTable component that wraps a normal table element and appends table rows via a TableRow component. You'll see in the code above that after a new page set is returned, I simply take the current list of all rows and assign it to the table component's state object. This example may not play to React's major strengths, since I am only appending rows to the end of a list, not reordering it by inserting rows in the middle. However, the original app where I first saw performance issues consisted of only a simple paginated table, so I don't really see why that scenario should bring a mobile browser to its knees in the first place.
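To illustrate why append-only updates should be the easy case for a virtual DOM, here is a toy keyed-list diff. This is purely illustrative; React's actual reconciliation algorithm is far more sophisticated than this:

```javascript
// Toy diff over row keys: appended keys become trailing inserts,
// and keys present in both lists produce no operations at all.
function diffKeys(oldKeys, newKeys) {
  var ops = [];
  var oldSet = {};
  oldKeys.forEach(function (k) { oldSet[k] = true; });
  newKeys.forEach(function (k, i) {
    if (!oldSet[k]) ops.push({ type: 'insert', key: k, index: i });
  });
  oldKeys.forEach(function (k) {
    if (newKeys.indexOf(k) === -1) ops.push({ type: 'remove', key: k });
  });
  return ops;
}
```

For an append-only table, the diff is just the handful of new rows per page set, which is why the mobile crashes surprised me.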

The DOM version uses the same common lib code to grab the next set of results and simply appends the rows to the table in the UI as they come in from the 'back end'.
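That append path can be sketched as follows. This is a hypothetical version, not the demo's actual helper, and it is parameterized over the document object only so it can be exercised outside a browser; batching rows into a DocumentFragment means the live table body is touched once per page set:

```javascript
// Hedged sketch of the native-DOM render path: build the new rows off-DOM
// in a DocumentFragment, then append them to the table body in one shot.
function appendRows(doc, tbody, pageSet) {
  var frag = doc.createDocumentFragment();
  pageSet.forEach(function (row) {
    var tr = doc.createElement('tr');
    var td = doc.createElement('td');
    td.textContent = String(row.id);
    tr.appendChild(td);
    frag.appendChild(tr);
  });
  tbody.appendChild(frag); // single DOM insertion per page set
}
```

In the browser you would call it as appendRows(document, tableBodyEl, pageSet).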

For testing purposes, I’ve added a URL param to specify ‘maxRows’, to quickly try out different loads in the apps, such as:
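Reading that parameter might look like the sketch below. Only the parameter name 'maxRows' comes from the apps; the function name and the default value are my own assumptions:

```javascript
// Sketch: pull an integer 'maxRows' out of a URL query string,
// falling back to a default when the parameter is absent.
function getMaxRows(search, defaultRows) {
  var match = /[?&]maxRows=(\d+)/.exec(search || '');
  return match ? parseInt(match[1], 10) : defaultRows;
}
```

In the apps it would be driven by window.location.search, e.g. getMaxRows(window.location.search, 500).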



When you run the examples, you'll see the table start rendering rows and periodically scrolling down. Once processing is complete for the specified 'maxRows', a nice summary is printed at the bottom. I tracked execution times for each iteration, and they appear in the summary as Execution, Avg., and Median times. However, those times did not seem that interesting, as the React iterations were consistently longer than the DOM version's. This probably has a lot to do with reassigning an entirely new set of rows to the React table's model state, where the DOM version simply appends table rows to a table body tag.
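The summary statistics themselves are straightforward; a sketch of how the total, average, and median might be computed from the per-iteration times (this is my own reconstruction, not the demo's code):

```javascript
// Compute the total, average, and median of per-iteration timings (ms).
function summarize(times) {
  var total = times.reduce(function (a, b) { return a + b; }, 0);
  var sorted = times.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  var median = sorted.length % 2
    ? sorted[mid]                          // odd count: middle value
    : (sorted[mid - 1] + sorted[mid]) / 2; // even count: mean of middle two
  return { total: total, average: total / times.length, median: median };
}
```

The median is worth reporting alongside the average because a few slow GC-heavy iterations can skew the mean.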

[Screenshot: example app with summary output]

At the time of this writing I used the following tools, on a MacBook Pro with a 2GHz i7 and 16GB RAM:

  • OS X 10.11, El Capitan
  • node: v5.0.0
  • npm: 3.3.9
  • Chrome 46.0.x

Execution Times

I’ve not done extensive JavaScript performance profiling, so for now I’m borrowing some ideas from:

The results I’ve compiled here were painfully manual. It wasn’t until I was nearly done with this writeup that I stumbled across Benchpress for automated performance testing; the next time I attempt something like this, I would definitely consider that library. For now, I simply ran multiple iterations at different data sizes and recorded the total times printed by the apps.

Desktop – Chrome browser

The first surprise came when I compiled rendering times from the apps in my desktop browser. While there were differences in times across the data sets, they were statistically insignificant when charted together. The React and DOM versions were completely neck and neck for every size of data set thrown at them:

[Chart 1: Rendering Time - Desktop]

The one good thing this shows me is that my two apps seem to have a solid baseline relative to each other. There are probably ways to optimize the React application that I’ve not taken the time to consider, but for now, this looks like a pretty good starting point for further comparisons.

Nexus 7 tablet – Chrome browser

The real shake-out between the two approaches rears its head when switching to a mobile browser. My next set of timings was recorded on a Nexus 7 tablet, also using Chrome on that system:

[Chart 2: Rendering Time - Nexus 7]

Again, the two apps are in a dead heat at 500 elements, with both clocking in around 8.2s. Surprisingly, this is also very close to the desktop time, which is respectable. However, things get dicey as soon as we throw more elements at the apps. At 1000 elements, both become a few seconds slower than the desktop, but React starts to fall behind the DOM version by around 1 second. At 2000 elements, the DOM version is comparable to the desktop time, but cruises ahead of React on the Nexus with a 12s lead. By 4000 rows, the React version is struggling to render at 2.25 min and is more than twice as slow as the DOM version. I wish I could tell you why this is the case, but I can only guess that the React version does twice the work, juggling its virtual DOM as well as manipulating the actual DOM to render components. I can only imagine that the desktop V8 engine is optimized well beyond its mobile counterpart in its ability to handle so many elements in memory and in the DOM at the same time.

Moto X phone (1st gen) – Chrome browser

Next I really slammed these apps by starting them up on my phone. The trend lines look very similar to the Nexus 7’s, but with half the number of elements rendered. I suppose this makes sense: I am under the impression that the Nexus has a quad-core processor while the Moto X has a dual-core, and their installed RAM appears to be about the same. The catch is that I’m not really sure a browser’s individual tabs are optimized to leverage multiple CPU cores.

[Chart 3: Rendering Time - Moto X]

Again, at 500 elements, the performance is a close match to the desktop browser. However, the phone really struggles to render a very high number of rows. By the time I get to 1000 rows, I become used to seeing the warning: “Chrome isn’t responding. Do you want to close it?”, in both the DOM and React versions. In that regard, neither one has an advantage over the other in the mobile browser. To push on, I often have to tap the ‘wait’ button to let the JavaScript engine catch up. I assume this has to do with my timer periodically returning and rendering results without any kind of queuing, just letting the work pile up. This tells me that regardless of the approach taken, some steps must be taken to limit the number of rendered elements on screens presented in mobile browsers. Again, by the time you reach the upper limit of 2000 elements, the DOM version has pulled away to be nearly two times faster than the React version.

Memory Consumption

In an attempt to verify memory utilization, I simply opened Chrome’s Task Manager window and manually tracked memory consumption for the tab running the app. For example, with the React app running 1000 rows, I generally saw around 155-160MB consumed. I’d love to find a better approach to verifying memory consumption. I tried using window.performance.memory, but those numbers seemed to reflect the available heap size more than actual utilization. What really threw me off was seeing those numbers actually update on my Nexus 7, only to later realize that Chrome there must be running with a flag enabled.
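For reference, a guarded sketch of reading that API. Note that window.performance.memory is non-standard and Chrome-only, and to my understanding its values are coarse-grained buckets unless Chrome is launched with the --enable-precise-memory-info flag (treat that flag name as an assumption on my part):

```javascript
// Sample the JS heap via the non-standard Chrome performance.memory API,
// returning null when the API is unavailable in the current browser.
function sampleHeap(perf) {
  if (perf && perf.memory) {
    var mb = 1024 * 1024;
    return {
      usedMB: perf.memory.usedJSHeapSize / mb,   // bytes currently in use
      limitMB: perf.memory.jsHeapSizeLimit / mb  // max heap available to the tab
    };
  }
  return null; // API not present (non-Chrome browsers, workers, etc.)
}
```

In a page you would call sampleHeap(window.performance) periodically and log the results.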

[Chart 4: Memory Usage - Desktop]

What floored me when I looked at the compiled numbers is that the React version starts out with roughly the same memory footprint as the top end of the DOM app running the maximum number of elements. So when considering React in a mobile environment, memory consumption has to be optimized for, or at least kept in mind. It could account for some of the slowness seen in the React app if the environment has to start swapping virtual memory pages to disk. I was able to find anecdotal information that a mobile Chrome browser tab may have access to around 128MB of memory by default. There is supposed to be an admin flag to override this value, but I was unable to locate it in the version of Chrome my phone is running. It would be a great test to verify whether increasing available memory grants a sizable boost to rendering performance. Again, there are probably ways to slice and dice the data so that the React component tree doesn’t have to hold references to every piece of data, but that is also outside the scope of this article.


Conclusions

There is no doubt that React.js is a great approach to managing state in the view layer of a browser application. However, from my own limited experience, and from comparing the raw numbers for rendering time and memory consumed, there is a clear price to pay for the convenience gained from a library like React. From a pure performance perspective, there may be some merit in evaluating a pure-JS DOM approach to a given problem. From the perspective of managing a team of developers who need to deliver a large amount of maintainable code, though, there can be a clear advantage to leveraging libraries and frameworks such as React, Polymer, Angular, etc.

I would love to read some in-depth analysis that goes further in explaining the results I’ve seen here. I’m definitely aware that my approaches may not be ideal in every way, and am open to improving what I’ve done here. However, it does look like it’s best to have your eyes open when evaluating these libraries, especially in regard to their use in a mobile browser environment.

One thought on “Comparing React.js performance vs. native DOM”

  1. Pratik Patel says:

    This is a good first attempt at comparing React vs. native DOM performance. Obviously, using any framework incurs some overhead. However, there are two big flaws in your test:
    * You continue to load results (add rows) well beyond what a user would do. Once you’ve reached a couple of hundred rows, you’ve exhausted the user’s attention span – heck, even if you think a user will fetch 1000 rows, your test extends out to 10k rows. That makes this performance test a “synthetic test” rather than a “real world test”. What would be more interesting is to see the fine-grained numbers for 0 to 500 rows.
    * You also inject a 750ms delay between showing an additional batch of rows. This has two problems:
    a) It skews the test results, but we can’t be sure in which direction for each of the two frameworks compared, since JavaScript is inherently a single-threaded execution environment.
    b) Why 750ms? Will the user actually be able to browse that many rows in such a short time? If you upped the delay to something more reasonable, would it allow the two frameworks more time to do GC, thus affecting the overall time that you chart?

    1. Hi Pratik. I had set up a very basic paginated grid application using React JS which would get slow and crash on my Android smartphone. This was after viewing only 2-3 pages of data, where I was displaying only 25-50 rows each. Now, rewriting that app/component in straight JS would definitely be less convenient than using a UI library like React; you have no argument from me there. This test was merely set up to see if I could get some idea as to whether there is some kind of baseline performance difference between React and native DOM JS. That does appear to be the case, especially on mobile platforms under Android. My takeaway from this project has been that Android mobile browser development does appear to lag behind iOS/mobile Safari. It would be interesting to run my tests on a comparable iPhone, but I have not been able to do that at this time. Jeff Atwood had a nice writeup that made me realize there may be some big discrepancies between mobile browsers on iOS vs. Android:

      Regarding your specific concerns:
      1) Yes, this is not a realistic test. It is definitely set up as a stress test.

      2) The setTimeout will be implemented the same way in the same browser, regardless of the library used. As such, it is fairly deterministic, but could easily be dropped. In actuality, I am measuring the actual time taken by the UI update code in each iteration, and even factoring out the timeout, I saw React taking much longer in execution time compared to native JS. You are correct that it would be worth removing the timeout to see how much effect it has.
      b) 750ms was just the upper end of my rough estimate of how quickly a user could flip through an infinitely scrolling list. These are actually very common in mobile environments, even if not seen as much in web applications.
