My Experiences with Optimizing CSS for React Frontends

Web Performance
3rd Feb, 2022

Haircut mishaps and CSS optimization - what do they have in common?

Between the start of 2021 and now, I had opportunities to optimize CSS loading for three different React Server Side Rendered (SSR) websites. Each of these websites was delivering 500+ KB of CSS before its pages could render anything. And, in each case, the styling code had accumulated over the years, as separating unused CSS from used CSS isn’t trivial.

But, this post isn’t about that one option that I switched on to gain a 95% reduction in the render-critical CSS. This post is about the factors I evaluated and the choices I made during the CSS optimization exercise to not end up being the guy in the yellow t-shirt in the picture above. 😀

CSS Optimization Considerations

The target of every CSS optimization exercise I undertook was to set up an automatic mechanism to eliminate the unused CSS. But, there’s no one tool or solution that worked in all situations. For each of the websites, I evaluated the available tools against the following considerations before picking one setup over the others:

Risk of Delivering a Wrecked UI

CSS optimization tools can sometimes mistakenly remove styling code that is actually in use. And, when this happens, the build tools (like webpack) or the browser do not report any error. Only the wrecked UI rendered to our visitors signals the problem.

The larger the number of route-types and possible page states (e.g.: anonymous vs logged-in vs administrator), the greater the risk of not catching a wrecked UI.

An ideal CSS optimization solution should minimize this risk. One way to do so is to remove the styling code identified as unused from the critical path, but still lazy-load it. That way, even if some part of the UI is mistakenly broken by the CSS optimization, it stays broken only until the lazy-loaded CSS kicks in.
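To make the idea concrete, here is a toy sketch (not any real tool's API) of partitioning style rules so that a misclassified rule is only deferred, never dropped:

```javascript
// Toy illustration: split style rules into the set detected as used
// (kept render-critical) and the rest (shipped as a lazy-loaded stylesheet).
function splitCss(rules, usedSelectors) {
  const critical = [];
  const deferred = [];
  for (const rule of rules) {
    (usedSelectors.has(rule.selector) ? critical : deferred).push(rule.css);
  }
  return { critical: critical.join(''), deferred: deferred.join('') };
}

const rules = [
  { selector: '.header', css: '.header{color:#222}' },
  { selector: '.admin-panel', css: '.admin-panel{display:flex}' },
];
const { critical, deferred } = splitCss(rules, new Set(['.header']));
// If '.admin-panel' was wrongly classified as unused, the admin UI is only
// broken until the deferred stylesheet finishes loading.
```

The key property is that nothing is deleted outright: a detection mistake degrades to a flash of unstyled content instead of a permanently wrecked UI.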

Time to Extract Optimized CSS

CSS optimization tools are commonly integrated into the frontend build process. As a result, the time taken by these tools to extract the CSS adds to the overall build time. An ideal CSS optimization tool would:

  • Not add more than ~5 seconds to the build time.
  • Not result in a linear increase of CSS extraction time with the increase in the number of routes.

Some CSS optimization tools may be set up to perform CSS extraction at runtime. In such cases, the time spent extracting the CSS should be smaller than the speed gain achieved from the CSS size reduction. This means the page’s start render should see a net improvement with the CSS optimization in place.

CSS Optimization Tools

Here’s the list of tools I evaluated for the CSS optimization work:


critical

critical takes HTML as an input and outputs the HTML with the (above-the-fold) critical CSS inlined. The rest of the CSS is set to load without blocking the page’s rendering. But, the above-the-fold part of a page can differ based on the client device’s dimensions. So, critical also requires viewport width and height as inputs.

Critical uses Puppeteer (headless Chrome) to render the provided HTML to identify the critical CSS. It can be integrated during the build-time to generate HTML with optimized CSS.
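As a sketch of such a build-time integration, critical can be wired into a webpack build via the community html-critical-webpack-plugin. The paths and viewport values below are assumptions for illustration, not from any real setup:

```javascript
// webpack.config.js — hedged sketch; html-critical-webpack-plugin wraps
// critical and runs it against the emitted HTML after the build.
const path = require('path');
const HtmlCriticalWebpackPlugin = require('html-critical-webpack-plugin');

module.exports = {
  // ...the rest of the webpack config...
  plugins: [
    new HtmlCriticalWebpackPlugin({
      base: path.resolve(__dirname, 'dist'),
      src: 'index.html',   // pre-rendered page to analyze
      dest: 'index.html',  // overwritten with critical CSS inlined
      inline: true,
      width: 1300,         // viewport assumed for "above the fold"
      height: 900,
    }),
  ],
};
```

Because critical renders the page in headless Chrome, this plugin step is the slow part of such a build.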

My experience with critical:

Out of all the tools I evaluated, critical reduced the render-critical CSS the most. But, it also turned out to be the slowest at extracting the critical CSS. This was expected, because it requires headless Chrome to render the HTML, and the page load has to be simulated for different viewport sizes. The high time-to-extract meant I could not integrate critical into any frontend build pipeline.


UnCSS

UnCSS takes an HTML file or a URL as an input and outputs only the CSS that is used to render the content on that page. It also executes any on-page JavaScript prior to CSS extraction.

UnCSS renders the page using jsdom to identify all the CSS used on that page. Like critical, it can be integrated during the build-time to generate HTML with optimized CSS.
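For instance, UnCSS ships a PostCSS wrapper (postcss-uncss), so a build-time integration can be a one-plugin PostCSS config. The file paths here are assumptions for illustration:

```javascript
// postcss.config.js — hedged sketch using the postcss-uncss wrapper.
// Each listed page is rendered with jsdom to detect which selectors
// are actually used; everything else is stripped from the CSS.
module.exports = {
  plugins: [
    require('postcss-uncss')({
      html: ['./dist/index.html', './dist/about.html'], // assumed paths
    }),
  ],
};
```

Note that extraction time grows with the number of pages listed, since each one has to be rendered.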

My experience with UnCSS:

I found UnCSS to be faster than critical at CSS extraction (no viewport requirement, and jsdom is lighter than Puppeteer). But, it was still adding a minute to the build time when performing CSS extraction for just 10 route-types. Also, UnCSS - with an HTML file as an input - is suited towards CSR (client side rendered) repositories. This took it out of the equation for my work (all React SSR websites).


PurifyCSS

PurifyCSS takes our CSS and source-code files (HTML, JS, PHP) as inputs. It returns only the CSS that is used in the provided source-code files. It does so by filtering out the selectors that are not present in our HTML, JS, PHP, etc. files.

Because PurifyCSS works with the source-code files, it does not require rendering any URLs. As a result, it is faster at CSS optimization than critical or UnCSS. But, this also means that PurifyCSS doesn’t work on specific URLs. Instead, it works on removing CSS unused in the entire codebase.
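The word-filtering approach can be sketched in a few lines of plain JavaScript (a toy illustration of the idea, not PurifyCSS's actual implementation):

```javascript
// Toy sketch: keep a selector only if every class/id word in it appears
// somewhere in the source text. No rendering required — just text matching.
function purify(selectors, sourceText) {
  const words = new Set(sourceText.match(/[\w-]+/g) || []);
  return selectors.filter((selector) => {
    const tokens = selector.match(/[\w-]+/g) || [];
    return tokens.every((token) => words.has(token));
  });
}

const source = '<button class="btn btn-primary">Save</button>';
const kept = purify(['.btn', '.btn-primary', '.modal-backdrop'], source);
// kept → ['.btn', '.btn-primary']
```

This is also why such tools are fast but URL-agnostic: they never know which page a selector belongs to, only whether it appears anywhere in the codebase.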

My experience with PurifyCSS:

I found PurifyCSS able to optimize CSS for codebases with 1000+ JavaScript files in less than 5 seconds. But, PurifyCSS isn’t actively maintained anymore. It hasn’t been updated in the last 5 years. And, the repository for the purifycss webpack plugin actually recommends using PurgeCSS. This resulted in me giving up on evaluating it any further.


PurgeCSS

Like PurifyCSS, PurgeCSS takes our CSS and source-code files as inputs and returns only the CSS that is used in those files. But, instead of considering every word in the source-code files (as PurifyCSS does), PurgeCSS only looks at the CSS selectors used in the source-code files.

With no need to render any URLs, PurgeCSS is also fast at CSS optimization. Also, the optimized CSS from PurgeCSS is smaller than that from PurifyCSS. But, since it doesn’t optimize for specific URLs, the optimized CSS is definitely a lot larger than that from critical or UnCSS.

My experience with PurgeCSS:

Like PurifyCSS, PurgeCSS took less than 5 seconds to optimize CSS for codebases with 1000+ JavaScript files. Its size gains were better than PurifyCSS’s - but definitely not as good as critical’s or UnCSS’s. I was able to integrate it with source files via the PostCSS plugin (smaller optimized CSS size). I was also able to integrate it with the generated bundles via the purgecss command (less prone to breaking the UI).

However, PurgeCSS’s default setup did break our UI in a few places because of dynamically generated CSS selector names. This required a thorough UI validation and setting up a safelist.
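A minimal PostCSS setup for PurgeCSS, with assumed source globs and a hypothetical safelist, could look like the following (the safelist entries are placeholders for whatever dynamic selectors your UI generates):

```javascript
// postcss.config.js — hedged sketch using @fullhuman/postcss-purgecss.
module.exports = {
  plugins: [
    require('@fullhuman/postcss-purgecss')({
      // Files scanned for selector usage — assumed globs.
      content: ['./src/**/*.jsx', './src/**/*.html'],
      // Keep selectors that are generated at runtime and therefore never
      // appear verbatim in the source files (hypothetical examples).
      safelist: [/^theme-/, 'is-active'],
    }),
  ],
};
```

Building the safelist is the manual, UI-validation-driven part: every dynamically composed class name has to be discovered and listed, or it gets purged.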


Critters

Critters takes the page HTML as an input and returns the HTML with optimized CSS. The HTML that Critters returns also lazy-loads the CSS it identified as unused.

Critters does not render the HTML in a headless browser to extract the used CSS. This also means that it does not execute the on-page JavaScript during CSS optimization. It expects the provided HTML to be pre-rendered. Also, because it lazy-loads unused CSS - the possibility of delivering a wrecked UI is minimized.

My experience with Critters:

I integrated Critters to optimize the HTML rendered by the React server side at runtime. I observed it to take ~1 second for CSS optimization. The size of the optimized CSS was smaller than that from PurgeCSS. And, I did not observe the UI to be broken due to the CSS optimization.

But, at ~1 second for CSS optimization during runtime - I could only use it in situations where the optimized HTML could be cached.
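A hedged sketch of such a runtime integration, assuming the `critters` npm package's `Critters` class and `process()` method (the option values are illustrative):

```javascript
// Runtime sketch: run Critters over server-side rendered HTML and cache
// the optimized result, since processing takes on the order of a second.
const Critters = require('critters');

const critters = new Critters({
  preload: 'swap',    // lazy-load the non-critical stylesheets
  pruneSource: false, // keep the full stylesheets for the lazy load
});

async function optimizeAndCache(html, cache, cacheKey) {
  if (cache.has(cacheKey)) return cache.get(cacheKey);
  // `process()` inlines the CSS detected as used and defers the rest.
  const optimized = await critters.process(html);
  cache.set(cacheKey, optimized);
  return optimized;
}
```

The cache is what makes the ~1 second cost pay for itself: only the first render of a page pays it, while every cached response ships the reduced critical CSS.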

What Worked When

One of the websites that I worked on relied heavily on caching its server-side rendered HTML. This allowed us to leverage Critters to optimize the CSS for this website. Critters offered a compelling combination of sizeable CSS reduction and a low chance of delivering a wrecked UI. Critters reduced the critical CSS from ~90 KB gzipped to ~28 KB gzipped.

But, for the other two websites, caching the server-side generated HTML was not feasible. As a result, we integrated PurgeCSS to trim site-wide unused legacy CSS code. It brought no runtime overhead, and our build-time increase, at less than 5 seconds, was acceptable. We observed a ~20% reduction in the size of our critical CSS.

How About the Future?

In the near future, I anticipate Critters (or something similar) will mature to the point of providing fast CSS extraction from server-side rendered pages at runtime. Once that happens, it would be my de facto choice for SSR & SSG websites. Until then, Critters can still be used effectively with SSG sites and with SSR sites where caching the server-side rendered content is feasible.

For the rest of the websites, I would prefer to evaluate PurgeCSS to quantify the cost-vs-gain of using it.

All things stated, the frontend ecosystem has been evolving very fast. More so, in the areas of server-side rendering and progressive rendering. As a result, the choices that we make today (2021-22) may not be the best choices in a few months / years from now.

You can follow me on Twitter. I mostly tweet about my web development, site speed and entrepreneurship experiences.