Frontend Performance Optimization

Table of Contents

  1. The Four Layers at a Glance
  2. Bundle Optimization
  3. Asset Optimization
  4. Network Optimization
  5. Rendering & Runtime Performance
  6. API & Data Optimization
  7. Performance Metrics & Monitoring
  8. Real-World Example: Netflix / YouTube Style
  9. Interview Answers

The Four Layers at a Glance

| Layer   | Goal                   | Key Techniques                             |
| ------- | ---------------------- | ------------------------------------------ |
| Bundle  | Ship less JS           | Code splitting, tree shaking, lazy loading |
| Assets  | Serve smaller files    | WebP/AVIF, WOFF2, CDN                      |
| Network | Reduce latency         | Brotli, caching, HTTP/2, resource hints    |
| Runtime | Keep the UI responsive | Memoization, virtualization, Web Workers   |

1. Bundle Optimization

Goal: Reduce the amount of JavaScript the browser must download, parse, and execute.

Code Splitting

Split your bundle by route or component so users only load what they need:

// Route-based splitting (Next.js does this automatically)
// Component-based splitting with React.lazy
const Dashboard = React.lazy(() => import("./Dashboard"));
const HeavyChart = React.lazy(() => import("./HeavyChart"));

<Suspense fallback={<Spinner />}>
  <Dashboard />
</Suspense>

Tree Shaking

Remove unused exports at build time. Requires ES module syntax (import/export) — CommonJS (require) defeats tree shaking.

// ✅ Tree-shakeable — bundler removes unused exports
import { format } from "date-fns";

// ❌ Imports entire library
import _ from "lodash";

// ✅ Use lodash-es for tree shaking
import { debounce } from "lodash-es";

Vendor Splitting

Separate node_modules into their own chunk. Vendor code changes rarely, so it stays cached even when your app code updates.

// vite.config.js
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ["react", "react-dom"],
          charts: ["recharts"],
        },
      },
    },
  },
};

Replace Heavy Libraries

| Replace | With                        | Size Savings      |
| ------- | --------------------------- | ----------------- |
| moment  | dayjs                       | ~65KB → ~2KB      |
| lodash  | lodash-es or native methods | Tree-shakeable    |
| axios   | ky or native fetch          | Smaller footprint |

Minification

Use fast, modern tools:

| Tool    | Speed     | Notes                       |
| ------- | --------- | --------------------------- |
| Terser  | Fast      | Default in Webpack          |
| SWC     | Very fast | Rust-based, used by Next.js |
| esbuild | Fastest   | Used by Vite                |

Bundler Choice

Prefer Vite or esbuild for new projects — significantly faster build times than Webpack.


2. Asset Optimization

Goal: Reduce static asset payload without sacrificing quality.

Images

Use modern formats:

| Format | Best For             | Browser Support        |
| ------ | -------------------- | ---------------------- |
| WebP   | Photos, thumbnails   | Excellent              |
| AVIF   | Maximum compression  | Good (modern browsers) |
| SVG    | Icons, illustrations | Universal              |
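Browsers skip `<picture>` sources they can't decode, so you can offer AVIF first with WebP and JPEG fallbacks (illustrative filenames):

```html
<!-- Browser picks the first source it supports, top to bottom -->
<picture>
  <source srcset="photo.avif" type="image/avif" />
  <source srcset="photo.webp" type="image/webp" />
  <img src="photo.jpg" alt="Description" loading="lazy" />
</picture>
```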

Implement responsive images:

<!-- Serve the right size for the device -->
<img
  src="image.webp"
  srcset="image-400.webp 400w, image-800.webp 800w, image-1200.webp 1200w"
  sizes="(max-width: 600px) 400px, 800px"
  loading="lazy"
  alt="Description"
/>

In Next.js, <Image /> handles format conversion, resizing, and lazy loading automatically:

import Image from "next/image";

<Image src="/hero.jpg" width={800} height={400} alt="Hero" priority />

Fonts

Best practices:

  • Use WOFF2 — best compression, widely supported
  • Subset fonts — only include characters you actually use
  • Preload critical fonts to eliminate flash of invisible text (FOIT)
<!-- Preload the font used above the fold -->
<link
  rel="preload"
  href="/fonts/inter-var.woff2"
  as="font"
  type="font/woff2"
  crossorigin
/>
/* Prevent layout shift while font loads */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-var.woff2") format("woff2");
  font-display: swap;
}

Video

  • Use adaptive bitrate streaming (HLS / DASH) — quality adjusts to connection speed
  • Compress with H.264 (universal support) or AV1 (better compression, growing support)
  • Serve via CDN — never from your origin server
  • Lazy load off-screen videos with preload="none" or an Intersection Observer (loading="lazy" applies to images and iframes, not videos)

3. Network Optimization

Goal: Reduce latency, request count, and transfer size.

Compression

| Method | Compression Ratio        | Notes                            |
| ------ | ------------------------ | -------------------------------- |
| Gzip   | Good                     | Universal support                |
| Brotli | ~15–20% better than Gzip | Supported in all modern browsers |

Enable Brotli at the CDN or server level — it's a free performance win.

Caching Headers

# Long-lived cache for hashed assets (JS, CSS, images)
Cache-Control: public, max-age=31536000, immutable

# Short cache for HTML (changes with deployments)
Cache-Control: public, max-age=0, must-revalidate

Hashed filenames (main.a3f2c1.js) mean you can cache assets forever — a new hash means a new file.

CDN

Distribute static assets (JS, CSS, images, fonts) to edge servers close to your users. Major options: Cloudflare, Fastly, AWS CloudFront, Vercel Edge Network.

Resource Hints

Tell the browser what to fetch before it discovers it:

<!-- Warm up DNS + TLS for third-party origins -->
<link rel="preconnect" href="https://api.example.com" />

<!-- Preload a resource needed very soon (high priority) -->
<link rel="preload" href="/main.js" as="script" />

<!-- Prefetch a resource likely needed for the next navigation (low priority) -->
<link rel="prefetch" href="/dashboard.js" />

Other Techniques

  • HTTP/2 multiplexing — multiple requests over one connection, no more domain sharding needed
  • API request batching — combine multiple small requests into one
  • Service Workers — cache assets and API responses for offline use and repeat visits

4. Rendering & Runtime Performance

Goal: Keep the main thread free and the UI responsive.

Prevent Unnecessary Re-renders

// React.memo — skip re-render if props haven't changed
const Card = React.memo(function Card({ title, price }) {
  return <div>{title}: {price}</div>;
});

// useMemo — memoize expensive computations
// (copy first — Array.prototype.sort mutates the original array)
const sortedProducts = useMemo(
  () => [...products].sort((a, b) => a.price - b.price),
  [products]
);

// useCallback — stable function reference for memoized children
const handleSelect = useCallback((id) => {
  setSelected(id);
}, []);

When to use each:

| Hook        | Use When                                                      |
| ----------- | ------------------------------------------------------------- |
| React.memo  | Component renders often with stable props                     |
| useMemo     | Calculation is expensive or result passed to memoized child   |
| useCallback | Function passed to React.memo child or used in useEffect deps |
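All three hooks apply the same idea: memoization — return a cached result while the inputs are unchanged. A minimal sketch in plain JavaScript (illustrative, not React's implementation — React compares dependency arrays by reference rather than serializing arguments):

```javascript
// Minimal memoizer: cache the result keyed by the serialized arguments,
// so repeated calls with the same inputs skip the computation.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

let calls = 0;
const slowSquare = memoize((n) => { calls++; return n * n; });
slowSquare(4); // computes
slowSquare(4); // cache hit
console.log(slowSquare(4), calls); // 16 1
```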

Virtualize Large Lists

Never render thousands of DOM nodes at once. Only render what's visible:

import { FixedSizeList } from "react-window";

function ProductList({ products }) {
  return (
    <FixedSizeList
      height={600}
      width="100%"
      itemCount={products.length}
      itemSize={80}
    >
      {({ index, style }) => (
        <div style={style}>
          <ProductCard product={products[index]} />
        </div>
      )}
    </FixedSizeList>
  );
}

Use react-window for fixed-size rows, @tanstack/react-virtual for variable-size or complex layouts.

Debounce & Throttle

// Debounce — fires after user stops typing
const debouncedSearch = useMemo(
  () => debounce((query) => fetchResults(query), 300),
  []
);

// Throttle — fires at most once per interval
const throttledScroll = useMemo(
  () => throttle(() => updatePosition(), 100),
  []
);
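If you'd rather not pull in lodash for these two helpers, both fit in a few lines — a minimal sketch (lodash's versions add options such as leading/trailing edge control):

```javascript
// Debounce: postpone the call until `wait` ms pass without a new call.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Throttle (leading edge): run at most once per `interval` ms.
function throttle(fn, interval) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      fn(...args);
    }
  };
}
```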

Avoid Layout Thrashing

Interleaving DOM reads and writes forces synchronous reflows:

// ❌ Forces reflow on every iteration
elements.forEach((el) => {
  el.style.width = el.offsetWidth + 10 + "px"; // read then write
});

// ✅ Batch reads first, then writes
const widths = elements.map((el) => el.offsetWidth); // all reads
elements.forEach((el, i) => {
  el.style.width = widths[i] + 10 + "px"; // all writes
});

Web Workers for Heavy Computation

Keep the main thread free for rendering:

// worker.js
self.onmessage = ({ data }) => {
  const result = expensiveCalculation(data);
  postMessage(result);
};

// main.js
const worker = new Worker("/worker.js");
worker.postMessage(largeDataset);
worker.onmessage = ({ data }) => setResult(data);

Good candidates for Web Workers: data parsing, encryption, image processing, complex sorting.


5. API & Data Optimization

Goal: Reduce server cost, network usage, and perceived wait time.

Avoid Waterfall Requests

// ❌ Sequential — each waits for the previous
const user = await fetchUser();
const posts = await fetchPosts();
const comments = await fetchComments();

// ✅ Parallel — all fire at once
const [user, posts, comments] = await Promise.all([
  fetchUser(),
  fetchPosts(),
  fetchComments(),
]);

Use a Caching Layer

React Query and SWR provide caching, deduplication, background refetching, and stale-while-revalidate out of the box:

import { useQuery } from "@tanstack/react-query";

function UserList() {
  const { data, isLoading } = useQuery({
    queryKey: ["users"],
    queryFn: fetchUsers,
    staleTime: 1000 * 60 * 5, // consider fresh for 5 minutes
  });

  if (isLoading) return <Spinner />;
  return <List items={data} />;
}

Paginate or Virtualize Data

  • Pagination — fetch one page at a time
  • Infinite scroll — fetch next page as user nears the bottom
  • GraphQL — fetch only the fields you need, nothing more
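Under infinite scroll, cursor-based pagination is a simple contract: the client sends the last ID it saw, the server returns the next page plus a new cursor. A sketch of the slicing logic (hypothetical shape; assumes items are sorted by id):

```javascript
// Cursor-based pagination: return `limit` items after the given cursor.
// cursor = null means "start from the beginning".
function paginate(items, cursor, limit) {
  const start = cursor == null ? 0 : items.findIndex((it) => it.id === cursor) + 1;
  const page = items.slice(start, start + limit);
  return {
    items: page,
    // null signals the client that there are no more pages
    nextCursor: start + limit < items.length ? page[page.length - 1].id : null,
  };
}

const data = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));
const page1 = paginate(data, null, 4);             // ids 1–4, nextCursor 4
const page2 = paginate(data, page1.nextCursor, 4); // ids 5–8, nextCursor 8
```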

Debounce API Calls

function SearchBar() {
  const [query, setQuery] = useState("");
  const debouncedQuery = useDebounce(query, 300); // custom hook: returns query only after 300ms without changes

  useEffect(() => {
    if (debouncedQuery) fetchResults(debouncedQuery);
  }, [debouncedQuery]);

  return <input onChange={(e) => setQuery(e.target.value)} />;
}

Optimistic Updates

Update the UI immediately, sync with the server in the background. If it fails, roll back.

// React Query optimistic update
useMutation({
  mutationFn: updatePost,
  onMutate: (newPost) => {
    queryClient.setQueryData(["post", newPost.id], newPost); // instant UI update
  },
  onError: () => {
    queryClient.invalidateQueries({ queryKey: ["post"] }); // refetch to discard the optimistic value
  },
});

6. Performance Metrics & Monitoring

Always measure before and after optimizing. Intuition-driven optimization often targets the wrong thing.

Core Web Vitals

| Metric                          | Measures         | Good Threshold |
| ------------------------------- | ---------------- | -------------- |
| LCP — Largest Contentful Paint  | Loading speed    | < 2.5s         |
| INP — Interaction to Next Paint | Responsiveness   | < 200ms        |
| CLS — Cumulative Layout Shift   | Visual stability | < 0.1          |

Additional Metrics

| Metric                       | What It Tells You                     |
| ---------------------------- | ------------------------------------- |
| FCP — First Contentful Paint | When the first content appears        |
| TTI — Time to Interactive    | When the page is reliably interactive |
| TBT — Total Blocking Time    | How long the main thread was blocked  |
| Bundle Size                  | Total JS payload                      |

Tools

| Tool                        | Use For                                               |
| --------------------------- | ----------------------------------------------------- |
| Lighthouse                  | Page-level audit, Core Web Vitals, bundle suggestions |
| Chrome DevTools Performance | Long tasks, layout thrashing, JS flame chart          |
| WebPageTest                 | Real-network testing, waterfall charts                |
| React DevTools Profiler     | Why and how often components re-render                |
| Bundlephobia                | Check library size before installing                  |
| Real User Monitoring (RUM)  | Production metrics from real users                    |

7. Real-World Example: Netflix / YouTube Style

Here's how you'd describe a complete optimization strategy for a video streaming platform:

Bundle

  • Route-based code splitting — video player, dashboard, and settings load independently
  • Lazy load heavy components (player UI, recommendation engine)
  • Replace moment with dayjs, avoid heavy utility libraries

Assets

  • Thumbnails served as WebP, with srcset for responsive sizing
  • Fonts subset to Latin characters only, served via CDN with font-display: swap
  • Video uses adaptive bitrate streaming (HLS) — quality adjusts to bandwidth automatically

Network

  • All static assets on CDN with long-lived cache headers
  • Brotli compression for JS/CSS
  • preconnect to API and media origins
  • Service Worker caches shell and recently watched thumbnails

Rendering

  • Homepage rows are virtualized — only visible tiles render
  • Recommendation component is memoized
  • Playback state managed in a local reducer, not global context, to prevent unnecessary re-renders
  • Comments and metadata load in parallel via Promise.all

Monitoring

  • Track LCP (hero image / first frame) and INP (playback start)
  • React Profiler identifies expensive re-renders in the feed
  • RUM captures real-world metrics across devices and regions

8. Interview Answers

The 15-Second Answer

"I optimize in four layers: bundle, assets, network, and runtime. For bundles — code splitting, lazy loading, tree shaking. For assets — WebP/AVIF, WOFF2, CDN. For network — Brotli, caching headers, resource hints. For runtime — memoization, virtualization for large lists, and Web Workers for heavy computation."

The Senior-Level Answer

"Before optimizing anything I'd measure with Lighthouse and React Profiler to find the actual bottleneck — whether it's bundle size, render performance, or network latency. For a product like YouTube, I'd use route-based code splitting to keep initial JS small, WebP thumbnails with srcset for asset size, aggressive CDN caching for static assets, and virtualized lists for the feed so we're never rendering more than what's visible. I'd track LCP and INP as the primary production metrics."

Demonstrating Tradeoff Awareness

"Memoization with useMemo and useCallback reduces renders but adds memory overhead and code complexity — I only reach for it after profiling confirms a real problem. Similarly, virtualization solves list performance but adds complexity around scroll restoration and dynamic row heights. The right optimization depends on what the profiler shows, not assumptions."
