State Management at Scale

  1. When do you choose Redux vs Zustand vs Recoil vs MobX vs React Query vs custom Context?
  2. How do you handle 10+ MB of state without freezing the UI?
  3. How do you sync state between 50+ browser tabs in real-time?
  4. Offline-first architecture: how do you handle conflict resolution (CRDTs vs OT)?

Q17. When do you choose Redux vs Zustand vs Recoil vs MobX vs React Query vs custom Context?

Here’s a practical decision guide, as of 2025, for when to pick each state management solution.

| Use Case / Requirement | Best Choice(s) | Why |
| --- | --- | --- |
| Very small app, <10 components share state | React Context + useReducer (custom) | Zero dependencies, good enough, easy to understand |
| Medium app, simple global UI state (theme, auth user, modals, sidebar open/closed) | Zustand or custom Context | Zustand wins on simplicity and devtools; Context is fine if you hate adding deps |
| Need powerful selectors, computed values, middleware, devtools, and minimal boilerplate | Zustand | The current “sweet spot” for most apps. Tiny (~1–2 kB), great DX |
| You love React concurrent features and want fine-grained re-renders out of the box | Recoil (or Jotai, which is similar now) | Atom-based, automatic dependency tracking, works great with Suspense and Transitions |
| Already heavily invested in the Redux ecosystem (RTK Query, Redux Toolkit, millions of lines) | Redux Toolkit | Don’t migrate for no reason. Modern Redux (RTK) is actually good now |
| Complex async server state (caching, deduplication, background refetch, invalidation) | React Query (now TanStack Query) | Still the undisputed king of server state. Use it even if you have Redux/Zustand |
| Very complex domain logic, lots of derived data, need observable-style reactivity | MobX | Extremely powerful, but magic-heavy. Great for enterprise dashboards |
| You want TypeScript auto-completion heaven and near-zero runtime cost | Zustand, Valtio (proxy-based), or Jotai | All three have excellent TS support in 2025 |

Current 2025 Recommendations (Most Teams)

| Project Size / Type | Recommended Stack |
| --- | --- |
| Small–medium apps, startups, new projects | Zustand (global) + TanStack Query (server state) + local useState/Context |
| Large-scale apps with many developers | Redux Toolkit + RTK Query (if you already have Redux) or Zustand + TanStack Query |
| Maximum performance + concurrent mode | Recoil or Jotai + TanStack Query |
| Heavy forms or complex local state | MobX or Zustand with immer |
| Minimum dependencies, marketing sites, etc. | Custom Context + useReducer |

Quick Rules of Thumb

  • Use TanStack Query for anything that comes from or goes to a server → always.
  • Never manage server data in Redux/Zustand/Recoil anymore (2025 best practice).
  • If you find yourself writing a lot of useEffect + useState for global things → pick Zustand.
  • If you’re writing a lot of boilerplate with Redux Toolkit → switch to Zustand (you’ll save 60–80% code).
  • If you need true observable reactivity and don’t mind the magic (it performs well on huge derived datasets) → MobX.
  • If you want zero-boilerplate atoms and love the Recoil philosophy but hate Facebook → use Jotai (most people moved here from Recoil).
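As an aside on why Zustand feels so light: its core is just an external store with subscribe plus selectors. A dependency-free sketch of that pattern (illustrative only — not Zustand’s actual source; `createStore` and its API here are invented for the example):

```javascript
// Minimal external store with selector-based subscriptions —
// the pattern Zustand popularized (illustrative sketch, not Zustand's API).
function createStore(initialState) {
  let state = initialState
  const listeners = new Set()

  const getState = () => state
  const setState = (partial) => {
    state = { ...state, ...partial }
    listeners.forEach((l) => l(state))
  }
  // subscribe(selector, callback): callback fires only when the selected slice changes
  const subscribe = (selector, callback) => {
    let prev = selector(state)
    const listener = (next) => {
      const slice = selector(next)
      if (!Object.is(slice, prev)) {
        prev = slice
        callback(slice)
      }
    }
    listeners.add(listener)
    return () => listeners.delete(listener)
  }
  return { getState, setState, subscribe }
}

// Usage: subscribers are notified only when their slice changes
const store = createStore({ theme: 'dark', sidebarOpen: false })
const seen = []
store.subscribe((s) => s.theme, (theme) => seen.push(theme))
store.setState({ sidebarOpen: true })  // theme unchanged → no callback
store.setState({ theme: 'light' })     // → callback fires with 'light'
```

This selector check is also why components using such a store avoid the “re-render everything on any change” problem that naive Context setups have.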

Real-World Adoption (2025)

  • Most new green-field React apps: Zustand + TanStack Query
  • Most companies that were on Redux 2018–2022: Still on Redux Toolkit (inertia), slowly adding TanStack Query
  • Vercel, Shopify (parts), many startups: Zustand or Jotai
  • Meta internal + some teams: Still Recoil
  • Enterprise dashboards (trading, analytics): Often MobX

Bottom line in 2025:

TanStack Query  → always for server state
Zustand         → default for client/global state
Only reach for Redux/MobX/Recoil if you have a specific reason
Custom Context  → totally fine for small-to-medium apps

Pick Zustand + TanStack Query unless you have a strong reason not to — it’s what the majority of the React community has converged on.

Q18. How do you handle 10+ MB of state without freezing the UI?

Handling 10+ MB of state without freezing the UI is a very common problem in 2025 for large dashboards, trading terminals, collaborative canvases, genome browsers, etc. Here are the battle-tested techniques that actually work at that scale (all used in production today).

1. Never Put the Full 10+ MB State Directly into React/Vue/Svelte State

This is the #1 killer. If you do:

const [state, setState] = useState(huge10MBObject)  // React will freeze!

→ React will re-render and reconcile against that 10 MB object on every change → the UI blocks for 100–500 ms.

Rule: Keep zero megabytes of your big shared state in framework-reactive state.

2. Use a CRDT / Mutable Proxy That Lives Outside the Framework (Yjs, Automerge, Fluid Framework, etc.)
Best choice in 2025: Yjs (used by Excalidraw, Logseq, and many other collaborative editors)

import * as Y from 'yjs'

const ydoc = new Y.Doc()
const ymap = ydoc.getMap('data')           // This can be 50+ MB, no problem
const yarray = ydoc.getArray('rows')       // Millions of rows OK

// Inside a React component: observe only the tiny parts you care about
useEffect(() => {
  const handler = () => {
    // This runs only when something inside the observed path changes
    renderOnlyWhatChanged()
  }
  ymap.observeDeep(handler)   // or yarray.observe, or fine-grained path
  return () => ymap.unobserveDeep(handler)
}, [])

Yjs garbage-collects deleted content cheaply, uses structural sharing, and only triggers callbacks on paths that actually changed. You can have 100 MB in a Y.Doc and the UI stays at 60 FPS.

3. Virtualize Everything That Renders (List, Grid, Canvas)

Even with perfect state management, rendering 100k rows kills the DOM.

| Data Type | Library (2025 best) | Why it works at 10M+ rows |
| --- | --- | --- |
| Tables / grids | AG Grid, Glide Data Grid, react-window + custom | Only mounts visible cells |
| Trees | react-virtualized-tree, TanStack Virtual | Recursive virtualization |
| Canvas / maps | Konva, PixiJS, deck.gl, custom WebGL | No DOM at all |
| Text / code | Monaco, CodeMirror 6, ProseMirror | Virtualized lines internally |

4. Delta / Patch-Based Sync Instead of Full State

Never send the full 10 MB on every change.

| Library | How big are updates? |
| --- | --- |
| Yjs | Usually <5 KB per user action, even on a 50 MB doc |
| Liveblocks | Sub-kilobyte ops |
| Custom msgpack + JSON Patch | 1–20 KB typical |
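To make “ship patches, not state” concrete, here is a hand-rolled flat diff/apply pair — a toy stand-in for RFC 6902 JSON Patch libraries (real ones handle nesting, arrays, and moves):

```javascript
// Compute a shallow diff between two flat objects and apply it elsewhere.
// Toy stand-in for RFC 6902 JSON Patch (real libs handle nesting and arrays).
function diff(oldState, newState) {
  const patch = []
  for (const key of Object.keys(newState)) {
    if (!Object.is(oldState[key], newState[key])) {
      patch.push({ op: 'replace', key, value: newState[key] })
    }
  }
  for (const key of Object.keys(oldState)) {
    if (!(key in newState)) patch.push({ op: 'remove', key })
  }
  return patch
}

function applyPatch(state, patch) {
  const next = { ...state }
  for (const { op, key, value } of patch) {
    if (op === 'remove') delete next[key]
    else next[key] = value
  }
  return next
}

// A multi-megabyte state with one changed price ships as a few bytes:
const before = { AAPL: 231.1, MSFT: 415.2, TSLA: 248.9 }
const after  = { AAPL: 231.1, MSFT: 416.0, TSLA: 248.9 }
const patch  = diff(before, after)        // one 'replace' op for MSFT
const synced = applyPatch(before, patch)  // remote replica now equals `after`
```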

5. Off-Main-Thread Everything (When You Can’t Use Yjs)

If you’re stuck with plain objects:

// Main thread: parse incoming updates in a Worker
const worker = new Worker('sync-worker.js')
worker.onmessage = (e) => {
  // Only a tiny patch crosses the thread boundary
  applyPatch(e.data.patch)
}

// sync-worker.js (heavy parsing blocks only the worker, never the UI)
importScripts('fast-json-patch.min.js')   // exposes a `jsonpatch` global with compare()
let oldState = {}
self.onmessage = (e) => {
  const newState = JSON.parse(e.data)                  // 10 MB parse blocks only the worker
  const patch = jsonpatch.compare(oldState, newState)  // diff against the previous state
  oldState = newState
  self.postMessage({ patch })
}

Also move compression/decompression to workers. Note that CompressionStream is a TransformStream, so you pipe a stream through it:

const stream = new Blob([payload]).stream().pipeThrough(new CompressionStream('gzip'))
const compressed = await new Response(stream).arrayBuffer()

6. Lazy Load + Progressive Hydration

Don’t load all 10 MB up front.

// Example: trading dashboard with 500 symbols
const visibleSymbols = new Set()
const prices = ydoc.getMap('price')

// Only react to what’s on screen
const handler = (event) => {
  for (const key of event.keysChanged) {
    if (visibleSymbols.has(key)) renderRow(key)
  }
}
prices.observe(handler)   // later: prices.unobserve(handler)

7. Binary Formats (msgpack, Protobuf, Cap’n Proto)

JSON is 2–4× bigger and slower to parse.

// 10 MB JSON → 2–4 MB msgpack
const encoded = msgpack.encode(state)    // in worker!
const decoded = msgpack.decode(buffer)   // in worker!

8. Summary: The Stack That Actually Works in 2025 for 10–100 MB Shared State

| Layer | Technology (2025) | Size it handles comfortably |
| --- | --- | --- |
| Shared mutable state | Yjs (CRDT) | 100+ MB |
| Sync transport | WebSocket + msgpack + gzip | 100+ MB docs, <10 KB updates |
| Rendering | react-window / TanStack Virtual + grid | 10M+ rows |
| Heavy parsing | Web Worker + Comlink | 50+ MB without freezing the UI |
| Presence / cursors | Liveblocks or custom Yjs awareness | 1000+ users |

Real-World Examples That Do This Today

| App | State size | Tech stack |
| --- | --- | --- |
| Figma | 10–200 MB | Custom CRDT + WebSocket + WebGL |
| Notion (multiplayer) | ~50 MB/doc | Yjs-like + WebSocket |
| Excalidraw | 10–30 MB | Yjs + y-websocket |
| Linear, Tldraw | 10–50 MB | Liveblocks (built on Yjs) |
| High-frequency trading dashboards | 20–200 MB | Custom binary protocol + WebGL grids |

TL;DR – Recipe for 10+ MB Without Freezing

  1. Store the big state in Yjs (or Automerge 2 if you like Rust).
  2. Sync with WebSocket + msgpack + gzip (or just use Liveblocks/PartyKit).
  3. Never put the Y.Doc in React state.
  4. Observe only the paths you render (observeDeep with path filtering).
  5. Virtualize every list/grid.
  6. Do all parsing/compression in a Worker.

Do that and you can comfortably run 50–100 MB of live shared state across 100 tabs at 60 FPS.
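Steps 3–4 of the recipe boil down to one idea: the big object lives outside the framework and notifies only per-key observers. A dependency-free sketch of that pattern (illustrative; Yjs implements a far more capable version of this for you):

```javascript
// Big mutable state outside any framework; observers fire only for keys they watch.
function createKeyedStore() {
  const data = new Map()       // can hold millions of entries, never handed to React
  const observers = new Map()  // key → Set<callback>

  return {
    set(key, value) {
      data.set(key, value)
      const subs = observers.get(key)
      if (subs) subs.forEach((cb) => cb(value))  // notify only this key's watchers
    },
    get: (key) => data.get(key),
    observe(key, cb) {
      if (!observers.has(key)) observers.set(key, new Set())
      observers.get(key).add(cb)
      return () => observers.get(key).delete(cb)  // unsubscribe handle
    },
  }
}

// Only the rows on screen subscribe; an off-screen update costs one Map lookup.
const store = createKeyedStore()
const rendered = []
store.observe('AAPL', (price) => rendered.push(`AAPL:${price}`))
store.set('MSFT', 416)   // no observer → no render work at all
store.set('AAPL', 231)   // → only the AAPL row re-renders
```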

Q19. How do you sync state between 50+ browser tabs in real-time?

Syncing state in real-time across 50+ (or even hundreds) of browser tabs is a hard but solvable problem. Here are the practical, battle-tested approaches used by real-world apps that need this (collaborative editors, trading dashboards, multiplayer games in the browser, live dashboards, etc.).

1. Central Server with WebSocket / Server-Sent Events (Most Common & Reliable)

This is the gold standard for anything that actually needs strong consistency.

Architecture:

[Tabs] ←→ [WebSocket connection each] ←→ [Server (Node.js, Go, Elixir, etc.)]
                                                        ↑
                                                 State + pub/sub

How it works:

  • Every tab opens a WebSocket (or SSE/long-poll fallback) to a central server.
  • The server holds the authoritative single source of truth (SSOT).
  • When any tab changes state → send diff/patch to server.
  • Server validates, applies, and broadcasts the update (or a patch) to all connected clients.
  • Clients apply the update locally (often with CRDTs or operational transformation for conflict-free merging).
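The validate → apply → broadcast loop above can be sketched transport-agnostically. Here an in-memory hub stands in for the WebSocket server (`SyncHub` and its method names are invented for illustration):

```javascript
// In-memory stand-in for the sync server: validates a patch,
// applies it to the authoritative state, broadcasts it to every client.
class SyncHub {
  constructor(initialState = {}) {
    this.state = initialState   // authoritative single source of truth (SSOT)
    this.clients = new Set()
  }
  connect(onUpdate) {
    const client = { onUpdate }
    this.clients.add(client)
    onUpdate({ ...this.state })  // send a snapshot on connect
    return client
  }
  submit(patch) {
    if (typeof patch !== 'object' || patch === null) return  // validate
    Object.assign(this.state, patch)                         // apply
    this.clients.forEach((c) => c.onUpdate(patch))           // broadcast the diff
  }
}

// Two "tabs" connected to the hub:
const hub = new SyncHub({ counter: 0 })
const tabA = {}, tabB = {}
hub.connect((p) => Object.assign(tabA, p))
hub.connect((p) => Object.assign(tabB, p))
hub.submit({ counter: 1 })   // both tabs converge on counter = 1
```

In production the `onUpdate` callbacks become WebSocket sends, and `submit` runs server-side behind auth and schema validation.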

Tools/libraries people actually use at scale:

  • Socket.IO (easy, but heavier)
  • ws + Redis Pub/Sub (very fast, scales horizontally)
  • Ably, Pusher, Supabase Realtime, Firebase Realtime DB / Firestore (managed)
  • Phoenix Channels (Elixir) – handles 100k+ connections easily
  • Deepstream, Yjs + y-websocket server
  • PartyKit, Liveblocks, or Replicache (newer, purpose-built for this)

Pros:

  • Truly real-time (<100ms latency typical)
  • Scales to thousands of tabs/users
  • Easy to add auth, persistence, conflict resolution
  • Works even if tabs are on different machines

Cons:

  • Requires a server (obviously)

2. CRDTs + Peer-to-Peer (No/Low Server – Experimental but Works)

If you want to minimize or eliminate a server, the best stack today (2025) is:

  • Yjs (most mature CRDT framework for JS)
  • y-websocket (for syncing via a lightweight signaling server)
  • Or y-webrtc / libp2p-based providers (fully decentralized, but higher latency)

How it looks:

import * as Y from 'yjs'
import { WebsocketProvider } from 'y-websocket'

const doc = new Y.Doc()
const provider = new WebsocketProvider('wss://your-sync-server', 'room-name', doc)

// All tabs automatically sync
const sharedMap = doc.getMap('state')
sharedMap.set('counter', 42)

sharedMap.observe(() => {
  console.log('state updated from another tab!')
})

Yjs can sync hundreds of tabs with sub-second convergence even under high churn.

Pros:

  • Conflict-free merging (no “last write wins” pain)
  • Works offline-first (tabs sync when they reconnect)
  • Very little server logic needed

Cons:

  • Larger payload size than hand-rolled diffs
  • Harder debugging
  • P2P is still flaky in corporate networks

3. BroadcastChannel + SharedWorker (Pure In-Browser, Same Origin Only)

For when all tabs are on the same domain AND the same browser instance (not cross-browser).

const channel = new BroadcastChannel('app-state')

// In every tab:
channel.onmessage = (e) => {
  if (e.data.type === 'STATE_UPDATE') {
    applyState(e.data.payload)
  }
}

// To broadcast:
channel.postMessage({ type: 'STATE_UPDATE', payload: newState })

Limitations:

  • Only works within the same browser (Chrome tabs sync with each other, Firefox tabs don’t see Chrome ones)
  • No persistence or cross-device
  • SharedWorker can help coordinate, but still same-origin and same-browser only

Use this for local-only multi-tab apps (e.g., a desktop-like web IDE).

4. LocalStorage + storage event (Old-School Hack)

window.addEventListener('storage', (e) => {
  if (e.key === 'app-state') {
    const newState = JSON.parse(e.newValue)
    applyState(newState)
  }
})

function broadcastState(state) {
  localStorage.setItem('app-state', JSON.stringify(state))
}

Works across tabs in the same browser, even different windows.

Downsides:

  • The storage event does not fire in the tab that made the change → that tab must apply its own update locally
  • String-only, size limits (~5MB)
  • No fine-grained diffs → full state every time
  • Slower than BroadcastChannel

Real-World Recommendations (2025)

| Use Case | Recommended Stack |
| --- | --- |
| Collaborative app (Figma-like) | Yjs + Liveblocks or self-hosted y-websocket |
| Multiplayer game / trading dashboard | Custom WebSocket server + Redis pub/sub + msgpack |
| Internal dashboard, same company | Socket.IO or Phoenix Channels |
| Offline-first, occasional sync | Yjs + minimal signaling server |
| Simple same-browser multi-tab only | BroadcastChannel + SharedWorker |

Bottom Line

For 50+ tabs with real real-time requirements, you almost certainly want a central server with WebSockets plus a CRDT library (Yjs) or careful operational transformation / diff patching. Pure in-browser solutions break down past ~20–30 tabs, or whenever you need cross-browser or cross-device sync.

Q20. Offline-first architecture: how do you handle conflict resolution (CRDTs vs OT)?

Offline-first architecture means the app works fully without internet, syncs locally made changes when connectivity returns, and must resolve conflicts that arise when the same data is edited on multiple devices simultaneously.

The two dominant approaches to conflict resolution in offline-first systems are CRDTs (Conflict-free Replicated Data Types) and Operational Transformation (OT); modern sequence CRDTs (Yjs, Automerge, Peritext) absorbed many of OT’s ideas. Here’s a detailed comparison to help you choose the right one for your use case.

| Aspect | CRDTs (especially state-based) | Operational Transformation (OT) / transform-based systems |
| --- | --- | --- |
| Core philosophy | State-based: ship the final state; a merge function resolves conflicts automatically | Operation-based: ship intentions (operations) and transform conflicting ops against each other |
| Conflict resolution | Deterministic merge function, no “lost updates” (last-writer-wins or additive for commutative types) | Requires transformation functions per operation type; can preserve intent better |
| Implementation complexity | Very high for custom types; low if you use existing libraries (Yjs, Automerge, RxDB + CRDTs) | Historically high (see Google Wave); moderate today with mature libraries |
| Metadata overhead | Often high (tombstones, version vectors, dot kernels, etc.) | Usually lower (just an op log + occasional checkpoints) |
| Storage growth | Grows forever unless you run anti-entropy + tombstone GC | Can prune acknowledged ops more easily |
| Network efficiency | Can send delta state or full state; Yjs deltas and awareness are very efficient | Usually very efficient (send only new ops) |
| Supported data types | Rich set with libraries: Map, Array, Text, Register, Counter, etc. | Rich set in modern libs (Automerge, Peritext, CollabText) |
| Concurrent text editing | Yjs (sequence CRDT) is currently the gold standard | Legacy Google Docs used OT; new rich-text editors (ProseMirror + Yjs, Slate + custom) have mostly moved to CRDTs |
| Intent preservation | Additive only (counters, sets) or LWW; text is excellent with Yjs | Can preserve intent better in some cases (e.g., two people formatting the same text differently) |
| Causal consistency | Guaranteed if using version vectors/dots | Requires careful op ordering and transformation correctness |
| Notable real-world usage | Yjs (Tldraw, Matrix’s Element), Automerge, Firebase Firestore (LWW registers), Redis CRDTs | Google Docs (originally), ShareDB, Etherpad (with OT plugins), Teletype for Atom (later Automerge-style) |
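The “deterministic merge function” row is easiest to see with the simplest CRDT, a grow-only counter: each replica increments only its own slot, and merge is an element-wise max, so replicas converge regardless of merge order. A minimal sketch:

```javascript
// G-Counter: the simplest state-based CRDT.
// Each replica increments only its own slot; merge = element-wise max,
// so merging in any order, any number of times, converges to the same state.
function increment(counter, replicaId, by = 1) {
  return { ...counter, [replicaId]: (counter[replicaId] ?? 0) + by }
}
function merge(a, b) {
  const out = { ...a }
  for (const [id, n] of Object.entries(b)) out[id] = Math.max(out[id] ?? 0, n)
  return out
}
const value = (counter) => Object.values(counter).reduce((s, n) => s + n, 0)

// Two devices edit offline, then sync:
let deviceA = {}, deviceB = {}
deviceA = increment(deviceA, 'A')       // A adds 1 while offline
deviceB = increment(deviceB, 'B', 2)    // B adds 2 while offline
const mergedAB = merge(deviceA, deviceB)
const mergedBA = merge(deviceB, deviceA)
// Both merge orders agree on the same total
```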

Current practical recommendations (2025)

Use CRDTs (specifically Yjs or Automerge) when:

  • You are building collaborative editing (text, drawings, whiteboards, Figma-like tools) → Yjs is the clear winner today (battle-tested at scale, tiny deltas, excellent text + awareness).
  • You want “set it and forget it” conflict resolution without writing transformation functions.
  • You need strong eventual consistency with minimal server logic.
  • You care about peer-to-peer or mesh sync (Yjs + WebRTC is amazing).
  • You are okay with slightly higher storage/memory overhead.

Libraries: Yjs (most popular), Automerge 2.x (pure JS, great for offline-first with IndexedDB persistence), RxDB + CRDT plugin, ElectricSQL (Postgres → SQLite with CRDT sync).

Use operation-based systems (modern OT or Automerge-style) when:

  • You need very precise intent preservation for complex rich-text formatting (rarely needed; Yjs handles 99% of cases now).
  • You have an existing OT codebase (ShareDB, etc.).
  • You want minimal storage growth and can prune old operations aggressively.
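For intuition on what “transform conflicting ops” means: under OT, two concurrent inserts into the same string are rebased against each other so both sites converge. A toy transform for insert-only ops (real OT systems also handle deletes, rich text, and server-side ordering):

```javascript
// Toy OT: transform op `a` against a concurrently applied op `b` (insert-only).
// If b inserted at an earlier position, a's position shifts right by b's length.
// Ties on position are broken deterministically by site id.
function transform(a, b) {
  if (b.pos < a.pos || (b.pos === a.pos && b.site < a.site)) {
    return { ...a, pos: a.pos + b.text.length }
  }
  return a
}
const apply = (doc, op) => doc.slice(0, op.pos) + op.text + doc.slice(op.pos)

// Both users start from "helo" and insert concurrently:
const doc = 'helo'
const opA = { pos: 3, text: 'l', site: 1 }   // fix the typo
const opB = { pos: 4, text: '!', site: 2 }   // add punctuation

// Site A applies its own op, then opB transformed against opA:
const atA = apply(apply(doc, opA), transform(opB, opA))
// Site B applies its own op, then opA transformed against opB:
const atB = apply(apply(doc, opB), transform(opA, opB))
// Both sites converge to "hello!"
```

Writing correct transforms for every pair of operation types is exactly the complexity that sank many classic OT implementations.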

Most new projects have abandoned classic OT in favor of Yjs or Automerge.

Hybrid & emerging approaches

  • Automerge (v2+) now uses a hybrid: columnar storage + operation log + CRDT-like registers → best of both worlds.
  • Diamond Types – an extremely small binary CRDT format for text/arrays.
  • ElectricSQL / PowerSync / WatermelonDB + custom LWW – use simple last-writer-wins + manual conflict resolution UI for relational data.

Simple rule of thumb (2025)

| Use case | Recommended tech |
| --- | --- |
| Real-time collaborative text / canvas | Yjs (undisputed leader) |
| Offline-first JSON documents | Automerge 2 or Yjs |
| Relational data (tables, foreign keys) | ElectricSQL, PowerSync, or LWW + manual resolution |
| Simple key-value or forms | LWW registers + timestamps or client-assigned Lamport clocks |
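The last row’s “LWW registers + Lamport clocks” fits in a few lines: every write carries a logical clock, and merge keeps the write with the higher (clock, clientId) pair, so all replicas pick the same winner. A minimal sketch:

```javascript
// LWW (last-writer-wins) register with a Lamport clock.
// Ties on the clock break deterministically by clientId,
// so every replica resolves to the same winner.
function write(register, value, clientId) {
  const clock = (register?.clock ?? 0) + 1   // advance past anything seen so far
  return { value, clock, clientId }
}
function mergeLWW(a, b) {
  if (a.clock !== b.clock) return a.clock > b.clock ? a : b
  return a.clientId > b.clientId ? a : b     // deterministic tie-break
}

// Two clients write concurrently from the same base version:
const base  = write(null, 'draft', 'A')       // clock 1
const fromA = write(base, 'title v2', 'A')    // clock 2
const fromB = write(base, 'Title V3', 'B')    // clock 2 (concurrent write)
const merged = mergeLWW(fromA, fromB)
// Merging in either order yields the same winner ('B' wins the tie-break)
```

This is the resolution model relational sync layers often default to, surfacing a manual-resolution UI when the lost write actually matters.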

Summary

  • Classic OT is mostly dead for new projects (too error-prone).
  • Yjs (sequence CRDT) has essentially won real-time collaboration.
  • Automerge has won pure-offline-first JSON document sync.
  • For anything beyond JSON/text, you’ll likely combine LWW + user-mediated resolution or use emerging local-first databases (ElectricSQL, Pglite + CRDTs, Triplit, etc.).

So in 2025: Prefer CRDTs (Yjs or Automerge) unless you have a very specific reason not to. The ecosystem has matured dramatically, and the old OT complexity nightmares are largely solved.