03 - JavaScript Runtime


1. Event Loop (Macro vs Microtasks)

What: JS is single-threaded. The event loop is the mechanism that handles async operations by queuing work and executing it when the call stack is empty.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                 Call Stack                    β”‚
β”‚  (executes one function at a time)           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
              β”‚ when empty, check:
              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚           Microtask Queue                    β”‚  ← ALL microtasks drain first
β”‚  Promise.then, queueMicrotask,              β”‚
β”‚  MutationObserver                            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
              β”‚ when empty, then:
              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚           Macrotask Queue                    β”‚  ← ONE macrotask, then back to micro
β”‚  setTimeout, setInterval, I/O callbacks,    β”‚
β”‚  MessageChannel, setImmediate (Node only)   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
(UI rendering is not itself a queued macrotask: the browser may update the rendering between macrotasks.)

Critical rule: After each macrotask, ALL microtasks run before the next macrotask.

js
console.log('1');                          // sync β€” runs first

setTimeout(() => console.log('2'), 0);     // macrotask β€” queued

Promise.resolve().then(() => {
  console.log('3');                        // microtask β€” queued
  Promise.resolve().then(() => console.log('4')); // microtask from microtask
});

console.log('5');                          // sync β€” runs second

// Output: 1, 5, 3, 4, 2
// Sync first, then ALL microtasks (including nested), then macrotask

Where this matters for React:

  • React 18 batches state updates automatically (even in timeouts and promises) and flushes them in a microtask
  • useEffect cleanup runs synchronously before the next run of that effect; the effect itself is scheduled after paint as a macrotask-like callback
  • flushSync forces a synchronous re-render, bypassing batching

requestAnimationFrame: Callbacks run as part of the rendering step: after the current macrotask and all microtasks have finished, immediately before the browser paints.

[macrotask] β†’ [all microtasks] β†’ [rAF callbacks] β†’ [paint] β†’ [next macrotask]

2. Task Starvation

What: When high-priority tasks keep being added, preventing lower-priority tasks from ever running.

Microtask starvation:

js
function flood() {
  Promise.resolve().then(flood); // endlessly queues microtasks
}
flood();
// setTimeout, UI rendering, user input β€” NOTHING else runs
// The browser freezes because microtasks drain before anything else

Microtasks can starve macrotasks (and rendering) because the event loop drains the entire microtask queue before moving on.

In React concurrent mode: React's scheduler prevents starvation with time slicing. It yields to the browser every ~5ms, ensuring user input and rendering get a chance to run. High-priority updates (user input) can also preempt lower-priority renders.

In practice:

  • Don't recursively queue microtasks
  • Use setTimeout or requestIdleCallback for background work
  • Use scheduler.postTask() (Scheduler API) for explicit priority control
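The "yield to the browser" advice can be sketched as a chunked loop. A minimal illustration (processInChunks and the chunk size are my own names, not a library API):

```javascript
// Process a large array without starving the event loop: handle one
// slice synchronously, then yield via setTimeout(0) so user input,
// rendering, and other macrotasks can run between slices.
function processInChunks(items, handleItem, chunkSize = 1000) {
  return new Promise((resolve) => {
    let i = 0;
    function runChunk() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) handleItem(items[i]);
      if (i < items.length) {
        setTimeout(runChunk, 0); // macrotask boundary = yield point
      } else {
        resolve();
      }
    }
    runChunk();
  });
}
```

Contrast with flood() above: each slice ends at a macrotask boundary, so the queue never monopolizes the loop.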

3. Priority Inversion in Async Code

What: When a low-priority task blocks a high-priority task because the high-priority task depends on the low-priority result.

Example:
High priority: Show search results (needs data from API)
Low priority:  API request to fetch data (network I/O)

The high-priority UI update can't happen until the low-priority
network request completes. The high-priority task is "inverted"
to the priority of the low-priority task it depends on.

In frontend context:

tsx
// User types in search box (high priority: show input)
// Search results need API call (lower priority)
// If we await the API call in the render path, input becomes laggy

// Solution: Separate priorities
function Search() {
  const [query, setQuery] = useState('');
  const deferredQuery = useDeferredValue(query); // deferred = lower priority

  return (
    <>
      <input value={query} onChange={e => setQuery(e.target.value)} /> {/* HIGH priority */}
      <Results query={deferredQuery} /> {/* LOW priority, won't block input */}
    </>
  );
}

Solution patterns:

  • startTransition / useDeferredValue for UI priority
  • Scheduler API (scheduler.postTask()) for explicit priority
  • Web Workers for CPU-bound work (completely off main thread)
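Outside React, the same idea can be approximated by hand: run urgent work synchronously and push deferrable work to a later macrotask, letting newer input cancel stale pending work. A sketch (makeDeferred is a hypothetical helper, not a real API):

```javascript
// Run low-priority work in a later macrotask; a newer call cancels the
// stale pending work, so only the latest value is ever processed.
function makeDeferred(lowPriorityFn) {
  let timer = null;
  return function schedule(value) {
    if (timer !== null) clearTimeout(timer); // newer input preempts stale work
    timer = setTimeout(() => {
      timer = null;
      lowPriorityFn(value); // runs after urgent tasks have had their turn
    }, 0);
  };
}
```

Usage: wrap the expensive results update (`const renderResults = makeDeferred(updateResults)`) and call it on every keystroke; input handling stays synchronous while results trail behind, which is roughly what useDeferredValue does with much finer-grained scheduling.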

4. AbortController

What: A mechanism to cancel async operations (fetch requests, event listeners, timers, streams).

tsx
// Cancel fetch
const controller = new AbortController();

fetch('/api/data', { signal: controller.signal })
  .then(res => res.json())
  .then(data => setData(data))
  .catch(err => {
    if (err.name === 'AbortError') {
      console.log('Request cancelled');
    } else {
      throw err;
    }
  });

// Later: cancel it
controller.abort();

Common React pattern β€” cancel on unmount or dep change:

tsx
useEffect(() => {
  const controller = new AbortController();

  async function fetchData() {
    try {
      const res = await fetch(`/api/users/${id}`, { signal: controller.signal });
      const data = await res.json();
      setUser(data);
    } catch (err) {
      if (err instanceof Error && err.name !== 'AbortError') {
        setError(err.message);
      }
    }
  }

  fetchData();
  return () => controller.abort(); // cleanup cancels in-flight request
}, [id]);

Abort with reason (newer API):

tsx
controller.abort(new Error('User navigated away'));
// the fetch promise rejects with this value, also exposed as signal.reason
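AbortSignal is not limited to fetch: any promise-based API can honor it. A sketch of an abortable delay (the delay helper is my own; `signal.aborted`, `signal.reason`, and the `abort` event are the standard API):

```javascript
// Wait for ms, but reject early with signal.reason if the signal fires.
function delay(ms, signal) {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) return reject(signal.reason); // already aborted
    const id = setTimeout(resolve, ms);
    signal?.addEventListener(
      'abort',
      () => {
        clearTimeout(id);
        reject(signal.reason); // the value passed to controller.abort(...)
      },
      { once: true }
    );
  });
}
```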

AbortSignal.timeout (newer):

tsx
fetch('/api/data', { signal: AbortSignal.timeout(5000) }); // auto-abort after 5s with a 'TimeoutError' DOMException

AbortSignal.any (combine signals):

tsx
const timeout = AbortSignal.timeout(5000);
const manual = new AbortController();
const combined = AbortSignal.any([timeout, manual.signal]);
fetch('/api/data', { signal: combined }); // aborts on timeout OR manual abort

5. Backpressure in Streams API

What: When a data producer creates data faster than the consumer can process it, backpressure is the mechanism to slow down or pause the producer.

Producer (network) β†’ 100MB/s β†’ Transform β†’ Consumer (UI render) β†’ 10MB/s
                                                          ↑
                                              Can't keep up! Memory fills up.
                                              Backpressure needed.

Streams API handles this:

tsx
const response = await fetch('/api/large-data');
const reader = response.body!.getReader();

const stream = new ReadableStream({
  async pull(controller) {
    const { done, value } = await reader.read();
    if (done) {
      controller.close();
      return;
    }
    controller.enqueue(value);
    // pull() is only called when the consumer is ready for more data
    // = automatic backpressure
  },
});

TransformStream with backpressure:

tsx
const transform = new TransformStream({
  transform(chunk, controller) {
    const processed = processChunk(chunk);
    controller.enqueue(processed);
    // If downstream can't consume, this naturally pauses
  },
});

// Pipe with automatic backpressure (requires a fresh response whose
// body is not already locked by a reader)
response.body!
  .pipeThrough(transform)
  .pipeTo(writableStream);

Why it matters: Processing large files (CSV export on dashboard), streaming server responses (SSE for real-time transactions), large JSON parsing.
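The pull-on-demand behavior is easy to observe without a network. A minimal sketch (ReadableStream is the standard Web Streams API, available in modern browsers and Node 18+; the counter is only for illustration):

```javascript
// The producer's pull() is invoked only when the consumer needs data,
// so a slow reader automatically throttles a fast producer.
let produced = 0;
const stream = new ReadableStream(
  {
    pull(controller) {
      produced++;
      if (produced > 3) controller.close();
      else controller.enqueue(produced);
    },
  },
  { highWaterMark: 1 } // keep at most one chunk buffered
);

async function consume() {
  const reader = stream.getReader();
  const values = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    values.push(value); // a slow consumer simply delays the next pull()
  }
  return values;
}
```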


6. Streaming Fetch Response Handling

What: Process a fetch response as it arrives, chunk by chunk, instead of waiting for the entire response.

tsx
async function streamResponse(url: string, onChunk: (text: string) => void) {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage: stream large transaction list
streamResponse('/api/transactions/export', (chunk) => {
  appendToTable(parseCSVChunk(chunk));
});

Server-Sent Events (SSE) for real-time:

tsx
// Server sends events line by line
const eventSource = new EventSource('/api/transactions/live');

eventSource.onmessage = (event) => {
  const transaction = JSON.parse(event.data);
  addTransaction(transaction);
};

eventSource.onerror = () => {
  eventSource.close(); // cleanup on error
};

NDJSON (Newline-Delimited JSON) streaming:

tsx
async function* streamNDJSON<T>(url: string): AsyncGenerator<T> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep incomplete line in buffer

    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line) as T;
    }
  }

  if (buffer.trim()) yield JSON.parse(buffer) as T; // flush the final line (no trailing newline)
}

// Usage
for await (const tx of streamNDJSON<Transaction>('/api/transactions/stream')) {
  updateDashboard(tx);
}
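The same buffering pattern works over any ReadableStream, which makes it easy to unit-test without a network. A plain-JS sketch (parseNDJSON is my own name; note the final flush for a last line that lacks a trailing newline):

```javascript
// Split an NDJSON byte stream into parsed objects.
async function* parseNDJSON(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the incomplete line
    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line);
    }
  }
  if (buffer.trim()) yield JSON.parse(buffer); // flush the final line
}
```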

Frontend interview preparation reference.