Frontend System Design (Uber R4) ​
R4 is 60 minutes of open-ended architecture. It is not distributed systems with Zookeeper and Kafka; it's UI system design. The interviewer evaluates whether you can scope a product, pick the right state boundary, and articulate tradeoffs like a senior engineer. At the L5A level, you're expected to drive the conversation.
Frontend System Design Framework ​
Follow this order every time. It's predictable, and predictability lets you spend more attention on content and less on "what comes next."
- Clarify (5 min) — Ask enough questions to set scope. Explicitly state what is IN and what is OUT.
- Requirements (10 min) — Split functional vs non-functional. Put numbers on non-functional (p95 latency, concurrent users, bundle budget).
- Component tree + API (15 min) — Draw the boxes. Define the request/response shapes. Pick REST vs GraphQL vs WebSocket and say why.
- Deep dives (20 min) — The interviewer picks 2-3. Go to "bad → good → great" on each.
- Buffer (10 min) — Performance, a11y, testing, monitoring.
Common Signals They Test ​
- Scope control. Senior engineers cut features; juniors promise everything.
- Tradeoff articulation. Don't just say "use Redux." Say "Redux because we have cross-cutting real-time updates that three unrelated subtrees consume; Context would cause cascading re-renders."
- Production thinking. Error boundaries, logging, feature flags, gradual rollout, SLOs.
- Self-correction. If the interviewer challenges a choice, update the design out loud instead of defending.
How to Allocate 60 Minutes ​
| Block | Time | Output |
|---|---|---|
| Clarify | 5 min | Scope statement, explicit exclusions |
| Requirements | 10 min | Functional list, non-functional numbers |
| Component + API | 15 min | Tree diagram, endpoint signatures |
| Deep dives | 20 min | 2-3 challenges with tradeoff analysis |
| Buffer | 10 min | Perf, a11y, test, monitor |
Autocomplete / Typeahead at Scale ​
Reported Frequency: Very high. Classic Uber frontend question across multiple cities.
Problem Statement ​
Design an autocomplete component for Uber's search bar that suggests places, cities, and businesses. It should feel instant, handle 10k+ queries per second globally at peak, and degrade gracefully on flaky networks.
Clarifying Questions ​
- You: "Is this suggesting any kind of entity or just places?"
- Interviewer: "Places and businesses. Not contacts or usernames."
- You: "What's the target p95 time-to-first-suggestion?"
- Interviewer: "Under 150ms."
- You: "Do we need personalization per user, like recent searches first?"
- Interviewer: "Yes, recent searches for logged-in users. Anonymous users get generic."
- You: "Mobile web, desktop, or both?"
- Interviewer: "Both. Assume the component is embedded in a React SPA."
- You: "Is there a hard cap on suggestions shown?"
- Interviewer: "Top 10."
Requirements ​
Functional:
- Suggest places and businesses as the user types.
- Support keyboard navigation (Arrow Up/Down, Enter to select, Escape to close).
- Show recent searches when the input is focused but empty.
- Bold the matched substring in each suggestion.
- Select with mouse or keyboard.
Non-Functional:
- p95 time from keystroke to rendered suggestions: < 150ms.
- Initial component bundle: < 20 KB gzipped.
- Support throttled 3G networks (degrade, not crash).
- Zero layout shift when suggestions appear.
- Full keyboard + screen reader support (WCAG 2.1 AA).
Component Tree ​
```
<Autocomplete>
├── <SearchInput> (controlled, owns raw text)
├── <SuggestionList> (role="listbox")
│   ├── <SuggestionGroup heading="Recent">
│   │   └── <SuggestionItem role="option">
│   └── <SuggestionGroup heading="Places">
│       └── <SuggestionItem role="option">
└── <AnnounceRegion> (aria-live="polite")
```

Data Flow & State Management
- Local component state for: input value, activeDescendantIndex, isOpen.
- Client cache (LRU, ~200 entries) for: query → suggestions. Entries live 60s.
- Server cache (Varnish/CDN) for popular queries by coarse geo bucket.
- Request layer uses `AbortController` so stale in-flight requests are cancelled when the user types another character.

No Redux. This is local-by-nature state; lifting it to a global store only adds coupling.
API Design ​
```
GET /api/autocomplete?q=sfo&lat=37.77&lng=-122.41&session=abc123
→ 200 {
  suggestions: [
    { id: "plc_1", type: "place", name: "SFO Airport", region: "San Francisco, CA",
      matchRanges: [[0,3]] },
    ...
  ],
  requestId: "req_456"
}
```

Key details:

- The `session` token groups a user's keystroke sequence so the backend can bill it as one autocomplete "session" (Google Places does this).
- The server includes `requestId` so the client can drop responses that arrive out of order.
- Responses are gzipped and cacheable at the edge with `Cache-Control: public, max-age=60` for anonymous queries; `Cache-Control: private, max-age=0` for personalized ones.
Deep Dives ​
1. Debounce vs throttle vs "no wait"
- Bad: Fire a request on every keystroke. 10 chars = 10 round trips, mostly wasted. Backend on fire.
- Good: Debounce 300ms. Waits until the user pauses. Works but feels laggy at 300ms.
- Great: Debounce 100-150ms plus request cancellation via `AbortController`. Requests go out fast, but only the freshest one renders. Bonus: prefetch on focus (fire an empty-query request to warm the user's recent searches).
2. Race condition on out-of-order responses
- Bad: Last response wins by arrival time. A slow response for "sf" can overwrite a fast one for "sfo".
- Good: Track the most recent query text. On response, compare against current input; drop if stale.
- Great: Per-request sequence number. Increment on each dispatch and remember the last rendered sequence; a response with a lower sequence number is discarded. Also cancel stale requests with `AbortController` so they're never even parsed.
```ts
const controllerRef = useRef<AbortController | null>(null);
const latestSeqRef = useRef(0);

async function fetchSuggestions(q: string) {
  controllerRef.current?.abort();
  const controller = new AbortController();
  controllerRef.current = controller;
  const seq = ++latestSeqRef.current;
  try {
    const res = await fetch(`/api/autocomplete?q=${encodeURIComponent(q)}`, {
      signal: controller.signal,
    });
    if (seq < latestSeqRef.current) return; // a newer request was dispatched
    const data = await res.json();
    setSuggestions(data.suggestions);
  } catch (err) {
    if ((err as Error).name === "AbortError") return; // cancelled: not an error
    throw err;
  }
}
```

3. Caching strategy
- Bad: No cache. Every keystroke hits the network.
- Good: A client-side `Map<query, suggestions>` keyed by the exact query string. Works, but memory grows unbounded.
- Great: LRU with a size cap (200 entries) and a TTL (60s). On a hit, render synchronously (no loading state). Prefer server results for the typed-so-far prefix; optionally also consult a local trie of previous results for offline mode.
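The "great" cache above fits in a small class: a JavaScript `Map` doubles as the LRU because maps preserve insertion order. A minimal sketch under my own naming (`LruCache` is illustrative; the clock is injectable so the TTL is testable):

```typescript
class LruCache<V> {
  private map = new Map<string, { value: V; storedAt: number }>();

  constructor(
    private maxSize: number,
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock for tests
  ) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt > this.ttlMs) {
      this.map.delete(key); // expired
      return undefined;
    }
    // Re-insert so the key becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, { value, storedAt: this.now() });
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry: the first key in iteration order.
      this.map.delete(this.map.keys().next().value as string);
    }
  }
}
```

On a cache hit the component renders suggestions synchronously, which is what makes repeat queries feel instant.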
4. Handling slow networks
- Bad: Infinite spinner, blocks the UI.
- Good: Show a skeleton after 100ms, timeout at 3s, show "No results" or "Can't reach server."
- Great: Stale-while-revalidate. Render whatever is in cache immediately, fire a background request, reconcile when it returns. On full offline, fall back to a locally shipped "top 1000 cities" trie for graceful degradation.
Performance Considerations ​
- Bundle size: Tree-shake the component. Lazy-load icons via SVG sprite or dynamic import. Avoid bundling a fuzzy-search library client-side if the server does the matching.
- Runtime: Virtualize the suggestion list if > 50 items (rare here; cap at 10).
- Layout: Reserve the suggestion container height or use `position: absolute` on the popover so it doesn't push content.
- Network: HTTP/2 multiplexing, Brotli compression, ETags for personalized responses.
Accessibility ​
- `<input role="combobox" aria-expanded aria-controls="list" aria-activedescendant="option-N">`.
- `<ul id="list" role="listbox">` with `<li role="option" id="option-N">` children.
- An `aria-live="polite"` region announces "5 suggestions available."
- Arrow keys move the active descendant; Enter selects; Escape closes.
- Matched substring is bolded but not conveyed via color alone.
Testing Strategy ​
- Unit: Matcher highlighting, LRU eviction, sequence-number logic.
- Integration: React Testing Library; simulate typing, verify debounce, verify cancellation.
- E2E: Playwright on real browsers; test with throttled network and offline.
- Visual regression: Chromatic or Percy for the popover at different widths.
- A11y: axe-core + manual VoiceOver/NVDA passes.
Senior vs Staff ​
A senior engineer ships this correctly. A staff engineer also defines SLOs, ties them to critical-user-journey (CUJ) metrics, and sets up alerts for p95 breaches. Staff would also negotiate the API contract with the backend team to make sure the session-token and sequence-number fields exist.
Collaborative Calendar ​
Reported Frequency: Mid-high. Reported Sept 2025 L5A Bangalore loop.
Problem Statement ​
Design a Google Calendar-style UI for Uber's internal team that supports month/week/day views, event creation/editing, and real-time updates when teammates change shared events.
Clarifying Questions ​
- You: "Shared calendars or just personal?"
- Interviewer: "Both. Up to 10 calendars overlaid."
- You: "Recurring events?"
- Interviewer: "Yes - weekly and monthly. No complex RRULE stuff."
- You: "Real-time conflict resolution if two people edit the same event?"
- Interviewer: "Yes. Last-write-wins is fine, but show a notification when someone else changed what you're editing."
- You: "Time zones?"
- Interviewer: "Must work across time zones. Events are stored in UTC; display in user's local time."
- You: "Mobile?"
- Interviewer: "Desktop first. Mobile web is nice to have."
Requirements ​
Functional:
- Month, week, day views.
- Create, edit, delete events.
- Overlay up to 10 calendars (toggle visibility).
- Recurring events (weekly, monthly).
- Time zone display.
- Real-time updates via WebSocket.
- Conflict detection and notification.
Non-Functional:
- First meaningful paint: < 1s on fast 3G.
- Event create-to-visible: < 200ms (optimistic).
- Support 100k events in a user's calendars without OOM.
- Bundle budget: < 150 KB gzip initial, < 300 KB with all views.
Component Tree ​
```
<CalendarApp>
├── <TopBar> (view switcher, date picker, "new event")
├── <Sidebar> (calendar list, timezone selector)
├── <ViewContainer>
│   ├── <MonthView> (virtualized month cells)
│   ├── <WeekView> (7 columns × 24 rows, virtualized rows)
│   └── <DayView> (single column, virtualized rows)
├── <EventTooltip> (hover preview)
├── <EventModal> (create/edit)
└── <ConflictToast> (real-time conflict notifier)
```

Data Flow & State Management
- Zustand with per-calendar slices. Each calendar slice holds events indexed by `yyyy-mm-dd`.
- A global slice for view state (current view, current date, timezone).
- WebSocket middleware pushes remote updates into the store.
- Selectors are memoized; components subscribe to only their slice.
Why Zustand and not Redux: Redux's boilerplate for five-plus slices is heavy, and its devtooling isn't worth the overhead for this read-heavy app. Why not Context: overlaying 10 calendars means either 10 providers or one giant context, both of which cause sweeping re-renders.
API Design ​
```
GET /api/events?calendarIds=a,b,c&from=2026-04-01&to=2026-04-30
→ 200 { events: [...], cursor: "abc" }

POST /api/events
{ calendarId, title, startUtc, endUtc, recurrence: {...} | null }
→ 201 { event: {...} }

PATCH /api/events/:id
{ title?, startUtc?, endUtc?, version }
→ 200 { event: {...} } | 409 { currentVersion, conflict: {...} }

WS /ws/calendar
server → client: { type: "event.updated" | "event.deleted" | "event.created", event }
server → client: { type: "presence", userId, editingEventId }
```

Versioning via the `version` field gives optimistic concurrency: the server rejects a PATCH whose version is stale.
Deep Dives ​
1. Time zone handling (the hardest part)
- Bad: Store events in local time. Dragging across DST boundaries corrupts them.
- Good: Store in UTC, convert in the UI using `Intl.DateTimeFormat`. Works for display but breaks recurrences across DST.
- Great: Store in UTC with a per-event `tz` field (IANA zone name). Recurrences reconstruct against the original zone, so a 9am daily meeting stays at 9am even when DST shifts. Use the `Temporal` API or `date-fns-tz`.
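To show why the zone name matters, here's how a wall-clock time in an IANA zone can be converted to a UTC instant using only `Intl`. This is a sketch of the underlying idea (the function name is mine); in production you'd reach for `Temporal` or `date-fns-tz` as noted above.

```typescript
// Convert "9:00 on this date in this IANA zone" to a UTC instant.
function zonedTimeToUtc(dateIso: string, hour: number, timeZone: string): Date {
  const [y, m, d] = dateIso.split("-").map(Number);
  const wall = Date.UTC(y, m - 1, d, hour); // desired wall time, encoded as UTC ms
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone,
    hourCycle: "h23",
    year: "numeric", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit", second: "2-digit",
  });
  let utc = wall; // first guess: pretend the wall time is already UTC
  for (let i = 0; i < 2; i++) {
    // Ask Intl what wall time this instant shows in the target zone,
    // then shift by the difference. Two passes converge across DST edges.
    const parts = fmt.formatToParts(new Date(utc));
    const get = (t: string) => Number(parts.find((p) => p.type === t)!.value);
    const shown = Date.UTC(
      get("year"), get("month") - 1, get("day"),
      get("hour"), get("minute"), get("second")
    );
    utc += wall - shown;
  }
  return new Date(utc);
}
```

Because the conversion runs per occurrence, a 9am recurrence produces 14:00 UTC in winter and 13:00 UTC in summer for a New York calendar, which is exactly the behavior the "great" answer describes.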
2. Real-time sync and conflict handling
- Bad: Poll every 30s. Missed updates, wasted bandwidth.
- Good: WebSocket push. On remote update, merge into store. Last-write-wins.
- Great: Optimistic local updates with rollback. When you PATCH, apply locally and send with `version`. If the server responds 409, you get the latest version and show a toast: "Alice edited this while you were editing. View changes?" Present a diff modal.
True OT/CRDT collaboration (as in Google Docs) is overkill here. Events are small objects; last-write-wins plus a notification is the pragmatic choice.
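The version check itself is tiny. A sketch of the server-side rule, with a hypothetical in-memory `Map` standing in for the database:

```typescript
type CalendarEvent = { id: string; title: string; version: number };
type PatchResult =
  | { status: 200; event: CalendarEvent }
  | { status: 409; currentVersion: number };

// Optimistic concurrency: the patch only applies if the client's version
// matches the stored one; otherwise return 409 so the client can show a
// conflict toast and a diff. (Assumes the event exists.)
function patchEvent(
  store: Map<string, CalendarEvent>,
  id: string,
  patch: Partial<Pick<CalendarEvent, "title">>,
  version: number
): PatchResult {
  const current = store.get(id)!;
  if (current.version !== version) {
    return { status: 409, currentVersion: current.version };
  }
  const updated = { ...current, ...patch, version: version + 1 };
  store.set(id, updated);
  return { status: 200, event: updated };
}
```

The client applies the patch optimistically before sending; on a 409 it rolls back to the server's copy and surfaces the notification.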
3. 100k events without OOM
- Bad: Load all events for the year on mount.
- Good: Load only events in the visible range; refetch on view change.
- Great: Windowed fetch plus virtualization. Month view loads 6 weeks at a time, prefetches adjacent months in the background, evicts months more than 6 away. Virtualize rows in week/day view so only visible time slots are rendered. Use IndexedDB for offline persistence of recent events.
4. Infinite scroll across years
- Bad: DOM grows forever.
- Good: IntersectionObserver triggers loadMore on scroll.
- Great: A virtualized container with a known month height; the scrollbar reflects the true position. Use `react-window` or a custom windower.
Performance Considerations ​
- Offload recurrence expansion to a Web Worker; the UI thread never blocks expanding a year of weekly events.
- Debounce layout thrashing when the user drags an event (use `transform: translate` during drag; commit to `top`/`left` on drop).
- Memoize the event-overlap calculation per day.
Accessibility ​
- Keyboard shortcuts: `j`/`k` for previous/next day, `w`/`m`/`d` for views, `c` to create.
- Each event is a `<button>` with an aria-label describing time, title, and calendar.
- The modal traps focus and restores it on close.
- Grid semantics for month view: `role="grid"`, `role="gridcell"`.
Testing Strategy ​
- Unit: Recurrence expansion, DST handling, overlap calculation.
- Integration: RTL tests for drag-drop, modal open/close.
- E2E: Playwright flows for create/edit/delete, conflict detection.
- Visual regression on views at multiple viewport widths.
Senior vs Staff ​
Staff would propose an RFC for the conflict-resolution UX, benchmark different state managers with a 100k-event fixture, and define the WebSocket retry and reconnection strategy with backoff and resume-from-offset.
Config-Driven Widget Builder ​
Reported Frequency: Mid. Appears for design-system-adjacent roles.
Problem Statement ​
Build a widget builder where product managers can drag components onto a canvas, configure them via a property panel, and export a JSON config that runtime consumers render. Similar to Retool or Webflow at a simplified level.
Clarifying Questions ​
- You: "How many widget types?"
- Interviewer: "Start with 5: text, image, button, input, list."
- You: "Is the runtime in the same app or a separate one?"
- Interviewer: "Same app for now. Editor mode and preview mode."
- You: "Nested widgets?"
- Interviewer: "Yes. List contains rows that contain other widgets."
- You: "Who validates the config?"
- Interviewer: "The editor, on save. You pick the schema language."
Requirements ​
Functional:
- Drag widgets from palette onto canvas.
- Select a widget to see its properties in a side panel.
- Edit properties (text, color, action, data binding).
- Switch between edit and preview mode.
- Export and import JSON config.
- Nested widgets (tree, not just flat).
Non-Functional:
- Canvas renders < 100ms for configs with up to 500 widgets.
- Config portable: no framework-specific types in export.
- Extensible: third parties can register new widget types.
Component Tree ​
```
<Builder>
├── <Palette> (list of draggable widget types)
├── <Canvas>
│   └── <WidgetRenderer> (recursive)
├── <PropertyPanel> (forms for selected widget)
└── <ModeToggle> (edit | preview)
```

Data Flow & State Management
- A Zustand store holds the widget tree and the selected-widget id.
- The widget registry is a module-level Map: `type → { Component, schema, defaultProps }`.
- The property panel renders a form from the selected widget's schema; changes dispatch `updateProps(id, patch)`.
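A sketch of what `updateProps` can do under the hood: patch one node immutably while preserving the references of untouched subtrees, so memoized sibling widgets skip re-rendering. The node shape mirrors the `WidgetNode` config type used in this section; the function itself is my own illustration.

```typescript
type WidgetNode = {
  id: string;
  type: string;
  props: Record<string, unknown>;
  children?: WidgetNode[];
};

// Returns a new tree with one node's props patched. Subtrees that don't
// contain the target keep their original object identity, which is what
// lets React.memo skip them.
function updateProps(
  node: WidgetNode,
  id: string,
  patch: Record<string, unknown>
): WidgetNode {
  if (node.id === id) {
    return { ...node, props: { ...node.props, ...patch } };
  }
  if (!node.children) return node;
  let changed = false;
  const children = node.children.map((child) => {
    const next = updateProps(child, id, patch);
    if (next !== child) changed = true;
    return next;
  });
  return changed ? { ...node, children } : node;
}
```

The referential-identity guarantee is also what makes undo/redo cheap: snapshots share all unchanged subtrees.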
API Design ​
Widget schema uses JSON Schema subset:
```ts
type WidgetSchema = {
  type: string; // "text", "button", ...
  properties: Record<string, {
    type: "string" | "number" | "boolean" | "color" | "action";
    label: string;
    default?: unknown;
    required?: boolean;
  }>;
  children?: "none" | "single" | "multiple";
};

type WidgetNode = {
  id: string;
  type: string;
  props: Record<string, unknown>;
  children?: WidgetNode[];
};
```

Registration:
```ts
registry.register("button", {
  Component: ButtonWidget,
  schema: {
    type: "button",
    properties: {
      label: { type: "string", label: "Label", default: "Click me" },
      color: { type: "color", label: "Background" },
      onClick: { type: "action", label: "On Click" },
    },
    children: "none",
  },
});
```

Deep Dives
1. Runtime rendering from config
- Bad: A giant switch statement on `type` inside one component. Adding a widget means editing core code.
- Good: Registry pattern. `registry.get(type).Component` is looked up at render time.
- Great: Registry plus lazy loading. Rarely used widget types load via dynamic import; the canvas shows a skeleton until loaded. Widgets self-register in their own modules.
2. Extensibility (plugin system)
- Bad: Widgets are hardcoded in the builder repo.
- Good: A plugin API: `builder.registerWidget(definition)`. External packages can extend.
- Great: Manifest-driven. Plugins declare their widgets in a manifest; the builder loads manifests at startup, version-checks the schema against the plugin version, and isolates plugin errors with error boundaries so one buggy plugin doesn't kill the builder.
3. Config validation
- Bad: Save without validation. Runtime blows up.
- Good: On save, walk the tree; for each node, validate props against its schema.
- Great: Live validation in the property panel (per-field errors) plus pre-save whole-tree validation. Use Zod or Ajv. Validation errors link to the offending widget in the canvas.
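The whole-tree pass in the "good" step is a short recursion. A minimal hand-rolled sketch (in practice Zod or Ajv would replace these checks; the types here are simplified versions of the schema shapes above):

```typescript
type PropSpec = { type: string; label: string; required?: boolean };
type Schema = { type: string; properties: Record<string, PropSpec> };
type ConfigNode = {
  id: string;
  type: string;
  props: Record<string, unknown>;
  children?: ConfigNode[];
};

// Walk the tree depth-first, collecting human-readable errors that the
// property panel can link back to the offending widget id.
function validateTree(
  node: ConfigNode,
  schemas: Map<string, Schema>,
  errors: string[] = []
): string[] {
  const schema = schemas.get(node.type);
  if (!schema) {
    errors.push(`${node.id}: unknown widget type "${node.type}"`);
  } else {
    for (const [name, spec] of Object.entries(schema.properties)) {
      if (spec.required && node.props[name] === undefined) {
        errors.push(`${node.id}: missing required prop "${name}"`);
      }
    }
  }
  for (const child of node.children ?? []) validateTree(child, schemas, errors);
  return errors;
}
```

Carrying the widget id in each error string is what makes "click the error, jump to the widget" cheap to build.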
4. Drag-drop canvas
- Bad: HTML5 DnD on the canvas with pixel coordinates.
- Good: `dnd-kit` or similar, with drop zones per container widget.
- Great: Dropzone awareness: a list widget highlights only when a compatible child type is being dragged; hovering over an incompatible parent shows a "not allowed" cursor.
Performance Considerations ​
- Memoize each widget by id so a property change on one widget doesn't re-render siblings.
- Schema forms: render fields in a shadow DOM-isolated subtree to avoid leaking styles.
- Undo/redo: store the widget tree as immutable snapshots (Immer patches).
Accessibility ​
- Keyboard-driven widget selection (Tab through widgets, arrows to reorder, Enter to edit).
- All property forms are accessible form controls with labels.
- Preview mode must satisfy a11y for the built UI; validator warns on unlabeled inputs.
Testing Strategy ​
- Unit: Registry, schema validation.
- Integration: Drag widget, set prop, export JSON, import and diff.
- Snapshot: Serialized tree for reference configs.
- Visual regression on preview mode.
Senior vs Staff ​
Staff defines the plugin contract as a versioned public API, sets up semver for schemas, and plans the migration story when a widget schema changes.
Real-time Dashboard / Feed ​
Reported Frequency: High. Common variant: "design a live driver-earnings dashboard."
Problem Statement ​
Build a real-time dashboard showing a scrolling feed of events (trip starts, completions, cancellations) for Uber's ops team. 5-50 events per second per user, filters by region, pausable.
Clarifying Questions ​
- You: "What's the event volume per user?"
- Interviewer: "5-50 per second sustained, bursts to 200."
- You: "Retention - how far back?"
- Interviewer: "Last 15 minutes in view; older events scroll off."
- You: "Filters?"
- Interviewer: "Region, event type, driver id. Applied server-side ideally."
- You: "Can users click an event for details?"
- Interviewer: "Yes, opens a side panel."
Requirements ​
Functional:
- Scrolling feed of events, newest on top.
- Filters: region, event type, driver.
- Pause / resume button.
- Click to open detail panel.
- Event count badge when paused (buffered count).
Non-Functional:
- 60fps scroll under 50 events/sec.
- No more than 1s delay from server emit to UI.
- Bundle < 80 KB.
- Must not OOM after 24 hours idle.
Component Tree ​
```
<Dashboard>
├── <FilterBar>
├── <FeedToolbar> (pause/resume, count badge)
├── <FeedList> (virtualized)
│   └── <FeedItem>
└── <DetailPanel>
```

Data Flow & State Management
- WebSocket connection with filters in the subscribe message.
- Zustand store: `items: Event[]`, bounded to the last 1000.
- While paused, incoming events accumulate in a `buffer` array; on resume, they're merged in.
API Design ​
```
WS /ws/ops-feed
client → server: { type: "subscribe", filters: { region, eventType, driverId } }
client → server: { type: "filter", filters }
server → client: { type: "event", event }
server → client: { type: "snapshot", events }   (on connect)
```

Deep Dives
1. WebSocket fan-out at scale
- Bad: Broadcast all events to all clients; client filters.
- Good: Server subscribes each client to topics matching their filters; only matching events sent.
- Great: Multi-tier: edge WebSocket gateway fans out from a pub/sub bus (Kafka). Clients subscribe via the gateway. Filters applied at the gateway. Sticky sessions per gateway node; gateway acks on connect with last known offset so reconnects resume without gaps.
2. Backpressure - messages faster than renderable
- Bad: `setItems([...items, newEvent])` on every message. At 200/sec, React can't keep up and the event loop starves.
- Good: Buffer incoming events and batch the inserts on a fixed interval (say, every 250ms). Fewer renders, still live enough.
- Great: Hybrid: buffer in a ref, flush on the next `requestAnimationFrame`, and if the buffer grows beyond 500 during the frame, collapse it into a single "X events summarized" row. This preserves liveness without killing the main thread.
```ts
const bufferRef = useRef<Event[]>([]);
const rafRef = useRef<number | null>(null);

function onWsMessage(event: Event) {
  bufferRef.current.push(event);
  if (rafRef.current != null) return; // a flush is already scheduled
  rafRef.current = requestAnimationFrame(() => {
    const toFlush = bufferRef.current;
    bufferRef.current = [];
    rafRef.current = null;
    useStore.getState().prependBatch(toFlush);
  });
}
```

3. Virtualized feed with bounded memory
- Bad: Append forever. Memory leaks after an hour.
- Good: Cap items at 1000; drop oldest.
- Great: Virtualize with `react-window`; store a ring buffer of the 1000 most recent events. The rendered DOM is ~20 rows regardless of buffer size.
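The bounded store can be a classic fixed-capacity ring buffer: O(1) appends, no array shifting, and memory stays flat no matter how long the dashboard runs. A minimal sketch (the class name is mine):

```typescript
// Fixed-capacity ring buffer. Once full, each push overwrites the oldest
// element instead of growing the array.
class RingBuffer<T> {
  private buf: T[];
  private head = 0; // index of the oldest element
  private count = 0;

  constructor(private capacity: number) {
    this.buf = new Array<T>(capacity);
  }

  push(item: T): void {
    const idx = (this.head + this.count) % this.capacity;
    this.buf[idx] = item;
    if (this.count < this.capacity) this.count++;
    else this.head = (this.head + 1) % this.capacity; // dropped the oldest
  }

  // Newest-first snapshot for rendering the feed.
  toArray(): T[] {
    const out: T[] = [];
    for (let i = this.count - 1; i >= 0; i--) {
      out.push(this.buf[(this.head + i) % this.capacity]);
    }
    return out;
  }

  get size(): number {
    return this.count;
  }
}
```

Pairing this with virtualization gives the two bounds you want to say out loud in the interview: memory is bounded by the buffer capacity, and DOM size is bounded by the viewport.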
4. Offline queue and reconnect
- Bad: On disconnect, stop. On reconnect, resubscribe and miss everything in between.
- Good: Reconnect with exponential backoff and resubscribe.
- Great: Track the last-seen `offset` per stream. On reconnect, send `subscribe` with `resumeFrom: lastOffset`; the server replays from its buffer (up to N minutes). If the gap is larger, it sends a fresh snapshot and resets.
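The reconnect delay itself is usually exponential backoff with full jitter, so thousands of clients don't stampede the gateway in lockstep after a restart. A sketch (parameter defaults are illustrative, not a real contract):

```typescript
// Delay before the Nth reconnect attempt: uniform in [0, min(cap, base * 2^attempt)].
// Full jitter spreads reconnects out instead of synchronizing them.
function backoffDelayMs(
  attempt: number,
  baseMs = 500,
  capMs = 30_000,
  rand: () => number = Math.random // injectable for tests
): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(rand() * ceiling);
}
```

After a successful resume, the attempt counter resets to zero so the next blip recovers quickly.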
Performance Considerations ​
- CSS `contain: layout style paint` on each row to isolate layout costs; `content-visibility: auto` for off-screen rows.
- Use `transform: translateY` for smooth scroll; avoid top/left animations.
- Don't render large JSON in each row; show a summary and defer detail to the panel.
Accessibility ​
- An `aria-live="polite"` region on the feed with a non-intrusive announcement policy (summarize every N seconds, not every event).
- A pause button for users who need to read at their own pace (WCAG 2.2.2).
- Keyboard navigation through rows.
Testing Strategy ​
- Unit: Backpressure batcher, ring buffer, reconnection.
- Integration: Mock WebSocket, assert render count stays bounded under load.
- Load test: Synthetic 500 events/sec for 10 minutes; assert no memory growth.
Senior vs Staff ​
Staff defines the SLO ("99% of events visible within 1s"), builds dashboards against real traffic, and negotiates with backend about the resume-from-offset contract.
Uber Rider App Frontend ​
Reported Frequency: High for L5A. This is the Uber-specific variant.
Problem Statement ​
Design the frontend of the Uber rider app (web or mobile web): map with driver location, ride request flow, ETA updates, driver tracking after match, receipt.
Clarifying Questions ​
- You: "Web or React Native?"
- Interviewer: "React Native, but focus on architecture; the mental model transfers."
- You: "Offline-first?"
- Interviewer: "At least the app shell. Ride-in-progress state should be cached."
- You: "What map provider?"
- Interviewer: "Assume a Google Maps-like SDK is given. Focus on how you use it."
- You: "Battery?"
- Interviewer: "Important. Don't stream GPS at 60Hz."
Requirements ​
Functional:
- Map with current location.
- Search destination, see fare estimate.
- Request a ride; show driver assignment.
- Track driver on the map in real time.
- In-trip screen with ETA, route.
- End-of-trip receipt.
Non-Functional:
- Time to interactive < 2s on LTE.
- Location update visible within 1s of server push.
- Battery drain < 3% over a 30-minute trip.
- Works at 1Mbps with 400ms latency.
Component Tree ​
<App>
├── <MapLayer> (full-screen map)
│ ├── <RiderMarker>
│ ├── <DriverMarker>
│ └── <RoutePolyline>
├── <BottomSheet> (context-aware; changes per flow state)
│ ├── <DestinationSearch>
│ ├── <FareEstimate>
│ ├── <RideRequesting>
│ ├── <RideInProgress>
│ └── <Receipt>
└── <HeaderBar>Data Flow & State Management ​
- Redux Toolkit with slices for `location`, `trip`, `user`, and `map`. RTK is picked here because multiple unrelated subtrees (map, bottom sheet, header) subscribe to `trip.state`; Redux's subscribe-select pattern prevents the cascading re-renders Context would cause.
- A Location Service client owns a WebSocket subscription to the driver-location feed and dispatches throttled updates into Redux (max 1/sec for render, although the server may push more).
- Persistence via MMKV (React Native key-value store) for trip-in-progress state.
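The 1/sec render throttle in the Location Service client is a few lines. A sketch with an injectable clock so it's testable (names are my own):

```typescript
// Emit at most one value per interval; intermediate samples are dropped,
// which is fine for locations since the next push supersedes them anyway.
function makeThrottle<T>(
  intervalMs: number,
  emit: (value: T) => void,
  now: () => number = Date.now // injectable clock for tests
): (value: T) => void {
  let last = -Infinity;
  return (value: T) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      emit(value);
    }
  };
}
```

Dropping (rather than queueing) stale samples is the deliberate choice here: rendering an old driver position after the fresh one arrived would move the marker backwards.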
API Design ​
```
POST /rides/estimate { pickup, dropoff } → { eta, fare, rideOptions }
POST /rides { pickup, dropoff, option } → { rideId, status: "requested" }

WS /ws/rider
server → client: { type: "driver.assigned", driver, etaSeconds }
server → client: { type: "driver.location", lat, lng, heading }
server → client: { type: "trip.state", state: "arriving" | "started" | "ended" }
```

Deep Dives
1. Map rendering and viewport management
- Bad: Re-render the map on every driver location update. GPU flails.
- Good: The map component is uncontrolled for driver position; use the SDK's `marker.setPosition()` imperatively.
- Great: Animate the marker with smooth interpolation between points; only the map layer touches the DOM/GL, and the React tree above stays stable. Pan/zoom are kept out of React state to avoid render thrash.
2. Location streaming (Connection Service → Location Service)
- Bad: One WebSocket per feature. Driver location, trip state, and chat all open their own connections.
- Good: Single WebSocket multiplexed by message type. Redux middleware routes messages to slices.
- Great: A Connection Service abstraction that owns one WebSocket per app session, exposes `subscribe(topic, handler)`, handles reconnect with jitter plus offset resume, and dispatches battery-aware heartbeats (longer ping intervals when backgrounded).
3. Design system layer
- Bad: Inline styles everywhere. Changing a button color means touching 100 files.
- Good: An `@uber/base` library of primitives. Use the same components across web and native where possible.
- Great: Tokens (color, spacing, typography) live in platform-agnostic JSON, compiled into CSS variables for web and NativeWind for RN. Visual regression tests per component. Versioned with semver so consumer apps can upgrade safely.
4. Offline-first patterns
- Bad: Errors everywhere when offline.
- Good: Cache the app shell with a Service Worker (for web) or preload bundled assets (for native). Show "You're offline" for non-essential actions.
- Great: Distinguish "must work offline" (view in-progress trip, access receipts) from "can fail gracefully" (search). Queue outbound mutations (rating, tip) in IndexedDB/MMKV and sync on reconnect with idempotency keys.
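The queue's key property is deduplication by idempotency key, so a tip queued twice (say, a double tap while offline) is sent once and the server can also dedupe on replay. A sketch (the `send` callback stands in for the real request, which would carry an `Idempotency-Key` header):

```typescript
type Mutation = { idempotencyKey: string; path: string; body: unknown };

class OfflineQueue {
  private queue: Mutation[] = [];
  private seen = new Set<string>();

  enqueue(m: Mutation): void {
    if (this.seen.has(m.idempotencyKey)) return; // already queued or sent
    this.seen.add(m.idempotencyKey);
    this.queue.push(m);
  }

  // Drain on reconnect. A mutation is removed only after a successful send,
  // so a crash mid-flush leaves the remainder queued for next time.
  async flush(send: (m: Mutation) => Promise<void>): Promise<void> {
    while (this.queue.length > 0) {
      await send(this.queue[0]);
      this.queue.shift();
    }
  }

  get length(): number {
    return this.queue.length;
  }
}
```

In the app this would be backed by MMKV or IndexedDB rather than an in-memory array, so the queue survives a process kill.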
5. Battery optimization
- Bad: 10Hz GPS in the foreground, same in background.
- Good: 1Hz foreground, pause in background.
- Great: Adaptive: 5Hz during pickup-approach, 1Hz during trip cruise, pause when screen is off. Use native geofencing for "driver near pickup" instead of polling.
Performance Considerations ​
- Code-split the receipt screen; it loads only at trip end.
- Hermes engine for RN; profile with Flipper.
- Hoist map component above the bottom sheet so sheet transitions don't re-mount it.
Accessibility ​
- VoiceOver/TalkBack announces trip state changes.
- All map controls have button equivalents in a menu for screen-reader users.
- Dynamic type support for large-text settings.
Testing Strategy ​
- Unit: Connection Service reconnect, location throttle, fare math.
- Integration: Jest + RN Testing Library on bottom-sheet flows.
- E2E: Detox; scripted trip from request to receipt.
- Chaos: Simulate WebSocket drops mid-trip; assert recovery.
Senior vs Staff ​
Staff owns the cross-team contract for Connection Service, defines the message schema as a versioned spec, and plans the rollout (feature flag, geo-gated, percentage ramp). They also set the SLO dashboards that product uses to justify the migration.
Closing Reminders ​
- Name tradeoffs explicitly and often; interviewers listen for whether you weigh alternatives rather than assert choices.
- When you name an approach "great," explain what makes it expensive too; staff engineers acknowledge cost.
- If you have 5 minutes left, don't introduce a new feature. Go back and deepen one you skimmed.
- End with: "If I had more time, I'd explore X, Y, Z." Shows self-awareness.