
Uber L5A Frontend — Interview Loop Breakdown

A round-by-round field guide for Uber's Senior SE (L5A) Frontend loop. Everything here is distilled from recent (2024-2025) candidate debriefs, recruiter calls, and Uber Eng's public blog posts on how they hire.

Leveling Context (read this first)

Uber's engineering ladder was reshuffled in 2022. What used to be "Senior 2" is now called Staff. The practical implication for you as a candidate:

| Level | Old name | Rough scope | Peer companies |
|-------|----------|-------------|----------------|
| L4 | SE II | 1 engineer, a quarter | Google L4, Meta E4, Amazon SDE II |
| L5A | Senior | 2-4 engineers across 2-3 teams, 1-3 quarters | Google L5, Meta E5, Amazon SDE II-III |
| L5B | Senior 2 / Staff-adjacent | Staff-lite, cross-team lead | Google L5 senior end |
| L6 | Staff | Multi-quarter, org-level | Google L6, Meta E6 |

Why this matters for the HM round: if you pitch a project that sounds like a weekend hack ("I built a Chrome extension in a hackathon"), you'll be mis-leveled down. If you pitch a two-year saga that spans four orgs, you'll be told "that's an L6 story, not L5A." The sweet spot is one clear deliverable that took you 1-3 quarters, had 2-4 engineers collaborating, and shipped with measurable impact.

Frontend track specifics: the Frontend L5A loop diverges from the backend loop after R1. You will NOT be asked to design a distributed database or a rate-limiter service. You WILL be asked to design a UI system (collaborative calendar, autocomplete-at-scale, config-driven widgets). DSA is often allowed in JS/TS. Machine coding is React or vanilla JS.

The Loop at a Glance

| Round | Name | Duration | Pass rate (reported) |
|-------|------|----------|----------------------|
| 1 | Elimination / BPS Coding | 60 min | 50-60% |
| 2 | Coding — DSA | 60 min | 60-70% |
| 3 | Depth / Machine Coding | 60 min | 55-65% |
| 4 | Frontend System Design | 60 min | 50-60% |
| 5 | Hiring Manager + Bar Raiser | ~75 min | 50-60% |

Compound pass-through works out to roughly 5-10% from R1 to offer (multiplying the midpoints of the reported ranges: 0.55 × 0.65 × 0.60 × 0.55 × 0.55 ≈ 0.065, or about 6.5%). The R1 elimination round is the single biggest filter — treat it like a real interview, not a screen.


Round 1: Elimination / BPS Coding (60 min)

Format

One LeetCode Medium, 60 minutes. You share your screen, code in an editor of your choice, and walk the interviewer through your thinking. Some pods use CoderPad or HackerRank; others let you use your own IDE.

What they're actually testing

This round is mis-named. Candidates assume it tests raw problem-solving. It doesn't — it tests whether you write code that another Uber engineer would accept in a PR. Specifically:

  • Naming. const m = arr.length will cost you. Use rowCount, visitedCells.
  • Single Responsibility. If your solve() function does parsing, validation, DFS, and serialization, you'll be asked to refactor on the spot.
  • Edge cases. Empty input, single element, all-same elements, cycles, overflow. Volunteer these BEFORE coding.
  • Complexity analysis. State it for your brute force, then for your optimized solution. If you skip this step, the interviewer will ask — don't make them ask.
  • Readability over cleverness. A 20-line clear solution beats a 10-line clever one-liner here.
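To make "code another Uber engineer would accept in a PR" concrete, here is one way Number of Islands (LC 200) might look at this bar: descriptive names, an extracted flood-fill helper, and the empty-input edge case handled up front. The names and structure are illustrative, not an official style guide.

```javascript
// Number of Islands (LC 200): count connected groups of '1' cells.
// Written for review-readability, not minimal line count.
function countIslands(grid) {
  if (!grid || grid.length === 0) return 0; // edge case: empty input

  const rowCount = grid.length;
  const colCount = grid[0].length;
  const visited = Array.from({ length: rowCount }, () =>
    new Array(colCount).fill(false)
  );

  // Flood-fill one island starting at (row, col), marking cells visited.
  function sinkIsland(row, col) {
    const outOfBounds =
      row < 0 || row >= rowCount || col < 0 || col >= colCount;
    if (outOfBounds || visited[row][col] || grid[row][col] !== '1') return;

    visited[row][col] = true;
    sinkIsland(row + 1, col);
    sinkIsland(row - 1, col);
    sinkIsland(row, col + 1);
    sinkIsland(row, col - 1);
  }

  let islandCount = 0;
  for (let row = 0; row < rowCount; row++) {
    for (let col = 0; col < colCount; col++) {
      if (grid[row][col] === '1' && !visited[row][col]) {
        islandCount++;
        sinkIsland(row, col);
      }
    }
  }
  return islandCount; // O(rows * cols) time and space
}
```

Note the shape: one public function, one single-purpose helper, complexity stated in a comment. That is the "senior-looking" signal this round is filtering for.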

Recent examples (2024-2025)

  • Alien Dictionary (LC 269) — most-reported R1 problem. Topological sort.
  • Number of Islands (LC 200) — classic DFS/BFS.
  • Longest Increasing Path in a Matrix (LC 329) — DFS + memoization.
  • Largest Plus Sign (LC 764) — reported verbatim Sept 2025.
  • Bus Routes (LC 815) — graph BFS, often shows up here.

Pacing guide

| Minute | Milestone |
|--------|-----------|
| 0-5 | Clarify problem, state assumptions, walk through 1-2 examples |
| 5-10 | Describe brute force, state complexity |
| 10-15 | Describe optimal approach, state complexity |
| 15-25 | Code the optimal solution |
| 25-35 | Dry-run with examples, fix bugs |
| 35-50 | Follow-up (scaling, variant, test harness) |
| 50-60 | Buffer / Q&A |

Target: have the optimal solution coded and bug-free by minute 25. That leaves 35 minutes for follow-ups, which is where strong candidates separate from average ones.

Why candidates fail this round

  1. Jumping to code too fast. No clarification, no examples, no complexity discussion.
  2. Fighting the interviewer. If they hint, take the hint. Don't defend a suboptimal approach.
  3. Not testing. Dry-run on paper/whiteboard before claiming "done."
  4. Over-optimizing prematurely. Get something working, then improve. Don't code Dijkstra when a BFS works.

Round 2: Coding — DSA (60 min)

Format

2-3 DSA problems. The interviewer usually picks one hard anchor problem and 1-2 smaller follow-ups. Follow-ups may be verbal-only — you describe the approach without coding.

What they're testing

  • Algorithmic breadth (can you navigate DP, graphs, heaps, binary search?)
  • Ability to generalize — "now do it with K sources instead of 1"
  • Speed — you need to code faster here than in R1 because there are multiple problems

Common patterns (by frequency)

  1. Graph BFS/DFS — especially multi-source BFS (rotting oranges, bus routes)
  2. Dijkstra variants — 0/1 Dijkstra with deque, weighted grid traversal
  3. DP on strings / arrays — coin change, partition equal subset sum, edit distance
  4. Binary search on answer — min-time-to-meet, capacity-to-ship
  5. Heap / sweep line — meeting rooms II, skyline

Recent examples (2024-2025)

  • 0/1 Dijkstra (LC 2290, Min Obstacle Removal) — deque-based shortest path with 0/1 weights
  • Min time for multiple objects to meet — binary search + interval intersection (custom, not on LC)
  • Zombie Spread / Rotting Oranges (LC 994) — multi-source BFS
  • Minimum cost grid traversal with directional constraints — Dijkstra with direction-aware state
  • Coin Change (LC 322) — DP, often as a warm-up
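Several of these collapse into the same multi-source BFS skeleton. A minimal JS sketch using Rotting Oranges (LC 994) as the anchor; the index-based queue is a deliberate choice, since `Array.shift` is O(n):

```javascript
// Rotting Oranges (LC 994): minutes until no fresh orange remains, or -1.
// Multi-source BFS: seed the queue with ALL rotten cells at minute 0.
function orangesRotting(grid) {
  const rows = grid.length;
  const cols = grid[0].length;
  const queue = [];
  let freshCount = 0;

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] === 2) queue.push([r, c, 0]); // every source at t = 0
      else if (grid[r][c] === 1) freshCount++;
    }
  }

  let minutes = 0;
  let head = 0; // read pointer: avoids O(n) Array.shift
  const deltas = [[1, 0], [-1, 0], [0, 1], [0, -1]];

  while (head < queue.length) {
    const [r, c, t] = queue[head++];
    minutes = Math.max(minutes, t);
    for (const [dr, dc] of deltas) {
      const nr = r + dr, nc = c + dc;
      if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc] === 1) {
        grid[nr][nc] = 2;            // rot it, enqueue for the next minute
        freshCount--;
        queue.push([nr, nc, t + 1]);
      }
    }
  }
  return freshCount === 0 ? minutes : -1; // O(rows * cols)
}
```

The same "seed everything, then BFS once" idea handles Bus Routes and the K-sources generalizations the interviewer tends to ask next.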

The "don't stop at optimal" trap

A common failure mode: you solve the problem with an O(N log N) heap approach, and the interviewer nods. You think you're done. They say, "great, can we do better?" You say "no, I think N log N is optimal." This is wrong in 80% of cases. Uber interviewers almost always have a linear solution in mind. Push yourself to find it.

Example: Meeting Rooms II has an O(N log N) heap solution AND an O(N log N) sweep-line solution, but there's also an O(N) bucket approach if the time range is bounded. Know all three.
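A hedged sketch of that third approach, assuming start and end times are small bounded integers (e.g. minutes in a day); that bound is exactly the assumption that buys O(N + T):

```javascript
// Meeting Rooms II, O(N + T) variant: assumes integer times bounded by
// maxTime. The room count over time is a difference array + prefix sum.
function minMeetingRooms(intervals, maxTime = 24 * 60) {
  const delta = new Array(maxTime + 2).fill(0);
  for (const [start, end] of intervals) {
    delta[start] += 1; // a meeting begins: one more room in use
    delta[end] -= 1;   // end is exclusive: the room frees up here
  }

  let roomsInUse = 0;
  let maxRooms = 0;
  for (const d of delta) {
    roomsInUse += d;
    maxRooms = Math.max(maxRooms, roomsInUse);
  }
  return maxRooms;
}
```

Mentioning the bounded-range caveat unprompted is itself a signal: the heap and sweep-line versions stay correct for arbitrary (e.g. float) timestamps, and this one does not.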

Pacing guide for 2 problems

| Minute | Milestone |
|--------|-----------|
| 0-25 | Problem 1: clarify, brute force, optimize, code, test |
| 25-50 | Problem 2: clarify, approach, code (or verbal if time is tight) |
| 50-60 | Follow-up, Q&A |

Round 3: Depth / Machine Coding (60 min)

Format

A frontend machine-coding problem, usually in vanilla JS or React. You build something end-to-end in 45-50 minutes, then the interviewer adds extensions.

What they're testing

This is the round where L5A expectations diverge most from L4. They're testing:

  • SRP and module design — can you decompose a problem into reusable units?
  • OOP / class design when vanilla JS — clean encapsulation
  • Naming and readability — a senior engineer's code should read like prose
  • Extensibility — when they add a follow-up, does your existing structure accommodate it, or do you have to rewrite?
  • Testability — can you describe how you'd test each unit?

Recent examples (2024-2025)

  • Grid Light Box — grid of cells that activate on click, then deactivate in reverse order on a timer
  • Progress Bar — multi-bar progress with throttle follow-up (max N concurrent fills)
  • Modal with priority — modal queue where higher-priority modals preempt lower ones
  • Rate Limiter decorator — function decorator that enforces N calls per window
  • Batch data utility — collects calls, flushes when size OR timeout threshold is hit
  • Memoize async — memoization that de-duplicates in-flight promises (not just resolved values)
  • Sequential Bind/Unbind — DOM event binding with LIFO cleanup
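As one example of the bar here, a minimal memoize-async sketch: it caches the promise itself rather than the resolved value, so concurrent callers share a single in-flight request. Evicting on failure is a design choice worth stating out loud, not part of the prompt:

```javascript
// memoizeAsync: caches the PROMISE, not just the resolved value, so
// concurrent calls with the same key share one underlying invocation.
// Rejected promises are evicted so a later call can retry.
function memoizeAsync(fn, keyFn = (...args) => JSON.stringify(args)) {
  const cache = new Map(); // key -> Promise

  return function memoized(...args) {
    const key = keyFn(...args);
    if (cache.has(key)) return cache.get(key); // hit: in-flight OR settled

    const promise = Promise.resolve(fn(...args)).catch((err) => {
      cache.delete(key); // do not cache rejections
      throw err;
    });
    cache.set(key, promise);
    return promise;
  };
}
```

Usage: `const memoFetch = memoizeAsync(fetchUser); Promise.all([memoFetch(1), memoFetch(1)])` triggers exactly one call to `fetchUser`. That de-duplication of in-flight work is the part interviewers probe for.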

Winning structure

  1. Skeleton first (5 min). Set up files, stub the interface, agree on the API shape.
  2. Core logic (20 min). Build the happy path end-to-end. Don't polish — ship.
  3. Edge cases (10 min). Now handle nulls, empties, cancellation, concurrency.
  4. Extract utilities (5 min). Pull out anything reusable (throttle, queue, EventEmitter).
  5. Extensions (15 min). Interviewer-driven follow-ups.

Vanilla JS vs React

If the interviewer lets you choose, pick vanilla JS for any problem involving timers, event listeners, or queues. It shows more depth and makes extension easier. Pick React for anything with complex UI state or child component coordination.

Why candidates fail

  1. Starting with framework boilerplate. 10 minutes setting up a Vite project leaves you 50 minutes to build. You should start coding in 2 minutes.
  2. Over-engineering. Don't write a state machine for a two-state toggle.
  3. Ignoring cleanup. Memory leaks, dangling listeners, uncancelled timers.
  4. Not writing functions. One giant init() is an L3 answer. Extract bindEvents, unbindEvents, render, update.
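A sketch of what "extract bindEvents/unbindEvents" can look like, written against a generic EventTarget-like interface so it is testable outside a browser. `createBinder` is an illustrative name, not a prescribed API:

```javascript
// Event binding with LIFO cleanup: every bind pushes an undo function,
// unbindAll pops them in reverse order. Works with anything exposing
// addEventListener/removeEventListener (DOM node, AbortSignal shim, etc.).
function createBinder() {
  const cleanups = [];

  function bind(target, type, handler, options) {
    target.addEventListener(type, handler, options);
    cleanups.push(() => target.removeEventListener(type, handler, options));
  }

  function unbindAll() {
    while (cleanups.length > 0) {
      cleanups.pop()(); // LIFO: last bound, first unbound
    }
  }

  return { bind, unbindAll };
}
```

The same push-an-undo-function pattern covers timers (`clearTimeout`) and subscriptions, which is how you avoid failure mode #3 above.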

Round 4: Frontend System Design (60 min)

Format

An open-ended design prompt. You drive: requirements, component tree, API contracts, state management, data flow, perf, a11y. Interviewer interjects with follow-ups roughly every 10 minutes.

This is NOT a backend distributed-systems interview. You will not draw Kafka, Redis, or sharding diagrams. You might draw a single backend box labeled "API" and spend the rest of your time inside the browser.

Recent examples (2024-2025)

  • Collaborative Calendar — multi-user calendar with real-time updates (Google Calendar lite)
  • Config-driven Widget Builder — drag-drop widgets rendered from a JSON schema
  • Autocomplete at scale — typeahead with 10M+ entries, debouncing, race conditions, a11y
  • Real-time Dashboard — streaming metrics, WebSocket, backpressure, chart performance
  • Uber Rider App frontend — map, ETA, driver tracking, ride state machine
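The autocomplete prompt almost always probes race conditions: a slow response for "ab" must not overwrite a faster response for "abc". One minimal guard is a monotonically increasing request id (AbortController plus fetch cancellation is the other common answer). `fetchSuggestions` and `render` here are hypothetical placeholders for your debounced API call and your UI update:

```javascript
// Typeahead race-condition guard: only the LATEST request may render.
function createTypeahead(fetchSuggestions, render) {
  let latestRequestId = 0;

  return async function onInput(query) {
    const requestId = ++latestRequestId;
    const suggestions = await fetchSuggestions(query);
    // A newer keystroke fired while we were waiting: drop this response.
    if (requestId !== latestRequestId) return;
    render(suggestions);
  };
}
```

Being able to sketch this in 2 minutes, then contrast it with cancellation (saves bandwidth, needs backend/`AbortSignal` support), is a strong-signal answer for the "race conditions" follow-up.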

The evaluation rubric (what separates offers from no-hires)

| Signal | Weak (L4) | Strong (L5A) |
|--------|-----------|--------------|
| Requirements clarification | "Got it, let me start designing" | Spends 8-10 min clarifying scope, users, scale, constraints |
| Capacity estimation | Skips it | "10M DAU, 5 autocomplete queries each, 50M queries/day, peak 5K QPS" |
| Component tree | One-liner | Hierarchical, with props/state annotations |
| API design | "We'll call the backend" | REST vs GraphQL vs WebSocket tradeoff, pagination, caching strategy |
| State management | "Redux" | Chooses based on problem — local vs context vs Redux vs Zustand vs RTK Query, justifies |
| Performance | "We'll memoize" | Concrete: virtualization for N>1000 rows, code splitting, prefetch, service worker, CDN |
| Accessibility | Forgets | Keyboard nav, ARIA roles, screen reader flow, color contrast |
| Real-time | "WebSocket" | Reconnect strategy, backpressure, fallback to polling, conflict resolution (CRDT/OT if collaborative) |
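For the real-time signal, "reconnect strategy" usually means capped exponential backoff with jitter. A sketch of just the delay schedule, kept as a pure function so it is easy to unit-test; the constants are illustrative, and the actual loop would call this between `socket.onclose` and the next connection attempt:

```javascript
// Reconnect delay: capped exponential backoff with full jitter.
// attempt 0, 1, 2, ... -> a random wait in [0, min(base * 2^attempt, max)).
function reconnectDelayMs(
  attempt,
  { baseMs = 500, maxMs = 30000, random = Math.random } = {}
) {
  const exponential = baseMs * 2 ** attempt;   // 500, 1000, 2000, ...
  const capped = Math.min(exponential, maxMs); // never wait longer than maxMs
  return Math.floor(random() * capped);        // full jitter spreads clients out
}
```

The jitter is the part worth calling out in the interview: without it, every client that lost the same connection retries in lockstep and hammers the server (the thundering-herd problem).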

The arc you should drive

  1. Clarify (8-10 min) — users, scale, platforms, must-haves, nice-to-haves
  2. High-level architecture (5 min) — boxes: client, API, CDN, cache
  3. Component tree (10 min) — React tree with props/state
  4. API contracts (8 min) — endpoints, payloads, auth
  5. State management (5 min) — where does each piece of state live?
  6. Performance (10 min) — rendering, network, bundle
  7. Accessibility + edge cases (5 min)
  8. Tradeoffs / what I'd do differently (5 min)

Why candidates fail

  1. Skipping clarification. Diving into a component tree in minute 2 signals junior behavior.
  2. Designing the backend. You're not a backend engineer in this interview.
  3. No capacity estimation. "Scale" is a signal word — you need numbers.
  4. Ignoring a11y. Uber's riders include blind users. A frontend senior who forgets a11y is an auto-no.

Round 5: Hiring Manager (~75 min)

Format

Usually 45 minutes of past-project deep dive followed by 30 minutes of pure behavioral. The Bar Raiser (silent third-party) may fold into this round as a second interviewer or as part of the debrief calibration.

Part A — Past Project Deep Dive (45 min)

This acts as a "reverse system design." You pick a project (usually one from your resume), and the HM will grill you on:

  • Why did you build this? What was the user/business problem?
  • What architecture did you choose? Why not the alternatives?
  • What did you own vs what did teammates own?
  • What went wrong, and how did you recover?
  • What would you design differently today?

This is where project scope alignment matters most. At L5A, the "right" scope is:

  • Duration: 1-3 quarters (so 3-9 months of active work)
  • Team size: 2-4 engineers, possibly with cross-team collaboration (design, PM, backend, infra)
  • Scope: a feature area or subsystem, not a whole product, not a weekend hack
  • Impact: quantifiable (latency, revenue, DAU, engagement, cost)

If you pick a 2-year company-transforming initiative, the HM will say "that sounds like you were one of ten people — what did YOU actually do?" If you pick a 2-week ship, you'll fail the leveling bar.

Part B — Behavioral (30 min)

Typically 3-5 STAR-style questions drawn from Uber values. See 06-behavioral-questions for the full bank.

Uber values (official):

  • Go Get It — ownership, hustle
  • See Every Side — perspective-taking, stakeholder management
  • Build With Heart — empathy, user focus

Secondary: Trip Obsession, Stand for Safety, Do the Right Thing, Great Minds Think Unlike.

The Bar Raiser

The Bar Raiser is a calibrated interviewer from a different team. Their job is to answer one question during the debrief: "Would this hire raise the bar for the role?" They have effective veto power. You often won't know which interviewer in your loop is the Bar Raiser. Treat every interviewer like they have the veto — because any of them might.

Red flags that trigger rejections here

  1. Defensiveness. If the HM pushes on "why didn't you try X?" and you get prickly, you're done.
  2. Generic rehearsed answers. "I'm a collaborative team player who leads by example" triggers an eye-roll.
  3. Badmouthing past employers or teammates. Even once. Frame everything as "we had different priorities and I learned X."
  4. Not owning the failure. If you get a "tell me about a failure" question and pin it on a manager, a teammate, or circumstances, you failed the question before you started.
  5. Vague metrics. "We improved performance significantly" dies. "We reduced p95 load time from 3.2s to 1.1s, which lifted conversion 4.7% and saved ~$400K in annualized infra costs" passes.

Pre-Interview Prep Checklist

Two weeks out

  • [ ] Solve 15-20 Uber-tagged LeetCode problems from the DSA problems list. Time yourself at 25 min.
  • [ ] Build 3-4 machine-coding problems end-to-end from scratch (Grid Light Box, Progress Bar, Batch utility, Memoize async).
  • [ ] Draft 6-8 STAR stories from your Pixis experience, mapped to Uber values.
  • [ ] Pick your HM-round project and write a one-page doc: problem, architecture, tradeoffs, metrics, what you'd change.

One week out

  • [ ] Mock 1: DSA round with a peer. Record yourself.
  • [ ] Mock 2: Frontend system design (pick autocomplete or collaborative calendar).
  • [ ] Mock 3: HM behavioral with a peer who knows Uber's values.
  • [ ] Review React internals (reconciliation, concurrent mode, Suspense) and browser rendering pipeline.

Day before

  • [ ] Re-read your HM project doc — know numbers cold.
  • [ ] Review your STAR stories — trim any over 2 minutes long.
  • [ ] Set up your dev environment: IDE, second monitor, Zoom, a notepad.
  • [ ] Eat a real meal. Hydrate. Sleep.

Day of

  • [ ] Log in 10 min early. Test audio and screen share.
  • [ ] Have water at your desk.
  • [ ] Have a scratch doc open for notes during each round.
  • [ ] Between rounds: walk for 5 min. Do not check email.

Common Rejection Reasons (synthesized from debriefs)

| Rejection reason | Round most often | How to avoid |
|------------------|------------------|--------------|
| Couldn't solve the medium in time | R1 | Practice with a 25-min timer, not open-ended |
| Code was "junior-looking" | R1, R3 | Focus on naming, SRP, extract functions |
| Didn't find the optimal | R2 | Always ask "can we do better?" yourself |
| No structure in machine coding | R3 | Start with skeleton + interface BEFORE logic |
| Designed the backend | R4 | Stay in the browser; only draw API boxes |
| No a11y consideration | R4 | Always bring up keyboard nav + ARIA |
| Project scope too small or too large | R5 | Pick a 1-3 quarter project with 2-4 engineers |
| Defensive or evasive in deep dive | R5 | Practice with a peer who pushes back hard |
| Generic behavioral answers | R5 | Quantify everything; use specific names/dates |

Timeline Expectations

| Stage | Typical latency | Worst-case reported |
|-------|-----------------|---------------------|
| Recruiter screen to R1 | 1-2 weeks | 4 weeks |
| R1 result | 3-7 days | 2 weeks |
| R1 to onsite scheduling | 1-2 weeks | 3 weeks |
| Onsite to debrief | 3-10 days | 3 weeks |
| Offer or rejection | 1-5 days post-debrief | 2 weeks |

Total recruiter latency reported in 2024-2025: 2-6 weeks end-to-end. Some loops stall completely mid-funnel — Uber pauses hiring around fiscal planning (Oct-Dec) and after re-orgs. If your recruiter goes silent for more than 10 days, send a polite check-in.

What to Ask Your Interviewers (reverse questions)

Good reverse questions signal maturity. Avoid "what's the work-life balance" — it signals hesitation.

  • To engineers: "What's the most painful piece of tech debt on your team, and what's blocking you from paying it down?"
  • To HM: "How do you measure success for a new L5A hire at 6 months?"
  • To HM: "What's a project you'd want this hire to drive in their first two quarters?"
  • To HM: "How does the team handle scope changes mid-quarter?"
  • To Bar Raiser (if identified): "What's the biggest culture shift the team has gone through in the last year?"

Cross-References

| For | See |
|-----|-----|
| DSA problem solutions | Uber Prep — DSA Problems |
| Backend LLD problems | Uber Prep — LLD Problems |
| Backend system design | Uber Prep — HLD |
| Backend fundamentals | 05-backend-fundamentals |
| Machine coding (frontend alt) | 07-frontend-machine-coding-problems |
| Frontend system design (alt) | 08-frontend-system-design |
| JS/React fundamentals (alt) | 09-frontend-js-react-fundamentals |
| Behavioral STAR bank | 06-behavioral-questions |

Frontend interview preparation reference.