Uber L5A Frontend — Interview Loop Breakdown
A round-by-round field guide for Uber's Senior SE (L5A) Frontend loop. Everything here is distilled from recent (2024-2025) candidate debriefs, recruiter calls, and Uber Eng's public blog posts on how they hire.
Leveling Context (read this first)
Uber's engineering ladder was reshuffled in 2022. What used to be "Senior 2" is now called Staff. The practical implication for you as a candidate:
| Level | Old name | Rough scope | Peer companies |
|---|---|---|---|
| L4 | SE II | 1 engineer, a quarter | Google L4, Meta E4, Amazon SDE II |
| L5A | Senior | 2-4 engineers across 2-3 teams, 1-3 quarters | Google L5, Meta E5, Amazon SDE II-III |
| L5B | Senior 2 / Staff-adjacent | Staff-lite, cross-team lead | Google L5 senior end |
| L6 | Staff | Multi-quarter, org-level | Google L6, Meta E6 |
Why this matters for the HM round: if you pitch a project that sounds like a weekend hack ("I built a Chrome extension in a hackathon"), you'll be mis-leveled down. If you pitch a two-year saga that spans four orgs, you'll be told "that's an L6 story, not L5A." The sweet spot is one clear deliverable that took you 1-3 quarters, had 2-4 engineers collaborating, and shipped with measurable impact.
Frontend track specifics: the Frontend L5A loop diverges from the backend loop after R1. You will NOT be asked to design a distributed database or a rate-limiter service. You WILL be asked to design a UI system (collaborative calendar, autocomplete-at-scale, config-driven widgets). DSA is often allowed in JS/TS. Machine coding is React or vanilla JS.
The Loop at a Glance
| Round | Name | Duration | Pass rate (reported) |
|---|---|---|---|
| 1 | Elimination / BPS Coding | 60 min | 50-60% |
| 2 | Coding — DSA | 60 min | 60-70% |
| 3 | Depth / Machine Coding | 60 min | 55-65% |
| 4 | Frontend System Design | 60 min | 50-60% |
| 5 | Hiring Manager + Bar Raiser | ~75 min | 50-60% |
Compound pass-through works out to roughly 5-10% from R1 to offer. The R1 elimination round is the single biggest filter — treat it like a real interview, not a screen.
Round 1: Elimination / BPS Coding (60 min)
Format
One LeetCode Medium, 60 minutes. You share your screen, code in an editor of your choice, and walk the interviewer through your thinking. Some pods use CoderPad or HackerRank; others let you use your own IDE.
What they're actually testing
This round is mis-named. Candidates assume it tests raw problem-solving. It doesn't — it tests whether you write code that another Uber engineer would accept in a PR. Specifically:
- Naming. `const m = arr.length` will cost you. Use `rowCount`, `visitedCells`.
- Single Responsibility. If your `solve()` function does parsing, validation, DFS, and serialization, you'll be asked to refactor on the spot.
- Edge cases. Empty input, single element, all-same elements, cycles, overflow. Volunteer these BEFORE coding.
- Complexity analysis. State it for your brute force, then for your optimized solution. If you skip this step, the interviewer will ask — don't make them ask.
- Readability over cleverness. A 20-line clear solution beats a 10-line clever one-liner here.
Recent examples (2024-2025)
- Alien Dictionary (LC 269) — most-reported R1 problem. Topological sort.
- Number of Islands (LC 200) — classic DFS/BFS (see the sketch after this list).
- Longest Increasing Path in a Matrix (LC 329) — DFS + memoization.
- Largest Plus Sign (LC 764) — reported verbatim Sept 2025.
- Bus Routes (LC 815) — graph BFS, often shows up here.
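To make the readability bar concrete, here is a minimal TypeScript sketch of Number of Islands (LC 200) written the way this round rewards: descriptive names, a single-purpose flood-fill helper, complexity stated in place. It is one reasonable shape, not the only acceptable one.

```ts
// LC 200: count connected components of '1' cells in a rectangular grid.
function numIslands(grid: string[][]): number {
  const rowCount = grid.length;
  const colCount = rowCount > 0 ? grid[0].length : 0;

  // Flood-fill one island, marking visited land as water in place.
  function sinkIsland(row: number, col: number): void {
    const outOfBounds = row < 0 || row >= rowCount || col < 0 || col >= colCount;
    if (outOfBounds || grid[row][col] !== '1') return;
    grid[row][col] = '0';
    sinkIsland(row + 1, col);
    sinkIsland(row - 1, col);
    sinkIsland(row, col + 1);
    sinkIsland(row, col - 1);
  }

  let islandCount = 0;
  for (let row = 0; row < rowCount; row++) {
    for (let col = 0; col < colCount; col++) {
      if (grid[row][col] === '1') {
        islandCount++;
        sinkIsland(row, col);
      }
    }
  }
  return islandCount; // O(rows * cols) time; recursion depth is O(rows * cols) worst case
}
```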
Pacing guide
| Minute | Milestone |
|---|---|
| 0-5 | Clarify problem, state assumptions, walk through 1-2 examples |
| 5-10 | Describe brute force, state complexity |
| 10-15 | Describe optimal approach, state complexity |
| 15-25 | Code the optimal solution |
| 25-35 | Dry-run with examples, fix bugs |
| 35-50 | Follow-up (scaling, variant, test harness) |
| 50-60 | Buffer / Q&A |
Target: have the optimal solution coded and bug-free by minute 25. That leaves 35 minutes for follow-ups, which is where strong candidates separate from average ones.
Why candidates fail this round
- Jumping to code too fast. No clarification, no examples, no complexity discussion.
- Fighting the interviewer. If they hint, take the hint. Don't defend a suboptimal approach.
- Not testing. Dry-run on paper/whiteboard before claiming "done."
- Over-optimizing prematurely. Get something working, then improve. Don't code Dijkstra when a BFS works.
Round 2: Coding — DSA (60 min)
Format
2-3 DSA problems. The interviewer usually picks one hard anchor problem and 1-2 smaller follow-ups. Follow-ups may be verbal-only — you describe the approach without coding.
What they're testing
- Algorithmic breadth (can you navigate DP, graphs, heaps, binary search?)
- Ability to generalize — "now do it with K sources instead of 1"
- Speed — you need to code faster here than in R1 because there are multiple problems
Common patterns (by frequency)
- Graph BFS/DFS — especially multi-source BFS (rotting oranges, bus routes)
- Dijkstra variants — 0/1 Dijkstra with deque, weighted grid traversal
- DP on strings / arrays — coin change, partition equal subset sum, edit distance
- Binary search on answer — min-time-to-meet, capacity-to-ship
- Heap / sweep line — meeting rooms II, skyline
Recent examples (2024-2025)
- 0/1 Dijkstra (LC 2290, Min Obstacle Removal) — deque-based shortest path with 0/1 weights
- Min time for multiple objects to meet — binary search + interval intersection (custom, not on LC)
- Zombie Spread / Rotting Oranges (LC 994) — multi-source BFS (sketched after this list)
- Minimum cost grid traversal with directional constraints — Dijkstra with direction-aware state
- Coin Change (LC 322) — DP, often as a warm-up
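As an illustration of the multi-source BFS pattern, a minimal sketch of Rotting Oranges (LC 994): seed the queue with every rotten cell up front, then expand level by level, where each BFS level is one minute.

```ts
// LC 994: grid values are 0 = empty, 1 = fresh, 2 = rotten.
// Returns minutes until every fresh orange rots, or -1 if some never do.
function orangesRotting(grid: number[][]): number {
  const rows = grid.length;
  const cols = grid[0].length;
  const queue: Array<[number, number]> = [];
  let freshCount = 0;

  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      if (grid[r][c] === 2) queue.push([r, c]);      // every rotten orange is a BFS source
      else if (grid[r][c] === 1) freshCount++;
    }
  }

  const directions = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  let minutes = 0;
  let head = 0;                                      // index pointer avoids O(n) shift()

  while (head < queue.length && freshCount > 0) {
    const levelEnd = queue.length;                   // one BFS level = one minute
    while (head < levelEnd) {
      const [r, c] = queue[head++];
      for (const [dr, dc] of directions) {
        const nr = r + dr, nc = c + dc;
        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc] === 1) {
          grid[nr][nc] = 2;
          freshCount--;
          queue.push([nr, nc]);
        }
      }
    }
    minutes++;
  }
  return freshCount === 0 ? minutes : -1;            // O(rows * cols) time and space
}
```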
The "don't stop at optimal" trap
A common failure mode: you solve the problem with an O(N log N) heap approach, and the interviewer nods. You think you're done. They say, "great, can we do better?" You say "no, I think N log N is optimal." This is wrong in 80% of cases. Uber interviewers almost always have a linear solution in mind. Push yourself to find it.
Example: Meeting Rooms II has an O(N log N) heap solution AND an O(N log N) sweep-line solution, but there's also an O(N) bucket approach if the time range is bounded. Know all three.
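A sketch of the third option, assuming start/end times fall in a bounded range [0, maxTime] (say, minutes in a day): build a difference array with +1 at each start and -1 at each end, then one prefix-sum pass gives the peak number of concurrent meetings.

```ts
// Meeting Rooms II, O(N + T) difference-array variant for bounded times.
// Ends are treated as exclusive: a room frees up at `end`.
function minMeetingRooms(intervals: Array<[number, number]>, maxTime: number): number {
  const delta = new Array<number>(maxTime + 2).fill(0);
  for (const [start, end] of intervals) {
    delta[start] += 1;
    delta[end] -= 1;
  }
  let rooms = 0;
  let maxRooms = 0;
  for (let t = 0; t <= maxTime; t++) {
    rooms += delta[t];                       // running count of concurrent meetings at time t
    maxRooms = Math.max(maxRooms, rooms);
  }
  return maxRooms;
}
```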
Pacing guide for 2 problems
| Minute | Milestone |
|---|---|
| 0-25 | Problem 1: clarify, brute force, optimize, code, test |
| 25-50 | Problem 2: clarify, approach, code (or verbal if time is tight) |
| 50-60 | Follow-up, Q&A |
Round 3: Depth / Machine Coding (60 min)
Format
A frontend machine-coding problem, usually in vanilla JS or React. You build something end-to-end in 45-50 minutes, then the interviewer adds extensions.
What they're testing
This is the round where L5A expectations diverge most from L4. They're testing:
- SRP and module design — can you decompose a problem into reusable units?
- OOP / class design when vanilla JS — clean encapsulation
- Naming and readability — a senior engineer's code should read like prose
- Extensibility — when they add a follow-up, does your existing structure accommodate it, or do you have to rewrite?
- Testability — can you describe how you'd test each unit?
Recent examples (2024-2025)
- Grid Light Box — grid of cells that activate on click, then deactivate in reverse order on a timer
- Progress Bar — multi-bar progress with throttle follow-up (max N concurrent fills)
- Modal with priority — modal queue where higher-priority modals preempt lower ones
- Rate Limiter decorator — function decorator that enforces N calls per window
- Batch data utility — collects calls, flushes when size OR timeout threshold is hit
- Memoize async — memoization that de-duplicates in-flight promises (not just resolved values); see the sketch after this list
- Sequential Bind/Unbind — DOM event binding with LIFO cleanup
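A minimal sketch of the memoize-async idea, assuming string-serializable arguments: cache the promise itself so concurrent callers share the in-flight request, and evict on rejection so failures are not cached forever. Eviction policy, TTL, and cache size limits are typical follow-up territory.

```ts
// Memoize an async function by key; concurrent calls with the same key
// share one underlying promise instead of firing duplicate requests.
function memoizeAsync<Args extends unknown[], Result>(
  fn: (...args: Args) => Promise<Result>,
  keyFor: (...args: Args) => string = (...args) => JSON.stringify(args)
): (...args: Args) => Promise<Result> {
  const cache = new Map<string, Promise<Result>>();

  return (...args: Args): Promise<Result> => {
    const key = keyFor(...args);
    const cached = cache.get(key);
    if (cached) return cached;               // hit: resolved OR still in flight

    const pending = fn(...args).catch((err) => {
      cache.delete(key);                     // don't cache failures
      throw err;
    });
    cache.set(key, pending);
    return pending;
  };
}
```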
Winning structure
- Skeleton first (5 min). Set up files, stub the interface, agree on the API shape.
- Core logic (20 min). Build the happy path end-to-end. Don't polish — ship.
- Edge cases (10 min). Now handle nulls, empties, cancellation, concurrency.
- Extract utilities (5 min). Pull out anything reusable (throttle, queue, EventEmitter); a throttle sketch follows this list.
- Extensions (15 min). Interviewer-driven follow-ups.
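As one example of step 4, a small throttle worth extracting and reusing. The trailing-call behavior shown here (the last burst of calls still fires once the window closes) is one of several reasonable variants; agree on the semantics with the interviewer before coding.

```ts
// Invoke fn at most once per waitMs, with a trailing call using the latest args.
function throttle<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): (...args: Args) => void {
  let lastCall = 0;
  let pendingTimer: number | undefined;
  let pendingArgs: Args | undefined;

  return (...args: Args): void => {
    const now = Date.now();
    if (now - lastCall >= waitMs) {
      lastCall = now;
      fn(...args);                            // leading call: window is open
      return;
    }
    pendingArgs = args;                       // remember the most recent args
    if (pendingTimer === undefined) {
      pendingTimer = window.setTimeout(() => {
        pendingTimer = undefined;
        lastCall = Date.now();
        fn(...(pendingArgs as Args));         // trailing call when the window closes
      }, waitMs - (now - lastCall));
    }
  };
}
```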
Vanilla JS vs React
If the interviewer lets you choose, pick vanilla JS for any problem involving timers, event listeners, or queues. It shows more depth and makes extension easier. Pick React for anything with complex UI state or child component coordination.
Why candidates fail
- Starting with framework boilerplate. 10 minutes setting up a Vite project leaves you 50 minutes to build. You should start coding in 2 minutes.
- Over-engineering. Don't write a state machine for a two-state toggle.
- Ignoring cleanup. Memory leaks, dangling listeners, uncancelled timers.
- Not writing functions. One giant `init()` is an L3 answer. Extract `bindEvents`, `unbindEvents`, `render`, `update`; a minimal skeleton follows this list.
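A hypothetical skeleton (rendering and the reverse-order deactivation logic elided) showing the decomposition and cleanup discipline this round looks for: small named methods, symmetric bind/unbind, and a `destroy()` that leaves no dangling listeners or timers.

```ts
// Sketch of a Grid Light Box class shell; actual cell behavior is omitted.
class GridLightBox {
  private root: HTMLElement;
  private timers: number[] = [];
  // Arrow property keeps a stable reference so removeEventListener works.
  private handleClick = (event: MouseEvent): void => {
    const cell = (event.target as HTMLElement).closest('[data-cell]'); // data-cell set by render()
    if (cell) this.activateCell(cell as HTMLElement);
  };

  constructor(root: HTMLElement) {
    this.root = root;
  }

  init(): void {
    this.render();
    this.bindEvents();
  }

  private render(): void {
    // build grid cells under this.root (omitted for brevity)
  }

  private bindEvents(): void {
    this.root.addEventListener('click', this.handleClick);
  }

  private unbindEvents(): void {
    this.root.removeEventListener('click', this.handleClick);
  }

  private activateCell(cell: HTMLElement): void {
    cell.classList.add('active');
    this.timers.push(window.setTimeout(() => cell.classList.remove('active'), 1000));
  }

  destroy(): void {
    this.unbindEvents();
    this.timers.forEach(clearTimeout);       // no uncancelled timers
    this.timers = [];
  }
}
```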
Round 4: Frontend System Design (60 min)
Format
An open-ended design prompt. You drive: requirements, component tree, API contracts, state management, data flow, perf, a11y. Interviewer interjects with follow-ups roughly every 10 minutes.
This is NOT a backend distributed-systems interview. You will not draw Kafka, Redis, or sharding diagrams. You might draw a single backend box labeled "API" and spend the rest of your time inside the browser.
Recent examples (2024-2025)
- Collaborative Calendar — multi-user calendar with real-time updates (Google Calendar lite)
- Config-driven Widget Builder — drag-drop widgets rendered from a JSON schema
- Autocomplete at scale — typeahead with 10M+ entries, debouncing, race conditions, a11y; see the debounce sketch after this list
- Real-time Dashboard — streaming metrics, WebSocket, backpressure, chart performance
- Uber Rider App frontend — map, ETA, driver tracking, ride state machine
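For the autocomplete prompt, the race-condition discussion usually comes down to a pattern like the sketch below: debounce keystrokes and abort the previous request so a slow, stale response can never overwrite a newer query's results. The endpoint and `fetchSuggestions` helper are hypothetical.

```ts
// Debounced typeahead input handler with stale-request cancellation.
function createTypeahead(onResults: (items: string[]) => void, delayMs = 200) {
  let debounceTimer: number | undefined;
  let controller: AbortController | undefined;

  async function fetchSuggestions(query: string, signal: AbortSignal): Promise<string[]> {
    const res = await fetch(`/api/suggest?q=${encodeURIComponent(query)}`, { signal });
    return res.json();
  }

  return function onInput(query: string): void {
    window.clearTimeout(debounceTimer);               // restart the debounce window
    debounceTimer = window.setTimeout(async () => {
      controller?.abort();                            // cancel the in-flight request
      controller = new AbortController();
      try {
        onResults(await fetchSuggestions(query, controller.signal));
      } catch (err) {
        if ((err as Error).name !== 'AbortError') throw err; // aborts are expected
      }
    }, delayMs);
  };
}
```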
The evaluation rubric (what separates offers from no-hires)
| Signal | Weak (L4) | Strong (L5A) |
|---|---|---|
| Requirements clarification | "Got it, let me start designing" | Spends 8-10 min clarifying scope, users, scale, constraints |
| Capacity estimation | Skips it | "10M DAU, 5 autocomplete queries each, 50M queries/day, peak 5K QPS" |
| Component tree | One-liner | Hierarchical, with props/state annotations |
| API design | "We'll call the backend" | REST vs GraphQL vs WebSocket tradeoff, pagination, caching strategy |
| State management | "Redux" | Chooses based on problem — local vs context vs Redux vs Zustand vs RTK Query, justifies |
| Performance | "We'll memoize" | Concrete: virtualization for N>1000 rows, code splitting, prefetch, service worker, CDN |
| Accessibility | Forgets | Keyboard nav, ARIA roles, screen reader flow, color contrast |
| Real-time | "WebSocket" | Reconnect strategy, backpressure, fallback to polling, conflict resolution (CRDT/OT if collaborative) |
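For the real-time row above, "reconnect strategy" lands better when you can say something concrete. A hedged sketch of one common approach, with illustrative URLs and thresholds (not Uber's): jittered exponential backoff with a cap, then a fallback to polling after repeated failures.

```ts
// WebSocket connection with jittered exponential backoff and a polling fallback.
function connectWithBackoff(url: string, onMessage: (data: string) => void): void {
  let attempt = 0;
  const maxAttempts = 5;

  function open(): void {
    const socket = new WebSocket(url);
    socket.onopen = () => { attempt = 0; };          // reset backoff on a healthy connection
    socket.onmessage = (event) => onMessage(event.data);
    socket.onclose = () => {
      attempt++;
      if (attempt > maxAttempts) {
        startPollingFallback();                      // degrade gracefully
        return;
      }
      const delay = Math.min(30_000, 1_000 * 2 ** attempt) + Math.random() * 500;
      setTimeout(open, delay);                       // jittered exponential backoff
    };
  }

  function startPollingFallback(): void {
    // Hypothetical HTTP polling endpoint derived from the WebSocket URL.
    setInterval(async () => {
      const res = await fetch(url.replace(/^ws/, 'http') + '/poll');
      onMessage(await res.text());
    }, 5_000);
  }

  open();
}
```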
The arc you should drive
- Clarify (8-10 min) — users, scale, platforms, must-haves, nice-to-haves
- High-level architecture (5 min) — boxes: client, API, CDN, cache
- Component tree (10 min) — React tree with props/state
- API contracts (8 min) — endpoints, payloads, auth
- State management (5 min) — where does each piece of state live?
- Performance (10 min) — rendering, network, bundle
- Accessibility + edge cases (5 min)
- Tradeoffs / what I'd do differently (5 min)
Why candidates fail
- Skipping clarification. Diving into a component tree in minute 2 signals junior behavior.
- Designing the backend. You're not a backend engineer in this interview.
- No capacity estimation. "Scale" is a signal word — you need numbers (see the worked example after this list).
- Ignoring a11y. Uber's riders include blind users. A frontend senior who forgets a11y is an auto-no.
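Worked example with illustrative numbers (consistent with the rubric row above, not Uber-confirmed figures): 10M DAU × 5 autocomplete queries each = 50M queries/day; 50M / 86,400 seconds ≈ 580 QPS average; assuming peak traffic runs 5-10× average, that is roughly 3-6K QPS at peak, the kind of number that justifies a prefix cache, client-side caching, or serving suggestions from the CDN edge.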
Round 5: Hiring Manager (~75 min)
Format
Usually 45 minutes of past-project deep dive followed by 30 minutes of pure behavioral. The Bar Raiser (silent third-party) may fold into this round as a second interviewer or as part of the debrief calibration.
Part A — Past Project Deep Dive (45 min)
This acts as a "reverse system design." You pick a project (usually one from your resume), and the HM will grill you on:
- Why did you build this? What was the user/business problem?
- What architecture did you choose? Why not the alternatives?
- What did you own vs what did teammates own?
- What went wrong, and how did you recover?
- What would you design differently today?
This is where project scope alignment matters most. At L5A, the "right" scope is:
- Duration: 1-3 quarters (so 3-9 months of active work)
- Team size: 2-4 engineers, possibly with cross-team collaboration (design, PM, backend, infra)
- Scope: a feature area or subsystem, not a whole product, not a weekend hack
- Impact: quantifiable (latency, revenue, DAU, engagement, cost)
If you pick a 2-year company-transforming initiative, the HM will say "that sounds like you were one of ten people — what did YOU actually do?" If you pick a 2-week ship, you'll fail the leveling bar.
Part B — Behavioral (30 min)
Typically 3-5 STAR-style questions drawn from Uber values. See 06-behavioral-questions for the full bank.
Uber values (official):
- Go Get It — ownership, hustle
- See Every Side — perspective-taking, stakeholder management
- Build With Heart — empathy, user focus
Secondary: Trip Obsessed, Stand for Safety, Do the Right Thing, Great Minds Don't Think Alike.
The Bar Raiser
The Bar Raiser is a calibrated interviewer from a different team. Their job is to answer one question during the debrief: "Would this hire raise the bar for the role?" They have effective veto power. You often won't know which interviewer in your loop is the Bar Raiser. Treat every interviewer like they have the veto — because any of them might.
Red flags that trigger rejections here
- Defensiveness. If the HM pushes on "why didn't you try X?" and you get prickly, you're done.
- Generic rehearsed answers. "I'm a collaborative team player who leads by example" triggers an eye-roll.
- Badmouthing past employers or teammates. Even once. Frame everything as "we had different priorities and I learned X."
- Not owning the failure. If you get a "tell me about a failure" question and pin it on a manager, a teammate, or circumstances, you failed the question before you started.
- Vague metrics. "We improved performance significantly" dies. "We reduced p95 load time from 3.2s to 1.1s, which lifted conversion 4.7% and saved ~$400K in annualized infra costs" passes.
Pre-Interview Prep Checklist
Two weeks out
- [ ] Solve 15-20 Uber-tagged LeetCode problems from the DSA problems list. Time yourself at 25 min.
- [ ] Build 3-4 machine-coding problems end-to-end from scratch (Grid Light Box, Progress Bar, Batch utility, Memoize async).
- [ ] Draft 6-8 STAR stories from your Pixis experience, mapped to Uber values.
- [ ] Pick your HM-round project and write a one-page doc: problem, architecture, tradeoffs, metrics, what you'd change.
One week out
- [ ] Mock 1: DSA round with a peer. Record yourself.
- [ ] Mock 2: Frontend system design (pick autocomplete or collaborative calendar).
- [ ] Mock 3: HM behavioral with a peer who knows Uber's values.
- [ ] Review React internals (reconciliation, concurrent mode, Suspense) and browser rendering pipeline.
Day before
- [ ] Re-read your HM project doc — know numbers cold.
- [ ] Review your STAR stories — trim any over 2 minutes long.
- [ ] Set up your dev environment: IDE, second monitor, Zoom, a notepad.
- [ ] Eat a real meal. Hydrate. Sleep.
Day of
- [ ] Log in 10 min early. Test audio and screen share.
- [ ] Have water at your desk.
- [ ] Have a scratch doc open for notes during each round.
- [ ] Between rounds: walk for 5 min. Do not check email.
Common Rejection Reasons (synthesized from debriefs)
| Rejection reason | Round most often | How to avoid |
|---|---|---|
| Couldn't solve the medium in time | R1 | Practice with a 25-min timer, not open-ended |
| Code was "junior-looking" | R1, R3 | Focus on naming, SRP, extract functions |
| Didn't find the optimal | R2 | Always ask "can we do better?" yourself |
| No structure in machine coding | R3 | Start with skeleton + interface BEFORE logic |
| Designed the backend | R4 | Stay in the browser; only draw API boxes |
| No a11y consideration | R4 | Always bring up keyboard nav + ARIA |
| Project scope too small or too large | R5 | Pick a 1-3 quarter project with 2-4 engineers |
| Defensive or evasive in deep dive | R5 | Practice with a peer who pushes back hard |
| Generic behavioral answers | R5 | Quantify everything; use specific names/dates |
Timeline Expectations
| Stage | Typical latency | Worst-case reported |
|---|---|---|
| Recruiter screen to R1 | 1-2 weeks | 4 weeks |
| R1 result | 3-7 days | 2 weeks |
| R1 to onsite scheduling | 1-2 weeks | 3 weeks |
| Onsite to debrief | 3-10 days | 3 weeks |
| Offer or rejection | 1-5 days post-debrief | 2 weeks |
End-to-end latency reported in 2024-2025: 2-6 weeks from recruiter screen to decision. Some loops stall completely mid-funnel — Uber pauses hiring around fiscal planning (Oct-Dec) and after re-orgs. If your recruiter goes silent for more than 10 days, send a polite check-in.
What to Ask Your Interviewers (reverse questions)
Good reverse questions signal maturity. Avoid "what's the work-life balance" — it signals hesitation.
- To engineers: "What's the most painful piece of tech debt on your team, and what's blocking you from paying it down?"
- To HM: "How do you measure success for a new L5A hire at 6 months?"
- To HM: "What's a project you'd want this hire to drive in their first two quarters?"
- To HM: "How does the team handle scope changes mid-quarter?"
- To Bar Raiser (if identified): "What's the biggest culture shift the team has gone through in the last year?"
Cross-References
| For | See |
|---|---|
| DSA problem solutions | Uber Prep — DSA Problems |
| Backend LLD problems | Uber Prep — LLD Problems |
| Backend system design | Uber Prep — HLD |
| Backend fundamentals | 05-backend-fundamentals |
| Machine coding (frontend alt) | 07-frontend-machine-coding-problems |
| Frontend system design (alt) | 08-frontend-system-design |
| JS/React fundamentals (alt) | 09-frontend-js-react-fundamentals |
| Behavioral STAR bank | 06-behavioral-questions |