Uber L5A Frontend — Behavioral STAR Bank
The Hiring Manager round carries 30 minutes of pure behavioral plus a 45-minute project deep dive. Every behavioral answer gets scored against Uber's values. This file gives you a question bank with STAR-framework answers drawn from a Pixis ad-tech context (React + TypeScript, 0-to-1 AI creative-automation products, performance tuning, cross-functional work with PM/design/ML).
Uber's Values (memorize these)
Primary (the three that map to most questions):
| Value | What it means in practice | Signal phrases |
|---|---|---|
| Go Get It | Ownership, hustle, bias toward action, driving outcomes end-to-end | "I saw X wasn't getting done, so I picked it up" |
| See Every Side | Empathy for stakeholders, data-driven tradeoffs, balancing conflicting priorities | "I understood the PM's deadline AND the infra team's concern about..." |
| Build With Heart | User focus, community impact, doing work that actually matters | "This mattered because 40K drivers depended on..." |
Secondary (show up in follow-ups):
| Value | In practice |
|---|---|
| Trip Obsession | Focus on the rider/driver experience — the concrete user at the end of the stack |
| Stand for Safety | Build safely, ship safely, communicate risk |
| Do the Right Thing | Ethics and integrity over short-term wins |
| Great Minds Think Unlike | Diverse approaches, inviting disagreement, intellectual humility |
The Bar Raiser Role
One member of your panel (often in the HM round, but not always) is a Bar Raiser — a calibrated interviewer from an unrelated team. Their only job during debrief is answering: "Would this hire raise the bar for the role?" They have effective veto power. You usually won't know who they are. The implication: treat every interviewer like a Bar Raiser. Every answer should satisfy "would Uber want more engineers like this one?"
STAR Framework Reminder
| Section | What to include | Time budget |
|---|---|---|
| Situation | The context, the stakes, your role. Keep it crisp. | 15-20 seconds |
| Task | Your specific responsibility. Use "I," not "we." | 10-15 seconds |
| Action | The longest section. Concrete decisions, technologies, tradeoffs, what you chose to NOT do. | 60-75 seconds |
| Result | Quantified. Metric, timeline, money, users, latency, team impact. | 15-20 seconds |
Total: ~2 minutes per answer. Longer than 2:30 loses the interviewer. Shorter than 1:15 feels thin.
The quantification rule
Every senior candidate says "we improved performance." L5A answers say:
"I cut autocomplete p95 from 480ms to 110ms by moving the trie search to a web worker and prefetching top-1000 queries into IndexedDB on idle. That lifted campaign-setup completion 8.2% and reduced drop-off on step 3 by 14%."
Numbers are the single largest differentiator between hire and no-hire at this level.
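A hedged sketch of what sits behind numbers like these — the worker hand-off from the example above. The file name, message shapes, and the cache-warm message are all illustrative, and it assumes a bundler that supports `new URL(...)`-style worker imports:

```ts
// Sketch only: the trie lives in a separate worker module (hypothetical file) that
// answers queries off the main thread, e.g.
//   self.onmessage = (e) => self.postMessage({ matches: trie.search(e.data.query) });

const worker = new Worker(new URL("./autocompleteWorker.ts", import.meta.url));

// The main thread just posts the query and awaits matches, so typing stays responsive.
export function searchOffMainThread(query: string): Promise<string[]> {
  return new Promise((resolve) => {
    worker.onmessage = (e: MessageEvent<{ matches: string[] }>) => resolve(e.data.matches);
    worker.postMessage({ query });
  });
}

// Warm the cache with popular queries while the main thread is idle (message is illustrative).
requestIdleCallback(() => worker.postMessage({ warmCache: true, limit: 1000 }));
```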
Project Scope Alignment for L5A (critical)
During the HM round, the interviewer is mentally calibrating: "Is this candidate scoping stories at L5A or at L4?" Get this wrong and you'll be leveled down regardless of how strong the answers are.
Right-sized L5A project:
- 1-3 quarters of active work
- 2-4 engineers, sometimes 1-2 cross-functional partners (PM, design, ML)
- You owned a subsystem or feature area, not a single ticket and not the whole product
- Quantifiable impact (DAU, conversion, latency, revenue, cost)
Too small (signals L4):
- "I built a dashboard in a hackathon"
- "I fixed 20 bugs this quarter"
- "I migrated a package"
Too large (signals staff+ or rings false):
- "I led the complete re-platform of the whole frontend over two years"
- "I owned the entire creative-automation product"
- "I managed a team of 12"
Quick Reference Table
| # | Question | Value(s) | Recommended story anchor |
|---|---|---|---|
| 1 | Drove a project end-to-end | Go Get It | AI Creative Generator v2 launch |
| 2 | Disagreement with PM / designer | See Every Side | Campaign builder scope fight |
| 3 | Challenged a technical decision | Go Get It / See Every Side | Redux vs Zustand migration |
| 4 | Missed a deadline / shipped a bug | Build With Heart / Trip Obsession | Creative preview rollback |
| 5 | Mentorship example | Build With Heart | Onboarding a new grad |
| 6 | Decision with incomplete information | Go Get It | Choosing streaming rendering approach |
| 7 | Pushed back on scope / advocated quality | See Every Side | Declining a PM's Q4 ask |
| 8 | Most impactful project | Go Get It | Creative Generator impact |
| 9 | What you'd design differently | Growth mindset | Component library design choices |
| 10 | Why Uber? | Culture | Mobility + scale + frontend depth |
| 11 | Outside your comfort zone | Great Minds Think Unlike | ML model integration work |
| 12 | Influence without authority | See Every Side | Driving design-system adoption |
| 13 | Tech debt vs new features | See Every Side | Campaign builder rewrite tradeoff |
| 14 | Conflict with a peer | See Every Side | State-management library debate |
| 15 | User-facing bug — lesson | Trip Obsession | Preview image corruption |
| 16 | Calculated risk | Go Get It | Shipping AI preview before full QA |
| 17 | Customer over internal preference | Build With Heart | A11y work in ad previews |
| 18 | Biggest failure | Growth mindset | Over-engineered component library |
1. Tell me about a project you drove end-to-end. What was the impact?
Value Alignment: Go Get It
What They're Really Testing: Whether you own outcomes, not just tasks. At L5A, you're expected to drive a project from ambiguous problem statement all the way through launch and iteration.
STAR Framework
Situation: At Pixis (ad-tech startup, Series C), our campaign builder let marketers generate ad creatives one at a time using our in-house AI model. Power users wanted batch generation — dozens of variants across formats — but the tool had no UX for it. Our NPS from that segment was dropping and a few large advertisers threatened to leave.
Task: The PM gave me a rough product brief — "some kind of bulk mode" — and asked me to own the frontend end-to-end. I was the only senior frontend engineer on the squad; one junior engineer and a designer were the rest of the team. Target ship: one quarter.
Action: I started by shadowing two power-user accounts for a day each — watching how they actually generated variants manually — and identified that the real pain wasn't generation, it was reviewing and filtering hundreds of variants quickly. So I pushed back on the "bulk generate" framing and reframed the project as "bulk generate + triage." I drove an RFC covering: (a) a virtualized grid for 500+ creative previews (using react-window because we had SSR constraints), (b) a background generation queue that streamed results via server-sent events instead of a single long POST, (c) a keyboard-first review flow so users could approve/reject with J/K. I negotiated scope with the PM to cut auto-tagging from v1 since it was adding 3 weeks with unclear ROI. I built the SSE client, the virtualized grid, and the state machine; the junior engineer built the queue UI and I reviewed every PR.
Result: We shipped in 11 weeks. Campaign setup time for the top 10% of accounts dropped from ~45 minutes to ~8 minutes. Variant-approval throughput went up 6x. Two at-risk accounts renewed and one expanded by $180K ARR. The pattern (SSE + virtualized review queue) was adopted by two other product surfaces in the following quarter.
Tips
- Lead with the user pain, not the technology. That's what Build With Heart looks like in a Go Get It answer.
- Explicitly mention what you cut from scope. Senior engineers prune; junior engineers gold-plate.
- The result should include both direct (time saved) and second-order (ARR, pattern reuse) impact.
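If the deep dive drills into the SSE piece of the story above, a minimal client sketch keeps the follow-up concrete. The endpoint path, event names, and `Variant` shape are illustrative, not the real contract:

```ts
// Minimal SSE consumer for streamed variants (sketch; names are illustrative).
type Variant = { id: string; imageUrl: string; format: string };

export function streamVariants(
  briefId: string,
  onVariant: (v: Variant) => void,
  onDone: () => void,
): () => void {
  const source = new EventSource(`/api/briefs/${briefId}/variants/stream`);

  // Each "variant" event carries one generated creative, rendered as soon as it arrives.
  source.addEventListener("variant", (e) => {
    onVariant(JSON.parse((e as MessageEvent).data) as Variant);
  });

  // The server signals completion so the client can close the connection cleanly.
  source.addEventListener("done", () => {
    onDone();
    source.close();
  });

  // Returned for the calling component's effect cleanup (unmount, brief change).
  return () => source.close();
}
```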
2. Tell me about a disagreement with a PM or designer and how you resolved it.
Value Alignment: See Every Side
What They're Really Testing: Can you collaborate when incentives diverge? Do you understand that the PM's job isn't to be right — it's to represent the business?
STAR Framework
Situation: During the campaign-builder batch-mode project, the PM wanted to hide all "advanced" controls behind a modal on first use — brand colors, format constraints, copy tone — to simplify onboarding. The designer supported this. I thought we'd regret it: power users wanted those controls front and center, and 80% of our revenue came from power users.
Task: I needed to either convince the PM to change the default, or accept the call if the data didn't back me up.
Action: I didn't push back in the design review — I asked for a day to look at the data. I pulled usage logs for our top 50 accounts over the last 60 days and ran a quick analysis showing that 92% of sessions for those accounts touched at least one "advanced" control within the first 30 seconds. I also wrote up three alternatives: (a) always-visible controls, (b) progressive disclosure based on account tier, (c) the PM's original modal but skippable forever after first dismiss. I brought this to a 30-minute discussion with the PM and designer. The designer actually proposed a fourth option — a collapsible sidebar that defaulted to expanded for accounts with >5 past campaigns. I liked it more than my own proposals. We shipped the sidebar.
Result: Post-launch, time-to-first-variant for power users stayed flat (we didn't regress), and new-user activation on the onboarding step went up 12%. The PM and I now co-write RFCs whenever there's a UX fork.
Tips
- Don't argue in the meeting — gather data, come back.
- Credit the person who found the best solution (here, the designer). That's See Every Side in practice.
- Show you were willing to be wrong.
3. Tell me about a time you challenged a technical decision your team made.
Value Alignment: Go Get It + See Every Side
What They're Really Testing: Do you have strong opinions loosely held? Do you disagree and commit, or do you quietly let bad decisions ship?
STAR Framework
Situation: My team had standardized on Redux Toolkit for all client state since day one of the product. By year two, we had ~60 slices, and our bundle size was 340KB gzipped just for state. More importantly, our new surface areas were mostly server-state heavy (AI-generated creatives, async pipelines), and we were writing a lot of boilerplate to shoehorn server state into Redux.
Task: I believed we should migrate new surfaces to React Query + Zustand for ephemeral client state, but three other engineers had deep Redux muscle memory and were skeptical.
Action: I didn't propose a full migration — that would have rightly been killed as disruptive. Instead, I proposed a parallel track: new feature surfaces could opt in to the new stack, existing surfaces would stay on Redux until they were being rewritten anyway. I wrote a one-page RFC comparing the two approaches with specific metrics: boilerplate LOC per endpoint, bundle impact, and debuggability (showed Redux DevTools vs React Query devtools). I built a spike branch with one real feature (the creative preview panel) on React Query so people could diff both versions. I ran a 30-minute team meeting and was explicit: "I'm not arguing Redux is bad. I'm arguing React Query is better for server state specifically. Client state keeps Redux or moves to Zustand." Two of the three skeptics came around after the spike; the third preferred Redux for personal reasons and we agreed to disagree and commit.
Result: New surfaces built on the new stack shipped ~30% fewer lines and ~2 weeks faster on average. Our bundle dropped 90KB over the next two quarters as old surfaces were rewritten. More importantly, the senior engineer who had been the holdout led the migration of one of the heaviest Redux slices six months later — he changed his mind based on operational experience.
Tips
- Frame as "both/and" not "either/or" when challenging incumbent tech.
- Ship a spike. Real code beats a deck every time.
- Name the "disagree and commit" explicitly — it signals maturity.
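If asked what "30% fewer lines" looked like in practice, a hedged sketch of the server-state half helps. It assumes React Query (`@tanstack/react-query`); the endpoint, types, and stale time are illustrative:

```tsx
// Server state via React Query (sketch). The Redux equivalent needed a slice, a thunk,
// and hand-written loading/error handling; caching, status flags, and refetching come
// from the library here.
import { useQuery } from "@tanstack/react-query";

type CreativePreview = { id: string; imageUrl: string; status: "ready" | "generating" };

function usePreviews(campaignId: string) {
  return useQuery({
    queryKey: ["previews", campaignId],
    queryFn: async (): Promise<CreativePreview[]> => {
      const res = await fetch(`/api/campaigns/${campaignId}/previews`);
      if (!res.ok) throw new Error(`Failed to load previews: ${res.status}`);
      return res.json();
    },
    staleTime: 30_000, // previews only change when generation finishes
  });
}

export function PreviewPanel({ campaignId }: { campaignId: string }) {
  const { data, error } = usePreviews(campaignId);
  if (error) return <p>Could not load previews.</p>;
  if (!data) return <p>Loading previews…</p>;
  return (
    <ul>
      {data.map((p) => (
        <li key={p.id}>{p.status}</li>
      ))}
    </ul>
  );
}
```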
4. Tell me about a time you missed a deadline or shipped a bug.
Value Alignment: Build With Heart / Trip Obsession
What They're Really Testing: Do you own failures? Can you learn from them? Do you blame the process, the teammates, or the circumstances?
STAR Framework
Situation: We were shipping a major update to the creative preview panel — a rewrite that added side-by-side comparison of AI variants. I owned it. We had a soft-launch to 5% of accounts on a Thursday afternoon before a three-day weekend.
Task: I was the on-call engineer for that launch. My personal gate was: ship on Thursday if the smoke tests pass, otherwise roll back and re-ship Monday.
Action: Smoke tests passed. But within two hours of the 5% rollout, users started reporting that preview images were rendering with the wrong aspect ratio — stretched vertically for 9:16 formats. I'd tested 1:1 and 16:9 but not 9:16, because our test data didn't have a vertical-format campaign. I saw the Sentry spike from home at 6pm. I pulled out my laptop, confirmed the issue, and rolled back the feature flag in under 15 minutes. I then paged myself on-call for the weekend (I wasn't officially on) and wrote a runbook for the on-call engineer in case it happened again on another surface. On Monday, I added 9:16 and 4:5 fixtures to our visual-regression suite, fixed the aspect-ratio CSS bug (a flex issue with min-height: 0 missing), and ran a postmortem for the team. I owned the miss — my test matrix was incomplete, I hadn't pulled real customer data when writing fixtures.
Result: We re-shipped the following Tuesday with zero regressions. More importantly, the postmortem action items (visual-regression on all four common aspect ratios, a production-data-sampling tool for pre-launch testing) prevented two similar classes of bug over the next year. Our release process for creative-rendering changes now requires the production-sampling tool before flag flip.
Tips
- Own the root cause as your miss, specifically. Don't say "the tests didn't cover" — say "I didn't include X in my test matrix."
- Show the systemic fix, not just the tactical one.
- Mention rolling back fast. Speed of rollback is a Trip Obsession signal — you prioritize the user.
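The systemic fix is also sketchable if the interviewer pushes on it — fixtures for every aspect ratio the product actually serves. This assumes a Playwright-style visual-regression setup; routes, selectors, and snapshot names are illustrative:

```ts
// Visual-regression coverage for every aspect ratio we serve (sketch, Playwright-style).
import { test, expect } from "@playwright/test";

const FIXTURES = [
  { name: "square", ratio: "1:1", path: "/fixtures/creative-1x1" },
  { name: "landscape", ratio: "16:9", path: "/fixtures/creative-16x9" },
  { name: "vertical", ratio: "9:16", path: "/fixtures/creative-9x16" }, // the gap that shipped the bug
  { name: "portrait", ratio: "4:5", path: "/fixtures/creative-4x5" },
];

for (const fixture of FIXTURES) {
  test(`preview renders ${fixture.ratio} without distortion`, async ({ page }) => {
    await page.goto(fixture.path);
    // Screenshot comparison catches stretched or cropped previews that unit tests miss.
    await expect(page.locator('[data-testid="creative-preview"]')).toHaveScreenshot(
      `preview-${fixture.name}.png`,
    );
  });
}
```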
5. Tell me about a mentorship example.
Value Alignment: Build With Heart
What They're Really Testing: Are you a multiplier? At L5A, you should be making your teammates better, not just doing great individual work.
STAR Framework
Situation: A new grad joined our squad mid-year. Strong algorithmic skills, little React experience, zero production experience. Our codebase had a lot of implicit conventions (hook patterns, state-colocation rules, SSR caveats) that weren't documented.
Task: My manager asked me to be his official mentor. I wasn't a people manager, but I was the senior engineer on his squad.
Action: I set up a weekly 30-minute 1:1 focused on his goals, not tactical PRs. In the first meeting, I asked what he wanted to be known for in 12 months — he said "someone who can own a whole feature." We worked backwards. First three months, I pair-programmed with him on his first feature for 1 hour a day and reviewed every PR within an hour. Months 4-6, I cut pair time in half and started giving him features with ambiguous scope — "here's the customer pain, you write the RFC." I reviewed his RFCs before he shared them with the team and gave him candid feedback (once, I told him a proposal was underbaked and helped him rewrite it rather than let him take a public hit). I explicitly let him run the project for the preview-panel UX improvements — I was just code reviewer and rubber-ducker. Month 9, I stopped attending his standup updates and let him be visible directly to the PM.
Result: He shipped three features end-to-end in his first year and was promoted to SDE II at year-end review, which was faster than our squad's historical norm (usually 18-24 months). In his 1:1, he told me the RFC work was what made him feel like "a real engineer" rather than "someone who just writes code." I kept mentoring him for another six months and he's now mentoring his own new-grad.
Tips
- The story should show a deliberate arc, not a bunch of one-off help.
- Metrics here are the mentee's growth (promotion, scope) and behavior (now mentoring others).
- Say "I" when you made the call; say "he" when he did the work.
6. Tell me about a time you had to make a decision with incomplete information.
Value Alignment: Go Get It
What They're Really Testing: Can you move forward under ambiguity? Do you know when to decide vs when to gather more data?
STAR Framework
Situation: We were building a real-time creative generation pipeline. When a user submitted a brief, our AI model would stream variants back over ~30-90 seconds. We had two rendering options: (a) render each variant as it arrives (streaming), or (b) wait for the full batch and render them together. The backend team could support either — but needed our decision in 48 hours to lock API contracts.
Task: I had to make the call with no user data (the feature didn't exist yet) and only a rough sense of latency (model team estimated 30-90s per batch).
Action: Instead of guessing, I timeboxed data gathering to 24 hours. I (a) pulled competitive data — checked how three similar products handled async generation (two streamed, one batched), (b) pulled internal telemetry on our existing single-creative generator showing users bounced to a different tab ~70% of the time during generation, (c) set up a 20-minute user test with five internal marketers where I showed them Figma mocks of both options and asked which felt more responsive. Four out of five preferred streaming even knowing it meant later variants could be lower quality than earlier ones. I made the call for streaming, documented the three pieces of evidence I used, and agreed with the backend team that we'd re-evaluate after the beta if bounce rate stayed high.
Result: The streaming experience shipped and perceived latency dropped materially — bounce-during-generation went from 70% down to 22%. Two quarters later, we ran a proper A/B and confirmed streaming won on both engagement and conversion. If we'd waited for "complete" data, we'd have delayed the ship by ~6 weeks.
Tips
- Timebox the data gathering. Senior engineers don't wait for certainty, but they also don't shoot from the hip.
- List the three-ish pieces of evidence you did use. Shows you didn't just guess.
- Mention the re-evaluation plan. Committing to revisit is a sign of calibration.
7. Tell me about a time you pushed back on scope or advocated for quality.
Value Alignment: See Every Side
What They're Really Testing: Can you say no without becoming an obstruction? Can you advocate for technical quality in terms the business cares about?
STAR Framework
Situation: In Q4 our PM wanted to ship three new features to support a major customer expansion: (1) bulk export, (2) Slack notifications, (3) a new template gallery. The total estimate was 14 weeks of frontend work. We had 11 weeks.
Task: I was the senior engineer on the squad and needed to either find a way to deliver all three or negotiate scope.
Action: I didn't just say "we can't." I prepared three scoping options for the PM: Option A — cut Slack notifications entirely (lowest-signal feature, only 12% of our customer base actually used Slack based on our integrations telemetry). Option B — ship the template gallery as a read-only v1 without user uploads (cuts 3 weeks). Option C — ship all three but skip accessibility compliance for the template gallery (I flagged this as an option to show what trade-off it implied, not as my recommendation). I walked the PM through the tradeoffs with data. Specifically, I advocated strongly against Option C because we had several customers in regulated verticals (gov, healthcare) where a11y was a procurement requirement, and skipping it would create a rollback risk later. The PM picked Option B. I also got him to agree that if we were still on track at the nine-week mark, we'd add Slack notifications back — which we did.
Result: We shipped all three features in 11 weeks. The template gallery read-only shipped on time; we added uploads as a fast-follow in Q1. The a11y compliance held up when two gov-sector customers audited us in Q2. The PM trusted me more on future scoping conversations — we started doing scope discussions earlier in the quarter rather than when things were on fire.
Tips
- Don't say "no" — propose options with trade-offs.
- Use business data (regulated customers, integration telemetry) to justify quality work.
- Mention the relationship outcome — trust, not just the feature.
8. What's the most impactful project you've worked on? Quantify it.
Value Alignment: Go Get It
What They're Really Testing: Can you talk about impact without exaggerating? Do you measure what matters?
STAR Framework
Situation: Pixis's AI creative generator was the core product differentiator but had a long-standing performance problem: the editor took 6-8 seconds to become interactive on first load, and ~12% of sessions bounced before the first variant even generated. Our PM had flagged perf as a Q2-Q3 priority but no one had owned it.
Task: I volunteered to own editor perf end-to-end as a 2-quarter initiative. Two engineers, one designer part-time, me leading.
Action: I started with measurement, not optimization. I instrumented the editor with real-user metrics (FCP, TTI, LCP, custom "time to first variant") segmented by plan tier and geography. Week one of data showed three clear problems: (a) the bundle was 2.1MB gzipped, (b) the AI model list was fetched synchronously on page load, (c) our font loading was blocking render for ~800ms on emerging-market connections. I prioritized by user-impact: bundle split first (biggest TTI win), then async model list with a skeleton, then font-display: optional. I led the route-based code splitting work (React.lazy + suspense boundaries with a real skeleton, not a spinner), which cut initial bundle from 2.1MB to 740KB. I pushed the API team to paginate the model list endpoint. I ran a Core Web Vitals audit with our designer and we cut render-blocking CSS by 60% through critical-path extraction. I also set up a perf budget in CI that blocked PRs if any metric regressed >5%.
Result: Over two quarters: TTI went from 6.2s → 1.9s (p75), FCP from 2.8s → 0.9s. Bounce rate on editor load dropped from 12% to 3.1%. That bounce reduction translated into an estimated 4.2% lift in monthly active usage, which the data team attributed to an additional ~$900K ARR at our conversion rates. The perf budget has caught 14 regressions in the year since. Two other product surfaces adopted our measurement and CI pattern.
Tips
- Start with a real metric and how you measured it. Senior engineers don't optimize blind.
- Explicitly say the order of your optimizations and why.
- Tie the technical win to a business number (ARR). That's the L5A bar.
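The code-splitting move is worth being able to sketch on a whiteboard. A minimal version with React.lazy and a layout-shaped skeleton — component and class names are illustrative, and `./CreativeEditor` is assumed to have a default export:

```tsx
// Route-based code splitting with a real skeleton, not a spinner (sketch).
import { lazy, Suspense } from "react";

// The editor bundle is only fetched when someone navigates to the editor route.
const CreativeEditor = lazy(() => import("./CreativeEditor"));

function EditorSkeleton() {
  // Mirrors the real layout so the page structure appears instantly.
  return (
    <div aria-busy="true">
      <div className="skeleton skeleton--toolbar" />
      <div className="skeleton skeleton--canvas" />
      <div className="skeleton skeleton--variant-rail" />
    </div>
  );
}

export function EditorRoute() {
  return (
    <Suspense fallback={<EditorSkeleton />}>
      <CreativeEditor />
    </Suspense>
  );
}
```

The perf-budget piece is then a CI step that fails the build when bundle size or a lab metric regresses past the agreed threshold.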
9. Tell me about a project you'd design differently today.
Value Alignment: Growth mindset (feeds into every value)
What They're Really Testing: Intellectual humility. Self-awareness. Can you critique your own work?
STAR Framework
Situation: Two years ago, I built Pixis's in-house component library — buttons, modals, form elements, data-grid — for our web app. Five engineers used it; it was meant to replace a mix of MUI and bespoke components.
Task: I was the sole owner. I treated it as a side-of-desk project alongside feature work.
Action: I built ~40 components over a quarter, styled with CSS modules + a custom design-token system. I treated the token system as the key differentiator — we could rebrand the whole app by swapping tokens. I documented components in Storybook, ran visual-regression via Chromatic.
What I'd do differently now: Three things.
First, I built it in isolation. I didn't involve the design team in choosing the API shape of components, which meant six months later our designer joined and wanted prop names and behavior I hadn't anticipated (e.g., a size prop that didn't match her design token scale). The rename broke every consumer. I should have made this a two-person project from day one with design as an equal owner.
Second, I overinvested in design tokens. I built a full theming system that could switch brand palettes dynamically. We never used that capability — we rebranded exactly zero times. The complexity cost was real (two layers of indirection in every component). I should have built v1 with hardcoded values and extracted tokens only when we had a second use case.
Third, I didn't build for consumer migration. I didn't provide codemods or a migration guide when I deprecated old MUI components, which meant teammates dragged their feet on adoption. The library was at 60% adoption six months in — far below what I'd expected. A codemod would have gotten us to 90% in a fortnight.
Result: The library is still in use, but the adoption lag cost us maybe ~4 weeks of velocity across the team in year one. The newer libraries I've built since have all had co-owners, minimal abstraction, and migration tooling from day one.
Tips
- Pick a real mistake, not a humblebrag like "I should have documented more."
- Three things is the sweet spot — one feels shallow, five feels unfocused.
- Show what you changed in your next project.
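On the migration-tooling point above, "codemod" is easy to say and worth being able to sketch. A minimal jscodeshift-style transform that retargets old MUI imports to the in-house library — package names and the covered-component list are illustrative:

```ts
// Sketch of a migration codemod (jscodeshift-style; package names are illustrative).
// Rewrites `import { Button } from "@mui/material"` to `import { Button } from "@pixis/ui"`
// when every imported component has an equivalent in the new library.
import type { API, FileInfo } from "jscodeshift";

const MIGRATED = new Set(["Button", "Modal", "TextField"]);

export default function transformer(file: FileInfo, api: API): string {
  const j = api.jscodeshift;
  const root = j(file.source);

  root
    .find(j.ImportDeclaration, { source: { value: "@mui/material" } })
    .forEach((path) => {
      const specifiers = path.node.specifiers ?? [];
      // Only retarget imports the new library actually covers; leave the rest alone.
      const allCovered = specifiers.every(
        (s) => s.type === "ImportSpecifier" && MIGRATED.has(s.imported.name),
      );
      if (allCovered) {
        path.node.source = j.literal("@pixis/ui");
      }
    });

  return root.toSource();
}
```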
10. Why Uber?
Value Alignment: Culture fit
What They're Really Testing: Have you actually thought about it? Or is Uber just "a senior role at a big company"?
Short-form answer (< 90 seconds)
"Three reasons, in order.
First, the scale and stakes. The frontend at Uber isn't a marketing website — it's a real-time system where a map glitch or a price-update bug affects someone's commute or income. I want to work on product surfaces where the 99th percentile actually matters.
Second, the product surface area. Uber has mature surfaces (rider, driver) and newer ones (Uber for Business, Eats, freight, merchant). That mix of stable-at-scale and 0-to-1 maps to the two halves of my career — two years at a scaling startup on 0-to-1, and now I want to combine that with a stable-at-scale context.
Third, the frontend engineering culture specifically. From reading the Uber Eng blog — the posts on frontend platform, Fusion.js history, Fabric, the shift to the new mobile stacks — it's clear Uber has a real frontend engineering practice, not frontend-as-backend's-sidekick. I want to grow in that kind of environment.
The short version is: I want scale that matters, a mix of mature and greenfield work, and a place where frontend engineering is taken seriously. Uber is the only company on my shortlist where all three line up."
Tips
- Three beats one. Gives you redundancy if one doesn't land.
- Reference something specific (blog posts, a framework history) — shows research.
- Frame it as "what Uber offers me" AND "what I bring" implicitly.
- Do NOT mention comp, brand, or "I've always wanted to work at Uber since I was in college." All weak.
11. Tell me about a time you worked on something outside your comfort zone.
Value Alignment: Great Minds Think Unlike
What They're Really Testing: Are you intellectually curious? Do you avoid hard things outside your core expertise?
STAR Framework
Situation: Our AI model team needed an engineer to help debug a performance issue in the creative-generation model — output latency was spiking for certain input types. They didn't have frontend bandwidth and I wasn't an ML engineer, but the spikes appeared correlated with specific UI-driven input patterns.
Task: I volunteered to investigate the client-side contribution to the latency, partnering with one ML engineer.
Action: I spent the first two days just reading — the team's model architecture doc, the inference pipeline readme, and a paper on the model family they were using. I didn't try to pretend I understood the model internals; I was explicit with my ML counterpart that I was the frontend-side expert and I needed him for the model side. Together we instrumented the request pipeline end-to-end: I added detailed client timings (when the request was constructed, when it was posted, when the first byte came back), he added server-side model-internal timings. We correlated the two and found that large text prompts with certain unicode characters (emoji, some Indic scripts) were being tokenized differently and triggering a slow path. The frontend was sending them as-is without normalization. I wrote a unicode-normalization layer on the client that stripped or canonicalized problematic characters before POST, and we added server-side telemetry to catch new problem inputs.
Result: p99 generation latency for affected inputs dropped from ~22s to ~6s. I learned a lot about how tokenizers work (enough to read papers, not enough to build them). The ML team invited me to their weekly "platform office hours" — I now have a standing collaboration channel with them for frontend-affects-ML issues.
Tips
- Admit the boundary of your knowledge. "Out of your comfort zone" should feel genuinely uncomfortable, not a sideways humblebrag.
- Name the partner and credit them specifically.
- The result should include what you learned, not just what you delivered.
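The normalization layer itself is small enough to sketch. The character policy below is illustrative (the real one came from the model-side telemetry described above); the one solid piece is canonical Unicode normalization before the POST:

```ts
// Sketch of a prompt-normalization step before the generation request is built.
// The specific character categories are illustrative, not the production policy.
export function normalizePrompt(raw: string): string {
  return (
    raw
      // Canonical composition so visually identical strings tokenize the same way.
      .normalize("NFC")
      // Drop zero-width and variation-selector code points that change tokenization
      // without changing what the user sees.
      .replace(/[\u200B-\u200D\uFE0E\uFE0F]/gu, "")
      // Collapse runs of whitespace the editor sometimes introduces.
      .replace(/\s+/g, " ")
      .trim()
  );
}

// Used right before the request body is assembled, so the model sees a stable input:
// const body = JSON.stringify({ prompt: normalizePrompt(userInput), formats });
```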
12. Tell me about a time you had to influence without authority.
Value Alignment: See Every Side
What They're Really Testing: Can you get people to follow your lead when you can't order them to? This is the single biggest scaling problem at L5A.
STAR Framework
Situation: We'd built a design system (the component library from question 9 — the one I'd design differently today) and adoption across four squads was inconsistent. Our squad and one other were at 95%; two others were at ~40%. I wasn't the engineering manager for those squads and couldn't mandate anything.
Task: Drive up adoption to >80% across all four squads in a quarter, without any formal authority.
Action: I didn't start with "adopt our library." I started with conversations. I scheduled 30-minute chats with one senior engineer from each of the two lagging squads and asked them what the friction was — genuinely, not as a pitch. Squad A said the library didn't cover their use case (complex forms with dynamic fields). Squad B said they didn't trust the library's accessibility (they'd been burned once). Both were legitimate. For Squad A, I paired with their lead for two days and we together designed a DynamicForm component that I then built and added to the library. For Squad B, I ran a full a11y audit of the library (they watched me), fixed two real bugs I found, and published an a11y compliance matrix in Storybook. I also set up a weekly "design-system office hour" where I was available for anyone to bring requests. I explicitly never nagged about adoption — I made the library better and made myself available.
Result: Squad A adoption went from 42% to 88% in six weeks. Squad B took longer (they had more legacy) but crossed 80% by end of quarter. More importantly, Squad A's lead ended up contributing two components back to the library — we got a new maintainer out of it.
Tips
- The move is: understand the objection, address the objection, don't nag.
- Mention that you got contributors, not just consumers. Influence compounds.
- Avoid "I convinced them" language. "They decided to adopt after we fixed X."
13. Tell me about a time you balanced technical debt vs new features.
Value Alignment: See Every Side
What They're Really Testing: Can you articulate trade-offs in business terms? Do you know when to pay down debt vs when to ship?
STAR Framework
Situation: Our campaign builder had a 2500-line CampaignFormReducer that had grown organically over 18 months. It handled form state, validation, API sync, undo/redo. New features routinely took 2-3 days just to plumb state through it. We also had a Q3 roadmap with four PM-driven features.
Task: I was tech lead. I had to decide whether to rewrite the reducer (estimated 4 weeks, would block one feature) or continue shipping features on top of it.
Action: I resisted the urge to just rewrite. I spent a week analyzing: I pulled git blame and tagged every bug the reducer had caused in the last 6 months (17 bugs, ~2.5 dev-weeks of debugging), counted how many feature PRs had touched it (22, average 380 LOC added per touch), and estimated ongoing cost if we kept shipping features against it (another ~6 dev-weeks in Q3 just in reducer-related overhead). I took this to the PM with three options: (a) status quo, (b) full rewrite, blocking one feature, (c) incremental — extract three sub-reducers this quarter while shipping features, do the full cutover next quarter. I recommended (c). The PM agreed. I then paired with two teammates to design the sub-reducer boundaries BEFORE any extraction, so we wouldn't do speculative abstraction. We extracted ValidationReducer, ApiSyncReducer, and HistoryReducer over the quarter alongside feature work.
Result: Q3 shipped all four features on time. Reducer-related bugs in Q3 dropped to 3 (from ~8 the prior quarter). Average PR touching form logic dropped from 380 LOC to ~90. We finished the full cutover in Q4 as a smaller project since the sub-reducers were already extracted. The business didn't lose feature velocity, and the team's velocity compounded.
Tips
- Quantify the debt. "It's hard to work in" is L3. "17 bugs, 2.5 dev-weeks" is L5A.
- Propose multiple options to the PM, recommend one. Shows you see the tradeoffs they see.
- Mention the compounding — that's the business case for debt paydown.
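If the follow-up is "how did the incremental extraction work mechanically," a hedged sketch of the composition helps — the state shape, action names, and sub-reducer logic are all illustrative:

```ts
// Sketch of the incremental split: the monolith keeps its public signature but
// delegates each extracted concern to a sub-reducer that can be tested in isolation.
type CampaignFormState = {
  values: Record<string, unknown>;
  validation: { errors: Record<string, string> };
  apiSync: { status: "idle" | "saving" | "error" };
  history: { past: unknown[]; future: unknown[] };
};

type FormAction =
  | { type: "FIELD_CHANGED"; field: string; value: unknown }
  | { type: "VALIDATION_FAILED"; errors: Record<string, string> }
  | { type: "SAVE_STARTED" }
  | { type: "UNDO" };

function validationReducer(s: CampaignFormState["validation"], a: FormAction) {
  return a.type === "VALIDATION_FAILED" ? { errors: a.errors } : s;
}

function apiSyncReducer(s: CampaignFormState["apiSync"], a: FormAction) {
  return a.type === "SAVE_STARTED" ? { status: "saving" as const } : s;
}

function historyReducer(s: CampaignFormState["history"], a: FormAction) {
  if (a.type !== "UNDO" || s.past.length === 0) return s;
  return { past: s.past.slice(0, -1), future: [...s.future, s.past[s.past.length - 1]] };
}

export function campaignFormReducer(state: CampaignFormState, action: FormAction): CampaignFormState {
  // The remaining field-handling logic stays inline until the final cutover.
  const values =
    action.type === "FIELD_CHANGED" ? { ...state.values, [action.field]: action.value } : state.values;
  return {
    values,
    validation: validationReducer(state.validation, action),
    apiSync: apiSyncReducer(state.apiSync, action),
    history: historyReducer(state.history, action),
  };
}
```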
14. Tell me about a conflict with a peer.
Value Alignment: See Every Side
What They're Really Testing: Can you disagree without making it personal? Can you reach a good outcome when you'd normally just escalate?
STAR Framework
Situation: Another senior engineer and I disagreed about which state-management library our team should use for a new product surface. He wanted Zustand; I wanted React Query + Zustand split. The disagreement blocked the project kick-off for about a week.
Task: Reach a decision without escalating to our skip-level.
Action: The first conversation went badly — we both pitched at each other. I took a step back for a day and tried to steelman his position: why would I pick Zustand-only? I realized his concern was that two libraries created two mental models for engineers and that the team (including juniors) might struggle to know where state belonged. That was a fair point I'd been dismissing. I scheduled a 1:1 walk (outside the office, no laptops) and led with "I think I was arguing past your real concern — tell me more about the cognitive-load issue." He appreciated that. We ended up co-designing a decision tree: if it's server state, React Query; if it's ephemeral UI state scoped to one component tree, Zustand; if it's global UI state (theme, auth), Zustand. We wrote that up as a two-page RFC and shared it with the team. Conflict done, project unblocked.
Result: Project kicked off a week late but with a cleaner mental model than either of our original positions. The decision tree is still the team's default 18 months later and got reused by another squad. My relationship with that engineer got stronger — we've since collaborated on two other RFCs.
Tips
- Steelman the other person. Stating their strongest argument in your own words is a superpower.
- Do the hard conversation in person / on a walk, not in Slack.
- Name the relationship outcome. Conflict resolved is good; collaboration strengthened is great.
15. Tell me about a user-facing bug. What did you learn?
Value Alignment: Trip Obsession
What They're Really Testing: Do you feel the user's pain? Do you learn systemically from specific incidents?
STAR Framework
Situation: We shipped an update to the creative preview renderer and started getting support tickets: some image previews were rendering with JPEG compression artifacts — subtle but noticeable, especially in the skin-tone regions of lifestyle photos. It was a brand-quality issue for our customers.
Task: I was on-call. Triage and fix.
Action: I reproduced by pulling ticket screenshots and comparing to internal test renders — the artifacts were real, not client-side display issues. I traced it back to an image-resize call we'd added that day: we were using <canvas> drawImage with default smoothing and then toDataURL('image/jpeg', 0.85). The quality parameter looked reasonable but the default canvas smoothing was bilinear, which introduced the chroma artifacts on fine detail. The fix was three lines — set imageSmoothingQuality = 'high' and bump jpeg quality to 0.92 for previews (originals were untouched). I rolled out the fix in 3 hours. Then I sat with it: why did we ship this? Answer: our visual regression tests compared pixel diffs below a tolerance threshold, and the artifact pattern was visually obvious but quantitatively small (~1-2% pixel diff). I proposed (and built) a perceptual-diff layer using pixelmatch's anti-aliasing detection to complement our existing tolerance check. I also added a regression set of lifestyle photos with skin tones specifically — our original set was mostly graphics and logos.
Result: Direct fix went out same day. The perceptual-diff upgrade caught two near-misses in the following quarter that would have shipped otherwise. I sent a postmortem to the whole product team (not just engineering) framing it as "our customers noticed something our tests didn't — here's how we're closing that gap." Our designer told me later that was the first postmortem she'd actually read.
Tips
- Start by re-establishing the user's experience, not the technical detail. "Customers were seeing X."
- The fix is the short part. The systemic learning is the long part.
- Mention who you communicated to. At L5A, communication breadth matters.
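The direct fix is short enough to sketch from memory (the helper name and dimensions are illustrative; the smoothing and quality settings match the story above):

```ts
// Resize a creative for preview without introducing visible resampling artifacts.
export function resizeForPreview(img: HTMLImageElement, width: number, height: number): string {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");

  // Default smoothing quality is "low"; "high" avoids the artifacts we shipped.
  ctx.imageSmoothingEnabled = true;
  ctx.imageSmoothingQuality = "high";
  ctx.drawImage(img, 0, 0, width, height);

  // 0.85 looked fine on test graphics but not on photographic skin tones; 0.92 for previews.
  return canvas.toDataURL("image/jpeg", 0.92);
}
```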
16. Tell me about a time you took a calculated risk.
Value Alignment: Go Get It
What They're Really Testing: Do you move when it matters? Do you know the difference between calculated risk and recklessness?
STAR Framework
Situation: Our AI creative generator had a demo day for a top-5 prospect coming up. Two weeks out, we had a new AI-preview feature in main that would make the demo materially stronger — real-time streaming of variants as they generated — but it hadn't gone through our full 4-week QA cycle. It had been in internal dogfood for 5 days with no bugs.
Task: My skip-level asked me whether we should demo with the new feature or without.
Action: I laid out the risk: the feature had good internal stability but we hadn't tested it under production-traffic load, on the prospect's specific browsers, or with their typical brief sizes. I gathered specific data: the feature's code paths had ~90% unit test coverage, 5 days with ~200 internal users and zero reported bugs, and we had an instant feature-flag kill switch if anything broke live. I also noted what could go wrong: the streaming could stall under load (mitigated by keeping the non-streaming code path alive behind the flag), and visual glitches on Safari (the prospect's CEO was a known Safari user — I checked). I proposed: ship with the flag on but have an engineer (me) on standby during the demo with one-click rollback. Skip-level agreed. I also did one thing he didn't ask for — I ran a 30-minute dry-run against a production replica with brief sizes 2x the prospect's expected max, just to de-risk the load question.
Result: Demo went well, streaming worked, prospect signed a 6-figure contract the following month. The feature stayed in production and the "new feature, fast track with a standby engineer" became a pattern our team uses for high-leverage launches.
Tips
- The calculation is explicit: what's the downside, what's the mitigation, what's the upside.
- Standby engineer + rollback is a concrete risk-management move. Mention it.
- The payoff should be proportional to the risk. Big risks for stakes that matter.
17. Tell me about a time you prioritized customer needs over internal preferences.
Value Alignment: Build With Heart
What They're Really Testing: Do you advocate for the end user when internal incentives push the other way?
STAR Framework
Situation: Our product managers and designers had built a beautiful visual editor for ad creatives. It relied heavily on drag-and-drop and mouse-hover interactions. During a customer call, a marketing ops manager from a large retailer mentioned that her screen-reader-using teammate couldn't use our tool at all — which meant the teammate was blocked from an entire workflow. The PM wanted to file it as a backlog item. We had a full Q3 roadmap already.
Task: I was the senior frontend engineer. I had to decide whether to accept the backlog and move on, or push for in-quarter work.
Action: I spent a day doing three things. First, I actually used our tool with a screen reader — it was unusable. Second, I checked what percentage of our customer base might include users like the affected teammate; our data couldn't tell me directly, but I found that ~18% of our customers were in sectors (gov, edu, healthcare, enterprise) where a11y was legally or contractually required. Third, I prototyped an accessibility-aware version of the most-used flow (creating a variant) in about 2 days of spike work — keyboard shortcuts, proper ARIA roles, focus management, and screen-reader-friendly status announcements during streaming generation. I brought it back to the PM and designer with three things: (a) the personal anecdote (I tried it with a screen reader and it's broken), (b) the business risk (18% of our customer base), (c) a working prototype showing the fix wasn't a massive effort. I proposed we carve out 3 weeks in Q3 for a11y work on the top three flows. The PM pushed back on timing — I offered to trade one of my own features (bulk tagging) to the next quarter to make room. That unlocked it.
Result: We shipped a11y compliance on the top 3 flows in Q3. Two enterprise prospects raised a11y during eval calls in Q4 and called it a green flag; one eventually closed at ~$300K ARR, citing (among other things) "you're the only tool in the eval that works with screen readers." The screen-reader user from the original customer now uses our tool regularly, per her teammate's follow-up email.
Tips
- Make it personal (you tried it). That's Trip Obsession + Build With Heart.
- Sacrifice something of your own (your feature). That proves the advocacy is real.
- Name the specific user at the end. "One person" lands harder than "customers."
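One concrete artifact from that spike worth being able to sketch: screen-reader announcements during streaming generation via a polite live region. Component name, props, and copy are illustrative:

```tsx
// Announce generation progress to screen-reader users without stealing focus (sketch).
export function GenerationStatus({ generated, total }: { generated: number; total: number }) {
  const done = generated >= total;
  return (
    <div role="status" aria-live="polite" aria-atomic="true">
      {done
        ? `Generation complete. ${total} variants ready for review.`
        : `Generating variants: ${generated} of ${total} ready.`}
    </div>
  );
}
```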
18. Tell me about your biggest failure and what you learned.
Value Alignment: Growth mindset / cultural fit
What They're Really Testing: Can you take a real failure, own it, learn from it, without spinning? This is a famous trap question — candidates often pick a "failure" that's actually a humblebrag.
STAR Framework
Situation: Early at Pixis, I was given a quarter-long project to build our first-generation component library (mentioned in question 9). I treated the success criteria as "build great components" when the real criteria should have been "drive consistency across the app."
Task: Ship a component library that other teams would adopt.
Action: I built ~40 components in a quarter. They were well-built — well-tested, documented, accessible, with a design-token theming system. I was proud of the engineering. But I'd made three strategic mistakes: (a) I didn't involve the design team in the component API — I assumed good code would win them over, (b) I over-engineered the token system we never needed, and (c) I didn't invest in adoption tooling (migration codemods, usage tracking). Six months after launch, adoption was 60% and I was frustrated that people weren't using my library. I complained about it to my manager in a 1:1 and she said something I still remember: "you built the library, but you didn't build the rollout."
Result: The library is still in use but the slow adoption cost us maybe ~4 weeks of velocity across the team in year one. What I changed: every technical project I've led since has had a rollout plan as a first-class artifact — stakeholder list, adoption metric, migration tooling, decision log, office hours for consumers. The most recent example was the React Query adoption I described earlier, where I explicitly involved the team in the spike evaluation and got a skeptic onboard. I wouldn't have done that two years ago.
What this really taught me: engineering alone isn't enough at the senior level. The output of an L5A engineer isn't code; it's the impact that code has on users and on the team. Great code that no one uses is a failure, full stop.
Tips
- Pick something that's actually a failure. Not "I cared too much about quality."
- Own it without caveats. Don't say "in hindsight" — say "I didn't do X."
- The lesson should connect directly to your current behavior. Show the growth.
Pre-flight Checklist for HM Round
Do all of these before the interview. Check each box honestly.
Project deep-dive prep (for the 45-minute past-project section)
- [ ] Chosen project fits L5A scope — 1-3 quarters, 2-4 engineers, your leadership clear, impact quantified. See the scope-alignment section above.
- [ ] Memorized numbers. Users affected, latency before/after, revenue impact, timeline, team size. Numbers off the top of your head, not from notes.
- [ ] Three alternatives you rejected. For your main architectural choice, know the alternatives you considered and why you chose yours. "I didn't consider alternatives" is a failure answer.
- [ ] Three things that went wrong. Real failures, what you did about them.
- [ ] Three things you'd do differently. Concrete, not "I'd document more."
- [ ] Your specific contribution vs the team's. "I" moments vs "we" moments clearly separated.
- [ ] One written artifact (RFC, postmortem, design doc) you can describe in detail if asked.
- [ ] A one-page doc you've written for yourself summarizing all of the above. Re-read morning of.
STAR story prep
- [ ] 6-8 distinct stories written down with Situation / Task / Action / Result.
- [ ] Each story has at least one quantified result. Numbers, not adjectives.
- [ ] Each story mapped to 1-2 Uber values. Go Get It / See Every Side / Build With Heart at minimum.
- [ ] Timing rehearsed. Each story 90-120 seconds out loud. No longer.
- [ ] You can swap stories mid-answer. If they ask "do you have another example?" you have a backup ready.
- [ ] Variety. Stories span: technical leadership, conflict, failure, mentorship, user empathy, cross-functional.
"Why Uber?" prep
- [ ] Written out, under 90 seconds. Rehearse out loud at least three times.
- [ ] Three distinct reasons. Scale, product surface area, frontend engineering culture (or your own three).
- [ ] Specific references. Blog posts, a framework history, a product you use. Generic praise ("Uber is innovative") is death.
- [ ] No mention of comp, brand, or "I've always wanted to work here."
Reverse questions (prepare 5-7, ask 2-3)
- [ ] For engineers: "What's your team's biggest tech debt and what's blocking paydown?"
- [ ] For HM: "How would you measure my success at 6 months?"
- [ ] For HM: "What's a project you'd want an L5A to drive in their first 2 quarters?"
- [ ] For HM: "What's the biggest culture shift the team has been through in the last year?"
- [ ] For anyone: "What's the most surprising thing about working on this team?"
Logistics
- [ ] Environment set up 10 minutes early. Good lighting, camera at eye level, quiet background, water within reach.
- [ ] Notepad open for names, interesting follow-ups to ask about.
- [ ] Printed one-page cheatsheet — just story titles and key numbers, not full scripts. Scripts make you sound robotic.
Mental prep
- [ ] One mock HM round with a peer who will push back hard, in the 48 hours before the real interview.
- [ ] Between rounds: 5-minute walk, no email. You need to reset.
- [ ] Before the HM round specifically: re-read your chosen project's one-page doc.
Common Pitfalls (last-second checklist)
| Pitfall | How it sounds | How to fix |
|---|---|---|
| "We" when you mean "I" | "We decided to migrate..." | "I proposed and then we agreed..." |
| Blaming others | "The PM forced us to..." | "We disagreed on timing; I proposed X..." |
| Vague metrics | "performance improved a lot" | "p95 went from 480ms to 110ms" |
| Over-rehearsed tone | Sounds scripted | Vary pacing, allow pauses |
| No failures | "I can't think of a big mistake" | Always have a real failure ready |
| Defensiveness on push-back | "Actually that's not quite right" | "Good point — I hadn't framed it that way" |
| Exceeding 2:30 | Interviewer's eyes glaze | Practice with a timer, hard-stop at 2:15 |
| Scope mis-alignment | Project sounds too small or too vast | Use the L5A scope criteria above |
Cross-References
| Topic | Where |
|---|---|
| Interview loop, HM round dynamics | 01-interview-loop |
| Amazon LP behavioral (overlapping themes — conflict, ownership, failure) | /amazon-prep/notes/01-leadership-principles-part1 |
| Project framing and system design for HM deep-dive | Uber Prep — HLD |