02 — The Delivery Framework
Why You Need a Framework
Senior engineers often resist frameworks. The reflex is understandable — frameworks smell like the kind of scaffolding a junior needs to hide the fact that they've never designed a system before. But that reflex misreads the room. A framework in an LLD interview isn't training wheels. It's a speaker's notes tool that keeps you out of three specific failure modes that experienced candidates fall into under pressure.
The three failure modes a framework prevents
- The solutioning sprint. You hear "design a parking lot" and you're drawing `Vehicle`, `Slot`, `Ticket` before the interviewer has stopped talking. Ten minutes in, you realize they wanted reservations, multi-level pricing, and EV charging — none of which fit your skeleton. You now have a choice between erasing and panicking.
- The infinite clarify. You've read that asking questions is good, so you ask questions. Fifteen minutes later, there's still nothing on the whiteboard. The interviewer is scribbling notes you can't see, and they say "great, let's see some code" in a tone that isn't encouraging.
- The "tour of everything." You know a lot. You want them to see that. You mention the Observer pattern, mention you could use a priority queue, mention thread safety, mention sharding — all inside three minutes, none tied to the problem. The interviewer walks away with a blurry impression of a candidate who can't commit to a design.
The framework is what prevents all three. It gives you permission to commit to scope at minute 8, permission to stop clarifying at minute 5, and permission to defer cool ideas until the deep-dive. It's not a script — the beats vary with the problem — but the sequence and the time allocation are close to invariant across companies.
If you've done 15 LLD interviews as an interviewer (as most SDE3+ have), you've seen strong candidates implicitly use this framework without knowing its name. Explicit use just lets you move faster on the parts that don't need thought so you have more time on the parts that do.
The Framework at a Glance
An eight-step loop for a 45-minute interview. Longer interviews stretch step 8; shorter ones compress step 5.
| # | Step | Target | What lives on the whiteboard |
|---|---|---|---|
| 1 | Clarify Requirements | 5 min | Bullet list of in-scope operations + 2-3 nonfunctional constraints |
| 2 | Define Scope | 3 min | Explicit in-scope / out-of-scope split |
| 3 | Identify Core Entities | 5 min | 4-7 nouns with one-line responsibilities |
| 4 | Design Contracts / Interfaces | 5 min | Interface signatures for the primary abstractions |
| 5 | Build Class Relationships | 8 min | Class sketch with ownership arrows and key methods |
| 6 | Handle Edge Cases | 5 min | Short list of boundary + failure scenarios with resolutions |
| 7 | Discuss Scale and Extensibility | 5 min | 2-3 named future requirements and the seam each plugs into |
| 8 | Optimize if Asked | ~9 min | Deep-dive: concurrency, algorithmic choice, or an alternative design |
Total: ~45 min. Notice that actual code-writing isn't a standalone step — it's spread across steps 4, 5, and 8. This is deliberate: the board should never be blank, but it also shouldn't be a wall of code with no narrative structure.
Step 1: Clarify Requirements (5 min)
Goal
Convert a one-line prompt into a bounded, prioritized list of functional and nonfunctional requirements that you and the interviewer agree on. By the end of step 1, both of you should be able to say "the system does X, Y, Z and explicitly does not do W."
Exact phrases to use
"Before I sketch entities, I want to pin down three things: the primary user actions, any scale or concurrency constraints, and whether persistence is in scope."
"Let me restate what I heard and you tell me what I got wrong: [one-sentence restatement]."
"I'll ask a few clarifying questions. I'll lead with the ones that would change the data model, then the ones that would change the interface."
"Is this a single-writer system or do I need to reason about concurrent mutations?"
"For this problem, is the bar 'correct for a small workload' or 'correct at production scale'? The answer changes my storage choice."
What to clarify
Clarifying questions sort into four tiers. Spend your 5 minutes top-down — ask tier 1 first, tier 4 last, skip a tier entirely if the prompt answers it.
1. Questions that change the data model. What are the core operations? What's the read/write ratio? Are reads point-lookups or range scans?
2. Questions that change the interface. Who's the caller — a human UI, another service, a batch job? Are operations synchronous or can they be async?
3. Questions that change the invariants. Single-writer or concurrent? Is ordering guaranteed? Is a dirty read acceptable? Is eventual consistency okay?
4. Questions that change scope. Persistence? Auth? Logging? Metrics? Multi-tenancy?
Common failure mode
Asking questions at random or in random order. A candidate who asks "what language should I use?" in minute 1 before asking "is this system concurrent?" signals that they can't prioritize. Prioritize by blast radius: a wrong answer to "is it concurrent?" costs you 20 minutes of redesign. A wrong answer to "what language?" costs you nothing.
Success signal
The interviewer says "good questions" or starts volunteering constraints without being asked. That's them signaling that you've hit the important axes and they're ready to move on. Take the hint — move on.
Step 2: Define Scope (3 min)
Goal
Produce two lists on the whiteboard: In Scope and Out of Scope. The second list is more important than the first. It's what protects you from the interviewer (or yourself) smuggling requirements back in at minute 35.
Exact phrases to use
"Here's what I'm treating as in scope: [list]. Here's what I'm explicitly not solving: [list]. Let me know if any of the out-of-scope items need to come back in."
"I want to flag one tradeoff in my scope: I'm treating persistence as out of scope to fit the timebox. If we have time at the end, I'll sketch how I'd add it."
"I'm going to scope down to single-node, single-writer. A distributed version is a different design and I don't want to hand-wave it."
What to clarify
- What must work for the design to be considered complete.
- What's interesting but optional — you'll revisit only if time permits.
- What's a rabbit hole — multi-region replication, full-text search, ML ranking — that you're not going near.
Common failure mode
Writing an In-Scope list without an Out-of-Scope list. You've said yes to everything and no to nothing. The interviewer's "what if we added X?" now sounds like a requirements gap, not an extension question, because you never drew the line.
Success signal
When the interviewer says "what about persistence?", you can point to the out-of-scope column and say "I explicitly scoped that out for now, happy to add it in the last 5 minutes." That line reframes the question from gap to extension.
Step 3: Identify Core Entities (5 min)
Goal
List the 4-7 nouns that make up the domain. Each gets a one-sentence responsibility. You are not designing classes yet — you are building the dictionary of the conversation.
Exact phrases to use
"Let me name the entities first. I'll name them from the domain, not from the patterns — `ParkingLot`, `Slot`, `Ticket`, not `Manager`, `Factory`, `Strategy`."
"I'm going to resist the urge to add entities I don't have a requirement for yet. If we need them later, I'd rather add them deliberately than build in abstractions I can't justify."
"Here's my entity list. Before I pick interfaces, do these match the domain as you see it?"
What to clarify
- Which entities are aggregates (own their children's lifecycle) vs. references (point at them)?
- Which entities are value objects (immutable, compared by value) vs. entities (identity matters)?
- Which entities carry behavior vs. being pure data?
These three questions aren't always asked out loud in the interview, but making them explicit in your head is what separates a domain model from a flat list of nouns.
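The value-object vs. entity split can be made concrete with a minimal Java sketch. The names `SlotId` and `Slot` are illustrative, not prescribed by any particular problem: the point is that the value object is immutable and compared by value, while the entity carries identity and mutable state.

```java
import java.util.Objects;

// Value object: immutable, compared by value. Two SlotIds with the
// same string ARE the same id.
final class SlotId {
    private final String value;
    SlotId(String value) { this.value = Objects.requireNonNull(value); }
    @Override public boolean equals(Object o) {
        return o instanceof SlotId && ((SlotId) o).value.equals(value);
    }
    @Override public int hashCode() { return value.hashCode(); }
}

// Entity: identity matters, state mutates. Two Slot instances with
// the same id are still distinct objects with independent state.
class Slot {
    private final SlotId id;
    private boolean occupied;
    Slot(SlotId id) { this.id = id; }
    SlotId id() { return id; }
    boolean isOccupied() { return occupied; }
    void occupy() { occupied = true; }
}
```

Saying this distinction out loud ("`SlotId` is a value object, `Slot` is an entity") is a cheap, high-signal line in the interview.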
Common failure mode
Over-naming. A candidate who lists twelve entities for a parking-lot problem is solving a problem that isn't on the whiteboard. Every entity is a future class, a future test, a future bug. Ruthlessly delete anything whose responsibility overlaps with another.
Under-naming. The opposite trap — collapsing Reservation and Ticket into one entity because "they both hold a slot ID." They have different lifecycles, different mutation rules, different consumers. Keep them separate.
Success signal
Every entity has one sentence of responsibility you can say without hedging. If you catch yourself saying "SlotManager sort of manages slots and also kind of tracks availability and also..." — you have a responsibility smell. Split it.
Step 4: Design Contracts / Interfaces (5 min)
Goal
Write the interface signatures for the 2-3 most important abstractions. Not all of them — just the ones where the contract itself is a design decision.
Exact phrases to use
"Before I implement, I want to write the interfaces. The interface is the contract — it's where I make my behavioral commitments."
"I'm using an interface here, not an abstract class, because I want to allow multiple unrelated implementations. If I had shared state between implementations I'd use an abstract class."
"Notice the return type is `Result<T>`, not `T`. I want the caller to have to handle the failure case — silent nulls are a source of bugs at 2am."
What to clarify
- Input / output types — is it a domain object, a primitive, a wrapper type?
- Error signaling — exceptions, return codes, `Result<T>`?
- Preconditions — what does the caller have to guarantee?
- Postconditions — what does the implementer guarantee?
- Idempotency — is calling the method twice equivalent to calling it once?
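A contract-first sketch in Java might look like the following. The `Result<T>` type, the `SlotRegistry` interface, and the in-memory implementation are all illustrative; the point is that the signature itself encodes the error-signaling and idempotency decisions from the list above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A minimal Result type: forces the caller to handle failure explicitly
// instead of checking for null or catching an unchecked exception.
final class Result<T> {
    private final T value;       // present on success
    private final String error;  // present on failure
    private Result(T value, String error) { this.value = value; this.error = error; }
    static <T> Result<T> ok(T value)      { return new Result<>(value, null); }
    static <T> Result<T> fail(String err) { return new Result<>(null, err); }
    boolean isOk()           { return error == null; }
    Optional<T> value()      { return Optional.ofNullable(value); }
    Optional<String> error() { return Optional.ofNullable(error); }
}

interface SlotRegistry {
    // Precondition: slotId and ticketId are non-null.
    // Postcondition: on ok, the slot is held by ticketId.
    // Idempotency: reserving a slot you already hold returns ok;
    // reserving a slot held by a DIFFERENT ticket returns fail.
    Result<String> reserve(String slotId, String ticketId);
}

class InMemorySlotRegistry implements SlotRegistry {
    private final Map<String, String> held = new HashMap<>();
    public Result<String> reserve(String slotId, String ticketId) {
        String owner = held.putIfAbsent(slotId, ticketId); // null if newly claimed
        if (owner == null || owner.equals(ticketId)) return Result.ok(slotId);
        return Result.fail("slot " + slotId + " already held");
    }
}
```

Notice how much of the step-4 checklist lives in three lines of comments on one method: that density is what "the contract is a design decision" means in practice.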
Common failure mode
Writing a class with public methods and calling it "the interface." A class isn't an interface — it commits you to one implementation. If you don't extract the interface, you can't swap implementations, you can't mock in tests, and step 7 (extensibility) has nothing to hang on.
The other failure: writing interfaces for every class. Most classes don't need an interface. You need interfaces where there are real or plausible alternative implementations. A `Ticket` data class doesn't need an `ITicket` — that's a C#-ism from a 2005 codebase.
Success signal
The interviewer asks "how would you swap X for Y?" and your answer is "it's behind the IX interface, so it's a new class, not a refactor." That line is the whole point of steps 3 and 4.
Step 5: Build Class Relationships (8 min)
Goal
Draw the class sketch: classes, their fields, their key methods, and the ownership / dependency arrows between them. This is the largest single block of time in the framework because it's where the design actually lives.
Exact phrases to use
"I'll sketch the classes now. I'll fill in method bodies selectively — only the ones where the logic is the design, not the ones where it's obvious."
"This arrow is ownership — `ParkingLot` owns `Slot` instances; their lifecycles are tied. This one is a dependency — `Ticket` references a `Slot`, it doesn't own it."
"I'm injecting `PricingStrategy` through the constructor, not constructing it inside `ParkingLot`. That's what makes it testable and swappable."
"I'm keeping `ParkingLot` as the facade. Callers don't touch slots directly — they go through the lot."
What to clarify
- Composition vs. inheritance. Default to composition. Inheritance is for cases where the subclass is a literal behavioral substitute (LSP) — rare in LLD problems.
- Mutability. Which objects are immutable value objects? Which are mutable entities? Which are mutable with invariants (need internal locking)?
- Construction. Who constructs what? Direct constructor calls, factory, builder, DI?
- Ownership. If A owns B, A controls B's lifecycle. If A depends on B, A just uses B. This distinction drives where nulls, destructors, and teardown logic live.
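The ownership and injection phrases above can be shown in a compact Java sketch. This is a deliberately tiny model (a parking lot with boolean slot occupancy and a flat-rate strategy, both illustrative), not a full design; what matters is which object constructs what.

```java
import java.util.ArrayList;
import java.util.List;

// The seam: pricing varies, so it's an injected interface.
interface PricingStrategy {
    long price(long minutesParked); // fee in cents
}

class FlatRate implements PricingStrategy {
    public long price(long minutesParked) { return 2 * minutesParked; } // 2 cents/min
}

// Facade: callers never touch slots directly.
class ParkingLot {
    private final List<Boolean> slots;      // OWNED: the lot creates and controls them
    private final PricingStrategy pricing;  // INJECTED: a dependency, not constructed here

    ParkingLot(int capacity, PricingStrategy pricing) {
        this.slots = new ArrayList<>();
        for (int i = 0; i < capacity; i++) slots.add(false);
        this.pricing = pricing;
    }

    // Returns the index of the assigned slot, or -1 if the lot is full.
    int park() {
        for (int i = 0; i < slots.size(); i++) {
            if (!slots.get(i)) { slots.set(i, true); return i; }
        }
        return -1;
    }

    // Frees the slot and returns the fee computed by the injected strategy.
    long checkout(int slot, long minutes) {
        slots.set(slot, false);
        return pricing.price(minutes);
    }
}
```

The constructor signature is the whole argument: `ParkingLot` builds its own slots (ownership) but receives its `PricingStrategy` (dependency), which is exactly the distinction the arrows on the whiteboard encode.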
Common failure mode
Drawing without method signatures. A box labeled "ParkingLot" with no methods tells the interviewer nothing. Write the 3-5 key public methods with their signatures. You don't need bodies for all of them — but the signatures show whether you understand the contracts.
Pattern-dropping. Saying "I'll use the Strategy pattern here" without explaining which behavior varies and why that variation matters. The pattern is a compression of the explanation, not a substitute for it.
Success signal
The interviewer nods, then asks a concrete follow-up about a specific method or relationship. That means they've accepted the skeleton and want to probe depth. If instead they ask "can you explain what X does again?" — your sketch is under-annotated.
Step 6: Handle Edge Cases (5 min)
Goal
Enumerate the boundary conditions and failure modes, and resolve each one in the design. This is where senior candidates separate from mid-level: junior candidates wait to be asked about edge cases; seniors volunteer them.
Exact phrases to use
"Let me walk the edge cases before we talk extensibility. I'd rather surface them than have them surface in review."
"Three categories: boundary conditions, concurrency, and failure. For each, here's how the design handles it."
"This one — a reservation expiring while it's being paid — is a race. I'd handle it with optimistic locking on the reservation version, and the pay path returns a `Conflict` the caller retries."
"Empty input, single input, max-size input. Let me walk what happens at each."
What to clarify
- Boundary conditions. Empty, single, at-capacity, just-over-capacity.
- Concurrency hazards. What happens if two callers hit the same method simultaneously?
- Partial failure. What if step 2 of a 3-step operation fails? Do we roll back, compensate, or leave the system in a half-state?
- Idempotency. What if the caller retries? Do we do the thing twice?
- Time-based edge cases. Clock skew, reservation expiry, TTL semantics.
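The expiry-versus-payment race quoted above can be sketched with a version-checked state machine. This is one possible shape, assuming a `Reservation` with a monotonically increasing version; the names and the `PayOutcome` enum are illustrative.

```java
// Optimistic-locking sketch: the pay path carries the version it read.
// If the version changed (e.g. the expiry sweep fired first), pay
// returns CONFLICT instead of silently double-mutating the reservation.
class Reservation {
    enum PayOutcome { PAID, CONFLICT }

    private int version = 0;
    private String state = "HELD"; // HELD -> PAID, or HELD -> EXPIRED

    synchronized int currentVersion() { return version; }

    // Background expiry sweep path.
    synchronized void expire() {
        if (state.equals("HELD")) { state = "EXPIRED"; version++; }
    }

    // Payment path: succeeds only if nothing mutated since the caller read.
    synchronized PayOutcome pay(int expectedVersion) {
        if (version != expectedVersion || !state.equals("HELD"))
            return PayOutcome.CONFLICT; // caller retries or aborts
        state = "PAID";
        version++;
        return PayOutcome.PAID;
    }
}
```

This is the difference between acknowledging the race and resolving it: the resolution names the mechanism (version check), the location (inside `pay`), and the caller's obligation (handle `CONFLICT`).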
Common failure mode
Listing edge cases but not resolving them. "What if two people reserve the same slot?" — yes, what do you do? If the answer is "I'd add a lock" and you don't say where, you haven't handled the case, you've just acknowledged it.
The other trap: inventing edge cases that don't exist in the scope. If the problem is single-writer, don't burn 2 minutes explaining how you'd handle concurrent writes. Protect the timebox.
Success signal
The interviewer adds an edge case you didn't think of, and you absorb it into the existing design without erasing anything. That's the signal that your design is robust to perturbation — which is half of what senior-level means.
Step 7: Discuss Scale and Extensibility (5 min)
Goal
Show how the design absorbs two or three future requirements without being redesigned. This is where the interfaces from step 4 pay off. If you did step 4 correctly, step 7 is almost free.
Exact phrases to use
"Three extensions I'd flag. For each, I'll name the seam where it plugs in."
"If we wanted multi-level pricing, it's a new `PricingStrategy` implementation. Zero changes to `ParkingLot`."
"If we needed this to scale horizontally, the bottleneck would be the slot-availability store. I'd partition by floor or zone, which the design already accommodates because `SlotRegistry` is an interface."
"One extension I'd not handle cleanly in this design is distributed reservations across regions. That's a rewrite, not an extension — I'd flag that early to the PM."
What to clarify
- Which requirements are within a hop of the current design (new strategy, new handler, new subclass)?
- Which require schema / contract changes (new field, new interface method)?
- Which are a rewrite (distributed, multi-tenant, collaborative)?
Being honest about the third bucket is a senior signal. Junior candidates claim the design handles everything.
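The "within a hop" bucket can be made concrete. Assuming a pricing seam like the one quoted above (the interface shape here, with a `level` parameter, is illustrative), per-level pricing lands as a new implementation rather than a refactor:

```java
// The seam: everything that charges money depends on this interface.
interface PricingStrategy {
    long price(int level, long minutes); // fee in cents
}

// The original behavior: one rate everywhere.
class UniformPricing implements PricingStrategy {
    public long price(int level, long minutes) { return 2 * minutes; }
}

// The extension: purely additive. No existing class changes; callers
// are handed a different implementation at construction time.
class PerLevelPricing implements PricingStrategy {
    private final long[] centsPerMinuteByLevel;
    PerLevelPricing(long[] centsPerMinuteByLevel) {
        this.centsPerMinuteByLevel = centsPerMinuteByLevel.clone();
    }
    public long price(int level, long minutes) {
        return centsPerMinuteByLevel[level] * minutes;
    }
}
```

Being able to point at a file that would be *added*, rather than a file that would be *edited*, is the cleanest form of the extensibility argument.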
Common failure mode
Vague scalability claims. "This scales because I used an interface" is not a scale argument. Scale is about where the bottleneck is and how much the design bends before it breaks. Name the bottleneck, name the next step, name what it costs.
Infinite extensibility. Claiming the design accommodates every possible future requirement is a red flag. It either means you've over-abstracted (YAGNI violation) or you're blustering. Pick your strong extensions and defend them; acknowledge the weak ones.
Success signal
The interviewer says "okay, let's go deeper on X" — picking one of your extensions. That's them validating that your extension list was credible and they want to see you execute on one.
Step 8: Optimize if Asked (rest of time)
Goal
Take the design deeper along an axis the interviewer picks — usually concurrency, algorithmic optimization, or an alternative data structure. You don't initiate this step; the interviewer does, by asking a specific question.
Exact phrases to use
"Before I optimize, let me confirm the constraint you're asking about — is it throughput, memory, tail latency, or something else?"
"Two options. Option A: a lock around the whole registry — simple, but it serializes writes. Option B: per-partition locks — more throughput, more code. At the scale you described, I'd start with A and measure."
"I'd push back slightly on optimizing this path. The call frequency is low, and the simple version is easier to prove correct. I'd optimize the [other path] first."
"To make this thread-safe, I'd make `SlotRegistry` responsible for its own locking — callers don't reason about it. The critical section is just the `reserve` method."
What to clarify
- What axis is being optimized — latency, throughput, memory, correctness under concurrency, code simplicity?
- What's the current bottleneck in the design? Don't optimize what isn't slow.
- What's the cost of the optimization? Code complexity, testability, operational burden?
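"Option A" from the exact phrases above — one coarse lock owned by the registry itself — might look like this sketch (the class and method names are illustrative). The per-partition variant would replace the single lock with one lock per floor or zone behind the same public methods.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

// Coarse-grained locking: simple to prove correct, serializes all writes.
// The registry encapsulates its own lock, so callers stay oblivious.
class LockedSlotRegistry {
    private final ReentrantLock lock = new ReentrantLock();
    private final Set<String> taken = new HashSet<>();

    // The entire critical section: check-and-claim is atomic.
    boolean reserve(String slotId) {
        lock.lock();
        try {
            return taken.add(slotId); // false if already reserved
        } finally {
            lock.unlock();
        }
    }

    boolean release(String slotId) {
        lock.lock();
        try {
            return taken.remove(slotId);
        } finally {
            lock.unlock();
        }
    }
}
```

This sketch also makes the cost argument concrete: the coarse lock is a handful of lines and trivially correct, which is exactly why "start with A and measure" is a defensible position.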
Common failure mode
Optimizing without a target. Launching into "I'd use a red-black tree" when the interviewer asked "how would this behave under load?" — those aren't the same question. Pin down the axis before you change anything.
Gold-plating. Making six optimizations when one was asked. Each optimization needs its own justification, and each one raises the surface area the interviewer can probe. Land one, land it well, let them ask for more.
Success signal
The interviewer stops asking "what if" and starts asking implementation-level questions ("would you use a ReentrantLock or a synchronized block?"). That's them validating that you're at the right altitude and probing your language-level fluency.
Timing Table
A minute-by-minute view of a 45-minute interview. What should be on the board at each checkpoint, and what you should be saying.
| Time | On the board | What you're saying |
|---|---|---|
| T+0 | Nothing | "Let me restate the problem and ask a few clarifying questions." |
| T+5 | Functional requirements list + 2-3 nonfunctional constraints | "Here's what I heard. Before I design, let me confirm scope." |
| T+8 | In-scope / out-of-scope columns | "I'm scoping out persistence and multi-node. I'll sketch extensions if time permits." |
| T+13 | 4-7 named entities with one-line responsibilities | "Before I pick interfaces, do these entities match your mental model?" |
| T+18 | 2-3 interface signatures for the critical abstractions | "These are the contracts. Implementations are next." |
| T+26 | Full class sketch with method signatures and relationship arrows | "That's the core design. Let me walk edge cases before we talk extensions." |
| T+31 | Edge-case list with resolutions | "Three categories: boundaries, concurrency, failure. Here's how each is handled." |
| T+36 | Extension list with seams named | "Three plausible future requirements and where each plugs in." |
| T+45 | Deep-dive on one axis the interviewer picked | "Summary: design supports X, scales to Y, extends to Z cleanly, and has these known tradeoffs." |
The specific times drift with the problem — a concurrency-heavy problem shifts time from step 7 to step 6 — but the sequence should not. If at T+20 your board is still blank because you're arguing about entities, that's the framework telling you to move on.
The 30-minute rule, restated
By T+30, the class sketch should exist and you should be answering "what if we added X?" questions. Candidates still finalizing entity names at T+30 almost never recover.
Anti-Patterns — Ways Candidates Sabotage the Framework
Jumping to code too early. You skip steps 1-4 because "code is faster than talking." The interviewer sees a candidate who can't think at the abstraction level they're hiring for. Writing code is a debugging tool for your design, not a substitute for designing.
Pattern-dropping without context. Naming "Strategy" or "Observer" as if the name is the design. The pattern name compresses the explanation — it doesn't replace it. Say what varies, why it varies, and who the caller is before you say "Strategy."
Not naming assumptions. You silently assume single-threaded, in-memory, one tenant. The interviewer assumes the opposite. You both proceed for 20 minutes before discovering the mismatch. Speak every assumption out loud and write the big ones on the whiteboard.
Clarifying infinitely. Eight minutes in, still asking questions, nothing on the board. Clarifying past the 5-minute mark is a defense mechanism against committing. Commit with the information you have, note what you'd confirm with more time, and move on.
Erasing instead of evolving. Interviewer asks "what if we need X?" You erase a class and start rebuilding. That action itself is a negative signal — it tells them your design has no seams. Even if the new design is better, you've shown you don't design for extensibility. Evolve in-place; add, don't rewrite.
Tour-of-everything. Mentioning every pattern, technology, and tradeoff you know, unanchored to the problem. It sounds comprehensive; it reads as unfocused. Pick the 2-3 things that matter most for this problem and go deep.
Ignoring the interviewer's tells. They say "interesting, how about concurrency?" and you answer the concurrency question in one sentence and go back to your original flow. That one sentence was not the answer they wanted. Interviewer probes are invitations to spend 3-5 minutes in that region.
Defending instead of absorbing. "That wouldn't happen because..." when the interviewer asks about an edge case. Even if you're right, the reflex reads as defensive. Absorb the case first, then show why the design already handles it (or adapt the design if it doesn't). "Good catch, here's how I'd handle it" beats "that's out of scope" nine times out of ten.
The Pushback Playbook
Senior interviews are conversations, not monologues. The interviewer will push. Here's how to handle the six probes that come up in almost every interview, with the exact phrases to deploy.
Probe 1: "What if we added X?"
The most common probe. They're testing whether your design has seams.
"Good extension to think through. Where it plugs in: [name the interface or class]. The change is [additive / a new implementation / a new method]. Existing callers don't change."
If your design can't absorb X cleanly:
"Honest answer — this design would need a change to absorb X cleanly. I'd add an interface at [seam] and refactor [class] to depend on it. That's a 30-minute refactor, not a rewrite."
The second version is fine. Admitting a design isn't infinitely flexible is a senior signal; pretending it is, is not.
Probe 2: "Can this scale?"
They want a concrete bottleneck and a concrete next step, not a reassurance.
"Depends on the axis. For reads, the bottleneck is [X] — I'd cache. For writes, the bottleneck is [Y] — I'd partition by [Z]. Each has a cost: the cache adds staleness, the partitioning adds cross-partition query complexity."
If the problem is explicitly in-scope for single-node:
"I scoped this to single-node. To scale horizontally, I'd pull [stateful class] behind an interface and back it with [store]. The rest of the design wouldn't change — that's the seam."
Probe 3: "Is this thread-safe?"
The worst answer is "yes" without elaboration. The second-worst is "no" without a plan.
"Not out of the box. The contended resource is [X] inside [class]. I'd make [class] responsible for its own locking — the critical section is [method]. Callers stay oblivious to the lock."
"Two options for the locking strategy: one coarse lock around the whole class, or a finer lock per-[unit]. At the contention level we'd expect, I'd start coarse and measure."
Key move: pinpoint the contended resource. Juniors hand-wave about "thread safety." Seniors name the resource.
Probe 4: "Why did you pick X over Y?"
They're checking whether your choice was deliberate or memorized.
"Three criteria I weighed: [complexity, performance, testability]. X wins on [A, B], loses on [C]. For this problem, A and B matter more than C because [domain reason]. If the workload shifted toward C, I'd reconsider."
Never answer with "I'm familiar with X" — that's the answer that fails the level bar.
Probe 5: "Isn't this overengineered?"
Sometimes the interviewer pushes the other direction — challenging you to simplify.
"Fair pushback. The abstraction I'd defend is [X] because [concrete pressure]. The one I'd drop if we're optimizing for simplicity is [Y] — we can always add it back when the need is real."
Being willing to remove abstractions is a senior signal. Junior candidates accumulate; seniors subtract.
Probe 6: "How would you test this?"
They want to hear about seams and observable behavior, not "I'd write unit tests."
"Three layers. Unit tests against [core class] — easy because [dependency] is injected through an interface so I can fake it. Integration tests across [module boundary]. One end-to-end test on the happy path. I'd put the bulk of effort at the unit level."
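As a sketch of the unit layer: because the strategy is constructor-injected (as in step 5), a hand-rolled fake stands in for real pricing and the test exercises the consuming class in isolation. All names here are illustrative.

```java
// The injected seam from step 5, repeated here for self-containment.
interface PricingStrategy {
    long price(long minutes);
}

class ParkingLot {
    private final PricingStrategy pricing; // injected, so a test can substitute it
    ParkingLot(PricingStrategy pricing) { this.pricing = pricing; }
    long checkout(long minutes) { return pricing.price(minutes); }
}

// Test double: a fixed-price fake. No mocking framework needed when
// the dependency is a one-method interface.
class FixedPriceFake implements PricingStrategy {
    public long price(long minutes) { return 42; }
}
```

The fee coming back unchanged regardless of duration is the point: the fake is in control, so any failure in the test implicates `ParkingLot` alone.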
The framework is the floor, not the ceiling
You will run every interview against this framework. The quality of what you put in each box — the specificity of your questions, the sharpness of your abstractions, the depth of your edge-case analysis — is where SDE3 separates from SDE2. The framework gets you to the starting line consistently. The substance wins the race.