07 - Low-Level Design Problems
Why This Matters for Temple
Temple (ex-Zomato founding team) builds core food delivery infrastructure. LLD rounds test whether you can translate business requirements into clean, extensible class designs. Expect questions around order flows, notification pipelines, and rate limiting — all bread-and-butter for a high-traffic food delivery platform.
Quick Reference
Scan this table in 5 minutes before your interview.
| Problem | Key Classes | Key Patterns | Complexity Focus |
|---|---|---|---|
| Food Ordering System | User, Restaurant, MenuItem, Order, DeliveryAgent, OrderStatus | State Machine, Enum-driven transitions | State validation, illegal transition prevention |
| Notification System | NotificationChannel, EmailChannel, SMSChannel, PushChannel, NotificationService | Observer, Strategy, Template Method | Open/Closed principle, adding new channels without modifying existing code |
| Rate Limiter | FixedWindowLimiter, SlidingWindowLogLimiter, TokenBucketLimiter, LeakyBucketLimiter | Strategy (swappable algorithms), Template Method | Time complexity per request, memory trade-offs |
Problem 1: Food Ordering System
Requirements
- Users can browse restaurants and place orders with multiple menu items
- Orders follow a strict state machine: not every transition is legal
- Delivery agents are assigned after the order is confirmed
- Cancellation is only allowed from `PLACED` or `CONFIRMED` states
- Each state transition is validated before execution
Key Classes
- User — customer placing the order
- Restaurant — has a menu of items
- MenuItem — name, price, availability
- Order — ties user, restaurant, items, status, and delivery agent together
- DeliveryAgent — assigned to fulfill the order
- OrderStatus — enum representing the state machine
State Machine
PLACED ──→ CONFIRMED ──→ PREPARING ──→ READY ──→ PICKED_UP ──→ DELIVERED
  │            │
  ▼            ▼
CANCELLED  CANCELLED
Full Implementation
// ── Enums & Types ──────────────────────────────────────────────
enum OrderStatus {
PLACED = "PLACED",
CONFIRMED = "CONFIRMED",
PREPARING = "PREPARING",
READY = "READY",
PICKED_UP = "PICKED_UP",
DELIVERED = "DELIVERED",
CANCELLED = "CANCELLED",
}
// Legal transitions map — the single source of truth
const VALID_TRANSITIONS: Record<OrderStatus, OrderStatus[]> = {
[OrderStatus.PLACED]: [OrderStatus.CONFIRMED, OrderStatus.CANCELLED],
[OrderStatus.CONFIRMED]: [OrderStatus.PREPARING, OrderStatus.CANCELLED],
[OrderStatus.PREPARING]: [OrderStatus.READY],
[OrderStatus.READY]: [OrderStatus.PICKED_UP],
[OrderStatus.PICKED_UP]: [OrderStatus.DELIVERED],
[OrderStatus.DELIVERED]: [],
[OrderStatus.CANCELLED]: [],
};
// ── Domain Classes ─────────────────────────────────────────────
class MenuItem {
constructor(
public readonly id: string,
public readonly name: string,
public readonly price: number,
public available: boolean = true
) {}
}
class Restaurant {
private menu: Map<string, MenuItem> = new Map();
constructor(public readonly id: string, public readonly name: string) {}
addMenuItem(item: MenuItem): void {
this.menu.set(item.id, item);
}
getMenuItem(itemId: string): MenuItem | undefined {
return this.menu.get(itemId);
}
getAvailableItems(): MenuItem[] {
return [...this.menu.values()].filter((item) => item.available);
}
}
class User {
constructor(
public readonly id: string,
public readonly name: string,
public readonly phone: string
) {}
}
class DeliveryAgent {
public isAvailable: boolean = true;
constructor(
public readonly id: string,
public readonly name: string
) {}
}
// ── Order (the core state machine) ─────────────────────────────
class Order {
private status: OrderStatus = OrderStatus.PLACED;
private deliveryAgent: DeliveryAgent | null = null;
public readonly createdAt: Date = new Date();
constructor(
public readonly id: string,
public readonly user: User,
public readonly restaurant: Restaurant,
public readonly items: { item: MenuItem; quantity: number }[]
) {}
getStatus(): OrderStatus {
return this.status;
}
getTotal(): number {
return this.items.reduce(
(sum, { item, quantity }) => sum + item.price * quantity,
0
);
}
// ── State transition (single method, validates every move) ──
private transitionTo(newStatus: OrderStatus): void {
const allowed = VALID_TRANSITIONS[this.status];
if (!allowed.includes(newStatus)) {
throw new Error(
`Invalid transition: ${this.status} → ${newStatus}`
);
}
console.log(`Order ${this.id}: ${this.status} → ${newStatus}`);
this.status = newStatus;
}
confirmOrder(): void {
this.transitionTo(OrderStatus.CONFIRMED);
}
startPreparing(): void {
this.transitionTo(OrderStatus.PREPARING);
}
markReady(): void {
this.transitionTo(OrderStatus.READY);
}
assignDeliveryAgent(agent: DeliveryAgent): void {
if (!agent.isAvailable) {
throw new Error(`Agent ${agent.id} is not available`);
}
this.deliveryAgent = agent;
agent.isAvailable = false;
}
pickUp(): void {
if (!this.deliveryAgent) {
throw new Error("Cannot pick up — no delivery agent assigned");
}
this.transitionTo(OrderStatus.PICKED_UP);
}
deliver(): void {
this.transitionTo(OrderStatus.DELIVERED);
if (this.deliveryAgent) {
this.deliveryAgent.isAvailable = true; // free the agent
}
}
cancelOrder(): void {
this.transitionTo(OrderStatus.CANCELLED);
if (this.deliveryAgent) {
this.deliveryAgent.isAvailable = true;
this.deliveryAgent = null;
}
}
}
// ── OrderService (orchestrates the flow) ───────────────────────
class OrderService {
private orders: Map<string, Order> = new Map();
private orderCounter = 0;
placeOrder(
user: User,
restaurant: Restaurant,
items: { item: MenuItem; quantity: number }[]
): Order {
const unavailable = items.filter((i) => !i.item.available);
if (unavailable.length > 0) {
throw new Error(
`Unavailable items: ${unavailable.map((i) => i.item.name).join(", ")}`
);
}
const orderId = `ORD-${++this.orderCounter}`;
const order = new Order(orderId, user, restaurant, items);
this.orders.set(orderId, order);
console.log(
`Order ${orderId} placed — ₹${order.getTotal()} from ${restaurant.name}`
);
return order;
}
getOrder(orderId: string): Order | undefined {
return this.orders.get(orderId);
}
}
Key Design Decisions
- Centralized transition map — `VALID_TRANSITIONS` is the single source of truth. Adding a new status (e.g., `OUT_FOR_DELIVERY`) means updating one object, not scattered if-else blocks.
- Private `transitionTo()` — all public methods (`confirmOrder`, `deliver`, etc.) delegate to one validation gate, so validation cannot be skipped.
- Agent lifecycle — `isAvailable` is toggled on assignment and on delivery/cancellation, preventing double-assignment.
Follow-Up Questions
- How would you add order history per user? (Store `Order[]` on `User`, or a separate `OrderHistory` service.)
- How would you handle partial cancellation (cancel one item)? (Move to item-level status or introduce `OrderItem` with its own state.)
- How would you add estimated delivery time? (Track timestamps per transition, compute deltas.)
- How would you make transitions event-driven for a microservices setup? (Emit events on each transition, other services subscribe.)
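The event-driven follow-up can be sketched with a minimal in-memory bus. Names like `OrderEventBus` are illustrative, not part of the design above; in a real microservices setup the bus would be Kafka, SQS, or similar:

```typescript
// Sketch: event-driven transitions via an in-memory bus.
// `OrderEventBus` and the payload shape are illustrative assumptions.
type OrderTransitionEvent = {
  orderId: string;
  from: string;
  to: string;
  at: Date;
};

type TransitionListener = (event: OrderTransitionEvent) => void;

class OrderEventBus {
  private listeners: TransitionListener[] = [];

  subscribe(listener: TransitionListener): void {
    this.listeners.push(listener);
  }

  publish(event: OrderTransitionEvent): void {
    for (const listener of this.listeners) listener(event);
  }
}

// Inside Order.transitionTo(), after the status change, you would add:
//   this.bus.publish({ orderId: this.id, from: oldStatus, to: newStatus, at: new Date() });

const bus = new OrderEventBus();
const seen: string[] = [];
bus.subscribe((e) => seen.push(`${e.orderId}:${e.from}->${e.to}`));
bus.publish({ orderId: "ORD-1", from: "PLACED", to: "CONFIRMED", at: new Date() });
```

Other services (notifications, analytics, agent dispatch) subscribe without the `Order` class knowing they exist.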
Problem 2: Notification System
Requirements
- Support multiple channels: Email, SMS, Push notification
- Different event types route to different channel combinations
- Easy to add new channels without modifying existing code
- Notification content is formatted per channel (HTML for email, plain text for SMS, etc.)
Key Classes
- NotificationChannel — interface with a `send()` method
- EmailChannel, SMSChannel, PushChannel — concrete implementations
- NotificationService — routes notifications to the right channels based on event type
- NotificationTemplate — formats content per channel
Patterns Used
- Observer — `NotificationService` is the subject; channels are observers for specific event types
- Strategy — different routing strategies per event type (order confirmation vs. promo)
- Template Method — base formatting with channel-specific overrides
Full Implementation
// ── Types ──────────────────────────────────────────────────────
enum EventType {
ORDER_CONFIRMED = "ORDER_CONFIRMED",
DELIVERY_UPDATE = "DELIVERY_UPDATE",
PROMOTION = "PROMOTION",
ORDER_CANCELLED = "ORDER_CANCELLED",
}
interface NotificationPayload {
userId: string;
eventType: EventType;
data: Record<string, string>; // e.g. { orderId, restaurantName, eta }
}
// ── Channel Interface (Strategy) ───────────────────────────────
interface NotificationChannel {
readonly channelName: string;
send(userId: string, title: string, body: string): Promise<void>;
}
class EmailChannel implements NotificationChannel {
readonly channelName = "Email";
async send(userId: string, title: string, body: string): Promise<void> {
// In production: call email service (SendGrid, SES, etc.)
console.log(`[EMAIL → ${userId}] Subject: ${title}`);
console.log(` Body: ${body}`);
}
}
class SMSChannel implements NotificationChannel {
readonly channelName = "SMS";
async send(userId: string, title: string, body: string): Promise<void> {
// In production: call SMS gateway (Twilio, MSG91, etc.)
const plainText = `${title}: ${body}`.slice(0, 160); // SMS char limit
console.log(`[SMS → ${userId}] ${plainText}`);
}
}
class PushChannel implements NotificationChannel {
readonly channelName = "Push";
async send(userId: string, title: string, body: string): Promise<void> {
// In production: call FCM / APNs
console.log(`[PUSH → ${userId}] ${title} — ${body}`);
}
}
// ── Template: format content per event type ────────────────────
type ContentFormatter = (
data: Record<string, string>
) => { title: string; body: string };
const FORMATTERS: Record<EventType, ContentFormatter> = {
[EventType.ORDER_CONFIRMED]: (data) => ({
title: "Order Confirmed!",
body: `Your order #${data.orderId} from ${data.restaurantName} is confirmed. ETA: ${data.eta}.`,
}),
[EventType.DELIVERY_UPDATE]: (data) => ({
title: "Delivery Update",
body: `Your order #${data.orderId} is ${data.status}. ${data.agentName} is on the way.`,
}),
[EventType.PROMOTION]: (data) => ({
title: data.promoTitle ?? "Special Offer!",
body: data.promoBody ?? "Check out today's deals on the app.",
}),
[EventType.ORDER_CANCELLED]: (data) => ({
title: "Order Cancelled",
body: `Your order #${data.orderId} has been cancelled. Refund will be processed in 3-5 days.`,
}),
};
// ── Routing: which channels fire for which event type ──────────
const EVENT_CHANNEL_MAP: Record<EventType, string[]> = {
[EventType.ORDER_CONFIRMED]: ["Email", "Push"],
[EventType.DELIVERY_UPDATE]: ["Push", "SMS"],
[EventType.PROMOTION]: ["Email"],
[EventType.ORDER_CANCELLED]: ["Email", "Push", "SMS"],
};
// ── NotificationService (Observer + Router) ────────────────────
class NotificationService {
private channels: Map<string, NotificationChannel> = new Map();
registerChannel(channel: NotificationChannel): void {
this.channels.set(channel.channelName, channel);
}
async notify(payload: NotificationPayload): Promise<void> {
const { userId, eventType, data } = payload;
// 1. Format content using the template for this event type
const formatter = FORMATTERS[eventType];
if (!formatter) {
throw new Error(`No formatter for event type: ${eventType}`);
}
const { title, body } = formatter(data);
// 2. Determine which channels to fire
const channelNames = EVENT_CHANNEL_MAP[eventType] ?? [];
// 3. Fan out to all channels in parallel
const promises = channelNames.map((name) => {
const channel = this.channels.get(name);
if (!channel) {
console.warn(`Channel "${name}" not registered — skipping`);
return Promise.resolve();
}
return channel.send(userId, title, body);
});
await Promise.all(promises);
}
}
// ── Usage ──────────────────────────────────────────────────────
const notificationService = new NotificationService();
notificationService.registerChannel(new EmailChannel());
notificationService.registerChannel(new SMSChannel());
notificationService.registerChannel(new PushChannel());
// Order confirmed → fires Email + Push
notificationService.notify({
userId: "user-42",
eventType: EventType.ORDER_CONFIRMED,
data: { orderId: "ORD-1001", restaurantName: "Biryani Blues", eta: "35 min" },
});
// Delivery update → fires Push + SMS
notificationService.notify({
userId: "user-42",
eventType: EventType.DELIVERY_UPDATE,
data: { orderId: "ORD-1001", status: "out for delivery", agentName: "Rahul" },
});
Key Design Decisions
- Open/Closed — adding a new channel (e.g., WhatsApp) means creating a new class implementing `NotificationChannel` and calling `registerChannel()`. Zero changes to existing code.
- Event-to-channel mapping is data, not logic — `EVENT_CHANNEL_MAP` is a plain config object. Easy to make it dynamic (load from DB or feature flags).
- Parallel fan-out — `Promise.all` sends to all channels concurrently, so one slow channel doesn't block others. (In production, prefer `Promise.allSettled` so a single rejected send doesn't fail the whole batch.)
- Formatters are pure functions — easy to test, no side effects.
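As a concrete illustration of the Open/Closed point, here is a hypothetical WhatsApp channel. The interface is repeated so the snippet stands alone, and the actual WhatsApp Business API call is a placeholder:

```typescript
// Sketch: adding a new channel without modifying existing code.
// The interface is repeated here so the snippet is self-contained.
interface NotificationChannel {
  readonly channelName: string;
  send(userId: string, title: string, body: string): Promise<void>;
}

class WhatsAppChannel implements NotificationChannel {
  readonly channelName = "WhatsApp";
  async send(userId: string, title: string, body: string): Promise<void> {
    // In production: call the WhatsApp Business API (placeholder here)
    console.log(`[WHATSAPP -> ${userId}] ${title}: ${body}`);
  }
}
```

After `notificationService.registerChannel(new WhatsAppChannel())`, the only other change is adding `"WhatsApp"` to the relevant `EVENT_CHANNEL_MAP` entries, which is config rather than logic.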
Follow-Up Questions
- How would you handle notification preferences (user opts out of SMS)? (Filter `channelNames` against user preferences before fan-out.)
- How would you add retry with backoff for failed sends? (Wrap each `channel.send()` in a retry utility, or push to a dead-letter queue.)
- How would you make this async at scale? (Push notification events to a message queue — Kafka/SQS — and let workers process them.)
- How would you add rate limiting per user to prevent notification spam? (Per-user counters, or a sliding window per channel per user — segue to Problem 3.)
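The retry-with-backoff follow-up can be sketched as a generic wrapper around `channel.send()`. `withRetry` and `backoffDelayMs` are illustrative names, not part of the service above:

```typescript
// Sketch of retry with exponential backoff. The delay computation is a
// pure function so it can be tested without waiting on real timers.
function backoffDelayMs(attempt: number, baseMs: number = 100): number {
  return baseMs * 2 ** attempt; // 100ms, 200ms, 400ms, ...
}

async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts: number = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastError; // exhausted retries; hand off to a dead-letter queue here
}

// Usage inside the fan-out step:
//   return withRetry(() => channel.send(userId, title, body));
```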
Problem 3: Rate Limiter
Requirements
- Limit requests per user/IP to protect backend services
- Support multiple algorithms — each has different trade-offs
- Clean interface so the algorithm can be swapped without changing calling code
- Food delivery context: protect the order placement API during flash sales (e.g., "₹1 biryani" promo)
Common Interface
interface RateLimiter {
/** Returns true if the request is allowed, false if rate-limited */
allowRequest(key: string): boolean;
}
Algorithm 1: Fixed Window
How it works: Divide time into fixed windows (e.g., 1-minute intervals). Count requests in the current window. If count exceeds the limit, reject.
Window 1 (00:00-01:00) Window 2 (01:00-02:00)
┌────────────────────┐ ┌────────────────────┐
│ ■ ■ ■ ■ ■ (5 req) │ │ ■ ■ ■ (3 req) │
└────────────────────┘ └────────────────────┘
Limit = 5 → FULL Limit = 5 → OK
Problem: 5 requests at 00:59 + 5 at 01:01 = 10 in 2 seconds
but both windows say "OK" individually.
class FixedWindowLimiter implements RateLimiter {
private windows: Map<string, { count: number; windowStart: number }> =
new Map();
constructor(
private maxRequests: number, // e.g., 100
private windowSizeMs: number // e.g., 60_000 (1 minute)
) {}
allowRequest(key: string): boolean {
const now = Date.now();
const currentWindow = Math.floor(now / this.windowSizeMs);
const entry = this.windows.get(key);
if (!entry || entry.windowStart !== currentWindow) {
// New window — reset counter
this.windows.set(key, { count: 1, windowStart: currentWindow });
return true;
}
if (entry.count < this.maxRequests) {
entry.count++;
return true;
}
return false; // rate limited
}
}
Trade-offs:
| Pros | Cons |
|---|---|
| Simple to implement | Boundary spike problem (2x burst at window edges) |
| O(1) time and space per key | Not smooth — counter resets abruptly |
| Low memory footprint | Unfair to users who arrive late in the window |
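The boundary-spike problem can be demonstrated deterministically with a small variant of the limiter that accepts an injectable clock. The `now` constructor parameter is a testability tweak, not part of the implementation above:

```typescript
// Fixed window limiter with an injectable clock, for deterministic demos.
class TestableFixedWindowLimiter {
  private windows: Map<string, { count: number; windowStart: number }> = new Map();

  constructor(
    private maxRequests: number,
    private windowSizeMs: number,
    private now: () => number = Date.now // assumption: injected for testing
  ) {}

  allowRequest(key: string): boolean {
    const currentWindow = Math.floor(this.now() / this.windowSizeMs);
    const entry = this.windows.get(key);
    if (!entry || entry.windowStart !== currentWindow) {
      this.windows.set(key, { count: 1, windowStart: currentWindow });
      return true;
    }
    if (entry.count < this.maxRequests) {
      entry.count++;
      return true;
    }
    return false;
  }
}

// Limit is 5/minute, yet 10 requests succeed within 2ms because they
// straddle the edge between window 0 and window 1.
let clock = 59_999;
const limiter = new TestableFixedWindowLimiter(5, 60_000, () => clock);
const results: boolean[] = [];
for (let i = 0; i < 5; i++) results.push(limiter.allowRequest("user-1")); // end of window 0
clock = 60_001;
for (let i = 0; i < 5; i++) results.push(limiter.allowRequest("user-1")); // start of window 1
```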
Algorithm 2: Sliding Window Log
How it works: Store the timestamp of every request. For each new request, remove timestamps older than the window, then check if the count exceeds the limit.
Rolling window (last 60 seconds)
──────────────────────────────────────────────► time
╳ ╳ ╳ ╳ ╳ ╳ ╳ ╳
│ └─ remove (older than 60s ago) │ └─ new request
└─ remove └─ kept
Count remaining timestamps. If count < limit → allow.
class SlidingWindowLogLimiter implements RateLimiter {
private logs: Map<string, number[]> = new Map(); // key → sorted timestamps
constructor(
private maxRequests: number,
private windowSizeMs: number
) {}
allowRequest(key: string): boolean {
const now = Date.now();
const windowStart = now - this.windowSizeMs;
let timestamps = this.logs.get(key) ?? [];
// Remove expired timestamps
timestamps = timestamps.filter((t) => t > windowStart);
if (timestamps.length < this.maxRequests) {
timestamps.push(now);
this.logs.set(key, timestamps);
return true;
}
this.logs.set(key, timestamps);
return false; // rate limited
}
}
Trade-offs:
| Pros | Cons |
|---|---|
| Most accurate — no boundary spike | O(n) memory per key (stores every timestamp) |
| Smooth rolling window | Filter step is O(n) per request |
| Easy to reason about | Not practical for very high request rates |
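The "no boundary spike" claim can be checked deterministically with a clock-injected variant of the log limiter (again, the `now` parameter is a testability assumption, not in the implementation above): five requests just before a window edge still count against a request just after it.

```typescript
// Sliding window log limiter with an injectable clock.
class TestableSlidingWindowLogLimiter {
  private logs: Map<string, number[]> = new Map();

  constructor(
    private maxRequests: number,
    private windowSizeMs: number,
    private now: () => number = Date.now // assumption: injected for testing
  ) {}

  allowRequest(key: string): boolean {
    const now = this.now();
    const windowStart = now - this.windowSizeMs;
    const timestamps = (this.logs.get(key) ?? []).filter((t) => t > windowStart);
    if (timestamps.length < this.maxRequests) {
      timestamps.push(now);
      this.logs.set(key, timestamps);
      return true;
    }
    this.logs.set(key, timestamps);
    return false;
  }
}

// Limit 5/minute: 5 requests at t=59,999ms, then one at t=60,001ms.
// The rolling window still contains all 5, so the sixth is rejected;
// no boundary spike, unlike the fixed window.
let clock2 = 59_999;
const logLimiter = new TestableSlidingWindowLogLimiter(5, 60_000, () => clock2);
const firstFive: boolean[] = [];
for (let i = 0; i < 5; i++) firstFive.push(logLimiter.allowRequest("user-1"));
clock2 = 60_001;
const sixth = logLimiter.allowRequest("user-1");
```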
Algorithm 3: Token Bucket
How it works: A bucket holds tokens (up to a max capacity). Tokens are added at a fixed refill rate. Each request consumes one token. If the bucket is empty, the request is rejected. Allows controlled bursts.
Bucket (capacity = 5, refill = 1 token/sec)
Time 0: [■ ■ ■ ■ ■] 5 tokens — full
Time 0: Request → consume → [■ ■ ■ ■ ·] 4 tokens
Time 0: Request → consume → [■ ■ ■ · ·] 3 tokens
Time 0: Request → consume → [■ ■ · · ·] 2 tokens ← burst allowed!
Time 3: 3 seconds pass → refill 3 → [■ ■ ■ ■ ■] back to full
Time 3: Request → consume → [■ ■ ■ ■ ·] 4 tokens
class TokenBucketLimiter implements RateLimiter {
private buckets: Map<
string,
{ tokens: number; lastRefillTime: number }
> = new Map();
constructor(
private capacity: number, // max tokens (burst size)
private refillRate: number // tokens added per second
) {}
allowRequest(key: string): boolean {
const now = Date.now();
let bucket = this.buckets.get(key);
if (!bucket) {
bucket = { tokens: this.capacity, lastRefillTime: now };
this.buckets.set(key, bucket);
}
// Refill tokens based on elapsed time
const elapsed = (now - bucket.lastRefillTime) / 1000; // seconds
const tokensToAdd = elapsed * this.refillRate;
bucket.tokens = Math.min(this.capacity, bucket.tokens + tokensToAdd);
bucket.lastRefillTime = now;
if (bucket.tokens >= 1) {
bucket.tokens -= 1;
return true;
}
return false; // rate limited
}
}
Trade-offs:
| Pros | Cons |
|---|---|
| Allows controlled bursts (great for UX) | Slightly more complex than fixed window |
| O(1) time and space per key | Burst size = capacity, which may need tuning |
| Smooth rate limiting over time | Doesn't guarantee perfectly even spacing |
| Used by AWS, Stripe, most real APIs | — |
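The refill arithmetic in the diagram above can be replayed deterministically with a clock-injected variant (the `now` parameter is an assumption added for testability):

```typescript
// Token bucket with an injectable clock, same logic as the class above.
class TestableTokenBucketLimiter {
  private buckets: Map<string, { tokens: number; lastRefillTime: number }> = new Map();

  constructor(
    private capacity: number,
    private refillRate: number, // tokens per second
    private now: () => number = Date.now // assumption: injected for testing
  ) {}

  allowRequest(key: string): boolean {
    const now = this.now();
    let bucket = this.buckets.get(key);
    if (!bucket) {
      bucket = { tokens: this.capacity, lastRefillTime: now };
      this.buckets.set(key, bucket);
    }
    const elapsed = (now - bucket.lastRefillTime) / 1000;
    bucket.tokens = Math.min(this.capacity, bucket.tokens + elapsed * this.refillRate);
    bucket.lastRefillTime = now;
    if (bucket.tokens >= 1) {
      bucket.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Replay the diagram: capacity 5, refill 1 token/sec.
let t = 0;
const bucketLimiter = new TestableTokenBucketLimiter(5, 1, () => t);
const burst: boolean[] = [];
for (let i = 0; i < 6; i++) burst.push(bucketLimiter.allowRequest("u")); // 5 pass, 6th fails
t = 3_000; // 3 seconds later: 3 tokens refilled
const afterRefill = [
  bucketLimiter.allowRequest("u"),
  bucketLimiter.allowRequest("u"),
  bucketLimiter.allowRequest("u"),
  bucketLimiter.allowRequest("u"), // only 3 were refilled, so this one fails
];
```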
Algorithm 4: Leaky Bucket
How it works: Requests enter a queue (the bucket). The queue is processed (leaked) at a fixed rate. If the queue is full, new requests are rejected. This smooths out bursts into a steady flow.
Incoming requests Leaky Bucket (capacity = 5) Processed
─────────────────► ┌──────────────────────────────┐ ─────────────────►
■ ■ ■ ■ ■ ■ ■ │ ■ ■ ■ ■ ■ (full!) │ ■ ... ■ ... ■
(burst of 7) │ overflow → reject 2 │ (steady rate)
└──────────────────────────────┘
        drip drip drip (1 per sec)
class LeakyBucketLimiter implements RateLimiter {
private buckets: Map<
string,
{ queueSize: number; lastLeakTime: number }
> = new Map();
constructor(
private capacity: number, // max queue size
private leakRate: number // requests processed per second
) {}
allowRequest(key: string): boolean {
const now = Date.now();
let bucket = this.buckets.get(key);
if (!bucket) {
bucket = { queueSize: 0, lastLeakTime: now };
this.buckets.set(key, bucket);
}
// Leak (process) requests based on elapsed time
const elapsed = (now - bucket.lastLeakTime) / 1000;
const leaked = elapsed * this.leakRate;
bucket.queueSize = Math.max(0, bucket.queueSize - leaked);
bucket.lastLeakTime = now;
if (bucket.queueSize < this.capacity) {
bucket.queueSize += 1;
return true;
}
return false; // rate limited — queue is full
}
}
Trade-offs:
| Pros | Cons |
|---|---|
| Perfectly smooth output rate | No burst tolerance — even legitimate spikes are queued/rejected |
| Predictable server load | Higher latency if requests are queued (in queue-based impl) |
| O(1) time and space per key | Less forgiving for bursty clients |
| Great for protecting downstream services | — |
Algorithm Comparison
| Algorithm | Burst Handling | Memory | Accuracy | Best For |
|---|---|---|---|---|
| Fixed Window | Poor (boundary spikes) | O(1) per key | Low | Simple use cases, internal tools |
| Sliding Window Log | None (strict) | O(n) per key | Highest | Low-traffic APIs needing precision |
| Token Bucket | Controlled bursts | O(1) per key | High | Public APIs, user-facing endpoints (most popular) |
| Leaky Bucket | No bursts (smoothed) | O(1) per key | High | Protecting downstream services, queue processing |
Key Design Decisions
- Common `RateLimiter` interface — all four algorithms implement the same interface. The calling code doesn't know or care which algorithm is behind it. Swap in production via config.
- Per-key tracking — the `key` parameter lets you rate-limit by user ID, IP address, API key, or any other dimension.
- Lazy cleanup — expired entries are cleaned up on access (no background timer needed). For production, you'd add periodic eviction or use Redis with TTL.
Follow-Up Questions
- How would you implement this in a distributed system (multiple servers)? (Use Redis with atomic Lua scripts for counter operations.)
- How would you handle different limits for different endpoints? (Rate limiter per route, or a tiered system: global → per-user → per-endpoint.)
- How would you add graceful degradation instead of hard rejection? (Return a `Retry-After` header, queue requests, or degrade to cached responses.)
- How would you implement a sliding window counter (hybrid of fixed window + sliding window log)? (Weighted average of current and previous window counts based on overlap.)
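The sliding window counter from the last follow-up is worth sketching in full, since it is a common "best of both" answer: O(1) memory like the fixed window, with the previous window's count weighted by how much of it still overlaps the rolling window. This is one possible implementation of that scheme, with an injectable clock for deterministic testing:

```typescript
interface RateLimiter {
  allowRequest(key: string): boolean; // repeated here so the sketch stands alone
}

class SlidingWindowCounterLimiter implements RateLimiter {
  private entries: Map<
    string,
    { windowIndex: number; currCount: number; prevCount: number }
  > = new Map();

  constructor(
    private maxRequests: number,
    private windowSizeMs: number,
    private now: () => number = Date.now // assumption: injected for testing
  ) {}

  allowRequest(key: string): boolean {
    const now = this.now();
    const windowIndex = Math.floor(now / this.windowSizeMs);
    let entry = this.entries.get(key);
    if (!entry) {
      entry = { windowIndex, currCount: 0, prevCount: 0 };
      this.entries.set(key, entry);
    }
    if (windowIndex === entry.windowIndex + 1) {
      // Rolled into the next window: current becomes previous
      entry.prevCount = entry.currCount;
      entry.currCount = 0;
      entry.windowIndex = windowIndex;
    } else if (windowIndex > entry.windowIndex + 1) {
      // Skipped at least one full window: nothing overlaps any more
      entry.prevCount = 0;
      entry.currCount = 0;
      entry.windowIndex = windowIndex;
    }
    // Fraction of the previous window still inside the rolling window
    const elapsedFraction = (now % this.windowSizeMs) / this.windowSizeMs;
    const weighted = entry.prevCount * (1 - elapsedFraction) + entry.currCount;
    if (weighted < this.maxRequests) {
      entry.currCount++;
      return true;
    }
    return false;
  }
}

// Limit 10/min. Fill window 0, then at t=90s (halfway through window 1)
// the previous window still contributes 10 * 0.5 = 5, so only 5 more pass.
let clk = 0;
const hybrid = new SlidingWindowCounterLimiter(10, 60_000, () => clk);
let initialAllowed = 0;
for (let i = 0; i < 10; i++) if (hybrid.allowRequest("u")) initialAllowed++;
clk = 90_000;
let allowed = 0;
for (let i = 0; i < 10; i++) if (hybrid.allowRequest("u")) allowed++;
```

It approximates the log limiter's accuracy (assuming requests were evenly spread across the previous window) at fixed-window cost, which is why it is a popular production choice.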