03 — Low-Level Design Problems
Why This Matters for Paytm
Paytm interviews test whether you can translate a vague product requirement into clean, extensible classes with the right design patterns. In a fintech context, every LLD question carries implicit constraints — ACID guarantees on money movement, audit trails, concurrency control, and graceful degradation. Nail the structure first, then show you understand the domain.
Quick Reference (scan in 5 min)
| Problem | Key Classes | Key Patterns | Complexity Focus |
|---|---|---|---|
| Digital Wallet | Wallet, User, Transaction, TransactionType | Optimistic Locking, Repository | ACID compliance, concurrent balance updates |
| In-Memory Database | InMemoryDB | Command, Memento (stack-based rollback) | Nested transactions, isolation |
| DataService with Caching | CacheHandler, MemoryCache, RedisCache, ApiHandler | Chain of Responsibility | TTL expiry, cache invalidation |
| Logger | Logger, LogObserver, ConsoleLogger, FileLogger | Singleton, Observer | Log-level filtering, extensibility |
| LinkedIn Feed | User, Post, Feed, FeedAlgorithm | Strategy | Feed ranking, pagination |
Problem 1: Digital Wallet System
Requirements (what the interviewer might ask)
- Users can add money to their wallet (top-up from bank/card).
- Users can send money to other users (P2P transfer).
- Every money movement must be recorded as a transaction.
- Balance must never go negative — enforce at the domain level.
- Concurrent transfers on the same wallet must not cause lost updates.
- Provide transaction history per user.
Key Classes
User
├── id: string
├── name: string
└── wallet: Wallet
Wallet
├── id: string
├── balance: number
├── version: number ← optimistic locking
└── transactions: Transaction[]
Transaction
├── id: string
├── type: TransactionType
├── amount: number
├── fromWalletId?: string
├── toWalletId?: string
├── timestamp: Date
└── status: "SUCCESS" | "FAILED"
TransactionType (enum)
├── CREDIT
├── DEBIT
└── TRANSFER

Full Implementation
// ---- Enums & Errors ----
enum TransactionType {
CREDIT = "CREDIT",
DEBIT = "DEBIT",
TRANSFER = "TRANSFER",
}
class InsufficientFundsError extends Error {
constructor(walletId: string, requested: number, available: number) {
super(
`Wallet ${walletId}: requested ${requested}, available ${available}`
);
this.name = "InsufficientFundsError";
}
}
class TransactionFailedError extends Error {
constructor(reason: string) {
super(`Transaction failed: ${reason}`);
this.name = "TransactionFailedError";
}
}
class OptimisticLockError extends Error {
constructor(walletId: string) {
super(
`Wallet ${walletId} was modified by another transaction. Retry.`
);
this.name = "OptimisticLockError";
}
}
// ---- Domain Models ----
interface Transaction {
id: string;
type: TransactionType;
amount: number;
fromWalletId?: string;
toWalletId?: string;
timestamp: Date;
status: "SUCCESS" | "FAILED";
}
class Wallet {
public id: string;
public balance: number;
public version: number; // optimistic lock
private transactions: Transaction[];
constructor(id: string, initialBalance = 0) {
this.id = id;
this.balance = initialBalance;
this.version = 0;
this.transactions = [];
}
/** Credit money — used for top-ups and incoming transfers. */
credit(amount: number, expectedVersion: number): Transaction {
this.checkVersion(expectedVersion);
this.balance += amount;
this.version++;
const txn = this.recordTransaction(TransactionType.CREDIT, amount, {
toWalletId: this.id,
});
return txn;
}
/** Debit money — validates sufficient funds before deducting. */
debit(amount: number, expectedVersion: number): Transaction {
this.checkVersion(expectedVersion);
if (this.balance < amount) {
throw new InsufficientFundsError(this.id, amount, this.balance);
}
this.balance -= amount;
this.version++;
const txn = this.recordTransaction(TransactionType.DEBIT, amount, {
fromWalletId: this.id,
});
return txn;
}
getTransactionHistory(): ReadonlyArray<Transaction> {
return [...this.transactions];
}
private checkVersion(expected: number): void {
if (expected !== this.version) {
throw new OptimisticLockError(this.id);
}
}
private recordTransaction(
type: TransactionType,
amount: number,
ids: { fromWalletId?: string; toWalletId?: string }
): Transaction {
const txn: Transaction = {
id: crypto.randomUUID(),
type,
amount,
...ids,
timestamp: new Date(),
status: "SUCCESS",
};
this.transactions.push(txn);
return txn;
}
}
class User {
constructor(
public id: string,
public name: string,
public wallet: Wallet
) {}
}
// ---- Wallet Service (orchestration layer) ----
class WalletService {
private users = new Map<string, User>();
registerUser(id: string, name: string): User {
const wallet = new Wallet(`wallet-${id}`);
const user = new User(id, name, wallet);
this.users.set(id, user);
return user;
}
/** Top-up: external money in. */
addMoney(userId: string, amount: number): Transaction {
const user = this.getUser(userId);
return user.wallet.credit(amount, user.wallet.version);
}
/** P2P transfer between two users. */
sendMoney(
senderId: string,
receiverId: string,
amount: number
): { debitTxn: Transaction; creditTxn: Transaction } {
const sender = this.getUser(senderId);
const receiver = this.getUser(receiverId);
// Capture versions BEFORE mutation — if either wallet was
// concurrently modified, the version check inside debit/credit
// will throw OptimisticLockError.
const senderVersion = sender.wallet.version;
const receiverVersion = receiver.wallet.version;
// Debit first — fail fast if insufficient funds.
const debitTxn = sender.wallet.debit(amount, senderVersion);
try {
const creditTxn = receiver.wallet.credit(amount, receiverVersion);
return { debitTxn, creditTxn };
} catch (err) {
// Compensating transaction: reverse the debit.
sender.wallet.credit(amount, sender.wallet.version);
throw new TransactionFailedError(
`Credit to ${receiverId} failed after debiting ${senderId}. Rolled back.`
);
}
}
getBalance(userId: string): number {
return this.getUser(userId).wallet.balance;
}
getTransactionHistory(userId: string): ReadonlyArray<Transaction> {
return this.getUser(userId).wallet.getTransactionHistory();
}
private getUser(userId: string): User {
const user = this.users.get(userId);
if (!user) throw new Error(`User ${userId} not found`);
return user;
}
}

ACID Compliance — How Each Property Applies
| Property | How It Is Enforced |
|---|---|
| Atomicity | sendMoney either completes both debit + credit or rolls back via a compensating credit. No partial state. |
| Consistency | Balance can never go negative — debit() validates before mutating. Version check prevents stale writes. |
| Isolation | Optimistic locking via version field — concurrent writes to the same wallet are detected and rejected. In a real DB you would use SELECT ... FOR UPDATE or a serializable isolation level. |
| Durability | In-memory here, but in production every transaction would be persisted to a WAL / database before acknowledging. |
Key Design Decisions
- Optimistic locking over pessimistic locking — better throughput for read-heavy fintech dashboards; conflicts are rare and retries are cheap.
- Compensating transaction on failure — instead of two-phase commit (complex), we reverse the debit if the credit fails. Simpler for an in-process LLD.
- Transaction history is append-only — mirrors real audit logs; never mutate or delete past records.
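The optimistic-locking decision above implies a retry loop at the call site: a conflicting write throws, and the caller re-runs the operation against fresh versions. A minimal sketch of that loop — the withRetry helper, its backoff parameters, and the inlined error class (mirroring OptimisticLockError from the implementation above) are illustrative, not part of the original design:

```typescript
// Mirrors the OptimisticLockError defined in the wallet implementation.
class OptimisticLockError extends Error {
  constructor(walletId: string) {
    super(`Wallet ${walletId} was modified by another transaction. Retry.`);
    this.name = "OptimisticLockError";
  }
}

/** Retry an operation that may fail with a version conflict. */
async function withRetry<T>(
  op: () => T,
  maxAttempts = 3,
  baseDelayMs = 10
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return op(); // re-reads fresh wallet versions on every attempt
    } catch (err) {
      // Only version conflicts are retryable; business errors
      // (e.g. InsufficientFundsError) propagate immediately.
      if (!(err instanceof OptimisticLockError) || attempt === maxAttempts) {
        throw err;
      }
      // Exponential backoff so contending retries do not collide again.
      await new Promise((res) =>
        setTimeout(res, baseDelayMs * 2 ** (attempt - 1))
      );
    }
  }
}
```

Usage would look like `await withRetry(() => walletService.sendMoney("u1", "u2", 50))` — cheap because conflicts are rare, which is exactly why optimistic locking was chosen.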
Follow-up Questions
- How would you handle concurrent sendMoney calls that both pass the version check? (Answer: In a real DB, use row-level locks or serializable transactions.)
- How would you add support for currency conversion? (Answer: Introduce a Currency type on Wallet, inject an ExchangeRateService, convert before credit.)
- How do you handle idempotency for retried top-ups? (Answer: Accept a client-generated idempotencyKey, deduplicate in the service layer.)
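The idempotency follow-up can be sketched as a thin service-layer cache keyed by the client-generated key: a retried request replays the stored result instead of crediting twice. Names here (IdempotentTopUpService, TopUpResult) are illustrative, not from the original design:

```typescript
interface TopUpResult {
  txnId: string;
  amount: number;
}

/** Deduplicates top-ups by client-generated idempotency key. */
class IdempotentTopUpService {
  private results = new Map<string, TopUpResult>(); // key → first outcome
  private nextTxn = 1;

  topUp(userId: string, amount: number, idempotencyKey: string): TopUpResult {
    const existing = this.results.get(idempotencyKey);
    if (existing) return existing; // duplicate request — replay stored result

    // First time this key is seen: perform the credit (elided here)
    // and remember the outcome before acknowledging.
    const result: TopUpResult = { txnId: `txn-${this.nextTxn++}`, amount };
    this.results.set(idempotencyKey, result);
    return result;
  }
}
```

In production the key-to-result map would live in a durable store with a TTL, so a retry after a process restart still deduplicates.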
Problem 2: In-Memory Database
Requirements
- Support get(key), set(key, value), delete(key) on a string key-value store.
- Support transactions with begin(), commit(), rollback().
- Transactions can be nested — inner rollback should not affect the outer transaction.
- Only commit() on the outermost transaction persists changes.
Key Classes
InMemoryDB
├── store: Map<string, string> ← committed state
├── transactionStack: Map<string, string | null>[] ← stack of scopes
├── get(key): string | null
├── set(key, value): void
├── delete(key): void
├── begin(): void
├── commit(): void
└── rollback(): void

Each entry in transactionStack is a Map that records only the keys touched in that scope. A value of null means "this key was deleted in this scope."
Full Implementation
class InMemoryDB {
private store = new Map<string, string>();
private transactionStack: Map<string, string | null>[] = [];
// ---- Core operations ----
get(key: string): string | null {
// Walk the stack top-down — most recent scope wins.
for (let i = this.transactionStack.length - 1; i >= 0; i--) {
const scope = this.transactionStack[i];
if (scope.has(key)) {
const val = scope.get(key)!;
return val === null ? null : val; // null means deleted
}
}
return this.store.get(key) ?? null;
}
set(key: string, value: string): void {
if (this.inTransaction()) {
this.currentScope().set(key, value);
} else {
this.store.set(key, value);
}
}
delete(key: string): void {
if (this.inTransaction()) {
this.currentScope().set(key, null); // tombstone
} else {
this.store.delete(key);
}
}
// ---- Transaction operations ----
begin(): void {
this.transactionStack.push(new Map());
}
commit(): void {
if (!this.inTransaction()) {
throw new Error("No active transaction to commit");
}
const scope = this.transactionStack.pop()!;
if (this.inTransaction()) {
// Nested commit — merge into parent scope.
const parent = this.currentScope();
for (const [key, value] of scope) {
parent.set(key, value);
}
} else {
// Outermost commit — apply to the real store.
for (const [key, value] of scope) {
if (value === null) {
this.store.delete(key);
} else {
this.store.set(key, value);
}
}
}
}
rollback(): void {
if (!this.inTransaction()) {
throw new Error("No active transaction to rollback");
}
// Simply discard the current scope — previous state is untouched.
this.transactionStack.pop();
}
// ---- Helpers ----
private inTransaction(): boolean {
return this.transactionStack.length > 0;
}
private currentScope(): Map<string, string | null> {
return this.transactionStack[this.transactionStack.length - 1];
}
}
// ---- Usage ----
const db = new InMemoryDB();
db.set("upi_id", "user@paytm");
console.log(db.get("upi_id")); // "user@paytm"
db.begin(); // outer transaction
db.set("upi_id", "merchant@paytm");
db.begin(); // nested transaction
db.set("upi_id", "nested@paytm");
console.log(db.get("upi_id")); // "nested@paytm"
db.rollback(); // discard nested scope
console.log(db.get("upi_id")); // "merchant@paytm" — outer scope intact
db.commit(); // commit outer
console.log(db.get("upi_id")); // "merchant@paytm" — persisted

How Rollback Works — Step by Step
- begin() pushes a fresh empty Map onto the stack.
- Every set/delete inside that scope writes only to the top-of-stack map.
- rollback() pops the top map and discards it — the parent scope (or committed store) is untouched.
- commit() pops the top map and merges it down — into the parent scope if nested, or into this.store if outermost.
This means each scope is an overlay. Reading walks top-down through the overlays and falls through to the committed store.
Key Design Decisions
- Stack of Maps vs. cloning the entire store — the overlay approach is memory-efficient; we only store diffs.
- Tombstone (null) for deletes — distinguishes "this key was deleted" from "this key was never touched in this scope."
- Nested commit merges into parent — changes are only truly persisted when the outermost transaction commits.
Follow-up Questions
- How would you add TTL (time-to-live) per key? (Answer: Store { value, expiresAt } instead of a raw string; check expiry in get().)
- How would you support concurrent transactions? (Answer: Each transaction gets its own overlay stack; on commit, detect conflicts via version vectors or snapshot isolation.)
- How would you persist to disk? (Answer: Write-ahead log — append every mutation before applying; replay on startup.)
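The TTL follow-up can be sketched in a few lines: wrap each value as { value, expiresAt } and evict lazily on read. The TtlStore name and the injectable clock (handy for testing expiry without real waits) are illustrative additions:

```typescript
interface TtlEntry {
  value: string;
  expiresAt: number; // epoch millis
}

/** Key-value store with per-key TTL and lazy eviction on read. */
class TtlStore {
  private store = new Map<string, TtlEntry>();

  // Injectable clock so expiry is testable without real sleeps.
  constructor(private now: () => number = Date.now) {}

  set(key: string, value: string, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: string): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt <= this.now()) {
      this.store.delete(key); // stale — evict and report a miss
      return null;
    }
    return entry.value;
  }
}
```

The same shape drops into InMemoryDB by changing the store's value type; a background sweep could complement the lazy eviction if memory pressure matters.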
Problem 3: DataService with Caching Chain
Requirements
- Fetch data by key from a multi-level cache.
- L1: in-memory Map with TTL (fast, small).
- L2: simulated Redis with TTL (slower, larger).
- L3: actual API call (slowest, always fresh).
- Each level checks its own store first; on miss, delegates to the next level.
- On a hit at any level, back-fill all upstream caches.
- Support TTL expiry — stale entries must not be served.
Key Classes
CacheHandler (interface)
├── get(key): Promise<string | null>
├── set(key, value, ttlMs): Promise<void>
└── setNext(handler): CacheHandler
MemoryCacheHandler implements CacheHandler ← L1
RedisCacheHandler implements CacheHandler ← L2 (simulated)
ApiHandler implements CacheHandler ← L3 (terminal)

Full Implementation
interface CacheHandler {
get(key: string): Promise<string | null>;
set(key: string, value: string, ttlMs?: number): Promise<void>;
setNext(handler: CacheHandler): CacheHandler; // returns next for chaining
}
// ---- L1: In-Memory Cache ----
interface CacheEntry {
value: string;
expiresAt: number; // Date.now() + ttlMs
}
class MemoryCacheHandler implements CacheHandler {
private cache = new Map<string, CacheEntry>();
private next: CacheHandler | null = null;
private defaultTtlMs: number;
constructor(defaultTtlMs = 30_000) {
this.defaultTtlMs = defaultTtlMs;
}
setNext(handler: CacheHandler): CacheHandler {
this.next = handler;
return handler;
}
async get(key: string): Promise<string | null> {
const entry = this.cache.get(key);
if (entry && entry.expiresAt > Date.now()) {
console.log(`[L1 Memory] HIT: ${key}`);
return entry.value;
}
// Expired or missing — evict and delegate.
this.cache.delete(key);
console.log(`[L1 Memory] MISS: ${key}`);
if (this.next) {
const value = await this.next.get(key);
if (value !== null) {
await this.set(key, value); // back-fill L1
}
return value;
}
return null;
}
async set(key: string, value: string, ttlMs?: number): Promise<void> {
this.cache.set(key, {
value,
expiresAt: Date.now() + (ttlMs ?? this.defaultTtlMs),
});
}
}
// ---- L2: Simulated Redis Cache ----
class RedisCacheHandler implements CacheHandler {
private store = new Map<string, CacheEntry>();
private next: CacheHandler | null = null;
private defaultTtlMs: number;
constructor(defaultTtlMs = 120_000) {
this.defaultTtlMs = defaultTtlMs;
}
setNext(handler: CacheHandler): CacheHandler {
this.next = handler;
return handler;
}
async get(key: string): Promise<string | null> {
// Simulate network latency.
await this.simulateLatency();
const entry = this.store.get(key);
if (entry && entry.expiresAt > Date.now()) {
console.log(`[L2 Redis] HIT: ${key}`);
return entry.value;
}
this.store.delete(key);
console.log(`[L2 Redis] MISS: ${key}`);
if (this.next) {
const value = await this.next.get(key);
if (value !== null) {
await this.set(key, value); // back-fill L2
}
return value;
}
return null;
}
async set(key: string, value: string, ttlMs?: number): Promise<void> {
this.store.set(key, {
value,
expiresAt: Date.now() + (ttlMs ?? this.defaultTtlMs),
});
}
private simulateLatency(): Promise<void> {
return new Promise((res) => setTimeout(res, 5));
}
}
// ---- L3: API Handler (terminal node) ----
class ApiHandler implements CacheHandler {
private fetchFn: (key: string) => Promise<string | null>;
constructor(fetchFn: (key: string) => Promise<string | null>) {
this.fetchFn = fetchFn;
}
setNext(_handler: CacheHandler): CacheHandler {
throw new Error("ApiHandler is the terminal node — no next handler.");
}
async get(key: string): Promise<string | null> {
console.log(`[L3 API] Fetching: ${key}`);
return this.fetchFn(key);
}
async set(_key: string, _value: string): Promise<void> {
// No-op — API is read-only in this chain.
}
}
// ---- Composing the chain ----
function buildCacheChain(): CacheHandler {
const l1 = new MemoryCacheHandler(30_000); // 30s TTL
const l2 = new RedisCacheHandler(120_000); // 2min TTL
const l3 = new ApiHandler(async (key) => {
// Simulate fetching merchant config from Paytm's API.
const mockData: Record<string, string> = {
"merchant:config:12345": JSON.stringify({ mcc: "5411", name: "Grocery Mart" }),
"user:kyc:67890": JSON.stringify({ status: "VERIFIED", tier: "FULL_KYC" }),
};
return mockData[key] ?? null;
});
// Chain: L1 → L2 → L3
l1.setNext(l2).setNext(l3);
return l1; // entry point
}
// ---- Usage ----
async function demo() {
const dataService = buildCacheChain();
// First call — cache cold, goes all the way to API.
const config = await dataService.get("merchant:config:12345");
// [L1 Memory] MISS → [L2 Redis] MISS → [L3 API] Fetching → back-fills L2 → back-fills L1
// Second call — served from L1 in-memory.
const cached = await dataService.get("merchant:config:12345");
// [L1 Memory] HIT
}

Key Design Decisions
- Chain of Responsibility — each handler is decoupled; you can add/remove levels (e.g., an L1.5 LRU cache) without touching other handlers.
- Back-fill on miss — when L3 returns data, it propagates back up through L2 and L1 so subsequent reads are fast.
- TTL per level — L1 has a short TTL (data freshness in-process), L2 has a longer TTL (shared across services). This mirrors how Paytm might cache merchant configs.
- setNext returns the next handler — enables fluent chaining: l1.setNext(l2).setNext(l3).
Follow-up Questions
- How do you handle cache stampede (many concurrent misses on the same key)? (Answer: Use a single-flight / request-coalescing mechanism — only one fetch per key, other callers await the same promise.)
- How would you add cache invalidation? (Answer: Pub/sub — when the API source changes, publish an event; each cache handler subscribes and evicts the key.)
- What if L2 (Redis) is down? (Answer: Catch errors in RedisCacheHandler.get(), log, and fall through to L3. The chain degrades gracefully.)
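The stampede follow-up — single-flight / request coalescing — can be sketched as a map of in-flight promises: the first miss starts the fetch, and every concurrent caller for the same key awaits that same promise. The SingleFlight name is illustrative:

```typescript
/** Coalesces concurrent fetches for the same key into one in-flight call. */
class SingleFlight {
  private inFlight = new Map<string, Promise<string | null>>();

  run(key: string, fetchFn: () => Promise<string | null>): Promise<string | null> {
    const existing = this.inFlight.get(key);
    if (existing) return existing; // join the fetch already in progress

    const p = fetchFn().finally(() => {
      this.inFlight.delete(key); // allow fresh fetches once this one settles
    });
    this.inFlight.set(key, p);
    return p;
  }
}
```

Wrapping the ApiHandler's fetchFn in SingleFlight.run means a cold key hit by a thousand concurrent requests triggers exactly one upstream call.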
Problem 4: Logger Class
Requirements
- Only one logger instance in the entire application (Singleton).
- Support multiple log destinations: console, file, remote server (Observer pattern).
- Log levels: DEBUG, INFO, WARN, ERROR — each destination can filter by minimum level.
- Easy to add new destinations without modifying the Logger class.
Key Classes
Logger (Singleton)
├── static getInstance(): Logger
├── addObserver(observer, minLevel): void
├── removeObserver(observer): void
├── debug(message): void
├── info(message): void
├── warn(message): void
└── error(message): void
LogObserver (interface)
└── log(level, message, timestamp): void
ConsoleLogger implements LogObserver
FileLogger implements LogObserver
RemoteLogger implements LogObserver

Full Implementation
// ---- Log Levels ----
enum LogLevel {
DEBUG = 0,
INFO = 1,
WARN = 2,
ERROR = 3,
}
// ---- Observer Interface ----
interface LogObserver {
log(level: LogLevel, message: string, timestamp: Date): void;
}
// ---- Concrete Observers ----
class ConsoleLogger implements LogObserver {
log(level: LogLevel, message: string, timestamp: Date): void {
const label = LogLevel[level];
const time = timestamp.toISOString();
console.log(`[${time}] [${label}] ${message}`);
}
}
class FileLogger implements LogObserver {
private buffer: string[] = [];
constructor(private filePath: string) {}
log(level: LogLevel, message: string, timestamp: Date): void {
const label = LogLevel[level];
const line = `[${timestamp.toISOString()}] [${label}] ${message}`;
this.buffer.push(line);
// In production: fs.appendFileSync(this.filePath, line + '\n');
}
/** Expose buffer for testing / inspection. */
getBuffer(): ReadonlyArray<string> {
return [...this.buffer];
}
}
class RemoteLogger implements LogObserver {
constructor(private endpoint: string) {}
log(level: LogLevel, message: string, timestamp: Date): void {
const payload = {
level: LogLevel[level],
message,
timestamp: timestamp.toISOString(),
service: "paytm-payments",
};
// Fire-and-forget — do not block the caller.
// In production: use a batching queue (e.g., send every 5s or 100 logs).
fetch(this.endpoint, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
}).catch(() => {
// Silently fail — logging should never crash the app.
});
}
}
// ---- Singleton Logger ----
interface ObserverEntry {
observer: LogObserver;
minLevel: LogLevel;
}
class Logger {
private static instance: Logger | null = null;
private observers: ObserverEntry[] = [];
private constructor() {}
static getInstance(): Logger {
if (!Logger.instance) {
Logger.instance = new Logger();
}
return Logger.instance;
}
/** Reset for testing — not for production use. */
static resetInstance(): void {
Logger.instance = null;
}
addObserver(observer: LogObserver, minLevel: LogLevel = LogLevel.DEBUG): void {
this.observers.push({ observer, minLevel });
}
removeObserver(observer: LogObserver): void {
this.observers = this.observers.filter((e) => e.observer !== observer);
}
debug(message: string): void {
this.notify(LogLevel.DEBUG, message);
}
info(message: string): void {
this.notify(LogLevel.INFO, message);
}
warn(message: string): void {
this.notify(LogLevel.WARN, message);
}
error(message: string): void {
this.notify(LogLevel.ERROR, message);
}
private notify(level: LogLevel, message: string): void {
const timestamp = new Date();
for (const { observer, minLevel } of this.observers) {
if (level >= minLevel) {
observer.log(level, message, timestamp);
}
}
}
}
// ---- Usage ----
const logger = Logger.getInstance();
// Console gets everything; file gets WARN+; remote gets ERROR only.
logger.addObserver(new ConsoleLogger(), LogLevel.DEBUG);
logger.addObserver(new FileLogger("/var/log/paytm/payments.log"), LogLevel.WARN);
logger.addObserver(new RemoteLogger("https://logs.paytm.internal/ingest"), LogLevel.ERROR);
logger.debug("Initiating UPI payment flow"); // → console only
logger.info("Payment intent created: pi_abc"); // → console only
logger.warn("Retry #2 for bank gateway timeout"); // → console + file
logger.error("Payment failed: bank_declined"); // → console + file + remote

Key Design Decisions
- Singleton — one logger instance avoids duplicate log streams and ensures consistent observer configuration across the app.
- Observer pattern — adding a new destination (e.g., Slack alerts for ERROR) requires zero changes to the Logger class. Just implement LogObserver and call addObserver.
- Per-observer minLevel — different destinations have different verbosity needs. Console is noisy (DEBUG), remote is expensive (ERROR only).
- Fire-and-forget for RemoteLogger — logging must never block or crash the main payment flow. Errors in the logging pipeline are swallowed.
Follow-up Questions
- How would you make this thread-safe in a multi-threaded environment? (Answer: Use a mutex/lock around notify(), or a lock-free queue that a dedicated logging thread drains.)
- How would you add structured logging (JSON)? (Answer: Change message to Record<string, unknown> and let each observer serialize as needed.)
- How would you handle log rotation for FileLogger? (Answer: Check file size before each write; if over the threshold, rename the current file with a timestamp suffix and start a new one.)
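The rotation follow-up can be sketched with the files simulated in memory, so the check-then-rotate logic is visible without touching fs. RotatingBuffer, the byte threshold, and the use of string length as a byte count are all simplifying assumptions for the sketch:

```typescript
/** Size-based rotation: close the current "file" before a write would overflow it. */
class RotatingBuffer {
  private current: string[] = [];
  private currentBytes = 0;
  private rotated: string[][] = []; // closed files, oldest first

  constructor(private maxBytes: number) {}

  write(line: string): void {
    // Rotate BEFORE the write that would cross the threshold,
    // so no single file ever exceeds maxBytes.
    if (
      this.currentBytes + line.length > this.maxBytes &&
      this.current.length > 0
    ) {
      // In production: rename the file with a timestamp suffix here.
      this.rotated.push(this.current);
      this.current = [];
      this.currentBytes = 0;
    }
    this.current.push(line);
    this.currentBytes += line.length; // simplification: chars ≈ bytes
  }

  fileCount(): number {
    return this.rotated.length + 1; // rotated files + the active one
  }
}
```

A production FileLogger would also cap the number of rotated files (delete or compress the oldest) to bound disk usage.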
Problem 5: LinkedIn Feed (Brief)
Requirements
- Users can create posts, like posts, and comment on posts.
- A feed service returns a ranked list of posts for a given user.
- Feed ranking can be swapped between strategies (chronological vs. relevance-based).
Key Classes
User { id, name, connections: Set<string> }
Post { id, authorId, content, timestamp, likes: Set<string>, comments: Comment[] }
Comment { id, authorId, content, timestamp }
Feed { getFeed(userId): Post[] }
FeedAlgorithm (Strategy interface)
├── ChronologicalFeed
└── RelevanceFeed

Implementation
interface FeedAlgorithm {
rank(posts: Post[], userId: string): Post[];
}
class ChronologicalFeed implements FeedAlgorithm {
rank(posts: Post[]): Post[] {
return [...posts].sort(
(a, b) => b.timestamp.getTime() - a.timestamp.getTime()
);
}
}
class RelevanceFeed implements FeedAlgorithm {
private connections: Set<string>;
constructor(connections: Set<string>) {
this.connections = connections;
}
rank(posts: Post[], _userId: string): Post[] {
return [...posts].sort((a, b) => {
const scoreA = this.score(a);
const scoreB = this.score(b);
return scoreB - scoreA;
});
}
private score(post: Post): number {
let s = post.likes.size * 2 + post.comments.length;
if (this.connections.has(post.authorId)) s += 10; // boost connections
// Decay by age — posts lose relevance over time.
const ageHours =
(Date.now() - post.timestamp.getTime()) / (1000 * 60 * 60);
s -= ageHours * 0.5;
return s;
}
}
// ---- Domain Models ----
interface Comment {
id: string;
authorId: string;
content: string;
timestamp: Date;
}
interface Post {
id: string;
authorId: string;
content: string;
timestamp: Date;
likes: Set<string>;
comments: Comment[];
}
// ---- Feed Service ----
class FeedService {
private posts = new Map<string, Post>();
private userConnections = new Map<string, Set<string>>();
private algorithm: FeedAlgorithm;
constructor(algorithm: FeedAlgorithm) {
this.algorithm = algorithm;
}
setAlgorithm(algorithm: FeedAlgorithm): void {
this.algorithm = algorithm;
}
createPost(authorId: string, content: string): Post {
const post: Post = {
id: crypto.randomUUID(),
authorId,
content,
timestamp: new Date(),
likes: new Set(),
comments: [],
};
this.posts.set(post.id, post);
return post;
}
likePost(postId: string, userId: string): void {
const post = this.posts.get(postId);
if (!post) throw new Error("Post not found");
post.likes.add(userId);
}
commentOnPost(postId: string, authorId: string, content: string): Comment {
const post = this.posts.get(postId);
if (!post) throw new Error("Post not found");
const comment: Comment = {
id: crypto.randomUUID(),
authorId,
content,
timestamp: new Date(),
};
post.comments.push(comment);
return comment;
}
getFeed(userId: string, limit = 20): Post[] {
const allPosts = Array.from(this.posts.values());
const ranked = this.algorithm.rank(allPosts, userId);
return ranked.slice(0, limit);
}
}

Why Strategy Pattern Here
The ranking algorithm is the most volatile part of a feed system — product will constantly A/B test new ranking formulas. By extracting it behind FeedAlgorithm, the FeedService never changes when you swap ChronologicalFeed for RelevanceFeed or a future MLRankedFeed. This is textbook Open/Closed Principle.
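To make the Open/Closed claim concrete, here is a sketch of a new strategy dropping in beside the existing ones. EngagementFeed and its scoring are hypothetical (standing in for a future MLRankedFeed), and FeedPost is a simplified stand-in for the Post model above:

```typescript
// Simplified post shape for the sketch: counts instead of Set<string>.
interface FeedPost {
  id: string;
  likes: number;
  comments: number;
}

// Same Strategy contract as the FeedAlgorithm interface above.
interface FeedAlgorithm {
  rank(posts: FeedPost[], userId: string): FeedPost[];
}

// A brand-new ranking plugs in by implementing the interface —
// the consumer (FeedService.setAlgorithm) never changes.
class EngagementFeed implements FeedAlgorithm {
  rank(posts: FeedPost[], _userId: string): FeedPost[] {
    // Pure engagement score: likes + comments, highest first.
    return [...posts].sort(
      (a, b) => b.likes + b.comments - (a.likes + a.comments)
    );
  }
}
```

Swapping it in is one call: feedService.setAlgorithm(new EngagementFeed()) — exactly the A/B-test flexibility the paragraph above describes.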
Follow-up Questions
- How would you paginate the feed? (Answer: Cursor-based pagination — return the last post's (score, id) as the cursor; the next page starts after that cursor.)
- How do you handle real-time updates (new post appears while scrolling)? (Answer: WebSocket push for new posts; the client inserts at the top or shows a "new posts available" banner.)
- How would you scale this to millions of users? (Answer: Fan-out-on-write for active users (precompute feeds), fan-out-on-read for celebrity accounts (compute at read time). Hybrid approach like Twitter uses.)
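The cursor-pagination follow-up can be sketched directly: the cursor is the last returned post's (score, id), and the next page resumes strictly after that position in the ranked order. The getPage function, RankedPost shape, and the (score desc, id asc) tie-break are illustrative assumptions:

```typescript
interface RankedPost {
  id: string;
  score: number;
}

type Cursor = { score: number; id: string } | null;

/** Return one page of a feed already sorted by (score desc, id asc). */
function getPage(
  ranked: RankedPost[],
  cursor: Cursor,
  limit: number
): { posts: RankedPost[]; nextCursor: Cursor } {
  // Skip everything at or before the cursor position. The id tie-break
  // makes the cursor stable even when many posts share a score.
  const start = cursor
    ? ranked.findIndex(
        (p) =>
          p.score < cursor.score ||
          (p.score === cursor.score && p.id > cursor.id)
      )
    : 0;
  const posts = start === -1 ? [] : ranked.slice(start, start + limit);
  const last = posts[posts.length - 1];
  return {
    posts,
    nextCursor: last ? { score: last.score, id: last.id } : null,
  };
}
```

Unlike offset pagination, this stays correct when new posts are inserted between page fetches — the cursor anchors to content, not to a position.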
General Interview Tips for LLD Rounds
- Clarify requirements first — spend 2-3 minutes asking: "Should I handle concurrency? Do we need persistence? What is the scale?" This sets scope and shows maturity.
- Start with the interface, not the implementation — define the public API (addMoney, sendMoney, getBalance) before writing any internals.
- Name your patterns — say "I am using Chain of Responsibility here" out loud. Interviewers give explicit credit for pattern recognition.
- Think about error cases early — insufficient funds, concurrent writes, network failures. Fintech interviewers specifically look for this.
- Mention what you would do differently in production — persistence, retries, idempotency, monitoring. Shows you have shipped real systems.