
14 — LLD: LRU Cache with TTL

Understanding the Problem

An LRU (Least Recently Used) Cache is a fixed-size key-value store that evicts the least recently accessed entry when capacity is reached. Adding TTL (Time-To-Live) means each entry has an expiration time — after which it is considered stale and should be removed, regardless of how recently it was used.

What is an LRU Cache with TTL?

Think of your browser cache. It stores the most recently visited pages for fast access. Pages you haven't visited in a while get evicted when space runs low (LRU). But even a recently visited page becomes stale after its cache headers expire (TTL). This design combines both eviction strategies: access-recency and time-based expiry.


Requirements

Clarifying Questions

You: The basic operations are get(key) and put(key, value, ttl). Should both be O(1)?

Interviewer: Yes. That is the standard expectation for an LRU cache.

You: For TTL — when a key expires, should it be cleaned up immediately, or lazily when someone tries to access it?

Interviewer: Good question. Discuss both approaches, but implement lazy deletion with an optional active cleanup method.

You: What happens when I put a key that already exists? Does the TTL reset?

Interviewer: Yes. A put on an existing key updates the value, resets the TTL, and moves it to the most-recently-used position.

You: Should get update the "last accessed" time for LRU ordering?

Interviewer: Yes. Getting a key makes it the most recently used.

You: Do I need to handle thread safety?

Interviewer: Discuss it, and show how you would add it. You can start with a single-threaded version.

You: Should I support a delete(key) operation?

Interviewer: Yes.

Final Requirements

  1. O(1) get(key), put(key, value, ttl), and delete(key).
  2. Fixed capacity — evicts LRU entry when full.
  3. TTL per entry — expired entries are treated as non-existent.
  4. Lazy expiry on access + optional active cleanup method.
  5. put on existing key updates value, resets TTL, and refreshes LRU position.
  6. Thread safety discussed, with a locking strategy shown.

Out of scope: Distributed caching, cache-aside/write-through patterns, persistence.


Core Entities and Relationships

Entity | Responsibility
Node | Doubly linked list node — holds key, value, expiry time, and pointers
DoublyLinkedList | Maintains LRU ordering — head is most recent, tail is least recent
LRUCache | Orchestrates HashMap + DLL for O(1) operations with TTL

Why a HashMap + Doubly Linked List? This is the classic combination for O(1) LRU:

  • HashMap gives O(1) key lookup.
  • Doubly Linked List gives O(1) insertion, deletion, and move-to-front.
  • Together, they give O(1) for get, put, and delete.

You cannot achieve O(1) LRU with either data structure alone:

  • HashMap alone does not track access order.
  • Linked list alone does not give O(1) key lookup.

Why not just use a Map? JavaScript's Map does preserve insertion order and you can achieve LRU-like behavior by deleting and re-inserting keys. But it does not have TTL built in, and understanding the underlying data structure is what the interviewer is testing.
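
For contrast, here is a minimal sketch of that Map-based trick (the class name MapLru is ours, for illustration). It works for plain LRU, but it has no TTL, and it hides exactly the mechanics the interviewer is probing:

typescript
// Minimal LRU sketch exploiting Map's insertion order (no TTL).
// Deleting and re-inserting a key moves it to the "newest" end;
// the first key in iteration order is therefore the LRU entry.
class MapLru<T> {
  private map = new Map<string, T>();

  constructor(private capacity: number) {}

  get(key: string): T | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as most recently used
    return value;
  }

  put(key: string, value: T): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // First key in iteration order is the least recently used
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}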


Class Design

Node

State:

  • key: string — needed for eviction (to remove from HashMap)
  • value: T
  • expiresAt: number — Unix timestamp (ms) when this entry expires
  • prev: Node | null
  • next: Node | null

Why store key in the node? When evicting the LRU entry (tail of the list), you need to also remove it from the HashMap. Without the key in the node, you would have to search the HashMap — O(n).

DoublyLinkedList

State:

  • head: Node — sentinel (dummy head)
  • tail: Node — sentinel (dummy tail)

Methods:

  • addToFront(node) — insert right after head sentinel
  • remove(node) — unlink a node
  • removeLast() -> Node — remove the node before tail sentinel (LRU entry)
  • moveToFront(node) — remove + addToFront (marks as recently used)
  • isEmpty() -> boolean

Sentinel nodes eliminate edge cases. You never need to check "is head null?" because head and tail always exist.

LRUCache

State:

  • capacity: number
  • cache: Map<string, Node> — HashMap for O(1) lookup
  • dll: DoublyLinkedList — for LRU ordering
  • defaultTtl: number — default TTL in milliseconds

Methods:

  • get(key) -> value | undefined
  • put(key, value, ttl?)
  • delete(key) -> boolean
  • cleanup() — remove all expired entries
  • size -> number

Final Class Design

typescript
class Node<T> {
  public key: string;
  public value: T;
  public expiresAt: number;
  public prev: Node<T> | null = null;
  public next: Node<T> | null = null;

  // Defaults exist only for sentinel nodes, whose value is never read
  constructor(key: string = "", value: T = undefined as unknown as T, expiresAt: number = 0) {
    this.key = key;
    this.value = value;
    this.expiresAt = expiresAt;
  }

  isExpired(): boolean {
    return Date.now() > this.expiresAt;
  }
}

Using a generic Node<T> gives type safety for stored values.

Design principles at play:

  • Single Responsibility: Node stores data. DLL manages ordering. LRUCache orchestrates both with TTL logic.
  • Encapsulation: The DLL internal structure (sentinels, pointer manipulation) is hidden from LRUCache. The cache only calls addToFront, remove, moveToFront.
  • Composition: LRUCache composes a HashMap and a DLL. It does not inherit from either.

Implementation

Doubly Linked List with Sentinels

typescript
class DoublyLinkedList<T> {
  public head: Node<T>;
  public tail: Node<T>;

  constructor() {
    this.head = new Node<T>(); // sentinel
    this.tail = new Node<T>(); // sentinel
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  addToFront(node: Node<T>): void {
    // Insert node right after head sentinel (most recently used position)
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  remove(node: Node<T>): void {
    // Unlink node from the list
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
    node.prev = null;
    node.next = null;
  }

  removeLast(): Node<T> | null {
    // Remove and return the LRU node (just before tail sentinel)
    if (this.isEmpty()) return null;
    const lru = this.tail.prev!;
    this.remove(lru);
    return lru;
  }

  moveToFront(node: Node<T>): void {
    // Mark node as most recently used
    this.remove(node);
    this.addToFront(node);
  }

  isEmpty(): boolean {
    return this.head.next === this.tail;
  }
}

LRU Cache with TTL

Lazy vs. Active Expiry:

Strategy | When cleanup happens | Pros | Cons
Lazy | On get() — if the entry is expired, delete it and return undefined | No background threads, simpler | Expired entries consume memory until accessed
Active | Background interval periodically scans and removes expired entries | Memory is reclaimed promptly | Requires a background timer and careful cleanup
Hybrid | Lazy on access + periodic active cleanup | Best of both worlds | Slightly more complex

We implement the hybrid approach:

typescript
class LRUCache<T> {
  private capacity: number;
  private defaultTtl: number;
  private cache: Map<string, Node<T>> = new Map();
  private dll = new DoublyLinkedList<T>();

  constructor(capacity: number, defaultTtlMs: number = 300_000) {
    if (capacity <= 0) {
      throw new Error("Capacity must be positive");
    }
    this.capacity = capacity;
    this.defaultTtl = defaultTtlMs;
  }

  get(key: string): T | undefined {
    const node = this.cache.get(key);
    if (!node) return undefined;

    // Lazy expiry check
    if (node.isExpired()) {
      this.removeNode(node);
      return undefined;
    }

    // Move to front (most recently used)
    this.dll.moveToFront(node);
    return node.value;
  }

  put(key: string, value: T, ttlMs?: number): void {
    const ttl = ttlMs ?? this.defaultTtl;
    const expiresAt = Date.now() + ttl;

    if (this.cache.has(key)) {
      // Update existing entry
      const node = this.cache.get(key)!;
      node.value = value;
      node.expiresAt = expiresAt;
      this.dll.moveToFront(node);
    } else {
      // Evict if at capacity
      if (this.cache.size >= this.capacity) {
        this.evict();
      }

      // Create new entry
      const node = new Node<T>(key, value, expiresAt);
      this.cache.set(key, node);
      this.dll.addToFront(node);
    }
  }

  delete(key: string): boolean {
    const node = this.cache.get(key);
    if (!node) return false;
    this.removeNode(node);
    return true;
  }

  private removeNode(node: Node<T>): void {
    // Remove node from both the DLL and the HashMap
    this.dll.remove(node);
    this.cache.delete(node.key);
  }

  private evict(): void {
    // Remove the LRU entry. If it is expired, great. If not, evict anyway.
    const lru = this.dll.removeLast();
    if (lru) {
      this.cache.delete(lru.key);
    }
  }

  cleanup(): void {
    // Active cleanup: remove all expired entries
    const now = Date.now();
    const expiredKeys: string[] = [];
    for (const [key, node] of this.cache) {
      if (now > node.expiresAt) {
        expiredKeys.push(key);
      }
    }
    for (const key of expiredKeys) {
      this.removeNode(this.cache.get(key)!);
    }
  }

  get size(): number {
    return this.cache.size;
  }
}

Edge Cases

  1. Get a key that just expired — Lazy check catches it: returns undefined and removes the entry.
  2. Put when cache is full and all entries are fresh — The LRU entry is evicted even though it has not expired. LRU eviction is about capacity, not time.
  3. Put with TTL = 0 — The entry expires essentially immediately (with the strict > comparison, a get in the same millisecond could still succeed). After that, get returns undefined, so the put is effectively a no-op write — odd but valid.
  4. Rapid put/get on the same key — Each put resets TTL. Each get refreshes LRU position. The entry stays alive as long as it is actively used.
  5. Thread safety — In a Node.js environment, synchronous code is single-threaded, so these methods cannot interleave mid-operation. For worker threads or async critical sections, wrap operations with a mutex/lock library (see the sketch after this list).
  6. Evicting an expired entry — If the LRU entry happens to be expired, evict removes it anyway. No special handling needed.
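
For edge case 5, a common pattern in async TypeScript is to serialize access through a mutex. Below is a minimal sketch using the third-party async-mutex package and its runExclusive API; the ConcurrentLRUCache wrapper name is ours, and it assumes the LRUCache class defined above:

typescript
import { Mutex } from "async-mutex"; // third-party npm package

// Wrapper that serializes all cache operations through one mutex.
class ConcurrentLRUCache<T> {
  private mutex = new Mutex();
  private inner: LRUCache<T>;

  constructor(capacity: number, defaultTtlMs?: number) {
    this.inner = new LRUCache<T>(capacity, defaultTtlMs);
  }

  async get(key: string): Promise<T | undefined> {
    // runExclusive queues callers, so operations never interleave
    return this.mutex.runExclusive(() => this.inner.get(key));
  }

  async put(key: string, value: T, ttlMs?: number): Promise<void> {
    return this.mutex.runExclusive(() => this.inner.put(key, value, ttlMs));
  }

  async delete(key: string): Promise<boolean> {
    return this.mutex.runExclusive(() => this.inner.delete(key));
  }
}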

Verification

Setup: Cache with capacity=3, default TTL=10 seconds.

Step 1: Put three entries.

cache.put("a", 1)    // DLL order: a
cache.put("b", 2)    // DLL order: b -> a
cache.put("c", 3)    // DLL order: c -> b -> a
  • HashMap: {a, b, c}
  • DLL: [head] <-> c <-> b <-> a <-> [tail]
  • LRU entry: a

Step 2: Get "a" — moves to front.

cache.get("a")       // returns 1
  • DLL: [head] <-> a <-> c <-> b <-> [tail]
  • LRU entry: b (now)

Step 3: Put "d" — cache is full, evicts LRU ("b").

cache.put("d", 4)
  • "b" evicted (it was LRU).
  • DLL: [head] <-> d <-> a <-> c <-> [tail]
  • HashMap: {a, c, d}

Step 4: Wait for "c" to expire. Suppose "c" had been put with an explicit 5-second TTL and 6 seconds have passed, so "a" and "d" (on the 10-second default) are still fresh.

cache.get("c")       // c is expired -> returns undefined, removes c
  • DLL: [head] <-> d <-> a <-> [tail]
  • HashMap: {a, d}
  • Cache size: 2

Step 5: Put "e" — no eviction needed (size 2 < capacity 3).

cache.put("e", 5)
  • DLL: [head] <-> e <-> d <-> a <-> [tail]
  • HashMap: {a, d, e}
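
The walkthrough above translates directly into a small test for the non-time-dependent steps (using Node's built-in assert; step 4 would need a short TTL and a timer, as in the demo later):

typescript
import assert from "node:assert";

const c = new LRUCache<number>(3, 10_000);

c.put("a", 1);
c.put("b", 2);
c.put("c", 3);                     // step 1: c -> b -> a

assert.strictEqual(c.get("a"), 1); // step 2: "a" becomes MRU, "b" is now LRU

c.put("d", 4);                     // step 3: full, evicts "b"
assert.strictEqual(c.get("b"), undefined);
assert.strictEqual(c.size, 3);     // "a", "c", "d" remain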

Complete Code Implementation

typescript
class Node<T> {
  public key: string;
  public value: T;
  public expiresAt: number;
  public prev: Node<T> | null = null;
  public next: Node<T> | null = null;

  // Defaults exist only for sentinel nodes, whose value is never read
  constructor(key: string = "", value: T = undefined as unknown as T, expiresAt: number = 0) {
    this.key = key;
    this.value = value;
    this.expiresAt = expiresAt;
  }

  isExpired(): boolean {
    return Date.now() > this.expiresAt;
  }
}

class DoublyLinkedList<T> {
  public head: Node<T>;
  public tail: Node<T>;

  constructor() {
    this.head = new Node<T>();
    this.tail = new Node<T>();
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  addToFront(node: Node<T>): void {
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  remove(node: Node<T>): void {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
    node.prev = null;
    node.next = null;
  }

  removeLast(): Node<T> | null {
    if (this.isEmpty()) return null;
    const lru = this.tail.prev!;
    this.remove(lru);
    return lru;
  }

  moveToFront(node: Node<T>): void {
    this.remove(node);
    this.addToFront(node);
  }

  isEmpty(): boolean {
    return this.head.next === this.tail;
  }
}

class LRUCache<T> {
  private capacity: number;
  private defaultTtl: number;
  private cache: Map<string, Node<T>> = new Map();
  private dll = new DoublyLinkedList<T>();

  constructor(capacity: number, defaultTtlMs: number = 300_000) {
    if (capacity <= 0) {
      throw new Error("Capacity must be positive");
    }
    this.capacity = capacity;
    this.defaultTtl = defaultTtlMs;
  }

  get(key: string): T | undefined {
    const node = this.cache.get(key);
    if (!node) return undefined;
    if (node.isExpired()) {
      this.removeNode(node);
      return undefined;
    }
    this.dll.moveToFront(node);
    return node.value;
  }

  put(key: string, value: T, ttlMs?: number): void {
    const ttl = ttlMs ?? this.defaultTtl;
    const expiresAt = Date.now() + ttl;

    if (this.cache.has(key)) {
      const node = this.cache.get(key)!;
      node.value = value;
      node.expiresAt = expiresAt;
      this.dll.moveToFront(node);
    } else {
      if (this.cache.size >= this.capacity) {
        this.evict();
      }
      const node = new Node<T>(key, value, expiresAt);
      this.cache.set(key, node);
      this.dll.addToFront(node);
    }
  }

  delete(key: string): boolean {
    const node = this.cache.get(key);
    if (!node) return false;
    this.removeNode(node);
    return true;
  }

  private removeNode(node: Node<T>): void {
    this.dll.remove(node);
    this.cache.delete(node.key);
  }

  private evict(): void {
    const lru = this.dll.removeLast();
    if (lru) {
      this.cache.delete(lru.key);
    }
  }

  cleanup(): void {
    const now = Date.now();
    const expiredKeys: string[] = [];
    for (const [key, node] of this.cache) {
      if (now > node.expiresAt) {
        expiredKeys.push(key);
      }
    }
    for (const key of expiredKeys) {
      this.removeNode(this.cache.get(key)!);
    }
  }

  get size(): number {
    return this.cache.size;
  }

  toString(): string {
    const items: string[] = [];
    let node = this.dll.head.next;
    while (node !== this.dll.tail) {
      const expiredTag = node!.isExpired() ? " [EXPIRED]" : "";
      items.push(`${node!.key}=${node!.value}${expiredTag}`);
      node = node!.next;
    }
    return `LRUCache(size=${this.size}/${this.capacity}, items=[${items.join(", ")}])`;
  }
}

// ---- Demo ----

const cache = new LRUCache<number>(3, 10_000);

cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
console.log(`After 3 puts: ${cache}`);

console.log(`get('a') = ${cache.get("a")}`);
console.log(`After get('a'): ${cache}`);

cache.put("d", 4);
console.log(`After put('d') — 'b' evicted: ${cache}`);

console.log(`get('b') = ${cache.get("b")}`); // undefined — evicted

// Demonstrate TTL
cache.put("short", 99, 100); // expires in 100ms
console.log(`\nBefore expiry: get('short') = ${cache.get("short")}`);
setTimeout(() => {
  console.log(`After expiry: get('short') = ${cache.get("short")}`);
  console.log(`Size after expiry: ${cache.size}`);
}, 200);

Extensibility

1. Eviction Callback

Notify the caller when an entry is evicted (useful for write-back caches):

typescript
class LRUCache<T> {
  private onEvict?: (key: string, value: T) => void;

  constructor(
    capacity: number,
    defaultTtlMs: number = 300_000,
    onEvict?: (key: string, value: T) => void,
  ) {
    // ...
    this.onEvict = onEvict;
  }

  private evict(): void {
    const lru = this.dll.removeLast();
    if (lru) {
      this.cache.delete(lru.key);
      this.onEvict?.(lru.key, lru.value);
    }
  }
}

Tradeoff: If the callback is slow (e.g., writing to disk), it blocks all cache operations. Consider collecting evicted entries and processing them asynchronously.
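
Usage might look like this (logWrite is a hypothetical handler; in practice it might enqueue a database write):

typescript
// Hypothetical write-back handler passed via the extended constructor above
const logWrite = (key: string, value: number) =>
  console.log(`evicted ${key}=${value}`);

const wbCache = new LRUCache<number>(2, 10_000, logWrite);
wbCache.put("x", 1);
wbCache.put("y", 2);
wbCache.put("z", 3); // over capacity -> "x" is evicted, logWrite fires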

2. LFU (Least Frequently Used) Variant

Replace the DLL with a frequency-indexed structure. Each entry tracks an access count. Eviction removes the entry with the lowest frequency (and among ties, the least recent).

typescript
// Conceptual: use a min-heap or frequency buckets
// Each bucket is a DLL of entries with the same access count
// O(1) is achievable with the "LFU with O(1)" algorithm using two hash maps

Tradeoff: LFU is better for workloads with stable hot keys. LRU is better for workloads with temporal locality. Many real systems use a hybrid (e.g., Redis's allkeys-lfu).
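
As a sketch of that two-hash-map idea (our own condensed version: eviction only, no TTL), entries carry a frequency counter, and a second map groups keys by frequency in insertion-ordered Sets, so the oldest key in the lowest-frequency bucket can be evicted in O(1):

typescript
// O(1) LFU sketch: two maps plus a running minimum frequency.
// JS Sets iterate in insertion order, giving LRU order within a bucket.
class LFUCache<T> {
  private entries = new Map<string, { value: T; freq: number }>();
  private buckets = new Map<number, Set<string>>(); // freq -> keys
  private minFreq = 0;

  constructor(private capacity: number) {}

  // Bump a key's frequency, moving it between buckets
  private touch(key: string): void {
    const e = this.entries.get(key)!;
    const bucket = this.buckets.get(e.freq)!;
    bucket.delete(key);
    if (bucket.size === 0) {
      this.buckets.delete(e.freq);
      if (this.minFreq === e.freq) this.minFreq++;
    }
    e.freq++;
    if (!this.buckets.has(e.freq)) this.buckets.set(e.freq, new Set());
    this.buckets.get(e.freq)!.add(key);
  }

  get(key: string): T | undefined {
    if (!this.entries.has(key)) return undefined;
    this.touch(key);
    return this.entries.get(key)!.value;
  }

  put(key: string, value: T): void {
    if (this.entries.has(key)) {
      this.entries.get(key)!.value = value;
      this.touch(key);
      return;
    }
    if (this.entries.size >= this.capacity) {
      // Evict the least recent key in the lowest-frequency bucket
      const victims = this.buckets.get(this.minFreq)!;
      const victim = victims.values().next().value!;
      victims.delete(victim);
      if (victims.size === 0) this.buckets.delete(this.minFreq);
      this.entries.delete(victim);
    }
    this.entries.set(key, { value, freq: 1 });
    if (!this.buckets.has(1)) this.buckets.set(1, new Set());
    this.buckets.get(1)!.add(key);
    this.minFreq = 1; // new entries always start at frequency 1
  }
}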

3. Background Cleanup Timer

typescript
class LRUCacheWithCleanup<T> extends LRUCache<T> {
  private cleanupTimer: ReturnType<typeof setInterval>;

  constructor(
    capacity: number,
    defaultTtlMs: number = 300_000,
    cleanupIntervalMs: number = 60_000,
  ) {
    super(capacity, defaultTtlMs);
    this.cleanupTimer = setInterval(() => this.cleanup(), cleanupIntervalMs);
  }

  destroy(): void {
    clearInterval(this.cleanupTimer);
  }
}

Tradeoff: In Node, the interval timer keeps the process alive unless you call unref() on it, and each cleanup pass iterates every entry, which can briefly block the event loop when many entries are expired. For large caches, consider a probabilistic approach: sample a subset of entries and clean only those (roughly what Redis does for expired keys), as sketched below.
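
A rough sketch of that sampling idea, as a bounded variant of cleanup() inside LRUCache (the method name cleanupSample and the default sample size are ours; Redis actually samples randomly and repeats while a large fraction of the sample turns out to be expired):

typescript
class LRUCache<T> {
  // ... existing fields and methods ...

  // Bounded cleanup: examine at most sampleSize entries per call.
  // Map iteration is insertion order, so this scans the oldest-inserted
  // entries first, a cheap approximation of random sampling.
  cleanupSample(sampleSize: number = 20): number {
    const now = Date.now();
    let removed = 0;
    for (const node of this.cache.values()) {
      if (sampleSize-- <= 0) break;
      if (now > node.expiresAt) {
        this.removeNode(node); // deleting during Map iteration is safe in JS
        removed++;
      }
    }
    return removed;
  }
}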


What is Expected at Each Level

Level | Expectations
Junior | Implement basic LRU with HashMap + DLL. Understand why both are needed. May not handle TTL.
Mid-level | O(1) get/put/delete with sentinel nodes in the DLL. TTL with lazy expiry. Explain why the key is stored in the node. Handle the "update existing key" case correctly.
Senior | Everything above plus: generics for type safety, thread safety discussion (mutexes for multi-threaded environments), hybrid lazy+active cleanup, eviction callbacks, comparison with LFU, analysis of what Redis does differently.
