
42 — LLD: LRU + LFU Cache

Frequently Asked at Salesforce — reported repeatedly in SMTS technical rounds (2024-2026).

Understanding the Problem

Implement a fixed-capacity key-value cache with O(1) get and put. Two variants: LRU evicts the key touched least recently, while LFU evicts the key accessed fewest times, breaking ties by recency within the same frequency.

What is a cache?

A bounded store that keeps the hot working set in memory. When it fills up, something must go — the eviction policy decides who. The trick of this problem is achieving O(1) on every operation by combining a hash map with an intrusive doubly linked list.


Requirements

Clarifying Questions

You: Do we need TTL?

Interviewer: Not for v1, but the interface should allow adding it.

Leaves a seam for expiresAt later.
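
That seam can be sketched at the interface level. This is a hedged sketch: the optional ttlMs parameter and the StubCache name are illustrative assumptions, not part of the agreed requirements.

```typescript
// Sketch: put reserves an optional TTL argument so v2 can honour it
// without breaking v1 callers. Names here are hypothetical.
interface CacheV1<K, V> {
  get(key: K): V | undefined;
  put(key: K, value: V, ttlMs?: number): void; // accepted but ignored in v1
}

// Minimal v1 stub that satisfies the interface while ignoring ttlMs.
class StubCache<K, V> implements CacheV1<K, V> {
  private m = new Map<K, V>();
  get(key: K) { return this.m.get(key); }
  put(key: K, value: V, _ttlMs?: number) { this.m.set(key, value); }
}
```

In v2, ttlMs would translate into an expiresAt timestamp on the node, which is exactly what the Extensibility section does.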

You: Generic types?

Interviewer: Yes — generic K and V.

Rules out `any`-based shortcuts.

You: Thread-safe?

Interviewer: Start single-threaded, I will ask about threading.

Design pattern: write clean single-threaded code first, then layer striping or locking.

You: Stats like hit-rate counters?

Interviewer: Expose a simple counter.

Two fields: hits and misses.

Final Requirements

  1. get(key) returns value or undefined, O(1) amortised.
  2. put(key, value) inserts or updates, O(1) amortised.
  3. LRU evicts least recently used on capacity overflow.
  4. LFU evicts least frequently used; within a frequency, evict the least recently used.
  5. Generic over K, V.
  6. Expose hits and misses.
  7. Thread-safety discussion in the follow-up.

Out of scope: persistence, TTL, distributed caching.


Core Entities and Relationships

| Entity | Responsibility |
| --- | --- |
| Cache<K,V> | Interface: get, put, size. |
| DLLNode<K,V> | Doubly linked list node with optional frequency. |
| DLL<K,V> | O(1) add-to-front, remove-node, pop-tail. |
| LRUCache | HashMap key→node plus one DLL in recency order. |
| LFUCache | HashMap key→node plus a map of frequency→DLL plus minFreq. |

Why a custom DLL instead of a JS Map? Because in Java/C++ you need this plumbing anyway, and Salesforce interviewers want to see the pointer surgery.

Why keep DLL and HashMap separate? Single Responsibility — the list only knows about order, the map only knows about presence.

Design is iterative — start with LRU, then add LFU on top of the same DLL.


Class Design

DLLNode<K,V>

State:

  • key: K, value: V
  • prev: DLLNode | null, next: DLLNode | null
  • freq: int (for LFU)

DLL<K,V>

Methods:

  • addToFront(node)
  • remove(node)
  • popTail() -> node | null

Sentinel head and tail nodes simplify the edge cases: no null checks are needed when adding or removing at either end.

LRUCache<K,V>

State:

  • map: Map<K, DLLNode<K,V>>
  • list: DLL<K,V>
  • capacity: int, hits: int, misses: int

Methods:

  • get(key) — move to front on hit, increment counters
  • put(key, value) — insert or update, evict tail on overflow

LFUCache<K,V>

State:

  • map: Map<K, DLLNode<K,V>>
  • freqLists: Map<int, DLL<K,V>>
  • minFreq: int

Methods:

  • bump(node) — move to the next frequency list
  • get(key) — bump and return
  • put(key, value) — insert at freq 1 or bump existing

Design principles at play:

  • Single Responsibility — DLL, cache, policy each own one concern.
  • Strategy Pattern — LRU vs LFU as swappable eviction policies (see problem 48 for the formal strategy split).
  • Information Expert — frequency lives on the node because only the node knows its history.
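
The Strategy split can be sketched like this; EvictionPolicy, onAccess, and onInsert are hypothetical names for illustration, not the interface the reference implementation below uses:

```typescript
// Hypothetical Strategy sketch: the cache delegates the "who goes next"
// decision to a policy object, so LRU vs LFU becomes a constructor argument.
interface EvictionPolicy<K> {
  onAccess(key: K): void;
  onInsert(key: K): void;
  evict(): K; // choose and forget a victim
}

class LRUPolicy<K> implements EvictionPolicy<K> {
  private order = new Set<K>(); // Set preserves insertion order
  onAccess(key: K) { this.order.delete(key); this.order.add(key); }
  onInsert(key: K) { this.order.add(key); }
  evict(): K {
    const victim = this.order.values().next().value as K; // oldest entry
    this.order.delete(victim);
    return victim;
  }
}

class PolicyCache<K, V> {
  private store = new Map<K, V>();
  constructor(private capacity: number, private policy: EvictionPolicy<K>) {}
  get(key: K): V | undefined {
    if (!this.store.has(key)) return undefined;
    this.policy.onAccess(key);
    return this.store.get(key);
  }
  put(key: K, value: V): void {
    if (this.store.has(key)) {
      this.store.set(key, value);
      this.policy.onAccess(key);
      return;
    }
    if (this.store.size >= this.capacity) this.store.delete(this.policy.evict());
    this.store.set(key, value);
    this.policy.onInsert(key);
  }
}
```

Swapping in an LFUPolicy leaves PolicyCache untouched, which is the point of the pattern.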

Implementation

Core Logic: LRU get / put

Bad approach: Map plus Array<K> where order lives in the array.

  • Array.indexOf is O(n); violates the constraint.

Good approach: JS Map with delete + set to move keys to the end.

  • Works in JS because Map preserves insertion order, but the interviewer still wants the DLL.
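
A minimal sketch of that delete + set trick, assuming Map's guaranteed insertion-order iteration (the MapLRU name is illustrative):

```typescript
// "Good approach" sketch: JS Map iterates in insertion order, so
// re-inserting a key on access moves it to the most-recent position.
// The least recently used key is then the first key in iteration order.
class MapLRU<K, V> {
  private m = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.m.has(key)) return undefined;
    const v = this.m.get(key)!;
    this.m.delete(key); // remove from old position...
    this.m.set(key, v); // ...and re-insert at the end (most recent)
    return v;
  }

  put(key: K, value: V): void {
    if (this.m.has(key)) this.m.delete(key);
    else if (this.m.size >= this.capacity) {
      const oldest = this.m.keys().next().value as K; // least recently used
      this.m.delete(oldest);
    }
    this.m.set(key, value);
  }
}
```

It behaves correctly, but it hides exactly the pointer work the DLL version makes explicit.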

Great approach: HashMap key β†’ DLL node. Move-to-front is O(1) because the node owns its pointers.

Verification

LRU capacity 3. Start empty.

  1. put(a, 1) → map={a}, list=a. size=1.
  2. put(b, 2) → list=b→a.
  3. put(c, 3) → list=c→b→a.
  4. get(a) → hit, list=a→c→b.
  5. put(d, 4) → full; pop tail b; list=d→a→c; map={a,c,d}.
  6. get(b) → miss; misses=1.

LFU capacity 2.

  1. put(a, 1) → freqLists={1:[a]}, minFreq=1.
  2. put(b, 2) → freqLists={1:[b,a]}.
  3. get(a) → bump a to freq 2; freqLists={1:[b], 2:[a]}; minFreq=1.
  4. put(c, 3) → full; evict from minFreq=1 list tail, which is b; insert c at freq 1; freqLists={1:[c], 2:[a]}.
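
The same trace can be replayed against a compact sketch that leans on Map/Set insertion order instead of frequency DLLs. Illustrative only — the SketchLFU name and its internals are assumptions, and the reference design below keeps the DLL:

```typescript
// Sketch: buckets maps frequency -> Set of keys; Set iteration order is
// insertion order, so the first key in the minFreq bucket is the least
// recently used key at that frequency.
class SketchLFU<K, V> {
  private vals = new Map<K, V>();
  private freq = new Map<K, number>();
  private buckets = new Map<number, Set<K>>();
  private minFreq = 0;
  constructor(private capacity: number) {}

  private touch(key: K): void {
    const f = this.freq.get(key)!;
    const bucket = this.buckets.get(f)!;
    bucket.delete(key);
    if (bucket.size === 0) {
      this.buckets.delete(f);
      if (this.minFreq === f) this.minFreq = f + 1;
    }
    this.freq.set(key, f + 1);
    if (!this.buckets.has(f + 1)) this.buckets.set(f + 1, new Set());
    this.buckets.get(f + 1)!.add(key); // re-added last => most recent at f+1
  }

  get(key: K): V | undefined {
    if (!this.vals.has(key)) return undefined;
    this.touch(key);
    return this.vals.get(key);
  }

  put(key: K, value: V): void {
    if (this.capacity === 0) return;
    if (this.vals.has(key)) { this.vals.set(key, value); this.touch(key); return; }
    if (this.vals.size >= this.capacity) {
      const bucket = this.buckets.get(this.minFreq)!;
      const victim = bucket.values().next().value as K; // oldest at minFreq
      bucket.delete(victim);
      if (bucket.size === 0) this.buckets.delete(this.minFreq);
      this.vals.delete(victim);
      this.freq.delete(victim);
    }
    this.vals.set(key, value);
    this.freq.set(key, 1);
    if (!this.buckets.has(1)) this.buckets.set(1, new Set());
    this.buckets.get(1)!.add(key);
    this.minFreq = 1; // a fresh key always starts at frequency 1
  }
}
```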

Thread Safety

  • Single lock around get/put works at low contention but serialises all access.
  • Stripe the cache: N shards keyed by hash(key) % N, each with its own LRU and lock. Different keys go in parallel.
  • Caffeine-style: reads append to a ring buffer, a single writer drains it. Reads become lock-free. Great for read-heavy workloads.
  • For LFU the minFreq pointer is the race hotspot; the simplest correct thing is a coarse lock, and the next step is shard-per-key.

Complete Code Implementation

Java:
import java.util.*;

class DLLNode<K, V> {
    K key; V value; int freq = 1;
    DLLNode<K, V> prev, next;
    DLLNode(K k, V v) { this.key = k; this.value = v; }
}

class DLL<K, V> {
    DLLNode<K, V> head = new DLLNode<>(null, null);
    DLLNode<K, V> tail = new DLLNode<>(null, null);
    int size = 0;
    DLL() { head.next = tail; tail.prev = head; }

    void addToFront(DLLNode<K, V> n) {
        n.prev = head; n.next = head.next;
        head.next.prev = n; head.next = n; size++;
    }
    void remove(DLLNode<K, V> n) {
        n.prev.next = n.next; n.next.prev = n.prev; size--;
    }
    DLLNode<K, V> popTail() {
        if (size == 0) return null;
        DLLNode<K, V> n = tail.prev; remove(n); return n;
    }
}

class LRUCache<K, V> {
    private final int capacity;
    private final Map<K, DLLNode<K, V>> map = new HashMap<>();
    private final DLL<K, V> list = new DLL<>();
    public int hits, misses;

    public LRUCache(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
    }

    public synchronized V get(K key) {
        DLLNode<K, V> n = map.get(key);
        if (n == null) { misses++; return null; }
        hits++;
        list.remove(n); list.addToFront(n);
        return n.value;
    }

    public synchronized void put(K key, V value) {
        DLLNode<K, V> existing = map.get(key);
        if (existing != null) {
            existing.value = value;
            list.remove(existing); list.addToFront(existing);
            return;
        }
        if (map.size() >= capacity) {
            DLLNode<K, V> ev = list.popTail();
            if (ev != null) map.remove(ev.key);
        }
        DLLNode<K, V> n = new DLLNode<>(key, value);
        list.addToFront(n); map.put(key, n);
    }
    public int size() { return map.size(); }
}

class LFUCache<K, V> {
    private final int capacity;
    private final Map<K, DLLNode<K, V>> map = new HashMap<>();
    private final Map<Integer, DLL<K, V>> freqLists = new HashMap<>();
    private int minFreq = 0;
    public int hits, misses; // requirement 6: expose hit/miss counters

    public LFUCache(int capacity) { this.capacity = capacity; }

    private void bump(DLLNode<K, V> n) {
        DLL<K, V> old = freqLists.get(n.freq);
        old.remove(n);
        if (old.size == 0) {
            freqLists.remove(n.freq);
            if (minFreq == n.freq) minFreq++;
        }
        n.freq++;
        freqLists.computeIfAbsent(n.freq, k -> new DLL<>()).addToFront(n);
    }

    public synchronized V get(K key) {
        DLLNode<K, V> n = map.get(key);
        if (n == null) { misses++; return null; }
        hits++;
        bump(n);
        return n.value;
    }

    public synchronized void put(K key, V value) {
        if (capacity == 0) return;
        DLLNode<K, V> existing = map.get(key);
        if (existing != null) { existing.value = value; bump(existing); return; }
        if (map.size() >= capacity) {
            DLL<K, V> list = freqLists.get(minFreq);
            DLLNode<K, V> ev = list.popTail();
            if (ev != null) map.remove(ev.key);
            if (list.size == 0) freqLists.remove(minFreq);
        }
        DLLNode<K, V> n = new DLLNode<>(key, value);
        freqLists.computeIfAbsent(1, k -> new DLL<>()).addToFront(n);
        map.put(key, n);
        minFreq = 1;
    }
}
C++:
#include <list>
#include <mutex>
#include <unordered_map>

template <typename K, typename V>
class LRUCache {
public:
    LRUCache(int cap) : capacity(cap) {}

    bool get(const K& key, V& out) {
        std::lock_guard<std::mutex> g(mu);
        auto it = map.find(key);
        if (it == map.end()) { misses++; return false; }
        hits++;
        order.splice(order.begin(), order, it->second);
        out = it->second->second;
        return true;
    }

    void put(const K& key, const V& value) {
        std::lock_guard<std::mutex> g(mu);
        auto it = map.find(key);
        if (it != map.end()) {
            it->second->second = value;
            order.splice(order.begin(), order, it->second);
            return;
        }
        if ((int)map.size() >= capacity) {
            map.erase(order.back().first);
            order.pop_back();
        }
        order.emplace_front(key, value);
        map[key] = order.begin();
    }

    int hits = 0, misses = 0;
private:
    int capacity;
    std::list<std::pair<K, V>> order;
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> map;
    std::mutex mu;
};

template <typename K, typename V>
class LFUCache {
public:
    LFUCache(int cap) : capacity(cap) {}

    int hits = 0, misses = 0;  // requirement 6: expose hit/miss counters

    bool get(const K& key, V& out) {
        std::lock_guard<std::mutex> g(mu);
        auto it = map.find(key);
        if (it == map.end()) { misses++; return false; }
        hits++;
        bump(it);
        out = it->second.value;
        return true;
    }

    void put(const K& key, const V& value) {
        std::lock_guard<std::mutex> g(mu);
        if (capacity == 0) return;
        auto it = map.find(key);
        if (it != map.end()) { it->second.value = value; bump(it); return; }
        if ((int)map.size() >= capacity) {
            auto& lst = freqLists[minFreq];
            map.erase(lst.back());
            lst.pop_back();
            if (lst.empty()) freqLists.erase(minFreq);
        }
        freqLists[1].push_front(key);
        map[key] = { value, 1, freqLists[1].begin() };
        minFreq = 1;
    }

private:
    struct Entry {
        V value;
        int freq;
        typename std::list<K>::iterator it;
    };
    int capacity, minFreq = 0;
    std::unordered_map<K, Entry> map;
    std::unordered_map<int, std::list<K>> freqLists;
    std::mutex mu;

    void bump(typename std::unordered_map<K, Entry>::iterator it) {
        int f = it->second.freq;
        freqLists[f].erase(it->second.it);
        if (freqLists[f].empty()) {
            freqLists.erase(f);
            if (minFreq == f) minFreq = f + 1;
        }
        it->second.freq++;
        freqLists[it->second.freq].push_front(it->first);
        it->second.it = freqLists[it->second.freq].begin();
    }
};
TypeScript:
interface Cache<K, V> {
  get(key: K): V | undefined;
  put(key: K, value: V): void;
  readonly size: number;
}

class DLLNode<K, V> {
  prev: DLLNode<K, V> | null = null;
  next: DLLNode<K, V> | null = null;
  freq = 1;
  constructor(public key: K, public value: V) {}
}

class DLL<K, V> {
  head = new DLLNode<K, V>(null as unknown as K, null as unknown as V);
  tail = new DLLNode<K, V>(null as unknown as K, null as unknown as V);
  size = 0;
  constructor() { this.head.next = this.tail; this.tail.prev = this.head; }

  addToFront(node: DLLNode<K, V>) {
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next!.prev = node;
    this.head.next = node;
    this.size += 1;
  }
  remove(node: DLLNode<K, V>) {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
    this.size -= 1;
  }
  popTail(): DLLNode<K, V> | null {
    if (this.size === 0) return null;
    const node = this.tail.prev!;
    this.remove(node);
    return node;
  }
}

export class LRUCache<K, V> implements Cache<K, V> {
  private map = new Map<K, DLLNode<K, V>>();
  private list = new DLL<K, V>();
  hits = 0; misses = 0;
  constructor(private capacity: number) {
    if (capacity <= 0) throw new Error("capacity>0");
  }
  get size() { return this.map.size; }

  get(key: K): V | undefined {
    const node = this.map.get(key);
    if (!node) { this.misses += 1; return undefined; }
    this.hits += 1;
    this.list.remove(node);
    this.list.addToFront(node);
    return node.value;
  }

  put(key: K, value: V): void {
    const existing = this.map.get(key);
    if (existing) {
      existing.value = value;
      this.list.remove(existing);
      this.list.addToFront(existing);
      return;
    }
    if (this.map.size >= this.capacity) {
      const evicted = this.list.popTail();
      if (evicted) this.map.delete(evicted.key);
    }
    const node = new DLLNode(key, value);
    this.list.addToFront(node);
    this.map.set(key, node);
  }
}

export class LFUCache<K, V> implements Cache<K, V> {
  private map = new Map<K, DLLNode<K, V>>();
  private freqLists = new Map<number, DLL<K, V>>();
  private minFreq = 0;
  hits = 0; misses = 0; // requirement 6: expose hit/miss counters
  constructor(private capacity: number) {}
  get size() { return this.map.size; }

  private bump(node: DLLNode<K, V>) {
    const oldList = this.freqLists.get(node.freq)!;
    oldList.remove(node);
    if (oldList.size === 0) {
      this.freqLists.delete(node.freq);
      if (this.minFreq === node.freq) this.minFreq += 1;
    }
    node.freq += 1;
    const nextList = this.freqLists.get(node.freq) ?? new DLL<K, V>();
    nextList.addToFront(node);
    this.freqLists.set(node.freq, nextList);
  }

  get(key: K): V | undefined {
    const node = this.map.get(key);
    if (!node) { this.misses += 1; return undefined; }
    this.hits += 1;
    this.bump(node);
    return node.value;
  }

  put(key: K, value: V): void {
    if (this.capacity === 0) return;
    const existing = this.map.get(key);
    if (existing) { existing.value = value; this.bump(existing); return; }
    if (this.map.size >= this.capacity) {
      const list = this.freqLists.get(this.minFreq)!;
      const evicted = list.popTail();
      if (evicted) this.map.delete(evicted.key);
      if (list.size === 0) this.freqLists.delete(this.minFreq);
    }
    const node = new DLLNode(key, value);
    node.freq = 1;
    const list = this.freqLists.get(1) ?? new DLL<K, V>();
    list.addToFront(node);
    this.freqLists.set(1, list);
    this.map.set(key, node);
    this.minFreq = 1;
  }
}

Extensibility

1. "Add TTL"

Add an expiresAt timestamp to each node. On get, treat an expired node as a miss and evict it in place. A background reaper is optional but cheap.

TypeScript:
class DLLNode<K, V> { expiresAt?: number; }
// inside get:
if (node.expiresAt !== undefined && node.expiresAt < Date.now()) {
  this.list.remove(node); this.map.delete(key);
  this.misses += 1; return undefined;
}

2. "Write-through vs write-back"

Wrap the cache in a LoadingCache decorator that accepts a CacheLoader for read-through and a CacheWriter for write-through. Keeps the core cache focused on eviction.
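
A sketch of that decorator; the CacheLoader/CacheWriter shapes are assumptions loosely modelled on Guava-style naming, not an API defined earlier in this problem:

```typescript
type CacheLoader<K, V> = (key: K) => V;              // read-through: compute on miss
type CacheWriter<K, V> = (key: K, value: V) => void; // write-through: persist on put

// Decorator over any cache-shaped object; the inner cache keeps owning eviction.
class LoadingCache<K, V> {
  constructor(
    private inner: { get(k: K): V | undefined; put(k: K, v: V): void },
    private loader: CacheLoader<K, V>,
    private writer?: CacheWriter<K, V>,
  ) {}

  get(key: K): V {
    const hit = this.inner.get(key);
    if (hit !== undefined) return hit;
    const loaded = this.loader(key); // read-through on miss
    this.inner.put(key, loaded);
    return loaded;
  }

  put(key: K, value: V): void {
    this.writer?.(key, value); // write-through before caching
    this.inner.put(key, value);
  }
}
```

Read-through and write-through live in the decorator, so the core cache stays a pure eviction engine.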

3. "Sharded thread-safe cache"

ShardedCache holds an array of LRUCache instances selected by hash(key) % N, each with its own lock. Scales concurrent reads linearly with shard count.
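
Structurally it looks like the sketch below. JS is single-threaded, so the per-shard lock is only indicated in comments — in Java each shard would pair an LRUCache with its own lock — and the string-hash helper is an assumption:

```typescript
// One shard: a small insertion-order LRU over a Map (same trick as the
// "good approach" sketch). In Java this would be LRUCache + a lock.
class Shard<V> {
  private m = new Map<string, V>();
  constructor(private capacity: number) {}
  get(key: string): V | undefined {
    if (!this.m.has(key)) return undefined;
    const v = this.m.get(key)!;
    this.m.delete(key); this.m.set(key, v); // move to most-recent position
    return v;
  }
  put(key: string, value: V): void {
    if (this.m.has(key)) this.m.delete(key);
    else if (this.m.size >= this.capacity) {
      this.m.delete(this.m.keys().next().value as string); // evict LRU
    }
    this.m.set(key, value);
  }
}

class ShardedCache<V> {
  private shards: Shard<V>[];
  constructor(totalCapacity: number, private n = 4) {
    this.shards = Array.from({ length: n }, () => new Shard<V>(Math.ceil(totalCapacity / n)));
  }
  private shardFor(key: string): Shard<V> {
    let h = 0; // simple 31-based string hash (illustrative assumption)
    for (let i = 0; i < key.length; i++) h = (h * 31 + key.charCodeAt(i)) >>> 0;
    return this.shards[h % this.n]; // would lock this shard in Java/C++
  }
  get(key: string) { return this.shardFor(key).get(key); }
  put(key: string, value: V) { this.shardFor(key).put(key, value); }
}
```

Routing by hash(key) % N pins each key to one shard, so per-shard LRU order stays meaningful while unrelated keys never contend.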


What is Expected at Each Level

| Level | Expectations |
| --- | --- |
| Mid | LRU using Map plus insertion-order tricks, no stats. |
| Senior / SMTS | LFU correctness around minFreq, DLL + HashMap implementation, thread-safety story with striping, clean interfaces. |
| Staff | Admission policy (TinyLFU sketch), observability, hit-rate SLO, discussion of Caffeine-style lock-free reads. |
