48 — LLD: In-Memory Cache with Custom Eviction Policy
Understanding the Problem
Design a cache where the eviction policy is injected. Support LRU, LFU, FIFO, TTL — and make it possible to add more without touching the cache itself. The cache stores key-value pairs under a capacity bound; when full, it asks the policy who to evict.
What is a pluggable eviction cache?
It is the textbook Strategy pattern. The cache tells the policy "this key was inserted / accessed / removed" through three notification callbacks and asks "who should I evict?" through a fourth — and the policy's implementation of that conversation is the only difference between LRU and LFU.
Requirements
Clarifying Questions
You: What policies must be built in?
Interviewer: LRU, LFU, FIFO, TTL. Others should be easy to add.
Four concrete implementations. TTL is interesting because it is time-based.
You: Can policies be composed — say, TTL + LRU?
Interviewer: Yes, support stacking via Decorator.
Decorator pattern so TTL + LRU is a wrapping composition.
You: Concurrency?
Interviewer: Thread-safe under modest contention.
Single lock is fine at this scope; discuss striping as a follow-up.
You: Stats?
Interviewer: Hit and miss counts.
Two counters.
Final Requirements
- `Cache<K, V>` with `get`, `put`, `size`.
- Pluggable `EvictionPolicy<K>` — notified on insert, access, remove; picks victims.
- Four built-in policies: LRU, LFU, FIFO, TTL.
- Composition via Decorator for combined policies.
- Thread-safe.
Out of scope: loading / write-through (separate decorator), distribution, persistence.
Core Entities and Relationships
| Entity | Responsibility |
|---|---|
| `EvictionPolicy<K>` | Interface with `onInsert`, `onAccess`, `onRemove`, `pickVictim`. |
| `Cache<K, V>` | Storage + coordination with policy. |
| `LRUPolicy`, `LFUPolicy`, `FIFOPolicy`, `TTLPolicy` | Concrete strategies. |
| `CompositePolicy` | Decorator stacking two policies. |
Why externalise the policy? So the cache obeys Open/Closed — adding a new eviction algorithm means shipping a new class, not editing the cache.
Why four callbacks? Because a policy needs to know what happened to update its internal ordering. LRU bumps on access; FIFO does not.
Design is iterative — start with the interface, implement LRU first (since its invariant is the strictest), then slot in the others.
Class Design
`EvictionPolicy<K>` (interface)
- `onInsert(key)`
- `onAccess(key)`
- `onRemove(key)`
- `pickVictim() -> K | null`

`Cache<K, V>`

State:
- `data: Map<K, V>`
- `capacity: int`
- `policy: EvictionPolicy<K>`
- `hits: int`, `misses: int`

Methods:
- `get(key)` — on hit, `policy.onAccess(key)`
- `put(key, value)` — evict via `policy.pickVictim` if full, then `policy.onInsert(key)`
Design principles at play:
- Strategy Pattern — the policy interface.
- Dependency Inversion — cache depends on the policy interface, not any concrete class.
- Open/Closed — new policy without editing the cache.
- Decorator Pattern — `CompositePolicy` wraps two policies.
Implementation
Core Logic
Bad approach: implement LRU inside the cache as a special case and pretend it is pluggable.
- Breaks the invariant; adding LFU duplicates half the cache.
Good approach: policy owns all ordering data. Cache is a dumb Map<K,V> plus a capacity check.
Great approach: same, plus a policy that can return a null victim (for TTL when nothing is expired), in which case the cache refuses the insert or evicts by a fallback policy. Composition handles that.
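A minimal sketch of that null-victim contract, in the same TypeScript style as the full listing below (class names here are illustrative, and the clock is injected so the example is deterministic):

```typescript
// Sketch: a policy may return null when nothing is evictable,
// and the cache then refuses the insert rather than exceeding capacity.
interface EvictionPolicy<K> {
  onInsert(key: K): void;
  onRemove(key: K): void;
  pickVictim(): K | null;
}

// Toy TTL-style policy with an injected clock for determinism.
class FixedExpiryPolicy<K> implements EvictionPolicy<K> {
  private expiry = new Map<K, number>();
  constructor(private ttl: number, private now: () => number) {}
  onInsert(k: K) { this.expiry.set(k, this.now() + this.ttl); }
  onRemove(k: K) { this.expiry.delete(k); }
  pickVictim(): K | null {
    for (const [k, e] of this.expiry) if (e <= this.now()) return k;
    return null; // nothing expired — the cache must refuse or fall back
  }
}

class BoundedCache<K, V> {
  private data = new Map<K, V>();
  constructor(private capacity: number, private policy: EvictionPolicy<K>) {}
  put(key: K, value: V): boolean {
    if (this.data.has(key)) { this.data.set(key, value); return true; }
    if (this.data.size >= this.capacity) {
      const victim = this.policy.pickVictim();
      if (victim === null) return false; // refuse: nothing evictable
      this.data.delete(victim);
      this.policy.onRemove(victim);
    }
    this.data.set(key, value);
    this.policy.onInsert(key);
    return true;
  }
  get size() { return this.data.size; }
}

let clock = 0;
const cache = new BoundedCache<string, number>(2, new FixedExpiryPolicy<string>(10, () => clock));
cache.put("a", 1);
cache.put("b", 2);
const accepted = cache.put("c", 3);      // nothing expired yet → refused
clock = 11;                              // both entries are now expired
const acceptedLater = cache.put("c", 3); // eviction succeeds, insert admitted
```

Returning `boolean` from `put` is one way to surface the refusal; the composite approach below avoids refusals entirely by guaranteeing a fallback victim.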
Verification
Cache capacity 3 with LRU.
- `put(a, 1)` → data={a}, policy.onInsert(a). size=1.
- `put(b, 2)` → size=2.
- `put(c, 3)` → size=3.
- `get(a)` → hit; policy.onAccess(a). Recency order is now a, c, b (b is oldest).
- `put(d, 4)` → full; `policy.pickVictim()` returns b. data.delete(b), policy.onRemove(b). Insert d.
- Switch the policy to FIFO and replay the same events — the victim after `put(d, 4)` is a, because insertion order is a, b, c, d and `get(a)` does not affect FIFO ordering.
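The LRU half of this trace can be checked end to end. A compact, self-contained version of the cache and counter-based LRU policy, mirroring the TypeScript listing later in this section:

```typescript
// Check of the LRU trace: get(a) after inserting a, b, c makes b the
// oldest entry, so inserting d into a full cache evicts b.
interface EvictionPolicy<K> {
  onInsert(key: K): void;
  onAccess(key: K): void;
  onRemove(key: K): void;
  pickVictim(): K | null;
}

class LRUPolicy<K> implements EvictionPolicy<K> {
  private order = new Map<K, number>();
  private counter = 0;
  onInsert(k: K) { this.order.set(k, ++this.counter); }
  onAccess(k: K) { this.order.set(k, ++this.counter); }
  onRemove(k: K) { this.order.delete(k); }
  pickVictim(): K | null {
    let minKey: K | null = null, min = Infinity;
    for (const [k, t] of this.order) if (t < min) { min = t; minKey = k; }
    return minKey; // smallest timestamp = least recently used
  }
}

class Cache<K, V> {
  private data = new Map<K, V>();
  hits = 0; misses = 0;
  constructor(private capacity: number, private policy: EvictionPolicy<K>) {}
  get(key: K): V | undefined {
    const v = this.data.get(key);
    if (v === undefined) { this.misses++; return undefined; }
    this.hits++; this.policy.onAccess(key);
    return v;
  }
  put(key: K, value: V) {
    if (this.data.has(key)) { this.data.set(key, value); this.policy.onAccess(key); return; }
    if (this.data.size >= this.capacity) {
      const victim = this.policy.pickVictim();
      if (victim === null) return; // nothing evictable: refuse the insert
      this.data.delete(victim);
      this.policy.onRemove(victim);
    }
    this.data.set(key, value);
    this.policy.onInsert(key);
  }
  has(key: K) { return this.data.has(key); }
}

const c = new Cache<string, number>(3, new LRUPolicy<string>());
c.put("a", 1); c.put("b", 2); c.put("c", 3);
c.get("a");    // bump a: b is now the oldest
c.put("d", 4); // full → evicts b
```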
Thread Safety
- A single `ReentrantLock` around `get` and `put` is simplest.
- For higher throughput, stripe by `hash(key) % N` into independent cache shards.
- Policy state must also be guarded; if you stripe the cache, give each shard its own policy instance.
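Striping can be sketched as routing each key to one of N independent shards. JavaScript is single-threaded, so this TypeScript sketch only demonstrates the routing; in Java each shard would additionally own its own `ReentrantLock` and policy instance (the hash function is an illustrative stand-in):

```typescript
// Shard-per-instance sketch: hash(key) % N routes each key to one of N
// independent shards, so contention (and policy state) is per-shard.
class ShardedCache<V> {
  private shards: Map<string, V>[] = [];
  constructor(private n: number) {
    for (let i = 0; i < n; i++) this.shards.push(new Map());
  }
  private shardFor(key: string): Map<string, V> {
    let h = 0;
    for (let i = 0; i < key.length; i++) h = (h * 31 + key.charCodeAt(i)) | 0;
    return this.shards[((h % this.n) + this.n) % this.n]; // guard against negative hash
  }
  put(key: string, value: V) { this.shardFor(key).set(key, value); }
  get(key: string): V | undefined { return this.shardFor(key).get(key); }
}

const sc = new ShardedCache<number>(4);
sc.put("alpha", 1);
sc.put("beta", 2);
```

The trade-off: capacity and LRU ordering become per-shard rather than global, which is usually acceptable.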
Complete Code Implementation
Java:

import java.util.*;
import java.util.concurrent.locks.ReentrantLock;
interface EvictionPolicy<K> {
void onInsert(K key);
void onAccess(K key);
void onRemove(K key);
K pickVictim();
}
class Cache<K, V> {
private final Map<K, V> data = new HashMap<>();
private final int capacity;
private final EvictionPolicy<K> policy;
private final ReentrantLock lock = new ReentrantLock();
public int hits, misses;
public Cache(int capacity, EvictionPolicy<K> policy) {
this.capacity = capacity; this.policy = policy;
}
public V get(K key) {
lock.lock();
try {
V v = data.get(key);
if (v == null) { misses++; return null; }
hits++; policy.onAccess(key);
return v;
} finally { lock.unlock(); }
}
public void put(K key, V value) {
lock.lock();
try {
if (data.containsKey(key)) { data.put(key, value); policy.onAccess(key); return; }
if (data.size() >= capacity) {
K victim = policy.pickVictim();
if (victim == null) return; // nothing evictable (e.g. TTL with no expired entries): refuse the insert
data.remove(victim); policy.onRemove(victim);
}
data.put(key, value);
policy.onInsert(key);
} finally { lock.unlock(); }
}
public int size() { return data.size(); }
}
class FIFOPolicy<K> implements EvictionPolicy<K> {
private final Deque<K> q = new ArrayDeque<>();
public void onInsert(K k) { q.offerLast(k); }
public void onAccess(K k) {}
public void onRemove(K k) { q.remove(k); }
public K pickVictim() { return q.peekFirst(); }
}
class LRUPolicy<K> implements EvictionPolicy<K> {
private final LinkedHashMap<K, Long> order = new LinkedHashMap<>();
private long counter = 0;
public void onInsert(K k) { order.put(k, ++counter); }
public void onAccess(K k) { order.remove(k); order.put(k, ++counter); }
public void onRemove(K k) { order.remove(k); }
public K pickVictim() { return order.isEmpty() ? null : order.entrySet().iterator().next().getKey(); }
}
class LFUPolicy<K> implements EvictionPolicy<K> {
private final Map<K, Integer> freq = new HashMap<>();
public void onInsert(K k) { freq.put(k, 1); }
public void onAccess(K k) { freq.merge(k, 1, Integer::sum); }
public void onRemove(K k) { freq.remove(k); }
public K pickVictim() {
K best = null; int min = Integer.MAX_VALUE;
for (Map.Entry<K, Integer> e : freq.entrySet()) if (e.getValue() < min) { min = e.getValue(); best = e.getKey(); }
return best;
}
}
class TTLPolicy<K> implements EvictionPolicy<K> {
private final Map<K, Long> expiry = new HashMap<>();
private final long ttlMs;
public TTLPolicy(long ttlMs) { this.ttlMs = ttlMs; }
public void onInsert(K k) { expiry.put(k, System.currentTimeMillis() + ttlMs); }
public void onAccess(K k) {}
public void onRemove(K k) { expiry.remove(k); }
public K pickVictim() {
long now = System.currentTimeMillis();
for (Map.Entry<K, Long> e : expiry.entrySet()) if (e.getValue() <= now) return e.getKey();
return null; // nothing expired; caller must fall back
}
}

C++:

#include <chrono>
#include <memory>
#include <deque>
#include <mutex>
#include <unordered_map>
template <typename K>
struct EvictionPolicy {
virtual ~EvictionPolicy() = default;
virtual void onInsert(const K& k) = 0;
virtual void onAccess(const K& k) = 0;
virtual void onRemove(const K& k) = 0;
virtual bool pickVictim(K& out) = 0;
};
template <typename K, typename V>
class Cache {
public:
Cache(int cap, std::shared_ptr<EvictionPolicy<K>> p) : capacity(cap), policy(std::move(p)) {}
bool get(const K& key, V& out) {
std::lock_guard<std::mutex> g(mu);
auto it = data.find(key);
if (it == data.end()) { misses++; return false; }
hits++; policy->onAccess(key); out = it->second; return true;
}
void put(const K& key, const V& value) {
std::lock_guard<std::mutex> g(mu);
auto it = data.find(key);
if (it != data.end()) { it->second = value; policy->onAccess(key); return; }
if ((int)data.size() >= capacity) {
K victim;
if (!policy->pickVictim(victim)) return; // nothing evictable: refuse the insert
data.erase(victim); policy->onRemove(victim);
}
data[key] = value;
policy->onInsert(key);
}
int hits = 0, misses = 0;
private:
int capacity;
std::unordered_map<K, V> data;
std::shared_ptr<EvictionPolicy<K>> policy;
std::mutex mu;
};
template <typename K>
class FIFOPolicy : public EvictionPolicy<K> {
std::deque<K> q;
public:
void onInsert(const K& k) override { q.push_back(k); }
void onAccess(const K&) override {}
void onRemove(const K& k) override { for (auto it = q.begin(); it != q.end(); ++it) if (*it == k) { q.erase(it); break; } }
bool pickVictim(K& out) override { if (q.empty()) return false; out = q.front(); return true; }
};
template <typename K>
class LRUPolicy : public EvictionPolicy<K> {
std::unordered_map<K, long long> order;
long long counter = 0;
public:
void onInsert(const K& k) override { order[k] = ++counter; }
void onAccess(const K& k) override { order[k] = ++counter; }
void onRemove(const K& k) override { order.erase(k); }
bool pickVictim(K& out) override {
if (order.empty()) return false;
auto best = order.begin();
for (auto it = order.begin(); it != order.end(); ++it) if (it->second < best->second) best = it;
out = best->first; return true;
}
};

TypeScript:

interface EvictionPolicy<K> {
onInsert(key: K): void;
onAccess(key: K): void;
onRemove(key: K): void;
pickVictim(): K | null;
}
class Cache<K, V> {
private data = new Map<K, V>();
hits = 0; misses = 0;
constructor(private capacity: number, private policy: EvictionPolicy<K>) {}
get(key: K): V | undefined {
const v = this.data.get(key);
if (v === undefined) { this.misses += 1; return undefined; }
this.hits += 1;
this.policy.onAccess(key);
return v;
}
put(key: K, value: V): void {
if (this.data.has(key)) {
this.data.set(key, value);
this.policy.onAccess(key);
return;
}
if (this.data.size >= this.capacity) {
const victim = this.policy.pickVictim();
if (victim === null) return; // nothing evictable: refuse the insert
this.data.delete(victim);
this.policy.onRemove(victim);
}
this.data.set(key, value);
this.policy.onInsert(key);
}
}
class FIFOPolicy<K> implements EvictionPolicy<K> {
private q: K[] = [];
onInsert(k: K) { this.q.push(k); }
onAccess(_k: K) {}
onRemove(k: K) { const i = this.q.indexOf(k); if (i >= 0) this.q.splice(i, 1); }
pickVictim() { return this.q.length > 0 ? this.q[0] : null; } // peek, don't mutate — the cache calls onRemove after evicting
}
class LRUPolicy<K> implements EvictionPolicy<K> {
private order = new Map<K, number>();
private counter = 0;
onInsert(k: K) { this.order.set(k, ++this.counter); }
onAccess(k: K) { this.order.set(k, ++this.counter); }
onRemove(k: K) { this.order.delete(k); }
pickVictim() {
let minKey: K | null = null, min = Infinity;
for (const [k, t] of this.order) if (t < min) { min = t; minKey = k; }
return minKey;
}
}
class LFUPolicy<K> implements EvictionPolicy<K> {
private freq = new Map<K, number>();
onInsert(k: K) { this.freq.set(k, 1); }
onAccess(k: K) { this.freq.set(k, (this.freq.get(k) ?? 0) + 1); }
onRemove(k: K) { this.freq.delete(k); }
pickVictim() {
let minKey: K | null = null, min = Infinity;
for (const [k, f] of this.freq) if (f < min) { min = f; minKey = k; }
return minKey;
}
}
class TTLPolicy<K> implements EvictionPolicy<K> {
private expiry = new Map<K, number>();
constructor(private ttlMs: number) {}
onInsert(k: K) { this.expiry.set(k, Date.now() + this.ttlMs); }
onAccess(_k: K) {}
onRemove(k: K) { this.expiry.delete(k); }
pickVictim() {
const now = Date.now();
for (const [k, e] of this.expiry) if (e <= now) return k;
return null;
}
}
class CompositePolicy<K> implements EvictionPolicy<K> {
constructor(private primary: EvictionPolicy<K>, private fallback: EvictionPolicy<K>) {}
onInsert(k: K) { this.primary.onInsert(k); this.fallback.onInsert(k); }
onAccess(k: K) { this.primary.onAccess(k); this.fallback.onAccess(k); }
onRemove(k: K) { this.primary.onRemove(k); this.fallback.onRemove(k); }
pickVictim() { return this.primary.pickVictim() ?? this.fallback.pickVictim(); }
}

(Note: the map-based LRU/LFU above is O(n) on eviction. Production code uses the DLL-based LRU from problem 42; the intent here is to illustrate the policy boundary.)
Extensibility
1. "Composite TTL + LRU"
Use the `CompositePolicy` decorator. Primary is TTL — if anything has expired, evict it; otherwise fall back to LRU. The cache does not know there are two policies.
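A self-contained sketch of that composition, with an injected clock so expiry is deterministic (the constructor clock parameter is an assumption of this sketch; the document's `TTLPolicy` reads `Date.now()` directly):

```typescript
// TTL-primary / LRU-fallback composition: TTL picks expired keys,
// and when nothing is expired the LRU fallback supplies the victim.
interface EvictionPolicy<K> {
  onInsert(key: K): void;
  onAccess(key: K): void;
  onRemove(key: K): void;
  pickVictim(): K | null;
}

class LRUPolicy<K> implements EvictionPolicy<K> {
  private order = new Map<K, number>();
  private counter = 0;
  onInsert(k: K) { this.order.set(k, ++this.counter); }
  onAccess(k: K) { this.order.set(k, ++this.counter); }
  onRemove(k: K) { this.order.delete(k); }
  pickVictim(): K | null {
    let minKey: K | null = null, min = Infinity;
    for (const [k, t] of this.order) if (t < min) { min = t; minKey = k; }
    return minKey;
  }
}

class TTLPolicy<K> implements EvictionPolicy<K> {
  private expiry = new Map<K, number>();
  constructor(private ttlMs: number, private now: () => number) {}
  onInsert(k: K) { this.expiry.set(k, this.now() + this.ttlMs); }
  onAccess(_k: K) {}
  onRemove(k: K) { this.expiry.delete(k); }
  pickVictim(): K | null {
    for (const [k, e] of this.expiry) if (e <= this.now()) return k;
    return null; // nothing expired → defer to the fallback
  }
}

class CompositePolicy<K> implements EvictionPolicy<K> {
  constructor(private primary: EvictionPolicy<K>, private fallback: EvictionPolicy<K>) {}
  onInsert(k: K) { this.primary.onInsert(k); this.fallback.onInsert(k); }
  onAccess(k: K) { this.primary.onAccess(k); this.fallback.onAccess(k); }
  onRemove(k: K) { this.primary.onRemove(k); this.fallback.onRemove(k); }
  pickVictim(): K | null { return this.primary.pickVictim() ?? this.fallback.pickVictim(); }
}

let clock = 0;
const policy = new CompositePolicy<string>(new TTLPolicy<string>(100, () => clock), new LRUPolicy<string>());
policy.onInsert("a"); policy.onInsert("b");
policy.onAccess("a");                // a is now more recent than b
const victim1 = policy.pickVictim(); // nothing expired → LRU fallback picks b
clock = 150;                         // both entries are now expired
const victim2 = policy.pickVictim(); // TTL primary picks the first expired key, a
```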
2. "Size-weighted eviction"
A new `EvictionPolicy` that tracks bytes per key; `pickVictim` returns the largest entry. Useful for an image cache.
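A sketch of such a policy, assuming the `onInsert` notification is widened to carry the entry's byte size — so this is a variant of the `EvictionPolicy` interface, not a drop-in implementation (class name and sizes are illustrative):

```typescript
// Hypothetical size-weighted policy: track bytes per key and evict the
// largest entry first. The byte count is supplied by the caller at insert.
class SizeWeightedPolicy<K> {
  private bytes = new Map<K, number>();
  onInsert(k: K, size: number) { this.bytes.set(k, size); }
  onRemove(k: K) { this.bytes.delete(k); }
  pickVictim(): K | null {
    let maxKey: K | null = null, max = -1;
    for (const [k, b] of this.bytes) if (b > max) { max = b; maxKey = k; }
    return maxKey; // largest entry goes first
  }
}

const p = new SizeWeightedPolicy<string>();
p.onInsert("thumb.jpg", 40_000);
p.onInsert("hero.png", 900_000);
p.onInsert("icon.svg", 2_000);
const victim = p.pickVictim();
```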
3. "Admission policy"
A `BouncerPolicy` decorator that short-circuits `put` when the key's predicted frequency is too low. A TinyLFU sketch lives here. The cache interface is unchanged.
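A deliberately simplified stand-in for that gate — a hypothetical `AdmissionGate` that admits a key only once it has been seen a threshold number of times; real TinyLFU replaces the exact-count map with a count-min sketch:

```typescript
// Simplified admission gate: reject a key's first sightings so one-hit
// wonders never displace established entries. (TinyLFU-style, but using an
// exact Map instead of a count-min sketch for clarity.)
class AdmissionGate<K> {
  private seen = new Map<K, number>();
  constructor(private threshold: number) {}
  admit(key: K): boolean {
    const n = (this.seen.get(key) ?? 0) + 1;
    this.seen.set(key, n);
    return n >= this.threshold;
  }
}

const gate = new AdmissionGate<string>(2);
const first = gate.admit("x");  // first sighting → rejected
const second = gate.admit("x"); // second sighting → admitted
```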
What is Expected at Each Level
| Level | Expectations |
|---|---|
| Mid | Cache + one policy, O(n) eviction. |
| Senior / SMTS | Four policies, Decorator for composition, cache is ignorant of policy details, thread-safety discussion. |
| Staff | TinyLFU admission, shard-per-lock throughput, observability, memory accounting. |