Frequently Asked at Google — This problem has been reported multiple times in recent Google L4 interviews.
LRU Cache (LC 146)
Problem Statement
Design a data structure that supports these operations in O(1) average time:
- get(key) — return the value if present, else -1. Marks the key as most recently used.
- put(key, value) — insert or update. If capacity is exceeded, evict the least recently used key.
Example:
LRUCache cache = new LRUCache(2);
cache.put(1, 1); // cache: {1=1}
cache.put(2, 2); // cache: {1=1, 2=2}
cache.get(1); // returns 1, cache: {2=2, 1=1}
cache.put(3, 3); // evicts key 2, cache: {1=1, 3=3}
cache.get(2); // returns -1
cache.put(4, 4); // evicts key 1, cache: {3=3, 4=4}
cache.get(1); // returns -1
cache.get(3); // returns 3
cache.get(4); // returns 4

Constraints:
- 1 <= capacity <= 3000
- 0 <= key, value <= 10^4
- Up to 2 * 10^5 calls mixed across get and put.
Difficulty: Medium
Pattern Recognition
- Input structure: sequence of operations on a key-value store with a capacity cap.
- What are we optimizing? Every operation must be O(1).
- Pattern: Hash map for O(1) lookup + doubly linked list for O(1) reorder and eviction.
- Why: A hash map alone gets you O(1) lookup but not ordering. A linked list gives ordering but not O(1) lookup. Combine them: the map points from key to a list node, the list maintains recency.
Google asks this to test whether you can design a composite data structure and reason about pointer bookkeeping under time pressure.
Approach
Brute Force
Use an array or linked list keyed by insertion order. get scans in O(n), put shifts elements. Unacceptable at the stated op volume (2 * 10^5).
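To make the baseline concrete, here is a minimal sketch of the brute-force approach (class and field names are illustrative, not from the original):

```typescript
// Brute force: keep entries in an array ordered from least to most
// recently used. Both operations scan the array, so each is O(n).
class NaiveLRUCache {
  private entries: Array<{ key: number; value: number }> = [];
  constructor(private capacity: number) {}

  get(key: number): number {
    const i = this.entries.findIndex((e) => e.key === key); // O(n) scan
    if (i === -1) return -1;
    const [entry] = this.entries.splice(i, 1); // O(n) removal
    this.entries.push(entry); // most recent goes to the back
    return entry.value;
  }

  put(key: number, value: number): void {
    const i = this.entries.findIndex((e) => e.key === key);
    if (i !== -1) this.entries.splice(i, 1);
    this.entries.push({ key, value });
    if (this.entries.length > this.capacity) this.entries.shift(); // evict front (LRU)
  }
}

const naive = new NaiveLRUCache(2);
naive.put(1, 1);
naive.put(2, 2);
naive.get(1);    // touches key 1, so key 2 is now least recently used
naive.put(3, 3); // evicts key 2
const missing = naive.get(2); // -1
const kept = naive.get(1);    // 1
```

Correct, but each scan and splice is linear, which is what the hash map + doubly linked list design eliminates.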
Optimal — Hash Map + Doubly Linked List
Key Insight: A doubly linked list lets you remove any node in O(1) if you already have a pointer to it. The hash map hands you that pointer.
- Maintain dummy head and tail nodes. The node right after head is the most recently used; the node right before tail is the least recently used.
- get(key):
  - Miss → return -1.
  - Hit → unlink the node from its current position, relink it right after head, return its value.
- put(key, value):
  - If the key exists, update its value and move the node to the front (same as get).
  - Else create a new node, link it after head, and store it in the map.
  - If size exceeds capacity, remove the node before tail and delete it from the map.
Time: O(1) per operation. Space: O(capacity).
You can also use an ordered map (Java's LinkedHashMap, or in C++ a std::list of key-value pairs plus an unordered_map from key to list iterator), but implementing the doubly linked list manually is what interviewers grade at Google.
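The same shortcut exists in TypeScript, because a built-in Map iterates in insertion order. A hedged sketch of that approach (fine as a warm-up; not the hand-rolled list described above):

```typescript
// Map preserves insertion order, so re-inserting a key refreshes its
// recency, and the first key in iteration order is the least recently used.
class OrderedMapLRU {
  private map = new Map<number, number>();
  constructor(private capacity: number) {}

  get(key: number): number {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key)!;
    this.map.delete(key);     // remove and re-insert to mark the key
    this.map.set(key, value); // as most recently used
    return value;
  }

  put(key: number, value: number): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // First key in iteration order is the oldest insertion → evict it.
      const lruKey = this.map.keys().next().value!;
      this.map.delete(lruKey);
    }
  }
}

const c = new OrderedMapLRU(2);
c.put(1, 1);
c.put(2, 2);
c.get(1);    // refresh key 1
c.put(3, 3); // evicts key 2
const evicted = c.get(2); // -1
const front = c.get(1);   // 1
```

Every operation here is amortized O(1) because Map lookup, delete, and set are hash operations.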
Walkthrough
Capacity = 2. Ops: put(1,1); put(2,2); get(1); put(3,3); get(2).
| Op | Map | List (head ↔ tail) | Notes |
|---|---|---|---|
| put(1,1) | {1→n1} | H ↔ n1 ↔ T | insert |
| put(2,2) | {1→n1, 2→n2} | H ↔ n2 ↔ n1 ↔ T | newest at front |
| get(1) | same | H ↔ n1 ↔ n2 ↔ T | move n1 to front, return 1 |
| put(3,3) | {1→n1, 3→n3} | H ↔ n3 ↔ n1 ↔ T | size 3 > 2 → evict n2 (tail.prev) |
| get(2) | same | same | return -1 |
Note how get(1) changed the eviction target: it was next in line, but touching it saved it.
Implementation
class LRUCache {
  private capacity: number;
  private map = new Map<number, DListNode>();
  private head: DListNode; // dummy
  private tail: DListNode; // dummy

  constructor(capacity: number) {
    this.capacity = capacity;
    this.head = new DListNode(0, 0);
    this.tail = new DListNode(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  get(key: number): number {
    const node = this.map.get(key);
    if (!node) return -1;
    this.moveToFront(node);
    return node.value;
  }

  put(key: number, value: number): void {
    const existing = this.map.get(key);
    if (existing) {
      existing.value = value;
      this.moveToFront(existing);
      return;
    }
    const node = new DListNode(key, value);
    this.map.set(key, node);
    this.addToFront(node);
    if (this.map.size > this.capacity) {
      const lru = this.tail.prev!;
      this.remove(lru);
      this.map.delete(lru.key);
    }
  }

  private addToFront(node: DListNode) {
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next!.prev = node;
    this.head.next = node;
  }

  private remove(node: DListNode) {
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
  }

  private moveToFront(node: DListNode) {
    this.remove(node);
    this.addToFront(node);
  }
}

class DListNode {
  key: number;
  value: number;
  prev: DListNode | null = null;
  next: DListNode | null = null;

  constructor(key: number, value: number) {
    this.key = key;
    this.value = value;
  }
}

The same structure in C++:

class LRUCache {
  struct Node {
    int key, value;
    Node* prev;
    Node* next;
    Node(int k, int v) : key(k), value(v), prev(nullptr), next(nullptr) {}
  };

  int capacity;
  unordered_map<int, Node*> cache;
  Node* head; // dummy
  Node* tail; // dummy

  void addToFront(Node* node) {
    node->prev = head;
    node->next = head->next;
    head->next->prev = node;
    head->next = node;
  }

  void removeNode(Node* node) {
    node->prev->next = node->next;
    node->next->prev = node->prev;
  }

  void moveToFront(Node* node) { removeNode(node); addToFront(node); }

public:
  LRUCache(int cap) : capacity(cap) {
    head = new Node(0, 0);
    tail = new Node(0, 0);
    head->next = tail;
    tail->prev = head;
  }

  int get(int key) {
    auto it = cache.find(key);
    if (it == cache.end()) return -1;
    moveToFront(it->second);
    return it->second->value;
  }

  void put(int key, int value) {
    auto it = cache.find(key);
    if (it != cache.end()) {
      it->second->value = value;
      moveToFront(it->second);
      return;
    }
    Node* node = new Node(key, value);
    cache[key] = node;
    addToFront(node);
    if ((int)cache.size() > capacity) {
      Node* lru = tail->prev;
      removeNode(lru);
      cache.erase(lru->key);
      delete lru;
    }
  }
};

Watch out:
- Forgetting to update the hash map when you evict leads to O(n) growth and memory leaks.
- Using a singly linked list forces O(n) removal because you need the predecessor; always use a doubly linked list.
- When put updates an existing key, you must also move it to the front; forgetting this makes eviction wrong.
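That last pitfall is easy to demonstrate. A minimal check, using the Map-insertion-order sketch for brevity (names are illustrative): after updating key 1, the correct eviction target is key 2, not key 1.

```typescript
// Updating an existing key must also refresh its recency.
const capacity = 2;
const lru = new Map<number, number>();

function put(key: number, value: number): void {
  if (lru.has(key)) lru.delete(key); // refresh recency on update — the
                                     // step that is easy to forget
  lru.set(key, value);
  if (lru.size > capacity) lru.delete(lru.keys().next().value!);
}

put(1, 1);
put(2, 2);
put(1, 10); // key 1 is now most recently used
put(3, 3);  // correct eviction target is key 2, not key 1
const survivor = lru.get(1) ?? -1; // 10
const evicted = lru.get(2) ?? -1;  // -1
```

Drop the `lru.delete(key)` line inside the update path and key 1 would be evicted instead, silently corrupting the cache's contract.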
Complexity Analysis
| | Time per op | Space |
|---|---|---|
| Brute Force (list) | O(n) | O(capacity) |
| Hash Map + DLL | O(1) | O(capacity) |
Bottleneck: the linked-list reorder is the only non-trivial constant; everything else is a hash operation.
Edge Cases
- Capacity 1 — every put evicts the previous key (if different). Your logic must not assume at least two slots.
- Repeated put with the same key — update the value; do not double-insert or evict.
- get on a missing key — must return -1 and must not disturb ordering.
- Capacity equal to total ops — no evictions ever; ensure your eviction branch is guarded by the size check.
- Evicting the only node — dummy sentinels guarantee head.next and tail.prev are always valid pointers.
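The capacity-1 and sentinel cases can be exercised together. A compact re-statement of the doubly-linked-list cache (condensed so this snippet is self-contained; names differ from the implementation above):

```typescript
// Capacity-1 stress of the sentinel-based design: with dummy head/tail
// nodes, evicting the only real node never dereferences null.
class Node {
  prev: Node | null = null;
  next: Node | null = null;
  constructor(public key: number, public value: number) {}
}

class LRU {
  private map = new Map<number, Node>();
  private head = new Node(0, 0); // dummy sentinel
  private tail = new Node(0, 0); // dummy sentinel
  constructor(private capacity: number) {
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }
  private unlink(n: Node) {
    n.prev!.next = n.next;
    n.next!.prev = n.prev;
  }
  private pushFront(n: Node) {
    n.prev = this.head;
    n.next = this.head.next;
    this.head.next!.prev = n;
    this.head.next = n;
  }
  get(key: number): number {
    const n = this.map.get(key);
    if (!n) return -1;
    this.unlink(n);
    this.pushFront(n);
    return n.value;
  }
  put(key: number, value: number): void {
    const existing = this.map.get(key);
    if (existing) {
      existing.value = value;
      this.unlink(existing);
      this.pushFront(existing);
      return;
    }
    const n = new Node(key, value);
    this.map.set(key, n);
    this.pushFront(n);
    if (this.map.size > this.capacity) {
      const lruNode = this.tail.prev!; // sentinel guarantees this exists
      this.unlink(lruNode);
      this.map.delete(lruNode.key);
    }
  }
}

const one = new LRU(1);
one.put(1, 1);
one.put(2, 2);              // evicts key 1, the only real node
const gone = one.get(1);    // -1
const there = one.get(2);   // 2
one.put(2, 20);             // same key: update in place, no eviction
const updated = one.get(2); // 20
```

Without sentinels, the eviction branch would need null checks for the empty and single-node cases; with them, the same four pointer writes work everywhere.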
Related Problems
- LC 460 — LFU Cache — Evict by frequency, break ties by recency. Same skeleton with a frequency bucket map.
- LC 432 — All O`one Data Structure — Generalized count tracking with O(1) get-min/max. Similar double-map pattern.
- LC 588 — Design In-Memory File System — Tests composite data structure design under different constraints.
- LC 1756 — Design Most Recently Used Queue — Simpler version: track recency only, no eviction.
What Is Expected at Each Level
Junior: Use the language's built-in ordered map (LinkedHashMap / OrderedDict), get a correct O(1) solution, explain the data-structure pick.
Mid-level: Implement the doubly linked list by hand, use sentinel nodes, handle the update-existing-key path without bugs.
Senior (L4 target): Write the implementation from scratch, discuss tradeoffs (sentinels vs null checks, thread safety, generic types), sketch how you would extend to LFU or TTL eviction, and discuss production concerns like lock contention and shard-based LRU.
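One way to sketch the TTL extension mentioned above: stamp each entry with an expiry time and treat expired hits as misses (lazy expiration). This is a hedged sketch, not a production design; the class, the injected clock, and all names are illustrative.

```typescript
// TTL layered on LRU: entries carry an expiry timestamp; reads past the
// deadline behave as misses and drop the entry (lazy expiration).
class TTLCache {
  private map = new Map<number, { value: number; expiresAt: number }>();
  constructor(private capacity: number, private ttlMs: number,
              private now: () => number = Date.now) {}

  get(key: number): number {
    const e = this.map.get(key);
    if (!e) return -1;
    if (e.expiresAt <= this.now()) { // expired: treat as a miss
      this.map.delete(key);
      return -1;
    }
    this.map.delete(key); // refresh recency via Map insertion order
    this.map.set(key, e);
    return e.value;
  }

  put(key: number, value: number): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, { value, expiresAt: this.now() + this.ttlMs });
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value!); // evict LRU
    }
  }
}

// Injecting a fake clock makes the behavior deterministic for testing.
let t = 0;
const ttl = new TTLCache(2, 100, () => t);
ttl.put(1, 1);
t = 50;
const fresh = ttl.get(1);   // 1: still within TTL
t = 200;
const expired = ttl.get(1); // -1: past expiry, lazily dropped
```

In a production cache you would also discuss eager expiration (a background sweep or expiry heap), since lazy expiration alone can hold dead entries until they are touched or evicted.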