Frequently Asked at Uber — This problem has been reported multiple times in recent Uber L5A interviews.
LRU Cache ​
Problem Statement ​
Design a Least Recently Used cache with fixed capacity supporting two operations in O(1):
- get(key) — return the value if present, else -1. Accessing a key marks it as most recently used.
- put(key, value) — insert or update. If inserting would exceed capacity, evict the least recently used key first.
Example:
LRUCache cache = new LRUCache(2);
cache.put(1, 1); // cache = {1=1}
cache.put(2, 2); // cache = {1=1, 2=2}
cache.get(1); // returns 1, cache = {2=2, 1=1} (1 is most recent)
cache.put(3, 3); // evicts key 2, cache = {1=1, 3=3}
cache.get(2); // returns -1
cache.put(4, 4); // evicts key 1, cache = {3=3, 4=4}
cache.get(1); // returns -1
cache.get(3); // returns 3
cache.get(4); // returns 4

Constraints:
- 1 <= capacity <= 3000
- Up to 10^5 operations.
Difficulty: Medium
Pattern Recognition ​
- What is the structure of the problem? A data-structure design question that requires two orthogonal capabilities in O(1).
- What are we optimizing? Constant-time lookup AND constant-time "move to most recent" / "evict least recent."
- What pattern does this suggest? Hash map + doubly linked list. The map gives O(1) lookup; the DLL gives O(1) reordering.
- Why does this pattern fit? A plain hash map knows keys but not their recency order. A plain DLL knows order but not how to jump to a key. Combine them: the map stores key -> node pointer; the node lives in the DLL.
Approach ​
Brute Force — Array with timestamps ​
Store (key, value, lastAccessTime). get scans for the key and updates timestamp. put scans for the LRU entry to evict.
Time: O(N) per op. Space: O(capacity). Why insufficient: The problem explicitly demands O(1) per op.
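To make the brute force concrete, here is a minimal TypeScript sketch (class and field names are illustrative) that tracks recency with a logical clock:

```typescript
// Brute-force LRU: a flat array scanned on every operation, O(N) per op.
interface Entry {
  key: number;
  value: number;
  lastUsed: number; // logical timestamp of the most recent access
}

class BruteForceLRU {
  private entries: Entry[] = [];
  private clock = 0; // bumped on every access so recency is comparable

  constructor(private capacity: number) {}

  get(key: number): number {
    const e = this.entries.find(x => x.key === key); // O(N) scan
    if (!e) return -1;
    e.lastUsed = ++this.clock;
    return e.value;
  }

  put(key: number, value: number): void {
    const e = this.entries.find(x => x.key === key); // O(N) scan
    if (e) {
      e.value = value;
      e.lastUsed = ++this.clock;
      return;
    }
    if (this.entries.length === this.capacity) {
      // O(N) scan for the smallest timestamp, i.e. the LRU entry.
      let lru = 0;
      for (let i = 1; i < this.entries.length; i++) {
        if (this.entries[i].lastUsed < this.entries[lru].lastUsed) lru = i;
      }
      this.entries.splice(lru, 1);
    }
    this.entries.push({ key, value, lastUsed: ++this.clock });
  }
}
```

Both operations are linear in the number of cached entries, which is exactly what the O(1) requirement rules out.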
Optimal — Hash Map + Doubly Linked List ​
Key Insight: Put the nodes in a doubly linked list so you can unlink any node in O(1) given a pointer. Store key -> node in a hash map so you can find that pointer in O(1). Keep most-recent at head, least-recent at tail, guarded by two sentinels so you never special-case null prev/next.
Step-by-step:
- Create dummy head and tail nodes with head.next = tail and tail.prev = head.
- get(key):
  - If the map has the key, unlink that node and re-insert at head (move-to-front). Return its value.
  - Else return -1.
- put(key, value):
  - If the key exists, update the value and move the node to head.
  - Else create a new node, insert at head, and store it in the map. If size exceeds capacity, evict tail.prev (the actual LRU) and delete its map entry.
Time: O(1) per op. Space: O(capacity).
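One TypeScript-specific aside: the built-in Map is guaranteed to iterate keys in insertion order, and its delete/set operations are effectively O(1), so a compact LRU falls out of re-inserting a key on every access; the first key in iteration order is then always the least recently used. A sketch under those assumptions (class name illustrative; an interviewer will usually still want the explicit map + DLL design):

```typescript
// Compact LRU built on Map's guaranteed insertion order (ES2015+).
// Re-inserting a key moves it to the "most recent" end of the iteration order.
class MapLRU {
  private map = new Map<number, number>();
  constructor(private capacity: number) {}

  get(key: number): number {
    const value = this.map.get(key);
    if (value === undefined) return -1;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  put(key: number, value: number): void {
    if (this.map.has(key)) this.map.delete(key); // refresh recency on update
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      const lruKey = this.map.keys().next().value as number; // first = oldest
      this.map.delete(lruKey);
    }
  }
}
```

This leans on the runtime's ordered hash map to do the DLL bookkeeping for you; the hand-rolled implementation later in this article shows the same mechanics explicitly.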
Walkthrough ​
Capacity 2. Run: put(1,1), put(2,2), get(1), put(3,3), get(2).
| Op | List (head to tail) | Map keys | Return |
|---|---|---|---|
| put(1,1) | H <-> 1 <-> T | {1} | — |
| put(2,2) | H <-> 2 <-> 1 <-> T | {1, 2} | — |
| get(1) | H <-> 1 <-> 2 <-> T | {1, 2} | 1 |
| put(3,3) | H <-> 3 <-> 1 <-> T | {1, 3} (2 evicted since tail.prev was 2) | — |
| get(2) | H <-> 3 <-> 1 <-> T | {1, 3} | -1 |
Eviction correctness check: after get(1) and put(3,3), the node at tail.prev was 2 — the least recently used. Evicting it matches LRU semantics.
Implementation ​
class DLLNode {
constructor(
public key: number,
public value: number,
public prev: DLLNode | null = null,
public next: DLLNode | null = null
) {}
}
class LRUCache {
private map = new Map<number, DLLNode>();
private head = new DLLNode(-1, -1);
private tail = new DLLNode(-1, -1);
constructor(private capacity: number) {
this.head.next = this.tail;
this.tail.prev = this.head;
}
get(key: number): number {
const node = this.map.get(key);
if (!node) return -1;
this.remove(node);
this.addToHead(node);
return node.value;
}
put(key: number, value: number): void {
const existing = this.map.get(key);
if (existing) {
existing.value = value;
this.remove(existing);
this.addToHead(existing);
return;
}
if (this.map.size === this.capacity) {
const lru = this.tail.prev!;
this.remove(lru);
this.map.delete(lru.key);
}
const node = new DLLNode(key, value);
this.addToHead(node);
this.map.set(key, node);
}
private addToHead(node: DLLNode) {
node.prev = this.head;
node.next = this.head.next;
this.head.next!.prev = node;
this.head.next = node;
}
private remove(node: DLLNode) {
node.prev!.next = node.next;
node.next!.prev = node.prev;
}
}

#include <unordered_map>
using namespace std;

class LRUCache {
struct Node {
int key, value;
Node *prev, *next;
Node(int k, int v) : key(k), value(v), prev(nullptr), next(nullptr) {}
};
int capacity;
unordered_map<int, Node*> map;
Node *head, *tail;
void addToHead(Node* node) {
node->prev = head;
node->next = head->next;
head->next->prev = node;
head->next = node;
}
void remove(Node* node) {
node->prev->next = node->next;
node->next->prev = node->prev;
}
public:
LRUCache(int cap) : capacity(cap) {
head = new Node(-1, -1);
tail = new Node(-1, -1);
head->next = tail;
tail->prev = head;
}
int get(int key) {
if (!map.count(key)) return -1;
Node* node = map[key];
remove(node);
addToHead(node);
return node->value;
}
void put(int key, int value) {
if (map.count(key)) {
map[key]->value = value;
remove(map[key]);
addToHead(map[key]);
return;
}
if ((int)map.size() == capacity) {
Node* lru = tail->prev;
remove(lru);
map.erase(lru->key);
delete lru;
}
Node* node = new Node(key, value);
addToHead(node);
map[key] = node;
}
};

Watch out: Forgetting to delete the evicted key from the map. The map keeps a stale pointer to the unlinked node, and future get calls for the evicted key will return garbage instead of -1.
Watch out: Skipping the "move to head on update" step inside put when the key already exists. Correctness depends on updates also counting as a recent access.
Watch out: Not using sentinels. Every addToHead and remove then needs null-checks — easy to get wrong under interview pressure.
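The second pitfall (updates must count as accesses) is easy to demonstrate. The sketch below is a stripped-down, Map-based LRU, illustrative only and not the DLL implementation above, with a flag that toggles whether put refreshes recency when the key already exists:

```typescript
// Stripped-down Map-based LRU used only to illustrate the pitfall.
// If put does not refresh recency on update, eviction picks the wrong victim.
class TinyLRU {
  private map = new Map<number, number>();
  constructor(private capacity: number, private refreshOnUpdate: boolean) {}

  put(key: number, value: number): void {
    if (this.map.has(key)) {
      if (!this.refreshOnUpdate) {
        this.map.set(key, value); // buggy: recency (insertion order) stays stale
        return;
      }
      this.map.delete(key); // correct: re-insert so the key becomes most recent
    }
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      const lruKey = this.map.keys().next().value as number; // oldest key
      this.map.delete(lruKey);
    }
  }

  has(key: number): boolean {
    return this.map.has(key);
  }
}

const correct = new TinyLRU(2, true);
const buggy = new TinyLRU(2, false);
for (const cache of [correct, buggy]) {
  cache.put(1, 1);
  cache.put(2, 2);
  cache.put(1, 99); // update key 1: this should make it most recent
  cache.put(3, 3);  // forces an eviction
}
// The correct cache evicts 2; the buggy one evicts the just-updated key 1.
```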
Complexity Analysis ​
| Approach | Time | Space |
|---|---|---|
| Brute Force (array scan) | O(N) per op | O(capacity) |
| Optimal (map + DLL) | O(1) per op | O(capacity) |
Bottleneck: there isn't one; all DLL pointer surgery is O(1) and the map's operations are amortized O(1), so per-call work is genuinely constant.
Edge Cases ​
- Capacity 1 — Every put of a new key evicts the previous one. Sentinels keep this clean.
- Repeated put of the same key — Update the value and bump to head; don't grow the size.
- get on a missing key — Return -1 without touching the list.
- Fill then refill with the same keys — No eviction, just repeated moves to head.
- put that grows past capacity via a new key — Evict tail.prev, then insert.
Related Problems ​
- LC 460 — LFU Cache: Least Frequently Used. Adds a frequency dimension — harder: you need a map-of-doubly-linked-lists keyed by frequency.
- LC 432 — All O(1) Data Structure: Increment/decrement counters with O(1) min/max access. Same bucket-of-DLL idea.
- LC 1756 — Design Most Recently Used Queue: Simplified variant focused on the DLL-move-to-front trick.
What Is Expected at Each Level? ​
Junior / New Grad: State that a map alone or a list alone does not meet both O(1) requirements. With a nudge, combine them. May need help sketching the DLL invariants.
Mid-level: Describe the full map + DLL design up front, implement with sentinels, explain why each op is O(1). Handle update-existing and evict-and-insert cleanly.
Senior: Discuss thread-safety (per-bucket locks vs a single mutex), compare to LFU's bucket-of-DLLs architecture, talk about cache stampede / TTL extensions, and propose approximate LRU (like Redis) when strict O(1) is unnecessary.