30 — LLD: Hit Counter
Frequently Asked at Uber — Reported repeatedly in recent L5A R3 (Coding-Depth) rounds.
Understanding the Problem
A hit counter records events over time and answers "how many hits happened in the last 5 minutes?" It is the primitive underlying rate limiters, throughput dashboards, and abuse detection. The core tension is speed versus memory: you want O(1) hits at high QPS while keeping the window bounded.
What is a Hit Counter?
Think of it as a moving window that remembers the last 300 seconds of activity. Every hit(t) registers an event; every getHits(t) returns the count within [t-299, t]. Older hits age out automatically as time marches forward.
Requirements
Clarifying Questions
You: So timestamps are passed in by the caller. Are they monotonic, or can a late hit arrive with an older timestamp?
Interviewer: Monotonic for now — treat it like System.currentTimeMillis().
Good — you do not need to handle out-of-order arrivals in the core version. You will mention it as a follow-up.
You: Granularity is one second. Do you need per-hit timestamps preserved, or is bucketing fine?
Interviewer: Bucketed per second is fine. Memory should be bounded.
That unlocks the ring-buffer trick. You get constant space for the window.
You: What is the expected QPS? Will hit() be contended?
Interviewer: Assume up to 100k hits per second at peak on a single host. The multi-threaded variant is the interesting one.
So a synchronized method on the whole object will be the wrong call at that scale — you need striped atomics.
You: For getHits(t), do you include the current second or only strictly past?
Interviewer: Include the current second — the window is [t-299, t].
One tiny off-by-one clarification that prevents a bug 20 minutes later.
Final Requirements
- hit(timestamp: int) — register a hit at the given second.
- getHits(timestamp: int) — return hits in [timestamp - 299, timestamp].
- Bounded memory — do not grow with total hits.
- Thread-safe variant supporting ~100k concurrent hit calls per second.
Out of scope: Out-of-order arrivals, sub-second resolution, cross-host aggregation, persistence.
Deferred tradeoff: A tiny read-side inconsistency in the concurrent variant (a handful of hits counted on the boundary of a bucket reset) is acceptable; exact correctness would require per-bucket locks.
Core Entities and Relationships
The interesting decision here is not to introduce a Hit class. A hit is a fleeting event. What matters is the aggregated count per second.
| Entity | Responsibility |
|---|---|
| HitCounter | The facade. Owns the ring buffer and exposes hit/getHits. |
| Bucket (conceptual) | (timestamp, count) pair at index t % 300. The latest timestamp wins; stale buckets are reset on touch. |
You deliberately collapse the bucket into two parallel arrays (times[] and counts[]) rather than a Bucket class — it is tighter, cache-friendly, and easier to reason about for atomics later.
Design is iterative. If ordering becomes non-monotonic, you will promote buckets to a sorted map.
Class Design
HitCounter State
- times[300] — last-written timestamp per slot.
- counts[300] — hit count for that timestamp.
Why two arrays instead of a Bucket[]? When the concurrent variant rewrites this, both times and counts will be wrapped in atomic types. Keeping them parallel means each field can use the right primitive — AtomicInteger for the timestamp, LongAdder for the count.
HitCounter
```java
public class HitCounter {
    private static final int WINDOW = 300;
    private final int[] times = new int[WINDOW];
    private final int[] counts = new int[WINDOW];

    public void hit(int timestamp) { /* ... */ }
    public int getHits(int timestamp) { /* ... */ }
}
```

```cpp
class HitCounter {
    static constexpr int WINDOW = 300;
    std::array<int, WINDOW> times{};
    std::array<int, WINDOW> counts{};
public:
    void hit(int timestamp);
    int getHits(int timestamp);
};
```

```typescript
export class HitCounter {
  private static readonly WINDOW = 300;
  private readonly times: number[] = new Array(HitCounter.WINDOW).fill(0);
  private readonly counts: number[] = new Array(HitCounter.WINDOW).fill(0);

  hit(timestamp: number): void { /* ... */ }
  getHits(timestamp: number): number { /* ... */ }
}
```

Design principle at play: Information Expert — the counter holds everything needed to answer its own question. No auxiliary state, no external coordination.
Implementation
Core Logic: hit(t)
- Compute idx = t % 300.
- If times[idx] != t, the slot is stale — reset: times[idx] = t; counts[idx] = 1.
- Else increment counts[idx].
Edge cases: First-ever hit (slot starts at 0, 0 != t triggers reset). Same-second hits (fast-path increment). Wraparound after 300 seconds (old bucket overwritten — by design).
Core Logic: getHits(t)
Sum counts[i] for every i where t - times[i] < 300. Linear scan over 300 buckets — fast, cache-hot.
Implementations
```java
public void hit(int timestamp) {
    int idx = timestamp % WINDOW;
    if (times[idx] != timestamp) {
        times[idx] = timestamp;
        counts[idx] = 1;
    } else {
        counts[idx]++;
    }
}

public int getHits(int timestamp) {
    int total = 0;
    for (int i = 0; i < WINDOW; i++) {
        if (timestamp - times[i] < WINDOW) {
            total += counts[i];
        }
    }
    return total;
}
```

```cpp
void HitCounter::hit(int timestamp) {
    int idx = timestamp % WINDOW;
    if (times[idx] != timestamp) {
        times[idx] = timestamp;
        counts[idx] = 1;
    } else {
        counts[idx]++;
    }
}

int HitCounter::getHits(int timestamp) {
    int total = 0;
    for (int i = 0; i < WINDOW; ++i) {
        if (timestamp - times[i] < WINDOW) total += counts[i];
    }
    return total;
}
```

```typescript
hit(timestamp: number): void {
  const idx = timestamp % HitCounter.WINDOW;
  if (this.times[idx] !== timestamp) {
    this.times[idx] = timestamp;
    this.counts[idx] = 1;
  } else {
    this.counts[idx]++;
  }
}

getHits(timestamp: number): number {
  let total = 0;
  for (let i = 0; i < HitCounter.WINDOW; i++) {
    if (timestamp - this.times[i] < HitCounter.WINDOW) {
      total += this.counts[i];
    }
  }
  return total;
}
```

Concurrency: From Naive to Striped
Bad Solution: synchronized on both methods. Single lock serializes all hits across the host. At 100k QPS, the lock becomes the bottleneck — this is the wrong tradeoff.
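For contrast, the naive version is just the single-threaded class with both methods marked synchronized — a minimal sketch (the class name SynchronizedHitCounter is illustrative):

```java
// Naive thread-safe variant: one monitor lock guards everything.
// Correct, but every hit() and getHits() serializes on `this`,
// so at ~100k QPS the lock itself becomes the bottleneck.
public class SynchronizedHitCounter {
    private static final int WINDOW = 300;
    private final int[] times = new int[WINDOW];
    private final int[] counts = new int[WINDOW];

    public synchronized void hit(int timestamp) {
        int idx = timestamp % WINDOW;
        if (times[idx] != timestamp) {
            times[idx] = timestamp;   // stale slot: reclaim it
            counts[idx] = 1;
        } else {
            counts[idx]++;            // same-second fast path
        }
    }

    public synchronized int getHits(int timestamp) {
        int total = 0;
        for (int i = 0; i < WINDOW; i++) {
            if (timestamp - times[i] < WINDOW) total += counts[i];
        }
        return total;
    }
}
```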
Good Solution: One ReentrantLock per bucket. 300 locks = 300-way stripe. Correct, but allocates 300 lock objects and costs an acquire/release per hit.
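A sketch of the striped-lock variant, keeping the same two-array state (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Striped-lock variant: one lock per ring slot, so hits landing in
// different seconds never contend with each other.
public class LockStripedHitCounter {
    private static final int WINDOW = 300;
    private final int[] times = new int[WINDOW];
    private final int[] counts = new int[WINDOW];
    private final ReentrantLock[] locks = new ReentrantLock[WINDOW];

    public LockStripedHitCounter() {
        for (int i = 0; i < WINDOW; i++) locks[i] = new ReentrantLock();
    }

    public void hit(int timestamp) {
        int idx = timestamp % WINDOW;
        locks[idx].lock();
        try {
            if (times[idx] != timestamp) {
                times[idx] = timestamp;
                counts[idx] = 1;
            } else {
                counts[idx]++;
            }
        } finally {
            locks[idx].unlock();
        }
    }

    public int getHits(int timestamp) {
        int total = 0;
        for (int i = 0; i < WINDOW; i++) {
            locks[i].lock();           // per-slot read consistency
            try {
                if (timestamp - times[i] < WINDOW) total += counts[i];
            } finally {
                locks[i].unlock();
            }
        }
        return total;
    }
}
```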
Great Solution: AtomicInteger for the timestamp + LongAdder for the count, per slot. hit() compare-and-sets the timestamp if the bucket is stale, then increments the LongAdder. Contention is automatically striped across 300 buckets and further striped internally by LongAdder.
```java
public class ConcurrentHitCounter {
    private static final int WINDOW = 300;
    private final AtomicInteger[] times = new AtomicInteger[WINDOW];
    private final LongAdder[] counts = new LongAdder[WINDOW];

    public ConcurrentHitCounter() {
        for (int i = 0; i < WINDOW; i++) {
            times[i] = new AtomicInteger(0);
            counts[i] = new LongAdder();
        }
    }

    public void hit(int timestamp) {
        int idx = timestamp % WINDOW;
        int prev = times[idx].get();
        if (prev != timestamp) {
            if (times[idx].compareAndSet(prev, timestamp)) {
                counts[idx].reset();
            }
        }
        counts[idx].increment();
    }

    public long getHits(int timestamp) {
        long total = 0;
        for (int i = 0; i < WINDOW; i++) {
            if (timestamp - times[i].get() < WINDOW) total += counts[i].sum();
        }
        return total;
    }
}
```

There is a subtle race: between the CAS that flips the timestamp and the reset(), another writer can increment. In practice the error is at most a handful of hits — a tolerable loss for metrics. Call this out explicitly. If exact correctness matters, switch to a per-bucket ReentrantLock.
Verification
Seed: hit(1), hit(2), hit(3), hit(300), hit(301), getHits(301).
| Step | idx | buckets (time, count) | Returns |
|---|---|---|---|
| hit(1) | 1 | times[1]=1, counts[1]=1 | - |
| hit(2) | 2 | times[2]=2, counts[2]=1 | - |
| hit(3) | 3 | times[3]=3, counts[3]=1 | - |
| hit(300) | 0 | times[0]=300, counts[0]=1 | - |
| hit(301) | 1 | overwrite: times[1]=301, counts[1]=1 | - |
| getHits(301) | - | sum buckets where 301 - t < 300 | 4 |
The eligible buckets are t=2 (diff 299), t=3 (298), t=300 (1), t=301 (0). The t=1 bucket was overwritten. Total = 4.
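The trace above can be checked mechanically. A minimal harness that inlines the single-threaded implementation from this chapter and replays the seed:

```java
// Minimal harness reproducing the verification trace above.
public class HitCounterTrace {
    private static final int WINDOW = 300;
    private static final int[] times = new int[WINDOW];
    private static final int[] counts = new int[WINDOW];

    static void hit(int t) {
        int idx = t % WINDOW;
        if (times[idx] != t) { times[idx] = t; counts[idx] = 1; }
        else { counts[idx]++; }
    }

    static int getHits(int t) {
        int total = 0;
        for (int i = 0; i < WINDOW; i++) {
            if (t - times[i] < WINDOW) total += counts[i];
        }
        return total;
    }

    public static void main(String[] args) {
        hit(1); hit(2); hit(3); hit(300); hit(301);
        // The t=1 bucket was overwritten by hit(301); four buckets remain.
        System.out.println(getHits(301)); // prints 4
    }
}
```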
Complete Code Implementation
```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

public class ConcurrentHitCounter {
    private static final int WINDOW = 300;
    private final AtomicInteger[] times = new AtomicInteger[WINDOW];
    private final LongAdder[] counts = new LongAdder[WINDOW];

    public ConcurrentHitCounter() {
        for (int i = 0; i < WINDOW; i++) {
            times[i] = new AtomicInteger(0);
            counts[i] = new LongAdder();
        }
    }

    public void hit(int timestamp) {
        int idx = timestamp % WINDOW;
        int prev = times[idx].get();
        if (prev != timestamp) {
            if (times[idx].compareAndSet(prev, timestamp)) {
                counts[idx].reset();
            }
        }
        counts[idx].increment();
    }

    public long getHits(int timestamp) {
        long total = 0;
        for (int i = 0; i < WINDOW; i++) {
            if (timestamp - times[i].get() < WINDOW) total += counts[i].sum();
        }
        return total;
    }
}
```

```cpp
#include <array>
#include <atomic>

class HitCounter {
    static constexpr int WINDOW = 300;
    std::array<std::atomic<int>, WINDOW> times{};
    std::array<std::atomic<long>, WINDOW> counts{};
public:
    void hit(int timestamp) {
        int idx = timestamp % WINDOW;
        int prev = times[idx].load();
        if (prev != timestamp) {
            if (times[idx].compare_exchange_strong(prev, timestamp)) {
                counts[idx].store(0);
            }
        }
        counts[idx].fetch_add(1);
    }

    long getHits(int timestamp) {
        long total = 0;
        for (int i = 0; i < WINDOW; ++i) {
            if (timestamp - times[i].load() < WINDOW) total += counts[i].load();
        }
        return total;
    }
};
```

```typescript
export class HitCounter {
  private static readonly WINDOW = 300;
  private readonly times: number[] = new Array(HitCounter.WINDOW).fill(0);
  private readonly counts: number[] = new Array(HitCounter.WINDOW).fill(0);

  hit(timestamp: number): void {
    const idx = timestamp % HitCounter.WINDOW;
    if (this.times[idx] !== timestamp) {
      this.times[idx] = timestamp;
      this.counts[idx] = 1;
    } else {
      this.counts[idx]++;
    }
  }

  getHits(timestamp: number): number {
    let total = 0;
    for (let i = 0; i < HitCounter.WINDOW; i++) {
      if (timestamp - this.times[i] < HitCounter.WINDOW) {
        total += this.counts[i];
      }
    }
    return total;
  }
}
```

Extensibility
1. "What if hits can arrive out of order?"
Replace the ring with a TreeMap<Long, LongAdder>. On every hit, insert into the map and evict keys that have aged out of the window. getHits sums the entries still inside [t - 299, t].
```java
private final TreeMap<Long, LongAdder> buckets = new TreeMap<>();

public void hit(long t) {
    buckets.computeIfAbsent(t, k -> new LongAdder()).increment();
    buckets.headMap(t - WINDOW + 1).clear();
}
```

Tradeoff: O(log N) per op instead of O(1). You pay for correctness when the input breaks monotonicity.
2. "What if the window is configurable at runtime?"
Accept windowSeconds in the constructor; size the ring accordingly. If the window can change dynamically (e.g., switch from 5 min to 1 hour without restart), the TreeMap version is mandatory — the ring size is fixed at construction.
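The constructor change is mechanical — a sketch with an illustrative class name, parameterizing the ring size at construction:

```java
// Ring-buffer counter with a constructor-supplied window.
// The window is fixed once constructed; dynamic resizing
// would require the TreeMap variant instead.
public class ConfigurableHitCounter {
    private final int window;
    private final int[] times;
    private final int[] counts;

    public ConfigurableHitCounter(int windowSeconds) {
        this.window = windowSeconds;
        this.times = new int[windowSeconds];
        this.counts = new int[windowSeconds];
    }

    public void hit(int timestamp) {
        int idx = timestamp % window;
        if (times[idx] != timestamp) {
            times[idx] = timestamp;
            counts[idx] = 1;
        } else {
            counts[idx]++;
        }
    }

    public int getHits(int timestamp) {
        int total = 0;
        for (int i = 0; i < window; i++) {
            if (timestamp - times[i] < window) total += counts[i];
        }
        return total;
    }
}
```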
3. "What if you need percentiles of inter-hit time?"
This is no longer a counter problem — it is a sketch problem (t-digest, HDRHistogram). Call out the scope change to the interviewer before redesigning.
What is Expected at Each Level
| Level | Expectations |
|---|---|
| Junior (L4) | Single-threaded ring buffer. Get the modulo trick right. May need a hint for the stale-bucket-reset logic. |
| Mid (L5) | Ring buffer plus synchronized for the concurrent variant. Discusses lock granularity when prompted. |
| Senior (L5A) | Striped AtomicInteger + LongAdder without prompting. Articulates the tiny inconsistency and defends it as acceptable for metrics. Anticipates the out-of-order follow-up with TreeMap. |
| Staff (L6) | Discusses cross-host aggregation (metrics emitted every N seconds to a central aggregator), backpressure, memory pooling, and composition with a rate limiter. |