
HLD: Auction Platform (eBay)

Understanding the Problem

What is an Auction Platform?

An auction platform lets sellers list items for timed bidding, where buyers compete by placing progressively higher bids. The winner is the highest bidder when the auction closes. The core engineering challenge is handling bid concurrency -- when thousands of users bid on a hot item in the final seconds, we need to guarantee exactly one winner with the correct highest bid, all while keeping the experience real-time and responsive.

Functional Requirements

Core (above the line):

  1. Create auction -- sellers list an item with a starting price, description, and end time
  2. Place a bid -- buyers submit a bid that must exceed the current highest bid
  3. End auction -- when the timer expires, the system determines the winner and notifies them
  4. View auction -- real-time display of the current highest bid, bid count, and time remaining

Below the line (mention but don't design):

  • User registration and profiles
  • Payment processing and escrow
  • Seller ratings and reviews
  • Auction search and discovery
  • Proxy/automatic bidding
  • Buy-it-now option

Non-Functional Requirements

  1. Bid consistency -- no two conflicting bids should both succeed; the system must resolve races deterministically
  2. Low latency -- bid placement acknowledged within 200ms; bid updates visible to all viewers within 1 second
  3. High availability -- 99.99% uptime during active auctions
  4. Scale -- 10M concurrent auctions, 1M bids/day with spikes during auction endings (10x burst), 50M DAU browsing

The Set Up

Core Entities

  • User -- id, username, email, role (buyer/seller/both)
  • Auction -- id, sellerId, title, description, startingPrice, currentHighBid, startTime, endTime, status (active/ended/cancelled)
  • Bid -- id, auctionId, bidderId, amount, timestamp
  • AuctionResult -- auctionId, winnerId, winningBidId, finalPrice, settledAt

API Design

Create an auction:

POST /api/auctions
Authorization: Bearer <token>

Request:
{
  "title": "Vintage Gibson Les Paul 1959",
  "description": "Original finish, plays beautifully...",
  "startingPrice": 50000,
  "endTime": "2024-02-01T18:00:00Z",
  "images": ["url1", "url2"]
}

Response: 201 Created
{
  "auctionId": "auc_abc123",
  "status": "active",
  "startTime": "2024-01-25T18:00:00Z",
  "endTime": "2024-02-01T18:00:00Z"
}

Place a bid:

POST /api/auctions/{auctionId}/bids
Authorization: Bearer <token>

Request:
{
  "amount": 55000,
  "clientBidId": "uuid-from-client"  // idempotency key
}

Response: 201 Created
{
  "bidId": "bid_xyz789",
  "auctionId": "auc_abc123",
  "amount": 55000,
  "isCurrentHigh": true,
  "timestamp": "2024-01-28T14:30:05Z"
}

Response: 409 Conflict
{
  "error": "BID_TOO_LOW",
  "currentHighBid": 56000,
  "message": "Your bid must exceed the current highest bid of $56,000"
}

Get auction details (with live updates via SSE):

GET /api/auctions/{auctionId}

Response: 200 OK
{
  "auctionId": "auc_abc123",
  "title": "Vintage Gibson Les Paul 1959",
  "currentHighBid": 55000,
  "bidCount": 47,
  "endTime": "2024-02-01T18:00:00Z",
  "status": "active",
  "topBidder": "user_***456"  // partially masked
}

Subscribe to auction updates (SSE):

GET /api/auctions/{auctionId}/stream
Accept: text/event-stream

data: {"type": "new_bid", "amount": 56000, "bidCount": 48, "timestamp": "..."}
data: {"type": "auction_ending", "remainingSeconds": 60}
data: {"type": "auction_ended", "winnerId": "user_789", "finalPrice": 56000}

Get bid history:

GET /api/auctions/{auctionId}/bids?limit=20&cursor=bid_xyz700

Response: 200 OK
{
  "bids": [
    { "bidId": "bid_xyz789", "bidderId": "user_***123", "amount": 55000, "timestamp": "..." }
  ],
  "nextCursor": "bid_xyz680"
}

High-Level Design

Flow 1: Placing a Bid

[Client] --> [API Gateway] --> [Bid Service] --> [Redis (current high bid)]
                                    |                    |
                                    v                    v
                              [Kafka (bid events)] --> [Bid Processor]
                                                          |
                                                    [PostgreSQL (bids table)]
                                                          |
                                                    [Notification Service] --> [SSE/WebSocket to viewers]

Step-by-step:

  1. Client submits bid via POST /api/auctions/auc_abc123/bids with amount $55,000
  2. API Gateway authenticates the user, applies rate limiting (max 5 bids/second per user per auction)
  3. Bid Service receives the request and performs the critical compare-and-set operation on Redis:
    -- Redis Lua script (atomic). KEYS = {highBid, endTime, highBidder, bidCount}
    -- ARGV = {amount, bidderId, now}; the caller passes the current time
    -- because os.time() is not available inside Redis scripts.
    local currentHigh = tonumber(redis.call('GET', KEYS[1]))
    local auctionEnd = tonumber(redis.call('GET', KEYS[2]))
    local amount = tonumber(ARGV[1])
    local now = tonumber(ARGV[3])
    if now > auctionEnd then return {0, 'AUCTION_ENDED'} end
    if amount > currentHigh then
        redis.call('SET', KEYS[1], amount)
        redis.call('SET', KEYS[3], ARGV[2])
        redis.call('INCR', KEYS[4])
        return {1, 'ACCEPTED'}
    else
        return {0, currentHigh}
    end
  4. If accepted: Bid Service publishes a BidPlaced event to Kafka, returns 201 to client
  5. If rejected: returns 409 with the current high bid so the client can show the latest price
  6. Bid Processor (Kafka consumer) persists the bid to PostgreSQL for the permanent record and audit trail
  7. Notification Service pushes the new bid to all users watching this auction via SSE

Flow 2: Creating an Auction

  1. Seller submits auction details via POST /api/auctions
  2. Auction Service validates the input (valid end time in the future, starting price > 0, images uploaded)
  3. Auction is written to PostgreSQL with status active
  4. Redis is initialized with auction:{id}:highBid = startingPrice, auction:{id}:endTime = endTime
  5. A scheduled event is created in the Auction Scheduler to trigger auction closure at endTime
  6. Auction appears in the browse/search feed (Elasticsearch is updated asynchronously via CDC or Kafka)

Flow 3: Ending an Auction

  1. Auction Scheduler (cron or event-driven) triggers at the auction's endTime
  2. Scheduler publishes an AuctionEnding event to Kafka
  3. Auction Closer Service consumes the event:
    a. Reads the final highBid and highBidder from Redis
    b. Marks the auction as ended in PostgreSQL
    c. Creates an AuctionResult record with the winner
    d. Publishes an AuctionEnded event
  4. Notification Service notifies the winner ("You won!") and the seller ("Your item sold for $55,000!")
  5. Outbid users receive "Auction ended -- you did not win" notifications
  6. Redis keys for this auction are cleaned up after a grace period (keep for 1 hour for any late reads)
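Closure must be idempotent, since the AuctionEnding event may be delivered more than once. A minimal Python sketch, with plain dicts standing in for the Redis/PostgreSQL state (all names illustrative):

```python
def close_auction(auction_id, auctions, results):
    """Idempotent closure: the status check makes a duplicate
    AuctionEnding event a no-op, so at-least-once delivery
    cannot close (or re-award) an auction twice."""
    auction = auctions[auction_id]
    if auction["status"] != "active":
        return None  # already closed by an earlier delivery
    auction["status"] = "ended"
    result = {
        "auctionId": auction_id,
        "winnerId": auction.get("highBidder"),
        "finalPrice": auction.get("highBid"),
    }
    results[auction_id] = result  # the AuctionResult record
    return result                 # caller publishes AuctionEnded
```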

Flow 4: Viewing an Auction (Real-Time Updates)

  1. User opens the auction page. Client makes GET /api/auctions/{id} for current state
  2. Client opens an SSE connection: GET /api/auctions/{id}/stream
  3. SSE Service subscribes to the Redis Pub/Sub channel auction_updates:{id}
  4. Whenever a new bid is placed, the Bid Service publishes to this Redis channel
  5. SSE Service forwards the update to all connected clients
  6. Client updates the UI with the new highest bid, bid count, and countdown timer
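The updates the SSE Service forwards are plain text frames in the event-stream format shown in the API section. A small serialization helper (a hypothetical Python function, not part of any framework); the optional `id:` line lets a reconnecting client resume from the Last-Event-ID header:

```python
import json

def sse_frame(payload, event_id=None):
    """Serialize one auction update into Server-Sent Events wire format:
    an optional `id:` line, a `data:` line, and a blank-line terminator."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    lines.append("data: " + json.dumps(payload, separators=(",", ":")))
    return "\n".join(lines) + "\n\n"
```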

Potential Deep Dives

Deep Dive 1: Optimistic Locking vs Pessimistic Locking for Bids

The Problem: Two users bid at the same millisecond. User A bids $55K, User B bids $56K. We must ensure only the higher bid wins and no money is double-committed.

Bad Solution -- No locking (read-then-write in application code):

currentHigh = db.query("SELECT highBid FROM auctions WHERE id = ?", auctionId)
if newBid > currentHigh:
    db.query("UPDATE auctions SET highBid = ? WHERE id = ?", newBid, auctionId)

Race condition: both threads read $54K, both think they are higher, both write. Last write wins regardless of amount. Broken.

Good Solution -- Pessimistic locking (SELECT FOR UPDATE):

BEGIN;
SELECT highBid FROM auctions WHERE id = 'auc_123' FOR UPDATE;
-- row is locked; other transactions block here
UPDATE auctions SET highBid = :newBid
WHERE id = 'auc_123' AND highBid < :newBid;
COMMIT;

Correct, but slow under contention. If 1,000 bids/sec hit a hot auction, they serialize. Each transaction takes ~5ms with disk I/O = throughput of 200 bids/sec max per auction. Creates a bottleneck.

Great Solution -- Redis atomic compare-and-set (what we use): We use a Lua script in Redis (as shown in Flow 1). Lua scripts execute atomically in Redis -- no other command runs between the GET and SET. Redis is single-threaded per shard, so there is zero lock contention.

Performance: Redis handles ~100K operations/sec on a single key. Even the hottest auction will not exceed a few thousand bids/sec. This is more than sufficient.

Trade-off: Redis is in-memory and could lose data on crash. We mitigate this by:

  • Using Redis with AOF persistence (fsync every second)
  • Writing every accepted bid to Kafka immediately (durable log)
  • The PostgreSQL write (via Kafka consumer) is the source of truth for the bid history
  • If Redis crashes, we rebuild the current high bid by replaying the Kafka topic for active auctions
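The recovery step is a pure fold over the bid log. A Python sketch of the replay, with event field names assumed to match the BidPlaced payload:

```python
def rebuild_high_bids(bid_events):
    """Replay BidPlaced events in log order to reconstruct each active
    auction's current high bid and bidder after a Redis crash."""
    state = {}
    for event in bid_events:
        current = state.get(event["auctionId"])
        if current is None or event["amount"] > current["highBid"]:
            state[event["auctionId"]] = {
                "highBid": event["amount"],
                "highBidder": event["bidderId"],
            }
    return state
```

Because every accepted bid hit Kafka before the client saw a 201, replaying the topic cannot miss an acknowledged bid.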

Deep Dive 2: Cache Invalidation for Write-Heavy Workloads

The Problem: Auction pages are read-heavy (10K viewers per popular auction) but also write-heavy (bids coming in every second). Traditional caching (cache-aside with TTL) causes either stale data or excessive cache misses.

Bad Solution -- Cache-aside with short TTL (1 second): Set a 1-second TTL on the cached auction state. Problem: during the last 10 seconds of a hot auction, you get ~10K cache misses per second hitting the database. This is a thundering herd.

Good Solution -- Write-through cache: On every bid, update both Redis (the source of truth for current bid) and the cache simultaneously. Viewers always read from cache. No TTL needed for the bid amount.

On bid accepted:
1. Update Redis highBid (already done atomically)
2. Publish bid event to Redis Pub/Sub for real-time viewers
3. Async: persist to PostgreSQL via Kafka

This works because Redis IS our cache for the hot path. We are not caching a DB result -- Redis is the primary store for the current auction state during the active period.

Great Solution -- CQRS (Command Query Responsibility Segregation): Separate the write path (bid placement via Redis + Kafka) from the read path entirely:

  • Write side: Redis for atomic bid validation, Kafka for event log, PostgreSQL for persistence
  • Read side: Denormalized read models in Redis or a fast read store. Auction detail pages are served from the read model.
  • Events flow from write side to read side via Kafka consumers that update the read model.

Trade-off: Eventual consistency between write and read sides (typically < 100ms). Viewers might briefly see a stale bid count, but the "current high bid" from Redis is always fresh since we read it directly.
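The read-side consumer reduces to folding write-side events into one denormalized document per auction. A Python sketch, with assumed event shapes (the field names are illustrative):

```python
def apply_event(read_model, event):
    """Read-side Kafka consumer: fold write-side events into the
    denormalized auction-detail documents served to viewers."""
    doc = read_model.setdefault(
        event["auctionId"],
        {"currentHighBid": 0, "bidCount": 0, "status": "active"},
    )
    if event["type"] == "BidPlaced":
        doc["bidCount"] += 1
        doc["currentHighBid"] = max(doc["currentHighBid"], event["amount"])
    elif event["type"] == "AuctionEnded":
        doc["status"] = "ended"
    return read_model
```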

Deep Dive 3: Auction End Timer -- Scheduled Jobs vs Event-Driven

The Problem: We have 10M active auctions. Each ends at a specific time. How do we trigger auction closure at the exact right moment?

Bad Solution -- Polling cron job: Run a job every second: SELECT * FROM auctions WHERE endTime <= NOW() AND status = 'active'. Scanning 10M rows every second is brutal on the database, even with an index.

Good Solution -- Scheduled job queue (e.g., Redis sorted set as delay queue):

ZADD auction_endings <endTime_as_score> <auctionId>

A worker runs every second:

ZRANGEBYSCORE auction_endings -inf <now> LIMIT 0 100

This returns auctions that should end now. Process them and remove from the set. The range query is O(log N + M), where M is the number of due auctions returned. Handles 10M auctions efficiently.
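The worker loop can be modeled with any ordered structure. Here is a Python sketch using a heap as a single-process stand-in for the Redis sorted set (names are illustrative; the real worker should ZREM a member only after successfully processing it, so a crash between read and remove cannot lose a closure):

```python
import heapq

auction_endings = []  # (endTime, auctionId) pairs; stand-in for the sorted set

def schedule_ending(auction_id, end_time):
    """ZADD auction_endings <endTime> <auctionId>."""
    heapq.heappush(auction_endings, (end_time, auction_id))

def pop_due(now, limit=100):
    """ZRANGEBYSCORE auction_endings -inf <now> LIMIT 0 <limit>,
    followed by removal of the processed members."""
    due = []
    while auction_endings and auction_endings[0][0] <= now and len(due) < limit:
        _, auction_id = heapq.heappop(auction_endings)
        due.append(auction_id)
    return due
```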

Great Solution -- Event-driven with scheduled messages: For short horizons, publish a message to Amazon SQS with DelaySeconds (up to 15 minutes; note that FIFO queues support only a queue-level delay, not per-message timers). For longer delays, use a scheduler service (like Amazon EventBridge Scheduler) that triggers a Lambda/ECS task at the auction's endTime.

Architecture:

  1. On auction creation: EventBridge Scheduler.createSchedule(endTime, { action: "closeAuction", auctionId }).
  2. At endTime, EventBridge invokes a Lambda function.
  3. Lambda publishes AuctionEnding to Kafka.
  4. Auction Closer Service processes it (idempotently -- checks if already closed).

Trade-off: EventBridge Scheduler fires close to the target time but does not guarantee sub-second precision. For auctions, that is acceptable. The idempotent check prevents double-closure if the message is delivered twice.

Deep Dive 4: Bid Sniping Prevention

The Problem: "Sniping" is when a bidder waits until the last second to place a bid, giving no one time to respond. This frustrates other bidders and can depress final prices.

Bad Solution -- Do nothing: Snipers win regularly. Other bidders feel cheated. Seller gets lower prices. Bad marketplace dynamics.

Good Solution -- Anti-snipe extension: If a bid is placed in the last N minutes (e.g., 5 minutes), automatically extend the auction end time by N more minutes. This mirrors physical auction behavior ("Going once, going twice...").

Implementation:

-- In the Redis Lua bid script, add (the current time arrives via ARGV,
-- since os.time() is not available inside Redis scripts):
local now = tonumber(ARGV[3])
local timeLeft = auctionEnd - now
if timeLeft < 300 then  -- less than 5 minutes remaining
    local newEnd = now + 300
    redis.call('SET', 'auction:auc_123:endTime', newEnd)
    -- Also update the EventBridge schedule (done by the caller, outside Redis)
end

After updating Redis, publish an AuctionExtended event so that:

  • The Auction Scheduler reschedules the closure
  • SSE clients are notified of the new end time
  • The database is updated asynchronously

Great Solution -- Combine anti-snipe with proxy bidding: Allow users to set a maximum bid. The system automatically bids on their behalf up to that maximum, incrementing by the minimum bid increment. This means snipers cannot win just by waiting -- the proxy bid responds instantly.

-- Proxy bid logic (sketch, in Lua; manualBid, currentHigh, currentHighBidder,
-- minIncrement, and manualBidderId are bound earlier in the script):
if manualBid > currentHigh then
    -- Check if the current leader has a proxy (maximum) bid on file
    local proxyMax = tonumber(redis.call('GET', 'auction:auc_123:proxy:' .. currentHighBidder))
    if proxyMax and proxyMax >= manualBid then
        -- Auto-bid on behalf of the current leader, capped at their maximum
        local autoBid = math.min(manualBid + minIncrement, proxyMax)
        redis.call('SET', 'auction:auc_123:highBid', autoBid)
        -- Notify the manual bidder they were outbid
    else
        -- Manual bidder takes the lead
        redis.call('SET', 'auction:auc_123:highBid', manualBid)
        redis.call('SET', 'auction:auc_123:highBidder', manualBidderId)
    end
end

Trade-off: Proxy bidding adds complexity to the Lua script and creates questions around transparency ("Why was I immediately outbid?"). Clear UX communication is needed.

Deep Dive 5: Hot Auction Scaling

Back-of-envelope for a viral auction:

  • 500K concurrent viewers on one auction page
  • 10K bids per minute in the final 5 minutes
  • Each viewer needs real-time bid updates

SSE fan-out challenge: 500K SSE connections, each getting ~10 updates/minute (bid events are coalesced rather than pushed one per bid). That is 5M messages/minute ≈ 83K messages/sec from one auction's update stream.

Solution -- Tiered fan-out:

  1. Single Redis Pub/Sub channel per auction
  2. Multiple SSE Gateway servers subscribe to the same Redis channel
  3. Each SSE Gateway handles ~50K connections
  4. 500K viewers / 50K per server = 10 SSE Gateway instances for this one auction
  5. Redis Pub/Sub delivers the message to each of the 10 subscribers
  6. Each SSE Gateway fans out to its 50K connected clients

This is essentially a CDN for real-time data. The Redis Pub/Sub is a single publish point; the fan-out happens at the edge (SSE Gateways).
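The delivery math can be checked with a toy model of the two tiers (all names illustrative; client counts stand in for real SSE connections):

```python
class Gateway:
    """One SSE Gateway; here it just counts deliveries to its clients."""
    def __init__(self, client_count):
        self.client_count = client_count

    def deliver(self, message):
        return self.client_count  # one write per connected client

def publish(gateways, message):
    """One Pub/Sub publish: delivered once per subscribed gateway,
    which then fans out to its own connected clients."""
    return sum(gw.deliver(message) for gw in gateways)
```

With 10 gateways of 50K clients each, a single publish results in 10 Pub/Sub deliveries and 500K client writes, matching the numbers above.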


What is Expected at Each Level

Mid-Level

  • Design basic auction CRUD with a relational database
  • Identify the concurrency problem with simultaneous bids
  • Propose some form of locking (even pessimistic) to handle bid races
  • Basic understanding of how to show real-time updates (polling or SSE)

Senior

  • Use Redis for atomic bid validation with Lua scripts
  • Design the full event-driven architecture with Kafka for bid events
  • Explain cache invalidation strategy for a write-heavy workload
  • Design the auction ending mechanism (scheduled jobs)
  • Back-of-envelope calculations for bid throughput and connection count
  • Discuss trade-offs: optimistic vs pessimistic locking, consistency vs latency
  • Handle edge cases: bid sniping, auction extension, idempotent bid placement

Staff+

  • Design CQRS pattern separating read and write paths
  • Hot auction scaling with tiered SSE fan-out
  • Multi-region considerations: what if bidders are in different regions? How do we maintain bid ordering?
  • Fraud detection: bid shilling (seller bidding on own item), bid rigging patterns
  • Proxy bidding system design with its UX and trust implications
  • Design for graceful degradation: if Redis goes down mid-auction, how do we recover without losing bids? (Answer: replay from Kafka)
  • Financial accuracy: handle currency precision, bid increment rules, reserve prices
