
Sharding

How databases horizontally partition data across multiple servers for scalability, using partition keys to distribute and route data efficiently

TL;DR

Sharding is a database architecture pattern that horizontally partitions data across multiple servers (shards), where each shard holds a subset of the total data. A shard key determines which shard stores each record, enabling systems to scale beyond single-server capacity while maintaining reasonable query performance.

Visual Overview

(Diagram: data distributed across multiple shards by shard key)

Core Explanation

What is Sharding?

Sharding is a database partitioning technique that splits a large dataset horizontally across multiple independent databases (shards). Each shard contains a unique subset of the data, determined by a shard key (also called partition key).

Think of sharding like dividing a massive library across multiple buildings:

  • Single library (no sharding): All 10 million books in one building, limited by building capacity
  • Sharded library (4 buildings): Books divided alphabetically - Building A holds A-F, Building B holds G-L, etc. Each building handles fewer books and concurrent visitors

Key Characteristics:

  • Horizontal partitioning: Splits data by rows, not columns
  • Shard key: Determines which shard stores each record
  • Independent shards: Each shard is a separate database server
  • Distributed queries: Queries may hit one shard (ideal) or multiple shards

Shard Key Selection

The shard key is the most critical design decision in sharding. A good shard key:

  1. Distributes data evenly: Avoids “hot shards” with disproportionate load
  2. Aligns with query patterns: Minimizes cross-shard queries
  3. Has high cardinality: Many unique values to distribute across shards
  4. Is immutable: Changing a shard key requires data migration

Example: E-commerce Database

For an orders table, user_id is a strong shard key: it has high cardinality, spreads load across many customers, and matches the most common query (fetch a given user's orders). order_date would be a poor choice, since recent dates concentrate writes on a single shard.
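
As a rough sanity check on distribution, a minimal sketch like the one below (the order sample and 4-shard count are made up for illustration) counts how many records each candidate shard key would send to each shard; an even spread suggests a workable key, a heavy skew warns of hot shards.

import hashlib
from collections import Counter

def shard_of(value: str, num_shards: int = 4) -> int:
    # Stable hash of the candidate key value → shard index
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % num_shards

# Hypothetical sample of order records
orders = [
    {"order_id": f"o{i}", "user_id": f"u{i % 50}", "order_date": "2024-06-01"}
    for i in range(10_000)
]

for key in ("user_id", "order_date"):
    print(key, dict(Counter(shard_of(row[key]) for row in orders)))

# user_id spreads orders across all four shards; order_date (a single value in
# this sample) piles everything onto one shard, a classic hot-shard key.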

Shard Key Strategies

1. Hash-Based Sharding (Most Common)

# Distribute data evenly using a hash function.
# Python's built-in hash() is randomized per process, so a stable hash
# (MD5 here) is used for shard routing.
import hashlib

def shard_number(user_id: str, number_of_shards: int) -> int:
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % number_of_shards

# Example (4 shards): every user_id maps deterministically to shard 0-3
# shard_number("user_12345", 4) might return 1  → store in Shard 1
# shard_number("user_67890", 4) might return 3  → store in Shard 3

Pros:

  • ✓ Even distribution (hash function randomizes)
  • ✓ Simple to implement
  • ✓ Predictable shard lookup

Cons:

  • ✕ Difficult to add/remove shards (rehashing required)
  • ✕ Can’t do range queries on the shard key (e.g., user_id > 50000), since consecutive values land on different shards

2. Range-Based Sharding

Each shard owns a contiguous range of shard key values, for example user_id 1 to 1,000,000 on Shard 1 and 1,000,001 to 2,000,000 on Shard 2; a lookup routes to the shard whose range contains the key.

Pros:

  • ✓ Supports range queries (all users with id > 500,000)
  • ✓ Easy to add shards (just split ranges)

Cons:

  • ✕ Risk of uneven distribution (new users may cluster in recent ranges)
  • ✕ Hot shards if data is time-based (newest users get all traffic)
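
A minimal sketch of range-based routing, assuming illustrative user_id boundaries (the RANGE_UPPER_BOUNDS values below are not from the text): each shard owns a contiguous range, and a range query only needs to touch the shards whose ranges overlap it.

import bisect

# Inclusive upper bound of each shard's user_id range (illustrative values)
RANGE_UPPER_BOUNDS = [1_000_000, 2_000_000, 3_000_000, 4_000_000]

def shard_for_user(user_id: int) -> int:
    # First range whose upper bound covers this user_id
    return bisect.bisect_left(RANGE_UPPER_BOUNDS, user_id)

def shards_for_range(low: int, high: int) -> list[int]:
    # A range query touches only the shards overlapping [low, high]
    return list(range(shard_for_user(low), shard_for_user(high) + 1))

print(shard_for_user(500_001))                 # 0
print(shards_for_range(1_500_000, 2_500_000))  # [1, 2]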

3. Geographic Sharding

Each shard serves a geographic region, for example North America on one shard and Europe on another; users are routed to the shard for their region.

Pros:

  • ✓ Reduces latency (data close to users)
  • ✓ Regulatory compliance (data residency requirements)
  • ✓ Simple routing (IP-based or user-selected)

Cons:

  • ✕ Uneven distribution (more users in some regions)
  • ✕ Cross-region queries are slow
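
A minimal sketch of geographic routing, with made-up region names and shard labels: the region typically comes from the user's profile or IP geolocation, and unknown regions fall back to a default shard.

# Illustrative region-to-shard mapping
REGION_SHARDS = {
    "north_america": "shard-na",
    "europe": "shard-eu",
    "asia_pacific": "shard-apac",
}

def shard_for_region(region: str, default: str = "shard-na") -> str:
    return REGION_SHARDS.get(region, default)

print(shard_for_region("europe"))  # shard-eu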

Query Routing

When a query arrives, the application layer must determine which shard(s) to query:

Single-Shard Query (Ideal - Fast)

-- Query includes shard key
SELECT * FROM orders WHERE user_id = 12345;

Routing:
1. Extract shard key: user_id = 12345
2. Calculate shard: hash(12345) % 4 = 1
3. Query only Shard 1
4. Return results

Performance: O(1) shard lookup + single database query

Multi-Shard Query (Scatter-Gather - Slow)

-- Query does NOT include shard key
SELECT * FROM orders WHERE product_id = 'XYZ123';

Routing:
1. No shard key → Must query all shards
2. Send query to Shards 1, 2, 3, 4 in parallel
3. Merge results from all shards
4. Sort/paginate combined results
5. Return to client

Performance: O(N) where N = number of shards
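
A minimal routing-layer sketch under the hash-based scheme above; the query_shard helper stands in for whatever per-shard connection pool the application uses and is not a real driver API. Queries that carry the shard key hit exactly one shard, while queries without it fan out to every shard and merge the results.

import hashlib
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4

def shard_for(user_id: int) -> int:
    return int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % NUM_SHARDS

def query_shard(shard: int, sql: str, params: tuple) -> list[dict]:
    # Placeholder: run `sql` against the connection pool for `shard`
    raise NotImplementedError

def orders_for_user(user_id: int) -> list[dict]:
    # Single-shard query: the shard key is known, so only one shard is hit
    return query_shard(shard_for(user_id),
                       "SELECT * FROM orders WHERE user_id = %s", (user_id,))

def orders_for_product(product_id: str) -> list[dict]:
    # Scatter-gather: no shard key, so query every shard in parallel and merge
    with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
        futures = [pool.submit(query_shard, s,
                               "SELECT * FROM orders WHERE product_id = %s",
                               (product_id,))
                   for s in range(NUM_SHARDS)]
        rows = [row for f in futures for row in f.result()]
    return sorted(rows, key=lambda row: row.get("created_at", ""))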

Query Pattern Optimization:

Design the shard key so the most frequent queries include it and resolve on a single shard; reserve scatter-gather for rare reporting or analytical queries.

Rebalancing and Resharding

As data grows, you may need to add more shards. This is challenging:

The Resharding Problem:

With naive modulo hashing, going from N to N+1 shards changes hash(key) % N for nearly every key, so most of the data has to be moved between servers.

Solutions:

  1. Consistent Hashing: Minimizes data movement when adding shards
  2. Virtual Shards: More shards than physical servers, easier rebalancing
  3. Pre-sharding: Start with more shards than needed (e.g., 256 shards on 4 servers)
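
A minimal sketch of solutions 1 and 2 combined: a consistent hash ring with virtual nodes, using illustrative shard names and 256 virtual points per server. Each key walks clockwise to the next virtual node, so adding a shard later moves only roughly 1/N of the keys.

import bisect
import hashlib

def _stable_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers: list[str], vnodes: int = 256):
        # Each physical server owns many virtual points on the ring
        self._ring = sorted(
            (_stable_hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def server_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash
        idx = bisect.bisect_right(self._points, _stable_hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["shard-1", "shard-2", "shard-3", "shard-4"])
print(ring.server_for("user_12345"))  # e.g. shard-3
# Adding "shard-5" later reassigns only the keys that land on its new points.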

Tradeoffs

Advantages:

  • Horizontal scalability: Add more servers to handle more data
  • Improved throughput: Queries distributed across multiple databases
  • Fault isolation: One shard failure doesn’t affect others
  • Reduced latency: Smaller datasets per shard = faster queries

Disadvantages:

  • Increased complexity: Application must handle routing logic
  • Cross-shard queries are expensive: Scatter-gather operations slow
  • Transactions across shards: Difficult or impossible (need distributed transactions)
  • Rebalancing is hard: Adding/removing shards requires data migration
  • Hot shards: Poor shard key choice leads to uneven load

Real Systems Using Sharding

MongoDB (Auto-Sharding)

  • Implementation: Chunk-based sharding with automatic balancing
  • Shard Key: Chosen by user (e.g., user_id, timestamp)
  • Scale: Supports thousands of shards
  • Typical Setup: Start with 3 shards, auto-split and rebalance as data grows

(Diagram: MongoDB sharding architecture)

Cassandra (Hash Partitioning)

  • Implementation: Consistent hashing with virtual nodes
  • Partition Key: First part of primary key
  • Scale: Designed for massive scale (Instagram uses 1000+ nodes)
  • Typical Setup: 256 virtual nodes per physical server

DynamoDB (Managed Sharding)

  • Implementation: Automatic partitioning by AWS
  • Partition Key: Required in table schema
  • Scale: Auto-scales partitions based on throughput
  • Typical Setup: Transparent to user (AWS manages shards)
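
For comparison, a minimal boto3 sketch of declaring the partition key when creating a table (the table and attribute names are made up; AWS manages the physical partitions behind this key):

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# KeyType "HASH" marks user_id as the partition key that DynamoDB shards by
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)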

Instagram (Custom Sharding)

  • Implementation: PostgreSQL with application-level sharding
  • Shard Key: user_id
  • Scale: Thousands of database servers
  • Strategy: Store all user data (photos, likes, followers) on same shard for single-shard queries

When to Use Sharding

✓ Perfect Use Cases

High Write Throughput

A single primary database becomes the write bottleneck; sharding spreads writes across many servers so total write capacity scales with the number of shards.

Large Dataset That Doesn’t Fit on One Server

The total data volume exceeds what one server can store or keep in memory; each shard holds only a manageable subset.

Read-Heavy Workload with Query Patterns

When most reads include the shard key, each query routes to a single shard and stays fast even as the overall dataset grows.

✕ When NOT to Use Sharding

Small Dataset (under 100GB)

A single well-provisioned server, possibly with read replicas, handles this comfortably; sharding only adds routing and operational complexity.

Frequent Cross-Shard Queries

If most queries cannot include the shard key, they degrade into scatter-gather operations across every shard and sharding's benefits disappear.

Need for ACID Transactions Across Entities

Transactions that span shards need distributed commit protocols (e.g., two-phase commit), which are slow and complex; keep such workloads on a single database.

Interview Application

Common Interview Question 1

Q: “Design a database for Twitter. How would you shard the data?”

Strong Answer:

“I’d shard by user_id using hash-based partitioning. Here’s why:

Rationale:

  • Most queries are user-centric: get user’s tweets, timeline, followers
  • Sharding by user_id means all user data lives on one shard
  • Single-shard queries are fast and don’t require cross-shard operations

Shard Key: hash(user_id) % number_of_shards

Data Co-location:

  • User profile → Shard X
  • User’s tweets → Shard X
  • User’s followers → Shard X
  • User’s timeline cache → Shard X

Query Patterns:

  • Get user profile: Single-shard query ✓
  • Get user’s tweets: Single-shard query ✓
  • Post new tweet: Single-shard write ✓

Cross-Shard Challenge:

  • Building home timeline (tweets from followed users) requires cross-shard queries
  • Solution: Pre-compute timelines using fan-out on write (write tweets to follower timelines)

Scaling Strategy:

  • Start with 16 shards (over-provision)
  • As users grow, add more shards using consistent hashing
  • Use virtual shards (256 virtual shards, 16 physical servers initially)”
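
A minimal sketch of the fan-out-on-write idea from the answer above; the follower lookup and in-memory timeline store are hypothetical stand-ins for per-shard storage.

import hashlib

NUM_SHARDS = 16

def shard_for(user_id: int) -> int:
    return int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % NUM_SHARDS

# Hypothetical stand-in for per-shard timeline storage: shard → user → tweet ids
timelines: dict[int, dict[int, list[int]]] = {s: {} for s in range(NUM_SHARDS)}

def follower_ids(author_id: int) -> list[int]:
    # Placeholder: in practice, read the follower list from the author's shard
    return []

def post_tweet(author_id: int, tweet_id: int) -> None:
    # Store the tweet on the author's shard (not shown), then fan out on write
    for follower in follower_ids(author_id):
        shard = shard_for(follower)
        timelines[shard].setdefault(follower, []).insert(0, tweet_id)  # newest first

def home_timeline(user_id: int, limit: int = 50) -> list[int]:
    # Reading the pre-computed home timeline is now a single-shard lookup
    return timelines[shard_for(user_id)].get(user_id, [])[:limit]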

Why This Answer Works:

  • Identifies appropriate shard key with reasoning
  • Explains query pattern optimization
  • Addresses cross-shard challenge with solution
  • Discusses scaling strategy

Common Interview Question 2

Q: “Your sharded database has a ‘hot shard’ that’s getting 10x more traffic than others. How do you fix it?”

Strong Answer:

“Hot shard indicates poor shard key distribution. Here’s how I’d address it:

Immediate Fix (Short-term):

  1. Vertical scaling: Upgrade the hot shard’s hardware temporarily
  2. Read replicas: Add read replicas for hot shard to distribute read load
  3. Caching: Cache frequently accessed data from hot shard

Root Cause Analysis:

  • Is it a specific celebrity user? (data skew)
  • Is it timestamp-based clustering? (recent data hotspot)
  • Is it a geographic region? (regional load)

Long-term Fix (Depends on cause):

If celebrity users:

  • Give top 1% users dedicated shards
  • Use composite shard key: (is_celebrity, user_id)
  • Celebrities distributed separately

If timestamp clustering:

  • Switch from range-based to hash-based sharding
  • Use: hash(user_id) instead of timestamp ranges

If geographic:

  • Further subdivide hot region
  • e.g., split ‘North America’ into US-East, US-West, and Canada

Rebalancing Strategy:

  • Use consistent hashing to minimize data movement
  • Perform migration during low-traffic hours
  • Keep old shard online during migration (dual writes)
  • Cutover once new shard is caught up

Prevention:

  • Monitor shard metrics (CPU, throughput, latency)
  • Alert when shard imbalance >20%
  • Choose shard keys with high cardinality and even distribution”
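
A minimal sketch of the composite-key idea for celebrity skew; the shard counts and celebrity set are illustrative, not from the text.

import hashlib

REGULAR_SHARDS = 16         # shards 0-15 for ordinary users
CELEBRITY_SHARDS = 4        # dedicated shards 16-19
CELEBRITY_IDS = {42, 1337}  # illustrative "top 1%" accounts

def _stable_hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def shard_for(user_id: int) -> int:
    if user_id in CELEBRITY_IDS:
        # Celebrities are spread across their own shard pool
        return REGULAR_SHARDS + _stable_hash(str(user_id)) % CELEBRITY_SHARDS
    return _stable_hash(str(user_id)) % REGULAR_SHARDS

print(shard_for(42))  # lands in 16-19
print(shard_for(7))   # lands in 0-15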

Why This Answer Works:

  • Immediate actions + root cause analysis
  • Multiple solutions depending on scenario
  • Rebalancing strategy with minimal downtime
  • Preventive measures

Red Flags to Avoid

  • ✕ Suggesting sharding for small datasets (under 100GB)
  • ✕ Not considering query patterns when choosing shard key
  • ✕ Ignoring cross-shard query challenges
  • ✕ Not explaining how to handle hot shards
  • ✕ Forgetting about rebalancing complexity

Quick Self-Check

Before moving on, can you:

  • Explain sharding in 60 seconds?
  • Draw a diagram showing data distributed across shards?
  • Explain 3 shard key strategies (hash, range, geographic)?
  • Describe the difference between single-shard and cross-shard queries?
  • Identify when to use vs NOT use sharding?
  • Explain how to handle a hot shard?

Prerequisites

None - foundational database scaling concept

Used In Systems

  • Twitter: User-sharded database
  • Instagram: Photo and user data sharded by user_id
  • Uber: Trips sharded by geohash

Explained In Detail

  • Scaling Databases - Comprehensive sharding strategies (coming soon)

Next Recommended: Consensus - Learn how shards coordinate in distributed systems

Interview Notes
  • ⭐ Must-Know
  • 💼 Interview Relevance: ~80% of system design interviews
  • 🏭 Production Impact: powers systems at MongoDB, Cassandra, Instagram
  • Performance: high-throughput query improvement
  • 📈 Scalability: thousands to billions of records