CQRS (Command Query Responsibility Segregation)

11 min read · Advanced Patterns · Interview relevance: 75%

An architectural pattern that separates read and write operations into distinct models, optimizing each for its specific use case

⭐ Must-Know: appears in ~75% of L6+ interviews
🏭 Production impact: powers systems at Netflix, Amazon, Uber
⚡ Performance: 10-100x query improvement
📈 Scalability: independent scaling of reads and writes

TL;DR

CQRS separates write operations (commands) from read operations (queries) into distinct models, each optimized for its specific purpose. Commands modify state and are validated against business rules. Queries read from denormalized, optimized read models. An event backbone (like Kafka) propagates changes from write model to read models asynchronously, enabling independent scaling and optimization of each side.

Visual Overview

TRADITIONAL (Single Model):
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚         Single Database            β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚
β”‚   β”‚      Users Table         β”‚    β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€    β”‚
β”‚   β”‚ id | name | email | ...  β”‚    β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
            ↑             ↓
         [WRITE]       [READ]
            β”‚             β”‚
      Normalized    Complex JOINs
      ACID txns     Slow queries
      Strong        Same schema
      consistency   for both

CQRS (Separate Models):
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   WRITE MODEL       β”‚           β”‚   READ MODELS       β”‚
β”‚                     β”‚           β”‚                     β”‚
β”‚  [Command Handler]  β”‚           β”‚  [Query Handler]    β”‚
β”‚         β”‚           β”‚           β”‚         ↑           β”‚
β”‚         β–Ό           β”‚           β”‚         β”‚           β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚  Events   β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚Domain Model  β”‚   β”‚  ────────▢│  β”‚User View     β”‚   β”‚
β”‚  β”‚- Validation  β”‚   β”‚   Kafka   β”‚  β”‚(Denormalized)β”‚   β”‚
β”‚  β”‚- Business    β”‚   β”‚           β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚  β”‚  Rules       β”‚   β”‚           β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚- ACID        β”‚   β”‚           β”‚  β”‚Stats View    β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚           β”‚  β”‚(Aggregated)  β”‚   β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚           β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚  β”‚Event Store   β”‚   β”‚           β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚  β”‚(Write DB)    β”‚   β”‚           β”‚  β”‚Search Index  β”‚   β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚           β”‚  β”‚(Elasticsearch)β”‚   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜           β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
                                  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

KEY PRINCIPLES:
β”œβ”€β”€ Different models: Optimized for read vs write
β”œβ”€β”€ Eventual consistency: Read models lag slightly
β”œβ”€β”€ Event-driven: Events propagate changes
β”œβ”€β”€ Independent scaling: Scale reads/writes separately
└── Specialized storage: Different DBs for different needs

Core Explanation

What is CQRS?

CQRS (Command Query Responsibility Segregation) is based on a simple principle:

  • Commands: Change state (write operations)
  • Queries: Return data (read operations)
  • Never mix them: Commands don’t return data, queries don’t change state
BEFORE CQRS (Traditional):
public User updateUser(String userId, String newEmail) {
    User user = userRepo.findById(userId);  // Read
    user.setEmail(newEmail);                // Write
    userRepo.save(user);                    // Write
    return user;                            // Return data (mixing!)
}

AFTER CQRS (Separated):

// COMMAND SIDE (Write):
public void updateUserEmail(UpdateEmailCommand cmd) {
    User user = commandRepo.findById(cmd.getUserId());

    // Business validation
    if (!emailValidator.isValid(cmd.getNewEmail())) {
        throw new InvalidEmailException();
    }

    // Generate event
    EmailChangedEvent event = new EmailChangedEvent(
        cmd.getUserId(),
        user.getEmail(),
        cmd.getNewEmail()
    );

    // Persist event
    eventStore.append(event);

    // Update write model
    user.setEmail(cmd.getNewEmail());
    commandRepo.save(user);

    // Publish event to Kafka
    kafkaProducer.send("user-events", event);
}

// QUERY SIDE (Read):
public UserView getUserById(String userId) {
    // Read from optimized read model
    return queryRepo.findById(userId);
}

public List<UserView> searchUsers(String namePrefix) {
    // Query from specialized search index
    return elasticsearchClient.search(namePrefix);
}

Write Side (Command Model)

Responsibilities:

  • Validate business rules
  • Enforce invariants
  • Maintain consistency
  • Generate events
  • Persist state changes
// Command definition
public class UpdateEmailCommand {
    private final String userId;
    private final String newEmail;
    private final String requesterId;

    // Commands are immutable: all state is set once in the constructor
    public UpdateEmailCommand(String userId, String newEmail, String requesterId) {
        this.userId = userId;
        this.newEmail = newEmail;
        this.requesterId = requesterId;
    }

    public String getUserId() { return userId; }
    public String getNewEmail() { return newEmail; }
    public String getRequesterId() { return requesterId; }
}

// Command handler
@CommandHandler
public class UserCommandHandler {
    private final UserRepository writeRepo;
    private final EventStore eventStore;
    private final KafkaProducer<String, DomainEvent> eventBus;

    public void handle(UpdateEmailCommand command) {
        // Load aggregate
        User user = writeRepo.findById(command.getUserId())
            .orElseThrow(() -> new UserNotFoundException());

        // Business validation
        validateEmailChange(user, command.getNewEmail());

        // Business logic
        String oldEmail = user.getEmail();
        user.setEmail(command.getNewEmail());
        user.setVersion(user.getVersion() + 1);

        // Generate domain event
        EmailChangedEvent event = EmailChangedEvent.builder()
            .aggregateId(command.getUserId())
            .oldEmail(oldEmail)
            .newEmail(command.getNewEmail())
            .version(user.getVersion())
            .timestamp(Instant.now())
            .userId(command.getRequesterId())
            .build();

        // Persist (atomic transaction)
        writeRepo.save(user);
        eventStore.append(command.getUserId(), event);

        // Publish to event bus (after commit)
        eventBus.send(new ProducerRecord<>("user-events",
            command.getUserId(), event));
    }

    private void validateEmailChange(User user, String newEmail) {
        if (user.getEmail().equals(newEmail)) {
            throw new NoChangeException("Email unchanged");
        }

        if (!EmailValidator.isValid(newEmail)) {
            throw new InvalidEmailException(newEmail);
        }

        if (emailAlreadyTaken(newEmail)) {
            throw new EmailTakenException(newEmail);
        }
    }
}

Read Side (Query Model)

Responsibilities:

  • Optimize for query patterns
  • Denormalize data
  • Pre-compute aggregations
  • Support multiple views
  • Handle eventual consistency
// Query model (denormalized)
public class UserView {
    private String userId;
    private String email;
    private String fullName;
    private UserStatus status;
    private int orderCount;           // Pre-computed
    private BigDecimal totalSpent;    // Pre-computed
    private Instant lastLoginAt;
    private Instant createdAt;
    private Instant updatedAt;

    // Optimized for reads - no business logic
}

// Query handler
@QueryHandler
public class UserQueryHandler {
    private final UserViewRepository queryRepo;
    private final UserStatsRepository statsRepo;  // backs getUserStats below
    private final ElasticsearchClient searchClient;
    private final RedisTemplate<String, UserView> cache;

    public UserView getUser(String userId) {
        // Try cache first
        UserView cached = cache.opsForValue().get(userId);
        if (cached != null) return cached;

        // Read from optimized read DB
        UserView user = queryRepo.findById(userId)
            .orElseThrow(() -> new UserNotFoundException());

        // Cache for future reads
        cache.opsForValue().set(userId, user,
            Duration.ofMinutes(5));

        return user;
    }

    public List<UserView> searchUsers(String query, int page, int size) {
        // Use specialized search index
        return searchClient.search(query, page, size);
    }

    public UserStatsView getUserStats(String userId) {
        // Query from pre-aggregated stats view
        return statsRepo.findStatsByUserId(userId);
    }
}

Event Propagation (Write to Read)

Event-Driven Synchronization:

// Projection builder (updates read models)
@Component
public class UserProjectionBuilder {
    private final UserViewRepository userViewRepo;
    private final ElasticsearchClient searchClient;
    private final RedisTemplate<String, UserView> cache;

    @KafkaListener(topics = "user-events",
                   groupId = "user-view-projection")
    public void handleEvent(DomainEvent event) {

        switch (event.getEventType()) {
            case "UserCreated" ->
                handleUserCreated((UserCreatedEvent) event);

            case "EmailChanged" ->
                handleEmailChanged((EmailChangedEvent) event);

            case "StatusUpgraded" ->
                handleStatusUpgraded((StatusUpgradedEvent) event);
        }
    }

    private void handleUserCreated(UserCreatedEvent event) {
        // Create new read model entry
        UserView view = UserView.builder()
            .userId(event.getAggregateId())
            .email(event.getEmail())
            .fullName(event.getName())
            .status(UserStatus.ACTIVE)
            .orderCount(0)
            .totalSpent(BigDecimal.ZERO)
            .createdAt(event.getTimestamp())
            .build();

        userViewRepo.save(view);

        // Also update search index
        searchClient.index(view);

        // Invalidate cache
        cache.delete(event.getAggregateId());
    }

    private void handleEmailChanged(EmailChangedEvent event) {
        // Update existing read model
        UserView view = userViewRepo.findById(event.getAggregateId())
            .orElseThrow();

        view.setEmail(event.getNewEmail());
        view.setUpdatedAt(event.getTimestamp());

        userViewRepo.save(view);
        searchClient.update(view);
        cache.delete(event.getAggregateId());
    }
}
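Kafka gives consumers at-least-once delivery by default, so a redelivered event can reach the projection builder twice. A common safeguard is to make projection updates idempotent with a version guard. A minimal sketch in plain Java, where the `UserView` record and the in-memory map are hypothetical stand-ins for the read store:

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotentProjection {
    // Hypothetical read-model entry: current email plus the version of the
    // last event applied to it.
    public record UserView(String email, long version) {}

    // Stand-in for the read store (would be PostgreSQL/Cassandra/etc.).
    public static final Map<String, UserView> views = new HashMap<>();

    // Apply an EmailChanged-style event only if it is newer than the view.
    // Returns true if the event was applied, false if it was skipped.
    public static boolean applyEmailChanged(String userId, String newEmail,
                                            long eventVersion) {
        UserView current = views.get(userId);
        if (current != null && eventVersion <= current.version()) {
            return false; // duplicate or out-of-order delivery: skip
        }
        views.put(userId, new UserView(newEmail, eventVersion));
        return true;
    }

    public static void main(String[] args) {
        applyEmailChanged("123", "a@x", 1);
        applyEmailChanged("123", "b@x", 2);
        // Redelivery of version 2 with stale data is ignored:
        boolean appliedDuplicate = applyEmailChanged("123", "a@x", 2);
        System.out.println(views.get("123").email() + " " + appliedDuplicate);
        // prints "b@x false"
    }
}
```

The same guard also handles out-of-order events across partitions, as long as events for one aggregate carry a monotonically increasing version (which the write side above maintains via `user.setVersion`).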

Multiple Read Models

Specialized Views for Different Use Cases:

EVENT STREAM (Kafka):
[UserCreated] [EmailChanged] [OrderPlaced] [OrderPlaced] ...
      β”‚              β”‚              β”‚              β”‚
      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β–Ό             β–Ό             β–Ό              β–Ό

READ MODEL 1: User Details (PostgreSQL)
{
  userId: "123"
  email: "a@x"
  name: "John"
  status: VIP
}
Use case: Profile page, account settings

READ MODEL 2: User Stats (Aggregated)
{
  userId: "123"
  orderCount: 47
  totalSpent: $2K
  avgOrder: $42
  lastOrder: ...
}
Use case: Dashboard, analytics

READ MODEL 3: Search Index (Elasticsearch)
{
  userId: "123"
  name: "John"
  searchable: Y
  filters: ...
}
Use case: Search page, admin tools

READ MODEL 4: Analytics (ClickHouse)
{
  date: "2025-01-15"
  signups: 127
  conversions: 23
  revenue: $12K
}
Use case: Analytics, reports

Handling Eventual Consistency

Challenges and Solutions:

public class EventualConsistencyPatterns {

    // Pattern 1: Optimistic UI (assume success)
    public void updateEmail_OptimisticUI(UpdateEmailCommand cmd) {
        // Send command
        commandBus.send(cmd);

        // Immediately update UI (optimistic)
        ui.updateEmail(cmd.getNewEmail());

        // Later: Event arrives and confirms
        eventBus.subscribe("user-events", event -> {
            if (event instanceof EmailChangedEvent) {
                ui.confirmEmailChanged(); // Remove "pending" indicator
            }
        });
    }

    // Pattern 2: Versioning (detect stale reads)
    public UserView getUserWithVersion(String userId) {
        UserView user = queryRepo.findById(userId).orElseThrow();

        // Include the version in the response so the client can
        // detect stale data by comparing versions
        return user; // version included
    }

    // Pattern 3: Eventual consistency window
    public void waitForConsistency(String userId, long expectedVersion)
            throws InterruptedException {
        // Poll until the read model catches up
        int maxAttempts = 10;
        int attempt = 0;

        while (attempt < maxAttempts) {
            UserView view = queryRepo.findById(userId).orElseThrow();

            if (view.getVersion() >= expectedVersion) {
                return; // Consistent now
            }

            Thread.sleep(100); // Wait 100ms between polls
            attempt++;
        }

        throw new EventualConsistencyTimeoutException();
    }

    // Pattern 4: Read from write model (fallback)
    public UserView getUserStronglyConsistent(String userId) {
        // If strong consistency is required, read from the write model
        User domainModel = writeRepo.findById(userId).orElseThrow();

        // Convert to a view (slower, but consistent)
        return UserView.fromDomainModel(domainModel);
    }
}

Tradeoffs

Advantages:

  • Optimized models for read and write
  • Independent scaling (scale reads separately)
  • Multiple read models from single write model
  • Improved performance (denormalized reads)
  • Better security (separate permissions)
  • Flexibility (different databases for different needs)

Disadvantages:

  • Eventual consistency complexity
  • More code and infrastructure
  • Data duplication across read models
  • Synchronization overhead
  • Debugging complexity (distributed events)
  • Steep learning curve

Real Systems Using This

Netflix

  • Implementation: CQRS with Cassandra (write) + Elasticsearch (read)
  • Scale: Billions of events per day
  • Use case: User profiles, viewing history, recommendations
  • Benefit: Independent scaling of reads (99% traffic) vs writes

Amazon

  • Implementation: Event-sourcing + CQRS for order processing
  • Write model: DynamoDB for order commands
  • Read models: Multiple views for different teams (finance, logistics, customer service)
  • Benefit: Different teams optimize their own read models

Uber

  • Implementation: CQRS for trip lifecycle
  • Write: Trip commands (request, accept, complete)
  • Read: Multiple projections (driver view, rider view, analytics)
  • Benefit: Real-time rider/driver apps with stale analytics (acceptable)

When to Use CQRS

Perfect Use Cases

High Read-to-Write Ratio

Scenario: Social media (1000 reads : 1 write)
Why: Scale reads independently with caching
Write: Post creation (rare)
Read: Timeline, profile views (constant)

Complex Queries on Write-Optimized Data

Scenario: E-commerce with complex filtering
Why: Normalized writes, denormalized reads
Write: Order placement (ACID, normalized)
Read: Product search with 20+ filters

Multiple Consumers of Same Data

Scenario: Event data consumed by many teams
Why: Each team builds optimized read model
Write: User activity events
Read: Analytics, ML, reporting, customer service (each optimized)

Different Scalability Requirements

Scenario: Video streaming platform
Why: Massive read scale (streaming), moderate write scale (uploads)
Write: Video upload metadata
Read: Video catalog, recommendations (cached, replicated)

When NOT to Use

Simple CRUD Applications

Problem: CQRS adds complexity without benefit
Alternative: Traditional single-model with indexes
Example: Internal admin panels, simple forms

Strong Consistency Requirements

Problem: CQRS uses eventual consistency
Alternative: Traditional ACID transactions
Example: Financial ledgers, inventory counts

Small Scale Systems

Problem: Overhead not justified for small scale
Alternative: Simpler architecture
Example: Startup MVPs, internal tools

Interview Application

Common Interview Question

Q: β€œDesign Netflix’s recommendation system that handles billions of viewing events while serving personalized recommendations in real-time.”

Strong Answer:

β€œI’d use CQRS to separate high-volume writes (viewing events) from low-latency reads (recommendations):

WRITE SIDE (Event Ingestion):

Events: ViewStarted, ViewCompleted, ViewSkipped, VideoRated

Command Handler:
- Validate event (user exists, video exists)
- Append to Kafka (topic: viewing-events)
- Update write model (user viewing history in DynamoDB)

Kafka Configuration:
- Partitions: 1000 (partition by userId)
- Retention: 90 days
- Throughput: 1M events/sec

READ SIDE (Multiple Projections):

Projection 1: Real-time Recommendations (Low Latency)

Storage: Redis (cached recommendations)
Update: Every 5 minutes via Kafka Streams
Query: <10ms p99
Data: Top 50 recommendations per user (pre-computed)

Kafka Streams Processor:
viewing-events β†’ aggregate by user β†’ ML model β†’ Redis

Projection 2: Historical Analytics (Complex Queries)

Storage: ClickHouse (columnar DB)
Update: Batch processing every hour
Query: Complex aggregations across billions of events
Data: Watch time by genre/region/time

Projection 3: User Profile View (Fast Lookups)

Storage: Cassandra (denormalized)
Update: Real-time via projection builder
Query: Recently watched, continue watching
Data: {userId, recentVideos[], continueWatching[]}

Event Flow:

User watches video
  β†’ ViewStarted event β†’ Kafka
  β†’ Projection 1: Update Redis recommendations (5min lag OK)
  β†’ Projection 2: Append to ClickHouse (1hr lag OK)
  β†’ Projection 3: Update Cassandra (real-time)

Benefits:

  • Write throughput: 1M+ events/sec (Kafka can handle)
  • Read latency: <10ms for recommendations (Redis)
  • Complex analytics: Supported (ClickHouse)
  • Independent scaling: Scale read replicas without affecting writes
  • Eventual consistency: Acceptable for recommendations

Tradeoff: Recommendations lag by 5 minutes, but that’s acceptable for this use case.”
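The "aggregate by user" step in the answer above can be sketched in plain Java. The real pipeline would be a Kafka Streams topology; the `ViewCompleted` record and its fields are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ViewingAggregation {
    // Hypothetical viewing event emitted when playback ends.
    public record ViewCompleted(String userId, String videoId, long watchMillis) {}

    // Total watch time per user: the per-user aggregate that would feed a
    // downstream ranking/ML step before the results land in Redis.
    public static Map<String, Long> watchTimeByUser(List<ViewCompleted> events) {
        return events.stream().collect(Collectors.groupingBy(
            ViewCompleted::userId,
            Collectors.summingLong(ViewCompleted::watchMillis)));
    }

    public static void main(String[] args) {
        var events = List.of(
            new ViewCompleted("u1", "v1", 1200),
            new ViewCompleted("u1", "v2", 800),
            new ViewCompleted("u2", "v1", 500));
        System.out.println(watchTimeByUser(events));
    }
}
```

In Kafka Streams the same shape appears as a `groupByKey` plus windowed aggregation over the viewing-events topic, with results written to a state store or output topic that the Redis loader consumes.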

Why this is good:

  • Complete architecture
  • Multiple specialized read models
  • Specific technologies and configurations
  • Quantifies performance
  • Explains consistency tradeoffs
  • Shows real-world thinking

Red Flags to Avoid

  • Not understanding eventual consistency
  • Using CQRS for simple CRUD
  • Not explaining why CQRS is needed
  • Confusing CQRS with event sourcing (they’re different!)
  • Not considering operational complexity

Quick Self-Check

Before moving on, can you:

  • Explain CQRS in 60 seconds?
  • Draw the separation between command and query sides?
  • Describe how events propagate from write to read?
  • Explain eventual consistency challenges?
  • Identify when to use vs not use CQRS?
  • Show how multiple read models work?
