Periodic signals sent between nodes to indicate liveness, enabling failure detection in distributed systems
TL;DR
A heartbeat is a periodic “I’m alive” message sent by nodes in a distributed system. If heartbeats stop arriving, the sender is presumed failed. Heartbeats are the foundation of failure detection, enabling leader election, cluster membership, and automatic failover. The core trade-off is detection speed vs false positive rate—shorter timeouts detect failures faster but trigger more false alarms.
Visual Overview
Core Explanation
What is a Heartbeat?
Real-World Analogy: Think of a scuba diving buddy system. You and your partner agree to check in every 30 seconds with an “OK” hand signal. If your partner doesn’t respond for 2 minutes, you assume something’s wrong and start emergency procedures. That hand signal is a heartbeat—a simple, periodic “I’m fine” message.
The same principle works in distributed systems: nodes periodically announce they’re alive. Silence means trouble.
How Heartbeats Work
Push vs Pull: Heartbeat vs Health Check
The Configuration Trade-off
Real Systems Using Heartbeats
| System | Interval | Timeout | Notes |
|---|---|---|---|
| Kubernetes | 10s (default) | 40s (default) | Kubelet to API server |
| Apache ZooKeeper | tickTime × 2 | Session timeout (configurable) | Heartbeat in session |
| etcd | Configurable | Election timeout | Raft heartbeats |
| Consul | 1s (default) | 10s (default) | Gossip-based |
| Amazon ELB | Configurable | Unhealthy threshold × interval | Health checks |
Note: Defaults vary by version. Always verify in current documentation.
Heartbeat Patterns in Practice
When to Use Heartbeats
✓ Perfect Use Cases
✕ When NOT to Use (or Use Carefully)
Interview Application
Common Interview Question
Q: “Explain how heartbeats work in distributed systems and the trade-offs involved in configuring them.”
Strong Answer:
“Heartbeats are periodic ‘I’m alive’ messages used for failure detection. Here’s how they work:
Mechanism:
- Sender: Every N seconds, send a heartbeat to the monitor
- Receiver: Track ‘last seen’ timestamp per node
- Detection: If no heartbeat for timeout period, mark node as suspected failed
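The mechanism above can be sketched in a few lines (illustrative names, single in-process monitor, no threading — the full implementation appears later):

```python
import time

# Minimal failure-detector sketch: record a last-seen timestamp per
# node, and report any node silent for longer than TIMEOUT seconds.
TIMEOUT = 3.0
last_seen = {}  # node_id -> last heartbeat timestamp

def on_heartbeat(node_id):
    last_seen[node_id] = time.time()

def suspected_failed():
    now = time.time()
    return [n for n, t in last_seen.items() if now - t > TIMEOUT]

on_heartbeat("worker-1")
on_heartbeat("worker-2")
last_seen["worker-2"] -= 10  # simulate worker-2 going silent 10s ago
print(suspected_failed())    # → ['worker-2']
```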
The Core Trade-off:
| Config (interval/timeout) | Detection Speed | False Positives | Example |
|---|---|---|---|
| 100ms/300ms | Very fast | High | Leader election |
| 1s/3s | Fast | Moderate | Database HA |
| 5s/15s | Slow | Low | Service mesh |

Why false positives matter:
- Network hiccup during timeout window = healthy node marked dead
- Consequence: unnecessary failover, split brain risk
Why detection speed matters:
- Slow detection = traffic continues to dead node
- Consequence: errors, latency, data loss
Rule of thumb: timeout = 3× interval. For a 1-second interval, use 3-second timeout—tolerates 2 missed heartbeats before suspecting failure.
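The rule of thumb is simple arithmetic; a hypothetical helper (not from any library) makes the relationship explicit:

```python
def tolerated_misses(interval: float, timeout: float) -> int:
    """Whole heartbeats that can be missed before the timeout fires.

    round() avoids float artifacts such as 0.3 / 0.1 == 2.999...
    """
    return round(timeout / interval) - 1

print(tolerated_misses(1.0, 3.0))  # → 2
print(tolerated_misses(0.1, 0.3))  # → 2 (same 3x ratio, ~10x faster detection)
```

Any config with timeout = 3× interval tolerates the same two missed beats; only the absolute detection latency changes.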
Real-world example: Kubernetes kubelets report node status every 10s; after a 40s grace period without updates, the node is marked NotReady, and pod eviction follows minutes later. This is tuned for stability over speed—Kubernetes prioritizes avoiding false positives.”
Follow-up: How do you handle the case where a heartbeat succeeds but the service is actually broken?
“Heartbeats only prove the process is running, not that it’s healthy. A service can send heartbeats while:
- Its database connection is dead
- It’s in an infinite loop
- It’s out of memory but not crashed
Solutions:
- Liveness + Readiness separation (Kubernetes model):
  - Liveness probe: Is the process alive? (heartbeat)
  - Readiness probe: Can it serve traffic? (deeper health check)
- Application-level heartbeat:
  - Include health status in heartbeat message:
    { alive: true, db_connected: true, queue_healthy: true }
- Hierarchical health checks:
  - Heartbeat for fast liveness
  - Periodic deep health check (every 30s) for readiness
Best practice: Use heartbeats for ‘is the process running?’ and separate health checks for ‘can it serve requests?’”
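The application-level heartbeat pattern can be sketched as follows (the dependency-check functions are illustrative stand-ins, not a real API):

```python
# Application-level heartbeat payload: liveness plus dependency
# health in one message.
def db_connected() -> bool:
    return True  # placeholder for a real connection ping

def queue_healthy() -> bool:
    return True  # placeholder for a real queue-depth check

def build_heartbeat() -> dict:
    return {
        "alive": True,                   # process is running
        "db_connected": db_connected(),  # dependency health
        "queue_healthy": queue_healthy(),
    }

payload = build_heartbeat()
# The monitor can distinguish "alive" from "ready to serve traffic":
ready = all(payload.values())
print(payload, "ready:", ready)
```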
Follow-up: What’s the difference between a heartbeat and a lease?
“They’re related but serve different purposes:
Heartbeat: Continuous signal—‘I’m still here.’ Monitor tracks last-seen timestamp. No explicit acknowledgment required.
Lease: Time-limited grant—‘You have permission until T.’ Must be renewed before expiry. Server explicitly grants/extends.
Key difference:
- Heartbeat: Detection is passive (monitor notices absence)
- Lease: Detection is active (lease holder knows when it expires)
Example:
- ZooKeeper sessions: Heartbeat keeps session alive
- Distributed locks: Lease on lock auto-expires if not renewed
Leases add safety: if a node partitions, it knows its lease expires and should stop acting as leader. With pure heartbeats, a partitioned node might keep acting as leader, thinking it’s fine.”
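The expiry-aware behavior of a lease can be shown with a minimal in-process sketch (illustrative only, not a distributed implementation):

```python
import time

# Unlike a heartbeat sender, a lease holder knows its own deadline
# and can step down proactively when the lease expires.
class Lease:
    def __init__(self, duration: float):
        self.duration = duration
        self.expires_at = time.time() + duration

    def renew(self) -> None:
        self.expires_at = time.time() + self.duration

    def valid(self) -> bool:
        return time.time() < self.expires_at

lease = Lease(duration=0.2)
assert lease.valid()        # holder may safely act as leader
time.sleep(0.3)
assert not lease.valid()    # holder stops acting, even if partitioned
lease.renew()
assert lease.valid()        # renewed before use: safe again
```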
Code Example
Heartbeat System (Python)
```python
import time
import threading
from dataclasses import dataclass, field
from typing import Dict, Callable, Optional
from enum import Enum


class NodeStatus(Enum):
    ALIVE = "alive"
    SUSPECTED = "suspected"
    DEAD = "dead"


@dataclass
class NodeState:
    """Tracked state for a node."""
    node_id: str
    last_heartbeat: float
    status: NodeStatus = NodeStatus.ALIVE
    metadata: dict = field(default_factory=dict)


class HeartbeatMonitor:
    """
    Centralized heartbeat monitor.

    Tracks node liveness and triggers callbacks on status changes.
    """

    def __init__(
        self,
        timeout: float = 3.0,
        check_interval: float = 1.0,
        on_suspected: Optional[Callable[[str], None]] = None,
        on_dead: Optional[Callable[[str], None]] = None,
        on_alive: Optional[Callable[[str], None]] = None,
    ):
        """
        Args:
            timeout: Seconds without heartbeat before suspected
            check_interval: How often to check for expired nodes
            on_suspected: Callback when node first suspected
            on_dead: Callback when node confirmed dead
            on_alive: Callback when node comes back alive
        """
        self.timeout = timeout
        self.check_interval = check_interval
        self.on_suspected = on_suspected
        self.on_dead = on_dead
        self.on_alive = on_alive
        self.nodes: Dict[str, NodeState] = {}
        self._lock = threading.Lock()
        self._running = False
        self._checker_thread: Optional[threading.Thread] = None

    def receive_heartbeat(
        self,
        node_id: str,
        metadata: Optional[dict] = None
    ) -> None:
        """Process incoming heartbeat from a node."""
        now = time.time()
        with self._lock:
            if node_id in self.nodes:
                node = self.nodes[node_id]
                was_suspected = node.status in (
                    NodeStatus.SUSPECTED,
                    NodeStatus.DEAD
                )
                node.last_heartbeat = now
                node.status = NodeStatus.ALIVE
                if metadata:
                    node.metadata = metadata
                if was_suspected and self.on_alive:
                    self.on_alive(node_id)
            else:
                self.nodes[node_id] = NodeState(
                    node_id=node_id,
                    last_heartbeat=now,
                    metadata=metadata or {}
                )
                if self.on_alive:
                    self.on_alive(node_id)

    def _check_expired(self) -> None:
        """Check for nodes that have missed heartbeats."""
        while self._running:
            now = time.time()
            with self._lock:
                for node in self.nodes.values():
                    elapsed = now - node.last_heartbeat
                    if elapsed > self.timeout * 2:
                        if node.status != NodeStatus.DEAD:
                            node.status = NodeStatus.DEAD
                            if self.on_dead:
                                self.on_dead(node.node_id)
                    elif elapsed > self.timeout:
                        if node.status == NodeStatus.ALIVE:
                            node.status = NodeStatus.SUSPECTED
                            if self.on_suspected:
                                self.on_suspected(node.node_id)
            time.sleep(self.check_interval)

    def start(self) -> None:
        """Start the background checker thread."""
        self._running = True
        self._checker_thread = threading.Thread(
            target=self._check_expired,
            daemon=True
        )
        self._checker_thread.start()

    def stop(self) -> None:
        """Stop the background checker thread."""
        self._running = False
        if self._checker_thread:
            self._checker_thread.join()

    def get_status(self, node_id: str) -> Optional[NodeStatus]:
        """Get current status of a node."""
        with self._lock:
            if node_id in self.nodes:
                return self.nodes[node_id].status
            return None

    def get_alive_nodes(self) -> list[str]:
        """Get list of currently alive nodes."""
        with self._lock:
            return [
                n.node_id for n in self.nodes.values()
                if n.status == NodeStatus.ALIVE
            ]


class HeartbeatSender:
    """Sends periodic heartbeats to a monitor."""

    def __init__(
        self,
        node_id: str,
        monitor: HeartbeatMonitor,
        interval: float = 1.0,
        metadata_fn: Optional[Callable[[], dict]] = None
    ):
        self.node_id = node_id
        self.monitor = monitor
        self.interval = interval
        self.metadata_fn = metadata_fn
        self._running = False
        self._sender_thread: Optional[threading.Thread] = None

    def _send_loop(self) -> None:
        """Continuously send heartbeats."""
        while self._running:
            metadata = self.metadata_fn() if self.metadata_fn else None
            self.monitor.receive_heartbeat(self.node_id, metadata)
            time.sleep(self.interval)

    def start(self) -> None:
        """Start sending heartbeats."""
        self._running = True
        self._sender_thread = threading.Thread(
            target=self._send_loop,
            daemon=True
        )
        self._sender_thread.start()

    def stop(self) -> None:
        """Stop sending heartbeats."""
        self._running = False
        if self._sender_thread:
            self._sender_thread.join()


# Usage example
if __name__ == "__main__":
    print("=== Heartbeat Demo ===\n")

    # Create monitor with callbacks
    def on_suspected(node_id: str):
        print(f"⚠️ Node {node_id} SUSPECTED (missing heartbeats)")

    def on_dead(node_id: str):
        print(f"💀 Node {node_id} DEAD (confirmed failure)")

    def on_alive(node_id: str):
        print(f"✅ Node {node_id} ALIVE")

    monitor = HeartbeatMonitor(
        timeout=2.0,
        check_interval=0.5,
        on_suspected=on_suspected,
        on_dead=on_dead,
        on_alive=on_alive
    )
    monitor.start()

    # Create senders (workers)
    worker1 = HeartbeatSender("worker-1", monitor, interval=0.5)
    worker2 = HeartbeatSender("worker-2", monitor, interval=0.5)
    worker1.start()
    worker2.start()
    print("Workers started, sending heartbeats...\n")

    time.sleep(3)
    print("\nSimulating worker-1 failure (stopping heartbeats)...")
    worker1.stop()

    # Wait for detection
    time.sleep(5)
    print("\nFinal status:")
    print(f"  worker-1: {monitor.get_status('worker-1')}")
    print(f"  worker-2: {monitor.get_status('worker-2')}")
    print(f"  Alive nodes: {monitor.get_alive_nodes()}")

    # Cleanup
    worker2.stop()
    monitor.stop()
```
Heartbeat with Metadata (Production Pattern)
```python
# Requires the third-party psutil package: pip install psutil
import psutil


def get_node_health() -> dict:
    """Collect node health metrics to include in heartbeat."""
    return {
        "cpu_percent": psutil.cpu_percent(),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage('/').percent,
        "load_avg": psutil.getloadavg()[0],
        "connections": len(psutil.net_connections()),
    }


# Usage
sender = HeartbeatSender(
    node_id="worker-1",
    monitor=monitor,
    interval=1.0,
    metadata_fn=get_node_health  # Include health in each heartbeat
)
```
Related Content
See It In Action:
- Heartbeat & Failure Detection Explainer - Visual walkthrough of timeout detection
Related Concepts:
- Failure Detection - The broader problem heartbeats solve
- Health Checks - Pull-based alternative
- Consensus - Uses heartbeats for leader detection
Quick Self-Check
- Can explain heartbeats in 60 seconds?
- Understand the trade-off between detection speed and false positives?
- Know the difference between heartbeat (push) and health check (pull)?
- Can implement a basic heartbeat monitor with timeouts?
- Understand why timeout = 3× interval is a common rule of thumb?
- Know when to use heartbeats vs leases?
Interview Notes
- Frequency: appears in roughly 65% of distributed systems interviews
- Scope: powers failure detection in virtually all distributed systems
- Cost: a centralized monitor handles O(N) heartbeat messages per interval