## Quick reference
| Instance Size | CPU | Memory | Use Case | Throughput |
|---|---|---|---|---|
| Minimal | 1 vCPU | 1-2 GB | Development, testing | Up to 10k ops/sec |
| Small | 2 vCPU | 4 GB | Small production | Up to 15k ops/sec |
| Medium | 4 vCPU | 8 GB | Standard production | Up to 30k ops/sec |
| Large | 8+ vCPU | 16+ GB | High-volume production | 50k+ ops/sec |
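The table above can be encoded as a simple lookup. A minimal sketch (the tier names and thresholds come from the table; the function name is hypothetical):

```python
def recommend_instance(target_ops_per_sec: int) -> str:
    """Map a target sustained throughput to the smallest tier from the
    quick-reference table. Thresholds mirror the table's "up to" figures."""
    if target_ops_per_sec <= 10_000:
        return "Minimal"   # 1 vCPU, 1-2 GB
    if target_ops_per_sec <= 15_000:
        return "Small"     # 2 vCPU, 4 GB
    if target_ops_per_sec <= 30_000:
        return "Medium"    # 4 vCPU, 8 GB
    return "Large"         # 8+ vCPU, 16+ GB
```

If sustained traffic sits near a tier's ceiling, size up one tier to leave headroom.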
## Minimal instance (1 vCPU, 1-2 GB)

### Configuration

```bash
MAX_MEMORY_MB=1500
MEMORY_BUFFER_PERCENT=30
PG_POOL_SIZE=5
REDIS_POOL_SIZE=3
DEFAULT_WORKERS_PER_SINK=10
```

Not recommended for: >10k ops/sec, large backfills (>10M rows), or multiple high-throughput sinks.
## Small instance (2 vCPU, 4 GB)

### Configuration

```bash
MAX_MEMORY_MB=3500
MEMORY_BUFFER_PERCENT=20
PG_POOL_SIZE=10
REDIS_POOL_SIZE=5
DEFAULT_WORKERS_PER_SINK=50
```
## When to scale

Monitor these saturation metrics (scale when >90%):

- `sequin_ingestion_saturation_percent`
- `sequin_processing_saturation_percent`
- `sequin_delivery_saturation_percent`
Resource exhaustion indicators:

- Processes using >100 MB of memory
- Message queues with >1000 pending messages
- Growing ETS tables
- `erlang_vm_memory_bytes_total` approaching `MAX_MEMORY_MB`
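A scaling check against the saturation metrics above can be sketched as a small parser over Prometheus text-format output. This is a hypothetical helper, not part of the product: the metric names come from this page and the 90% threshold matches the guidance above.

```python
SATURATION_METRICS = {
    "sequin_ingestion_saturation_percent",
    "sequin_processing_saturation_percent",
    "sequin_delivery_saturation_percent",
}

def saturated_metrics(metrics_text: str, threshold: float = 90.0) -> dict:
    """Return the saturation metrics whose current value exceeds `threshold`.

    Expects Prometheus text exposition format, e.g.:
        sequin_delivery_saturation_percent 93.5
    """
    over = {}
    for line in metrics_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        parts = line.split()
        if len(parts) < 2:
            continue
        name, value = parts[0], parts[-1]
        # Drop any label set, e.g. metric{sink="s1"} -> metric
        bare = name.split("{", 1)[0]
        if bare in SATURATION_METRICS:
            try:
                v = float(value)
            except ValueError:
                continue
            if v > threshold:
                over[name] = v
    return over
```

Wire this into whatever alerting you already run; any metric it returns is a signal to move up a tier.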
## Optimization tips

### Batch configuration

```yaml
# High-latency destinations (webhooks)
batch_size: 50
batch_timeout: 100ms

# Low-latency destinations (Kafka, Redis)
batch_size: 100
batch_timeout: 10ms
```
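These settings trade latency for throughput: `batch_timeout` caps how long a partially filled batch waits before flushing, while under sustained load throughput is bounded by how fast the destination acknowledges full batches. A back-of-envelope sketch, assuming one in-flight batch per worker; the function names and latency figures are illustrative, not measured:

```python
def throughput_ceiling(batch_size: int, ack_latency_s: float, workers: int) -> float:
    """Rough per-sink ceiling in msgs/sec. Under load a batch fills
    immediately, so each worker ships batch_size messages per destination
    round trip."""
    return workers * batch_size / ack_latency_s

def worst_case_added_latency_s(batch_timeout_s: float) -> float:
    """At low volume, a message waits at most batch_timeout for its
    batch to flush."""
    return batch_timeout_s

# Illustrative: 50 workers to a webhook that acks in ~200 ms
# 50 workers * 50 msgs / 0.2 s = 12,500 msgs/sec at best; the destination's
# ack latency, not batch_timeout, is the bottleneck here.
webhook_ceiling = throughput_ceiling(batch_size=50, ack_latency_s=0.2, workers=50)
```

This is why high-latency destinations get a longer `batch_timeout`: at low volume it amortizes slow round trips without affecting the ceiling under load.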
- Filter specific tables instead of entire schemas
- Use filters to reduce message volume
- Use transforms to reduce message sizes
- Set `BACKFILL_MAX_PENDING_MESSAGES` lower (e.g., 100k) for small instances
## Cloud instances

| Provider | Minimal | Small | Production |
|---|---|---|---|
| AWS | t3.micro | t3.small | t3.medium+ |
| GCP | e2-micro | e2-small | e2-medium+ |
| Azure | B1s | B2s | D2s_v3+ |
## Getting help