AWS Messaging: SQS, SNS, and EventBridge
SQS, SNS, and EventBridge are the three primary messaging services on AWS — each with different delivery semantics, ordering guarantees, and integration patterns. SQS provides durable queues with at-least-once delivery. SNS fans out messages to multiple subscribers simultaneously. EventBridge routes events from AWS services and your applications to multiple targets based on pattern matching. This guide covers SQS (standard vs FIFO, DLQs, visibility timeout, long polling), SNS (topic fan-out, message filtering, cross-account delivery), EventBridge (custom buses, event patterns, Scheduler), and how these services compose into reliable event-driven architectures.

Event-driven architecture on AWS almost always involves at least one of these three services. SQS is a queue — producers write, consumers read, and the queue absorbs traffic spikes. SNS is a topic — one message goes to many subscribers simultaneously. EventBridge is a bus — events from AWS services and your applications are routed to targets based on content-based pattern matching.
The three are complementary. A common pattern: an upstream service publishes to SNS, which fans out to multiple SQS queues (one per downstream service), each with its own Lambda consumer and dead-letter queue. EventBridge handles the AWS service events and scheduled jobs that don't originate from your application.
SQS
Standard vs FIFO Queues
| | Standard | FIFO |
|---|---|---|
| Throughput | Nearly unlimited | 300 msg/s (3,000 with batching); up to 70,000 msg/s with High Throughput Mode |
| Ordering | Best-effort | Strictly ordered within a message group |
| Delivery | At least once (duplicates possible) | Exactly once (deduplication) |
| Use case | High-throughput, order-independent | Financial transactions, inventory updates |
Standard queue is the right default. The at-least-once delivery means your consumers must be idempotent — processing the same message twice should produce the same result. This is generally achievable and the throughput benefit is significant.
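What idempotency means in practice can be shown with a minimal sketch (illustrative Python, not an AWS SDK): a set-style update absorbs a duplicate delivery unchanged, while a counter increment double-counts it.

```python
# Sketch: why idempotent handlers tolerate SQS at-least-once delivery.
# All names here are illustrative, not part of any AWS SDK.

def apply_status(orders: dict, message: dict) -> None:
    """Idempotent: setting a value twice leaves the same state."""
    orders[message["orderId"]] = message["status"]

def apply_increment(counters: dict, message: dict) -> None:
    """NOT idempotent: a duplicate delivery double-counts."""
    counters[message["orderId"]] = counters.get(message["orderId"], 0) + 1

orders, counters = {}, {}
msg = {"orderId": "ord-123", "status": "shipped"}

# Simulate SQS delivering the same message twice
for _ in range(2):
    apply_status(orders, msg)
    apply_increment(counters, msg)

print(orders["ord-123"])    # shipped: same result as a single delivery
print(counters["ord-123"])  # 2: the duplicate was double-counted
```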
FIFO queue when order matters within a business entity (all events for a given orderId must be processed in sequence) or when exactly-once processing is required. Use a MessageGroupId to partition ordering — messages with the same group ID are ordered relative to each other; messages across different group IDs are ordered independently.
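The MessageGroupId semantics can be sketched locally (illustrative Python, not an AWS API): one strictly ordered sub-queue per group, with groups consumed independently of each other.

```python
from collections import defaultdict, deque

# Sketch: how MessageGroupId partitions ordering in a FIFO queue.
# Local simulation only; the names here are illustrative.

queue = defaultdict(deque)  # one strictly ordered sub-queue per message group

def send(group_id: str, body: str) -> None:
    queue[group_id].append(body)

def receive(group_id: str) -> str:
    return queue[group_id].popleft()

# Interleaved sends across two orders
send("ord-1", "created"); send("ord-2", "created")
send("ord-1", "paid");    send("ord-2", "cancelled")
send("ord-1", "shipped")

ord1 = [receive("ord-1") for _ in range(3)]  # send order preserved within a group
ord2 = [receive("ord-2") for _ in range(2)]  # other groups consumed independently
print(ord1)  # ['created', 'paid', 'shipped']
print(ord2)  # ['created', 'cancelled']
```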
# Create a standard queue with a dead-letter queue
aws sqs create-queue \
  --queue-name payments-dlq \
  --attributes '{"MessageRetentionPeriod": "1209600"}'  # 14 days

DLQ_ARN=$(aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-dlq \
  --attribute-names QueueArn \
  --query Attributes.QueueArn --output text)

aws sqs create-queue \
  --queue-name payments-queue \
  --attributes "{
    \"VisibilityTimeout\": \"180\",
    \"MessageRetentionPeriod\": \"86400\",
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":3}\"
  }"

# Create a FIFO queue
aws sqs create-queue \
  --queue-name orders.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true",
    "MessageRetentionPeriod": "86400"
  }'
Dead-Letter Queues
The RedrivePolicy on a queue specifies:
- deadLetterTargetArn: the queue to send messages to after they fail
- maxReceiveCount: how many times a message can be received (and fail) before being moved to the DLQ
After maxReceiveCount failed receive attempts, the message moves to the DLQ. Set maxReceiveCount to at least 3 — transient errors (network blips, temporary service unavailability) should not send messages to the DLQ immediately.
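The redrive behavior can be sketched locally (illustrative Python, not an AWS API): a transient failure is absorbed by re-receives, while a poison message exhausts maxReceiveCount and lands in the DLQ.

```python
# Sketch: how maxReceiveCount drives messages to the DLQ.
# Local simulation of SQS redrive behavior; not an AWS API.

MAX_RECEIVE_COUNT = 3
dlq = []

def process_with_redrive(message: dict, handler) -> str:
    """Retry until success or maxReceiveCount failures, then move to the DLQ."""
    for attempt in range(1, MAX_RECEIVE_COUNT + 1):
        try:
            handler(message)
            return f"processed on attempt {attempt}"
        except Exception:
            continue  # message becomes visible again and is re-received
    dlq.append(message)
    return "moved to DLQ"

def always_fails(message):
    raise RuntimeError("downstream unavailable")  # a poison message

flaky_calls = {"n": 0}
def fails_once(message):
    flaky_calls["n"] += 1
    if flaky_calls["n"] == 1:
        raise RuntimeError("transient network blip")

transient = process_with_redrive({"orderId": "ord-1"}, fails_once)
poisoned = process_with_redrive({"orderId": "ord-2"}, always_fails)
print(transient)  # processed on attempt 2
print(poisoned)   # moved to DLQ
```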
# Monitor DLQ depth — alert when messages arrive
aws cloudwatch put-metric-alarm \
  --alarm-name payments-dlq-not-empty \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=payments-dlq \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:012345678901:platform-alerts
An alarm on ApproximateNumberOfMessagesVisible > 0 for the DLQ gives immediate visibility into message processing failures — every message in the DLQ represents a message that failed maxReceiveCount times.
Visibility Timeout
When a consumer receives a message, SQS makes it invisible to other consumers for the visibility timeout period. If the consumer processes successfully and deletes the message within the timeout, it's removed from the queue. If the consumer fails or takes too long, the message becomes visible again and can be re-received by another consumer.
# Receive a message (makes it invisible for VisibilityTimeout seconds)
aws sqs receive-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-queue \
  --max-number-of-messages 10 \
  --wait-time-seconds 20  # Long polling: wait up to 20s for messages (reduces empty responses)

# Delete after successful processing
aws sqs delete-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-queue \
  --receipt-handle <receipt-handle-from-receive>

# Extend visibility timeout if processing will take longer than expected
aws sqs change-message-visibility \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-queue \
  --receipt-handle <receipt-handle> \
  --visibility-timeout 300  # Sets the remaining invisibility to 300s from now (not additive)
Visibility timeout sizing: set it to at least 6× the configured Lambda function timeout (AWS's recommendation for Lambda event source mappings). If your Lambda function timeout is 30s, set VisibilityTimeout to at least 180s. Using the function timeout (not observed processing time) accounts for worst-case execution under slow-path conditions. The maximum visibility timeout is 12 hours (43,200 seconds). A timeout that's too short causes in-flight messages to become visible and be re-processed by another consumer, causing duplicates.
Long polling (WaitTimeSeconds > 0): the API call waits up to 20 seconds for a message to arrive before returning an empty response. Without long polling (short polling), the API returns immediately even if the queue is empty — this causes unnecessary API calls and charges. Always use long polling for self-managed consumers.
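A self-managed consumer loop combines all three calls: receive with long polling, process, then delete. The sketch below uses a stand-in `FakeSQS` class that mimics the shape of boto3's `receive_message`/`delete_message` responses so it is self-contained; in real use you would substitute `boto3.client("sqs")`.

```python
# Sketch of the receive -> process -> delete loop. FakeSQS is a local stand-in
# for boto3's SQS client; all other names here are illustrative.

class FakeSQS:
    def __init__(self, bodies):
        self._messages = [
            {"MessageId": str(i), "ReceiptHandle": f"rh-{i}", "Body": b}
            for i, b in enumerate(bodies)
        ]

    def receive_message(self, QueueUrl, MaxNumberOfMessages=10, WaitTimeSeconds=20):
        # A real client would block up to WaitTimeSeconds here (long polling)
        batch = self._messages[:MaxNumberOfMessages]
        self._messages = self._messages[MaxNumberOfMessages:]
        return {"Messages": batch} if batch else {}

    def delete_message(self, QueueUrl, ReceiptHandle):
        pass  # the real client permanently removes the in-flight message

def drain(sqs, queue_url):
    processed = []
    while True:
        resp = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        if "Messages" not in resp:
            return processed  # empty response: stop (a daemon would loop again)
        for msg in resp["Messages"]:
            processed.append(msg["Body"])  # do the actual work here
            # Delete only AFTER successful processing; a crash before this point
            # lets the message reappear once the visibility timeout expires
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])

processed = drain(FakeSQS(["a", "b", "c"]), "https://example/queue")
print(processed)  # ['a', 'b', 'c']
```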
Sending Messages
# Send a single message
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-queue \
  --message-body '{"orderId": "ord-123", "amount": 99.99, "currency": "USD"}' \
  --message-attributes '{
    "event-type": {"DataType": "String", "StringValue": "order.created"},
    "priority": {"DataType": "Number", "StringValue": "1"}
  }'

# Send a batch (up to 10 messages, reduces API call count)
aws sqs send-message-batch \
  --queue-url https://sqs.us-east-1.amazonaws.com/012345678901/payments-queue \
  --entries '[
    {"Id": "1", "MessageBody": "{\"orderId\": \"ord-123\"}"},
    {"Id": "2", "MessageBody": "{\"orderId\": \"ord-124\"}"}
  ]'
SQS messages have a maximum size of 256 KB. For larger payloads, store the data in S3 and send the S3 object key in the SQS message (the SQS Extended Client pattern).
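The Extended Client idea can be sketched as a claim check (illustrative Python; `blob_store` stands in for an S3 bucket, and the helper names are assumptions, not the Extended Client library's API):

```python
import json
import uuid

# Sketch of the claim-check pattern: payloads over the 256 KB SQS limit go to
# a blob store and only a pointer travels through the queue.

SQS_MAX_BYTES = 256 * 1024
blob_store = {}  # stand-in for an S3 bucket

def prepare_message(payload: dict) -> str:
    body = json.dumps(payload)
    if len(body.encode()) <= SQS_MAX_BYTES:
        return body  # small enough to send inline
    key = f"payloads/{uuid.uuid4()}"
    blob_store[key] = body            # s3.put_object(...) in real use
    return json.dumps({"s3Key": key})  # only the claim check goes through SQS

def resolve_message(body: str) -> dict:
    data = json.loads(body)
    if "s3Key" in data:
        data = json.loads(blob_store[data["s3Key"]])  # s3.get_object(...) in real use
    return data

big = {"orderId": "ord-123", "report": "x" * (300 * 1024)}  # ~300 KB payload
sent = prepare_message(big)
assert len(sent.encode()) <= SQS_MAX_BYTES  # fits in an SQS message
assert resolve_message(sent) == big         # consumer recovers the full payload
```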
SNS
Topics and Subscriptions
SNS delivers a single published message to all active subscriptions simultaneously (fan-out). Supported subscription protocols: Lambda, SQS, HTTP/HTTPS, email, SMS, mobile push, and Amazon Data Firehose.
# Create topic
aws sns create-topic --name order-events

# Subscribe an SQS queue (for durable delivery)
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:012345678901:order-events \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:012345678901:fulfillment-queue

# Subscribe a Lambda function
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:012345678901:order-events \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:012345678901:function:send-confirmation-email

# Publish a message to the topic
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:012345678901:order-events \
  --message '{"orderId": "ord-123", "status": "created"}' \
  --message-attributes '{
    "event-type": {"DataType": "String", "StringValue": "order.created"},
    "customer-tier": {"DataType": "String", "StringValue": "premium"}
  }'
SNS to SQS requires a resource policy on the SQS queue that allows SNS to call sqs:SendMessage. Without this policy, SNS silently fails to deliver to the queue:
aws sqs set-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/012345678901/fulfillment-queue \
--attributes '{
"Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"sns.amazonaws.com\"},\"Action\":\"sqs:SendMessage\",\"Resource\":\"arn:aws:sqs:us-east-1:012345678901:fulfillment-queue\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"arn:aws:sns:us-east-1:012345678901:order-events\"}}}]}"
  }'
Message Filtering
SNS filter policies let each subscription receive only the messages it cares about, based on message attributes. This avoids delivering all events to all subscribers and requiring downstream services to filter.
# Subscription filter — this queue only receives premium customer orders
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:order-events:abc123 \
  --attribute-name FilterPolicy \
  --attribute-value '{
    "event-type": ["order.created", "order.updated"],
    "customer-tier": ["premium", "enterprise"]
  }'

# Filter by numeric comparison — only high-value orders
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:order-events:abc123 \
  --attribute-name FilterPolicy \
  --attribute-value '{
    "order-value": [{"numeric": [">=", 1000]}]
  }'
Filter policies support string exact match, prefix matching, suffix matching, numeric comparison ranges, and anything-but negation. By default, filter policies match against message attributes. Set FilterPolicyScope: MessageBody on the subscription to match against the JSON message body instead.
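A simplified local model of filter-policy evaluation shows the core semantics: policy keys are ANDed, and the conditions within one key are ORed. This Python sketch covers only exact string match and `numeric` comparisons, not the full SNS grammar.

```python
# Sketch: simplified SNS filter-policy evaluation (exact match + numeric only).
# Not the full SNS grammar; names are illustrative.

def matches(policy: dict, attributes: dict) -> bool:
    for key, allowed in policy.items():
        if key not in attributes:
            return False  # a missing attribute fails the whole policy
        value = attributes[key]
        ok = False
        for cond in allowed:  # conditions within one key are ORed
            if isinstance(cond, dict) and "numeric" in cond:
                op, bound = cond["numeric"]
                ok = {"<": value < bound, "<=": value <= bound,
                      "=": value == bound,
                      ">=": value >= bound, ">": value > bound}[op]
            else:
                ok = value == cond  # exact string match
            if ok:
                break
        if not ok:
            return False  # keys are ANDed: every key must match
    return True

policy = {"event-type": ["order.created", "order.updated"],
          "order-value": [{"numeric": [">=", 1000]}]}
print(matches(policy, {"event-type": "order.created", "order-value": 2500}))  # True
print(matches(policy, {"event-type": "order.created", "order-value": 50}))    # False
```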
FIFO Topics
SNS FIFO topics, combined with SQS FIFO queues, provide ordered fan-out with exactly-once delivery:
# Create FIFO topic
aws sns create-topic \
  --name order-events.fifo \
  --attributes '{"FifoTopic": "true", "ContentBasedDeduplication": "true"}'

# Subscribe a FIFO SQS queue to a FIFO topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:012345678901:order-events.fifo \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:012345678901:fulfillment-queue.fifo

# Publish to FIFO topic (MessageGroupId required)
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:012345678901:order-events.fifo \
  --message '{"orderId": "ord-123"}' \
  --message-group-id "ord-123"  # All events for an order are ordered relative to each other
FIFO topics support SQS FIFO queue and Lambda subscribers. HTTP/HTTPS, email, SMS, and mobile push are not supported.
EventBridge
Event Buses
EventBridge has three types of buses:
Default bus: receives events from AWS services (EC2, S3, ECS, RDS, etc.). You can also send custom events to the default bus via PutEvents, though using a dedicated custom bus per domain is recommended for isolation and access control.
Custom buses: receive events from your application via PutEvents. Create a custom bus per domain or environment.
Partner buses: receive events from SaaS partners (Salesforce, Zendesk, Shopify, etc.) via EventBridge partner event sources.
# Create a custom event bus
aws events create-event-bus --name payments-events

# Put custom events onto the bus
aws events put-events \
  --entries '[
    {
      "EventBusName": "payments-events",
      "Source": "payments.api",
      "DetailType": "Order Created",
      "Detail": "{\"orderId\": \"ord-123\", \"amount\": 99.99, \"customerId\": \"cust-456\"}"
    }
  ]'
Event Patterns and Rules
Rules match events based on patterns and route matched events to targets. Pattern matching is content-based — any field in the event can be matched.
# Rule: route Order Created events with amount >= 100 to a Lambda
aws events put-rule \
  --name route-order-created \
  --event-bus-name payments-events \
  --event-pattern '{
    "source": ["payments.api"],
    "detail-type": ["Order Created"],
    "detail": {
      "amount": [{"numeric": [">=", 100]}]
    }
  }' \
  --state ENABLED

# Add Lambda target
aws events put-targets \
  --rule route-order-created \
  --event-bus-name payments-events \
  --targets '[
    {
      "Id": "fulfillment-lambda",
      "Arn": "arn:aws:lambda:us-east-1:012345678901:function:fulfillment-processor",
      "InputTransformer": {
        "InputPathsMap": {
          "order_id": "$.detail.orderId",
          "amount": "$.detail.amount"
        },
        "InputTemplate": "{\"orderId\": \"<order_id>\", \"amount\": <amount>, \"priority\": \"standard\"}"
      }
    }
  ]'
InputTransformer reshapes the event before delivery — extract only the fields the target needs and add static values. This decouples the event schema from what each target expects.
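What the transformer does can be modeled in a few lines of Python. This is a simplified sketch that handles only plain `$.a.b` paths and placeholder substitution, not EventBridge's full JSONPath support.

```python
import json

# Sketch: a simplified model of EventBridge's InputTransformer.
# Extract paths into named variables, then substitute them into a template.

def transform(event: dict, paths: dict, template: str) -> str:
    out = template
    for name, path in paths.items():
        node = event
        for part in path.lstrip("$.").split("."):  # walk simple $.a.b paths
            node = node[part]
        out = out.replace(f"<{name}>", str(node))  # fill the <name> placeholder
    return out

event = {"source": "payments.api", "detail-type": "Order Created",
         "detail": {"orderId": "ord-123", "amount": 99.99}}
shaped = transform(
    event,
    {"order_id": "$.detail.orderId", "amount": "$.detail.amount"},
    '{"orderId": "<order_id>", "amount": <amount>, "priority": "standard"}',
)
print(json.loads(shaped))  # {'orderId': 'ord-123', 'amount': 99.99, 'priority': 'standard'}
```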
Dead-Letter Queues for EventBridge
EventBridge targets can have a DLQ for failed deliveries:
aws events put-targets \
  --rule route-order-created \
  --event-bus-name payments-events \
  --targets '[
    {
      "Id": "fulfillment-lambda",
      "Arn": "arn:aws:lambda:us-east-1:012345678901:function:fulfillment-processor",
      "DeadLetterConfig": {
        "Arn": "arn:aws:sqs:us-east-1:012345678901:eventbridge-dlq"
      },
      "RetryPolicy": {
        "MaximumRetryAttempts": 3,
        "MaximumEventAgeInSeconds": 3600
      }
    }
  ]'
EventBridge retries failed target invocations with exponential backoff, up to MaximumRetryAttempts times within MaximumEventAgeInSeconds. Events that exhaust either limit (retries or age, whichever is reached first) are sent to the DLQ.
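The retry-or-DLQ decision can be modeled locally (illustrative Python; the backoff values shown are an assumption for the sketch, not the service's actual schedule):

```python
# Sketch: the per-failure decision EventBridge makes for a target delivery.
# MAX_* mirror the RetryPolicy above; backoff values are illustrative.

MAX_RETRY_ATTEMPTS = 3
MAX_EVENT_AGE_SECONDS = 3600

def next_action(attempts_so_far: int, event_age_seconds: int) -> str:
    if attempts_so_far >= MAX_RETRY_ATTEMPTS or event_age_seconds >= MAX_EVENT_AGE_SECONDS:
        return "send-to-dlq"  # either limit reached: give up and dead-letter
    backoff = min(2 ** attempts_so_far, 300)  # exponential backoff, capped
    return f"retry-in-{backoff}s"

print(next_action(1, 120))   # retry-in-2s
print(next_action(3, 120))   # send-to-dlq (retries exhausted)
print(next_action(1, 4000))  # send-to-dlq (event older than MaximumEventAgeInSeconds)
```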
EventBridge Scheduler
EventBridge Scheduler creates scheduled invocations — more powerful than the older CloudWatch Events cron rules:
# One-time scheduled invocation
aws scheduler create-schedule \
  --name send-quarterly-report \
  --schedule-expression "at(2026-06-30T23:00:00)" \
  --flexible-time-window '{"Mode": "FLEXIBLE", "MaximumWindowInMinutes": 15}' \
  --target '{
    "Arn": "arn:aws:lambda:us-east-1:012345678901:function:quarterly-report",
    "RoleArn": "arn:aws:iam::012345678901:role/SchedulerRole",
    "Input": "{\"quarter\": \"Q2-2026\"}"
  }'

# Recurring schedule (FIFO queue target; MessageGroupId is required)
aws scheduler create-schedule \
  --name daily-reconciliation \
  --schedule-expression "cron(0 2 * * ? *)" \
  --flexible-time-window '{"Mode": "OFF"}' \
  --schedule-expression-timezone "America/New_York" \
  --target '{
    "Arn": "arn:aws:sqs:us-east-1:012345678901:reconciliation-queue.fifo",
    "RoleArn": "arn:aws:iam::012345678901:role/SchedulerRole",
    "SqsParameters": {"MessageGroupId": "daily"},
    "Input": "{\"action\": \"reconcile\"}"
  }'
FlexibleTimeWindow allows the schedule to fire within a window around the target time — useful for distributing load from many simultaneous schedules (e.g., sending 10,000 notification emails across a 15-minute window rather than simultaneously).
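The effect of a flexible window can be sketched as random jitter after the nominal fire time (illustrative Python; the exact placement the Scheduler uses within the window is an implementation detail, modeled here as uniform):

```python
import random

# Sketch: what a FLEXIBLE time window does. Each schedule fires at a random
# offset inside the window, spreading load instead of firing simultaneously.

def fire_time(scheduled_epoch: int, window_minutes: int, rng=random) -> int:
    jitter = rng.uniform(0, window_minutes * 60)  # uniform offset in the window
    return scheduled_epoch + int(jitter)

scheduled = 1_750_000_000  # the schedule's nominal epoch second
fires = [fire_time(scheduled, 15) for _ in range(10_000)]

# Every invocation lands inside the 15-minute window after the target time...
assert all(scheduled <= t <= scheduled + 15 * 60 for t in fires)
# ...and invocations spread across the window instead of clustering at one instant
assert len(set(fires)) > 100
```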
Cross-Account Event Delivery
EventBridge supports sending events to another account's event bus:
# In the target account — attach a resource policy to the custom bus
aws events put-permission \
  --event-bus-name central-security-bus \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowProdAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::012345678901:root"},
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-east-1:999999999999:event-bus/central-security-bus"
    }]
  }'

# In the source account — add a rule that routes events to the target bus
aws events put-rule \
  --name route-to-security-account \
  --event-pattern '{"source": ["payments.api"]}' \
  --state ENABLED

aws events put-targets \
  --rule route-to-security-account \
  --targets '[
    {
      "Id": "central-security-bus",
      "Arn": "arn:aws:events:us-east-1:999999999999:event-bus/central-security-bus",
      "RoleArn": "arn:aws:iam::012345678901:role/EventBridgeCrossAccountRole"
    }
  ]'
Composing SQS, SNS, and EventBridge
SNS → SQS Fan-Out Pattern
The most reliable asynchronous decoupling pattern on AWS:
Producer → SNS Topic → SQS Queue A (fulfillment, with DLQ)
→ SQS Queue B (notifications, with DLQ)
→ SQS Queue C (analytics, with DLQ)
Each downstream service owns its queue. If the fulfillment service is slow, its queue backs up without affecting the notification service. Each queue has its own DLQ, retry policy, and scaling.
This pattern is preferable to Lambda direct subscriptions from SNS when:
- The downstream service needs durability (Lambda invocations from SNS are asynchronous and can be lost if Lambda throttles)
- The downstream service needs to control processing rate (reserved concurrency on Lambda + queue = natural backpressure)
- The downstream service needs ordering (use FIFO queues/topics)
Transactional Outbox with EventBridge Pipes
EventBridge Pipes connect sources (SQS, DynamoDB Streams, Kinesis) directly to targets (Lambda, SQS, SNS, EventBridge buses, Step Functions) with optional filtering and enrichment:
# Create a Pipe: DynamoDB Streams → EventBridge bus (CDC pattern)
aws pipes create-pipe \
  --name order-cdc-pipe \
  --source arn:aws:dynamodb:us-east-1:012345678901:table/orders/stream/2026-01-01T00:00:00.000 \
  --source-parameters '{
    "DynamoDBStreamParameters": {
      "StartingPosition": "TRIM_HORIZON",
      "BatchSize": 10
    },
    "FilterCriteria": {
      "Filters": [{"Pattern": "{\"eventName\": [\"INSERT\"]}"}]
    }
  }' \
  --target arn:aws:events:us-east-1:012345678901:event-bus/payments-events \
  --target-parameters '{
    "EventBridgeEventBusParameters": {
      "DetailType": "Order Inserted",
      "Source": "dynamodb.orders"
    }
  }' \
  --role-arn arn:aws:iam::012345678901:role/PipeRole
Pipes provide the managed plumbing between AWS data sources and targets — replacing the common pattern of Lambda reading from DynamoDB Streams and putting events onto EventBridge.
Frequently Asked Questions
When should I use SQS vs EventBridge for routing events?
SQS: use when the producer knows who the consumer is (direct coupling is acceptable), you need durable delivery with backpressure, or you need to control message processing rate.
EventBridge: use when the producer should not know who will consume the event (true decoupling), you need to route a single event to multiple targets based on content, or you're routing AWS service events (CloudTrail, EC2, S3 notifications) that only EventBridge receives.
For most application-to-application communication, SNS→SQS gives you the best combination: fan-out flexibility from SNS, durability and backpressure from SQS.
What's the difference between SQS and Kinesis Data Streams?
Both are durable message stores, but with different semantics:
| | SQS | Kinesis Data Streams |
|---|---|---|
| Consumer model | Any consumer receives a message once | Multiple consumer groups, each reads all messages independently |
| Retention | Up to 14 days | 24 hours default; extendable up to 365 days (additional cost per shard-hour beyond 24 hours) |
| Ordering | FIFO queues only | Ordered within a shard |
| Replay | Not supported (deleted after processing) | Supported (rewind to any position in retention window) |
| Use case | Task queues, work distribution | Event streaming, real-time analytics, audit log |
Use Kinesis when multiple independent consumers need to read the same data stream, or when you need to replay historical events.
How do I ensure exactly-once processing with SQS Standard?
You can't guarantee it at the SQS level — Standard queues can deliver duplicates. Design your consumer to be idempotent instead:
- Natural idempotency: if processing the same message twice produces the same result (e.g., setting a value rather than incrementing it), you're already safe.
- Deduplication with a database: before processing, check if the message ID has been seen before in a DynamoDB table with TTL. If seen, skip and delete.
- Conditional writes: use DynamoDB conditional expressions or RDS transactions to prevent double-writes.
For truly critical exactly-once semantics, use SQS FIFO with ContentBasedDeduplication or explicit MessageDeduplicationId.
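The deduplication-table approach can be sketched like this. `dedup_table` is an in-memory stand-in for a DynamoDB table written with `attribute_not_exists(messageId)` as the condition expression; all names here are illustrative.

```python
# Sketch: deduplicate before processing. In real use, claim_message_id would
# be a DynamoDB PutItem with ConditionExpression attribute_not_exists(messageId)
# (plus a TTL attribute), which makes the check-and-record step atomic.

dedup_table = {}
results = []

def claim_message_id(message_id: str) -> bool:
    """Record the ID; False means this message was already processed."""
    if message_id in dedup_table:   # ConditionalCheckFailedException in DynamoDB
        return False
    dedup_table[message_id] = True  # the conditional put succeeded
    return True

def handle(message: dict) -> None:
    if not claim_message_id(message["MessageId"]):
        return  # duplicate delivery: skip processing, just delete from the queue
    results.append(message["Body"])  # safe to process

# Standard queues may deliver the same message twice
handle({"MessageId": "m-1", "Body": "charge card"})
handle({"MessageId": "m-1", "Body": "charge card"})  # duplicate is ignored
print(results)  # ['charge card']
```

Note the in-memory dict is only single-process safe; the atomicity across competing consumers is exactly what the DynamoDB conditional write provides.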
What's the SNS message size limit?
SNS has a 256 KB message size limit (same as SQS). For large payloads, use the claim-check pattern: store the payload in S3, send the S3 object key in the SNS message, and have consumers fetch the payload from S3. SNS can deliver a notification reference rather than the full data.
For Lambda consuming SQS messages with ReportBatchItemFailures and partial batch success, see AWS Lambda: Functions, Event Sources, Layers, and Serverless Patterns. For ECS services that consume SQS queues and scale with KEDA, see KEDA: Event-Driven Autoscaling for Kubernetes.
Designing a fan-out event architecture with SNS and SQS, migrating from polling-based integrations to EventBridge event-driven patterns, or implementing the transactional outbox pattern with DynamoDB Streams and EventBridge Pipes? Talk to us at Coding Protocols — we help platform teams design messaging architectures that decouple services reliably without losing events.


