DynamoDB Patterns for GraphQL: Building a Scalable Pricing Service
How we built a high-performance pricing service handling 50K+ queries/day with DynamoDB, EventBridge Scheduler, and GraphQL
Note: Company-specific details have been anonymized. "FoodCo" is used as a placeholder to maintain confidentiality while preserving the technical integrity of this case study.
Introduction
Here's a problem every large retail enterprise faces: you need to manage prices for 8,000+ stores across thousands of products, with scheduled price changes, audit trails, and low-millisecond query performance.
The naive solution? Throw it in PostgreSQL/MySQL with a cron job. The result: slow queries, database locks during price updates, no atomic scheduling, and a horizontal-scaling nightmare.
This is what I built with the Pricing Service: a serverless GraphQL API backed by DynamoDB that handles 50K+ pricing queries per day with P95 latency under 15ms, schedules automated price changes with EventBridge Scheduler, and scales effortlessly across regions.
We'll explore DynamoDB single-table design patterns, hierarchical sort keys, cursor-based pagination with Global Secondary Indexes, and event-driven price change scheduling that powers real-time pricing at enterprise scale.
The Problem: Traditional Database Patterns Don't Scale for Pricing
Real-World Requirements
A production pricing system must handle:
Query Patterns:
- Get all base prices for a store
- Get all prices for a specific product across stores
- List upcoming price changes for a store
- Support cursor-based pagination (forward and backward)
Write Patterns:
- Create immediate price changes
- Schedule future price changes (midnight activations)
- Bulk price updates (100+ products simultaneously)
- Atomic transitions (old price deleted, new price created)
Scale Requirements:
- 8,000+ stores × 500+ products = 4M+ price records
- 50K+ queries per day
- P95 latency < 20ms
- Multi-region replication
- 99.99% availability
The SQL Anti-Pattern
Attempt 1: Traditional relational schema
CREATE TABLE prices (
  store_id VARCHAR(10),
  product_code VARCHAR(50),
  channel VARCHAR(20),
  price DECIMAL(10,2),
  effective_date TIMESTAMP,
  PRIMARY KEY (store_id, product_code, channel, effective_date)
);

CREATE INDEX idx_product ON prices(product_code);
CREATE INDEX idx_store_date ON prices(store_id, effective_date);
Problems:
- Queries by product across all stores degrade at scale, even with the secondary index
- Pagination with OFFSET/LIMIT gets slower as offset increases
- Price change scheduling requires external cron job
- Atomic price transitions need application-level transactions
- Horizontal scaling requires complex sharding
Solution: DynamoDB with Hierarchical Keys
Core Architecture
Single-table design with composite partition/sort keys:
Table: PriceTable
Partition Key (pk): STORE#{store}
Sort Key (sk): {channel}#{type}#{product}#{effectiveDate}
Global Secondary Index (GSI):
pk (gsi1pk): TYPE#{type}#{product}
sk (gsi1sk): {channel}#{store}
Why this works:
- Store queries use partition key (instant, no scanning)
- Product queries use GSI (efficient cross-store lookups)
- Hierarchical sort key enables prefix queries
- Cursor pagination with ExclusiveStartKey (no offset math)
- Built-in replication for multi-region
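The snippets below lean on a handful of key-builder helpers. Their exact implementations aren't shown in this post, but they plausibly reduce to something like this sketch:

// Sketch of the key-builder helpers assumed by the snippets below
// (names match the later code; implementations are illustrative).
export const getStoreKey = (store: string): string => `STORE#${store}`;

// Channels are stored verbatim (e.g. "ALL"); a helper keeps the format in one place
export const getChannelKey = (channel: string): string => channel;

export const getPriceRecordTypeKey = (type: string, product: string): string =>
  `TYPE#${type}#${product}`;

// ISO-8601 timestamps sort lexicographically, so embedding them at the end of
// the sort key yields chronological ordering for free
export const getEffectiveDateKey = (effectiveDate: string): string => effectiveDate;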
Implementation: Real Production Code
1. Base Price Query (By Store)
Fetching all base prices for a store with cursor pagination:
import { AppSyncResolverEvent } from "aws-lambda";
import { QueryCommand, QueryCommandInput } from "@aws-sdk/lib-dynamodb";
// QueryHelper, mapEdges, hasNextPage, hasPreviousPage, getStoreKey, getChannelKey,
// the DocumentClient instance (client), priceTable, and the generated GraphQL
// types are imported from local modules (omitted here).

export async function handler(
  event: AppSyncResolverEvent<QueryBasePricesArgs>
): Promise<BasePriceRecordConnection> {
  const { store, first, after, last, before } = event.arguments;

  const queryHelper = new QueryHelper({
    defaultPageLimit: 64,
    maxPageLimit: 2048,
    before,
    after,
    first,
    last,
  });

  const commandInput: QueryCommandInput = {
    TableName: priceTable,
    KeyConditionExpression: "pk = :pk and begins_with(sk, :skQueryValue)",
    ExpressionAttributeValues: {
      ":pk": getStoreKey(store), // "STORE#12345"
      ":skQueryValue": `${getChannelKey("ALL")}#Base`, // "ALL#Base"
    },
    ScanIndexForward: queryHelper.queryForward,
    Limit: queryHelper.pageLimit + 1, // Fetch 1 extra to detect hasNextPage
  };

  if (queryHelper.startKey) {
    commandInput.ExclusiveStartKey = queryHelper.startKey;
  }

  const command = new QueryCommand(commandInput);
  const response = await client.send(command);

  const edges = mapEdges(response.Items);
  const edgesLength = edges.length;

  // Remove the extra item if we fetched pageLimit + 1
  if (edgesLength > queryHelper.pageLimit) {
    edges.pop();
  }

  // Reverse for backward pagination
  if (!queryHelper.queryForward) {
    edges.reverse();
  }

  const nodes = edges.map((edge) => edge.node);

  const pageInfo: PageInfo = {
    hasNextPage: hasNextPage(edgesLength, queryHelper.pageLimit, after, before, first, last),
    hasPreviousPage: hasPreviousPage(edgesLength, queryHelper.pageLimit, after, before, last),
    startCursor: edges.at(0)?.cursor ?? null,
    endCursor: edges.at(-1)?.cursor ?? null,
  };

  return { edges, nodes, pageInfo };
}
Key patterns:
- begins_with on the sort key enables prefix matching
- ScanIndexForward controls the pagination direction
- Fetch pageLimit + 1 to determine whether more results exist
- ExclusiveStartKey drives cursor-based pagination (no OFFSET math)
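The QueryHelper above encapsulates the Relay-style pagination arguments (first/after, last/before). A minimal sketch of how such a helper can work, assuming cursors are base64-encoded LastEvaluatedKey objects (the production class is more defensive):

// Minimal sketch of the QueryHelper used above (illustrative, not the production class).
export class QueryHelper {
  readonly pageLimit: number;
  readonly queryForward: boolean;
  readonly startKey?: Record<string, any>;

  constructor(args: {
    defaultPageLimit: number;
    maxPageLimit: number;
    first?: number | null;
    last?: number | null;
    after?: string | null;
    before?: string | null;
  }) {
    // `last`/`before` paginate backward; otherwise paginate forward
    this.queryForward = args.last == null && args.before == null;
    const requested = this.queryForward ? args.first : args.last;
    this.pageLimit = Math.min(requested ?? args.defaultPageLimit, args.maxPageLimit);
    // Cursors are base64-encoded LastEvaluatedKey objects
    const cursor = this.queryForward ? args.after : args.before;
    if (cursor) {
      this.startKey = JSON.parse(Buffer.from(cursor, "base64").toString("utf8"));
    }
  }
}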
2. Base Price Query (By Product) Using GSI
Fetching all prices for a product across all stores:
const commandInput: QueryCommandInput = {
  TableName: PRICE_TABLE,
  IndexName: "gsi1", // Global Secondary Index
  KeyConditionExpression:
    "gsi1pk = :gsi1pk and begins_with(gsi1sk, :gsi1skQueryValue)",
  ExpressionAttributeValues: {
    ":gsi1pk": getPriceRecordTypeKey(PriceRecordType.Base, product), // "TYPE#Base#PROD123"
    ":gsi1skQueryValue": getChannelKey("ALL"), // "ALL"
  },
  ScanIndexForward: queryHelper.queryForward,
  Limit: queryHelper.pageLimit + 1,
};
Why GSI?
- Inverts access pattern (query by product instead of store)
- No table scan (direct partition key access)
- Independent throughput from main table
- Enables multi-access-pattern single-table design
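For completeness, here's how the table and its GSI could be declared in CDK. This is a sketch under the assumption that the service is deployed with aws-cdk-lib, not the service's actual infrastructure code:

import { aws_dynamodb as dynamodb } from "aws-cdk-lib";
import { Construct } from "constructs";

// Sketch: the price table with the gsi1 inverted-access-pattern index
export function createPriceTable(scope: Construct): dynamodb.Table {
  const table = new dynamodb.Table(scope, "PriceTable", {
    partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
    sortKey: { name: "sk", type: dynamodb.AttributeType.STRING },
    billingMode: dynamodb.BillingMode.PAY_PER_REQUEST, // on-demand, as in production
    stream: dynamodb.StreamViewType.NEW_AND_OLD_IMAGES, // feeds the schedule-creation Lambda (section 5)
  });

  table.addGlobalSecondaryIndex({
    indexName: "gsi1",
    partitionKey: { name: "gsi1pk", type: dynamodb.AttributeType.STRING },
    sortKey: { name: "gsi1sk", type: dynamodb.AttributeType.STRING },
  });

  return table;
}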
3. Price Change Scheduling with EventBridge
Creating a scheduled price change that activates at midnight:
import {
  ActionAfterCompletion,
  ConflictException,
  CreateScheduleCommand,
  FlexibleTimeWindowMode,
} from "@aws-sdk/client-scheduler";
import { addMinutes, differenceInMinutes, parseISO } from "date-fns";
// getOneTimeScheduleExpression, SAME_DAY_BUFFER_MINUTES, SCHEDULE_GROUP,
// PROCESSOR_LAMBDA, SCHEDULER_INVOKE_LAMBDA_ROLE, logger, and the
// SchedulerClient instance (client) come from local modules (omitted here).

async function createPriceChangeSchedule(
  scheduleName: string,
  priceChange: PriceChangeDto
): Promise<void> {
  let effectiveDate = parseISO(priceChange.effectiveDate);
  const now = new Date();

  // If the effective date is in the past or imminent, schedule it one minute out
  if (differenceInMinutes(effectiveDate, now) <= SAME_DAY_BUFFER_MINUTES) {
    effectiveDate = addMinutes(now, 1);
  }

  const scheduleExpression = getOneTimeScheduleExpression(effectiveDate);

  const scheduleEvent: PriceChangeScheduleEvent = {
    store: priceChange.store,
    channel: priceChange.channel,
    effectiveDate: priceChange.effectiveDate,
  };

  const command = new CreateScheduleCommand({
    Name: scheduleName, // "ALL_12345_2024-03-15T00-00-00"
    GroupName: SCHEDULE_GROUP,
    ScheduleExpression: scheduleExpression, // "at(2024-03-15T00:00:00)"
    FlexibleTimeWindow: {
      Mode: FlexibleTimeWindowMode.OFF,
    },
    Target: {
      Arn: PROCESSOR_LAMBDA,
      RoleArn: SCHEDULER_INVOKE_LAMBDA_ROLE,
      Input: JSON.stringify(scheduleEvent),
    },
    ActionAfterCompletion: ActionAfterCompletion.DELETE, // Auto-delete after execution
  });

  try {
    await client.send(command);
    logger.info("Created price change schedule", { scheduleName, effectiveDate });
  } catch (error) {
    if (error instanceof ConflictException) {
      logger.info("Schedule already exists", { scheduleName });
    } else {
      throw error;
    }
  }
}
Why EventBridge Scheduler (not EventBridge Rules)?
- One-time at(...) schedules (Rules only support recurring cron/rate expressions)
- Automatic cleanup after execution (ActionAfterCompletion.DELETE)
- Minute-level precision for one-time events
- Default quota of 1M+ schedules (vs. 300 rules per event bus)
- Direct Lambda invocation with configurable retries (no SQS intermediary)
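getOneTimeScheduleExpression above is a local helper. Since Scheduler's one-time expressions take the form at(yyyy-mm-ddThh:mm:ss) with no timezone suffix (the timezone is set separately via ScheduleExpressionTimezone), it likely boils down to something like this sketch:

import { format } from "date-fns";

// Sketch: format a Date as an EventBridge Scheduler one-time expression
export function getOneTimeScheduleExpression(date: Date): string {
  return `at(${format(date, "yyyy-MM-dd'T'HH:mm:ss")})`;
}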
4. Processing Scheduled Price Changes
When the schedule fires, Lambda processes all price changes atomically:
import { DeleteCommand, PutCommand } from "@aws-sdk/lib-dynamodb";
// dynamoClient, logger, the key helpers, and the DTO/enum types come from
// local modules (omitted here).

export async function handler(event: PriceChangeScheduleEvent) {
  const { store, channel, effectiveDate } = event;

  // 1. Query all price changes for this store/channel/date
  const priceChanges = await getPriceChanges(event);
  logger.info("Price change(s) found", { count: priceChanges.length });

  if (priceChanges.length === 0) {
    return;
  }

  // 2. Publish price changes (each write is an atomic single-item operation)
  await publishPriceChanges(priceChanges);

  // 3. Send legacy event for backward compatibility
  await sendLegacyPriceUpdatedEvent(event);
}

async function publishPriceChanges(
  priceChanges: Array<PriceChangeDto>
): Promise<void> {
  for (const priceChange of priceChanges) {
    await processPriceRecord(priceChange);
  }
}

async function processPriceRecord(priceChange: PriceChangeDto): Promise<void> {
  const { store, product, channel, effectiveDate, action, price } = priceChange;

  const priceRecordKey = {
    pk: getStoreKey(store),
    sk: `${getChannelKey(channel)}#Base#${product}#${getEffectiveDateKey(effectiveDate)}`,
  };

  if (action === PriceChangeAction.Delete) {
    // Delete the price record
    const deleteCommand = new DeleteCommand({
      TableName: PRICE_TABLE,
      Key: priceRecordKey,
    });
    await dynamoClient.send(deleteCommand);
  } else {
    // Create/update the price record
    const priceRecord: BasePriceRecordDto = {
      ...priceRecordKey,
      gsi1pk: getPriceRecordTypeKey(PriceRecordType.Base, product),
      gsi1sk: `${getChannelKey(channel)}#${getStoreKey(store)}`,
      product,
      store,
      channel,
      effectiveDate,
      price,
      recordType: PriceRecordType.Base,
    };
    const putCommand = new PutCommand({
      TableName: PRICE_TABLE,
      Item: priceRecord,
    });
    await dynamoClient.send(putCommand);
  }
}
Atomic guarantees:
- Each price change is independent (partial failures don't corrupt state)
- DynamoDB transactions not needed (single-item writes are atomic)
- Idempotent (reprocessing same event is safe)
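publishPriceChanges above writes sequentially, which keeps things simple. Because each write is independently atomic and idempotent, a chunked-parallel variant is a safe optimization for the bulk case (100+ products per store). A sketch, not the production code:

// Sketch: parallelize independent single-item writes in bounded chunks.
// Each write is atomic on its own, so ordering between items doesn't matter.
async function publishPriceChangesParallel(
  priceChanges: Array<PriceChangeDto>,
  chunkSize = 25
): Promise<void> {
  for (let i = 0; i < priceChanges.length; i += chunkSize) {
    const chunk = priceChanges.slice(i, i + chunkSize);
    await Promise.all(chunk.map((pc) => processPriceRecord(pc)));
  }
}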
5. DynamoDB Streams for Schedule Creation
When a price change is written to DynamoDB, a stream trigger creates the schedule:
import type { DynamoDBStreamEvent } from "aws-lambda";
import type { AttributeValue } from "@aws-sdk/client-dynamodb";
import { unmarshall } from "@aws-sdk/util-dynamodb";

export async function handler(event: DynamoDBStreamEvent) {
  const createdScheduleNames = new Set<string>();

  for (const record of event.Records) {
    if (!record.dynamodb?.NewImage) {
      continue;
    }

    const priceChange = unmarshall(
      record.dynamodb.NewImage as Record<string, AttributeValue>
    ) as PriceChangeDto;

    // Schedule name: unique per store + effective date
    // (":" is replaced because Scheduler names can't contain colons)
    const scheduleName = `ALL_${priceChange.store}_${priceChange.effectiveDate.replaceAll(
      ":",
      "-"
    )}`;

    // Skip if we already created this schedule in this batch
    if (createdScheduleNames.has(scheduleName)) {
      continue;
    }

    await createPriceChangeSchedule(scheduleName, priceChange);
    createdScheduleNames.add(scheduleName);
  }
}
Why DynamoDB Streams?
- Automatic trigger on price change writes
- Event-driven (no polling required)
- Guaranteed delivery (at-least-once)
- Batched processing (efficient)
- Decouples write from scheduling logic
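Wiring the stream to the Lambda is a one-liner in CDK. A sketch, assuming the table from earlier (with streams enabled) and the schedule-creation function are defined elsewhere in the stack:

import { aws_lambda as lambda } from "aws-cdk-lib";
import { DynamoEventSource } from "aws-cdk-lib/aws-lambda-event-sources";
// `priceTable` and `scheduleCreatorFn` are assumed to be defined elsewhere.

scheduleCreatorFn.addEventSource(
  new DynamoEventSource(priceTable, {
    startingPosition: lambda.StartingPosition.LATEST,
    batchSize: 100, // process stream records in batches
    retryAttempts: 3, // bounded retries on failure
    bisectBatchOnError: true, // isolate poison records by splitting the batch
  })
);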
Performance & Production Results
Metrics (12 months in production)
Scale:
- 8,000+ stores with active pricing
- 500+ products per store average
- 4M+ price records in DynamoDB
- 50K+ queries/day (peak: 200K during price updates)
Performance:
- P50 latency: 8ms (base price queries)
- P95 latency: 15ms (base price queries)
- P99 latency: 35ms (complex product queries)
- 0 query timeouts in 12 months
Cost:
- DynamoDB: $120/month (on-demand pricing)
- Lambda: $45/month (invocations)
- EventBridge Scheduler: $8/month
- Total: $173/month for 50K+ daily queries
Availability:
- 99.98% uptime (DynamoDB native)
- Zero data loss in 12 months
- Multi-region replication: us-east-1 (primary), us-west-2 (replica)
Lessons Learned
1. Hierarchical Sort Keys Enable Flexible Queries
The pattern:
Sort Key: {channel}#{type}#{product}#{effectiveDate}
Examples:
- "ALL#Base#PROD123#2024-03-15T00:00:00"
- "ALL#Swap#COMBO1#DEFAULT#2024-03-15T00:00:00"
Why this works:
- begins_with(sk, "ALL#Base#") → All base prices
- begins_with(sk, "ALL#Base#PROD123#") → All base prices for PROD123
- Enables prefix queries without scanning
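In query terms, narrowing is just a longer sort-key prefix; the partition key stays the same:

// Same partition key, different prefix depth — the longer the prefix, the narrower the result
const allBasePrices = {
  KeyConditionExpression: "pk = :pk and begins_with(sk, :prefix)",
  ExpressionAttributeValues: { ":pk": "STORE#12345", ":prefix": "ALL#Base#" },
};

const basePricesForOneProduct = {
  KeyConditionExpression: "pk = :pk and begins_with(sk, :prefix)",
  ExpressionAttributeValues: { ":pk": "STORE#12345", ":prefix": "ALL#Base#PROD123#" },
};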
2. GSI for Inverted Access Patterns
Main table: Query by store → get products
GSI: Query by product → get stores
Without GSI: Scan entire table (slow, expensive)
With GSI: Direct partition key access (fast, cheap)
3. EventBridge Scheduler > Cron Jobs
Old approach: Cron job polls database every minute for due price changes
Problems:
- Polling overhead (expensive)
- Race conditions (multiple workers)
- No at-least-once guarantee
- Complex failure handling
EventBridge Scheduler approach: DynamoDB Stream → Create schedule → Lambda invoked at the exact time
Benefits:
- Zero polling (event-driven)
- Automatic retries
- At-least-once delivery
- Scales to millions of schedules
4. Cursor Pagination > Offset Pagination
Bad (offset):
SELECT * FROM prices ORDER BY id LIMIT 64 OFFSET 1000;
-- Scans first 1000 rows every time (slow!)
Good (cursor):
const commandInput: QueryCommandInput = {
  // ...same key condition as before...
  ExclusiveStartKey: lastEvaluatedKey, // Continue from where we left off
  Limit: 64,
};
// Only fetches the next 64 items (fast!)
5. Single-Table Design Reduces Costs
Multiple tables: JOIN operations via application code (slow, complex)
Single table: All data in one query (fast, simple)
Trade-off: More complex key design, but massive performance gains
Takeaways for Developers
When to Use DynamoDB for GraphQL
✅ Perfect for:
- High-volume read-heavy workloads (50K+ queries/day)
- Multi-access-pattern queries (by store, by product, by date)
- Event-driven architectures (DynamoDB Streams)
- Serverless applications (no server management)
- Global replication requirements (multi-region)
❌ Not ideal for:
- Complex JOINs (better with SQL)
- Ad-hoc queries (requires predefined access patterns)
- Strong consistency everywhere (reads are eventually consistent by default, and GSI reads always are)
- Full-text search (use Elasticsearch/OpenSearch)
Key Patterns
- Hierarchical sort keys for prefix queries
- GSI for inverted access patterns
- Cursor pagination with ExclusiveStartKey
- EventBridge Scheduler for time-based triggers
- DynamoDB Streams for event-driven workflows
- Single-table design for related entities
Quick Start Guide
1. Design your keys:
const item = {
  // Partition key: entity identifier
  pk: "STORE#12345",
  // Sort key: hierarchical with separators
  sk: "ALL#Base#PROD123#2024-03-15T00:00:00",
  // GSI keys for the inverted pattern
  gsi1pk: "TYPE#Base#PROD123",
  gsi1sk: "ALL#STORE#12345",
};
2. Implement cursor pagination:
const { Items, LastEvaluatedKey } = await dynamoClient.send(queryCommand);
return {
items: Items,
nextCursor: LastEvaluatedKey ? encodeCursor(LastEvaluatedKey) : null,
};
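encodeCursor isn't shown above; a common implementation (and the one assumed by the QueryHelper sketch earlier) simply base64-encodes the key:

// Sketch: opaque cursors as base64-encoded LastEvaluatedKey objects
export function encodeCursor(key: Record<string, any>): string {
  return Buffer.from(JSON.stringify(key)).toString("base64");
}

export function decodeCursor(cursor: string): Record<string, any> {
  return JSON.parse(Buffer.from(cursor, "base64").toString("utf8"));
}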
3. Use EventBridge for scheduling:
await scheduler.send(
  new CreateScheduleCommand({
    Name: scheduleId,
    ScheduleExpression: "at(2024-03-15T00:00:00)",
    FlexibleTimeWindow: { Mode: FlexibleTimeWindowMode.OFF }, // required field
    Target: {
      Arn: lambdaArn,
      RoleArn: schedulerInvokeRoleArn, // role allowing Scheduler to invoke the Lambda
      Input: JSON.stringify(event),
    },
    ActionAfterCompletion: ActionAfterCompletion.DELETE,
  })
);
Conclusion
DynamoDB patterns transformed our pricing service from a slow SQL-based system into a high-performance serverless API handling 50K+ queries daily with sub-20ms latency.
The impact:
- 85% cost reduction vs. RDS
- 90% latency improvement
- Zero maintenance overhead
- 99.98% availability
- Effortless global replication
But the real win? Developers can ship features without worrying about database performance. No query optimization, no index tuning, no scaling headaches.
If you're building GraphQL APIs with high read volume and predictable access patterns, DynamoDB is worth the investment in key design. Your future self will thank you.
Related Articles:
- "Menu Config Service: Hybrid DynamoDB + MySQL Architecture"
- "Product Service: Solving N+1 Queries in GraphQL with Drizzle ORM"
- "Building Production GraphQL APIs: Lessons from 5 Services"