Batched Processing
Batch handlers suit high-throughput scenarios where processing messages in groups - for example, bulk database inserts - is more efficient than handling them one at a time.
How Batching Works
When a batch handler is registered, the framework creates a BatchAccumulator for that endpoint. Each incoming message from the Service Bus processor is held until a dispatch condition is met:
- MaxBatchSize - the batch fires immediately once 10 messages have accumulated
- MaxWait - the batch fires after 200 ms even if fewer than 10 messages are queued
Whichever threshold is reached first triggers the handler. The processor callback blocks until the batch has been dispatched, so settlement can happen correctly per message.
AutoCompleteMessages is forced to false for batch handlers - the handler is responsible for settling each BatchMessage<T> individually.
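The dispatch thresholds may be tunable through the same options delegate used for other processor settings (see Configuration below). A minimal sketch - the method names `WithMaxBatchSize` and `WithMaxWait` are illustrative assumptions, not confirmed API; check the framework's batch options type for the actual members:

```csharp
// Hypothetical sketch: WithMaxBatchSize/WithMaxWait are assumed names.
messaging.MapQueue<OrderCreated>("orders")
    .MapBatchHandler(
        options => options
            .WithMaxBatchSize(50)                          // fire once 50 messages accumulate...
            .WithMaxWait(TimeSpan.FromMilliseconds(500)),  // ...or after 500 ms, whichever comes first
        handler);
```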
Handler Signature
Declare a handler that receives IReadOnlyList<BatchMessage<T>>. Other parameters are resolved from DI as usual:
```csharp
messaging.MapQueue<OrderCreated>("orders")
    .MapBatchHandler(async (
        IReadOnlyList<BatchMessage<OrderCreated>> batch,
        IOrderRepository orders,
        CancellationToken ct) =>
    {
        var entities = batch.Select(m => m.Body.ToOrder()).ToList();
        await orders.BulkInsertAsync(entities, ct);

        foreach (var msg in batch)
            await msg.Context.CompleteAsync(ct);
    });

// Topic subscriptions use MapBatchSubscription
messaging.MapTopic<OrderCreated>("order-events")
    .MapBatchSubscription("analytics", handler);
```

BatchMessage Wrapper
Each BatchMessage<T> exposes:
| Member | Description |
|---|---|
| Body | The deserialized message payload |
| Context | The full MessageContext - metadata, headers, and settlement actions |
Settlement (CompleteAsync, AbandonAsync, DeadLetterAsync) is invoked on msg.Context for each message. Settlement is independent per message: you can complete some messages and abandon others within the same batch.
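Independent settlement makes partial-failure handling straightforward. A sketch of a handler body that completes successes, dead-letters permanent failures, and abandons transient ones - `ProcessAsync` and `ValidationException` are placeholder names, and the reason argument to `DeadLetterAsync` is an assumption about its signature:

```csharp
foreach (var msg in batch)
{
    try
    {
        await ProcessAsync(msg.Body, ct);     // placeholder for your per-message work
        await msg.Context.CompleteAsync(ct);  // success: remove from the queue
    }
    catch (ValidationException ex)
    {
        // Permanent failure - retrying won't help, so dead-letter this message.
        // (Reason parameter assumed; verify the actual DeadLetterAsync signature.)
        await msg.Context.DeadLetterAsync(ex.Message, ct);
    }
    catch (Exception)
    {
        // Transient failure - release the lock so the message is redelivered.
        await msg.Context.AbandonAsync(ct);
    }
}
```

Settling each message inside the loop keeps one bad payload from forcing redelivery of the whole batch.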
Configuration
Pass an options delegate before the handler to set concurrency, prefetch, or other processor settings:
```csharp
messaging.MapQueue<OrderCreated>("orders")
    .MapBatchHandler(
        options => options.WithPrefetch(200),
        handler);
```

NOTE
Azure Service Bus considerations for batching:
- Prefetch - set a higher PrefetchCount (via .WithPrefetch()) so the SDK has messages ready locally when the accumulator drains. This reduces round-trip latency between batches.
- Lock duration - the accumulator holds messages while building a batch and while the handler runs. Ensure the queue's LockDuration (default: 1 minute) is long enough for your handler, or configure a shorter accumulation window.
```bicep
resource ordersQueue 'Microsoft.ServiceBus/namespaces/queues@2022-10-01-preview' = {
  name: 'orders'
  properties: {
    lockDuration: 'PT2M' // 2 minutes - increase for slower batch processing
  }
}
```