<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Particular Software</title>
  <icon>https://particular.net/favicon.ico</icon>
  <subtitle>Build (Much) Better .NET Systems</subtitle>
  <link href="https://particular.net/feed.xml" rel="self"/>
  
  <link href="https://particular.net/"/>
  <updated>2026-04-20T11:16:19.181Z</updated>
  <id>https://particular.net/</id>
  
  <author>
    <name>Particular Software</name>
    <email>info@particular.net</email>
  </author>
  
  <generator uri="http://hexo.io/">Hexo</generator>
  
  <entry>
    <title>Our new Small Business Program</title>
    <link href="https://particular.net/blog/launching-small-business-program"/>
    <id>https://particular.net/blog/launching-small-business-program</id>
    <published>2025-10-29T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p><strong>Update 2026-03-31</strong>: Particular has contributed the SmallBusinessProgram repository to the <a href="https://usewhatworks.org">Use What Works</a> community organization.</p></blockquote><blockquote><p><strong>Update 2026-01-15</strong>: Following <a href="https://www.youtube.com/watch?v=CpI8Wh1V5tM">Dylan Beattie’s video about licensing models in the .NET ecosystem</a>, and the discussions it triggered in the community, we have decided to <a href="https://github.com/UseWhatWorks/SmallBusinessProgram">open-source our Small Business Program</a>. We’re making it available under the <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY license</a> so that everybody can reuse, mix, and modify it without worry of copyright infringement.</p><p>We hope this will make it that extra bit easier for others to balance the needs of enterprise licensing with accessibility to smaller businesses.</p></blockquote><p>Here at Particular, we’ve been closely following recent developments in the .NET ecosystem, as our good friends Jimmy Bogard and Chris Patterson have started exploring commercialization options to find a sustainable path forward for their popular open-source projects. We went through this process back in 2010 and can say many good things about how it worked out for both us and the organizations relying on our platform.</p><p>One of the things Jimmy and Chris are including as a part of their licensing is a free offering for smaller businesses. Jimmy set his <a href="https://automapper.io/#pricing">threshold</a> for that at $5 million, while Chris has yet to publish his details. Other vendors have offered this for quite some time, with some minor differences. 
For example, <a href="https://duendesoftware.com/products/communityedition">Duende</a> and <a href="https://www.syncfusion.com/products/communitylicense">SyncFusion</a> have thresholds of $1-3 million, subject to certain conditions.</p><p>We believe it is <em>only right</em> that the mature technology that larger enterprises can afford should <em>also</em> be accessible to smaller organizations that don’t have those kinds of budgets.</p><p>But before jumping in and offering something similar, we reached out to a bunch of people to better understand their thinking and learned something interesting.</p><p>Folks were worried about crossing those thresholds in the coming years and then being hit with a big bill all at once.</p><p>That made sense.</p><p>So we gave it some thought and came up with a model that replaced that “cliff” with a more approachable “staircase”, where that initial 100% discount gradually steps down, bit by bit, as organizations grow past different financial levels, like so:</p><table><thead><tr><th>Max annual finances</th><th>Discount</th></tr></thead><tbody><tr><td>$1,000,000</td><td>100%</td></tr><tr><td>$2,000,000</td><td>90%</td></tr><tr><td>$3,000,000</td><td>80%</td></tr><tr><td>$4,000,000</td><td>60%</td></tr><tr><td>$5,000,000</td><td>20%</td></tr></tbody></table><p>This discount structure also layers nicely on top of a variety of pricing models:</p><ul><li>Let’s say you’re a $1.6M org: if full price was $3k/year, then after the 90% discount you’d pay just $300/year, or $25/month.</li><li>After growing to $2.4M, if usage also grew to $6k/year, the now 80% discount would bring you to $1,200/year, or $100/month.</li></ul><p>We’re hearing that these are much more reasonable prices for many of the smaller businesses we’ve talked to. Of course, for those organizations <strong>under $1M, it would be entirely free</strong>.</p><p>Want to learn more? 
Check out the <a href="/pricing/small-business-program">program details</a> and tell us what you think in the comments below!</p>]]></content>
    
    <summary type="html">
    
      &lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update 2026-03-31&lt;/strong&gt;: Particular has contributed the SmallBusinessProgram repository to the &lt;a href=&quot;https://usewhatworks.org&quot;&gt;Use What Works&lt;/a&gt; community organization.&lt;/p&gt;
&lt;/blockquote&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>AWS Enhancements</title>
    <link href="https://particular.net/blog/aws-enhancements-2025"/>
    <id>https://particular.net/blog/aws-enhancements-2025</id>
    <published>2025-10-14T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>Check out the latest enhancements to NServiceBus support for AWS. All of these updates aim to give you more control, fewer surprises, and a smoother experience when building distributed message-based systems on AWS.</p><span id="more"></span><p>For SQS, we’ve added <a href="#prevent-premature-reprocessing-with-automatic-message-visibility-renewal">automatic message visibility renewal</a> to prevent premature redelivery of long-running messages. For saga persistence with DynamoDB, we’ve enabled <a href="#optional-support-for-eventual-consistent-reads-on-sagas">optional eventual consistency</a> to help reduce read costs. Plus, with <a href="#supporting-the-aws-lambda-annotations-programming-model">native support for AWS Lambda Annotations</a>, it’s now easier to build serverless message handlers with less boilerplate and better tooling integration. These are just some of the improvements.</p><p>Let’s get into the details.</p><h2 id="message-visibility-renewal"><a class="markdown-anchor" href="#message-visibility-renewal">🔗</a>Message visibility renewal</h2><p><strong>What we did:</strong> NServiceBus now automatically prevents duplicate message processing in Amazon Simple Queue Service (SQS) by extending message visibility timeouts during long-running operations. This update eliminates a common source of race conditions and data corruption in distributed systems.</p><p><strong>Why we did it:</strong> In SQS, each received message becomes invisible for a limited time <sup id="fnref:1:141025"><a href="#fn:1:141025" rel="footnote">1</a></sup>. If processing takes longer, the visibility timeout can expire, and SQS may redeliver the same message to another consumer, even while the first attempt is still in progress. 
The extra message delivery leads to multiple issues:</p><ul><li><a href="https://particular.net/blog/what-does-idempotent-mean">Non-idempotent</a> handlers can create business or data errors</li><li>The original consumer can’t complete because its receipt becomes invalid</li><li>Message delivery metrics become inaccurate</li><li>A cycle of repeated failures can develop</li></ul><p>To solve this, we introduced <a href="https://docs.particular.net/transports/sqs/configuration-options#message-visibility">automatic message visibility renewal</a>. During processing, NServiceBus extends the visibility timeout in 5-minute increments, with configuration options available to customize the renewal duration:</p><pre><code class="language-csharp">transport.MaxAutoMessageVisibilityRenewalDuration = TimeSpan.FromMinutes(10);</code></pre><h2 id="reserving-payload-space"><a class="markdown-anchor" href="#reserving-payload-space">🔗</a>Reserving payload space</h2><p><strong>What we did:</strong> You can now reserve space for third-party tracing headers (like DataDog) to prevent Amazon Simple Queue Service (SQS) message size calculation failures. This new configuration option eliminates intermittent send failures that were previously difficult to diagnose and resolve.</p><p><strong>Why we did it:</strong> Because SQS messages have a 256 KB limit, NServiceBus has a feature that will offload large messages to S3 to avoid hitting the limit. However, third-party tracing tools like <a href="https://www.datadoghq.com">DataDog</a> inject headers just before message dispatch, after NServiceBus has already checked the 256 KB SQS limit. 
As a result, messages could pass validation but fail during send when the extra headers pushed them over the limit.</p><p>To address this, we introduced a <a href="https://docs.particular.net/transports/sqs/configuration-options#reserve-bytes-when-calculating-message-size">configuration option</a> that lets you reserve space for these headers:</p><pre><code class="language-csharp">transport.ReserveBytesInMessageSizeCalculation = 512; // Size in bytes</code></pre><p>This setting allows you to proactively reserve space for third-party headers, such as those commonly used by tracing solutions like DataDog, or diagnostic headers from OpenTelemetry, ensuring the actual message size remains safely below the threshold. You can now tailor your SQS payload calculations to your environment’s specific needs.</p><p>The reserved space decreases the maximum allowable message size. That means the transport will trigger the S3 fallback for smaller message sizes, providing a reliable and controlled safety margin for tools that inject headers during send operations.</p><h2 id="support-for-alternate-storage-solutions"><a class="markdown-anchor" href="#support-for-alternate-storage-solutions">🔗</a>Support for alternate storage solutions</h2><p><strong>What we did:</strong> You can now turn off S3 payload signing in the Amazon Simple Queue Service (SQS) transport, making it possible to use alternative storage providers like <a href="https://www.cloudflare.com/pg-cloudflare-r2-vs-aws-s3/">Cloudflare R2</a> that offer S3-compatible APIs but don’t support signed payloads. This option enables lower-cost providers without being locked into AWS storage.</p><p><strong>Why we did it:</strong> Until now, large message bodies in SQS were always offloaded to Amazon S3, which requires payload signing. That created a problem for services like Cloudflare R2, which does not support the Streaming SigV4 signing implementation used by the S3 SDK. 
Because signing was hardcoded, R2 wasn’t usable with the transport.</p><p>We’ve introduced a new configuration option:</p><pre><code class="language-csharp">transport.DisablePayloadSigning = true;</code></pre><p>When set, the transport skips the signing step, allowing use of S3-compatible storage providers that don’t support signed payloads. For example, Cloudflare R2 provides zero egress costs, which can significantly reduce expenses for systems handling a high volume of large messages.</p><p>If you choose to <a href="https://docs.particular.net/transports/sqs/configuration-options#offload-large-messages-to-s3-payload-signing">turn off payload signing</a>, weigh the trade-offs carefully: it unlocks alternative backends but reduces the security guarantees of signed requests.</p><h2 id="promoting-sqs-message-attributes"><a class="markdown-anchor" href="#promoting-sqs-message-attributes">🔗</a>Promoting SQS message attributes</h2><p><strong>What we did:</strong> SQS message attributes are now automatically promoted to NServiceBus headers, giving direct access to external system metadata without extra integration code.</p><p><strong>Why we did it:</strong> Previously, these attributes were only available by <a href="https://docs.particular.net/transports/sqs/native-integration#accessing-the-native-amazon-sqs-message">accessing the native message on the extension context</a>, which limited their usefulness for scenarios like routing, monitoring, or enrichment.</p><p>With <a href="https://docs.particular.net/transports/sqs/native-integration#native-message-attributes-promotion">automatic promotion of SQS message attributes</a>, any custom or third-party attributes on an incoming SQS message are now available as headers throughout the NServiceBus pipeline. 
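</p><p>As a minimal sketch of what that enables (the <code>TenantId</code> attribute name here is hypothetical; any custom SQS message attribute is promoted the same way), a pipeline behavior can read a promoted attribute like any other header:</p><pre><code class="language-csharp">public class LogTenantBehavior : Behavior&lt;IIncomingPhysicalMessageContext&gt;
{
    public override Task Invoke(IIncomingPhysicalMessageContext context, Func&lt;Task&gt; next)
    {
        // Promoted SQS message attributes show up alongside the regular NServiceBus headers
        if (context.Message.Headers.TryGetValue(&quot;TenantId&quot;, out var tenantId))
        {
            Console.WriteLine($&quot;Processing message for tenant {tenantId}&quot;);
        }

        return next();
    }
}</code></pre><p>Register the behavior with <code>endpointConfiguration.Pipeline.Register(new LogTenantBehavior(), &quot;Logs the tenant attribute&quot;);</code> and it runs for every incoming message. 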
This header promotion allows you to inspect and use them from NServiceBus pipeline behaviors <sup id="fnref:2:141025"><a href="#fn:2:141025" rel="footnote">2</a></sup> to enable all sorts of cross-cutting infrastructure concerns.</p><h2 id="improved-poison-message-handling"><a class="markdown-anchor" href="#improved-poison-message-handling">🔗</a>Improved poison message handling</h2><p><strong>What we did:</strong> The SQS transport now tracks message receive counts (the number of processing attempts before sending a message to the error queue) more accurately and handles poison messages more reliably, reducing duplicate processing, false retries, and wasted compute in production systems.</p><p><strong>Why we did it:</strong> Previously, message receive counts were based only on an in-memory cache, which had drawbacks. In-memory counts can’t be shared across endpoint instances, so in a scaled-out environment with many nodes, it would take <em>many</em> processing attempts for any one node to observe the message enough times to move a message to the error queue. Additionally, in-memory counts were lost during endpoint restarts, resulting in the loss of historical context.</p><p>However, AWS’s <code>ApproximateReceiveCount</code> is not accurate either and has been observed under-reporting actual counts.</p><p>The transport now combines both values to determine the actual receive count:</p><pre><code class="language-text">ActualReceiveCount = Max(LocalCacheValue, ApproximateReceiveCount)</code></pre><p>This hybrid approach improves accuracy by:</p><ul><li>Buffering against AWS under-reporting</li><li>Recovering from cache loss on restarts</li><li>Auto-correcting in competing consumer scenarios</li><li>Remaining compatible with the existing cache mechanism</li></ul><p>Additionally, this improved approach causes poison messages (those that consistently fail) to move to the error queue without invoking business logic. 
This optimization prevents unnecessary reprocessing, duplicate deliveries, and wasted resources.</p><p>Together, these improvements make retry behavior more predictable, simplify operational diagnostics, and facilitate a smoother transition from other transports, such as SQL Server.</p><h2 id="aws-lambda-annotations"><a class="markdown-anchor" href="#aws-lambda-annotations">🔗</a>AWS Lambda Annotations</h2><p><strong>What we did:</strong> NServiceBus now supports the AWS Lambda Annotations model for .NET, making it easier to build SQS-triggered Lambda functions with less boilerplate and better integration with modern AWS tooling.</p><p><strong>Why we did it:</strong> Lambda Annotations use C# source generators to reduce repetitive glue code and automatically synchronize CloudFormation templates with annotated methods. With this update, you can register the NServiceBus AWS Lambda integration directly into the .NET service collection provided by the annotation model.</p><p>Here’s what a fully functional Lambda handler now looks like:</p><pre><code class="language-csharp">public class SqsLambda(IAwsLambdaSQSEndpoint serverlessEndpoint)
{
    [LambdaFunction]
    [SQSEvent(&quot;ServerlessEndpoint&quot;)]
    public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
    {
        using var cancellationTokenSource =
            new CancellationTokenSource(context.RemainingTime.Subtract(DefaultRemainingTimeGracePeriod));

        await serverlessEndpoint.Process(evnt, context, cancellationTokenSource.Token);
    }

    static readonly TimeSpan DefaultRemainingTimeGracePeriod = TimeSpan.FromSeconds(10);
}</code></pre><p>This integration unlocks a more modern and developer-friendly way to build serverless applications using NServiceBus:</p><ul><li>Eliminates boilerplate code needed to wire up Lambda functions manually</li><li>Leverages compile-time source generation for CloudFormation synchronization</li><li>Reduces cognitive load and setup time for teams adopting AWS 
Lambda and NServiceBus together</li></ul><p>The <a href="https://docs.particular.net/nservicebus/hosting/aws-lambda-simple-queue-service/">NServiceBus documentation for AWS Lambda and SQS</a> contains details on the new features, including a ready-to-use sample. If you’re building .NET applications on AWS Lambda and want first-class integration with modern tooling, this update helps you get there with less code.</p><h2 id="dynamodb-custom-json-serialization"><a class="markdown-anchor" href="#dynamodb-custom-json-serialization">🔗</a>DynamoDB custom JSON serialization</h2><p><strong>What we did:</strong> To enable advanced scenarios like DynamoDB-specific attribute handling and schema-aware transformations, we introduced <a href="https://docs.particular.net/persistence/dynamodb/sagas#saga-data-mapping">support for injecting custom JsonSerializerOptions, JsonTypeInfo, and context resolvers into the saga persistence and mapping APIs</a>.</p><p><strong>Why we did it:</strong> Previously, the mapping pipeline relied on internal, preconfigured JSON options. These defaults worked for simple objects, but made it difficult (or impossible) to:</p><ul><li>Respect custom serialization attributes like <code>DynamoDBProperty</code> or <code>DynamoDBIgnore</code></li><li>Inject custom converters or behaviors</li><li>Support schema-specific transformations required by DynamoDB or other systems</li></ul><p>With this update, you can override the default options at both the persistence and mapper levels. 
For example, to support DynamoDB-specific attributes:</p><pre><code class="language-csharp">readonly JsonSerializerOptions serializerOptions = new(Mapper.Default)
{
    TypeInfoResolver = new DefaultJsonTypeInfoResolver
    {
        Modifiers = { SupportObjectModelAttributes }
    }
};</code></pre><p>The modifier inspects and manipulates <code>JsonTypeInfo</code> at runtime to recognize and apply DynamoDB semantics:</p><pre><code class="language-csharp">public static void SupportObjectModelAttributes(JsonTypeInfo typeInfo)
{
    if (typeInfo.Kind != JsonTypeInfoKind.Object)
    {
        return;
    }

    foreach (JsonPropertyInfo property in typeInfo.Properties)
    {
        if (property.AttributeProvider?.GetCustomAttributes(typeof(DynamoDBRenamableAttribute), true)
            .SingleOrDefault() is DynamoDBRenamableAttribute renamable)
        {
            // DynamoDBHashKey, DynamoDBRangeKey, and DynamoDBProperty all derive from
            // DynamoDBRenamableAttribute, so these properties are serialized; rename
            // only when an explicit attribute name is provided
            if (!string.IsNullOrEmpty(renamable.AttributeName))
            {
                property.Name = renamable.AttributeName;
            }
        }
        else if (property.AttributeProvider?.GetCustomAttributes(typeof(DynamoDBIgnoreAttribute), true)
            .SingleOrDefault() is DynamoDBIgnoreAttribute)
        {
            property.ShouldSerialize = (_, __) =&gt; false;
        }
        else
        {
            // Properties without any DynamoDB attribute are skipped as well
            property.ShouldSerialize = (_, __) =&gt; false;
        }
    }
}</code></pre><p>Now you can serialize complex domain objects using existing DynamoDB conventions without redundantly decorating them with <code>JsonPropertyName</code> or <code>JsonIgnore</code> attributes. 
For example, you can now serialize a complex domain object like the following <code>Customer</code> type using DynamoDB conventions:</p><pre><code class="language-csharp">class Customer
{
    [DynamoDBHashKey(&quot;PK&quot;)]
    public string PartitionKey { get; set; }

    [DynamoDBRangeKey(&quot;SK&quot;)]
    public string SortKey { get; set; }

    public string CustomerId
    {
        get =&gt; PartitionKey;
        set
        {
            PartitionKey = value;
            SortKey = value;
        }
    }

    [DynamoDBProperty]
    public bool CustomerPreferred { get; set; }

    [DynamoDBIgnore]
    public string IgnoredProperty { get; set; }
}</code></pre><p>Here’s the corresponding message handler, which operates in the same DynamoDB transaction due to the use of the <a href="https://docs.particular.net/persistence/dynamodb/transactions">synchronized storage session</a>:</p><pre><code class="language-csharp">public async Task Handle(MakeCustomerPreferred message, IMessageHandlerContext context)
{
    var session = context.SynchronizedStorageSession.DynamoPersistenceSession();

    // dynamoContext is an IDynamoDBContext from the AWS SDK, provided elsewhere (e.g., injected)
    var customer = await dynamoContext.LoadAsync&lt;Customer&gt;(message.CustomerId, message.CustomerId, context.CancellationToken);

    customer.CustomerPreferred = true;
    customer.IgnoredProperty = &quot;IgnoredProperty&quot;;

    // Thanks to the customized serializer options, the mapper understands the context attributes
    var customerMap = Mapper.ToMap(customer, serializerOptions);

    session.Add(new TransactWriteItem
    {
        Put = new()
        {
            Item = customerMap,
            ...
        }
    });
}</code></pre><p>This change streamlines DynamoDB integration, providing fine-grained control with custom converters or source-generated type resolvers.</p><h2 id="eventually-consistent-saga-reads"><a class="markdown-anchor" href="#eventually-consistent-saga-reads">🔗</a>Eventually consistent saga reads</h2><p><strong>What we did:</strong> You can now reduce DynamoDB read costs by opting into eventually consistent reads for saga data when using optimistic concurrency. This consistency option provides flexibility in high-throughput or cost-sensitive environments where occasional retries are acceptable.</p><p><strong>Why we did it:</strong> By default, the NServiceBus DynamoDB saga persister has always used strongly consistent reads to guarantee the most up-to-date saga data and prevent version mismatches or conflicting writes. While safer, this approach consumes twice the read capacity units compared to eventual consistency.</p><p>With the new <a href="https://docs.particular.net/persistence/dynamodb/sagas#saga-concurrency-pessimistic-locking-configuration">global configuration option</a>, you can enable eventually consistent reads:</p><pre><code class="language-csharp">var sagas = persistence.Sagas();
sagas.UseEventuallyConsistentReads = true;</code></pre><p>When enabled, saga reads use DynamoDB’s eventual consistency model, lowering read costs while still supporting optimistic concurrency control.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>If you’ve run into any of the friction points above, we hope these changes make your life easier. If you spot other issues we could improve—or have ideas to make <a href="https://github.com/Particular/NServiceBus">NServiceBus</a> and the Particular Service Platform better—please open a GitHub issue in the relevant <a href="https://github.com/Particular">repository</a>.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Check out the latest enhancements to NServiceBus support for AWS. All of these updates aim to give you more control, fewer surprises, and a smoother experience when building distributed message-based systems on AWS.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Resistance is futile...unless you have ServicePulse</title>
    <link href="https://particular.net/blog/resistance-is-futile"/>
    <id>https://particular.net/blog/resistance-is-futile</id>
    <published>2025-09-08T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<blockquote><p>We are the Borg. Your messages have failed and will be assimilated. Resistance is futile.</p></blockquote><p>Distributed systems are a lot like Star Trek’s infamous Borg Collective: massively parallel, highly interconnected, and sometimes, <s>drones</s> services go down or are inaccessible. After all, even with a galaxy-class architecture, things sometimes go wrong. Messages can fail due to transient errors, unexpected exceptions, or configuration hiccups. However, well-designed distributed systems take a stronger cue from the Borg; they are built to be resilient. But how? Let’s explore a more peaceful quadrant of the galaxy to illustrate how this works.</p><span id="more"></span><h2 id="the-starship-enterprise-vs-distributed-failure"><a class="markdown-anchor" href="#the-starship-enterprise-vs-distributed-failure">🔗</a>The Starship <em>Enterprise</em> vs. distributed failure</h2><p>In the world of Star Trek, many things can be replicated with a simple command, and whole universes can be generated with a few phrases in the Holodeck. Still, there are plenty of situations where people want the real thing, whether it’s a bottle of Château Picard, a Klingon bat’leth, or an authentic Bajoran earring.</p><p>Imagine you are an operations officer on a Federation starship running the Starfleet Supply Chain system in LCARS. Naturally, it is a distributed system. 
Orders made on the ship are routed through an NServiceBus-powered system.<sup id="fnref:1:080925"><a href="#fn:1:080925" rel="footnote">1</a></sup> One of the commands looks like this:</p><pre><code class="language-csharp">public class PlaceOrderCommand : ICommand
{
    public string OrderId { get; set; }
    public OrderItem[] Items { get; set; }
    public string Destination { get; set; }
}

public class OrderItem
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}</code></pre><p>Orders are handled via messages like this:</p><pre><code class="language-csharp">public class PlaceOrderHandler : IHandleMessages&lt;PlaceOrderCommand&gt;
{
    private readonly IFederationSupplyApi _supplyApi;

    public PlaceOrderHandler(IFederationSupplyApi supplyApi)
    {
        _supplyApi = supplyApi;
    }

    public async Task Handle(PlaceOrderCommand message, IMessageHandlerContext context)
    {
        // This call to the Federation Supply Service will fail when shields are up
        await _supplyApi.SubmitOrderRequest(message.OrderId, message.Items);

        Console.WriteLine($&quot;Submitted order {message.OrderId} with {message.Items.Length} items to {message.Destination}.&quot;);
    }
}</code></pre><p>But oh no! Just as you are placing your own personal order with the computer for a bona fide Klingon d’k tahg, a Romulan ship decloaks off the starboard bow, and the ship’s shields go up. The shields interfere with subspace communications to the Federation Supply Service, causing the external API calls to fail with a <code>ServiceUnavailableException</code>. Did your order go through?</p><h2 id="retry-protocols-engaged"><a class="markdown-anchor" href="#retry-protocols-engaged">🔗</a>Retry protocols engaged</h2><p>NServiceBus detects the failure and applies its <strong>immediate</strong> (first-level) and <strong>delayed</strong> (second-level) retry strategies. 
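</p><p>Out of the box, that means a few back-to-back immediate retries, followed by rounds of delayed retries with increasing back-off. As a sketch (the retry counts here are illustrative, not recommendations), the policy can be tuned through the recoverability API:</p><pre><code class="language-csharp">var recoverability = endpointConfiguration.Recoverability();

// First level: retry a few times back-to-back for transient glitches
recoverability.Immediate(immediate =&gt; immediate.NumberOfRetries(3));

// Second level: retry again later, waiting longer between each round
recoverability.Delayed(delayed =&gt;
{
    delayed.NumberOfRetries(2);
    delayed.TimeIncrease(TimeSpan.FromSeconds(10));
});</code></pre><p>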
But if retries are exhausted and the issue remains, the message is moved to the <strong>error queue</strong>—not dead-lettered, not deleted, just safely quarantined.</p><p>This is where ServicePulse beams in.</p><h2 id="servicepulse-your-command-center-for-failed-messages"><a class="markdown-anchor" href="#servicepulse-your-command-center-for-failed-messages">🔗</a>ServicePulse: Your command center for failed messages</h2><p>Unlike many systems that silently drop or bury failed messages in a dead-letter queue that might as well be beamed into space, NServiceBus gives you a <strong>mission control interface</strong>—ServicePulse.</p><p>As shown in the image below, ServicePulse groups failed messages by <strong>exception type</strong> and <strong>originating endpoint</strong>:</p><p><img src="/images/blog/2025/servicepulse-failed-message-groups.png" alt="ServicePulse Failed Message Groups" title="ServicePulse groups failed messages by exception type and originating endpoint"></p><p>In this case, 174 messages failed with a <code>ServiceUnavailableException</code> inside the <code>PlaceOrderHandler</code>. These messages were all of type <code>PlaceOrderCommand</code>.</p><p>It turns out the Romulan ship is actually part of training exercises that are now complete. With shields down, subspace communications to the Federation Supply Service are restored. We can now perform a level 3 diagnostic. From LCARS, we open ServicePulse and see that there is a list of failed messages.</p><h2 id="manual-intervention-isn’t-futile"><a class="markdown-anchor" href="#manual-intervention-isn’t-futile">🔗</a>Manual intervention isn’t futile</h2><p>Sometimes, you want to investigate specific errors more deeply. With ServicePulse, you can drill down into individual messages, inspect the exception stack trace, message headers, and even the message body, and choose to retry a single message or even a whole group of messages. 
This level of control means your system is both resilient and humanoid-friendly.</p><p><img src="/images/blog/2025/servicepulse-message-details-stacktrace.png" alt="ServicePulse Message Details - Stack Trace" title="ServicePulse allows you to inspect the full stack trace and exception details of failed messages"></p><p><img src="/images/blog/2025/servicepulse-message-details-body.png" alt="ServicePulse Message Details - Message Body" title="ServicePulse shows the complete message body with all order details in JSON format"></p><p>There is no need to SSH into remote systems to access queues, <sup id="fnref:2:080925"><a href="#fn:2:080925" rel="footnote">2</a></sup> write ad-hoc scripts, or run a recursive algorithm through the main deflector dish—ServicePulse makes distributed systems self-heal like a Zalkonian undergoing transformation.</p><p>In this case, the cause for the failures is clear and the solution straightforward. With a single click on <strong>Retry all</strong>, ServicePulse sends those messages back to the original endpoint’s input queue. With things back to normal, those same messages can now be <strong>replayed successfully</strong> and your own order is on its way.</p><h2 id="nservicebus-vs-the-collective"><a class="markdown-anchor" href="#nservicebus-vs-the-collective">🔗</a>NServiceBus vs. the Collective</h2><p>In any world, message failure is inevitable, but NServiceBus and <a href="https://particular.net/servicepulse">ServicePulse</a> give your distributed system the tools to bounce back. With smart retry logic, UI-driven error recovery, and grouped failure analysis, you don’t have to wonder if your failed messages got assimilated by the Borg. 
In other words, resistance is not futile.</p><h3 id="looking-to-improve-failure-recovery"><a class="markdown-anchor" href="#looking-to-improve-failure-recovery">🔗</a>Looking to improve failure recovery?</h3><ul><li>Try <a href="https://particular.net/servicepulse">recovering from failure using ServicePulse</a> in one of our tutorials</li><li>Learn about <a href="https://docs.particular.net/nservicebus/recoverability/">NServiceBus recoverability and retries</a></li><li>Read the related blog post <a href="https://particular.net/blog/but-all-my-errors-are-severe">I caught an exception. Now what?</a></li></ul>]]></content>
    
    <summary type="html">
    
      &lt;blockquote&gt;
&lt;p&gt;We are the Borg. Your messages have failed and will be assimilated. Resistance is futile.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Distributed systems are a lot like Star Trek’s infamous Borg Collective: massively parallel, highly interconnected, and sometimes, &lt;s&gt;drones&lt;/s&gt; services go down or are inaccessible. After all, even with a galaxy-class architecture, things sometimes go wrong. Messages can fail due to transient errors, unexpected exceptions, or configuration hiccups. However, well-designed distributed systems take a stronger cue from the Borg; they are built to be resilient. But how? Let’s explore a more peaceful quadrant of the galaxy to illustrate how this works.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>It&#39;s a Trap! The Two Generals&#39; Problem</title>
    <link href="https://particular.net/blog/two-generals"/>
    <id>https://particular.net/blog/two-generals</id>
    <published>2025-05-04T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>In distributed systems, coordination is hard—really hard—especially when both parties depend on mutual confirmation to proceed, but there’s no guarantee their messages will arrive. This classic dilemma is known as the <strong>Two Generals’ Problem</strong>. Like most problems in computer science, it’s easier to understand when explained with lasers, spaceships, and sarcastic smugglers.</p><p>Let’s set the stage: It’s <em>Return of the Jedi</em> and the second Death Star looms large over the forest moon of Endor. The Rebel Alliance’s plan hinges on a synchronized attack — Han Solo leads the ground team to destroy the shield generator, while Lando Calrissian leads the space fleet to attack the Death Star itself.</p><p>For the mission to succeed, both parties must execute their part of the plan. If either side decides to abort, the other must as well, or it will be a disaster.</p><p>And here’s the twist: Han and Lando can only communicate through spotty, insecure rebel comms. Sound familiar?</p><span id="more"></span><h2 id="the-rebel-messaging-system"><a class="markdown-anchor" href="#the-rebel-messaging-system">🔗</a>The Rebel messaging system</h2><p>Let’s imagine Han and Lando are nodes in a distributed system. They need to coordinate a <strong>commit</strong> to the plan:</p><ul><li>Han says, “I’ll disable the shield.”</li><li>Lando says, “I’ll attack once the shield is down.”</li></ul><p>But Lando can’t attack until he knows for sure that Han will take out the shield. Han doesn’t want to risk the mission unless he knows Lando is ready to strike at just the right moment.</p><p>So Han sends a message:</p><blockquote><p>“I’m ready to blow up the shield generator at 0300. Are you ready?”</p></blockquote><p>Lando replies:</p><blockquote><p>“Acknowledged. 
I’ll attack at 0300.”</p></blockquote><p>But… what if Han never receives Lando’s reply?</p><p>Maybe the Empire is jamming the signal — those Stormtroopers aren’t known for their aim, but their comms interference is top-tier. Han now faces a dilemma:</p><ul><li><strong>Proceed</strong>, risking that Lando never got the message.</li><li><strong>Wait</strong>, risking that Lando attacks without backup — or worse, aborts the mission.</li></ul><p>Now Han tries to send another message:</p><blockquote><p>“Got your confirmation—just confirming again we’re still go at 0300?”</p></blockquote><p>And Lando has the same problem:</p><blockquote><p>“Did he get my reply? Did he get my confirmation of his confirmation?”</p></blockquote><p>Welcome to the <strong>infinite confirmation loop</strong> of the Two Generals’ Problem.</p><h2 id="no-reliable-victory-without-a-reliable-channel"><a class="markdown-anchor" href="#no-reliable-victory-without-a-reliable-channel">🔗</a>No reliable victory without a reliable channel</h2><p>The Two Generals’ Problem highlights a core truth in distributed systems: coordination over unreliable communication is fundamentally flawed. Even if messages arrive most of the time, we can’t be certain without an acknowledgment. And even then, we can’t be sure that the acknowledgment itself arrived.</p><p>Back in the day, we tried to solve this with <strong>distributed transactions</strong>, where technologies that use a two-phase commit algorithm like the <strong>Distributed Transaction Coordinator (DTC)</strong> would attempt to coordinate between databases and message queues (say, SQL Server and MSMQ) to ensure both the data and the message were committed atomically. The idea was noble: all or nothing across systems.</p><p>In practice, though? Depending on DTC was like relying on a Stormtrooper to hit a target directly in front of their helmet. Thankfully, we’ve moved past distributed transactions in modern architectures. 
But that doesn’t mean we can ignore the underlying problem. If anything, it means we have to solve it more thoughtfully.</p><p>Because the Two Generals’ Problem is hard, in distributed systems, we <em>don’t try to solve the unsolvable</em>. Instead, we change the game.</p><p><strong>Avoid requiring perfect coordination.</strong> Don’t make success depend on both sides committing <em>perfectly</em>. Instead, accept that mistakes will happen and decide in advance how you’ll detect and compensate for them.</p><p><strong>Design for uncertainty.</strong> Han’s team is intercepted, and when Lando goes to assault the Death Star, he realizes that the shield is still up. But importantly, neither side abandons the mission. They both keep retrying until they succeed.</p><blockquote><p>We won’t get another chance at this, Admiral. Han will have that shield down. We’ve got to give him more time!</p></blockquote><p><strong>Use reliable message delivery.</strong> <a href="https://particular.net/nservicebus">NServiceBus</a> includes safeguards to ensure your messages don’t vanish into hyperspace.</p><p><strong>Leverage the Outbox pattern.</strong> The Rebel Alliance makes the plan while everyone’s in the same room—synchronously agreeing on a coordinated two-pronged attack. In NServiceBus, the <a href="https://docs.particular.net/nservicebus/outbox/">Outbox pattern</a> ensures that either <em>both missions are executed and succeed together, or neither is</em>. Once the operation is in motion, there’s no turning back. Even when comms are jammed and blaster fire erupts, both Han and Lando stick to the plan. 
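</p><p>To make the pattern concrete, here’s a minimal sketch of what an outbox-enabled handler could look like. The endpoint, message, and handler names are illustrative, not from any real rebel codebase, and the outbox requires a configured persistence to work:</p><pre><code class="language-csharp">// Endpoint setup (sketch): the outbox needs a persistence configured as well
var endpointConfiguration = new EndpointConfiguration("RebelAlliance.GroundTeam");
endpointConfiguration.EnableOutbox();

// Inside a handler, the outbox stores outgoing messages in the same
// database transaction as the business data, so both commit or neither does
public class DisableShieldHandler : IHandleMessages&lt;DisableShieldGenerator&gt;
{
    public async Task Handle(DisableShieldGenerator message, IMessageHandlerContext context)
    {
        // ... update local state using the session shared with the outbox ...
        await context.Publish(new ShieldDisabled());
    }
}</code></pre><p>If the process crashes after the transaction commits but before the outgoing messages are dispatched, they are dispatched when the incoming message is retried, and deduplication ensures the handler’s work isn’t applied twice.</p><p>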
The two generals are deployed in sync—after that, reliable communication is no longer assumed, but consistency is guaranteed.</p><h2 id="trust-the-force-or-better-the-outbox"><a class="markdown-anchor" href="#trust-the-force-or-better-the-outbox">🔗</a>Trust the Force (or better: the Outbox)</h2><p>In the real world, distributed systems don’t involve Death Stars or space smugglers (unfortunately). However, they do involve services trying to coordinate actions in the face of unreliable networks.</p><p>You can’t rely on perfect communication. But you can design systems that <strong>don’t break</strong> when communication is imperfect.</p><p>So the next time you’re designing a distributed system and thinking, “How can I make sure both sides agree before acting?” remember: Lando aborted his first attack run and kept retrying on nothing more than a feeling. Don’t bet your business on a feeling; use the reliability built into NServiceBus.</p><p>If you think talking to one of our experienced distributed systems Jedi might help, <a href="https://particular.net/proof-of-concept">transmit a distress call on the HoloNet</a>. We’ll help you come up with a plan that ensures no Bothans will come to any harm.</p><p>Design for failure. Use the Outbox. Never, ever bet the galaxy on a single ACK. And may the 4th be with you.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;In distributed systems, coordination is hard—really hard—especially when both parties depend on mutual confirmation to proceed, but there’s no guarantee their messages will arrive. This classic dilemma is known as the &lt;strong&gt;Two Generals’ Problem&lt;/strong&gt;. Like most problems in computer science, it’s easier to understand when explained with lasers, spaceships, and sarcastic smugglers.&lt;/p&gt;
&lt;p&gt;Let’s set the stage: It’s &lt;em&gt;Return of the Jedi&lt;/em&gt; and the second Death Star looms large over the forest moon of Endor. The Rebel Alliance’s plan hinges on a synchronized attack — Han Solo leads the ground team to destroy the shield generator, while Lando Calrissian leads the space fleet to attack the Death Star itself.&lt;/p&gt;
&lt;p&gt;For the mission to succeed, both parties must execute their part of the plan. If either side decides to abort, the other must as well, or it will be a disaster.&lt;/p&gt;
&lt;p&gt;And here’s the twist: Han and Lando can only communicate through spotty, insecure rebel comms. Sound familiar?&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Achieving lean controllers: Incremental refactoring with Transactional Session</title>
    <link href="https://particular.net/blog/achieving-lean-controllers-incremental-refactoring-with-transactional-sessions"/>
    <id>https://particular.net/blog/achieving-lean-controllers-incremental-refactoring-with-transactional-sessions</id>
    <published>2025-02-19T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>It doesn’t matter if you’re developing using MVC, WebAPI, or Razor pages—you want your controller code to be nice and lean. The more bloated that code is, the more coupling you have, and the closer you are to an unmanageable big ball of mud.</p><p>You probably already know that, but I’d bet not all of your controller code is as lean as you’d like it to be. Is it?</p><p>So that leaves the question… How do we get there?</p><span id="more"></span><h2 id="getting-lean"><a class="markdown-anchor" href="#getting-lean">🔗</a>Getting lean</h2><p>What if I told you that you could make your controllers “lean and mean” and modernize in incremental steps?<br>Decoupling the web tier from processing logic is achievable through messaging. Sending messages from the web tier to dedicated backend message handlers allows for a gradual migration of complexity away from the controllers, preventing the need for a risky, all-at-once approach.<br>And as a bonus, your system gets <a href="https://particular.net/blog/but-all-my-errors-are-severe">reliability</a>, <a href="https://particular.net/blog/autosave-for-your-business">resilience</a>, and better <a href="https://particular.net/blog/what-starbucks-can-teach-us-about-software-scalability">scalability</a>.</p><p>As you begin to move the complex logic out of the controller and replace it with sending messages and publishing events, your controller gets leaner, more manageable, and really quite boring.<sup id="fnref:1:190225"><a href="#fn:1:190225" rel="footnote">1</a></sup></p><p>However, there’s a catch. We want to maintain atomicity and consistency between our data and message operations. We don’t want data committed to the database unless the related messages/events get dispatched successfully. The same is true in the other direction. 
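</p><p>To see why this is hard, consider a naive controller action that stores data and then sends a message. All of the type and member names here are illustrative:</p><pre><code class="language-csharp">public async Task&lt;IActionResult&gt; Post(MyModel model)
{
    dbContext.Orders.Add(new Order(model));
    await dbContext.SaveChangesAsync(); // data committed

    // If the process crashes here, the order exists in the database
    // but the message announcing it is never sent
    await messageSession.Send(new OrderPlaced(model.OrderId));
    return Ok();
}</code></pre><p>Flipping the two operations doesn’t help; then a crash leaves a message in flight that refers to data that was never stored.</p><p>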
We don’t want any messages/events emitted unless the database transaction also succeeds.</p><p>Systems tend to use the <a href="https://microservices.io/patterns/data/transactional-outbox.html">transactional outbox pattern</a> to ensure that data and messaging operations are kept consistent, and NServiceBus implements this pattern in its <a href="https://docs.particular.net/nservicebus/outbox/">outbox feature</a>, but that only works inside a message handler.</p><p>So, what do we do if we can’t avoid data operations in our controller actions and also need to send messages? Sometimes, this is unavoidable due to the design of a system. Some data has to be added to the database immediately, or the table view <sup id="fnref:2:190225"><a href="#fn:2:190225" rel="footnote">2</a></sup> won’t be updated when the page refreshes. How do we maintain consistency between data and message operations then?</p><h2 id="web-tier-consistency"><a class="markdown-anchor" href="#web-tier-consistency">🔗</a>Web tier consistency</h2><p>The NServiceBus TransactionalSession feature allows the Outbox feature to work in the web tier. Using the TransactionalSession feature, you can store data and send messages in the web tier without having to refactor your controller or redesign your UI.</p><p>Combining the outbox with the transactional session will solve the problem of messages sent or published outside the context of a message handler while maintaining consistency between your message and data operations.</p><p>Let’s see how easy it is to set up this feature using NServiceBus.</p><p>The first step is to add a reference to the NuGet package related to your chosen persistence and register it. 
<sup id="fnref:3:190225"><a href="#fn:3:190225" rel="footnote">3</a></sup> Next, the NServiceBus configuration code will need to enable the Outbox and TransactionalSession features:</p><pre><code class="language-csharp">endpointConfiguration.EnableOutbox();

// Each persistence has a specific configuration method
var persistence = endpointConfiguration.UsePersistence&lt;YourPersistenceOfChoice&gt;();
persistence.EnableTransactionalSession();</code></pre><p>Once you’ve configured the endpoint, an <code>ITransactionalSession</code> can be injected into your controllers and used inside your controller actions.</p><pre><code class="language-csharp">public async Task&lt;IActionResult&gt; Post(
    MyModel model,
    [FromServices] ITransactionalSession session,
    CancellationToken cancellationToken)
{
    await session.Open(new YourPersistenceOpenSessionOptions(), cancellationToken: cancellationToken);
    await session.Send(new YourMessage(), cancellationToken);
    await session.Commit(cancellationToken);
    return Ok();
}</code></pre><p>Now that everything is configured, you can easily benefit from using <code>TransactionalSession</code> in your controllers and leave the big refactoring for later.</p><h2 id="the-big-win"><a class="markdown-anchor" href="#the-big-win">🔗</a>The big win</h2><p>Previously, trying to introduce messaging to our controllers involved moving all of the logic to the back end to avoid consistency problems like <a href="https://docs.particular.net/nservicebus/outbox/#the-consistency-problem">zombie records and ghost messages</a>. 
This wasn’t always possible without significant code refactoring or UI redesigns.</p><p>By <a href="https://docs.particular.net/nservicebus/transactional-session/">using the TransactionalSession feature</a>, you can still keep some of those mixed concerns in your controllers, while staying safe from any data inconsistencies in your system.<br>It is still ideal to keep the <a href="https://en.wikipedia.org/wiki/Single-responsibility_principle">Single Responsibility principle</a> in mind and, as much as possible, try to extract any data operations from the controllers to be handled asynchronously as part of a message handler. But that work can come later; you don’t have to do it <em>right now</em>.</p><p>In short, <code>TransactionalSession</code> mitigates risky and rushed code changes while maintaining consistency. You can defer refactoring until you’re no longer facing endless requests from business stakeholders while racing towards hard deadlines, and make these changes when you have a bit more breathing room to refactor without burning the house down.</p><p>You can focus on your business priorities without investing time in a big refactor or overcomplicated configuration. The clean code aspect can be handled later.</p><p>Check out the <a href="https://docs.particular.net/nservicebus/transactional-session/">transactional session documentation</a> or download one of our <a href="https://docs.particular.net/samples/transactional-session/">transactional session samples</a> to get started.</p><p>Using the transactional session keeps things simple and reliable; it guarantees atomicity with the infrastructure and technology already at your disposal. You get that rock-solid atomicity without the hassle of a massive overhaul or confusing setup. Your existing tools are all you need!</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;It doesn’t matter if you’re developing using MVC, WebAPI, or Razor pages—you want your controller code to be nice and lean. The more bloated that code is, the more coupling you have, and the closer you are to an unmanageable big ball of mud.&lt;/p&gt;
&lt;p&gt;You probably already know that, but I’d bet not all of your controller code is as lean as you’d like it to be. Is it?&lt;/p&gt;
&lt;p&gt;So that leaves the question… How do we get there?&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Managing success and growing pains</title>
    <link href="https://particular.net/blog/managing-success-and-growing-pains"/>
    <id>https://particular.net/blog/managing-success-and-growing-pains</id>
    <published>2024-12-10T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
<content type="html"><![CDATA[<p>Every software system evolves through different stages of complexity. They start simple—attempting to solve a problem that might not yet be well-defined. As they grow, problems become more well-defined, and then they grow some more. Just like with lanky teenagers, this growth can sometimes cause growing pains. A skilled architect knows how to watch for the signs of these growing pains and how to apply more robust architectural patterns to ensure the system can continue to grow and flourish.</p><p>This post is the story of the growing pains experienced by our friends at VECOZO, a system integrator that ensures safe communication between numerous healthcare-related companies. They knew it would be irresponsible to design every piece of software to handle massive scale, even if it only had a few users. So, when the architects started to see telltale signs, they knew it was time to deploy more robust architectural patterns.</p><p>As you read on, maybe you’ll find that some of the challenges they faced sound familiar…</p><span id="more"></span><h2 id="unblocking"><a class="markdown-anchor" href="#unblocking">🔗</a>Unblocking</h2><p>Initially, an application could call another application using synchronous remote calls. During each request, the calling application had to wait for the remote system to complete. But that meant a problem in one application affected the other applications as well. To address this, VECOZO evaluated different solutions that suited their needs and switched to asynchronous messaging instead of synchronous calls. Rather than building a solution by hand, they decided to use a ready-made framework called NServiceBus.</p><p>NServiceBus worked so well that they introduced messaging in multiple other applications and added more NServiceBus features to the system. 
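</p><p>In code, the shift from synchronous calls to messaging is roughly the following sketch. The service, endpoint, and message names are illustrative, not VECOZO’s actual code:</p><pre><code class="language-csharp">// Before: a synchronous remote call. The caller blocks, and if the
// claims service is down, the call fails immediately
var response = await httpClient.PostAsJsonAsync("claims/layout-checks", claim);
response.EnsureSuccessStatusCode();

// After: an asynchronous message. The request waits in a queue until
// the claims service is available to process it
await messageSession.Send(new PerformLayoutChecks(claim.Id));</code></pre><p>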
Other departments caught on and introduced NServiceBus in their relationship management and healthcare purchasing systems.</p><p>Let’s look deeper at the problems VECOZO experienced with synchronous remote calls and what we need to remember when using messaging.</p><h2 id="when-it’s-synchronous"><a class="markdown-anchor" href="#when-it’s-synchronous">🔗</a>When it’s synchronous</h2><p>Synchronous remote calls, whether over HTTP or any other protocol, assume all involved parties are available, ready, and quick to respond upon request. A glitch in the request chain will ripple back to the original caller like a snowball rolling down a slope. If the caller is a few hops away from the failure, it cannot survive the error because it lacks context about the original reasons for the exception.</p><p>Systems evolve, and decisions made early can have long-lasting and unforeseen consequences. Usually, it’s not a problem until suddenly it’s a <em>big</em> problem. An experienced team doesn’t necessarily prevent every single issue like this, but when issues do happen, they diagnose them quickly and take proper steps to mitigate them, as was the case here.</p><p>When VECOZO started suffering from those effects, they laid out a plan to address the limitations of the current design, namely, how to reduce the temporal coupling introduced by the synchronous remote procedure calls.</p><h2 id="what-about-retries"><a class="markdown-anchor" href="#what-about-retries">🔗</a>What about retries</h2><p>To solve the issue, the team could have introduced an <a href="https://particular.net/blog/but-all-my-errors-are-severe">automatic error retry</a> mechanism as a short-term fix. When a remote procedure call failed, the calling code would retry it.</p><p>However, determining an appropriate retry strategy can be more art than science. How many times do you retry? Do you use an exponential backoff strategy? 
What happens if your retries inadvertently cause a denial of service attack? After investigating this option, the team realized such a solution might not be ideal.</p><h2 id="messaging-a-better-approach"><a class="markdown-anchor" href="#messaging-a-better-approach">🔗</a>Messaging, a better approach</h2><p>A message-based solution replaces synchronous procedure calls with sending a message asynchronously. Essentially, instead of saying, “Hey claims service, can you perform layout checks on this claim? I’ll wait,” the calling process would say: “Hey claims service, can you perform layout checks on this claim? Take your time; I’ll continue when you’re done.”</p><p>Now, whether a component was available or not didn’t matter anymore. Async messages replaced direct calls. Messages naturally stack up in a queue, waiting for the component to become available again. So, while the part of the team still working on a home-grown solution was investigating rate limiting, the developers switching to NServiceBus worried much less about overloading their components. They experimented with the optimal number of threads that would process messages in parallel. As a result, the components could go as fast as possible without ever overloading resources like a database.</p><p>That’s not to say that the team ignored a retry strategy. Since NServiceBus <a href="https://docs.particular.net/architecture/recoverability#transient-errors">has this functionality built-in</a>, it was easy to enable. And as a bonus, when a process did fail, the offending message could be delivered to an error queue along with the context of the failure in the form of the exception details. The operations and development teams could work together to investigate the issue, define a fix, deploy it, and finally put the <a href="https://docs.particular.net/servicepulse/intro-failed-message-retries">message back in the queue</a> to retry. 
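</p><p>As a sketch, a retry policy in NServiceBus is a few lines of endpoint configuration. The numbers below are illustrative values you’d tune for your own system:</p><pre><code class="language-csharp">var recoverability = endpointConfiguration.Recoverability();

// A few immediate retries for transient glitches, such as deadlocks
recoverability.Immediate(immediate =&gt; immediate.NumberOfRetries(3));

// Then delayed retries with an increasing back-off for longer outages
recoverability.Delayed(delayed =&gt; delayed
    .NumberOfRetries(2)
    .TimeIncrease(TimeSpan.FromSeconds(10)));

// Messages that still fail end up in an error queue for inspection
endpointConfiguration.SendFailedMessagesTo("error");</code></pre><p>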
The business processes could resume where they left off, with only a short delay.</p><h2 id="messaging-considerations"><a class="markdown-anchor" href="#messaging-considerations">🔗</a>Messaging considerations</h2><p>Things are never as straightforward as they appear on the surface, and moving from synchronous calls to asynchronous ones is no different. Due to their nature, synchronous calls often rely on ordering. Things will happen as they are declared in code—one after the other. Changing processes to be asynchronous introduces entropy. Processes can no longer rely on the steps’ execution order, as messages are <a href="https://particular.net/blog/you-dont-need-ordered-delivery">processed out of order</a>, and thus, certain components require some redesign.</p><p>Another key difference between retrying a synchronous remote call and retrying a message is who’s responsible for what. When the invoker sees its call fail and needs to retry, it has little to no knowledge about the context of the failure, and as such, its options are limited to a backoff retry policy.</p><p>When using messages, the failing message fails at the receiver end. The receiver has all the context and knowledge to make more educated decisions about how to retry and whether it’s worth it.</p><p>By forcing the sender/invoker to retry, we’re violating an ownership boundary by making the receiver’s problems a sender/invoker concern. We should never offload issues onto someone with little to no knowledge of how to address them.</p><h2 id="retrospective"><a class="markdown-anchor" href="#retrospective">🔗</a>Retrospective</h2><p>Even though NServiceBus provides developers with the flexibility to do what they want in code, the team appreciated the NServiceBus “pit of success” philosophy, which makes it harder to do things the wrong way. Best practices are everywhere, embedded inside NServiceBus and its API and in great documentation. It provides a standard way of working, percolating throughout the system. 
The team especially appreciated the Particular Software support team, consisting solely of NServiceBus developers with experience building complex, distributed systems. Having one solution removed the need to create, maintain, and document a homemade framework.</p><p>The focus on code was one of the best features that made the team choose NServiceBus. The developers already felt most comfortable in a code editor, with the ability to safely commit changes to source control.</p><p>Automatic retries, error queues, and the design of loosely coupled event-driven applications paved the way for adding new functionalities by adding new subscribers for existing events, as well as no-downtime releases during working hours. Developers and operations personnel had the safety measures to release confidently.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>Any software system used for a considerable amount of time will go through growing pains as it graduates from prototype to minimum viable product to critical business system. The patterns used to accelerate its development in the early stages can’t always provide the stability and scalability required later in life.</p><p>The key for software professionals is to recognize the patterns that indicate a system is beginning to grow beyond its architectural capabilities and how to replace earlier architectural patterns like HTTP and synchronous calls with more robust patterns like asynchronous messaging.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Every software system evolves through different stages of complexity. They start simple—attempting to solve a problem that might not yet be well-defined. As they grow, problems become more well-defined, and then they grow some more. Just like with lanky teenagers, this growth can sometimes cause growing pains. A skilled architect knows how to watch for the signs of these growing pains and how to apply more robust architectural patterns to ensure the system can continue to grow and flourish.&lt;/p&gt;
&lt;p&gt;This post is the story of the growing pains experienced by our friends at VECOZO, a system integrator that ensures safe communication between numerous healthcare-related companies. They knew it would be irresponsible to design every piece of software to handle massive scale, even if it only had a few users. So, when the architects started to see telltale signs, they knew it was time to deploy more robust architectural patterns.&lt;/p&gt;
&lt;p&gt;As you read on, maybe you’ll find that some of the challenges they faced sound familiar…&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>What they don&#39;t tell you about migrating a message-based system to the cloud</title>
    <link href="https://particular.net/blog/messaging-bridge-migrating-to-the-cloud"/>
    <id>https://particular.net/blog/messaging-bridge-migrating-to-the-cloud</id>
    <published>2023-09-12T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>Migrating a message-based system from on-premises to the cloud is a colossal undertaking. If you search for “how to migrate to the cloud”, there are reams of articles that encourage you to understand your system, evaluate cloud providers, choose the right messaging service, and manage security and compliance.</p><p>Curiously, what’s often missing from the discussion is <em>details</em>, like how to handle all the other on-prem systems that integrate with your system, both upstream and downstream, that can’t be upgraded at the same time. This gets even more tricky when those integrations are over on-prem-only technologies, like MSMQ, that don’t integrate out-of-the-box with cloud alternatives like Azure Service Bus or Amazon SQS. It’s as if they’re saying, “Have you documented your system? Great! Have you chosen a cloud provider? Awesome! Do you have all the services in place? Wonderful! Now go rewrite all your code… we’ll wait…are you done yet?..What are you looking at me for? I’ve already told you to plan carefully, I can’t do EVERYTHING for you”</p><span id="more"></span><p><img src="/images/blog/2023/bridge-migrate-to-cloud/msmq-to-azure.png" alt="&quot;We'll take care of the rest in sprint planning&quot;"></p><p>In short, there’s a big gap between “everything works on-prem” and “everything works entirely on the cloud” that often gets glossed over. So we’re going to explore this scenario with a small, fictitious airline, called (and really, what else would we call it) ParticulAir.</p><h2 id="i-want-to-move-one-of-my-on-prem-systems-to-the-cloud"><a class="markdown-anchor" href="#i-want-to-move-one-of-my-on-prem-systems-to-the-cloud">🔗</a>I want to move one of my on-prem systems to the cloud</h2><p>ParticulAir has a legacy system that’s been running successfully for many years with a number of features, including flight upgrades. 
These upgrades are handled asynchronously, as the airline wants to prioritize upgrades for its most valuable frequent flyers over others. Technically, this is all done over MSMQ where requests are processed and eventually granted or rejected, notifying other services of the outcome. Here’s a simplified diagram of how that works:</p><p><img src="/images/blog/2023/bridge-migrate-to-cloud/basic-flow.png" alt="Basic message flow with MSMQ"></p><p>Now, the business wants a new mobile app that will enable users to do all of the things currently available over the web, including requesting flight upgrades. They’re also thinking of migrating the legacy system to the cloud to save on costs, gain dynamic scaling, and enjoy all the other benefits of the cloud.</p><p>While they would like to eventually migrate/refactor/rewrite the system to be cloud-native, that could potentially take years for a big system. However, if they could get that new mobile app up and running by integrating it with the existing systems, that shorter time-to-market would definitely be appreciated.</p><p>Luckily, there’s a way to do just that.</p><h2 id="the-messaging-bridge-pattern"><a class="markdown-anchor" href="#the-messaging-bridge-pattern">🔗</a>The Messaging Bridge Pattern</h2><p>The <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessagingBridge.html">Messaging Bridge</a> is an intermediary component that receives messages from one queuing system and transfers them to a compatible queuing system elsewhere.</p><p>In ParticulAir’s case, that would mean the cloud-hosted back-end for the mobile app would put a message in the relevant cloud queuing service (Azure Service Bus or Amazon SQS) and use the “bridge” to route it to the legacy MSMQ on-prem system. 
Here’s what that would look like:</p><p><img src="/images/blog/2023/bridge-migrate-to-cloud/with-bridge.png" alt="The messaging bridge takes Azure Service Bus messages and sends them to MSMQ"></p><p>The immediate benefit of a bridge in this scenario is that new functionality (e.g. the mobile app) can be built using modern, cloud-based technology while still leveraging the tried-and-true code in the various legacy systems. This provides some breathing room for the cloud migration. New features can be added without having to re-write the legacy system at the same time. Even better, depending on the implementation of the bridge, the legacy systems may not even need to be touched at all. As long as they receive the MSMQ messages with the required data, they shouldn’t care where they originated.</p><p>Now eventually, ParticulAir <em>does</em> want to migrate their systems away from the on-prem, MSMQ technology. This is another instance where the Messaging Bridge Pattern can help. With a bridge in place, the entire system doesn’t need to be migrated all at once. Instead, a more gradual process can be used, moving one endpoint at a time from MSMQ to the cloud, with the bridge transparently taking care of the routing. This can remove a lot of the complexity and risk inherent in a large-scale migration. Let’s see how with an example.</p><h2 id="don’t-everyone-migrate-all-at-once-now"><a class="markdown-anchor" href="#don’t-everyone-migrate-all-at-once-now">🔗</a>Don’t everyone migrate all at once now</h2><p>Remember from the diagram above that the Upgrade component also publishes UpgradeFulfilled events that the Marketing component listens to (all using MSMQ). When that Upgrade component is migrated to Azure, the very same events it publishes will go to an Azure Service Bus topic called “UpgradeFulfilled”. 
With a bridge in place, configured to route messages from the “UpgradeFulfilled” Azure Service Bus topic to the MSMQ “UpgradeFulfilled” queue, the Marketing component can continue running unchanged in the on-prem environment.</p><p><img src="/images/blog/2023/bridge-migrate-to-cloud/one-service-migrated.png" alt="After the Upgrade component has been migrated to Azure Service Bus"></p><p>Without using some kind of bridge, both components would need to be migrated or at the very least “duplicated” (after modifying the on-prem component to talk to a cloud-accessible database). The thing is, that Marketing component probably talks to other components itself, which would then have to go through the same migration or duplication exercise (together with the components they talk to).</p><p>This is far riskier than if just one component could be migrated, because it means all those components would need to be tested and deployed in tandem. Imagine if any issues arose during testing or (<em>shudder</em>) in production, and you had to pinpoint where the problem lay. This is <em>much</em> easier if you deployed only a single component rather than a series of interdependent ones. Not to mention that it would be far easier to roll back a single component to a previous version. It gets even more complicated if the different components are managed by different teams.</p><p>All of this would also slow down the timeline for the mobile application to release its flight upgrade feature.</p><p>These problems go away if you have the ability to migrate a single endpoint at a time. Once a messaging bridge is in place and configured, teams can migrate their endpoints however they see fit without worrying about how their outgoing messages get routed to other endpoints.</p><p>So far, we’ve been almost as hand-wavy as most of the traditional cloud migration literature has been. 
It’s all well and good to say, “just use a bridge”, but how do you implement one?</p><p>Here’s the good part: you don’t have to.</p><h2 id="the-nservicebus-messaging-bridge"><a class="markdown-anchor" href="#the-nservicebus-messaging-bridge">🔗</a>The NServiceBus Messaging Bridge</h2><p>The <a href="https://docs.particular.net/nservicebus/bridge">NServiceBus Messaging Bridge</a> was designed specifically for these scenarios. It’s an implementation of the Messaging Bridge Pattern that takes care of routing messages between different queuing systems.</p><p>In our initial mobile app Flight Upgrade scenario, the bridge sits between the Azure-hosted mobile back-end and the MSMQ-based on-prem system, routing messages from Azure Service Bus to MSMQ:</p><p><img src="/images/blog/2023/bridge-migrate-to-cloud/with-nsb-bridge.png" alt="With the NServiceBus Messaging Bridge"></p><p>The code in the mobile back-end can send messages to an Azure Service Bus queue named the same as the MSMQ queue of the on-premises upgrade component, say, “ParticulAir.UpgradeService”, ignoring that a bridge is being used at all, as if the upgrade component were also hosted in Azure. By configuring the NServiceBus Messaging Bridge appropriately, messages from that Azure Service Bus queue will be transparently forwarded on to where they need to go.</p><p>This means when we eventually migrate our Upgrade component to Azure, listening to the “ParticulAir.UpgradeService” Azure Service Bus queue, we won’t need to touch our mobile back-end. Instead, we would reconfigure the Bridge to stop listening to the “ParticulAir.UpgradeService” queue and have it listen to the “ParticulAir.UpgradeFulfilled” Azure Service Bus topic, forwarding those events over MSMQ to the downstream Marketing component, which wouldn’t need to be modified either.</p><p>Through this process, we could migrate all the relevant components in this scenario, one at a time, to run on the cloud. 
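</p><p>For a sense of what this looks like in practice, here is a rough sketch of a bridge host for the scenario above. The endpoint name comes from the example, but the hosting code and connection string are placeholders, and the exact configuration API may differ between versions of the NServiceBus.MessagingBridge package, so treat the official documentation as authoritative.</p>

```csharp
// Sketch of a bridge host: messages arriving on the Azure Service Bus
// "ParticulAir.UpgradeService" queue are forwarded to the MSMQ-hosted
// Upgrade endpoint. Requires the NServiceBus.MessagingBridge package plus
// the MSMQ and Azure Service Bus transport packages.
using Microsoft.Extensions.Hosting;

var bridgeConfiguration = new BridgeConfiguration();

// The Upgrade endpoint physically runs on MSMQ on-prem...
var msmq = new BridgeTransport(new MsmqTransport());
msmq.HasEndpoint("ParticulAir.UpgradeService");
bridgeConfiguration.AddTransport(msmq);

// ...while the mobile back-end sends to Azure Service Bus. The bridge
// creates a matching queue there and shovels messages across.
var azureServiceBus = new BridgeTransport(
    new AzureServiceBusTransport("<connection-string>"));
bridgeConfiguration.AddTransport(azureServiceBus);

await Host.CreateDefaultBuilder(args)
    .UseNServiceBusBridge(bridgeConfiguration)
    .Build()
    .RunAsync();
```

<p>When the Upgrade endpoint later moves to Azure, only this configuration changes; the sending and receiving endpoints stay untouched.</p><p>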
When the last component is migrated, we’d remove the Bridge from the solution completely.</p><p><img src="/images/blog/2023/bridge-migrate-to-cloud/migrated-system.png" alt="No more bridge!"></p><p>Until this goal is met, however, the messaging bridge can make sure your migration happens safely and in smaller, more manageable chunks.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>We’ll reiterate the standard advice that migrating a complex distributed system to the cloud requires a well-planned, incremental approach that maintains system integrity and minimizes risks.</p><p>The Messaging Bridge Pattern can be a crucial component of your migration and, if your system uses NServiceBus, you can even wash your hands of most of the implementation details.</p><p>To see it in action, check out our sample on <a href="https://docs.particular.net/samples/bridge/azure-service-bus-msmq-bridge/">bridging messages between endpoints using MSMQ and Azure Service Bus</a>.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Migrating a message-based system from on-premises to the cloud is a colossal undertaking. If you search for “how to migrate to the cloud”, there are reams of articles that encourage you to understand your system, evaluate cloud providers, choose the right messaging service, and manage security and compliance.&lt;/p&gt;
&lt;p&gt;Curiously, what’s often missing from the discussion is &lt;em&gt;details&lt;/em&gt;, like how to handle all the other on-prem systems that integrate with your system, both upstream and downstream, that can’t be upgraded at the same time. This gets even more tricky when those integrations are over on-prem-only technologies, like MSMQ, that don’t integrate out-of-the-box with cloud alternatives like Azure Service Bus or Amazon SQS. It’s as if they’re saying, “Have you documented your system? Great! Have you chosen a cloud provider? Awesome! Do you have all the services in place? Wonderful! Now go rewrite all your code… we’ll wait…are you done yet?..What are you looking at me for? I’ve already told you to plan carefully, I can’t do EVERYTHING for you”&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Using anti-requirements to find system boundaries</title>
    <link href="https://particular.net/blog/antirequirements"/>
    <id>https://particular.net/blog/antirequirements</id>
    <published>2023-05-23T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>We all love building greenfield projects. <sup id="fnref:1:230523"><a href="#fn:1:230523" rel="footnote">1</a></sup> But inevitably, starting a new project involves lots of meetings with business stakeholders to hash out initial requirements and canonical data models. Those are…not so fun.</p><p>When one of those meetings occurs after a carb-heavy lunch, it’s easy for your mind to drift away…back to those university lectures about entity design. Think of the nouns and what attributes they have. A dog and a cat are both animals and have 4 legs. Except now it’s Customers, Orders, Products, and Shopping Carts.</p><p>Is this the best way to build a system, though? Didn’t we do the exact same thing on the previous greenfield project we’re now rewriting? Surely we won’t make all the same mistakes we made last time…right?</p><span id="more"></span><h2 id="one-cart-to-rule-them-all"><a class="markdown-anchor" href="#one-cart-to-rule-them-all">🔗</a>One cart to rule them all</h2><p>As the meeting continues, the design for the shopping cart begins to take shape. <code>ShoppingCart</code> is a noun, after all, and it’s got a list of items in it, each of which has simple attributes like <code>Price</code> and <code>Quantity</code>. Here’s the shopping cart part of the entity relationship diagram we’ll print out and keep at our desk <sup id="fnref:2:230523"><a href="#fn:2:230523" rel="footnote">2</a></sup> like a holy article of software design scripture:</p><p><img alt="First version of the cart" src="/images/blog/2023/anti-requirements/cart-1.png" class="center text-center" style="max-width: 250px;margin-left:auto !important;margin-right:auto !important;"></p><p>We’ve also realized that a cart has some behavior associated with it as well, operations like <code>AddToCart()</code>, <code>SaveForLater()</code>, and <code>Checkout()</code>. 
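</p><p>Sketched in code, that whiteboard design might come out something like this. The class shape is invented for illustration; only the operation names and the <code>Price</code> and <code>Quantity</code> attributes come from the discussion above.</p>

```csharp
// The naive "noun with attributes" cart: one entity owning all the data
// and all the behavior. Member names beyond those mentioned in the text
// are illustrative.
using System;
using System.Collections.Generic;

public class ShoppingCart
{
    public Guid CartId { get; } = Guid.NewGuid();
    public List<CartItem> Items { get; } = new();

    public void AddToCart(CartItem item) => Items.Add(item);
    public void SaveForLater(CartItem item) { /* move to a wish list */ }
    public void Checkout() { /* turn Items into an order */ }
}

public class CartItem
{
    public string ProductName { get; set; } = "";
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}
```

<p>It looks tidy on the whiteboard. Let’s see how long that lasts.</p><p>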
So we’re now combining data and behavior together…this is essentially an <strong>aggregate</strong> which means now we’re doing domain-driven design!</p><h2 id="more-attributes-more-problems"><a class="markdown-anchor" href="#more-attributes-more-problems">🔗</a>More attributes, more problems</h2><p>During development, we start to see some flaws in the plan.</p><p>First, we learn that if the price of an item goes down, the new lower price should also be reflected in the shopping cart. So whenever a price changes, we must copy that value to any shopping cart containing that item. However, if the price of an item goes up, we need to warn the user about it and make them accept the new price. So now the cart items need to store the current price <em>and</em> the previous price, and we have to do a lot of copying whenever any related data changes.</p><p>Next, we realize that we need the inventory level to accurately reflect the available inventory in the warehouse. The business intends to use this to pressure customers to purchase before it’s gone. <sup id="fnref:3:230523"><a href="#fn:3:230523" rel="footnote">3</a></sup></p><p>To keep this value up-to-date, every time the inventory of any item changes in the warehouse, we would need to check every active shopping cart for an instance of that item and update its value. You may be able to join tables to get this information, but that’s not always an option. For example, you might need the data to be denormalized for performance, or the warehouse data might exist on a physically different system that can’t participate in a database join.</p><p>It gets worse. As it turns out, we discover similar concerns around delivery estimates, item names, and descriptions. So every time any of these values change, they’ll also need to be copied from their source of truth to any shopping cart with a matching item. 
At least the marketing folks insist that changes to product names and descriptions should be infrequent and primarily limited to typos. Let’s hope that’s true.</p><p>So now our shopping cart starts to look a lot messier, and we’re starting to get worried thinking about all the <a href="/blog/death-to-the-batch-job">batch jobs</a> we’ll need to write to keep this thing updated.</p><p><img src="/images/blog/2023/anti-requirements/cart-2.png" alt="The shopping cart has gotten complex"></p><p>Our Cart object no longer looks like a proper DDD aggregate, with everything dependent upon everything else and data being copied everywhere.</p><p>The sinking feeling of déjà vu from the old project starts to creep in. What happened? And, more importantly, how can we fix it?</p><h2 id="anti-requirements-to-the-rescue"><a class="markdown-anchor" href="#anti-requirements-to-the-rescue">🔗</a>Anti-requirements to the rescue</h2><p>To help decompose a complex domain, we can use <strong>anti-requirements</strong> <sup id="fnref:4:230523"><a href="#fn:4:230523" rel="footnote">4</a></sup> to find attributes incorrectly lumped together on the same entity. Using anti-requirements is a powerful way to increase autonomy by breaking your domain into separate islands that can evolve independently. <sup id="fnref:5:230523"><a href="#fn:5:230523" rel="footnote">5</a></sup></p><p>Anti-requirements are deceptively simple: you create some fake requirement concerning two attributes and present it to business stakeholders. “If the product has more than 20 characters in its name,” you say to them, “then its price must be at least $20.”</p><p>When they laugh at you, that’s a hint that although those two attributes are verbally associated with the same noun, there isn’t any meaningful logical relationship between them. <sup id="fnref:6:230523"><a href="#fn:6:230523" rel="footnote">6</a></sup></p><p>Without anti-requirements, teasing out these details can be tricky. 
Business domain experts tend to think of this stuff as <em>obvious</em>, which makes them unlikely to volunteer this information. They’re generally surprised that developers don’t know it already. That makes it our job as developers and architects to dig for it.</p><p>So with this in mind, let’s go back to our shopping cart and ask ourselves: Will the business people think I’ve lost it if I ask what business rules might operate on Attribute A and Attribute B? If the answer is yes, you’ve likely found an anti-requirement.</p><h2 id="a-new-and-improved-cart"><a class="markdown-anchor" href="#a-new-and-improved-cart">🔗</a>A new and improved cart</h2><p>Let’s start teasing out some anti-requirements and see what effect that has on our shopping cart, beginning with the concept of price.</p><ul><li>When the price of a product exceeds $100, the name should be changed to all caps. <em>Ridiculous!</em></li><li>When a product description is longer than 3000 characters, the price should be increased by 10%. <em>Ludicrous!</em></li><li>When the inventory for an item is higher than 1000, we should charge 10% more. <em>Inconceivable!</em></li></ul><p>But wait, we need to be careful. When hearing that last anti-requirement, our business stakeholder <em>might</em> say that while that is indeed inconceivable, it <em>could</em> be possible that we’d need to charge more when inventory is low. After all, that’s just supply and demand in action. By using anti-requirements in this way, you might accidentally discover business requirements that could have been overlooked otherwise.</p><p>But whatever anti-requirements we dream up, it remains clear that price and quantity are related. 
After all, you must multiply <code>price</code> × <code>quantity</code> to get the total cost.</p><p>This suggests that the highly-coupled price and quantity values could be extracted elsewhere.</p><p><img src="/images/blog/2023/anti-requirements/cart-3.png" alt="Price and quantity extracted to Sales"></p><p>In the same way, we can start to analyze other pairs of attributes, crafting anti-requirements for each and using how ridiculous they sound to determine whether to extract other groups of attributes that are more tightly coupled.</p><ul><li>The name of a product affects estimated delivery because we ship products alphabetically. <em>Absurd!</em></li><li>We must update the description of an item every time the inventory level changes. <em>Preposterous!</em></li><li>The more inventory we have of an item, the longer it will take to ship them. <em>Wackadoodle!</em> <sup id="fnref:7:230523"><a href="#fn:7:230523" rel="footnote">7</a></sup> <sup id="fnref:8:230523"><a href="#fn:8:230523" rel="footnote">8</a></sup></li></ul><p><img src="/images/blog/2023/anti-requirements/cart-4.png" alt="Final view of the shopping cart in multiple services"></p><p>Remember that shopping cart entity? We used anti-requirements as a club to bash it into pieces. It turns out that while a shopping cart is a noun used by the business, there is no “cart” anymore…only a simple <code>CartId</code> rather than a full-blown entity or aggregate.</p><p>Eagle-eyed readers will notice here that the <code>Quantity</code> is not owned by any one thing but is shared between Sales, Shipping, and Warehouse. It’s important to realize that even single attributes don’t always mean the same thing. In Sales, quantity is a multiplier for the price. In Shipping, it’s how many items to put in a box…or even multiple boxes. In Warehouse, it’s how many things to reserve and restock. 
The values just happen to come from the same place, and we’ll show how to handle that a little later.</p><p>This shows that not all the nouns the business uses need to have a corresponding entity in your domain model.</p><h2 id="improved-efficiency"><a class="markdown-anchor" href="#improved-efficiency">🔗</a>Improved efficiency</h2><p>Grouping together only the data that changes together has a lot of technical and organizational advantages as well.</p><p>From a technical perspective, attributes that change together should also be cached similarly. For example, after a product is published, its name and description do not change frequently and can be cached for a long time, but price and inventory would probably change a lot more. Storing those attributes in different entities allows us to use the most appropriate caching strategy for each. In our case, serving the product name and description from a JSON file hosted on a content delivery network (CDN) might be a better and more scalable approach than using a server-side cache like Redis.</p><p>In fact, if you’re storing product images on a CDN based on a convention like <code>https://mycdn.com/products/{ProductID}/{Size}.png</code>, then you’ve already begun decomposing your domain using these strategies.</p><p>From an organizational perspective, you no longer need to get all the business stakeholders together simultaneously. 
<sup id="fnref:9:230523"><a href="#fn:9:230523" rel="footnote">9</a></sup> If you need to add the capability to deliver digital goods that don’t require shipping, the number of people who need to be involved is significantly reduced to only those with insight into the relevant parts of the “cart.”</p><p>The only remaining problem for our shopping cart is taking all of Humpty Dumpty’s pieces and putting them back together again.</p><h2 id="viewmodel-composition"><a class="markdown-anchor" href="#viewmodel-composition">🔗</a>ViewModel composition</h2><p>Everything has a cost, and decomposing using anti-requirements is no different. Each component gains greater autonomy, but it can feel (at first) that this comes at the price of greater complexity.</p><p>Our users still think of a “shopping cart” as a thing and expect to see all the attributes we’ve separated on a shopping cart page together.</p><p>We can integrate all the shopping cart attributes on the same page using a strategy called <a href="https://www.viewmodelcomposition.com/"><strong>ViewModel composition</strong></a>. With tools like <a href="https://github.com/ServiceComposer/ServiceComposer.AspNetCore">ServiceComposer</a>, independent components provided by services or microservices <sup id="fnref:10:230523"><a href="#fn:10:230523" rel="footnote">10</a></sup> can query their own data from disparate back-end systems and combine it into a single ViewModel without reintroducing coupling at the UI layer.</p><p>In ViewModel composition, each component registers its interest in providing data for specific URI route patterns. Then, for each web request, all the interested data providers are asked to fetch their data, which is added to a dynamic ViewModel. Finally, a separate service (let’s call it Branding) takes the ViewModel and renders it to HTML.</p><p>The story is similar for POST requests. 
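</p><p>To make the GET side of this concrete, a composition handler built with ServiceComposer might look roughly like the sketch below. The route, the ViewModel property names, and the hard-coded values are illustrative assumptions, not ServiceComposer requirements.</p>

```csharp
// Sketch: the Sales service contributes its attributes to the cart page's
// ViewModel. Shipping and Warehouse would register similar handlers for the
// same route. Requires the ServiceComposer.AspNetCore package.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using ServiceComposer.AspNetCore;

public class SalesCartHandler : ICompositionRequestsHandler
{
    [HttpGet("/cart/{id}")]
    public Task Handle(HttpRequest request)
    {
        var id = request.RouteValues["id"];

        // In a real system this would query the Sales data store.
        dynamic vm = request.GetComposedResponseModel();
        vm.CartId = id;
        vm.Price = 9.99m;   // Sales owns price...
        vm.Quantity = 2;    // ...and its own meaning of quantity

        return Task.CompletedTask;
    }
}
```

<p>Each handler touches only its own back-end, so no service needs to know the others exist.</p><p>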
Components register handlers for POST routes to communicate back to their respective back-end systems, usually with async messages. This is how each service persists the <code>Quantity</code> value back to its own data store without any knowledge of the other components.</p><p>Using ViewModel composition does add some complexity, at least in the short term. It will not make building the first screen faster, but it <em>will</em> make building the 10th, 50th, and 100th screen faster. Limiting unnecessary coupling, even in the UI layer, makes it easier to continue creating new features well into the future.</p><p>ViewModel composition techniques also allow flexibility in terms of data storage technology. For example, one service could be powered by a traditional relational database. At the same time, another could use a graph database or key-value store, a heavy caching layer with Redis, or even JSON files on a CDN.</p><p>And when a single service only has to worry about its own <a href="https://jimmybogard.com/vertical-slice-architecture/">vertical slice</a> <sup id="fnref:11:230523"><a href="#fn:11:230523" rel="footnote">11</a></sup> of the overall system, you’ll find that a database diagram <em>actually can</em> fit comfortably on a single sheet of paper.</p><p>There’s much more to say about ViewModel composition than can be covered in a single blog post. Check out Mauro Servienti’s <a href="https://milestone.topics.it/series/view-model-composition.html">blog series on ViewModel composition</a> or his webinar <a href="/webinars/all-our-aggregates-are-wrong">All our aggregates are wrong</a> <sup id="fnref:12:230523"><a href="#fn:12:230523" rel="footnote">12</a></sup> to learn more.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>In today’s increasingly complex software systems, the “noun has an attribute” approach to modeling is bound to result in classes, components, and systems that become a mess of coupling. 
Anti-requirements are one strategy we can use to <a href="/webinars/finding-your-service-boundaries-a-practical-guide">find our logical service boundaries</a>, helping us to discover which attributes belong together and which have no business being anywhere near each other.</p><p>Over time, too much coupling causes the system to evolve into a big ball of mud. Eventually, making changes anywhere without breaking something seemingly unrelated becomes impossible.</p><p>Organizations that decouple into autonomous services will be able to be more nimble and deliver value to the business years into the future. After all, unlike most other business projects, <a href="/videos/own-the-future">software isn’t ever really “done”</a>.</p><p>Everyone else will be stuck rewriting the system in 3 years. <em><strong>Again.</strong></em></p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;We all love building greenfield projects. &lt;sup id=&quot;fnref:1:230523&quot;&gt;&lt;a href=&quot;#fn:1:230523&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; But inevitably, starting a new project involves lots of meetings with business stakeholders to hash out initial requirements and canonical data models. Those are…not so fun.&lt;/p&gt;
&lt;p&gt;When one of those meetings occurs after a carb-heavy lunch, it’s easy for your mind to drift away…back to those university lectures about entity design. Think of the nouns and what attributes they have. A dog and a cat are both animals and have 4 legs. Except now it’s Customers, Orders, Products, and Shopping Carts.&lt;/p&gt;
&lt;p&gt;Is this the best way to build a system, though? Didn’t we do the exact same thing on the previous greenfield project we’re now rewriting? Surely we won’t make all the same mistakes we made last time…right?&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Increase your system&#39;s observability with OpenTelemetry support in NServiceBus</title>
    <link href="https://particular.net/blog/open-telemetry-tracing-support"/>
    <id>https://particular.net/blog/open-telemetry-tracing-support</id>
    <published>2023-05-09T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>When code breaks, our first move is carefully inspecting the call stack. It helps us find the needle in the haystack by understanding how, where, and why the failure occurred, including how we got there.</p><p>However, in a message-based system, we no longer have a single call stack. We’ve exchanged it for a <em>haystack of call stacks</em>, which makes finding the needle (the root cause of the failure) even more difficult.</p><span id="more"></span><h2 id="the-problem"><a class="markdown-anchor" href="#the-problem">🔗</a>The problem</h2><p>Because message-driven systems are asynchronous and run in multiple processes, debugging is naturally more complex than in a single-process application. Failures surface as failed messages in a specific endpoint, which could be symptoms of an issue happening further upstream. We need to understand how and where the business transaction started, how the message flow works, the order in which messages were processed, and have a thorough understanding of the components involved.</p><p>Debugging these issues is tedious, cumbersome, and sometimes downright painful. In addition, setting up a debuggable environment is a serious challenge as we need to understand the components involved in the scenario we’re debugging, which could span different solutions, repositories, or endpoints, each of which may require additional infrastructure to be available.</p><p>Luckily, we can address these concerns with observability.</p><h2 id="observability"><a class="markdown-anchor" href="#observability">🔗</a>Observability</h2><p>Observability is how well we can figure out what we <em>don’t</em> know when it comes to system behavior based on its external outputs. 
For example, in a highly observable system, we can easily infer the internal state and behavior of the system without going back into the code to make changes when something unexpected occurs.</p><p>The <a href="https://docs.particular.net/platform/#particular-service-platform">Particular Service Platform</a> includes a wide range of observability features, including ServiceInsight’s <a href="https://docs.particular.net/serviceinsight/sequence-diagram/">sequence diagram</a> and <a href="https://docs.particular.net/serviceinsight/managing-errors-and-retries#the-flow-diagram">flow diagram</a>. When auditing is enabled, ServiceInsight can visualize the flow of messages across multiple NServiceBus endpoints. In addition, you can inspect the message headers and body, allowing us to figure out what’s going on in any flow of messages.</p><p>ServicePulse also exposes a <a href="https://docs.particular.net/monitoring/metrics/">range of metrics</a> that provide insight into the system’s behavior. One of the main benefits of the Particular Service Platform is that it allows for black-box instrumentation. Apart from enabling <a href="https://docs.particular.net/nservicebus/operations/auditing">auditing</a> and <a href="https://docs.particular.net/tutorials/monitoring-setup/">metrics</a> in your configuration, it doesn’t require any code changes to gather instrumentation from your endpoints.</p><p>However, distributed systems are inherently complex and consist of many components: message brokers, databases, distributed caches, integration points, REST APIs, front-ends, and more. All of these components can participate in a business transaction. 
Still, the information captured in messages may represent only a small part of that business transaction—not enough to tell the whole story.</p><h2 id="a-sample-interaction"><a class="markdown-anchor" href="#a-sample-interaction">🔗</a>A sample interaction</h2><p>Let’s consider a system with an order process flow that looks like this:</p><pre><code class="language-mermaid">flowchart TD
    A[ASP.NET Core OrderController] --&gt;|1. Place order| B[Sales]
    B --&gt; |2. Start order process| C[Order saga]
    C --&gt; |3. Charge order| D[Payments]
    D --&gt; |4. Order charged| C[Order saga]
    C --&gt; |5a. Ship order| E[Shipping]
    C --&gt; |5b. Bill order| F[Billing]</code></pre><p>In the controller method, nothing fancy is happening; we’re just mapping the data collected in the web request to a message and sending it to the Sales endpoint.</p><p>But what if we mess up and swap the shipping and billing addresses?</p><p>As far as the Sales endpoint is concerned, that input is perfectly valid. But unfortunately, the endpoints cannot validate that we didn’t swap the data, and inspecting the message bodies won’t give us insight into where we messed up. After all, both look like valid addresses. So now we’re shipping orders to the wrong address and wondering why so many customers are complaining they never received their order.</p><p>We’re facing a blind spot because we can’t investigate what context, decision-making, or flow led to the creation of that initial message. We can’t see the entire story of the interaction all at once. 
The example might be simplistic, but it shows why we need insight into the whole business transaction, especially in large and complex distributed systems where additional decisions are made every step of the way.</p><h2 id="opentelemetry-and-nservicebus"><a class="markdown-anchor" href="#opentelemetry-and-nservicebus">🔗</a>OpenTelemetry and NServiceBus</h2><p>Enter <a href="https://opentelemetry.io/docs/what-is-opentelemetry/">OpenTelemetry</a>: a vendor-agnostic, cross-platform, and open-source standard for observability that defines how we instrument, collect, and export telemetry from our applications. OpenTelemetry includes tools and SDKs for each programming language to capture and export your applications’ metrics, traces, and logs.</p><p>And now, OpenTelemetry is available in NServiceBus.</p><p>OpenTelemetry tracing was adopted in .NET via the <a href="https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.activity">System.Diagnostics.Activity</a> API in .NET 5. <sup id="fnref:1:090523"><a href="#fn:1:090523" rel="footnote">1</a></sup> It’s also available for earlier versions <sup id="fnref:2:090523"><a href="#fn:2:090523" rel="footnote">2</a></sup> using the dedicated <a href="https://www.nuget.org/packages/System.Diagnostics.DiagnosticSource">System.Diagnostics.DiagnosticSource</a> NuGet package.</p><p>With the advances the industry has made in observability through OpenTelemetry, we want to strengthen the observability of the platform further. Therefore, from NServiceBus version 8 onwards, you can enable OpenTelemetry on your NServiceBus endpoints to seamlessly capture instrumentation and export it to your observability backend.</p><p>By enabling available instrumentation libraries, we can observe quite a bit without adding application-specific tracing. 
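</p><p>In an ASP.NET Core host, that wiring is mostly configuration. The sketch below uses the OpenTelemetry SDK with the ASP.NET Core instrumentation library; the service name and choice of exporter are assumptions, so substitute whatever fits your environment.</p>

```csharp
// Sketch: collect spans from ASP.NET Core and NServiceBus and export them.
// Requires the OpenTelemetry SDK, instrumentation, and exporter packages.
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault()
        .AddService("ParticulAir.WebFrontend")) // illustrative service name
    .AddAspNetCoreInstrumentation()             // spans for incoming HTTP requests
    .AddSource("NServiceBus.Core")              // spans emitted by NServiceBus
    .AddOtlpExporter()                          // or Jaeger, Application Insights, ...
    .Build();
```

<p>With both sources registered, HTTP request spans and message-processing spans end up in the same trace.</p><p>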
For example, we can solve the address-swapping example above by enabling instrumentation in both ASP.NET Core and NServiceBus, and then adding information to the spans emitted by these libraries. The ASP.NET Core instrumentation library creates a span that represents the request submitted by the user, to which we can add tags containing the shipping and billing addresses. NServiceBus already adds a span when ingesting a message from the queue, at which point we could add any other relevant tags.</p><p>Whether in a controller method or a message handler, we could add code like this:</p><pre><code class="language-csharp">Activity.Current?.AddTag(&quot;order.id&quot;, request.OrderId);
Activity.Current?.AddTag(&quot;order.shipping_address&quot;, request.ShippingAddress);
Activity.Current?.AddTag(&quot;order.billing_address&quot;, request.BillingAddress);</code></pre><p>If we then compare the tags on the initial request with those on the message processed in the endpoint, we would quickly identify that we reversed the billing and shipping addresses.</p><blockquote><p><em><strong>Note:</strong> Even though we’re using addresses in this example, adding personally identifiable information (PII) to telemetry is <strong>never</strong> a good idea! Remember that this information is sent to an external system, your observability backend. Imagine getting a <a href="https://gdpr.eu/right-to-be-forgotten/">right to be forgotten request</a> to remove data when you have exposed personal information in your telemetry. 
The flow in this article is just a simple example to illustrate the point…and an opportunity to remind you of this best practice at the same time.</em></p></blockquote><p>Here’s a screenshot of what an enriched span looks like in <a href="https://www.jaegertracing.io/">Jaeger</a>:</p><p><img src="/images/blog/2023/opentelemetry/jaeger.jpg" alt="View of an enriched span in Jaeger"></p><p>…but we could also use <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-overview">Azure Application Insights</a>, <a href="https://www.elastic.co/">Elastic</a>, <a href="https://www.honeycomb.io/">Honeycomb</a>, or a different tracing backend. The main advantage of using OpenTelemetry is that you can use any vendor that best suits your needs without changing the instrumentation code or libraries. The only change required would be configuring the appropriate exporter to send your instrumentation data to the right place.</p><p>With OpenTelemetry enabled across all components of a system, we can create system-wide observability and understand which components take part in the business transaction from beginning to end.</p><p>To enable OpenTelemetry in your NServiceBus endpoint, add the following:</p><pre><code class="language-csharp">endpointConfiguration.EnableOpenTelemetry();</code></pre><p>NServiceBus will then capture instrumentation for every incoming and outgoing message operation in an endpoint. If the underlying message queue initiates a trace, NServiceBus will create a child span when processing that message. Otherwise, NServiceBus will create a new one. 
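</p><p>The same parenting applies to spans you start yourself. As a sketch (the source name, message type, and handler are invented for illustration), a handler can create its own child span through the standard <code>ActivitySource</code> API:</p>

```csharp
// Illustrative: a span started inside a handler automatically becomes a
// child of the incoming-message span via the .NET Activity API.
using System.Diagnostics;

static readonly ActivitySource UpgradeSource = new("ParticulAir.Upgrades");

public async Task Handle(RequestUpgrade message, IMessageHandlerContext context)
{
    using var activity = UpgradeSource.StartActivity("check-frequent-flyer-status");
    activity?.AddTag("order.id", message.OrderId);

    // ... handler logic, outgoing sends, etc. ...
    await context.Publish(new UpgradeFulfilled { OrderId = message.OrderId });
}
```

<p>Remember that custom sources are also opt-in: the span above is only exported if the tracer provider registers it, for example with <code>.AddSource("ParticulAir.Upgrades")</code>.</p><p>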
Any subsequent outgoing operations result in child spans on the incoming message span.</p><p>OpenTelemetry operates under an opt-in model, so even though OpenTelemetry is enabled on the endpoint, you still need to set up a <code>TracerProvider</code> to collect that instrumentation and export it to a tool of your choice.</p><pre><code class="language-csharp">var tracingProviderBuilder = Sdk.CreateTracerProviderBuilder()
    .AddSource(&quot;NServiceBus.Core&quot;)
    // ... Add other trace sources
    // ... Add exporters
    .Build();</code></pre><p>Then you can see the full trace:</p><p><img src="/images/blog/2023/opentelemetry/full-trace.jpg" alt="Full trace of order create operation"></p><p>If you want to enrich the telemetry created by NServiceBus, you can do so inside the message handler. You can either start a dedicated span, which will automatically be created as a child span by the .NET Activity API and propagate the context as expected, or add additional tags and events to the active span without creating a new span, as in the example. If you want to add specific details across all spans, you can <a href="https://docs.particular.net/samples/open-telemetry/customizing/">customize the OpenTelemetry traces using a custom processor</a>.</p><p>Logging is important too. We’ve been doing that for years, but with OpenTelemetry’s focus on <a href="https://opentelemetry.io/docs/reference/specification/logs/">telemetry correlation</a> it gets a lot more powerful. We can connect those logs with the traces we’re now collecting to tie it all together. Check out our sample on <a href="https://docs.particular.net/samples/open-telemetry/logging/">connecting OpenTelemetry traces and logs</a> to see how to do that.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>Ready to try it out on your own? 
In that case, we have <a href="https://docs.particular.net/samples/open-telemetry/">multiple samples</a> available for you to try out, including dedicated code samples for <a href="https://docs.particular.net/samples/open-telemetry/prometheus-grafana/">Prometheus and Grafana</a>, <a href="https://docs.particular.net/samples/open-telemetry/application-insights/">Azure Application Insights</a>, and <a href="https://docs.particular.net/samples/open-telemetry/jaeger/">Jaeger</a>. As you use OpenTelemetry to improve your root-cause analysis, please <a href="/contact">let us know</a> how it’s going and how we can further improve the observability of the platform.</p><p>All this is baked right into NServiceBus version 8. However, if you’re still using NServiceBus version 7, you can use <a href="https://github.com/jbogard/NServiceBus.Extensions.Diagnostics">Jimmy Bogard’s community package</a> <sup id="fnref:3:090523"><a href="#fn:3:090523" rel="footnote">3</a></sup> to bridge the gap.</p><p>When you’re debugging a distributed system, haystacks are everywhere. But with OpenTelemetry, you have the necessary tools to find the needle.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;When code breaks, our first move is carefully inspecting the call stack. It helps us find the needle in the haystack by understanding how, where, and why the failure occurred, including how we got there.&lt;/p&gt;
&lt;p&gt;However, in a message-based system, we no longer have a single call stack. We’ve exchanged it for a &lt;em&gt;haystack of call stacks&lt;/em&gt;, which makes finding the needle (the root cause of the failure) even more difficult.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Lost messages are just as bad as lost luggage</title>
    <link href="https://particular.net/blog/lost-messages-are-just-as-bad-as-lost-luggage"/>
    <id>https://particular.net/blog/lost-messages-are-just-as-bad-as-lost-luggage</id>
    <published>2023-04-04T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.181Z</updated>
    
    <content type="html"><![CDATA[<p>You’re standing in the airport, waiting to pick up your bag. Did you ever stop to think about all the software systems involved in tracking your luggage on your journey? From the moment you drop it off at your departure airport to the moment you breathe that sigh of relief as it shows up on the baggage carousel is a complex story of messaging and system integrations.</p><p>I recently had the opportunity to chat with the lead developer for the luggage arrival system at a major Asian international hub. <sup id="fnref:1:040423"><a href="#fn:1:040423" rel="footnote">1</a></sup> He told me how using NServiceBus made it possible to get all the systems in the airport to work together reliably.</p><span id="more"></span><h2 id="reading-from-the-firehose"><a class="markdown-anchor" href="#reading-from-the-firehose">🔗</a>Reading from the firehose</h2><p>When an aircraft is inbound to the airport, the first step is to read all the bag source messages representing the bags in the aircraft’s hold <sup id="fnref:2:040423"><a href="#fn:2:040423" rel="footnote">2</a></sup> from a service called <a href="https://www.collinsaerospace.com/what-we-do/industries/airports/baggage-systems/baglink">ARINC BagLink</a>.</p><p>ARINC is a global service used by many airlines, but it’s not the easiest service to work with. Once you connect and subscribe, you process the information coming in through the TCP connection. This is not JSON or XML information—it’s a low-level byte stream with fields that are fixed numbers of bytes. Only an arcane set of rules determines what constitutes a message, let alone how they’re formed. Translating the stream of incoming bytes into a series of messages can be tricky, given that there’s little to distinguish even where one message ends and the next begins.</p><p>The biggest challenge here is how to reliably process this data because it’s not straightforward to <em>reread</em> the information from ARINC. 
If you miss it the first time, you have a problem. Usually, messages arrive at a rate of only about 20 per second, but the system needs to be reliable and scalable enough to handle between 1,000 and 2,000 messages per second.</p><p>The only safe way to deal with information like this is to immediately append it to a file on disk, and then pass it to a message queue using NServiceBus.</p><p>Once the bag source message information is contained in an NServiceBus message, we don’t have to worry about losing it. The <a href="https://docs.particular.net/nservicebus/recoverability/">recovery capability</a> in NServiceBus makes sure that any failed message goes through a series of retries in case the error is transient, or is safely written to an error queue where <a href="https://docs.particular.net/servicepulse/intro-failed-messages">developers can inspect the problem</a>, retry the messages, or even <a href="https://docs.particular.net/servicepulse/intro-editing-messages">fix malformed messages</a> before retrying them.</p><p>Without processing the bag source messages through a queue, it would be impossible to complete all the steps required for the processing of each bag message—including parsing the messages, saving them to a database, and then matching up bag information with flight information—at the speed required by the incoming data stream from ARINC. Additionally, any faulty data or flaw in application logic could result in a loss of baggage information from ARINC that can’t be easily recovered.</p><h2 id="flight-tracking-and-matching"><a class="markdown-anchor" href="#flight-tracking-and-matching">🔗</a>Flight tracking and matching</h2><p>Incoming luggage has to be matched up with flights as well. 
Flight information comes from a separate flight information system, which is the same system that drives the large Arrivals and Departures screens inside the airport terminal.</p><p>Then, baggage and flight information must be joined together into a bag list containing a bag number (found in the barcode attached to the bag), a flight number, and a departure date (typically with no year) for each bag.</p><p>The problem is that it’s actually really complex to uniquely identify a specific flight.</p><p>Assuming a made-up airline code <code>XX</code>, the canonical form of a flight number is <code>XX0460</code>, though some systems might represent it as the shorter <code>XX460</code>. And that’s just the start of it.</p><p>Depending on the flight, the arrival date could differ significantly from the departure date, especially for long-haul flights crossing the Pacific Ocean and the International Date Line. There are other factors too, such as when a flight gets delayed, or canceled and rescheduled. Even a canceled and rescheduled flight would carry the original departure date—not necessarily the date the bag got loaded onto the plane.</p><p>Flights can also be cross-listed on other airlines, such as when a Delta flight is “operated by” KLM, one of its airline partners.</p><p>A myriad of logic like this goes into matching bag numbers with the flights. 
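</p><p>As a purely illustrative sketch (a hypothetical helper, not the airport system’s actual code), canonicalizing a shortened flight number like <code>XX460</code> into the four-digit <code>XX0460</code> form could look like this:</p><pre><code class="language-csharp">// Hypothetical helper: pad the numeric part of a flight number to four digits.
// Assumes a two-letter airline code prefix and ignores the many edge cases above.
static string CanonicalizeFlightNumber(string flightNumber)
{
    var airlineCode = flightNumber.Substring(0, 2);
    var flightDigits = flightNumber.Substring(2);
    return airlineCode + flightDigits.PadLeft(4, '0');
}

// CanonicalizeFlightNumber("XX460") and CanonicalizeFlightNumber("XX0460")
// both return "XX0460", so the two representations can be compared.
</code></pre><p>Codeshares, date-line crossings, and rescheduled flights would still need their own rules on top of a normalization step like this.</p><p>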
Using NServiceBus allowed the lead developer to divide this logic up using different message handlers and <a href="https://docs.particular.net/tutorials/nservicebus-step-by-step/4-publishing-events/">publish/subscribe techniques</a> so that they could design the overall flow of messages through the system, and pass off the implementation of individual message handlers (representing much more contained and well-defined problems) to other developers on their team.</p><h2 id="on-arrival"><a class="markdown-anchor" href="#on-arrival">🔗</a>On arrival</h2><p>When a flight arrives, some bags will be routed to connecting flights, while others will make their way to the arrivals hall to be picked up at a baggage carousel.</p><p>For the bags headed to connecting flights, NServiceBus message handlers translate queue messages back to the byte-level protocol to be transmitted back to the ARINC service. But for the bags headed to the arrival hall, the lead developer wanted to automate the system that displays which flights are being served by which carousels and the status for each carousel.</p><p>As the bags are unloaded, a baggage crew member armed with a barcode scanner scans the barcode on each bag before it is placed on the conveyor belt. The scanner connects to an API that generates an NServiceBus message, and then one by one, each bag on the bag list is accounted for.</p><p>When each bag is checked off the list, the bag arrival system is automatically updated to display that all bags have been unloaded. It’s at this moment that, unfortunately, some weary traveler might realize their bag isn’t going to appear on the carousel after all, and they will need to go report a lost bag.</p><p>Many of these systems can be overridden manually, for instance, to say that all bags have been unloaded. 
Still, for the most part, the automation that occurs by tracking each bag allows the whole system to operate completely autonomously, allowing operators to focus on other tasks.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>An airport is a prime example of a very public place where a whole lot of software and a litany of business rules come together in ways that most people never think about, let alone fully comprehend. By comparison, other business domains might appear simple at first, but every domain has this tendency to hide its own complexity until you really start to dive into the scenarios.</p><p>In all these business domains, NServiceBus can provide the ability to break down this complexity. Each problem gets broken down into processes, each process into a series of steps, and each step is represented as a message handler processing a message.</p><p>The safety of business data encoded into messages means you can safely read from a firehose of external data, knowing you can’t lose data. The discrete nature of messages means it’s easier to reason about what’s happening within one well-defined interaction or related to one specific business rule. And the ability to build orchestrations around the results of multiple messages makes it easier to design and build processes around individual events over longer periods of time.</p><p>Air travel is only one example. So how could NServiceBus make your business domain better? <a href="/proof-of-concept">Give us a shout</a>, and let’s talk about it.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;You’re standing in the airport, waiting to pick up your bag. Did you ever stop to think about all the software systems involved in tracking your luggage on your journey? From the moment you drop it off at your departure airport to the moment you breathe that sigh of relief as it shows up on the baggage carousel is a complex story of messaging and system integrations.&lt;/p&gt;
&lt;p&gt;I recently had the opportunity to chat with the lead developer for the luggage arrival system at a major Asian international hub. &lt;sup id=&quot;fnref:1:040423&quot;&gt;&lt;a href=&quot;#fn:1:040423&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; He told me how using NServiceBus made it possible to get all the systems in the airport to work together reliably.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Cancellation in NServiceBus 8</title>
    <link href="https://particular.net/blog/cancellation-in-nservicebus-8"/>
    <id>https://particular.net/blog/cancellation-in-nservicebus-8</id>
    <published>2022-12-06T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.179Z</updated>
    
    <content type="html"><![CDATA[<p>NServiceBus endpoints have always been good at running reliably, but they could have been better at <em>stopping</em>. And when you want an endpoint to stop, you want it to stop…<em><strong>now</strong></em>.</p><p>In NServiceBus 8.0, which is <a href="https://discuss.particular.net/t/nservicebus-8-0-0-major-release-available/3270">now available</a>, we have introduced support for <strong>cooperative cancellation</strong>, which will give you greater control over how an NServiceBus endpoint behaves when you need to shut it down.</p><p>Let’s talk about what cancellation is, how it relates to NServiceBus, and how we’re delivering cancellation in NServiceBus 8 without forcing a massive breaking change on your existing systems.</p><span id="more"></span><h2 id="what-is-cancellation"><a class="markdown-anchor" href="#what-is-cancellation">🔗</a>What is cancellation?</h2><p>As simply as possible, cooperative cancellation in .NET means passing around a <code>CancellationToken</code> that is associated with a <code>CancellationTokenSource</code> so that the code observing the cancellation token can later be told to stop or cancel.</p><p>You can observe a token and then return from a loop when the token is signaled:</p><pre><code class="language-csharp">public async Task BigLoop(CancellationToken cancellationToken)
&#123;
    while (!cancellationToken.IsCancellationRequested)
    &#123;
        // Do more work
    &#125;
&#125;</code></pre><p>However, if you consult Microsoft’s <a href="https://devblogs.microsoft.com/premier-developer/recommended-patterns-for-cancellationtoken/">recommended patterns for CancellationToken</a>, it’s better to always throw an <code>OperationCanceledException</code> so your caller knows that the work was interrupted:</p><pre><code class="language-csharp">public async Task DoSomething(CancellationToken cancellationToken)
&#123;
    while (KeepDoingStuff)
    &#123;
        
cancellationToken.ThrowIfCancellationRequested();
        // Do more work
    &#125;
&#125;</code></pre><p>When calling a cancellable API, you can create a <code>CancellationToken</code> that automatically expires after a given amount of time, or you can use <code>CancellationToken.None</code> or <code>default</code> to say you don’t care about cancellation and that you want the method to complete no matter what:</p><pre><code class="language-csharp">// Cancel after 10 seconds
using (var tokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
&#123;
  var token = tokenSource.Token;
  await DoSomething(token);
&#125;

// Do not cancel (the following two lines are equivalent)
await DoSomething(CancellationToken.None);
await DoSomething(default);</code></pre><p>If you’re designing an API, you can also make the cancellation token optional:</p><pre><code class="language-csharp">public async Task DoSomething(CancellationToken cancellationToken = default)
&#123;
    // Do stuff
&#125;

// These calls are equivalent due to the parameter default
await DoSomething();
await DoSomething(CancellationToken.None);
await DoSomething(default);</code></pre><p>There are a <a href="https://docs.microsoft.com/en-us/dotnet/standard/threading/cancellation-in-managed-threads">lot of other fancy things</a> you can do with cooperative cancellation, but let’s talk about what cancellation means for NServiceBus.</p><h2 id="nservicebus-and-cancellation"><a class="markdown-anchor" href="#nservicebus-and-cancellation">🔗</a>NServiceBus and cancellation</h2><p>In an NServiceBus endpoint, the main reason to care about cancellation is when the endpoint is shutting down. When the endpoint shuts down, you want it to stop cleanly in a reasonable amount of time without having to forcibly kill the process. 
This is especially true if, for example, you are hosting your endpoints as Docker containers in Kubernetes, and the cluster needs to move your node from one host to another.</p><p>In this case, you want to stop receiving new messages from the queue, then give the existing “in-flight” messages a chance to complete processing successfully before shutting down the process cleanly.</p><p>But as a developer coding a message handler, there’s nothing you can do in NServiceBus version 7 because you can’t access a <code>CancellationToken</code> within that handler.</p><p>If a handler just started running a SQL query that could run for 100 seconds, what do you do?</p><p>In NServiceBus version 8, you can access a <code>CancellationToken</code> and pass it to the SQL query so you can shut down efficiently. We’ve also done it in such a way that you won’t have to change every single message handler.</p><h2 id="breaking-changes"><a class="markdown-anchor" href="#breaking-changes">🔗</a>Breaking changes</h2><p>Correctly implementing cancellation is an important addition for NServiceBus, but we couldn’t do it without breaking changes. That’s why we did it in a major version, which is NServiceBus 8.</p><p>The accepted method of implementing cancellation is to add <code>CancellationToken</code> parameters to all async methods and for a caller method to pass the token along to the callee, kind of like a bucket brigade passing the same token value all the way down the call stack.</p><p>This pattern is supported by tooling in Visual Studio. 
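</p><p>The bucket-brigade forwarding described above can be sketched like this (hypothetical method names, not NServiceBus APIs): each method accepts a token and passes the same token to everything it awaits, until it reaches a lowest-level call that actually observes it.</p><pre><code class="language-csharp">// Hypothetical call chain: one token travels down every level.
public static async Task ProcessOrder(CancellationToken cancellationToken)
{
    await SaveOrder(cancellationToken);
    await NotifyWarehouse(cancellationToken);
}

static async Task SaveOrder(CancellationToken cancellationToken)
{
    // Task.Delay stands in for real I/O here; it observes the token directly,
    // so cancelling the source aborts the whole chain cooperatively.
    await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
}

static async Task NotifyWarehouse(CancellationToken cancellationToken)
{
    await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
}
</code></pre><p>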
For example, there’s a Roslyn analyzer <a href="https://docs.microsoft.com/en-us/dotnet/fundamentals/code-analysis/quality-rules/ca2016">CA2016: Forward the CancellationToken parameter to methods that take one</a> that reinforces this pattern and offers a code fix that will fix all violations of the rule, even in an entire solution, with just a few clicks.</p><div class="text-center"><figure class="figure"><img src="/images/blog/2022/cancellation/ca2016-code-fix.png" class="figure-img img-fluid rounded border" alt="CA2016 code fix in action" /><figcaption>CA2016 code fix in action</figcaption></figure></div><p>Just click the <strong>Solution</strong> link at the bottom of the pop-up and Visual Studio will fix every instance where you forgot to forward the <code>CancellationToken</code> parameter. It’s pretty slick.</p><p>Unfortunately, CA2016 is only an informational message, not a warning or build error. And you might not even have the analyzer at all unless your project is set up correctly.</p><p><strong>To make sure you forward cancellation tokens correctly, we recommend <a href="https://makolyte.com/how-to-enable-the-built-in-net-analyzers/">enabling the built-in .NET Analyzers</a> and using an <code>.editorconfig</code> file to upgrade CA2016 to a warning or error to make it more visible:</strong></p><pre><code class="language-ini">[*.cs]

# Make it a green squiggle
dotnet_diagnostic.CA2016.severity = warning

# Or make it a red squiggle that fails the build
dotnet_diagnostic.CA2016.severity = error</code></pre><p>To implement cancellation properly, we have to add a <code>CancellationToken</code> parameter to every async method. For a class method, that’s not such a big deal — you just add <code>CancellationToken cancellationToken = default</code> to the parameter list. In most cases, someone calling that method can recompile with no changes required.</p><p>But with interfaces, it’s a different story. 
For example, here’s an interface before and after adding cancellation:</p><pre><code class="language-csharp">// No cancellation
public interface IDoSomething
&#123;
    Task DoIt();
&#125;

// Cancellation added
public interface IDoSomething
&#123;
    Task DoIt(CancellationToken cancellationToken = default);
&#125;</code></pre><p>However, it doesn’t matter that you marked the parameter as optional. Because an interface is a contract, a class implementing that interface <strong>must</strong> add the token parameter:</p><div class="text-center"><figure class="figure"><img src="/images/blog/2022/cancellation/must-follow-contract.png" class="figure-img img-fluid rounded border" alt="The class must implement the contract, including the parameter, even though it has a default" /><figcaption>The class must implement the contract, including the parameter, even though it has a default</figcaption></figure></div><p>This change to an interface is a breaking change, and worse, a change that is difficult to decorate with <code>Obsolete</code> attributes to guide the user in their upgrade. We start in a bad spot because the feedback from the compiler here is <strong>not</strong> helpful—rather than saying that you need to add the <code>CancellationToken</code> to the <code>DoIt</code> method, it highlights the <code>IDoSomething</code> interface name, saying the class doesn’t implement the interface anymore. To make matters worse, if you let the compiler “fix” it, it will create a <em>new</em> <code>DoIt</code> method overload that contains the token.</p><p>All this brings us to NServiceBus’s central interface, <code>IHandleMessages&lt;T&gt;</code>.</p><h2 id="ihandlemessages-t"><a class="markdown-anchor" href="#ihandlemessages-t">🔗</a>IHandleMessages&lt;T&gt;</h2><p>Every message handler in an NServiceBus system is a class that implements the interface <code>IHandleMessages&lt;T&gt;</code>. 
Here’s the interface definition:</p><pre><code class="language-csharp">public interface IHandleMessages&lt;T&gt;
&#123;
    Task Handle(T message, IMessageHandlerContext context);
&#125;</code></pre><p>If we changed that interface, users would have to change <em>every single message handler</em> class in their system. That would be an <em><strong>excruciating change</strong></em> <sup id="fnref:1:061222"><a href="#fn:1:061222" rel="footnote">1</a></sup> to force onto our users, and we’d prefer not to do that if we can help it.</p><p>So, on the one hand, we have the generally-accepted way of implementing cancellation, backed up by compiler support through Roslyn analyzers, demanding that we break the interface. But on the other hand, we have all the pain we would cause customers by releasing a breaking change that affects nearly <em>every code file</em> where NServiceBus is referenced.</p><p>How do we reconcile these two sides?</p><h2 id="so-what’s-changing"><a class="markdown-anchor" href="#so-what’s-changing">🔗</a>So what’s changing?</h2><p>Luckily, we found a way to support cancellation and keep the benefit of compiler support but <strong>not</strong> force you through the pain of an <code>IHandleMessages&lt;T&gt;</code> change.</p><p>On all of our lesser-used methods and all our internal methods, we’re adding a <code>CancellationToken</code> parameter. 
Many users won’t even notice this.</p><p>However, we are not breaking the <code>IHandleMessages&lt;T&gt;</code> interface.</p><p>Anywhere you’re handling a message, <sup id="fnref:2:061222"><a href="#fn:2:061222" rel="footnote">2</a></sup> the existing context object will include a <code>CancellationToken</code> property rather than adding a separate argument that would result in a breaking change.</p><p>The built-in CA2016 analyzer provided by the Roslyn team can’t tell you to forward <code>context.CancellationToken</code> to all the other code you call from your message handlers <sup id="fnref:3:061222"><a href="#fn:3:061222" rel="footnote">3</a></sup>, so how do we deal with that?</p><p>We’ve had great success bundling our own Roslyn analyzers with NServiceBus, <sup id="fnref:4:061222"><a href="#fn:4:061222" rel="footnote">4</a></sup> and we can use the same strategy in this case.</p><p>In NServiceBus version 8, if you call a method that accepts a <code>CancellationToken</code> parameter, you’ll now get a compilation warning, which takes the place of CA2016:</p><div class="text-center"><figure class="figure"><img src="/images/blog/2022/cancellation/nsb0002-code-fix.png" class="figure-img img-fluid rounded border" alt="The NSB0002 code fix ensures the context.CancellationToken is forwarded to the method" /><figcaption>The NSB0002 code fix ensures the context.CancellationToken is forwarded to the method</figcaption></figure></div><p>Aside from avoiding a breaking change—which is already a big win—we think this has a lot of other advantages.</p><p>The default severity for CA2016 is Suggestion. This means that in Visual Studio, you only see three tiny gray dots that are easy to miss, and the message itself is hidden away in the Messages pane of the Error List window, where it’s often ignored. 
<sup id="fnref:5:061222"><a href="#fn:5:061222" rel="footnote">5</a></sup></p><p>By making our analyzer a warning, we’re elevating the concept of cancellation to where it’s more noticeable. We hope that after learning about cancellation, you’ll also use an <a href="https://editorconfig.org/">.editorconfig file</a> to upgrade CA2016 to a warning as well:</p><pre><code class="language-ini">[*.cs]
dotnet_diagnostic.CA2016.severity = warning</code></pre><p>Of course, if your project files use <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/errors-warnings#treatwarningsaserrors"><code>TreatWarningsAsErrors</code></a>, this will prevent code that doesn’t pass the cancellation token from compiling. Or, maybe cancellation isn’t really a concern for your system, and you don’t want to deal with passing cancellation tokens everywhere. That’s fine too! Roslyn analyzers are configurable, so you can also downgrade our analyzer to <code>suggestion</code> or even <code>silent</code> or <code>none</code> if you wish: <sup id="fnref:6:061222"><a href="#fn:6:061222" rel="footnote">6</a></sup></p><pre><code class="language-ini">[*.cs]
dotnet_diagnostic.NSB0002.severity = suggestion</code></pre><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>NServiceBus version 8 supports cooperative cancellation, which gives you the ability to ensure a message endpoint can shut down quickly and cleanly.</p><p>While this feature is critical for many developers, for others, it’s not. 
That’s why we’ve minimized the breaking changes required to support cancellation, especially on <code>IHandleMessages&lt;T&gt;</code>, our most-used interface.</p><p>There is a lot of productivity to be gained by using Roslyn analyzers, and we hope that our new analyzer will help you implement cancellation efficiently and show how analyzers can improve your development workflow.</p><p>NServiceBus 8 is <a href="https://discuss.particular.net/t/nservicebus-8-0-0-major-release-available/3270">available now</a> on <a href="https://www.nuget.org/packages/NServiceBus">NuGet</a>.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;NServiceBus endpoints have always been good at running reliably, but they could have been better at &lt;em&gt;stopping&lt;/em&gt;. And when you want an endpoint to stop, you want it to stop…&lt;em&gt;&lt;strong&gt;now&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In NServiceBus 8.0, which is &lt;a href=&quot;https://discuss.particular.net/t/nservicebus-8-0-0-major-release-available/3270&quot;&gt;now available&lt;/a&gt;, we have introduced support for &lt;strong&gt;cooperative cancellation&lt;/strong&gt;, which will give you greater control over how an NServiceBus endpoint behaves when you need to shut it down.&lt;/p&gt;
&lt;p&gt;Let’s talk about what cancellation is, how it relates to NServiceBus, and how we’re delivering cancellation in NServiceBus 8 without forcing a massive breaking change on your existing systems.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Banish ghost messages and zombie records from your web tier</title>
    <link href="https://particular.net/blog/transactional-session"/>
    <id>https://particular.net/blog/transactional-session</id>
    <published>2022-10-25T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>Because <a href="/blog/what-does-idempotent-mean">it’s hard to write idempotent code effectively</a>, NServiceBus provides the <a href="https://docs.particular.net/nservicebus/outbox/">outbox feature</a> to make your business data transaction and any sent or received messages atomic. That way, you don’t get any <strong>ghost messages</strong> or <strong>zombie records</strong> polluting your system. <sup id="fnref:1:251022"><a href="#fn:1:251022" rel="footnote">1</a></sup></p><p>But the outbox can only be used inside a message handler. What about web applications and APIs?</p><p>With the new <a href="https://www.nuget.org/packages/NServiceBus.TransactionalSession">NServiceBus.TransactionalSession</a> package, you can use the outbox pattern <em>outside</em> of a message handler too.</p><span id="more"></span><h2 id="the-problem"><a class="markdown-anchor" href="#the-problem">🔗</a>The problem</h2><p>Let’s say you have a web application where you need to create an entity and perform some background processing.</p><p>Before the transactional session, the best guidance was to <strong>not do any database work</strong> inside the ApiController, but to only take the input data and send a message to the back end, responding only with an <code>HTTP 202 Accepted</code> message. Then, a few milliseconds later, a message handler would pick up the message and process it with the complete protection of the outbox feature.</p><p>But this isn’t always very realistic. What if the database is in charge of ID generation, and you must return that ID to the client? Or do you need to update the UI to show the request, even if the processing isn’t complete yet?</p><h2 id="ghost-protocol"><a class="markdown-anchor" href="#ghost-protocol">🔗</a>Ghost protocol</h2><p>This example code compromises by inserting a single record using Entity Framework and then sends a message to the backend. 
As a result of the compromise, this code is <em>still vulnerable to ghost messages</em> if the database transaction has to roll back after the message has been sent.</p><pre><code class="language-csharp">[ApiController]
public class SendMessageController : Controller
{
    readonly MyDataContext dataContext;
    readonly IMessageSession messageSession;

    public SendMessageController(IMessageSession messageSession, MyDataContext dataContext)
    {
        this.messageSession = messageSession;
        this.dataContext = dataContext;
    }

    [HttpPost]
    public async Task&lt;string&gt; Post(Guid id)
    {
        await dataContext.MyEntities.AddAsync(new MyEntity { Id = id, Processed = false });
        var message = new MyMessage { EntityId = id };
        await messageSession.SendLocal(message);
        return $&quot;Message with entity ID '{id}' sent to endpoint&quot;;
    }
}</code></pre><p>Those familiar with Entity Framework might wonder where the call to <code>SaveChangesAsync</code> is happening. Because Entity Framework already supports the <a href="https://www.programmingwithwolfgang.com/repository-and-unit-of-work-pattern">Unit of Work pattern</a>, the calls to <code>SaveChangesAsync</code> can be done using <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware">ASP.NET Core middleware</a>, which means the database is only updated when the HTTP request successfully completes:</p><pre><code class="language-csharp">public class UnitOfWorkMiddleware
{
    readonly RequestDelegate next;

    public UnitOfWorkMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    // The scoped MyDataContext is injected into InvokeAsync per request,
    // rather than into the constructor of the singleton middleware.
    public async Task InvokeAsync(HttpContext httpContext, MyDataContext dataContext)
    {
        await next(httpContext);
        await dataContext.SaveChangesAsync();
    }
}</code></pre><p>Regardless of the outcome of the database operations, the moment our controller logic calls <code>SendLocal</code>, the message is handed over to the transport. 
This makes the message <code>MyMessage</code> almost immediately available to be processed in the background.</p><p>Things will often seem fine until a <a href="https://particular.net/blog/but-all-my-errors-are-severe#transient-exceptions">transient exception</a> forces your Entity Framework operations to roll back. The database entity was never committed, but the ghost message starts processing in the background anyway. The message handler tries to load the missing entity from the database…and disaster unfolds.</p><p>Imagine looking into this error and trying to figure out why that entity doesn’t exist. Obviously, the message was sent, so the entity should be in the database, but it’s not. Where is it?</p><p>Unfortunately, ghost messages don’t know they’re ghosts, <sup id="fnref:2:251022"><a href="#fn:2:251022" rel="footnote">2</a></sup> which makes them hard to diagnose.</p><h2 id="ghostbusters"><a class="markdown-anchor" href="#ghostbusters">🔗</a>Ghostbusters</h2><p>So how do we banish the ghost message? 
A <a href="https://en.wikipedia.org/wiki/Proton_pack">proton pack</a> won’t help you, but our new TransactionalSession packages can.</p><p>In this case, since we’re using Entity Framework, we’ll use <a href="https://www.nuget.org/packages/NServiceBus.Persistence.Sql.TransactionalSession">NServiceBus.Persistence.Sql.TransactionalSession</a>, but we’ve got a <a href="https://www.nuget.org/packages?q=NServiceBus.Persistence+TransactionalSession">bunch of different packages available</a> depending on what database you’re using.</p><p>In our code, we replace <code>IMessageSession</code> with <code>ITransactionalSession</code> like this:</p><pre><code class="language-csharp">[ApiController]
public class SendMessageController : Controller
{
    readonly MyDataContext dataContext;
-   readonly IMessageSession messageSession;
+   readonly ITransactionalSession messageSession;

-   public SendMessageController(IMessageSession messageSession, MyDataContext dataContext)
+   public SendMessageController(ITransactionalSession messageSession, MyDataContext dataContext)
    {
        this.messageSession = messageSession;
        this.dataContext = dataContext;
    }

    // Rest omitted. Believe us, the code looks the same ;)
}</code></pre><p>All the business logic stays the same. With that small change, the newly stored entity and the sends are wrapped in an all-or-nothing transaction, meaning either both succeed or both fail. 
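For completeness, the transactional session also needs to be enabled when configuring the endpoint. A minimal sketch, assuming the SQL persistence flavor of the package (the endpoint name is illustrative; consult the TransactionalSession documentation for the exact setup for your persistence):

```csharp
var endpointConfiguration = new EndpointConfiguration("ASPNETCoreEndpoint");

// The transactional session builds on the outbox, so the outbox must be enabled
endpointConfiguration.EnableOutbox();

// EnableTransactionalSession is provided by NServiceBus.Persistence.Sql.TransactionalSession
var persistence = endpointConfiguration.UsePersistence<SqlPersistence>();
persistence.EnableTransactionalSession();
```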
The <code>TransactionalSession</code> package works with all the persistence technologies supported by NServiceBus, like Microsoft SQL Server, Azure Cosmos DB, <a href="https://www.nuget.org/packages?q=NServiceBus.Persistence+TransactionalSession">and many more</a>.</p><p>Check out this video demo of the TransactionalSession in action:</p><figure class="text-center"><iframe style="margin: 0 auto;" width="560" height="315" src="https://www.youtube.com/embed/-UOyxjnlYXs" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><h2 id="one-final-detail"><a class="markdown-anchor" href="#one-final-detail">🔗</a>One final detail</h2><p>Much like the code that ensures Entity Framework operations are executed against the database by calling <code>SaveChangesAsync</code>, there needs to be a tiny bit of middleware that ensures the transactional session is committed once the HTTP pipeline has executed.</p><pre><code class="language-csharp">public class UnitOfWorkMiddleware
{
    readonly RequestDelegate next;

    public UnitOfWorkMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

-   public async Task InvokeAsync(HttpContext httpContext, MyDataContext dataContext)
+   public async Task InvokeAsync(HttpContext httpContext, ITransactionalSession session)
    {
+       await session.Open(new SqlPersistenceOpenSessionOptions());
        await next(httpContext);
-       await dataContext.SaveChangesAsync();
+       await session.Commit();
    }
}</code></pre><p>Notice that the middleware no longer calls <code>SaveChangesAsync</code>; instead, it commits the transactional session. 
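Like any other ASP.NET Core middleware, the unit-of-work middleware still needs to be added to the request pipeline. A minimal sketch of the hosting wiring (this part is standard ASP.NET Core setup, not something the TransactionalSession package prescribes):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddDbContext<MyDataContext>();

var app = builder.Build();

// Every request now flows through the unit of work that opens and commits the session
app.UseMiddleware<UnitOfWorkMiddleware>();
app.MapControllers();
app.Run();
```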
The transaction created by the transactional session will take care of saving all database changes and triggering the outbox so that everything remains consistent and atomic.</p><h2 id="bulletproof"><a class="markdown-anchor" href="#bulletproof">🔗</a>Bulletproof</h2><p>The algorithm behind the Transactional Session feature was based on the proven NServiceBus Outbox implementation. But we’ve also <a href="https://en.wikipedia.org/wiki/TLA%2B">modeled and verified the algorithm using TLA+</a>, <sup id="fnref:3:251022"><a href="#fn:3:251022" rel="footnote">3</a></sup> a formal specification language to verify and test programs. Plus, we’ve covered it with a rich set of automated test suites covering every supported database engine, so you know you can trust it.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>With the new TransactionalSession, there are no more compromises. You don’t have to painfully redesign a web application to move all the data transactions to the backend. Instead, you can update the database <strong>and</strong> send a message, and be confident that the outbox implementation will prevent ghost messages or zombie records.</p><p>To get started, check out <a href="https://docs.particular.net/nservicebus/transactional-session/">the TransactionalSession documentation</a>, including a detailed description of <a href="https://docs.particular.net/nservicebus/transactional-session/#how-it-works">how it works</a>. Or, check out our <a href="https://docs.particular.net/samples/transactional-session/aspnetcore-webapi/">Using TransactionalSession with Entity Framework and ASP.NET Core sample</a> to see how to use it in your own projects.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Because &lt;a href=&quot;/blog/what-does-idempotent-mean&quot;&gt;it’s hard to write idempotent code effectively&lt;/a&gt;, NServiceBus provides the &lt;a href=&quot;https://docs.particular.net/nservicebus/outbox/&quot;&gt;outbox feature&lt;/a&gt; to make your business data transaction and any sent or received messages atomic. That way, you don’t get any &lt;strong&gt;ghost messages&lt;/strong&gt; or &lt;strong&gt;zombie records&lt;/strong&gt; polluting your system. &lt;sup id=&quot;fnref:1:251022&quot;&gt;&lt;a href=&quot;#fn:1:251022&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;p&gt;But the outbox can only be used inside a message handler. What about web applications and APIs?&lt;/p&gt;
&lt;p&gt;With the new &lt;a href=&quot;https://www.nuget.org/packages/NServiceBus.TransactionalSession&quot;&gt;NServiceBus.TransactionalSession&lt;/a&gt; package, you can use the outbox pattern &lt;em&gt;outside&lt;/em&gt; of a message handler too.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>How we achieved 5X faster pipeline execution by removing closure allocations</title>
    <link href="https://particular.net/blog/pipeline-and-closure-allocations"/>
    <id>https://particular.net/blog/pipeline-and-closure-allocations</id>
    <published>2022-09-27T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>The NServiceBus messaging pipeline strives to achieve the right balance of flexibility, maintainability, and wicked fast…ummm…ability. It needs to be wicked fast because it is executed at scale. For our purposes, “at scale” means that throughout the lifetime of an NServiceBus endpoint, the message pipeline will be executed hundreds, even thousands of times per second under high load scenarios.</p><p>Previously, we were able to <a href="https://particular.net/blog/10x-faster-execution-with-compiled-expression-trees">achieve 10X faster pipeline execution and a 94% reduction in Gen 0 garbage creation</a> by building expression trees at startup and then dynamically compiling them. One of the key learnings of those expression tree adventures is that reducing Gen 0 allocation makes a big difference. The less Gen 0 allocation used, the more speed can be squeezed out of the message handling pipeline, which ultimately means more speed for our users.</p><span id="more"></span><p>A common, but overlooked source of allocations is closure allocations. Originally, our message pipeline had, to use the scientific term, a gazillion of them. By getting rid of the major source of closure allocations in the pipeline, we’ve managed to get another five-fold increase in pipeline execution performance, as well as the complete removal of closure-related Gen 0 garbage creation. In this post, we’ll recap what closures look like, explain how they can be avoided, and show the tricks we’ve applied to the NServiceBus pipeline to get rid of them.</p><h2 id="closures-lurking-beneath-the-surface"><a class="markdown-anchor" href="#closures-lurking-beneath-the-surface">🔗</a>Closures lurking beneath the surface</h2><p>Closure allocations can be hard to spot. 
Before the introduction of the <a href="https://plugins.jetbrains.com/plugin/9223-heap-allocations-viewer">Heap Allocation Viewer for JetBrains Rider</a> or the <a href="https://marketplace.visualstudio.com/items?itemName=MukulSabharwal.ClrHeapAllocationAnalyzer">Clr Heap Allocation Analyzer for Visual Studio</a>, you’d have to either decompile the code or attach a memory profiler and watch out for various <code>*__DisplayClass*</code>, <code>Action*</code>, or <code>Func*</code> allocations.</p><p>Closures can occur anywhere we have lambdas (i.e. <code>Action</code> or <code>Func</code> delegates) being invoked that access state that exists outside the lambda. Here’s an example:</p><pre><code class="language-csharp">static void MyFunction(Action action) =&gt; action();

int myNumber = 42;
MyFunction(() =&gt; Console.WriteLine(myNumber));</code></pre><p>Here is the decompiled code for this snippet:</p><pre><code class="language-csharp">&lt;&gt;c__DisplayClass0_0 &lt;&gt;c__DisplayClass0_ = new &lt;&gt;c__DisplayClass0_0();
&lt;&gt;c__DisplayClass0_.myNumber = 42;
&lt;Main&gt;g__MyFunction|0_0(new Action(&lt;&gt;c__DisplayClass0_.&lt;Main&gt;b__1));</code></pre><p>In the generated code, we can spot two allocations. The first allocation, of the type <code>c__DisplayClass0_0</code>, represents the state class that will “host” the number, and a second allocation of type <code>Action</code> points to a compiler-generated method called <code>&lt;&gt;c__DisplayClass0_.&lt;Main&gt;b__1</code> that will eventually execute our <code>Console.WriteLine</code>.</p><p>All that just to print a number to the console. 
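One way to see the difference without decompiling is to check whether the compiler hands back a cached delegate instance or allocates a fresh one per call; current Roslyn compilers only cache delegates for lambdas that capture nothing. A small, self-contained sketch (the helper names are ours, for illustration):

```csharp
using System;

class ClosureDemo
{
    // Invokes the factory twice and reports whether both calls returned
    // the exact same delegate instance (i.e., the compiler cached it).
    static void Compare(Func<Action> makeAction, string label)
    {
        Action first = makeAction();
        Action second = makeAction();
        var verdict = ReferenceEquals(first, second) ? "cached" : "allocated per call";
        Console.WriteLine($"{label}: {verdict}");
    }

    static void Main()
    {
        Compare(() =>
        {
            int myNumber = 42;
            return () => Console.WriteLine(myNumber); // captures myNumber: display class + delegate per call
        }, "capturing lambda");

        // No captured state, so the compiler caches a single delegate in a static field
        Compare(() => () => Console.WriteLine("42"), "non-capturing lambda");
    }
}
```

With current compilers this prints `capturing lambda: allocated per call` and `non-capturing lambda: cached`, making the hidden allocation visible.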
And keep in mind, closure allocations might not only be in your code; they can also occur when you <a href="https://www.meziantou.net/concurrentdictionary-closure.htm">use .NET classes like <code>ConcurrentDictionary.GetOrAdd</code></a>.</p><h2 id="bye-bye-closures"><a class="markdown-anchor" href="#bye-bye-closures">🔗</a>Bye bye closures</h2><p>It’s possible to get rid of the closure and thus remove the extra display class and delegate allocation. Starting with C# 9 (which shipped with .NET 5), we can mark the lambda as <code>static</code>. In this way, the compiler will allow only state that is static or available inside the lambda to be accessed; otherwise, it will display an error.</p><pre><code class="language-csharp">int myNumber = 42;
MyFunction(static () =&gt; Console.WriteLine(myNumber));

error CS8820: A static anonymous function cannot contain a reference to 'myNumber'.</code></pre><p>In this simple example, when the state is just a fixed number, the variable can be marked as a constant to get rid of the <code>CS8820</code> compiler error.</p><pre><code class="language-csharp">const int myNumber = 42;
MyFunction(static () =&gt; Console.WriteLine(myNumber));

&lt;Main&gt;g__MyFunction|0_0(&lt;&gt;c.&lt;&gt;9__0_1 ?? (&lt;&gt;c.&lt;&gt;9__0_1 = new Action(&lt;&gt;c.&lt;&gt;9.&lt;Main&gt;b__0_1)));</code></pre><p>Yet in reality, code is rarely that simple. What if the number comes from actual user input or is generated with <a href="https://docs.microsoft.com/en-us/dotnet/api/system.random.next"><code>random.Next()</code></a>? When the input is not static, the state must be passed into the lambda somehow. 
By changing the lambda from <code>Action</code> to <code>Action&lt;object&gt;</code>, the lambda will accept a state object that can be passed from <code>MyFunction</code> into the <code>action</code> delegate.</p><pre><code class="language-csharp">static void MyFunction(Action&lt;object&gt; action, object state) =&gt; action(state);

int myNumber = 42;
MyFunction(static state =&gt; Console.WriteLine((int)state), myNumber);

&lt;Main&gt;g__MyFunction|0_0(&lt;&gt;c.&lt;&gt;9__0_1 ?? (&lt;&gt;c.&lt;&gt;9__0_1 = new Action&lt;object&gt;(&lt;&gt;c.&lt;&gt;9.&lt;Main&gt;b__0_1)), num);
IL_0023: box [System.Runtime]System.Int32</code></pre><p>While this gets rid of the display class and the delegate allocation, unfortunately, the usage of <code>object</code> forces the compiler to emit a <code>box</code> instruction. That is, the compiler will box the integer to an object, which causes an unnecessary allocation to occur. Whenever possible, the state-based overloads should use generics instead.</p><pre><code class="language-csharp">static void MyFunction&lt;T&gt;(Action&lt;T&gt; action, T state) =&gt; action(state);

int myNumber = 42;
MyFunction(static number =&gt; Console.WriteLine(number), myNumber);

&lt;Main&gt;g__MyFunction|0_0(&lt;&gt;c.&lt;&gt;9__0_1 ?? (&lt;&gt;c.&lt;&gt;9__0_1 = new Action&lt;int&gt;(&lt;&gt;c.&lt;&gt;9.&lt;Main&gt;b__0_1)), state);</code></pre><p>By ensuring the delegates receive all their state as parameters and by using C# 9 static lambda support, we no longer allocate unintentionally.</p><p>Let’s see how we can apply this to the NServiceBus pipeline.</p><h2 id="the-state-captured-in-the-pipeline"><a class="markdown-anchor" href="#the-state-captured-in-the-pipeline">🔗</a>The state captured in the pipeline</h2><p>At its core, the NServiceBus pipeline is just a series of delegates of the shape <code>Task Invoke(TInContext context, Func&lt;TOutContext, Task&gt; next)</code> that get chained together to build the pipeline. 
By design, the state that is required for the execution of the pipeline is passed into the invocation as <code>TInContext</code> and passed out of the invocation into the next part of the pipeline as <code>TOutContext</code>. For example, the incoming context captures information about the received message (such as the headers), as well as the child service provider that resolves dependencies scoped to the execution of the pipeline, and more. Since the context object captures all the important state, there shouldn’t be any closure allocations happening. Let’s verify that by looking at a simple pipeline.</p><pre><code class="language-csharp">var behavior1 = new MyBehavior1();
var behavior2 = new MyBehavior2();
var behavior3 = new MyBehavior3();
var context = new RootContext();

await behavior1.Invoke(context,
    ctx1 =&gt; behavior2.Invoke(ctx1,
        ctx2 =&gt; behavior3.Invoke(ctx2,
            ctx3 =&gt; Task.CompletedTask)));</code></pre><p>The context is nicely flowing from one execution to the other. However, the lambda needs to access the behavior instance so that it can call the <code>Invoke</code> method. If we mark these lambdas as <code>static</code> as shown before, we immediately get the familiar <code>CS8820</code> compiler error.</p><pre><code class="language-csharp">await behavior1.Invoke(context,
    static ctx1 =&gt; behavior2.Invoke(ctx1,
        static ctx2 =&gt; behavior3.Invoke(ctx2,
            static ctx3 =&gt; Task.CompletedTask)));

error CS8820: A static anonymous function cannot contain a reference to 'behavior2'.
error CS8820: A static anonymous function cannot contain a reference to 'behavior3'.</code></pre><p>The compiler errors show us that the NServiceBus pipeline captures the behaviors that are taking part in the pipeline execution as state inside the lambda, which causes closure allocations. 
To solve this, we need a way to bring the behaviors for each part of the pipeline into the pipeline itself.</p><h2 id="make-the-behaviors-part-of-the-pipeline-state"><a class="markdown-anchor" href="#make-the-behaviors-part-of-the-pipeline-state">🔗</a>Make the behaviors part of the pipeline state</h2><p>The idea is simple but powerful. Besides being a series of delegates, the pipeline is also essentially a collection of behaviors. So if we can somehow store that collection of behaviors in the context of the pipeline, we’re all set.</p><p>The good thing is that once the pipeline execution plan is built, the order of the behaviors is well-known and never changes. So the collection representing all the behaviors, including their order, can be created once and reused over and over again. Luckily, all behaviors inside the pipeline implement a non-generic marker interface: <code>IBehavior</code>. With that in mind, we can store all behaviors of the pipeline in an array of <code>IBehavior</code> objects in the context.</p><pre><code class="language-csharp">public class BehaviorContext : IBehaviorContext
{
    internal IBehavior[] Behaviors { get; set; }
}

// done once at initialization/startup time
var behavior1 = new MyBehavior1();
var behavior2 = new MyBehavior2();
var behavior3 = new MyBehavior3();
var cachedPipeline = new IBehavior[3] { behavior1, behavior2, behavior3 };

var context = new RootContext();
// assigned every time a new context is built
context.Behaviors = cachedPipeline;</code></pre><p>Now that all the behaviors can be accessed from the context, the pipeline execution plan builder can build the pipeline invocation plan that will access the behaviors array at a specific index, cast the returned behavior to the right behavior type, and then call <code>Invoke</code>:</p><pre><code class="language-csharp">await behavior1.Invoke(context,
    static ctx1 =&gt; ((MyBehavior2)ctx1.Behaviors[1]).Invoke(ctx1,
        static ctx2 =&gt; ((MyBehavior3)ctx2.Behaviors[2]).Invoke(ctx2,
            static ctx3 =&gt; Task.CompletedTask)));</code></pre><p>That’s it! No more closure allocations. With this simple change, we’ve seen 5 times more throughput in the raw pipeline execution benchmarks, and all allocations that previously occurred due to closures are gone.</p><p><img src="/images/blog/2022/pipeline-closure-allocations/pipeline-execution-performance.png" alt=""></p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>Closure allocations can occur in various places whenever there are delegates involved that are accessing state outside the closure (or the curly braces) of the lambda. By removing closure allocations in code that is executed at scale, thousands of times per second, it is possible to achieve significant gains in throughput. Since the NServiceBus pipeline is such a core part of our framework, it has to be solid and fast. So we continue to make small but impactful performance improvements to NServiceBus. Hopefully, this deeper understanding of closure allocations can improve your own code as well.</p><p>But removing closure allocations to achieve high-performing and low-allocating code is just one trick in your tool belt. In my recent webinar, <a href="https://particular.net/webinars/performance-tricks-i-learned-from-contributing-to-open-source-dotnet-packages">“Performance tricks I learned from contributing to open source .NET packages”</a>, I summarized not only this tip but others that I’ve learned from contributing over fifty pull requests to the Azure .NET SDK.</p><p>If you are interested in hearing more technical details about other improvements, leave a comment here or <a href="https://twitter.com/danielmarbach/">reach out to me</a> and I’ll tell you how we managed to get another 11-23% throughput improvement in the NServiceBus pipeline execution, or how we got rid of all allocations from the saga persistence deterministic ID creation. Until then, stay safe and allocation-free.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;The NServiceBus messaging pipeline strives to achieve the right balance of flexibility, maintainability, and wicked fast…ummm…ability. It needs to be wicked fast because it is executed at scale. For our purposes, “at scale” means that throughout the lifetime of an NServiceBus endpoint, the message pipeline will be executed hundreds, even thousands of times per second under high load scenarios.&lt;/p&gt;
&lt;p&gt;Previously, we were able to &lt;a href=&quot;https://particular.net/blog/10x-faster-execution-with-compiled-expression-trees&quot;&gt;achieve 10X faster pipeline execution and a 94% reduction in Gen 0 garbage creation&lt;/a&gt; by building expression trees at startup and then dynamically compiling them. One of the key learnings of those expression tree adventures is that reducing Gen 0 allocation makes a big difference. The less Gen 0 allocation used, the more speed can be squeezed out of the message handling pipeline, which ultimately means more speed for our users.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Let&#39;s talk about Kafka</title>
    <link href="https://particular.net/blog/lets-talk-about-kafka"/>
    <id>https://particular.net/blog/lets-talk-about-kafka</id>
    <published>2022-07-19T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>We get a <em>lot</em> of questions about Kafka. Is it good? Does it live up to the hype? And, most frequently, when are we going to support Kafka in NServiceBus?</p><p>But to fully answer these questions, it’s essential to understand what Kafka is, and more importantly <em>what it isn’t</em>, and then think about the kinds of problems that Kafka solves. So, let’s dive into the (heavily footnoted) details…</p><span id="more"></span><h2 id="what-is-kafka"><a class="markdown-anchor" href="#what-is-kafka">🔗</a>What is Kafka?</h2><p><a href="https://kafka.apache.org/">Apache Kafka</a> is a <strong>partitioned log</strong>, similar in architecture to <a href="https://azure.microsoft.com/en-us/services/event-hubs/">Azure Event Hubs</a> or <a href="https://aws.amazon.com/kinesis/">Amazon Kinesis</a>. A partitioned log works precisely as its name suggests: it’s like a bunch of separate log files (partitions) constantly writing whatever data you throw at them. Those bits of data are usually called <em>events</em>, <sup id="fnref:1:190722"><a href="#fn:1:190722" rel="footnote">1</a></sup> and one of these append-only log files is called an <strong>event stream</strong>.</p><p>Events are written to an event stream in the order in which they are received, and read in the same order as well. <sup id="fnref:2:190722"><a href="#fn:2:190722" rel="footnote">2</a></sup> <sup id="fnref:3:190722"><a href="#fn:3:190722" rel="footnote">3</a></sup> The events will then stay in that log until a defined retention period is reached; nothing other than the passage of time can remove an event from the log once it is written.</p><p>When you want to receive an event from Kafka, you need to decide how you will read from the event stream. You are responsible (or, more accurately, the client library you are using is responsible) for determining what partition you are reading from and keeping track of your position within that partition. 
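To make that client-side responsibility concrete, here is a hedged sketch using the Confluent.Kafka .NET client, with auto-commit disabled so the application advances its own offset only after processing. The broker address, group ID, and topic name are made up for illustration:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",       // illustrative
    GroupId = "stock-price-analyzer",          // illustrative
    EnableAutoCommit = false,                  // the application keeps the cursor up to date itself
    AutoOffsetReset = AutoOffsetReset.Earliest // where to start when no offset has been stored yet
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("stock-prices");

while (true)
{
    var result = consumer.Consume(TimeSpan.FromSeconds(1));
    if (result is null) continue; // nothing new in the partition yet

    Console.WriteLine($"offset {result.Offset}: {result.Message.Value}");

    // Only after successful processing does the cursor move forward
    consumer.Commit(result);
}
```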
So you don’t just get the next message waiting to be processed. Instead, you have to decide where you want to start reading from (called a “cursor”) and then keep that cursor up-to-date yourself.</p><p>If another reader wants to receive an event, they can’t just pick up where you left off. Instead, they have to manage their own cursor because Kafka doesn’t manage cursors for you. It just provides you with repeatable access to the stream of events.</p><p>While Kafka can deal with high traffic volumes, the right partition strategy is a key factor and will have a major impact on the scalability and elasticity of the whole system. Choosing the right number of partitions for event streams is <a href="https://newrelic.com/blog/best-practices/effective-strategies-kafka-topic-partitioning">a bit of an art</a> and can be <a href="https://developer.confluent.io/tutorials/change-topic-partitions-replicas/ksql.html">difficult to change</a>, especially on the fly.</p><p>But what happens if something fails while processing an event in Kafka? Since the log stream is still all there, <sup id="fnref:4:190722"><a href="#fn:4:190722" rel="footnote">4</a></sup> you can just re-read the log stream from a specific offset or decide to skip over the offset of the troublesome event altogether.</p><p>So now we know more about Kafka, but the big question remains: is it a message queue?</p><h2 id="what-are-message-queues"><a class="markdown-anchor" href="#what-are-message-queues">🔗</a>What are message queues?</h2><p>A message queue or message broker is a highly optimized database for processing messages.</p><div class="inline-cta"><strong>Want more information on how message queues work?</strong> Get FREE access to Udi Dahan's <a href="https://go.particular.net/kafka-dsdf" target="_blank">Distributed Systems Design Fundamentals</a> video course for a limited time.</div><p>Like Kafka, events (but here, let’s call them messages) can be stored in the same order in which they are received. 
But that’s where the similarities end.</p><p>The goal of a message queue is to be empty. Therefore, it does not have a retention period; it only hangs on to a message until a client confirms it has been successfully processed, and then that message is deleted from the broker.</p><p>Each message queue client is not responsible for managing a cursor, as that is managed centrally by the broker. Therefore, the only message you can receive is the one that is next on the queue. If you attempt to process that message, the broker will prevent any other consumer from getting access to the same message unless you report you were unable to successfully process it <sup id="fnref:5:190722"><a href="#fn:5:190722" rel="footnote">5</a></sup> or a timeout expires. When this happens, the broker is forced to assign the message to someone else.</p><p>This behavior leads to the <a href="https://www.enterpriseintegrationpatterns.com/CompetingConsumers.html">competing consumers pattern</a>, where multiple clients can cooperate to process messages more quickly, something that Kafka and other event streams are not designed to do. Unlike Kafka, consumers can be added and removed at any point in time without any impact on the topology of the queuing system.</p><p>The error handling pattern on message queues is also different. With a queue, you only get access to the next message in the queue, but sometimes, that message can’t be successfully processed. For example, it may not be possible to deserialize its contents. When that happens, that message effectively blocks the processing of subsequent messages. Messages which cannot be successfully processed are called <strong>poison messages</strong>. Systems that rely on a message queue have a process for retrying messages and eventually will forward poison messages to an error queue or dead-letter queue for investigation. 
<sup id="fnref:6:190722"><a href="#fn:6:190722" rel="footnote">6</a></sup> Kafka can be made to emulate some of these error-handling patterns, but this will require more work than message queues, where it’s the default behavior. <sup id="fnref:7:190722"><a href="#fn:7:190722" rel="footnote">7</a></sup></p><p>For an even deeper dive into message queues, including how you’re using multiple queues <em>right now</em> without even realizing it, check out this video by Clemens Vasters, Principal Architect for the Microsoft Azure Messaging platform:</p><figure class="text-center"><iframe style="margin: 0 auto;" width="560" height="315" src="https://www.youtube.com/embed/bHSV916YbHE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><p>So as we can see, the only thing that partitioned logs (like Kafka) and message queues (like <a href="https://www.rabbitmq.com/">RabbitMQ</a>, <a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview">Azure Service Bus</a>, and <a href="https://aws.amazon.com/sqs/">Amazon SQS</a>) have in common is that they help you process messages. But, after that, the differences couldn’t be more stark.</p><h2 id="so-which-is-better"><a class="markdown-anchor" href="#so-which-is-better">🔗</a>So which is better?</h2><p>Now that we’ve established that Kafka is not a message queue, <sup id="fnref:8:190722"><a href="#fn:8:190722" rel="footnote">8</a></sup> the main question that remains is which is better?</p><p>The answer, of course, is neither. There are good reasons to use both, even at the same time.</p><h3 id="when-to-use-kafka"><a class="markdown-anchor" href="#when-to-use-kafka">🔗</a>When to use Kafka</h3><p>Kafka is great for situations where you need to ingest or process large amounts of data, and you want to be able to read that data repeatedly, typically from different logical consumers. 
This is very common in systems for telemetry and data distribution.</p><p>Consider using Kafka when “events” represent the state of something at a specific time, but any individual event doesn’t have much business meaning by itself. In these cases, you need a stream of these events to analyze changes and transitions in state to derive any business meaning.</p><h3 id="when-to-use-a-message-queue"><a class="markdown-anchor" href="#when-to-use-a-message-queue">🔗</a>When to use a message queue</h3><p>Message queues excel when messages must be successfully processed once and only once, and losing data is not an option. These systems depend on reliable state transition management for business processes.</p><p>When using a message queue, messages represent information about a specific business process step. Therefore, every message in a queue has business value by itself; it doesn’t necessarily need to be analyzed in relation to other messages.</p><p>Because every single message has significance, messages have to be processed independently and can be scaled using the competing consumers pattern. After a given message has been successfully processed, it’s no longer available to any consumers.</p><h3 id="when-to-use-kafka-and-message-queues"><a class="markdown-anchor" href="#when-to-use-kafka-and-message-queues">🔗</a>When to use Kafka AND message queues</h3><p>Imagine a system that monitors changes to stock prices and alerts users when specific changes occur.</p><p>Consuming the firehose of real-time stock price data is a perfect job for Kafka, which excels at handling that amount of data and storing it for further processing. Each point-in-time stock price value isn’t that useful to us; it’s just the changes we’re interested in.</p><p>There may be many different consumers of this data. Of course, there may be consumers that are interested in different stocks. 
Depending on the system requirements, the streams may be set up so that each stock’s data lives on separate partitions, and any reader is only looking at the values for one stock symbol at a time. Or, stocks may be organized on the same partition, and readers simply ignore data they don’t care about.</p><p>However, even for one stock symbol, we may have many readers looking at the same event stream but interested in different things. For example, one reader may be interested only in sudden spikes or drops in a stock price, while another may be monitoring for trends over an extended period.</p><p>It ultimately doesn’t matter because each reader of a partitioned log maintains its own cursor and can read the same stream as often as it wants.</p><p>Once one of these Kafka stream readers detects an important business event, such as a sudden increase in stock price, <em>that’s the time to use a message queue</em>. Now we publish a <code>StockPriceSpikeDetected</code> event using a message queue, so that message handlers can execute business processes for that event. These message handlers might be making stock trades, updating data in a database, emailing fund managers, sending push notifications to mobile applications, or whatever else needs to be done.</p><p>It can be helpful to think of Kafka events more as raw data. Only when trends in the data become relevant to the business does something become a <em>business event</em>, and that’s the point at which you should be using a message queue. In fact, we’ve seen customers <a href="https://docs.particular.net/samples/azure-functions/service-bus-kafka/">use Kafka in this way</a>. 
They host code in Azure Functions using a <code>KafkaTrigger</code> to monitor event streams, then raise business events in Azure Service Bus (using either a send-only NServiceBus endpoint or <a href="https://docs.particular.net/samples/azure-service-bus-netstandard/native-integration/">native sends</a>) which are processed by NServiceBus endpoints. That’s a more appropriate use case for Kafka, and it works great.</p><h2 id="and-what-about-nservicebus"><a class="markdown-anchor" href="#and-what-about-nservicebus">🔗</a>And what about NServiceBus?</h2><p>Here at Particular Software, we believe in using the right tool for the job. So while it is possible to employ complex workarounds to make Kafka kinda-sorta behave somewhat like a queue, we think that if you have a situation where you should use a message queue, you should use a message queue.</p><p>Since NServiceBus is a communication framework over message queues, we don’t <em>currently</em> have any plans for a Kafka-based message transport that would be similar to our message transports for RabbitMQ, Azure Service Bus, and Amazon SQS. Unfortunately, when implementing a message transport, the devil is in the details—many of the features our customers love most about NServiceBus are much harder to achieve with Kafka due to seemingly inconspicuous differences between message queues and event stream processing infrastructure.</p><p>However, we are investigating other communication abstractions that might suit Kafka (and other partitioned logs) better. And that’s where <strong>you</strong> come in.</p><p>If you’re interested in using Kafka (or another partitioned log) <em>we want to hear from you</em>. Drop us a line and tell us what you want to do. We’re far more interested in building the <strong>right</strong> thing than getting something out there to check a Kafka box for the folks in the marketing department.</p><p>So <a href="/contact">let us know what you’re up to</a>—we’d love to chat.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;We get a &lt;em&gt;lot&lt;/em&gt; of questions about Kafka. Is it good? Does it live up to the hype? And most frequently, when are we going to support Kafka in NServiceBus?&lt;/p&gt;
&lt;p&gt;But to fully answer these questions, it’s essential to understand what Kafka is, and more importantly &lt;em&gt;what it isn’t&lt;/em&gt;, and then think about the kinds of problems that Kafka solves. So, let’s dive into the (heavily footnoted) details…&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Simpler configuration in ServiceControl</title>
    <link href="https://particular.net/blog/whats-new-in-servicecontrol-4-21"/>
    <id>https://particular.net/blog/whats-new-in-servicecontrol-4-21</id>
    <published>2022-07-05T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>The Particular Service Platform is packed full of features that help you monitor your NServiceBus systems. Among other things, it enables you to:</p><ul><li>manage messages that require a manual retry</li><li>see when endpoints go offline and back online</li><li>detect connectivity problems with databases, brokers, and other systems using custom checks</li><li>troubleshoot message processing performance, both per-message and across an entire flow, using ServiceInsight</li></ul><p>Many of these capabilities have been developed over time as separate plugin packages. Each one has brought its own code-first configuration API. This approach has led to a situation where getting the most from the platform means installing six different NuGet packages and calling seven different configuration APIs.</p><p>We knew we could make it easier. What if it was just one package, and one configuration API?</p><span id="more"></span><h2 id="one-package-one-api"><a class="markdown-anchor" href="#one-package-one-api">🔗</a>One package, one API</h2><p>The new <code>NServiceBus.ServicePlatform.Connector</code> package simplifies connecting an NServiceBus endpoint to the Particular Service Platform by putting all of the configuration details into an easily serialized set of classes. 
It also includes a simple API to apply those configuration details to an endpoint.</p><pre><code class="language-csharp">var json = File.ReadAllText(pathToConfiguration);
var platformConnection = ServicePlatformConnectionDetails.Parse(json);
endpointConfiguration.ConnectToServicePlatform(platformConnection);</code></pre><p>The configuration file is in JSON format:</p><pre><code class="language-json">{
  &quot;ErrorQueue&quot;: &quot;error&quot;,
  &quot;MessageAudit&quot;: {
    &quot;Enabled&quot;: true,
    &quot;AuditQueue&quot;: &quot;audit&quot;
  },
  &quot;SagaAudit&quot;: {
    &quot;Enabled&quot;: true,
    &quot;SagaAuditQueue&quot;: &quot;audit&quot;
  },
  &quot;Heartbeats&quot;: {
    &quot;Enabled&quot;: true,
    &quot;HeartbeatsQueue&quot;: &quot;Particular.ServiceControl&quot;,
    &quot;Frequency&quot;: &quot;00:00:30&quot;
  },
  &quot;CustomChecks&quot;: {
    &quot;Enabled&quot;: true,
    &quot;CustomChecksQueue&quot;: &quot;Particular.ServiceControl&quot;
  },
  &quot;Metrics&quot;: {
    &quot;Enabled&quot;: true,
    &quot;MetricsQueue&quot;: &quot;Particular.Monitoring&quot;,
    &quot;Interval&quot;: &quot;00:00:05&quot;,
    &quot;InstanceId&quot;: &quot;MyEndpointUniqueInstanceId&quot;
  }
}</code></pre><p>With this file in place, you can enable and disable features and adjust configuration at deployment time in each environment.</p><p>“But that’s just moving the complexity to a file,” you astutely point out. If you have to hand-craft the file then we haven’t really simplified anything. We thought of that too.</p><p>With the latest versions of ServiceControl and ServicePulse, you can view the JSON configuration directly in ServicePulse.</p><p><img src="/images/blog/2021/servicecontrol-4-22/connecting.servicepulse.png" alt="ServicePulse connection details"></p><p>You don’t have to store the configuration in a file, or even as JSON. You can integrate with anything that can handle strongly-typed configuration.
<a href="https://docs.particular.net/samples/platform-connector/ms-config/">Here’s a sample</a> showing how to retrieve ServicePlatform Configuration from Microsoft.Extensions.Configuration.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>Centralized configuration makes things much easier for anyone who is managing a distributed system. With the <code>NServiceBus.ServicePlatform.Connector</code> we’ve greatly simplified this process for ServiceControl and we think you’ll agree it’s a better experience.</p><p>Get the latest versions of ServiceControl and ServicePulse here:</p><ul><li><a href="https://github.com/Particular/ServiceControl/releases">ServiceControl releases</a></li><li><a href="https://github.com/Particular/ServicePulse/releases">ServicePulse releases</a></li></ul>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;The Particular Service Platform is packed full of features that help you monitor your NServiceBus systems. Among other things, it enables you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;manage messages that require a manual retry&lt;/li&gt;
&lt;li&gt;see when endpoints go offline and back online&lt;/li&gt;
&lt;li&gt;detect connectivity problems with databases, brokers, and other systems using custom checks&lt;/li&gt;
&lt;li&gt;troubleshoot message processing performance, both per-message and across an entire flow, using ServiceInsight&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Many of these capabilities have been developed over time as separate plugin packages. Each one has brought its own code-first configuration API. This approach has led to a situation where getting the most from the platform means installing six different NuGet packages and calling seven different configuration APIs.&lt;/p&gt;
&lt;p&gt;We knew we could make it easier. What if it was just one package, and one configuration API?&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>More powerful Cosmos DB persistence</title>
    <link href="https://particular.net/blog/more-powerful-cosmos-db-persistence-2022"/>
    <id>https://particular.net/blog/more-powerful-cosmos-db-persistence-2022</id>
    <published>2022-06-21T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>The key to a successful Cosmos DB system is its data partitioning strategy. Like the rows of shrubs in a hedge maze, the logical partitions that divide data must be carefully planned, because that affects the scalability of the system and defines the boundaries for logical transactions.</p><p>In version 1.1 of our <a href="https://docs.particular.net/persistence/cosmosdb/">CosmosDB persistence package</a>, we’ve made defining the partition key for each message processed by NServiceBus much more straightforward, without needing a custom pipeline behavior. We’ve also added pessimistic concurrency support for more reliable processing of sagas with high contention patterns.</p><span id="more"></span><p>Let’s take a closer look at these two new features.</p><h2 id="partition-configuration-api"><a class="markdown-anchor" href="#partition-configuration-api">🔗</a>Partition configuration API</h2><p>We made it a <em>lot</em> easier to specify the container partition to use for each message, which is essential to make Cosmos DB transactions work.</p><p>NServiceBus uses Cosmos DB transactions to keep NServiceBus outbox and saga data consistent with whatever business data you modify in your message handlers. Cosmos DB supports transactions through its <a href="https://devblogs.microsoft.com/cosmosdb/introducing-transactionalbatch-in-the-net-sdk/">TransactionalBatch API in the .NET SDK</a>, and NServiceBus gives you access to the <code>TransactionalBatch</code> so that you can use it for your business data.</p><p>There’s just one catch: all the operations in the transaction must take place in the same partition within a container. 
So, for each incoming message, you must tell NServiceBus which container partition to use so that the NServiceBus data and your business data can be stored <em>together</em>.</p><p>Previously, specifying the partition key required implementing a <a href="https://docs.particular.net/nservicebus/pipeline/manipulate-with-behaviors">custom pipeline behavior</a> to provide the information needed for the transaction. A pipeline behavior is an advanced NServiceBus API, which is very powerful, <sup id="fnref:1:210622"><a href="#fn:1:210622" rel="footnote">1</a></sup> and there are <a href="/blog/infrastructure-soup">a lot of good reasons to use one</a>, but you shouldn’t have to create one <em>just</em> to use Cosmos DB.</p><p>We made this process more straightforward with a new transaction information API that allows you to provide NServiceBus with the necessary information without poking under the hood.</p><p>Here are a few examples of how to use the new API:</p><pre><code>// Get the configuration objects we need
var persistence = endpointConfiguration.UsePersistence&lt;CosmosDBPersistence&gt;();
var transactionsInfo = persistence.TransactionInformation();

// The partition to use is always located in a message header
transactionsInfo.ExtractPartitionKeyFromHeader(&quot;PartitionKeyHeader&quot;);

// OR you can use multiple headers
transactionsInfo.ExtractPartitionKeyFromHeaders(headers =&gt; new PartitionKey(…));

// OR get the partition key from the message
transactionsInfo.ExtractPartitionKeyFromMessage&lt;MyMessage&gt;(message =&gt; new PartitionKey(message.PartitionKey));

// OR use a custom class that implements IPartitionKeyFromHeadersExtractor
transactionsInfo.ExtractPartitionKeyFromHeaders(new CustomPartitionKeyFromHeadersExtractor());</code></pre><p>There are a <em>lot</em> of options to cover a variety of use cases, all of which are much easier than defining your own NServiceBus pipeline behavior.
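Once the partition key is resolved, a message handler can enlist its own business data in the same transaction. Here is a rough sketch, assuming the CosmosPersistenceSession() extension from the NServiceBus.Persistence.CosmosDB package; the OrderPlaced and OrderDocument types are invented for illustration.

```csharp
// Rough sketch (not from the official docs): queueing business data on the
// shared TransactionalBatch inside a message handler. Assumes the
// CosmosPersistenceSession() extension from NServiceBus.Persistence.CosmosDB;
// OrderPlaced and OrderDocument are invented types.
using NServiceBus;
using NServiceBus.Persistence.CosmosDB;

public class OrderPlaced : IEvent
{
    public string OrderId { get; set; }
}

public class OrderDocument
{
    public string id { get; set; }           // Cosmos DB requires a lowercase "id"
    public string PartitionKey { get; set; } // must match the resolved partition
}

public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        var session = context.SynchronizedStorageSession.CosmosPersistenceSession();

        // The item is queued on the same TransactionalBatch as the outbox/saga
        // data; NServiceBus commits the whole batch when the handler succeeds.
        session.Batch.CreateItem(new OrderDocument
        {
            id = message.OrderId,
            PartitionKey = message.OrderId
        });

        return Task.CompletedTask;
    }
}
```

Because every operation in a TransactionalBatch must share one partition key, the document's partition key here must resolve to the same value the extractor produced for the incoming message.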
Check out the documentation for <a href="https://docs.particular.net/persistence/cosmosdb/transactions">more API options for advanced scenarios</a>.</p><p>This is a much easier way to configure NServiceBus to use your tenant-per-container or tenant-per-partition scheme. Even if you aren’t building multi-tenant systems, the new configuration API makes it easier to align your NServiceBus processing with your chosen partitioning scheme. No more tinkering with the internals of NServiceBus. <sup id="fnref:2:210622"><a href="#fn:2:210622" rel="footnote">2</a></sup></p><p>To learn more about building multi-tenant systems with NServiceBus and Cosmos DB and how to design your data partitioning strategy to fit your requirements, check out our recent webinar:</p><div class="text-center"><figure class="figure"><a href="/webinars/building-multi-tenant-systems-using-cosmosdb"><img src="https://img.youtube.com/vi/fKqi-F_M3wQ/maxresdefault.jpg" class="figure-img img-fluid rounded" /></a><figcaption>Watch <a href="/webinars/building-multi-tenant-systems-using-cosmosdb" target="_blank">Building multi-tenant systems using NServiceBus and Cosmos DB</a> now</figcaption></figure></div><h2 id="pessimistic-concurrency-support"><a class="markdown-anchor" href="#pessimistic-concurrency-support">🔗</a>Pessimistic concurrency support</h2><p>One of the most powerful features of an NServiceBus saga is how it handles multiple messages trying to modify the same data simultaneously. No matter what, the saga will ensure that two concurrent messages can’t make conflicting changes to the stored saga data that would result in a corrupted state.</p><p>However, how the saga controls access impacts the system performance and cost to run the system under certain conditions.</p><p>The original version of Cosmos DB persistence supported only optimistic concurrency. In this strategy, message handlers for multiple messages can start processing concurrently, but the first one to commit their changes wins. 
When other message handlers try to commit, they get a concurrency exception (because the underlying data has changed) and are forced to retry.</p><p>This works well for sagas with little or no contention, and the performance is good. From the Cosmos DB perspective, this is also the cheapest option because you don’t have to perform any database operations (which cost money) to determine if it’s safe to proceed.</p><p>However, some sagas, such as those that implement the <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html">scatter-gather pattern</a>, have much higher contention, and that’s when optimistic concurrency starts to break down. Many competing messages cause many concurrency exceptions to be thrown when the first message commits, resulting in floods of retries that increase the overall load, decrease message throughput, and may result in many failed messages in the error queue. <sup id="fnref:3:210622"><a href="#fn:3:210622" rel="footnote">3</a></sup></p><p>For sagas with high contention, pessimistic concurrency is a better approach. In this mode, we don’t try to process the message until a lock has been acquired so that we’re sure when starting the message handler that we’ll be able to commit the changes later. Every other message that needs access to the same saga data must wait until the lock is released. Then, it can obtain a new lock and proceed with processing.</p><p>This method results in fewer failures and eases contention, especially in scatter-gather scenarios, but comes at a cost. Because Cosmos DB charges for each storage operation, there is increased cost associated with checking for and obtaining the lock before a message is processed. Additionally, sagas normally unaffected by contention issues will now process more slowly due to the extra locking behavior.</p><p>Because of the extra cost associated with pessimistic concurrency, it’s not enabled by default. 
To enable it:</p><pre><code>var persistence = endpointConfiguration.UsePersistence&lt;CosmosDBPersistence&gt;();
persistence.Sagas().UsePessimisticLocking();</code></pre><p>We recommend only enabling pessimistic locking in endpoints that contain sagas prone to contention issues. All other endpoints can use the default optimistic locking strategy.</p><p>Check out the <a href="https://docs.particular.net/persistence/cosmosdb/saga-concurrency">Cosmos DB persistence documentation page for saga concurrency</a> for more details on how to use and tune pessimistic locking to get the best out of your endpoints with high-contention sagas.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>With Cosmos DB persistence version 1.1, it’s even easier to create a Cosmos DB system, align it to your partitioning scheme, and then manage its performance.</p><p>To learn more about Cosmos DB and NServiceBus, check out our <a href="https://docs.particular.net/persistence/cosmosdb/">Cosmos DB persistence documentation</a>. If you’re currently using Azure Table Storage in your system, check out how to <a href="https://docs.particular.net/persistence/cosmosdb/migration-from-azure-table">migrate from Azure Table storage to Cosmos DB</a>. We’ve also got several <a href="https://docs.particular.net/samples/cosmosdb/">code samples</a> showing how to use Cosmos DB with NServiceBus.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;The key to a successful Cosmos DB system is its data partitioning strategy. Like the rows of shrubs in a hedge maze, the logical partitions that divide data must be carefully planned, because that affects the scalability of the system and defines the boundaries for logical transactions.&lt;/p&gt;
&lt;p&gt;In version 1.1 of our &lt;a href=&quot;https://docs.particular.net/persistence/cosmosdb/&quot;&gt;CosmosDB persistence package&lt;/a&gt;, we’ve made defining the partition key for each message processed by NServiceBus much more straightforward, without needing a custom pipeline behavior. We’ve also added pessimistic concurrency support for more reliable processing of sagas with high contention patterns.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>Supercharging saga development</title>
    <link href="https://particular.net/blog/supercharging-saga-development-2022"/>
    <id>https://particular.net/blog/supercharging-saga-development-2022</id>
    <published>2022-03-29T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.180Z</updated>
    
    <content type="html"><![CDATA[<p>Sagas are one of the most powerful tools available in the NServiceBus toolbox. With a saga, a business process that would otherwise have been implemented as a clunky batch job <sup id="fnref:1:290322"><a href="#fn:1:290322" rel="footnote">1</a></sup> can be built in a much more elegant and real-time manner.</p><p>We’ve focused on supercharging your ability to develop NServiceBus sagas in our latest round of releases. As a result, you’re going to feel like you’ve got your own “<a href="https://en.wikipedia.org/wiki/HUD_(video_gaming)">heads-up display</a>” when developing sagas. We’ll give you suggestions and point out problems before you even hit compile. You focus on your business logic.</p><span id="more"></span><p>Let’s take a look at the new features.</p><h2 id="saga-analyzers"><a class="markdown-anchor" href="#saga-analyzers">🔗</a>Saga analyzers</h2><p>Sagas have a powerful API, but it’s limited by the confines of C#. You can create a class that is a lousy saga but is still perfectly valid C# code. With <a href="https://docs.microsoft.com/en-us/visualstudio/code-quality/roslyn-analyzers-overview">Roslyn analyzers and code fixes</a>, we can now do a lot better and help guide you toward the <a href="https://blog.codinghorror.com/falling-into-the-pit-of-success/">pit of success</a>.</p><p>In NServiceBus version 7.7, we’ve added a variety of Roslyn analyzers that distill many of our saga development best practices and make them available as hints in your Error List window and as red squiggles <sup id="fnref:2:290322"><a href="#fn:2:290322" rel="footnote">2</a></sup> directly in your code.</p><p>Some of the diagnostics created by the new analyzers simply elevate a runtime error to compile time. 
This will save you time, as you won’t have to run your code to find out that <a href="https://docs.particular.net/nservicebus/sagas/analyzers#correlation-id-property-must-be-a-supported-type">you can’t use <code>DateTime</code> as a correlation property type</a>.</p><p>Others will save you time by doing some of the coding for you. Check out how you can now generate the code for the <code>ConfigureHowToFindSaga</code> method when adding an <code>IAmStartedByMessages&lt;T&gt;</code> to your saga:</p><p><img src="/images/blog/2022/supercharging-sagas/generate-configurehowtofindsaga.png" alt="ConfigureHowToFindSaga method generated by analyzer"></p><p>That’s for a new saga, where the correlation id <code>OrderId</code> hasn’t been defined yet. Watch what happens when we add another message <code>OrderBilled</code> that already has a matching <code>OrderId</code> property:</p><p><img src="/images/blog/2022/supercharging-sagas/detects-existing-correlation-id.png" alt="ConfigureHowToFindSaga method generated by analyzer when matching correlation id already known"></p><p>Since the analyzer already knows that the saga data’s <code>OrderId</code> property is the correlation id, and the <code>OrderBilled</code> message also contains an <code>OrderId</code> property, the generated code already includes the correct mapping, and there’s nothing more to do.</p><p>We created a bunch of analyzer diagnostics—15 in total—to supercharge your saga development. Check out <a href="https://docs.particular.net/nservicebus/sagas/analyzers">Roslyn analyzers for sagas</a> in our documentation for all the details.</p><h2 id="saga-scenario-testing"><a class="markdown-anchor" href="#saga-scenario-testing">🔗</a>Saga scenario testing</h2><p>While we were making it easier to write sagas, we thought it was also essential to make it easier to test them. 
So the newest version of our testing framework now includes tools to perform <strong>saga scenario testing</strong>, which are more expressive than testing sagas with standard unit tests.</p><p>For a long time, we’ve had the <a href="https://www.nuget.org/packages/NServiceBus.Testing">NServiceBus.Testing package</a> which provided a means to <a href="https://docs.particular.net/nservicebus/testing/">write unit tests on message handlers</a>, both regular handlers and those found inside sagas.</p><p>This has always worked pretty well for regular message handlers, but it has always seemed a bit clunky for testing sagas. You have to understand too much about how sagas work internally to write effective and accurate unit tests, or you can miss important details. For example, did you know that NServiceBus will automatically assign the saga data’s correlation property value with the value from the first incoming message? How about that an external message handler replying to a saga will include the <code>SagaId</code> in a message header, which means a mapping in the <code>ConfigureHowToFindSaga</code> method isn’t required? These details can really trip you up, leading to tests that don’t test what you think they are.</p><p>There was also no way to test the <code>ConfigureHowToFindSaga</code> method. <em>At all.</em> Unit tests have to call the saga’s <code>Handle</code> methods directly, without exercising any mapping expressions inside the <code>ConfigureHowToFindSaga</code> method, so you just had to hope <em>really hard</em> that your mappings were all correct.</p><p>With our new <a href="https://docs.particular.net/nservicebus/testing/saga-scenario-testing">saga scenario testing framework</a>, you can now do more than make assertions on the result of one message handler. 
Instead, the framework enables testing the result of a whole series of messages (a scenario) at a time, where handling each message exercises the <code>ConfigureHowToFindSaga</code> mappings to load the saga data from a virtual data store managed by the test.</p><p>This enables tests like “Is the <code>OrderShipped</code> event published after the <code>OrderPlaced</code> and <code>OrderBilled</code> are received, and the <a href="https://docs.particular.net/tutorials/nservicebus-sagas/2-timeouts/">buyer’s remorse</a> period has ended?” or “If the messages arrive in a different order than I expect, will the order still be shipped?”</p><p>The testable saga also keeps track of a virtual <code>CurrentTime</code>, stores saga timeouts internally, and plays them only when you call the <code>AdvanceTime</code> method. This gives you complete control over the scenario and enables testing race conditions—what happens if a specific message arrives before a timeout fires, or the other way around?</p><p>We know this new testing framework will make your sagas easier to test. Additionally, we hope that the tests you write with it will be more expressive. With scenario testing, each test can tell a complete story that documents the behavior you expect the saga to exhibit.</p><h2 id="critical-time-metric"><a class="markdown-anchor" href="#critical-time-metric">🔗</a>Critical time metric</h2><p>In NServiceBus version 7.7, we’re adding information to outgoing messages to more accurately calculate the <a href="https://docs.particular.net/monitoring/metrics/definitions#metrics-captured-critical-time">critical time metric</a>, which tells you how long it takes, from the moment a message is sent, for it to be fully processed.
This metric is essential to ensure that you meet your message processing SLAs and can also be used as a trigger to scale out infrastructure when it becomes apparent that the SLA will be breached.</p><p>We realized that critical time had a serious flaw when it came to sagas: delayed messages caused by saga timeouts <em>include the delay</em> in addition to the delivery and processing time. So if you had saga timeouts from a year ago (which is a normal part of some business processes!), it would look like your critical time was one year, when there’s actually nothing wrong with your system performance.</p><p>To fix this problem, a new <code>DeliverAt</code> message header has been added to outgoing messages that include a delay to calculate the critical time attribute more accurately.</p><p>We also updated the critical time calculation in <a href="https://www.nuget.org/packages/NServiceBus.Metrics/3.1.0">NServiceBus.Metrics version 3.1</a> to use this new information. If you update your system with NServiceBus 7.7 and NServiceBus.Metrics 3.1, the <a href="https://docs.particular.net/monitoring/metrics/in-servicepulse">performance metrics in ServicePulse</a> will reflect the new method of calculating critical time information.</p><p>A future release of ServiceControl will update how critical time is displayed for audit messages in ServiceInsight.</p><h2 id="saga-not-found-logging"><a class="markdown-anchor" href="#saga-not-found-logging">🔗</a>Saga not found logging</h2><p>One small (but important!) way we’re always trying to improve NServiceBus is through our logging and error messages. We want to make sure the exception and log messages we put in front of you are clear and let you know precisely what you need to do. So, as part of NServiceBus 7.7, we adjusted the logging that occurs when saga data is not found.</p><p>If a message is handled by a saga <em>but does not start the saga</em>, NServiceBus will log a message. 
Of course, it’s possible the saga has done its job and doesn’t care about that type of message anymore, so it’s not necessarily a problem—but it could be.</p><p>Previously the log message indicated that a saga was not found for a specific message type. However, the log would not include the type of saga that was not found. In cases where a message was handled by multiple sagas, the message would be shown <em>only</em> if saga data couldn’t be found for <em>any</em> of them.</p><p>In NServiceBus 7.7, we’ve made this a lot clearer. For each saga type where the data could not be found, we now log:</p><blockquote><p>Could not find a started saga of <code>ShippingPolicy</code> for message type <code>OrderPlaced</code>.</p></blockquote><p>And if all sagas that handle a message are not found, we log:</p><blockquote><p>Could not find any started sagas for message type <code>OrderPlaced</code>. Going to invoke SagaNotFoundHandlers.</p></blockquote><p>This additional saga type information in the log should provide more clarity when investigating these types of situations.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>We’re always trying to make development with NServiceBus simpler, easier, and more powerful. 
Whether it’s our new <a href="https://docs.particular.net/nservicebus/sagas/analyzers">saga analyzers</a>, our new <a href="https://docs.particular.net/nservicebus/testing/saga-scenario-testing">scenario testing framework</a>, or our fixes for critical time and saga not found logging, we hope you’ll find something that will supercharge your own saga development.</p><p>It may not be an actual video game heads-up display, but we do what we can.</p><p>To get all these updates, you’ll want to update to <a href="https://www.nuget.org/packages/NServiceBus/7.7.0">NServiceBus 7.7.0</a>, <a href="https://www.nuget.org/packages/NServiceBus.Testing/7.4.0">NServiceBus.Testing 7.4.0</a>, and <a href="https://www.nuget.org/packages/NServiceBus.Metrics/3.1.0">NServiceBus.Metrics 3.1.0</a>. Keep in mind that you might be using NServiceBus.Metrics as a transitive dependency of <a href="https://www.nuget.org/packages/NServiceBus.Metrics.ServiceControl/">NServiceBus.Metrics.ServiceControl</a> or <a href="https://www.nuget.org/packages/NServiceBus.ServicePlatform.Connector/">NServiceBus.ServicePlatform.Connector</a>, and because NuGet will only load the lowest matching version of a transitive dependency, you will need to add an explicit package reference to NServiceBus.Metrics 3.1.0 to get the critical time update.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Sagas are one of the most powerful tools available in the NServiceBus toolbox. With a saga, a business process that would otherwise have been implemented as a clunky batch job &lt;sup id=&quot;fnref:1:290322&quot;&gt;&lt;a href=&quot;#fn:1:290322&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; can be built in a much more elegant and real-time manner.&lt;/p&gt;
&lt;p&gt;We’ve focused on supercharging your ability to develop NServiceBus sagas in our latest round of releases. As a result, you’re going to feel like you’ve got your own “&lt;a href=&quot;https://en.wikipedia.org/wiki/HUD_(video_gaming)&quot;&gt;heads-up display&lt;/a&gt;” when developing sagas. We’ll give you suggestions and point out problems before you even hit compile. You focus on your business logic.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>When distributed systems get frustrated</title>
    <link href="https://particular.net/blog/when-distributed-systems-get-frustrated-nservicebus-7-6"/>
    <id>https://particular.net/blog/when-distributed-systems-get-frustrated-nservicebus-7-6</id>
    <published>2022-01-19T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.179Z</updated>
    
    <content type="html"><![CDATA[<p>One of the greatest ever contributions to video games was the invention of the pause button. There are some sequences—those that require absolute perfect timing—where your repeated failure can make you so frustrated you just need to pause, walk away, and try again later until you get it right.</p><span id="more"></span><div class="text-center"><figure class="figure"><img src="/images/blog/2021/modified-mario-screenshot.png" class="figure-img img-fluid rounded" alt="Screenshot of modified Super Mario Bros. level 1-1 with fire rods everywhere." /><figcaption><a href="https://www.youtube.com/watch?v=eb60pnjABGg&t=15s">I'm not sure that the pause button is going to help on this one</a></figcaption></figure></div><p>Distributed systems can “get frustrated” too. One tiny thing goes wrong, and suddenly every message starts to fail. If you’re lucky, it just throws a bunch of errors. If you’re unlucky, it goes into a tight loop of failure that results in a hefty bill from your cloud provider.</p><h2 id="failure-on-repeat"><a class="markdown-anchor" href="#failure-on-repeat">🔗</a>Failure on repeat</h2><p>Distributed software systems built with NServiceBus are pretty great about dealing with all kinds of failure. From <a href="/blog/but-all-my-errors-are-severe">transient to systemic exceptions</a> and everything in-between, systems built on reliably exchanging messages are resistant to failure, but that doesn’t prevent issues from occurring.</p><p>If your message handler relies on a 3rd-party service, and that service is not available, NServiceBus will keep retrying that message and won’t lose the data contained in the message. Eventually, that message gets sent to an error queue. 
And if you have high message throughput, you might have a <em>lot of messages</em> headed for an error queue, all of which are going to need to be <a href="https://docs.particular.net/tutorials/message-replay/">replayed later</a>.</p><p>In cloud environments, this can be especially problematic. (Read: <em>expensive!</em>) Every single attempt to process a message costs money, which is silly when we already know trying to process messages is going to be futile until the 3rd-party service is available again. You literally have to pay money to accomplish nothing.</p><h2 id="what-to-do-with-consecutive-failures"><a class="markdown-anchor" href="#what-to-do-with-consecutive-failures">🔗</a>What to do with consecutive failures</h2><p>NServiceBus 7.6 now tracks the number of consecutive failures and lets you take action to minimize their effect on the system.</p><p>For example, if your message handlers are all calling an unavailable 3rd-party service, all messages will likely fail. After, say, 10 consecutive failures, it should be clear that something more serious is going on. <sup id="fnref:1:190122"><a href="#fn:1:190122" rel="footnote">1</a></sup></p><p>Now you can change how messages are processed to prevent flooding the error queue. After enough consecutive failures, the endpoint can enter a throttled mode where NServiceBus will only attempt to process one message at a time at a rate you specify.</p><p>It’s just like pushing pause in a video game and walking away. Instead of trying over and over as fast as possible, the endpoint walks away for a while. Instead of sending every message to the error queue, the system attempts one message per second to see if the situation has improved.</p><p>It doesn’t have to be a long wait—trying one message every few seconds usually works pretty well.
The critical point is that when the system becomes fully operational again, there are only a handful of messages in the error queue instead of hundreds or thousands.</p><p>And if you’re in the cloud, the system didn’t just spend a small fortune chasing its tail.</p><h2 id="rate-limiting-on-consecutive-failures"><a class="markdown-anchor" href="#rate-limiting-on-consecutive-failures">🔗</a>Rate limiting on consecutive failures</h2><p>To enable the throttled one-message-at-a-time processing mode, we have introduced an API on the <code>RecoverabilitySettings</code> class:</p><pre><code class="language-csharp">var recoverability = endpointConfiguration.Recoverability();
recoverability.OnConsecutiveFailures(10,
  new RateLimitSettings(
    timeToWaitBetweenThrottledAttempts: TimeSpan.FromSeconds(1),
    onRateLimitStarted: () =&gt; Console.Out.WriteLineAsync(&quot;Rate limiting started&quot;),
    onRateLimitEnded: () =&gt; Console.Out.WriteLineAsync(&quot;Rate limiting stopped&quot;)));</code></pre><p>With this setting, the endpoint will switch to a rate-limited mode after it experiences 10 consecutive failures. By default, this mode will change the endpoint concurrency to 1 and wait 1 second after each attempt. However, as soon as a single message is processed successfully, the endpoint will revert to the regular processing mode with the previous concurrency setting and no delay after attempts.</p><p>The <code>RateLimitSettings</code> class allows you to configure the delay between processing attempts and take action when rate-limiting starts and stops.</p><p>The exact settings you use depend on the circumstances for each endpoint. How many consecutive failures should determine a persistent failure state? And how often do you want to check to see if things have improved? That’s up to you.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>When you get frustrated in a video game, sometimes the best thing is to pause the game and walk away.
Then, a bit later, you come back more relaxed, pick up the controller, and nail it on the first try.</p><p>NServiceBus 7.6 lets you do the same thing with your distributed system. Instead of going into “failure on repeat” mode and generating a big cloud resource bill, NServiceBus can now notice the repeated failures and push pause, patiently waiting until conditions improve, and then it’s back to normal.</p><p>NServiceBus 7.6 is available now. You can download <a href="https://www.nuget.org/packages/NServiceBus/7.6.0">NServiceBus 7.6 from NuGet</a>, read the <a href="https://github.com/Particular/NServiceBus/releases/7.6.0">release notes</a>, or check out the <a href="https://docs.particular.net/nservicebus/recoverability/#automatic-rate-limiting">automatic rate-limiting documentation</a>.</p><p>Just remember it’s healthy to step away once in a while.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;One of the greatest ever contributions to video games was the invention of the pause button. There are some sequences—those that require absolute perfect timing—where your repeated failure can make you so frustrated you just need to pause, walk away, and try again later until you get it right.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>RPC vs. Messaging – which is faster?</title>
    <link href="https://particular.net/blog/rpc-vs-messaging-which-is-faster"/>
    <id>https://particular.net/blog/rpc-vs-messaging-which-is-faster</id>
    <published>2021-09-21T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.179Z</updated>
    
    <content type="html"><![CDATA[<p>Sometimes developers only care about speed. Ignoring all the other advantages messaging has, they’ll ask us the following question:</p><blockquote><p>“But isn’t RPC faster than messaging?”</p></blockquote><p>In place of RPC, <sup id="fnref:1:210921"><a href="#fn:1:210921" rel="footnote">1</a></sup> they may substitute a different term or technology like REST, microservices, gRPC, WCF, Java RMI, etc. However, no matter the specific word used, the meaning is the same: remote method calls over HTTP. So we’ll just use “RPC” for short.</p><p>Some will claim that any type of RPC communication ends up being faster (meaning it has lower latency) than any equivalent invocation using asynchronous messaging. But the answer isn’t that simple. It’s less of an apples-to-oranges comparison and more like apples-to-orange-sherbet.</p><p>Let’s take a look at the bigger picture.</p><span id="more"></span><h2 id="why-rpc-is-“faster”"><a class="markdown-anchor" href="#why-rpc-is-“faster”">🔗</a>Why RPC is “faster”</h2><p>It’s tempting to simply write a micro-benchmark test where we issue 1000 requests to a server over HTTP and then repeat the same test with asynchronous messages. But to be fair, we also have to process the messages on the server before we can consider the messaging case to be complete.</p><p>If you did such a benchmark, here’s an incomplete picture you might end up with:</p><div class="text-center"><figure class="figure"><img src="/images/blog/2021/rpc-vs-messaging/1-microbenchmark.png" class="figure-img img-fluid rounded" alt="Graph of microbenchmark showing RPC is faster than messaging. But is it?" /><figcaption>Graph of microbenchmark showing RPC is faster than messaging. But is it?</figcaption></figure></div><p>Initially, the messaging solution takes longer to complete than RPC. And this makes sense! 
After all, in the HTTP case, we open a direct socket connection from the client to the server, the server executes its code, and then returns a response on the already-open socket. In the messaging case, we need to send a message, write that message to disk, and then another process needs to pick it up and process it. There are more steps, so the increased latency is easily explained.</p><p>But that’s just a micro-benchmark and doesn’t tell you the whole story. Anyone who shows you a graph like that and says “RPC is faster” is either lying or selling you something. <sup id="fnref:2:210921"><a href="#fn:2:210921" rel="footnote">2</a></sup></p><h2 id="threads-and-memory"><a class="markdown-anchor" href="#threads-and-memory">🔗</a>Threads and memory</h2><p>Unfortunately, the web servers serving your RPC request won’t scale linearly forever, which becomes a big problem.</p><p>What you typically see nowadays in a “microservices architecture” using RPC <sup id="fnref:3:210921"><a href="#fn:3:210921" rel="footnote">3</a></sup> is not a single RPC call, but instead, one service calling another service, which calls another service, <em>and so on</em>. Even a service that doesn’t turn around and call another service usually has to do something like talk to a database, which is another form of RPC.</p><p>What happens with threads and memory when you’re doing these remote calls?</p><p>When you begin a remote call, any memory you had allocated needs to be preserved until you get a response back. You may not even be thinking about this as you’re coding, but whatever variables you’ve declared before the RPC call must retain their values. 
Otherwise, you won’t be able to use them once you have your response.</p><p>Meanwhile, the garbage collector (or whatever manages memory in your runtime environment) is trying to make things efficient by cleaning up memory that’s not used anymore.</p><p>Garbage collectors are designed under the assumption that memory should be cleaned up reasonably quickly. So in relatively short order, the garbage collector will perform Generation 0 (Gen0) collection, in which it will ask your thread, “Are you done with that memory yet?” Nope, as it turns out, you’re still waiting for a response from the RPC call. “No problem,” the garbage collector will say, “I’ll come back and check with you later.” And it marks that memory as Generation 1 (Gen1), so it knows not to bother your thread again too soon.</p><p>Around ~50,000 CPU operations later, the garbage collector will come around for a Gen1 memory collection. This is a <em>long</em> time in terms of CPU cycles, but it’s maybe about 50 microseconds for us humans, which isn’t much at all. It’s also not a long time in terms of a remote call, which is <em>way slower</em> than a local function execution. “Are you done with that memory now?” Your thread is shocked—doesn’t the garbage collector understand how long remote calls take? “No problem,” the collector says, “I’ll come back later.” And it marks your memory as Gen2.</p><p>The actual timings of the garbage collector’s activity will vary on a <strong>lot</strong> of things, but the point is that your memory can be put into Gen2 before your RPC call even completes. This is important because the garbage collector doesn’t <em>actively</em> clean up Gen2 memory. So even if you get a response back from the server and your method completes, your Gen2 memory may not be cleaned up except for <code>IDisposable</code> objects.</p><p>Regular memory just sits there in Gen2. 
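</p><p>You can watch this promotion happen with a few lines of plain .NET code. This is a simplified illustration (each forced collection here stands in for the collections that occur naturally while a thread waits on a remote call):</p><pre><code class="language-csharp">using System;

class GenerationDemo
{
    static void Main()
    {
        // Memory allocated before the &quot;remote call&quot; begins
        var state = new byte[1024];
        Console.WriteLine(GC.GetGeneration(state)); // 0: freshly allocated

        // The object survives a collection, so it gets promoted
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(state)); // typically 1 now

        // Survive another collection and it lands in Gen2
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(state)); // typically 2 now

        GC.KeepAlive(state); // keep the object rooted, like a pending response would
    }
}</code></pre><p>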
You essentially have a minor memory leak when you’re invoking remote calls if those calls take enough time to come back. This memory accrues until the system is under enough load that it can’t allocate additional memory anymore.</p><p>Then the garbage collector says, “Uh-oh, I guess I better do something about this Gen2 memory.”</p><h2 id="stop-the-world"><a class="markdown-anchor" href="#stop-the-world">🔗</a>Stop the world</h2><p>This is where the throughput of an RPC system starts to go off the rails.</p><p>The garbage collector has already tried to clean up Gen2 memory twice, and it’s obviously being actively used by the thread. So the garbage collector’s only choice is to suspend all the currently-executing threads in the process to clean up the Gen2 memory.</p><p>That’s when the throughput of your RPC system starts to look like this:</p><div class="text-center"><figure class="figure"><img src="/images/blog/2021/rpc-vs-messaging/2-rpc-flattening.png" class="figure-img img-fluid rounded" alt="Performance of RPC system flattens as threads are suspended for garbage collection" /><figcaption>Performance of RPC system flattens as threads are suspended for garbage collection</figcaption></figure></div><p>The scale on the <strong>Load</strong> axis is expanded beyond a micro-benchmark now. As the garbage collector starts suspending the threads of your process to clean up memory, all the clients waiting for a response <em>from you</em> now have to wait longer.</p><p>This creates a dangerous domino effect. As one part of your system gets slower, it responds more slowly to its clients, which means their memory goes into Gen2 faster, which means their garbage collector will suspend their threads more frequently, which means their clients must wait longer…you can see where this is going. The deeper your RPC call stacks are from one microservice to the next, the more accumulated memory pressure you have. 
And the more memory pressure you have, the more likely it is that you will find yourself in this sort of situation:</p><div class="text-center"><figure class="figure"><img src="/images/blog/2021/rpc-vs-messaging/3-rpc-out-of-memory.png" class="figure-img img-fluid rounded" alt="The RPC system has run out of memory and cannot spin up threads to handle requests" /><figcaption>The RPC system has run out of memory and cannot spin up threads to handle requests</figcaption></figure></div><p>On the right side of the graph, the process can’t spin up more threads to handle additional incoming requests because it ran out of memory. Meanwhile, you, the client, are receiving the exception “Connection refused by remote host.” You’re getting a response, but the server is saying, “Look, I’m too busy. You’re going to have to come back later.” It can’t afford to spin up more threads and is <strong>load shedding</strong>, which is the only mechanism an RPC system has to handle the excess load.</p><p>If all you have is a single client and a single server, this usually isn’t going to be a big deal. But the more small moving parts that you have, the more fragile the system will be.</p><h2 id="load-in-messaging-systems"><a class="markdown-anchor" href="#load-in-messaging-systems">🔗</a>Load in messaging systems</h2><p>Systems built on messaging, under load, will usually <em>exceed</em> the throughput of an RPC-based system.</p><p>Systems built on message queues don’t do load shedding like RPC systems because they have storage on disk to store incoming requests as they come in. This makes a queue-based system more resilient under a higher load than an RPC system. Instead of using threads and memory to hold onto requests, it uses durable disks. 
As a result, many more messages can be sitting in queues even while the system is processing at peak capacity.</p><p>This is why it’s like apples and orange sherbet to compare RPC and messaging using a micro-benchmark like at the beginning of this article: <em>If you’re allowed to throw away whatever requests you feel like, it’s not a fair comparison.</em> Messaging doesn’t do that.</p><p>In a messaging-based system, there’s usually no waiting around for responses from other microservices. I receive a message, I write something to my database, and maybe I send out additional messages, and then on to the next. Message-based microservices are running very much in parallel with each other. With no waiting, message processing under load will scale up to a certain point, based on a number you configure: the maximum number of concurrently processing messages. <sup id="fnref:4:210921"><a href="#fn:4:210921" rel="footnote">4</a></sup></p><p>With all the parallel processing and no waiting, the messaging architecture generally overtakes the RPC under load, resulting in a higher (and more importantly, stable) overall throughput.</p><div class="text-center"><figure class="figure"><img src="/images/blog/2021/rpc-vs-messaging/4-final.png" class="figure-img img-fluid rounded" alt="Messaging outperforms RPC systems under both medium and heavy loads" /><figcaption>Messaging outperforms RPC systems under both medium and heavy loads</figcaption></figure></div><p>How much higher throughput? It depends very much on the system, how you designed it, whether the database is the bottleneck, and a million other factors. But usually, the async processing model results in a more parallel-processing result, resulting in higher throughput for your system than the synchronous blocking RPC model.</p><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>Anytime we use a synchronous RPC model, there’s always a risk of an “epic fail” scenario. 
There’s always a risk that an RPC system will start running out of threads and memory, the garbage collector will start suspending threads more frequently, the system will do more housekeeping than business work, and soon after, it will fail.</p><p>Systems built on asynchronous messaging won’t fail like that. Even if the RPC system doesn’t fail, the messaging system will usually exceed the throughput of an RPC system.</p><p>If you’d like to learn how to build messaging systems this way, join me for a live webinar <a href="/webinars/live-coding-your-first-nservicebus-system">Live coding your first NServiceBus system</a> where I’ll show you all the fundamental messaging concepts you need to understand to build an effective distributed system using messaging instead of relying on RPC communication.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Sometimes developers only care about speed. Ignoring all the other advantages messaging has, they’ll ask us the following question:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“But isn’t RPC faster than messaging?”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In place of RPC, &lt;sup id=&quot;fnref:1:210921&quot;&gt;&lt;a href=&quot;#fn:1:210921&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; they may substitute a different term or technology like REST, microservices, gRPC, WCF, Java RMI, etc. However, no matter the specific word used, the meaning is the same: remote method calls over HTTP. So we’ll just use “RPC” for short.&lt;/p&gt;
&lt;p&gt;Some will claim that any type of RPC communication ends up being faster (meaning it has lower latency) than any equivalent invocation using asynchronous messaging. But the answer isn’t that simple. It’s less of an apples-to-oranges comparison and more like apples-to-orange-sherbet.&lt;/p&gt;
&lt;p&gt;Let’s take a look at the bigger picture.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
  <entry>
    <title>What&#39;s new with NServiceBus and Azure Functions</title>
    <link href="https://particular.net/blog/whats-new-azure-functions-1-3"/>
    <id>https://particular.net/blog/whats-new-azure-functions-1-3</id>
    <published>2021-08-31T00:00:00.000Z</published>
    <updated>2026-04-20T11:16:19.179Z</updated>
    
<content type="html"><![CDATA[<p>Do you think Azure Functions are pretty great? Us too! Do you hate boilerplate code? Yeah, us too.</p><p>Have you heard of C# source generators <sup id="fnref:1:310821"><a href="#fn:1:310821" rel="footnote">1</a></sup> and thought they sounded pretty cool but didn’t really know how they could be useful?</p><p>In the newest version of our Azure Functions integration, we’ve used source generators to reduce the boilerplate needed to set up an NServiceBus endpoint on Azure Service Bus down to just a few lines of code.</p><span id="more"></span><p>Now, this code is all it takes to write transactionally consistent NServiceBus handlers inside of your Azure Function project.</p><pre><code class="language-csharp">[assembly: FunctionsStartup(typeof(Startup))]
[assembly: NServiceBusTriggerFunction(&quot;MyEndpoint&quot;, SendsAtomicWithReceive = true)]

class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder) =&gt;
        builder.UseNServiceBus();
}</code></pre><p>From those attributes, we’ll generate an Azure Function trigger, wire it up to an Azure Service Bus queue, and manage flowing transactions around for you. All you have to do is supply the business logic. Intrigued? Let’s dive in to see what’s new with NServiceBus and Azure Functions.</p><h2 id="automatic-trigger-function-generation"><a class="markdown-anchor" href="#automatic-trigger-function-generation">🔗</a>Automatic trigger function generation</h2><p>Both NServiceBus and Azure Functions provide abstractions over receiving and handling messages from an Azure Service Bus queue. To get them working together, we need a bit of boilerplate code to create a functions trigger that passes everything needed to NServiceBus.
In the 1.0 release, that looked something like this:</p><pre><code class="language-csharp">using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using NServiceBus;

class FunctionEndpointTrigger
{
    readonly IFunctionEndpoint endpoint;

    public FunctionEndpointTrigger(IFunctionEndpoint endpoint)
    {
        this.endpoint = endpoint;
    }

    [FunctionName(&quot;NServiceBusFunctionEndpointTrigger-ASBTriggerQueue&quot;)]
    public async Task Run(
        [ServiceBusTrigger(queueName: &quot;ASBTriggerQueue&quot;)]
        Message message,
        ILogger logger,
        ExecutionContext executionContext)
    {
        await endpoint.Process(message, executionContext, logger);
    }
}</code></pre><p>We didn’t like all that boilerplate, so we picked out the important details that <em>need</em> to be configured, and we used source generators to allow you to create an Azure Function trigger that maps to an NServiceBus endpoint with a simple attribute.</p><pre><code class="language-csharp">[assembly: NServiceBusTriggerFunction(&quot;ASBTriggerQueue&quot;)]</code></pre><p>Dropping this attribute into your Azure Functions project will automatically generate the same common code shown above at compile time. Then, you can delete your custom trigger altogether.
We believe that most of the code in your project should be business logic for handling messages.</p><p>You can read more about this feature <a href="https://docs.particular.net/nservicebus/hosting/azure-functions/service-bus#basic-usage-azure-function-queue-trigger-for-nservicebus">in our documentation</a>.</p><h2 id="consistent-messaging"><a class="markdown-anchor" href="#consistent-messaging">🔗</a>Consistent messaging</h2><p>We also used source generators to make it really easy to use the transactional processing in Azure Service Bus that gives you consistency between your incoming and outgoing messages.</p><p>Imagine a message handler that looks like this:</p><pre><code class="language-csharp">class ProcessOrderMessageHandler : IHandleMessages&lt;ProcessOrder&gt;
{
  public async Task Handle(ProcessOrder message, IMessageHandlerContext context)
  {
    await context.Send(new BillOrder { OrderId = message.OrderId });
    await context.Send(new CreateShippingLabel { OrderId = message.OrderId });
    await context.Publish(new OrderAccepted { OrderId = message.OrderId });
  }
}</code></pre><p>When everything is working, this handler is fine. A <code>ProcessOrder</code> message comes in, and three messages are produced: a <code>BillOrder</code> command, a <code>CreateShippingLabel</code> command, and an <code>OrderAccepted</code> event.</p><p>But what happens if one of those messages fails to be sent? What if <code>BillOrder</code> and <code>CreateShippingLabel</code> are sent, but something goes wrong, and the <code>OrderAccepted</code> event cannot be published? This can be caused by anything from a missing event topic to a momentary network glitch.</p><p>If left unchecked, this situation will result in duplicate <code>BillOrder</code> and <code>CreateShippingLabel</code> messages being sent each time the <code>ProcessOrder</code> handler is retried.</p><p>We definitely do not want to bill the customer multiple times nor ship them multiple orders.
What we want is for the entire operation to succeed or fail atomically. Either all three outgoing messages are produced, or none of them are. If they <em>are</em> produced, then the incoming message should be marked as complete.</p><p>Getting this right is not always easy. You need to make sure the incoming message and all of the outgoing messages use the same transaction, and you need to ensure that the message will not be auto-completed by the incoming Service Bus Trigger. Getting it wrong can lead to some subtle bugs that are difficult to detect in a production environment.</p><p>So, we made it easy to get it right with just one line of code. If you want to enable transactional consistency between incoming and outgoing messages in your Azure Function NServiceBus endpoint, just tell us, and we’ll take care of the rest:</p><pre><code class="language-csharp">[assembly: NServiceBusTriggerFunction(&quot;MyEndpoint&quot;, SendsAtomicWithReceive = true)]</code></pre><p>This enables the <a href="https://docs.particular.net/transports/azure-service-bus/transaction-support#sends-atomic-with-receive">sends atomic with receive transport transaction mode for ServiceBus</a> and integrates it correctly with the Azure Functions host. You can read more about this feature <a href="https://docs.particular.net/nservicebus/hosting/azure-functions/service-bus#message-consistency">in our documentation</a>.</p><h2 id="iconfiguration"><a class="markdown-anchor" href="#iconfiguration">🔗</a>IConfiguration</h2><p>By embracing the <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/">Microsoft <code>IConfiguration</code> API</a>, we’ve made the setup of your Azure Functions endpoint even simpler.</p><p>If you have used any of the new hosting models from Microsoft, you probably are familiar with the new <code>IConfiguration</code> API. This interface allows you to load configuration from files, environment variables, and other sources.
This interface is available in Azure Functions as well, as shown here:</p><pre><code class="language-csharp">public class Startup : FunctionsStartup
{
    public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
    {
        FunctionsHostBuilderContext context = builder.GetContext();
        var jsonConfig = Path.Combine(context.ApplicationRootPath, &quot;appsettings.json&quot;);

        builder.ConfigurationBuilder
            .AddJsonFile(jsonConfig, optional: true, reloadOnChange: false)
            .AddEnvironmentVariables();
    }
}</code></pre><p>The new release of NServiceBus Azure Functions embraces this configuration interface, so you can use that to configure your endpoints. This is a lot simpler:</p><pre><code class="language-csharp">public override void Configure(IFunctionsHostBuilder builder)
{
  var configuration = builder.GetContext().Configuration;

  builder.UseNServiceBus(() =&gt;
    new ServiceBusTriggeredEndpointConfiguration(&quot;NServiceBusFunctionEndpoint&quot;, configuration));
}</code></pre><p>This will automatically look up configuration settings such as connection strings and license info directly from the configured sources.
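</p><p>For example, when running locally, the connection string can live in <code>local.settings.json</code>. (This snippet is illustrative, not from the original post; <code>AzureWebJobsServiceBus</code> is the conventional Azure Functions setting name for the Service Bus connection, but your function may be configured to use a different one.)</p><pre><code class="language-json">{
  &quot;IsEncrypted&quot;: false,
  &quot;Values&quot;: {
    &quot;AzureWebJobsServiceBus&quot;: &quot;&lt;your-service-bus-connection-string&gt;&quot;
  }
}</code></pre><p>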
In fact, if you add a configuration variable called <code>ENDPOINT_NAME</code>, you can shrink your endpoint configuration to just this:</p><pre><code class="language-csharp">public override void Configure(IFunctionsHostBuilder builder) =&gt;
    builder.UseNServiceBus();</code></pre><p>Of course, if you don’t want to use <code>IConfiguration</code> to configure the endpoint or would rather do things the old way, the manual configuration overloads still exist:</p><pre><code class="language-csharp">public override void Configure(IFunctionsHostBuilder builder)
{
    var endpointConfig = new ServiceBusTriggeredEndpointConfiguration(&quot;NServiceBusFunctionEndpoint&quot;);

    var transport = endpointConfig.Transport;
    transport.ConnectionString(&quot;MyConnectionString&quot;);

    builder.UseNServiceBus(cfg =&gt; endpointConfig);
}</code></pre><h2 id="summary"><a class="markdown-anchor" href="#summary">🔗</a>Summary</h2><p>You hate boilerplate, and so do we. So with the power of C# source generators in the newest version of our Azure Functions integration, you can shrink your endpoint configuration from a couple dozen lines of code (or more!) to a couple of assembly-level attributes and a tiny <code>Startup</code> class containing the bare essentials:</p><pre><code class="language-csharp">[assembly: FunctionsStartup(typeof(Startup))]
[assembly: NServiceBusTriggerFunction(&quot;MyEndpoint&quot;, SendsAtomicWithReceive = true)]

class Startup : FunctionsStartup
{
    public override void Configure(IFunctionsHostBuilder builder) =&gt;
        builder.UseNServiceBus();
}</code></pre><p>If only we could do that for <em>all</em> your code.</p><p>To see NServiceBus on Azure Functions in action, check out our sample, <a href="https://docs.particular.net/samples/azure-functions/service-bus/">Using NServiceBus in Azure Functions with Service Bus triggers</a>.</p>]]></content>
    
    <summary type="html">
    
      &lt;p&gt;Do you think Azure Functions are pretty great? Us too! Do you hate boilerplate code? Yeah, us too.&lt;/p&gt;
&lt;p&gt;Have you heard of C# source generators &lt;sup id=&quot;fnref:1:310821&quot;&gt;&lt;a href=&quot;#fn:1:310821&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and thought they sounded pretty cool but didn’t really know how they could be useful?&lt;/p&gt;
&lt;p&gt;In the newest version of our Azure Functions integration, we’ve used source generators to reduce the boilerplate needed to set up an NServiceBus endpoint on Azure Service Bus down to just a few lines of code.&lt;/p&gt;
    
    </summary>
    
    
  </entry>
  
</feed>