Azure Service Bus Topology Description (#6997)
* More description

* Polymorphic

* Topology description done

* Apply suggestions from code review

Co-authored-by: Szymon Pobiega <szymon.pobiega@gmail.com>

---------

Co-authored-by: Daniel Marbach <danielmarbach@users.noreply.github.com>
Co-authored-by: Szymon Pobiega <szymon.pobiega@gmail.com>
3 people authored Feb 20, 2025
1 parent 14124f6 commit b46c6c8
Showing 4 changed files with 211 additions and 39 deletions.
45 changes: 44 additions & 1 deletion Snippets/ASBS/ASBS_5/Usage.cs
@@ -3,8 +3,10 @@
using System.Text;

using Azure.Identity;

using Azure.Messaging.ServiceBus;
using NServiceBus;
using NServiceBus.Transport;
using Shipping;

class Usage
{
@@ -52,7 +54,48 @@ class Usage

#endregion
#pragma warning restore CS0618 // Type or member is obsolete

        var topology = TopicTopology.Default;
        #region asb-interface-based-inheritance
        topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");
        #endregion

        #region asb-interface-based-inheritance-declined
        topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");
        topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderDeclined");
        #endregion

        #region asb-versioning-subscriber-mapping
        topology.SubscribeTo<IOrderAccepted>("Shipping.OrderAccepted");
        topology.SubscribeTo<IOrderAccepted>("Shipping.OrderAcceptedV2");
        #endregion

        #region asb-versioning-publisher-mapping
        topology.PublishTo<OrderAcceptedV2>("Shipping.OrderAccepted");
        #endregion

        #region asb-versioning-publisher-customization
        transport.OutgoingNativeMessageCustomization = (operation, message) =>
        {
            if (operation is MulticastTransportOperation multicastTransportOperation)
            {
                // Subject is used for demonstration purposes only, choose a property that fits your scenario
                message.Subject = multicastTransportOperation.MessageType.FullName;
            }
        };
        #endregion
    }

    class MyEvent;
}

namespace Shipping
{
    interface IOrderAccepted : IEvent { }
    interface IOrderStatusChanged : IEvent { }

    class OrderAccepted : IOrderAccepted, IOrderStatusChanged { }
    class OrderDeclined : IOrderAccepted, IOrderStatusChanged { }

    class OrderAcceptedV2 : IOrderAccepted, IOrderStatusChanged { }
}
@@ -92,25 +92,6 @@ will still receive those events automatically since the base name of `Shipping.O

which matches the newly created contract more closely.

#### Single topic design

The "single topic" design was chosen over a "topic per event type" design due to:

- Support for polymorphic events
- Stronger decoupling of the publishers and subscribers
- Simpler hierarchy of topics, subscriptions, and rules
- Simpler management and mapping from events to parts of the topology

The need to support polymorphism was the most substantial reason to implement a "single topic" design.

For simple events, or when support for polymorphism is not needed, it is possible to map the contract type to the topic since the relationship is 1:1. Such topologies are mostly built without automatically forwarding messages from subscriptions to the destination queues; the subscription acts as a virtual queue. Because subscriptions in non-forwarding mode share the topic quota, the quota may be reached when subscribers are offline or slow for some time, which can cause all publish operations to the topic to fail.

Enabling forwarding on the subscription resolves the problem. The benefit of the event-per-topic approach is that the filter rules can be the default catch-all rule expressed as `1 = 1`. Those are relatively straightforward for the broker to evaluate at runtime and do not impose a significant performance impact. The client-to-broker interaction grows in complexity, however, as soon as polymorphism is required. The subscriber must determine all the possible contracts of the event and create multiple subscriptions on multiple topics based on the contracts exposed. In some cases, the subscriber doesn't have access to the whole event contract the publisher publishes; complex manual mapping is then required to determine the publisher-subscriber relationship. Such a design imposes more coupling between the publisher and the subscriber, defeating the decoupling that publish/subscribe is intended to provide.

On the other side, the publisher must determine which topics the event should be published to and then issue multiple operations against multiple entities. These operations are subject to the latency between the client and the broker, might fail, and must be retried in case of transient errors. Once multiple subscriptions on multiple topics must be managed, duplicates are likely because the broker duplicates the original message per subscription. In the case of polymorphism, the publisher might publish the same event to multiple topics, which naturally leads to duplicates.

The complexity of managing all the topics, subscriptions, and rules can increase quickly. With the single topic design, only a common topic bundle and a subscription per endpoint containing the rules that match the enclosed message type headers are required. Polymorphism and de-duplication support are built into the design. The downside is that more advanced SQL filters for the rules can impact namespace performance when the broker evaluates them, which could impose a performance penalty in extremely high-throughput scenarios.

#### Topology highlights

| | |
167 changes: 167 additions & 0 deletions transports/azure-service-bus/topology_description_asbs_[5,).partial.md
@@ -0,0 +1,167 @@
The **topic-per-event** topology dedicates one Azure Service Bus topic to each *concrete* event type. This design moves away from the single “bundle” topic and its SQL or Correlation filters, thereby reducing filter overhead and distributing messages more evenly across multiple topics.

In the topic-per-event topology:

1. **Publishers** send an event to a specific topic named after the most concrete event type.
2. **Subscribers** each create a *subscription* under each topic that matches the event(s) they are interested in.
3. Because there is no single, central “bundle” topic to hold all messages, each published event flows to its own dedicated topic.

```mermaid
flowchart LR
subgraph Publisher
P[Publishes<br/>ConcreteEventA]
end
subgraph Service Bus
T1[Topic: ConcreteEventA]
T2[Topic: ConcreteEventB]
end
subgraph Subscriber
S1[Subscribes to<br/>ConcreteEventA]
S2[Subscribes to<br/>ConcreteEventB]
end
P -->|Publish| T1
S1 -->|Subscribe| T1
S2 -->|Subscribe| T2
```

This design can dramatically reduce filtering overhead, boosting performance and scalability. Distributing the messages across multiple topics avoids the single-topic bottleneck and mitigates the risk of hitting per-topic subscription and filter limits.
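Configuring this topology follows the mapping API shown in the snippets further below. The following is a minimal sketch; the `AzureServiceBusTransport` constructor shape and the namespace value are assumptions for illustration and should be adjusted to the transport version in use:

```csharp
using Azure.Identity;
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Subscriber");

// TopicTopology.Default selects the topic-per-event topology, where each
// event is published to a topic named after its most concrete type
var topology = TopicTopology.Default;

// Assumed constructor overload; "my-namespace" is a hypothetical value
var transport = new AzureServiceBusTransport(
    "my-namespace.servicebus.windows.net",
    new DefaultAzureCredential(),
    topology);

endpointConfiguration.UseTransport(transport);
```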

#### Quotas and limitations

A single Azure Service Bus topic [can hold up to 2,000 subscriptions, and each Premium namespace (with one messaging unit) can have up to 1,000 topics](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quotas).

- Subscriptions per topic: 2,000 (Standard/Premium).
- Topics per Premium namespace: 1,000 per messaging unit.
- Topic size: 5 GB quota per topic.

By allocating a separate topic for each concrete event type, the overall system can scale more effectively:

- Each topic is dedicated to one event type, so message consumption is isolated.
- The failure domain shrinks from the entire system to a single topic, so if any single topic hits its 5 GB quota, only that event type is affected.
- The limit of 1,000 topics per messaging unit can comfortably support hundreds of event types, especially considering that not all event types are high-volume.

> [!NOTE]
> If the system has numerous event types beyond these limits, an architectural review is recommended. Additional messaging units or other partitioning strategies may be required.

#### Subscription rule matching

In this topology, no SQL or Correlation filtering is required on the topic itself because all messages in a topic are of the same event type. Each topic subscription can therefore use the default catch-all rule (`1=1`).

Since there is only one event type per topic:

- Subscribers don’t need to manage large numbers of SQL or Correlation filters.
- Interface-based inheritance does require extra care if multiple interfaces or base classes are in play (see below).
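The transport manages subscriptions and rules automatically; for illustration only, a catch-all rule equivalent to the `$Default` rule could be created with the Azure administration client as sketched below (the topic, subscription, and namespace names are hypothetical):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient(
    "my-namespace.servicebus.windows.net", // hypothetical namespace
    new DefaultAzureCredential());

// TrueRuleFilter is the broker's built-in catch-all, equivalent to a `1=1` SQL filter.
// Every subscription gets such a rule (named "$Default") unless rules are customized.
await adminClient.CreateRuleAsync(
    topicName: "Shipping.OrderAccepted", // topic named after the concrete event type
    subscriptionName: "Subscriber",      // hypothetical subscription name
    options: new CreateRuleOptions("$Default", new TrueRuleFilter()));
```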

> [!NOTE]
> With the mapping API it is possible to multiplex multiple (related) events over the same topic. This is only advisable when all subscribers on the topic are interested in all of the (related) events; otherwise it would be necessary to re-introduce SQL or correlation filter rules, which can impact the throughput on the topic. By disabling auto-subscribe and removing the manage rights, the transport assumes all required events arrive in the input queue via forwarding on the subscriptions and never tries to update the existing rules, which allows tweaking the runtime behavior for even more complex multiplexing needs.

##### Interface-based inheritance

A published message type can have multiple interfaces in its hierarchy, each representing a valid message type. For example:

```csharp
namespace Shipping;

interface IOrderAccepted : IEvent { }
interface IOrderStatusChanged : IEvent { }

class OrderAccepted : IOrderAccepted, IOrderStatusChanged { }
class OrderDeclined : IOrderAccepted, IOrderStatusChanged { }
```

For a handler `class OrderAcceptedHandler : IHandleMessages<OrderAccepted>` the subscription will look like:

```mermaid
flowchart LR
subgraph Publisher
P[Publishes<br/>OrderAccepted]
end
subgraph Service Bus
T1[Topic: Shipping.OrderAccepted]
end
subgraph Subscriber
S1[Subscribes to<br/>OrderAccepted]
end
P -->|Publish| T1
S1 -->|Subscribe| T1
```

If the subscriber is interested only in the interface `IOrderStatusChanged`, it declares a handler `class OrderStatusChangedHandler : IHandleMessages<IOrderStatusChanged>` and maps the interface to the topics to which the types implementing that contract are published.

snippet: asb-interface-based-inheritance

When a publisher starts publishing `Shipping.OrderDeclined`, the event needs to be mapped

snippet: asb-interface-based-inheritance-declined

in order to opt into receiving the event in the subscriber's input queue, which therefore requires a topology change.

```mermaid
flowchart LR
subgraph Publisher
P[Publishes<br/>OrderAccepted<br/>OrderDeclined]
end
subgraph Service Bus
T1[Topic: Shipping.OrderAccepted]
T2[Topic: Shipping.OrderDeclined]
end
subgraph Subscriber
S1[Subscribes to<br/>OrderAccepted<br/>OrderDeclined]
end
P -->|Publish| T1
P -->|Publish| T2
S1 -->|Subscribe| T1
S1 -->|Subscribe| T2
```

##### Evolution of the message contract

As mentioned in [versioning of shared contracts](/nservicebus/messaging/sharing-contracts.md#versioning) and shown in the examples above, NServiceBus uses the fully qualified assembly name in the message header. [Evolving the message contract](/nservicebus/messaging/evolving-contracts.md) encourages creating entirely new contract types, adding a version number to the original name. For example, when evolving `Shipping.OrderAccepted`, the publisher would create a new contract called `Shipping.OrderAcceptedV2`. When the publisher publishes `Shipping.OrderAcceptedV2` events, those are published by default to the `Shipping.OrderAcceptedV2` topic, and existing subscribers interested in the previous version would not receive them. The following options are available:

- Publish both versions of the event on the publisher side to individual topics and set up the subscribers where necessary to receive both, _or_
- Multiplex all versions of the event to the same topic and filter the versions on the subscriber side with specialized filter rules

When publishing both versions of the event, the subscribers need to opt in to receiving those events by adding an explicit mapping:

snippet: asb-versioning-subscriber-mapping

When multiplexing all versions of the event to the same topic, the following configuration needs to be added on the publisher side:

snippet: asb-versioning-publisher-mapping

and then a customization that promotes the full name to a property of the native message

snippet: asb-versioning-publisher-customization

which would allow adding either a correlation filter (preferred) or a SQL filter to filter out based on the promoted full name.
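As a sketch of the correlation-filter option, a rule matching the full type name promoted into `Subject` by the publisher customization above could be created with the administration client (the topic, subscription, rule, and namespace names are hypothetical):

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient(
    "my-namespace.servicebus.windows.net", // hypothetical namespace
    new DefaultAzureCredential());

// Correlation filters are evaluated as cheap property equality checks,
// which is why they are preferred over SQL filters here
await adminClient.CreateRuleAsync(
    topicName: "Shipping.OrderAccepted", // the multiplexed topic
    subscriptionName: "Subscriber",      // hypothetical subscription name
    options: new CreateRuleOptions(
        "order-accepted-v2-only",
        new CorrelationRuleFilter { Subject = "Shipping.OrderAcceptedV2" }));
```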

#### Handling overflow and scaling

In the single-topic model, a high volume of messages for one event type can degrade overall system performance for all events when the topic saturates. With topic-per-event, each event type has its own 5 GB quota and its own topic partitioning, providing a more localized failure domain.

- Failure isolation: If one event type experiences a surge, only that topic can get throttled or fill its quota.
- Load distribution: The broker spreads load across multiple internal partitions, often improving throughput compared to a single large topic.

#### Observability

Monitoring is often simpler because each event type’s topic can be tracked with distinct metrics (message count, size, etc.). You can see which event types are experiencing spikes without filtering through a single large “bundle” topic.

#### Topology highlights

| | |
|---------------------------------------------|-------------------------------|
| Decoupled Publishers / Subscribers | yes |
| Polymorphic events support | yes (mapping API) |
| Event overflow protection | yes (per-topic) |
| Subscriber auto-scaling based on queue size | yes (queues) |
| Reduced complexity for non-inherited events | yes |
| Fine-grained resource usage / observability | yes (each topic is distinct) |
