Chapter 38. Clusters

38.1. Clusters Overview

HornetQ clusters allow groups of HornetQ servers to share message processing load. Each node in the cluster is an active HornetQ server which manages its own messages and handles its own connections. A server must be configured to be clustered: set the clustered element in the hornetq-configuration.xml configuration file to true (it is false by default).

The cluster is formed by each node declaring cluster connections to other nodes in the core configuration file hornetq-configuration.xml. When a node forms a cluster connection to another node, it internally creates a core bridge (as described in Chapter 36, Core Bridges) between itself and the other node. This is done transparently behind the scenes; you don't have to declare an explicit bridge for each node. These cluster connections allow messages to flow between the nodes of the cluster to balance load.

Nodes can be connected together to form a cluster in many different topologies; we will discuss a couple of the more common topologies later in this chapter.

We'll also discuss client side load balancing, where we can balance client connections across the nodes of the cluster, and we'll consider message redistribution where HornetQ will redistribute messages between nodes to avoid starvation.

Another important part of clustering is server discovery where servers can broadcast their connection details so clients or other servers can connect to them with the minimum of configuration.

38.2. Server discovery

Server discovery is a mechanism by which servers can broadcast their connection settings across the network. This is useful for two purposes:

  • Discovery by messaging clients. A messaging client wants to be able to connect to the servers of the cluster without having specific knowledge of which servers in the cluster are up at any one time. Messaging clients can be initialised with an explicit list of the servers in a cluster, but this is not flexible or maintainable as servers are added or removed from the cluster.

  • Discovery by other servers. Servers in a cluster want to be able to create cluster connections to each other without having prior knowledge of all the other servers in the cluster.

Server discovery uses UDP multicast to broadcast server connection settings. If UDP is disabled on your network you won't be able to use this, and will have to specify servers explicitly when setting up a cluster or using a messaging client.

38.2.1. Broadcast Groups

A broadcast group is the means by which a server broadcasts connectors over the network. A connector defines a way in which a client (or other server) can make connections to the server. For more information on what a connector is, please see Chapter 16, Configuring the Transport.

The broadcast group takes a set of connector pairs, where each pair contains connection settings for a live and an (optional) backup server, and broadcasts them on the network. It also defines the UDP address and port settings.

Broadcast groups are defined in the server configuration file hornetq-configuration.xml. There can be many broadcast groups per HornetQ server. All broadcast groups must be defined in a broadcast-groups element.

Let's take a look at an example broadcast group from hornetq-configuration.xml:

<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <local-bind-port>54321</local-bind-port>
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>1000</broadcast-period>
      <connector-ref connector-name="netty-connector" 
        backup-connector-name="backup-connector"/>
   </broadcast-group>
</broadcast-groups>

Some of the broadcast group parameters are optional and you'll normally use the defaults, but we specify most of them in the above example for clarity. Let's discuss each one in turn:

  • name attribute. Each broadcast group in the server must have a unique name.

  • local-bind-address. This is the local bind address that the datagram socket is bound to. If you have multiple network interfaces on your server, you would specify which one you wish to use for broadcasts by setting this property. If this property is not specified then the socket will be bound to the wildcard address, an IP address chosen by the kernel.

  • local-bind-port. If you want to specify a local port to which the datagram socket is bound you can specify it here. Normally you would just use the default value of -1 which signifies that an anonymous port should be used.

  • group-address. This is the multicast address to which the data will be broadcast. It is a class D IP address in the range 224.0.0.0 to 239.255.255.255, inclusive. The address 224.0.0.0 is reserved and is not available for use. This parameter is mandatory.

  • group-port. This is the UDP port number used for broadcasting. This parameter is mandatory.

  • broadcast-period. This is the period in milliseconds between consecutive broadcasts. This parameter is optional, the default value is 1000 milliseconds.

  • connector-ref. This specifies the connector and optional backup connector that will be broadcast (see Chapter 16, Configuring the Transport for more information on connectors). The connector to be broadcast is specified by the connector-name attribute, and the backup connector is specified by the backup-connector-name attribute. The backup-connector-name attribute is optional.

38.2.2. Discovery Groups

While the broadcast group defines how connector information is broadcasted from a server, a discovery group defines how connector information is received from a multicast address.

A discovery group maintains a list of connector pairs - one entry for each server it has received a broadcast from. As it receives broadcasts on the multicast group address from a particular server, it updates the entry in the list for that server.

If it has not received a broadcast from a particular server for a length of time it will remove that server's entry from its list.

Discovery groups are used in two places in HornetQ:

  • By cluster connections so they know what other servers in the cluster they should make connections to.

  • By messaging clients so they can discover which servers in the cluster they can connect to.

38.2.3. Defining Discovery Groups on the Server

For cluster connections, discovery groups are defined in the server side configuration file hornetq-configuration.xml. All discovery groups must be defined inside a discovery-groups element. There can be many discovery groups defined per HornetQ server. Let's look at an example:

<discovery-groups>
   <discovery-group name="my-discovery-group">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>

We'll consider each parameter of the discovery group:

  • name attribute. Each discovery group must have a unique name per server.

  • group-address. This is the multicast IP address of the group to listen on. It should match the group-address in the broadcast group that you wish to listen to. This parameter is mandatory.

  • group-port. This is the UDP port of the multicast group. It should match the group-port in the broadcast group that you wish to listen to. This parameter is mandatory.

  • refresh-timeout. This is the period the discovery group waits after receiving the last broadcast from a particular server before removing that server's connector pair entry from its list. You would normally set this to a value significantly higher than the broadcast-period on the broadcast group, otherwise servers might intermittently disappear from the list, even though they are still broadcasting, due to slight differences in timing. This parameter is optional, the default value is 10000 milliseconds (10 seconds).

38.2.4. Discovery Groups on the Client Side

Let's discuss how to configure a HornetQ client to use discovery to discover a list of servers to which it can connect. The way to do this differs depending on whether you're using JMS or the core API.

38.2.4.1. Configuring client discovery using JMS

If you're using JMS and you're also using the JMS Service on the server to load your JMS connection factory instances into JNDI, then you can specify which discovery group to use for your JMS connection factory in the server side xml configuration hornetq-jms.xml. Let's take a look at an example:

<connection-factory name="ConnectionFactory">
   <discovery-group-ref discovery-group-name="my-discovery-group"/>
    <entries>
       <entry name="ConnectionFactory"/>
    </entries>
</connection-factory>

The element discovery-group-ref specifies the name of a discovery group defined in hornetq-configuration.xml.

When this connection factory is downloaded from JNDI by a client application and JMS connections are created from it, those connections will be load-balanced across the list of servers that the discovery group maintains by listening on the multicast address specified in the discovery group configuration.
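
For reference, a client might download the factory from JNDI and create connections roughly as follows. This is only a sketch: the JNDI environment properties (shown here for the JBoss naming provider) and the "/ConnectionFactory" lookup name are assumptions that depend on how your naming service and hornetq-jms.xml entries are set up.

Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
env.put(Context.PROVIDER_URL, "jnp://myhost:1099");

InitialContext initialContext = new InitialContext(env);

// the lookup name corresponds to the <entry name="ConnectionFactory"/> element above
ConnectionFactory connectionFactory = 
        (ConnectionFactory)initialContext.lookup("/ConnectionFactory");

// each connection is created against the list of servers the discovery group
// is currently aware of, according to the connection load balancing policy
Connection jmsConnection1 = connectionFactory.createConnection();

Connection jmsConnection2 = connectionFactory.createConnection();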

If you're using JMS, but you're not using JNDI to look up a connection factory - instead you're instantiating the JMS connection factory directly - then you can specify the discovery group parameters directly when creating the JMS connection factory. Here's an example:

final String groupAddress = "231.7.7.7";

final int groupPort = 9876;

ConnectionFactory jmsConnectionFactory = 
        HornetQJMSClient.createConnectionFactory(groupAddress, groupPort);

Connection jmsConnection1 = jmsConnectionFactory.createConnection();

Connection jmsConnection2 = jmsConnectionFactory.createConnection();

The refresh-timeout can be set directly on the connection factory by using the setter method setDiscoveryRefreshTimeout() if you want to change the default value.

There is also a further parameter settable on the connection factory using the setter method setInitialWaitTimeout(). If the connection factory is used immediately after creation then it may not have had enough time to receive broadcasts from all the nodes in the cluster. On first usage, the connection factory will make sure it waits this long since creation before creating the first connection. The default value for this parameter is 2000 milliseconds.
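
Putting these together, here's a minimal sketch based on the factory created above. The cast to HornetQConnectionFactory and the timeout values shown are assumptions for illustration; the setter names are as described above.

HornetQConnectionFactory hornetQConnectionFactory = 
        (HornetQConnectionFactory)HornetQJMSClient.createConnectionFactory(groupAddress, groupPort);

// wait up to 15 seconds of silence before removing a server from the list
hornetQConnectionFactory.setDiscoveryRefreshTimeout(15000);

// on first use, wait up to 5 seconds for broadcasts from the cluster to arrive
hornetQConnectionFactory.setInitialWaitTimeout(5000);

Connection jmsConnection = hornetQConnectionFactory.createConnection();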

38.2.4.2. Configuring client discovery using Core

If you're using the core API to directly instantiate ClientSessionFactory instances, then you can specify the discovery group parameters directly when creating the session factory. Here's an example:

final String groupAddress = "231.7.7.7";

final int groupPort = 9876;

ClientSessionFactory factory = 
        HornetQClient.createClientSessionFactory(groupAddress, groupPort);

ClientSession session1 = factory.createClientSession(...);

ClientSession session2 = factory.createClientSession(...);

The refresh-timeout can be set directly on the session factory by using the setter method setDiscoveryRefreshTimeout() if you want to change the default value.

There is also a further parameter settable on the session factory using the setter method setInitialWaitTimeout(). If the session factory is used immediately after creation then it may not have had enough time to receive broadcasts from all the nodes in the cluster. On first usage, the session factory will make sure it waits this long since creation before creating the first session. The default value for this parameter is 2000 milliseconds.
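
As a sketch, assuming the factory was created with discovery as in the previous example, the two timeouts could be adjusted like this (the values shown are arbitrary):

ClientSessionFactory factory = HornetQClient.createClientSessionFactory(groupAddress, groupPort);

// wait up to 15 seconds of silence before removing a server from the list
factory.setDiscoveryRefreshTimeout(15000);

// on first use, wait up to 5 seconds for broadcasts from the cluster to arrive
factory.setInitialWaitTimeout(5000);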

38.3. Server-Side Message Load Balancing

If cluster connections are defined between nodes of a cluster, then HornetQ will load balance messages arriving at a particular node from a client.

Let's take a simple example of a cluster of four nodes A, B, C, and D arranged in a symmetric cluster (described in Section 38.7.1, “Symmetric cluster”). We have a queue called OrderQueue deployed on each node of the cluster.

We have a client Ca connected to node A, sending orders to the server. We also have order processor clients Pa, Pb, Pc, and Pd connected to each of the nodes A, B, C, D. If no cluster connection was defined on node A, then as order messages arrive on node A they will all end up in the OrderQueue on node A, so will only get consumed by the order processor client attached to node A, Pa.

If we define a cluster connection on node A, then as order messages arrive on node A, instead of all of them going into the local OrderQueue instance, they are distributed in a round-robin fashion between all the nodes of the cluster. The messages are forwarded from the receiving node to the other nodes of the cluster. This is all done on the server side; the client maintains a single connection to node A.

For example, messages arriving on node A might be distributed in the following order between the nodes: B, D, C, A, B, D, C, A, B, D. The exact order depends on the order the nodes started up, but the algorithm used is round robin.
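
The round-robin choice can be pictured with a small sketch. This is purely illustrative of the algorithm, not HornetQ's internal code; starting from node B it reproduces the ordering above:

// cycle through the candidate nodes, one message at a time
java.util.List<String> nodes = java.util.Arrays.asList("B", "D", "C", "A");
int next = 0;

for (int i = 0; i < 10; i++)
{
   String target = nodes.get(next);
   System.out.println("message " + i + " -> node " + target);
   next = (next + 1) % nodes.size(); // wrap back to the start of the list
}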

HornetQ cluster connections can be configured to always blindly load balance messages in a round-robin fashion irrespective of whether there are any matching consumers on other nodes, but they can also be configured to be a bit cleverer and only distribute to other nodes which have matching consumers. We'll look at both these cases in turn with some examples, but first we'll discuss configuring cluster connections in general.

38.3.1. Configuring Cluster Connections

Cluster connections group servers into clusters so that messages can be load balanced between the nodes of the cluster. Let's take a look at a typical cluster connection. Cluster connections are always defined in hornetq-configuration.xml inside a cluster-connection element. There can be zero or more cluster connections defined per HornetQ server.

<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <retry-interval>500</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <forward-when-no-consumers>false</forward-when-no-consumers>
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="my-discovery-group"/>
    </cluster-connection>
</cluster-connections>

In the above cluster connection all parameters have been explicitly specified. In practice you might use the defaults for some.

  • address. Each cluster connection only applies to messages sent to an address that starts with this value.

    In this case, this cluster connection will load balance messages sent to addresses that start with jms. This cluster connection will, in effect, apply to all JMS queues and topic subscriptions, since they map to core queues whose addresses start with the substring "jms".

    The address can be any value and you can have many cluster connections with different values of address, simultaneously balancing messages for those addresses, potentially to different clusters of servers. By having multiple cluster connections on different addresses a single HornetQ Server can effectively take part in multiple clusters simultaneously.

    Be careful not to have multiple cluster connections with overlapping values of address, e.g. "europe" and "europe.news", since this could result in the same messages being distributed between more than one cluster connection, possibly resulting in duplicate deliveries.

    This parameter is mandatory.

  • retry-interval. We mentioned before that, internally, cluster connections cause bridges to be created between the nodes of the cluster. If the cluster connection is created and the target node has not been started, or say, is being rebooted, then the cluster connections from other nodes will retry connecting to the target until it comes back up, in the same way as a bridge does.

    This parameter determines the interval in milliseconds between retry attempts. It has the same meaning as the retry-interval on a bridge (as described in Chapter 36, Core Bridges).

    This parameter is optional and its default value is 500 milliseconds.

  • use-duplicate-detection. Internally cluster connections use bridges to link the nodes, and bridges can be configured to add a duplicate id property in each message that is forwarded. If the target node of the bridge crashes and then recovers, messages might be resent from the source node. By enabling duplicate detection any duplicate messages will be filtered out and ignored on receipt at the target node.

    This parameter has the same meaning as use-duplicate-detection on a bridge. For more information on duplicate detection, please see Chapter 37, Duplicate Message Detection.

    This parameter is optional and has a default value of true.

  • forward-when-no-consumers. This parameter determines whether messages will be distributed round robin between other nodes of the cluster irrespective of whether there are matching or indeed any consumers on other nodes.

    If this is set to true then each incoming message will be round robin'd even though the same queues on the other nodes of the cluster may have no consumers at all, or they may have consumers that have non matching message filters (selectors). Note that HornetQ will not forward messages to other nodes if there are no queues of the same name on the other nodes, even if this parameter is set to true.

    If this is set to false then HornetQ will only forward messages to other nodes of the cluster if the address to which they are being forwarded has queues which have consumers, and if those consumers have message filters (selectors) at least one of those selectors must match the message.

    This parameter is optional, the default value is false.

  • max-hops. When a cluster connection decides the set of nodes to which it might load balance a message, those nodes do not have to be directly connected to it via a cluster connection. HornetQ can be configured to also load balance messages to nodes which might be connected to it only indirectly with other HornetQ servers as intermediates in a chain.

    This allows HornetQ to be configured in more complex topologies and still provide message load balancing. We'll discuss this more later in this chapter.

    The default value for this parameter is 1, which means messages are only load balanced to other HornetQ servers which are directly connected to this server. This parameter is optional.

  • discovery-group-ref. This parameter determines which discovery group is used to obtain the list of other servers in the cluster that this cluster connection will make connections to.

38.4. Client-Side Load balancing

With HornetQ client-side load balancing, subsequent sessions created using a single session factory can be connected to different nodes of the cluster. This allows sessions to spread smoothly across the nodes of a cluster and not be "clumped" on any particular node.

The load balancing policy to be used by the client factory is configurable. HornetQ provides two out-of-the-box load balancing policies and you can also implement your own and use that.

The out-of-the-box policies are

  • Round Robin. With this policy the first node is chosen randomly then each subsequent node is chosen sequentially in the same order.

    For example nodes might be chosen in the order B, C, D, A, B, C, D, A, B or D, A, B, C, A, B, C, D, A or C, D, A, B, C, D, A, B, C, D, A.

  • Random. With this policy each node is chosen randomly.

You can also implement your own policy by implementing the interface org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy.
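
For example, a trivial custom policy might always pick the first server in the list. The sketch below assumes the interface exposes a single select(int max) method returning an index into the current server list; check the interface in your HornetQ version before relying on this signature. The class name matches the com.acme.MyLoadBalancingPolicy used in the examples further below.

package com.acme;

import org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy;

public class MyLoadBalancingPolicy implements ConnectionLoadBalancingPolicy
{
   // max is the number of servers currently known to the factory;
   // the return value is the index of the server to use for the next connection
   public int select(final int max)
   {
      return 0; // always connect to the first server in the list
   }
}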

Specifying which load balancing policy to use differs depending on whether you are using JMS or the core API. If you don't specify a policy then the default will be used, which is org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy.

If you're using JMS, and you're using JNDI on the server to put your JMS connection factories into JNDI, then you can specify the load balancing policy directly in the hornetq-jms.xml configuration file on the server as follows:

<connection-factory name="ConnectionFactory">
    <discovery-group-ref discovery-group-name="my-discovery-group"/>
    <entries>
        <entry name="ConnectionFactory"/>
    </entries>
    <connection-load-balancing-policy-class-name>
    org.hornetq.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy
    </connection-load-balancing-policy-class-name>
</connection-factory>

The above example would deploy a JMS connection factory that uses the random connection load balancing policy.

If you're using JMS but you're instantiating your connection factory directly on the client side then you can set the load balancing policy using the setter on the HornetQConnectionFactory before using it:

ConnectionFactory jmsConnectionFactory = HornetQJMSClient.createConnectionFactory(...);
jmsConnectionFactory.setLoadBalancingPolicyClassName("com.acme.MyLoadBalancingPolicy");
        

If you're using the core API, you can set the load balancing policy directly on the ClientSessionFactory instance you are using:

ClientSessionFactory factory = HornetQClient.createClientSessionFactory(...);
factory.setLoadBalancingPolicyClassName("com.acme.MyLoadBalancingPolicy");
            

The set of servers over which the factory load balances can be determined in one of two ways:

  • Specifying servers explicitly

  • Using discovery.

38.5. Specifying Members of a Cluster Explicitly

Sometimes UDP is not enabled on a network so it's not possible to use UDP server discovery for clients to discover the list of servers in the cluster, or for servers to discover what other servers are in the cluster.

In this case, the list of servers in the cluster can be specified explicitly on each node and on the client side. Let's look at how we do this:

38.5.1. Specify List of Servers on the Client Side

This differs depending on whether you're using JMS or the Core API.

38.5.1.1. Specifying List of Servers using JMS

If you're using JMS, and you're using the JMS Service to load your JMS connection factory instances directly into JNDI on the server, then you can specify the list of servers in the server side configuration file hornetq-jms.xml. Let's take a look at an example:

<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="my-connector1" 
           backup-connector-name="my-backup-connector1"/>
      <connector-ref connector-name="my-connector2" 
           backup-connector-name="my-backup-connector2"/>
      <connector-ref connector-name="my-connector3" 
           backup-connector-name="my-backup-connector3"/>
   </connectors>
   <entries>
      <entry name="ConnectionFactory"/>
   </entries>
</connection-factory>

The connection-factory element can contain zero or more connector-ref elements, each one of which specifies a connector-name attribute and an optional backup-connector-name attribute. The connector-name attribute references a connector defined in hornetq-configuration.xml which will be used as a live connector. The backup-connector-name is optional, and if specified it also references a connector defined in hornetq-configuration.xml. For more information on connectors please see Chapter 16, Configuring the Transport.

The connection factory thus maintains a list of [connector, backup connector] pairs; these pairs are then used by the client connection load balancing policy when creating connections to the cluster.

If you're using JMS but you're not using JNDI then you can also specify the list of [connector, backup connector] pairs directly when instantiating the HornetQConnectionFactory. Here's an example:

List<Pair<TransportConfiguration, TransportConfiguration>> serverList = 
        new ArrayList<Pair<TransportConfiguration, TransportConfiguration>>();

serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC0, backupTC0));
serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC1, backupTC1));
serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC2, backupTC2));

ConnectionFactory jmsConnectionFactory = HornetQJMSClient.createConnectionFactory(serverList);

Connection jmsConnection1 = jmsConnectionFactory.createConnection();

Connection jmsConnection2 = jmsConnectionFactory.createConnection();

In the above snippet we create a list of pairs of TransportConfiguration objects. Each TransportConfiguration object contains knowledge of how to make a connection to a specific server.

A JMS connection factory is then created, passing in the list of servers. Any connections subsequently created by this factory will be created according to the client connection load balancing policy applied to that list of servers.
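
For completeness, here's a sketch of how the liveTC0 and backupTC0 variables used above might be constructed. The Netty connector factory class name and the host/port parameter keys are those used by the Netty transport (see Chapter 16, Configuring the Transport); treat the host names and port shown here as assumptions for your own setup.

Map<String, Object> liveParams = new HashMap<String, Object>();
liveParams.put("host", "myhost0");
liveParams.put("port", 5445);

TransportConfiguration liveTC0 = new TransportConfiguration(
        "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory", liveParams);

Map<String, Object> backupParams = new HashMap<String, Object>();
backupParams.put("host", "mybackuphost0");
backupParams.put("port", 5445);

TransportConfiguration backupTC0 = new TransportConfiguration(
        "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory", backupParams);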

38.5.1.2. Specifying List of Servers using the Core API

If you're using the core API you can also specify the list of servers directly when creating the ClientSessionFactory instance. Here's an example:

List<Pair<TransportConfiguration, TransportConfiguration>> serverList = 
        new ArrayList<Pair<TransportConfiguration, TransportConfiguration>>();

serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC0, backupTC0));
serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC1, backupTC1));
serverList.add(new Pair<TransportConfiguration, 
        TransportConfiguration>(liveTC2, backupTC2));

ClientSessionFactory factory = HornetQClient.createClientSessionFactory(serverList);

ClientSession session1 = factory.createClientSession(...);

ClientSession session2 = factory.createClientSession(...);

In the above snippet we create a list of pairs of TransportConfiguration objects. Each TransportConfiguration object contains knowledge of how to make a connection to a specific server. For more information on this, please see Chapter 16, Configuring the Transport.

A ClientSessionFactory instance is then created, passing in the list of servers. Any sessions subsequently created by this factory will be created according to the client connection load balancing policy applied to that list of servers.

38.5.2. Specifying List of Servers to form a Cluster

Let's take a look at an example where each cluster connection is defined for a symmetric cluster, but we're not using discovery for each node to discover its neighbours; instead we'll configure each cluster connection with explicit knowledge of all the other nodes in the cluster.

Here's an example cluster connection definition showing that:

<cluster-connections>
    <cluster-connection name="my-explicit-cluster">
        <address>jms</address>
        <connector-ref connector-name="my-connector1" 
            backup-connector-name="my-backup-connector1"/>
        <connector-ref connector-name="my-connector2" 
            backup-connector-name="my-backup-connector2"/>
        <connector-ref connector-name="my-connector3" 
            backup-connector-name="my-backup-connector3"/>
    </cluster-connection>
</cluster-connections>

The cluster-connection element can contain zero or more connector-ref elements, each one of which specifies a connector-name attribute and an optional backup-connector-name attribute. The connector-name attribute references a connector defined in hornetq-configuration.xml which will be used as a live connector. The backup-connector-name is optional, and if specified it also references a connector defined in hornetq-configuration.xml. For more information on connectors please see Chapter 16, Configuring the Transport.

Note

Due to a limitation in HornetQ 2.0.0, failover is not supported for clusters defined using a static set of nodes. To support failover over cluster nodes, they must be configured to use a discovery group.

38.6. Message Redistribution

Another important part of clustering is message redistribution. Earlier we learned how server-side message load balancing round robins messages across the cluster. If forward-when-no-consumers is false, then messages won't be forwarded to nodes which don't have matching consumers. This ensures that messages don't arrive on a queue which has no consumers to consume them. However, there is a situation it doesn't solve: what happens if the consumers on a queue close after the messages have been sent to the node? If there are no consumers on the queue the messages won't get consumed and we have a starvation situation.

This is where message redistribution comes in. With message redistribution HornetQ can be configured to automatically redistribute messages from queues which have no consumers back to other nodes in the cluster which do have matching consumers.

Message redistribution can be configured to kick in immediately after the last consumer on a queue is closed, or to wait a configurable delay after the last consumer on a queue is closed before redistributing. By default message redistribution is disabled.

Message redistribution can be configured on a per address basis, by specifying the redistribution delay in the address settings, for more information on configuring address settings, please see Chapter 25, Queue Attributes.

Here's an address settings snippet from hornetq-configuration.xml showing how message redistribution is enabled for a set of queues:

<address-settings>
   <address-setting match="jms.#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>

The above address-settings block would set a redistribution-delay of 0 for any queue which is bound to an address that starts with "jms.". All JMS queues and topic subscriptions are bound to addresses that start with "jms.", so the above would enable instant (no delay) redistribution for all JMS queues and topic subscriptions.

The attribute match can be an exact match or it can be a string that conforms to the HornetQ wildcard syntax (described in Chapter 13, Understanding the HornetQ Wildcard Syntax).

The element redistribution-delay defines the delay in milliseconds after the last consumer is closed on a queue before redistributing messages from that queue to other nodes of the cluster which do have matching consumers. A delay of zero means the messages will be immediately redistributed. A value of -1 signifies that messages will never be redistributed. The default value is -1.

It often makes sense to introduce a delay before redistributing, as it's a common case that a consumer closes but another one is quickly created on the same queue; in such a case you probably don't want to redistribute immediately since the new consumer will arrive shortly.

38.7. Cluster topologies

HornetQ clusters can be connected together in many different topologies; let's consider the two most common ones here.

38.7.1. Symmetric cluster

A symmetric cluster is probably the most common cluster topology, and you'll be familiar with it if you've had experience of JBoss Application Server clustering.

With a symmetric cluster every node in the cluster is connected to every other node in the cluster. In other words every node in the cluster is no more than one hop away from every other node.

To form a symmetric cluster every node in the cluster defines a cluster connection with the attribute max-hops set to 1. Typically the cluster connection will use server discovery in order to know what other servers in the cluster it should connect to, although it is possible to explicitly define each target server too in the cluster connection if, for example, UDP is not available on your network.

With a symmetric cluster each node knows about all the queues that exist on all the other nodes and what consumers they have. With this knowledge it can determine how to load balance and redistribute messages around the nodes.

38.7.2. Chain cluster

With a chain cluster, each node in the cluster is not directly connected to every other node in the cluster; instead the nodes form a chain, with a node on each end of the chain and all other nodes just connecting to the previous and next nodes in the chain.

An example of this would be a three node chain consisting of nodes A, B and C. Node A is hosted in one network and has many producer clients connected to it sending order messages. Due to corporate policy, the order consumer clients need to be hosted in a different network, and that network is only accessible via a third network. In this setup node B acts as a mediator with no producers or consumers on it. Any messages arriving on node A will be forwarded to node B, which will in turn forward them to node C where they can get consumed. Node A does not need to directly connect to C, but all the nodes can still act as a part of the cluster.

To set up a cluster in this way, node A would define a cluster connection that connects to node B, and node B would define a cluster connection that connects to node C. In this case we only want cluster connections in one direction since we're only moving messages from node A->B->C and never from C->B->A.

For this topology we would set max-hops to 2. With a value of 2, knowledge of what queues and consumers exist on node C is propagated from node C to node B to node A. Node A would then know to distribute messages to node B when they arrive; even though node B has no consumers itself, it knows that a further hop away is node C which does have consumers.