Welcome to the OStack Knowledge Sharing Community for programmers and developers - Open, Learning and Share
Welcome To Ask or Share your Answers For Others


high availability - Artemis load balancing via queue federation plus HA replication policy

I want to build an ActiveMQ Artemis system that can communicate with a server in another region while providing load balancing and high availability at the same time.

Therefore, I have two virtual machines (172.16.212.32 & 172.16.212.33). Each virtual machine runs a pair of replication servers for high availability, and I've set up federation on each server to achieve load balancing. I know that federation has its own ha option, but I think a local replication pair will give better failover behavior.
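For context, federation's own HA support can be enabled on the upstream connection itself. A minimal sketch of what that would look like (the upstream name `upstream33-ha` is made up; the connector names are the ones already defined in the configuration; see the Artemis federation documentation for the exact semantics of `ha`):

```xml
<!-- Hypothetical alternative: let the federation link itself fail over.
     With <ha>true</ha> and both remote connectors listed, the federation
     connection should move from the remote master to its slave on failure. -->
<upstream name="upstream33-ha">
   <ha>true</ha>
   <circuit-breaker-timeout>1000</circuit-breaker-timeout>
   <static-connectors>
      <connector-ref>master-connector33</connector-ref>
      <connector-ref>slave-connector33</connector-ref>
   </static-connectors>
   <policy ref="policySetA"/>
</upstream>
```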

The configuration is shown below.

Master (172.16.212.20) in VM (172.16.212.32):

      <connectors>
         <connector name="netty-connector">tcp://172.16.212.20:61616</connector>
         <connector name="master-connector33">tcp://172.16.212.22:61616</connector>
         <connector name="slave-connector33">tcp://172.16.212.23:61616</connector>
      </connectors>

      <broadcast-groups>
         <broadcast-group name="bg-group32">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty-connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group32">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-user>root</cluster-user>
      <cluster-password>syscom#1</cluster-password>

      <cluster-connections>
         <cluster-connection name="cluster32">
            <connector-ref>netty-connector</connector-ref>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group32"/>
         </cluster-connection>
      </cluster-connections>
      
      <ha-policy>
         <replication>
            <master>
                <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>

      <federations>
         <federation name="master-federation32">
            <upstream name="master-upstream33">
               <circuit-breaker-timeout>1000</circuit-breaker-timeout>
               <static-connectors>
                  <connector-ref>master-connector33</connector-ref>
               </static-connectors>
               <policy ref="policySetA"/>
            </upstream>
            <upstream name="slave-upstream33">
               <circuit-breaker-timeout>1000</circuit-breaker-timeout>
               <static-connectors>
                  <connector-ref>slave-connector33</connector-ref>
               </static-connectors>
               <policy ref="policySetA"/>
            </upstream>

            <policy-set name="policySetA">
               <policy ref="queue-federation" />
            </policy-set>
            <queue-policy name="queue-federation">
               <include queue-match="#" address-match="#" />
            </queue-policy>
         </federation>
      </federations>

Slave (172.16.212.21) in VM (172.16.212.32):

      <connectors>
         <connector name="netty-connector">tcp://172.16.212.21:61616</connector>
         <connector name="master-connector33">tcp://172.16.212.22:61616</connector>
         <connector name="slave-connector33">tcp://172.16.212.23:61616</connector>
      </connectors>

      <broadcast-groups>
         <broadcast-group name="bg-group32">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty-connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group32">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-user>root</cluster-user>
      <cluster-password>syscom#1</cluster-password>

      <cluster-connections>
         <cluster-connection name="cluster32">
            <connector-ref>netty-connector</connector-ref>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group32"/>
         </cluster-connection>
      </cluster-connections>
      
      <ha-policy>
         <replication>
            <slave>
                <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <federations>
         <federation name="slave-federation32">
            <upstream name="master-upstream33" priority-adjustment="0">
               <circuit-breaker-timeout>1000</circuit-breaker-timeout>
               <static-connectors>
                  <connector-ref>master-connector33</connector-ref>
               </static-connectors>
               <policy ref="policySetA"/>
            </upstream>
            <upstream name="slave-upstream33" priority-adjustment="0">
               <circuit-breaker-timeout>1000</circuit-breaker-timeout>
               <static-connectors>
                  <connector-ref>slave-connector33</connector-ref>
               </static-connectors>
               <policy ref="policySetA"/>
            </upstream>

            <policy-set name="policySetA">
               <policy ref="queue-federation" />
            </policy-set>
            <queue-policy name="queue-federation">
               <include queue-match="#" address-match="#" />
            </queue-policy>
         </federation>
      </federations>

172.16.212.22 and 172.16.212.23 are the master and slave servers in VM 172.16.212.33.

With this setup the master servers in the two VMs can communicate with each other, and the slave servers announce their backups successfully, but load balancing does not work.

Does the idea of federation plus an HA replication policy simply not work? I would appreciate any advice you may have.
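One thing worth double-checking (this is a guess at a contributing factor, not a confirmed diagnosis): with `ON_DEMAND` message load balancing, messages are only routed to nodes that have a matching consumer, and messages that have already landed in a local queue are only redistributed if `redistribution-delay` is set to a non-negative value in the address settings; the default of `-1` disables redistribution entirely:

```xml
<!-- Assumed addition to broker.xml: enable redistribution so queued
     messages can move to a node that gains a consumer.
     -1 (default) = never redistribute; 0 = redistribute immediately. -->
<address-settings>
   <address-setting match="#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```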

question from:https://stackoverflow.com/questions/65884527/artemis-load-balancing-via-queue-federation-plus-ha-replication-policy


1 Answer

Waiting for answers
