
Apache ActiveMQ is a high-performing message broker that supports a number of clustering methods. Among these, Master/Slave is a pattern in which the persistence layer is shared between multiple broker instances. A single Master broker holds the lock on the shared persistence store, while the remaining Slave brokers wait to acquire that lock. If the Master node goes down, the lock is released and a Slave quickly acquires it, allowing clients to continue operations without any data loss. Clients should connect to a Master/Slave setup using the failover: transport, or implement a manual failover mechanism, so that they automatically connect to the next available broker when the current one goes down.

For example, a client's jndi.properties for a three-broker setup could look as follows:

connectionFactoryNames=TopicConnectionFactory
java.naming.provider.url=failover:(tcp://localhost:61617,tcp://localhost:61618,tcp://localhost:61619)?initialReconnectDelay=100
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
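The failover: URL in java.naming.provider.url lists every broker in the cluster; the client library tries the members in turn and reconnects to another one when the current broker dies. As a rough illustration of how such a URL decomposes, here is a small parser; the helper name is hypothetical and this is not part of the ActiveMQ client:

```python
from urllib.parse import parse_qsl

def parse_failover_uri(uri):
    """Split a failover: URI into its member broker URIs and transport options."""
    assert uri.startswith("failover:")
    body = uri[len("failover:"):]
    # Transport options, if any, follow the closing parenthesis: failover:(...)?k=v&k=v
    inner, _, query = body.partition(")?")
    brokers = inner.lstrip("(").rstrip(")").split(",")
    options = dict(parse_qsl(query))
    return brokers, options

brokers, options = parse_failover_uri(
    "failover:(tcp://localhost:61617,tcp://localhost:61618,tcp://localhost:61619)"
    "?initialReconnectDelay=100")
# brokers -> the three tcp:// URIs; options -> {"initialReconnectDelay": "100"}
```

Options such as initialReconnectDelay apply to the failover transport itself, not to the individual tcp:// members.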

Setting up a Master/Slave

The following instructions show you how to set up two broker instances on the same machine. The two instances open different ports for each protocol, so that there are no conflicts. The instances use the embedded, flat-file-based KahaDB as the persistence layer, and both share the same KahaDB instance.

Step 1 - Create two broker instances

  1. Download ActiveMQ and unzip the distribution into two separate locations, one for each instance.
  2. Offset the port values in the second ActiveMQ distribution so that the two instances do not conflict on the ports used by the different protocol connectors.
    Change the ports in the <ACTIVEMQ_HOME>/conf/activemq.xml and <ACTIVEMQ_HOME>/conf/jetty.xml files of the second instance.

    activemq.xml
    <transportConnectors>
        <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61626?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="amqp" uri="amqp://0.0.0.0:5682?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="stomp" uri="stomp://0.0.0.0:61623?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1893?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="ws" uri="ws://0.0.0.0:61624?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>
    jetty.xml
    <bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
      <!-- the default port number for the web console -->
      <property name="host" value="0.0.0.0"/>
      <property name="port" value="8171"/>
    </bean>
  3. Point the KahaDB persistence to the same location in both instances.
    This ensures that only one instance at a time can acquire the lock on the DB, and that when the lock is released, the other instance can acquire it from the same location.
    Modify the persistenceAdapter tag inside the <ACTIVEMQ_HOME>/conf/activemq.xml file of both instances as follows.

    <persistenceAdapter>
        <kahaDB directory="/tmp/mq/kahadb"/>
    </persistenceAdapter>
  4. Connect the instances by making the networkConnector in each instance point to the other.
    1. Add the following block into the <ACTIVEMQ_HOME>/conf/activemq.xml file of the Master, after the persistenceAdapter block, pointing to the Slave's OpenWire port.

      <networkConnectors>
          <networkConnector uri="static:(tcp://localhost:61626)" />
      </networkConnectors>

      Port  61626  is the OpenWire port in the Slave instance.

    2. Add the following block into the <ACTIVEMQ_HOME>/conf/activemq.xml file of the Slave, after the persistenceAdapter block, pointing to the Master's OpenWire port.

      <networkConnectors>
          <networkConnector uri="static:(tcp://localhost:61616)" />
      </networkConnectors>

      Apache ActiveMQ uses the static: discovery agent to point to a fixed list of known broker instances.
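The mutual exclusion that step 3 relies on comes from an OS-level exclusive lock the broker takes on a file inside the shared KahaDB directory (the SharedFileLocker seen later in the Slave's logs). A simplified sketch of that behaviour with a plain flock, in Python rather than the broker's Java, assuming a Unix host:

```python
import fcntl
import os
import tempfile

# Stand-in for <kahadb-dir>/lock; the real broker locks a file much like this.
lock_path = os.path.join(tempfile.mkdtemp(), "lock")

# "Master": opens the lock file and takes an exclusive lock on it.
master = open(lock_path, "w")
fcntl.flock(master, fcntl.LOCK_EX)

# "Slave": a second open file description on the same file cannot get the lock.
slave = open(lock_path, "w")
try:
    fcntl.flock(slave, fcntl.LOCK_EX | fcntl.LOCK_NB)
    slave_got_lock = True
except BlockingIOError:
    slave_got_lock = False  # the slave keeps waiting, as in the broker logs

# Once the master's descriptor is closed (e.g. its process dies), the lock
# is released and the slave's next attempt succeeds.
master.close()
fcntl.flock(slave, fcntl.LOCK_EX | fcntl.LOCK_NB)
slave_took_over = True
```

This is why pointing both instances at the same KahaDB directory is what makes the pair a Master/Slave cluster rather than two independent brokers.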

Step 2 - Start the instances

  1. Start the Master broker as follows:

    cd <ACTIVEMQ_HOME>/bin
    ./activemq start
    # tail the logs just for the fun of it
    tail -100f ../data/activemq.log

    When observing the logs, you will see entries similar to the following appearing repeatedly. They appear because the Master broker keeps trying to establish the network bridge to the Slave broker, which has not been started yet.

    2015-11-17 19:10:19,731 | INFO  | Establishing network connection from vm://localhost?async=false&network=true to tcp://localhost:61626 | org.apache.activemq.network.DiscoveryNetworkConnector | ActiveMQ Task-61
    2015-11-17 19:10:19,733 | INFO  | Connector vm://localhost started | org.apache.activemq.broker.TransportConnector | ActiveMQ Task-61
    2015-11-17 19:10:19,736 | INFO  | localhost Shutting down | org.apache.activemq.network.DemandForwardingBridgeSupport | ActiveMQ BrokerService[localhost] Task-134
    2015-11-17 19:10:19,738 | INFO  | localhost bridge to Unknown stopped | org.apache.activemq.network.DemandForwardingBridgeSupport | ActiveMQ BrokerService[localhost] Task-134
    2015-11-17 19:10:19,739 | INFO  | Connector vm://localhost stopped | org.apache.activemq.broker.TransportConnector | ActiveMQ Task-61
    2015-11-17 19:10:19,741 | WARN  | Could not start network bridge between: vm://localhost?async=false&network=true and: tcp://localhost:61626 due to: Connection refused | org.apache.activemq.network.DiscoveryNetworkConnector | ActiveMQ Task-61
  2. Start the Slave broker and tail the logs.
    You will see a different set of logs appearing.

    2015-11-17 18:34:37,359 | INFO  | Database /tmp/mq/kahadb/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File '/tmp/mq/kahadb/lock' could not be locked. | org.apache.activemq.store.SharedFileLocker | main
    

    The above log entry appears because the Master broker has already acquired the lock on the shared DB. The Slave broker will not start until it is able to acquire the lock. If you check which ports are open using the netstat command, you will see that only the Master broker is up and running and ready to accept requests.

    netstat -tunelp | grep 616
    
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    tcp6       0      0 :::61613                :::*                    LISTEN      1000       67401       5619/java       
    tcp6       0      0 :::61614                :::*                    LISTEN      1000       67404       5619/java       
    tcp6       0      0 :::61616                :::*                    LISTEN      1000       67399       5619/java   

    Now, if you connect to the broker setup using the failover: transport, you will see that the client connects to the Master broker.

    1. Create a queue and publish an event to the queue without consuming it. 

    2. Now stop the Master broker. 

    3. You will see the Slave broker acquiring the lock to the DB and becoming ready to accept requests. 

    4. Start a consumer with the failover transport, and observe it connecting to the Slave broker and retrieving the event that was originally published via the Master.
      You will see that there was no data loss, and that the service was unavailable only for the few moments the Slave took to become active after acquiring the DB lock.
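The handover exercised in the steps above can be sketched in miniature: a "slave" blocks on the same file lock and takes over the moment the "master" releases it. This is an illustrative Python sketch, with threads standing in for broker processes, not ActiveMQ code:

```python
import fcntl
import os
import tempfile
import threading
import time

lock_path = os.path.join(tempfile.mkdtemp(), "lock")
takeover = []  # records the moment the "slave" finally gets the lock

def slave_broker():
    # Opens its own descriptor and blocks until the lock becomes free.
    with open(lock_path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # parked here while the master lives
        takeover.append(time.monotonic())

master = open(lock_path, "w")
fcntl.flock(master, fcntl.LOCK_EX)      # the Master holds the KahaDB lock

t = threading.Thread(target=slave_broker)
t.start()
time.sleep(0.3)                         # the slave sits waiting on the lock

master.close()                          # "stop the Master broker"
t.join(timeout=5)                       # the slave wakes up and takes over
```

The brief gap between master.close() and the slave's wake-up corresponds to the short unresponsive window the client sees during a real failover.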

 
