Connectivity issue with ActiveMQ Artemis HA backup server - VirtualBox
I'm having trouble connecting to the backup server in an HA pair: just a primary and a backup. I'm testing with stomp.py and JMS clients.
I'm using VirtualBox and a simulated local network to experiment with the configuration before setting up HA on our cloud servers. I can connect to the primary by IP like so:
stomp -H 192.168.56.105 -P 61616 -U test -W password
But after shutting down the primary, I can't connect to the backup, and I'm not sure whether it's actually activating or not.
stomp -H 192.168.56.106 -P 61616 -U test -W password
This gives:
No connection could be made because the target machine actively refused it
Could not connect to host 192.168.56.106, port 61616
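For reference, the stomp.py equivalent of what I'm testing, giving the client both brokers so it can fail over on its own, would look something like this (a minimal sketch rather than my exact test script; the hosts and credentials are the same test values as above):

import stomp

# Both brokers in the host list so the client can try the backup
# when the primary is down (minimal sketch, test credentials).
conn = stomp.Connection(
    host_and_ports=[('192.168.56.105', 61616), ('192.168.56.106', 61616)]
)
conn.connect('test', 'password', wait=True)
conn.send(destination='/queue/test', body='hello')
conn.disconnect()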
Also, I've based my setup on the ha > replicated-failback example that ships with Artemis, and I've read the documentation for clusters and HA several times trying to figure out what I'm missing.
Here is how I've been trying to confirm UDP connectivity (a more direct multicast check is sketched after the scans). To me it looks like the acceptors are showing up, but the cluster isn't appearing on the network.
Primary:192.168.56.105$ sudo nmap -sU -p 61616 192.168.56.106
Starting Nmap 7.60 ( https://nmap.org ) at 2020-06-24 18:16 UTC
Nmap scan report for 192.168.56.106
Host is up (-0.15s latency).
PORT STATE SERVICE
61616/udp closed unknown
MAC Address: 08:00:27:7B:56:3F (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 0.48 seconds
Primary:192.168.56.105$ sudo nmap -sU -Pn -p 9876 231.7.7.7
Starting Nmap 7.60 ( https://nmap.org ) at 2020-06-24 19:25 UTC
Nmap done: 1 IP address (0 hosts up) scanned in 0.45 seconds
Backup:192.168.56.106$ sudo nmap -sU -p 61616 192.168.56.105
Starting Nmap 7.60 ( https://nmap.org ) at 2020-06-24 19:28 UTC
Nmap scan report for 192.168.56.105
Host is up (-0.15s latency).
PORT STATE SERVICE
61616/udp closed unknown
MAC Address: 08:00:27:0F:36:BE (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 0.49 seconds
Backup:192.168.56.106$ sudo nmap -sU -Pn -p 9876 231.7.7.7
Starting Nmap 7.60 ( https://nmap.org ) at 2020-06-24 19:29 UTC
Nmap done: 1 IP address (0 hosts up) scanned in 0.46 seconds
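Since nmap's UDP probes can't really see multicast traffic, a more direct check would be to listen on the broadcast group itself. A minimal Python sketch (231.7.7.7:9876 matches the broadcast-group in my broker.xml below; run it on one VM while the broker on the other is up):

import socket
import struct

# Join the Artemis broadcast group and print whatever arrives.
GROUP, PORT = '231.7.7.7', 9876

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', PORT))
mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(4096)
    print(addr, len(data), 'bytes received')

If nothing arrives while the primary is up and broadcasting, the multicast itself isn't crossing between the VMs.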
Any ideas? My configuration is below.
Primary (192.168.56.105) broker.xml:
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>2884000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <page-sync-timeout>2884000</page-sync-timeout>
      <acceptors>
         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
         <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic. -->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
         <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
      </acceptors>
      <connectors>
         <connector name="artemis">tcp://192.168.56.105:61616</connector>
      </connectors>
      <broadcast-groups>
         <broadcast-group name="broadcast-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <connector-ref>artemis</connector-ref>
         </broadcast-group>
      </broadcast-groups>
      <discovery-groups>
         <discovery-group name="discovery-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
         </discovery-group>
      </discovery-groups>
      <cluster-user>cluster.user</cluster-user>
      <cluster-password>password</cluster-password>
      <ha-policy>
         <replication>
            <master>
               <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>
      <cluster-connections>
         <cluster-connection name="cluster-1">
            <connector-ref>artemis</connector-ref>
            <discovery-group-ref discovery-group-name="discovery-group-1"/>
         </cluster-connection>
      </cluster-connections>
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>
      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!-- default for catch all -->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>
      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
      </addresses>
   </core>
</configuration>
Backup (192.168.56.106) broker.xml:
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>2868000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <page-sync-timeout>2868000</page-sync-timeout>
      <acceptors>
         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
         <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic. -->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
         <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
      </acceptors>
      <connectors>
         <connector name="artemis">tcp://192.168.56.106:61616</connector>
      </connectors>
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>
      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!-- default for catch all -->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>
      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
      </addresses>
      <broadcast-groups>
         <broadcast-group name="broadcast-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <connector-ref>artemis</connector-ref>
         </broadcast-group>
      </broadcast-groups>
      <discovery-groups>
         <discovery-group name="discovery-group-1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
         </discovery-group>
      </discovery-groups>
      <cluster-user>cluster.user</cluster-user>
      <cluster-password>password</cluster-password>
      <ha-policy>
         <replication>
            <slave>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>
      <cluster-connections>
         <cluster-connection name="cluster-1">
            <connector-ref>artemis</connector-ref>
            <discovery-group-ref discovery-group-name="discovery-group-1"/>
         </cluster-connection>
      </cluster-connections>
   </core>
</configuration>
Primary Server Logs:
2020-06-24 18:42:46,893 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2020-06-24 18:42:46,952 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2020-06-24 18:42:57,165 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2020-06-24 18:42:57,182 INFO [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 1,073,741,824
2020-06-24 18:42:57,267 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2020-06-24 18:42:57,268 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2020-06-24 18:42:57,273 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2020-06-24 18:42:57,273 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2020-06-24 18:42:57,274 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2020-06-24 18:42:57,276 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2020-06-24 18:42:57,810 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST]
2020-06-24 18:42:57,811 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ
2020-06-24 18:42:57,812 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST]
2020-06-24 18:42:57,812 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue
2020-06-24 18:42:58,358 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61616 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
2020-06-24 18:42:58,363 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5445 for protocols [HORNETQ,STOMP]
2020-06-24 18:42:58,373 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5672 for protocols [AMQP]
2020-06-24 18:42:58,387 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:1883 for protocols [MQTT]
2020-06-24 18:42:58,408 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61613 for protocols [STOMP]
2020-06-24 18:42:58,409 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2020-06-24 18:42:58,409 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0 [0.0.0.0, nodeID=4c8091c1-b230-11ea-99de-080027649461]
2020-06-24 18:42:59,001 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2020-06-24 18:42:59,110 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2020-06-24 18:42:59,757 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2020-06-24 18:42:59,842 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2020-06-24 18:42:59,843 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to hawtio 1.5.12 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2020-06-24 18:42:59,887 INFO [io.hawt.jmx.UploadManager] Using file upload directory: /var/lib/broker-1/tmp/uploads
2020-06-24 18:42:59,916 INFO [io.hawt.web.AuthenticationFilter] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2020-06-24 18:42:59,962 INFO [io.hawt.web.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/var/lib/broker-1/etc/jolokia-access.xml]
2020-06-24 18:42:59,999 INFO [io.hawt.web.RBACMBeanInvoker] Using MBean [hawtio:type=security,area=jmx,rank=0,name=HawtioDummyJMXSecurity] for role based access control
2020-06-24 18:43:00,195 INFO [io.hawt.system.ProxyWhitelist] Initial proxy whitelist: [localhost, 127.0.0.1, 192.168.56.105, 10.0.3.15]
2020-06-24 18:43:00,790 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://0.0.0.0:8161
2020-06-24 18:43:00,790 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://0.0.0.0:8161/console/jolokia
2020-06-24 18:43:00,791 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://0.0.0.0:8161/console
2020-06-24 19:05:30,551 INFO [io.hawt.HawtioContextListener] Destroying hawtio services
2020-06-24 19:05:30,562 INFO [io.hawt.web.AuthenticationFilter] Destroying hawtio authentication filter
2020-06-24 19:05:30,624 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Destroyed artemis-plugin plugin
2020-06-24 19:05:30,630 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Destroyed activemq-branding plugin
2020-06-24 19:05:30,672 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0 [4c8091c1-b230-11ea-99de-080027649461] stopped, uptime 22 minutes
Backup Server Logs:
2020-06-24 18:32:45,363 INFO [io.hawt.HawtioContextListener] Destroying hawtio services
2020-06-24 18:32:45,380 INFO [io.hawt.web.AuthenticationFilter] Destroying hawtio authentication filter
2020-06-24 18:32:45,488 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Destroyed artemis-plugin plugin
2020-06-24 18:32:45,499 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Destroyed activemq-branding plugin
2020-06-24 18:32:45,618 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0 [null] stopped, uptime 5 hours 44 minutes
2020-06-24 18:32:49,520 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2020-06-24 18:32:49,591 INFO [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2020-06-24 18:32:49,633 INFO [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/broker-1/data/journal/oldreplica.13
2020-06-24 18:32:49,634 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/broker-1/data/journal to /var/lib/broker-1/data/journal/oldreplica.15
2020-06-24 18:32:49,717 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
2020-06-24 18:32:49,856 INFO [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 1,073,741,824
2020-06-24 18:32:50,131 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2020-06-24 18:32:50,132 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2020-06-24 18:32:50,140 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2020-06-24 18:32:50,140 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2020-06-24 18:32:50,141 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2020-06-24 18:32:50,142 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2020-06-24 18:32:50,547 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2020-06-24 18:32:50,697 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2020-06-24 18:32:51,507 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2020-06-24 18:32:51,612 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2020-06-24 18:32:51,616 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to hawtio 1.5.12 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2020-06-24 18:32:51,655 INFO [io.hawt.jmx.UploadManager] Using file upload directory: /var/lib/broker-1/tmp/uploads
2020-06-24 18:32:51,690 INFO [io.hawt.web.AuthenticationFilter] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2020-06-24 18:32:51,726 INFO [io.hawt.web.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/var/lib/broker-1/etc/jolokia-access.xml]
2020-06-24 18:32:51,774 INFO [io.hawt.web.RBACMBeanInvoker] Using MBean [hawtio:type=security,area=jmx,rank=0,name=HawtioDummyJMXSecurity] for role based access control
2020-06-24 18:32:51,962 INFO [io.hawt.system.ProxyWhitelist] Initial proxy whitelist: [localhost, 127.0.0.1, 192.168.56.106, 10.0.3.15]
2020-06-24 18:32:52,630 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://0.0.0.0:8161
2020-06-24 18:32:52,631 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://0.0.0.0:8161/console/jolokia
2020-06-24 18:32:52,631 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://0.0.0.0:8161/console
I don't see any evidence in your logs that the HA pair is actually forming: the backup starts, but it never logs that it has announced itself to the live broker or synchronized with it. Note that a replicating backup doesn't open its acceptors until it actually takes over, so until the pair forms, a connection to 192.168.56.106:61616 will be refused. Try using static connectors rather than discovery. You could also try running both brokers in the same VM, or simply on your local machine; once you get it working in a simpler environment you can move to a more complex one.
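For example, on the primary you could drop the broadcast and discovery groups and wire the cluster connection statically, along these lines (a sketch adapted from the replicated static HA examples, using your IPs; the backup would mirror it with the two connector addresses swapped):

<connectors>
   <connector name="artemis">tcp://192.168.56.105:61616</connector>
   <connector name="backup">tcp://192.168.56.106:61616</connector>
</connectors>
<cluster-connections>
   <cluster-connection name="cluster-1">
      <connector-ref>artemis</connector-ref>
      <static-connectors>
         <connector-ref>backup</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>

That removes any dependency on UDP multicast working between the VMs.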