How to configure activemq-replicatedLevelDB instances to connect to a specific port on the master/slave

I'm new to activemq-replicatedLevelDB, so I might have assumed things wrong based on my limited understanding.
I'm setting up 3 ActiveMQ instances with ZooKeeper in AWS, where ZooKeeper determines which of the ActiveMQ instances is the master. ZooKeeper is deployed in a private subnet and ActiveMQ in a public subnet; there's no problem with ZooKeeper and ActiveMQ communicating.
Question/Issue: I can't find where to configure which port these ActiveMQ instances should use to communicate with each other.
Why it's an issue: for security purposes I need to restrict the ports that are open on these ActiveMQ instances, and I cannot simply allow all access coming from the public subnet.
Example of the port restrictions:
port 22 should be open for SSH access
the ZooKeeper client port (2181) should be open only to access coming from these ActiveMQ instances
port 8161 should be accessible only from specific sources
I am using security groups to restrict access in AWS. I tried allowing all ports within the public subnet, which lets the ActiveMQ instances know that the other instances are alive, and they were able to elect master/slaves. Port 45818 is not the same port after every setup from scratch, so I assume it is random.
sample logs below
Promoted to master
Using the pure java LevelDB implementation.
Master started: tcp://**.*.*.**:45818
Once I removed that port setup (allow all access), I got the stack trace below
Not enough cluster members have reported their update positions yet.
org.apache.activemq.leveldb.replicated.MasterElector
If my understanding of the stack trace above is right, it means the current ActiveMQ instance does not know about the existence of the other ActiveMQ instances. So I need to know how to configure the port these ActiveMQ instances use when checking for the other instances, so that I can restrict/allow access accordingly.
Here is the configuration of my ActiveMQ that points to the ZooKeeper addresses. Other configuration is left at default values.
activemq version: 5.13.4
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="testzookeeperip1:2181,testzookeeperip2:2181,testzookeeperip3:2181"
hostname="testhostnameofactivemqinstance"
/>
</persistenceAdapter>
Should any information be lacking, I'll update this question ASAP. Thanks.

This is more a hint than a qualified answer, but it's too long for a comment.
You configured dynamic ports with bind="tcp://0.0.0.0:0". I haven't used a fixed port in this setting, but the configuration docs say you can set one.
The bind port will be used for the replication protocol with the master, so obviously, you cannot cut it off, but it should be ok to allow only the zk machines to communicate there.
I have not analyzed traffic between the brokers, but as I understand replicated LevelDB, the ZK decides over the active master, not the brokers. So there should be no communication between the brokers on that port.
The external broker address is configured on the transportConnectors element in the <broker> section of the config file, but I guess you already have that covered.
I suggest you configure the bind to a fixed port and allow communication to that port from the ZK machines and, if required, from the cluster partners. Clients only need access to the transport ports. Allow communication to the ZKs and that should be it.
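For illustration, a sketch of the poster's persistenceAdapter with a fixed replication port (61619 is just an example value; pick any free port and open it in the security group as described above):
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="testzookeeperip1:2181,testzookeeperip2:2181,testzookeeperip3:2181"
hostname="testhostnameofactivemqinstance"
/>
</persistenceAdapter>
With a fixed port, the security group rule no longer has to open the whole ephemeral range.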

Related

How to process messages outside GCP in a Kafka server running on GCP

I have been trying to run a consumer on my local machine that connects to a Kafka server running inside GCP.
Kafka and ZooKeeper are running on the same GCP VM instance
Step 1: Start Zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
Step 2: Start Kafka
bin/kafka-server-start.sh config/server.properties
If I run a consumer inside the GCP VM instance it works fine:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
I verified the firewall rules and I have access from my local machine: I can reach both the public IP and the port the Kafka server is running on.
I tested many options, changing Kafka's server.properties, for example:
advertised.host.name=public-ip
or
advertised.listeners=public-ip
Following the answer on connecting-kafka-running-on-ec2-machine-from-my-local-machine without success.
From the official documentation:
advertised.listeners
Listeners to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address.
After testing many different options, this solution worked for me:
Setting up two listeners, one EXTERNAL with the public IP, and one INTERNAL with the private IP:
# Configure protocol map
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
# Use plaintext for inter-broker communication
inter.broker.listener.name=INTERNAL
# Specify that Kafka listeners should bind to all local interfaces
listeners=INTERNAL://0.0.0.0:9027,EXTERNAL://0.0.0.0:9037
# Separately, specify the externally visible address; the port must match the EXTERNAL listener above
advertised.listeners=INTERNAL://localhost:9027,EXTERNAL://kafkabroker-n.mydomain.com:9037
Explanation:
In many scenarios, such as when deploying on AWS, the externally advertised addresses of the Kafka brokers in the cluster differ from the internal network interfaces that Kafka uses.
Also remember to set up your firewall rule to expose the port on the EXTERNAL listener in order to connect to it from an external machine.
Note: It's important to restrict access to authorized clients only. You can use network firewall rules to restrict access. This guidance applies to scenarios that involve both RFC 1918 and public IP; however, when using public IP addresses, it's even more important to secure your Kafka endpoint because anyone can access it.
Taken from google solutions.
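With that configuration, a consumer on the local machine would point at the EXTERNAL advertised address, roughly like this (hostname and port are the placeholders from the listing above):
bin/kafka-console-consumer.sh --bootstrap-server kafkabroker-n.mydomain.com:9037 --topic test --from-beginning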

WSO2 APIM 2.0 deployment

I'm trying to understand WSO2 APIM components and deployment scenarios, but the terminology is confusing/vague for me: clustering vs distributed deployments, profiles, and port offsets.
Basically I'd like to deploy a minimal 5 node setup where:
Node # (Location): Purpose
Node 1 (DMZ): the GW (worker=True right?) and KeyManager
Node 2 (DMZ): 2nd GW node (as above) for GW & KeyManager
Node 3 (non-DMZ): the Management Console, MySQL master
Node 4 (non-DMZ): the Publisher UI, TrafficManager, MySQL slave
Node 5 (DMZ): the Store
Questions:
Should I use -DportOffset=0 on all nodes?
What -Dprofile=?? do I need to use on each of the 5 nodes?
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming api-traffic. What port is used there, 9443 or 9763?
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Please don't point me to the documentation, that's what is confusing me.
Running the Key Manager nodes and the Store node in the DMZ is not recommended, as they need DB access. If you are using multi-tenancy, you cannot host gateway worker nodes in the DMZ either, due to DB access. What you can do is host those nodes in the LAN and have a reverse proxy in the DMZ to expose the endpoints of the Gateway and Store. If you do not use multi-tenancy, then you can run gateway worker nodes in the DMZ, as DBs are not used.
As you are running multiple WSO2 servers on a single machine, you need to use port offsets to avoid conflicts. The default port offset is 0. You can run one WSO2 server with the default port offset; for the other server you need to use port offset 1 or any value other than 0. You can start the server by giving -DportOffset=1 at startup. The best way is to change the Offset value to 1 in repository/conf/carbon.xml (under the product home) so that you do not need to provide the -DportOffset value at startup.
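For reference, a minimal sketch of the relevant carbon.xml fragment (only the Offset element is shown; the rest of the file stays unchanged):
<Ports>
    <!-- All default ports below are shifted by this value -->
    <Offset>1</Offset>
</Ports>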
-Dprofile denotes the profile with which API Manager should start. If you start with -Dprofile=api-publisher, it only starts the frontend/backend features relevant to the API Publisher. Running product profiles is recommended, as each node then loads only the features relevant to its profile. You can use profiles in your deployment as you are running 6 profiles of API Manager.
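For example, starting a node with a profile looks roughly like this (a sketch; api-publisher is the profile mentioned above, and the other roles use their own profile names such as gateway-worker, api-store, api-key-manager, traffic-manager, which you should verify against your APIM version):
sh bin/wso2server.sh -Dprofile=api-publisher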
I think you are referring to gateway worker nodes, which serve API traffic. If so, they use the passthrough ports, 8280 (http) and 8243 (https); requests can be served on either. 9443 and 9763 are servlet ports; they are not used on gateway worker nodes, only on the gateway manager node for service calls.
My recommendation is that you revise this setup, as you are running nodes in the DMZ which have DB access.
Should I use -DportOffset=0 on all nodes?
It depends on how you set up those nodes. If all of these servers are on the same node (machine), you must use different port offsets, as all the API Manager servers use the same ports, so there will be port conflicts.
What -Dprofile=?? do I need to use on each of the 5 nodes?
It will adjust the ports used by API Manager so that there won't be any port conflicts between them if you are running on the same node.
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming api-traffic. What port is used there, 9443 or 9763?
For handling API requests/responses, you need 9763.
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Yes, it's correct.
Further, you can use WSO2 support for any issues you encounter.

Tablet Server Access for Accumulo Running on AWS

I am attempting to run a simple driver to write some data to an Accumulo 1.5 instance running on AWS, on a single-node cluster managed by CDH 4.7. The client successfully connects to ZooKeeper but then fails with the following message:
2015-06-26 12:12:13 WARN ServerClient:163 - Failed to find an available server in the list of servers: [172.31.13.210:10011:9997 (120000)]
I tried applying the solution listed here, but this has not resolved the issue. The IP that is set for the master/slave is the internal AWS IP for the server.
Other than the warning message, I have not been able to find anything else in the Accumulo logs that indicate what is preventing connection to the master server. Any suggestions on where to look next?
--EDIT--
It looks like ZooKeeper is returning connectors to the remote client that contain references to the internal IP of the AWS server. The remote client cannot use these connectors because it does not know about the internal IP. When I changed the internal IPs in the Thrift connector objects to the public IP, the connection works fine. In essence, I can't figure out how to get ZooKeeper to return public IPs rather than AWS-internal ones for remote clients.
172.31.13.210:10011:9997
This looks really strange. This should be an IP/hostname and a port. It looks like you have two ports somehow.
Did you list ports in the slaves file in ACCUMULO_CONF_DIR? This file should only contain the hostname/IP. If you want to change the port that a TabletServer listens on, you need to change tserver.port.client.
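For illustration, a sketch of pinning the TabletServer client port in accumulo-site.xml (9997 is the usual default; adjust to whatever fixed port you want to open in the security group):
<property>
    <name>tserver.port.client</name>
    <value>9997</value>
</property>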

zookeeper installation on multiple AWS EC2instances

I am new to ZooKeeper and AWS EC2. I am trying to install ZooKeeper on 3 EC2 instances.
As per the ZooKeeper documentation, I have installed ZooKeeper on all 3 instances, created zoo.conf, and added the configuration below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=localhost:2888:3888
server.2=<public ip of ec2 instance 2>:2889:3889
server.3=<public ip of ec2 instance 3>:2890:3890
I have also created the myid file on all 3 instances as /opt/zookeeper/data/myid, as per the guidelines.
I have a couple of queries:
Whenever I start the ZooKeeper server on each instance, it starts in standalone mode (as per the logs).
Will the above configuration really make them connect to each other? What are the ports 2889:3889 and 2890:3890 all about? Do I need to configure them on the EC2 machines, or should I use some other ports instead?
Do I need to create a security group to open these connections? I am not sure how to do that for an EC2 instance.
How can I confirm that all 3 ZooKeeper servers have started and can communicate with each other?
The ZooKeeper configuration is designed such that you can install the exact same configuration file on all servers in the cluster without modification. This makes ops a bit simpler. The component that specifies the configuration for the local node is the myid file.
The configuration you've defined is not one that can be shared across all servers. All of the servers in your server list should be binding to a private IP address that is accessible to other nodes in the network. You're seeing your server start in standalone mode because you're binding to localhost. So, the problem is the other servers in the cluster can't see localhost.
Your configuration should look more like:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=<private ip of ec2 instance 1>:2888:3888
server.2=<private ip of ec2 instance 2>:2888:3888
server.3=<private ip of ec2 instance 3>:2888:3888
The two ports listed in each server definition are respectively the quorum and election ports used by ZooKeeper nodes to communicate with one another internally. There's usually no need to modify these ports, and you should try to keep them the same across servers for consistency.
Additionally, as I said you should be able to share that exact same configuration file across all instances. The only thing that should have to change is the myid file.
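For example, on instance 1 (a sketch using the dataDir from the question; use 2 and 3 on the other instances):
echo 1 > /opt/zookeeper/data/myid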
You probably will need to create a security group and open up the client port to be available for clients and the quorum/election ports to be accessible by other ZooKeeper servers.
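As a rough sketch with the AWS CLI (the group id is a placeholder; using the group itself as the source restricts the ZooKeeper ports to members of that group):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2181 --source-group sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2888 --source-group sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3888 --source-group sg-0123456789abcdef0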
Finally, you might want to look into a UI to help manage the cluster. Netflix makes a decent UI that will give you a view of your cluster and also help with cleaning up old logs and storing snapshots to S3 (ZooKeeper takes snapshots but does not delete old transaction logs, so your disk will eventually fill up if they're not properly removed). Once it's configured correctly, you should also be able to see the ZooKeeper servers connecting to each other in the logs.
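To verify the ensemble, a couple of quick checks on each instance (a sketch; paths depend on where ZooKeeper is installed):
bin/zkServer.sh status            # one node should report Mode: leader, the others Mode: follower
echo srvr | nc localhost 2181     # the srvr four-letter command also prints the node's Mode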
EDIT
#czerasz notes that starting from version 3.4.0 you can use the autopurge.snapRetainCount and autopurge.purgeInterval directives to keep your snapshots clean.
#chomp notes that some users have had to use 0.0.0.0 for the local server IP to get the ZooKeeper configuration to work on EC2. In other words, replace <private ip of ec2 instance 1> with 0.0.0.0 in the configuration file on instance 1. This is counter to the way ZooKeeper configuration files are designed but may be necessary on EC2.
Adding additional info regarding Zookeeper clustering inside Amazon's VPC.
The solution with the VPC's public IP address should be the preferred one for ZooKeeper; using '0.0.0.0' should be your last option.
If you are using Docker on your EC2 instance, '0.0.0.0' will not work properly with ZooKeeper 3.5.x after a node restart.
The issue lies in how '0.0.0.0' is resolved and how the ensemble shares node addresses and SID order (if you start your nodes in descending order, this issue may not occur).
So far the only working solution is to upgrade to version 3.6.2+.

How to connect hornetq on AWS VPC from another vm on AWS

I have 2 VMs on AWS. On the first VM I have HornetQ and an application that sends messages to HornetQ. On the other VM I have an application that is a consumer of HornetQ.
The consumer fails to pull messages from HornetQ, and I can't understand why. HornetQ is running, and I opened the ports to any IP.
I tried to connect to HornetQ with JConsole (from my local computer) and failed, so I can't see whether HornetQ has any consumers/suppliers.
I've tried to change the 'bind' configurations to 0.0.0.0, but when I restarted HornetQ they were automatically changed back to what I have as the server IP in config.properties.
Any suggestions as to what might be preventing my application from connecting to HornetQ?
Thanks!
These are the things you need to check for connectivity between VMs in a VPC:
The security group of the instance has both ingress and egress configuration settings, unlike the traditional EC2 security group [now EC2-Classic]. Check the egress from your consumer and the ingress to the server.
If the instances are in different subnets, you need to check the network ACLs as well; however, the default setting is to allow.
Check whether iptables / an OS-level firewall is blocking traffic (a quick check is sketched below).
With respect to the failed connectivity from your local machine to HornetQ: you need to place the instance in a public subnet and configure the instance's security group accordingly; only then would the app/VM be accessible from the public internet.
I have assumed that both instances are in the same VPC. However, the title of the post sounds slightly misleading: if they are 2 different VPCs altogether, then the concept of VPC Peering also comes into play.
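As a quick sanity check, assuming HornetQ's default Netty acceptor port 5445 (substitute your actual acceptor port; the private IP below is a placeholder):
nc -vz 172.31.0.10 5445      # from the consumer VM: verifies the security group / ACL / firewall path to the broker
sudo iptables -L -n          # on the HornetQ VM: look for rules that drop incoming traffic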