Connection issue with Hazelcast on Amazon AWS - amazon-web-services

I am using Hazelcast v3.6 on two Amazon AWS virtual machines (not using the AWS-specific settings for Hazelcast). The connection is supposed to work via the TCP/IP join settings (not multicast). I have opened ports 5701-5801 for connections on the virtual machines.
I have tested with iperf on the two virtual machines and can see that the client on one VM connects to the server on the other VM (and vice versa when I swap the client/server roles for iperf).
When I launch two Hazelcast servers on different VMs, the connection is not established. The log statements and the hazelcast.xml config are given below (I am not using programmatic configuration for Hazelcast). I have changed the IP addresses below:
20160401-16:41:02.812 [cached2] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5701, timeout: 0, bind-any: true
20160401-16:41:02.812 [cached3] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5703, timeout: 0, bind-any: true
20160401-16:41:02.813 [cached1] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5702, timeout: 0, bind-any: true
20160401-16:41:02.816 [cached1] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Could not connect to: /22.23.24.25:5702. Reason: SocketException[Connection refused to address /22.23.24.25:5702]
20160401-16:41:02.816 [cached1] TcpIpJoiner INFO - [45.46.47.48]:5701 [dev] [3.6] Address[22.23.24.25]:5702 is added to the blacklist.
20160401-16:41:02.817 [cached3] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Could not connect to: /22.23.24.25:5703. Reason: SocketException[Connection refused to address /22.23.24.25:5703]
20160401-16:41:02.817 [cached3] TcpIpJoiner INFO - [45.46.47.48]:5701 [dev] [3.6] Address[22.23.24.25]:5703 is added to the blacklist.
20160401-16:41:02.834 [cached2] TcpIpConnectionManager INFO - [45.46.47.48]:5701 [dev] [3.6] Established socket connection between /45.46.47.48:51965 and /22.23.24.25:5701
20160401-16:41:02.849 [hz._hzInstance_1_dev.IO.thread-in-0] TcpIpConnection INFO - [45.46.47.48]:5701 [dev] [3.6] Connection [Address[22.23.24.25]:5701] lost. Reason: java.io.EOFException[Remote socket closed!]
20160401-16:41:02.851 [hz._hzInstance_1_dev.IO.thread-in-0] NonBlockingSocketReader WARN - [45.46.47.48]:5701 [dev] [3.6] hz._hzInstance_1_dev.IO.thread-in-0 Closing socket to endpoint Address[54.89.161.228]:5701, Cause:java.io.EOFException: Remote socket closed!
20160401-16:41:03.692 [cached2] InitConnectionTask INFO - [45.46.47.48]:5701 [dev] [3.6] Connecting to /22.23.24.25:5701, timeout: 0, bind-any: true
20160401-16:41:03.693 [cached2] TcpIpConnectionManager INFO - [45.46.47.48]:5701 [dev] [3.6] Established socket connection between /45.46.47.48:60733 and /22.23.24.25:5701
20160401-16:41:03.696 [hz._hzInstance_1_dev.IO.thread-in-1] TcpIpConnection INFO - [45.46.47.48]:5701 [dev] [3.6] Connection [Address[22.23.24.25]:5701] lost. Reason: java.io.EOFException[Remote socket closed!]
Part of the Hazelcast config:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <group>
        <name>abc</name>
        <password>defg</password>
    </group>
    <network>
        <port auto-increment="true" port-count="100">5701</port>
        <outbound-ports>
            <ports>0-5900</ports>
        </outbound-ports>
        <join>
            <multicast enabled="false">
                <!--<multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>-->
            </multicast>
            <tcp-ip enabled="true">
                <member>22.23.24.25</member>
            </tcp-ip>
        </join>
        <interfaces enabled="true">
            <interface>45.46.47.48</interface>
        </interfaces>
        <ssl enabled="false" />
        <socket-interceptor enabled="false" />
        <symmetric-encryption enabled="false">
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
    </network>
    <partition-group enabled="false"/>
iperf server and client log statements
Server listening on TCP port 5701
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 22.23.24.25, TCP port 5701
TCP window size: 1.33 MByte (default)
------------------------------------------------------------
[ 5] local 172.31.17.104 port 57398 connected with 22.23.24.25 port 5701
[ 4] local 172.31.17.104 port 5701 connected with 22.23.24.25 port 55589
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 662 MBytes 555 Mbits/sec
[ 4] 0.0-10.0 sec 797 MBytes 666 Mbits/sec
Server listening on TCP port 5701
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local xxx.xx.xxx.xx port 5701 connected with 22.23.24.25 port 57398
------------------------------------------------------------
Client connecting to 22.23.24.25, TCP port 5701
TCP window size: 1.62 MByte (default)
------------------------------------------------------------
[ 6] local 172.31.17.23 port 55589 connected with 22.23.24.25 port 5701
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 797 MBytes 669 Mbits/sec
[ 4] 0.0-10.0 sec 662 MBytes 553 Mbits/sec
Note:
I forgot to mention that I can connect from a Hazelcast client to a server, i.e. when I use a Hazelcast client to connect to a single Hazelcast server node, I am able to connect just fine.

An outbound port range which includes 0 is interpreted by Hazelcast as "use ephemeral ports", so the <outbound-ports> element actually has no effect in your configuration. There is an associated test in the Hazelcast sources: https://github.com/hazelcast/hazelcast/blob/75251c4f01d131a9624fc3d0c4190de5cdf7d93a/hazelcast/src/test/java/com/hazelcast/nio/NodeIOServiceTest.java#L60
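If the intent is to actually restrict outbound ports, the range has to exclude 0. A minimal sketch (the 33000-35000 range is only an illustration, pick whatever range your security groups allow):
<outbound-ports>
    <!-- 0 means "use ephemeral ports"; a range without 0 actually restricts the outbound ports Hazelcast binds to -->
    <ports>33000-35000</ports>
</outbound-ports>
Alternatively, dropping the <outbound-ports> element entirely leaves Hazelcast on ephemeral outbound ports, which is what the current 0-5900 range effectively does anyway.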

Related

TLS origination from sidecar proxy instead of the Egress Gateway

I am trying to set up mTLS for outgoing connections, but instead of originating the TLS traffic from the egress gateway, I'm trying to do it from the sidecar proxy itself. We want to originate the TLS connection from the proxy, not the egress gateway.
I took care of mounting the client certs in my sidecar proxy container and verified that the client certs are available at the expected path. My API resources look something like the following:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-host-mtls
spec:
  hosts:
  - external-host-example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: external-mtls
spec:
  hosts:
  - external-host-example.com
  tls:
  - match:
    - port: 443
      sniHosts:
      - external-host-example.com
    route:
    - destination:
        host: external-host-example.com
        port:
          number: 443
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-mtls
spec:
  host: external-host-example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/client-certs/client.pem
      privateKey: /etc/client-certs/client.key
      caCertificates: /etc/client-certs/ca.pem
When I curl external-host-example.com, I am hoping that Istio will add the client certs to the connection. I'm not sure that's happening, because I'm running into errors:
curl -H "Host: external-host-example.com" --tlsv1.2 -v https://external-host-example.com
* About to connect() to external-host-example.com port 443 (#0)
* Trying x.x.x.x...
* Connected to external-host-example.com (x.x.x.x) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
Looking at the debug logs, I see this
|2020-11-17T15:53:58.226367Z|debug|envoy filter|[external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:148] tls:onServerName(), requestedServerName: external-host-example.com|
|2020-11-17T15:53:58.226443Z|debug|envoy filter|[external/envoy/source/common/tcp_proxy/tcp_proxy.cc:251] [C161] new tcp proxy session|
|2020-11-17T15:53:58.226480Z|debug|envoy filter|[external/envoy/source/common/tcp_proxy/tcp_proxy.cc:395] [C161] Creating connection to cluster outbound|443||external-host-example.com|
|2020-11-17T15:53:58.226509Z|debug|envoy pool|[external/envoy/source/common/tcp/conn_pool.cc:83] creating a new connection|
|2020-11-17T15:53:58.226550Z|debug|envoy pool|[external/envoy/source/common/tcp/conn_pool.cc:364] [C162] connecting|
|2020-11-17T15:53:58.226557Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:727] [C162] connecting to x.x.x.x:443|
|2020-11-17T15:53:58.226641Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:736] [C162] connection in progress|
|2020-11-17T15:53:58.226656Z|debug|envoy pool|[external/envoy/source/common/tcp/conn_pool.cc:109] queueing request due to no available connections|
|2020-11-17T15:53:58.226662Z|debug|envoy conn_handler|[external/envoy/source/server/connection_handler_impl.cc:411] [C161] new connection|
|2020-11-17T15:53:58.252446Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:592] [C162] connected|
|2020-11-17T15:53:58.252555Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read|
|2020-11-17T15:53:58.277388Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read|
|2020-11-17T15:53:58.277417Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read|
|2020-11-17T15:53:58.277595Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:176] [C162] handshake complete|
|2020-11-17T15:53:58.277633Z|debug|envoy pool|[external/envoy/source/common/tcp/conn_pool.cc:285] [C162] assigning connection|
|2020-11-17T15:53:58.277661Z|debug|envoy filter|[external/envoy/source/common/tcp_proxy/tcp_proxy.cc:624] TCP:onUpstreamEvent(), requestedServerName:external-host-example.com|
|2020-11-17T15:53:58.303804Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:226] [C162]|
|2020-11-17T15:53:58.303830Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:558] [C162] remote close|
|2020-11-17T15:53:58.303834Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:200] [C162] closing socket: 0|
|2020-11-17T15:53:58.303853Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:298] [C162] SSL shutdown: rc=-1|
|2020-11-17T15:53:58.303855Z|debug|envoy connection|[external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:226] [C162]|
|2020-11-17T15:53:58.303880Z|debug|envoy pool|[external/envoy/source/common/tcp/conn_pool.cc:124] [C162] client disconnected|
|2020-11-17T15:53:58.303894Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:109] [C161] closing data_to_write=0 type=0|
|2020-11-17T15:53:58.303900Z|debug|envoy connection|[external/envoy/source/common/network/connection_impl.cc:200] [C161] closing socket: 1|
|2020-11-17T15:53:58.303985Z|debug|envoy conn_handler|[external/envoy/source/server/connection_handler_impl.cc:111] [C161] adding to cleanup list|
Any idea what I am doing wrong? How do I debug this further?

Stunnel for ElastiCache Redis (cluster mode enabled)

I have spun up an ElastiCache Redis cluster (cluster mode enabled) on AWS. It has 3 master shards and 1 replica for each (3 replicas in total). I have turned on in-transit encryption. For this I have installed stunnel on my EC2 instance, and my config file looks like the following (3001 is my cluster port):
[redis-cli]
client = yes
accept = 127.0.0.1:3001
connect = master1_url:3001
[redis-cli1]
client = yes
accept = 127.0.0.1:3002
connect = replica1_url.com:3001
[redis-cli2]
client = yes
accept = 127.0.0.1:3003
connect = master2_url:3001
[redis-cli3]
client = yes
accept = 127.0.0.1:3004
connect = replica2_url.com:3001
[redis-cli4]
client = yes
accept = 127.0.0.1:3005
connect = master3_url:3001
[redis-cli3]
client = yes
accept = 127.0.0.1:3006
connect = replica3_url.com:3001
----------------------------------------------
sudo netstat -tulnp | grep -i stunnel
tcp 0 0 127.0.0.1:3001 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3002 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3003 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3004 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3005 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3006 0.0.0.0:* LISTEN 32272/stunnel
When I connect using localhost (src/redis-cli -c -h localhost -p 3001), the connection is successful. But when I run "get key", it gets stuck with the following:
localhost:3001> get key
-> Redirected to slot [12539] located at master3_url:3001
If I change the cluster to a single shard and a single replica, everything works fine. What setting am I missing when using a multi-shard cluster? My EC2 instance is open to accept connections on all ports, and the Redis cluster is open to accept connections from the EC2 instance on all ports.
This is my first question on stackoverflow :)

Cannot cluster WSO2 API manager 2.6.0 using Hazelcast

I am working on clustering WSO2 API Manager with Hazelcast (following this doc: https://docs.wso2.com/display/AM260/Working+with+Hazelcast+Clustering#WorkingwithHazelcastClustering-EnablingHazelcastclustering). Here I have two nodes under the same domain on 127.0.0.1.
Let us assume A is running on port 4001 while B is running on port 4002. I have joined the two nodes as follows:
A -
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4002</port>
    </member>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4001</port>
    </member>
</members>
B -
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4001</port>
    </member>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4001</port>
    </member>
</members>
I also tried this:
A -
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4002</port>
    </member>
</members>
B -
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4002</port>
    </member>
</members>
But both methods produced the log output below:
[2019-11-27 13:23:16,763] INFO - SocketAcceptor [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Accepting socket connection from /127.0.0.1:51206
[2019-11-27 13:23:16,763] INFO - TcpIpConnectionManager [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Established socket connection between /127.0.0.1:4001
[2019-11-27 13:23:16,764] WARN - TcpIpConnectionManager [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Wrong bind request from Address[127.0.0.1]:0! This node is not requested endpoint: Address[127.0.0.1]:4001
[2019-11-27 13:23:16,764] INFO - TcpIpConnection [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Connection [/127.0.0.1:51206] lost. Reason: Socket explicitly closed
[2019-11-27 13:23:44,354] INFO - TcpIpConnectionManager [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Established socket connection between /127.0.0.1:51211
[2019-11-27 13:23:44,359] INFO - TcpIpConnection [127.0.0.1]:0 [wso2.am.domain] [3.5.4] Connection [Address[127.0.0.1]:4002] lost. Reason: java.io.EOFException[Remote socket closed!]
[2019-11-27 13:23:44,360] WARN - ReadHandler [127.0.0.1]:0 [wso2.am.domain] [3.5.4] hz.wso2.am.domain.instance.IO.thread-in-1 Closing socket to endpoint Address[127.0.0.1]:4002, Cause:java.io.EOFException: Remote socket closed!
How can I solve this issue?
[Issue solved]
The answer is "Use the local machine IP instead of localhost". There are two options.
Option 01
Add the following to tasks/main.yml:
# Get local IP
- name: get local ip
  debug:
    var: ansible_default_ipv4.address
and this to site.yml:
- hosts: localhost
  connection: local
Option 02
Add the following snippet to site.yml:
---
- hosts: localhost
  roles:
    - carbon
  connection: local
  tasks:
    - debug: var=ansible_all_ipv4_addresses
    - debug: var=ansible_default_ipv4.address
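To illustrate what "use the local machine IP instead of localhost" means for the Hazelcast members block, node A's config would end up roughly like the sketch below (192.168.1.10 is only a placeholder for whatever ansible_default_ipv4.address reports on that machine):
<members>
    <member>
        <!-- placeholder: replace with the machine IP reported by ansible_default_ipv4.address -->
        <hostName>192.168.1.10</hostName>
        <port>4002</port>
    </member>
</members>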

Error during WebSocket handshake with user endpoint once Django Channels app is deployed

I'm trying to set up a WebSocket connection at the user level in my Django app for receiving notifications. Prod is HTTPS, so I need to use wss.
Here is the JS:
$( document ).ready(function() {
    socket = new WebSocket("wss://" + window.location.host + "/user_id/");
    var $notifications = $('.notifications');
    var $notificationList = $('.dropdown-menu.notification-list');
    $notifications.click(function(){
        $(this).removeClass('newNotification')
    });
    socket.onmessage = function(e) {
        // notification stuff
        // Call onopen directly if socket is already open
        if (socket.readyState == WebSocket.OPEN) socket.onopen();
    }
});
I've implemented django-channels in settings.py like this:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [ENV_STR("REDIS_URL")]
        },
        "ROUTING": "project.routing.channel_routing",
    },
}
routing.py
from channels.routing import route
from community.consumers import ws_add, ws_disconnect

channel_routing = [
    route("websocket.connect", ws_add),
    route("websocket.disconnect", ws_disconnect),
]
Locally, this handshakes just fine:
2017-04-16 16:37:04,108 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,109 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,110 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-04-16 16:37:04,111 - INFO - server - HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2017-04-16 16:37:04,112 - INFO - server - Using busy-loop synchronous mode on channel layer
2017-04-16 16:37:04,112 - INFO - server - Listening on endpoint tcp:port=8000:interface=127.0.0.1
[2017/04/16 16:37:22] HTTP GET / 200 [0.55, 127.0.0.1:60129]
[2017/04/16 16:37:23] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:23] WebSocket CONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:25] HTTP GET /user/test10 200 [0.47, 127.0.0.1:60129]
[2017/04/16 16:37:25] WebSocket DISCONNECT /user_id/ [127.0.0.1:60136]
[2017/04/16 16:37:26] WebSocket HANDSHAKING /user_id/ [127.0.0.1:60153]
[2017/04/16 16:37:26] WebSocket CONNECT /user_id/ [127.0.0.1:60153]
However, now that the app has been deployed to Heroku, the /user_id/ endpoint returns 404, and I'm getting ERR_DISALLOWED_URL_SCHEME when it should be a valid endpoint:
WebSocket connection to 'wss://mydomain.com/user_id/' failed: Error during WebSocket handshake: Unexpected response code: 404.
As I continue to research, it seems like this server config can't actually support WebSockets in prod. Current Procfile:
web: gunicorn project.wsgi:application --log-file -
worker: python manage.py runworker -v2
I spent a few hours looking at converting the app to ASGI and using Daphne, although since the project is on Python 2.7 that's a difficult conversion (not to mention it seems to require a different static file implementation).
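For reference, if that conversion were done, a Channels 1.x deployment on Heroku would roughly swap gunicorn for Daphne as the web process, along the lines of the sketch below (assumes a project/asgi.py that exposes channel_layer; module names and paths are placeholders):
Procfile (sketch):
web: daphne project.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2
project/asgi.py (sketch):
# Channels 1.x ASGI entry point; exposes the channel layer that Daphne serves
import os
from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
channel_layer = get_channel_layer()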

Zookeeper - Kafka: ConnectException - Connection refused

I am trying to set up 3 Kafka brokers on Ubuntu EC2 machines, but I am getting a ConnectException while starting ZooKeeper. All the ports in the security group of my EC2 instances are already open.
Below is the stack trace:
[2016-03-03 07:37:12,040] ERROR Exception while listening (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.BindException: Cannot assign requested address
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at java.net.ServerSocket.bind(ServerSocket.java:330)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:507)
...
...
...
[2016-03-03 07:23:46,093] WARN Cannot open channel to 2 at election address /52.36.XXX.181:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
Below are the configuration files.
zookeeper.properties:
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
server.1=52.37.XX.70:2888:3888
server.2=52.36.XXX.181:2888:3888
server.3=52.37.XX.42:2888:3888
initLimit=5
syncLimit=2
server.properties:
broker.id=1
port=9092
host.name=52.37.XX.70
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=52.37.XX.70:2181,52.36.XXX.181:2181,52.37.XX.42:2181
zookeeper.connection.timeout.ms=6000
I have added the servers' public IPs to /etc/hosts on the instances. My modified /etc/hosts is:
127.0.0.1 localhost localhost.localdomain ip-10-131-X-217
127.0.0.1 localhost localhost.localdomain 52.37.XX.70
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
The unique myid entries in /tmp/zookeeper/myid are also entered correctly.
I have followed all the steps mentioned in: How to create a MultiNode - MultiBroker Cluster for Kafka on AWS
The issue was that I was using the public IP of the servers. Using the public DNS of the EC2 instances instead fixed the issue.
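For illustration, the server entries in zookeeper.properties would then look roughly like the sketch below (these DNS names are placeholders following the EC2 public DNS pattern, not the real ones; the region is assumed):
server.1=ec2-52-37-XX-70.us-west-2.compute.amazonaws.com:2888:3888
server.2=ec2-52-36-XXX-181.us-west-2.compute.amazonaws.com:2888:3888
server.3=ec2-52-37-XX-42.us-west-2.compute.amazonaws.com:2888:3888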