The Kafka virtual machine's address on Google Cloud Platform is xx.xx.xxx.xxx.
My local Spring Boot app usually connects to localhost:9092 of my local machine's Kafka server; I changed that to the GCP virtual machine's IP, xx.xx.xxx.xxx:9092.
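For reference, a minimal sketch of that change, assuming the app uses Spring for Apache Kafka and is configured through application.properties (the property name is the standard Spring Boot one; the IP is the placeholder from above):
# application.properties
# previously: spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.bootstrap-servers=xx.xx.xxx.xxx:9092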
But on startup the application logs the warning
2020-04-05 15:30:41.356 WARN 7968 --- [| adminclient-4] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-4] Connection to node -1 could not be established. Broker may not be available.
and eventually times out. Should there be a different way to connect to a cloud-hosted Kafka from a Spring Boot application?
You need to configure the broker on the GCP VM with a correct advertised.listeners entry so that your client receives a resolvable hostname/IP from it after the initial successful connection.
You can validate the connection and the broker metadata returned by the advertised.listeners setting using kafkacat -L:
$ kafkacat -b xx.xx.xxx.xxx:9092 -L
Metadata for all topics (from broker -1: xx.xx.xxx.xxx:9092/bootstrap):
1 brokers:
broker 0 at a.b.c.d:9092
The a.b.c.d returned should be an IP or hostname that your client can resolve and use to reach the broker itself (not a loopback address, an internal network IP, etc.).
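As a rough sketch of the broker side, assuming a single PLAINTEXT listener and the placeholder public IP from the question:
# config/server.properties on the GCP VM
# Bind on all interfaces inside the VM
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise the address the external client must use
advertised.listeners=PLAINTEXT://xx.xx.xxx.xxx:9092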
To understand more see https://rmoff.net/2018/08/02/kafka-listeners-explained/
The following properties were added (see the sketch after this list):
advertised.host.name in server.properties, set to the public IP address,
metadata.broker.list in producer.properties, set to the public IP address,
host.name in server.properties, set to 0.0.0.0.
These property files are inside the config folder. A restart of the broker is necessary.
This solved the problem. For the broker/bootstrap.server setting, the public IP address is used in the application after adding the above-mentioned properties.
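For illustration, a rough sketch of those older properties (they are the now-deprecated predecessors of advertised.listeners; the public IP and port are placeholders):
# config/server.properties
host.name=0.0.0.0
advertised.host.name=<public-ip>
# config/producer.properties
metadata.broker.list=<public-ip>:9092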
I have configured a HashiCorp Vault server on an EC2 instance. When trying to use Postman to test the transit secrets engine API, I keep getting a connection refused error in Postman. I went full ape mode and opened all ports in the security group's inbound rules and it didn't work; I attached an Elastic IP to the instance and that didn't work either. I'm just trying a simple GET and I keep getting the same connection refused error.
When I use curl in the SSH-connected session I have no issues, though. The specified host address is 127.0.0.1:8200; in Postman I replaced that localhost with the public address of the instance (which I obviously censored in the screencap), and the headers contain the token needed to access Vault; for simplicity I was just using the root token.
Postman screencap, if it helps.
@Emilio Marchant
I have faced a similar issue (not with Postman, but with telnet). Let's try to understand the problem here.
The issue is with the 127.0.0.1 IP. This is the loopback IP: when you (or your computer) call an IP address, you are usually trying to contact another computer on the internet. However, if you call the IP address 127.0.0.1, you are communicating with the localhost, in other words, with your own computer.
Reference link : https://www.ionos.com/digitalguide/server/know-how/localhost/
What you can try is below.
Start the Vault dev server with the -dev-listen-address parameter.
Eg:
vault server -dev -dev-listen-address="123.456.789.1:8200"
In the above command, replace '123.456.789.1:8200' with '<your EC2 instance private IP>:8200'.
Next, set the VAULT_ADDR and VAULT_TOKEN environment variables as below:
export VAULT_ADDR='http://123.456.789.1:8200'
export VAULT_TOKEN='*****************'
Again, replace 'http://123.456.789.1:8200' with 'http://[your EC2 instance private IP]:8200'.
For VAULT_TOKEN: you should see a root token in the console when you start the Vault server; use that token.
Now try to connect from Postman or using a curl command like the one sketched below. It should work.
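A minimal, hedged check (sys/health and sys/mounts are standard Vault HTTP API paths; the address is the same placeholder as above):
# Unauthenticated health check of the Vault server
curl http://123.456.789.1:8200/v1/sys/health
# Authenticated request using the exported root token
curl --header "X-Vault-Token: $VAULT_TOKEN" http://123.456.789.1:8200/v1/sys/mounts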
Reference question and solution :
How to connect to remote hashicorp vault server
The notable thing here is that the response is "connection refused". This error means that the request actually reached the host, but no process was listening on that port. It also means there is no issue with the firewall: a firewall would cause the connection to either drop (reject) or time out (ignore), but would not produce ECONNREFUSED.
The most likely issue is that the Vault server process is not bound to the correct network interface. There must be a configuration option in HashiCorp Vault to set the IP on which to bind. Most servers, by default, bind only to the loopback address, which is accessible only from 127.0.0.1. You need to bind it to all network interfaces by changing that to 0.0.0.0. I am not aware of the specific configuration option of HashiCorp Vault, but there has to be something to this effect; a hedged sketch follows.
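For a non-dev Vault server, the bind address lives in the listener block of the server configuration file. A minimal sketch, assuming plain HTTP with TLS disabled (for testing only) and an illustrative file storage path:
# vault-config.hcl (start with: vault server -config=vault-config.hcl)
storage "file" {
  path = "./vault-data"
}
listener "tcp" {
  # Bind on all interfaces instead of the default 127.0.0.1
  address     = "0.0.0.0:8200"
  tls_disable = 1
}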
Possible security issue:
Note that some servers expect you to run them behind a reverse proxy so that you can set up SSL (HTTPS) and additional authentication if needed. Applications like Vault servers should not be publicly accessible over HTTP without SSL.
We want to have a test cloud virtual network that allows us to run an SNMP GET against multiple virtual devices. To achieve this I am using GNS3. We just deployed a GNS3 server on EC2 (Ubuntu 18), but we can't ping or SNMP GET any router from outside the GNS3 server. We can ping these devices while we are on the GNS3 server, but it does not work from another server or from my computer.
The GNS3 server is already created and deployed.
The VPG, site-to-site VPN, and VPC are already created, and the servers were added to this VPC.
After some weeks of research, our team found the solution. If anyone is having this same problem, consider these important points in your AWS configuration:
Server A (GNS3) must be in a different subnet than Server B (the test server you want to ping from).
A route table must be created in the AWS config pointing to the GNS3 IPs.
Configure NAT on Server A (in my case an Ubuntu 18 machine) using the following instructions:
Set up IP forwarding and masquerading:
iptables --table nat --append POSTROUTING --out-interface ens5 -j MASQUERADE
iptables --append FORWARD --in-interface virbr0 -j ACCEPT
Enable packet forwarding in the kernel:
echo 1 > /proc/sys/net/ipv4/ip_forward
Apply the configuration:
service iptables restart
This will allow your virtual GNS3 devices on Server A to be reached from Server B (a more detailed explanation here). Additionally, you might want to test an SNMP walk from Server B to your virtual device on Server A (a router in my case), as sketched below.
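A minimal sketch of that check, assuming SNMPv2c with the default 'public' community string; the address is a hypothetical IP of the virtual router inside GNS3:
# From Server B: walk the system subtree of the virtual router hosted on Server A
snmpwalk -v2c -c public 10.0.0.10 system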
If this does not work, try debugging with flow logs in AWS and checking whether Server A is actually receiving the requests.
I have been trying to run a consumer on my local machine that connects to a Kafka server running inside GCP.
Kafka and ZooKeeper are running on the same GCP VM instance.
Step 1: Start Zookeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
Step 2: Start Kafka
bin/kafka-server-start.sh config/server.properties
If I run a consumer inside the GCP VM instance it works fine:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
I verified the firewall rules, and I have access from my local machine: I can reach both the public IP and the port the Kafka server is running on.
I tested many options, changing Kafka's server.properties, for example:
advertised.host.name=public-ip
or
advertised.listeners=public-ip
I followed the answer on connecting-kafka-running-on-ec2-machine-from-my-local-machine without success.
From the official documentation:
advertised.listeners
Listeners to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.
After testing many different options, this solution worked for me:
Setting up two listeners, one EXTERNAL with the public IP, and one INTERNAL with the private IP:
# Configure protocol map
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
# Use plaintext for inter-broker communication
inter.broker.listener.name=INTERNAL
# Specify that Kafka listeners should bind to all local interfaces
listeners=INTERNAL://0.0.0.0:9027,EXTERNAL://0.0.0.0:9037
# Separately, specify externally visible address
advertised.listeners=INTERNAL://localhost:9027,EXTERNAL://kafkabroker-n.mydomain.com:9037
Explanation:
In many scenarios, such as when deploying on AWS, the externally advertised addresses of the Kafka brokers in the cluster differ from the internal network interfaces that Kafka uses.
Also remember to set up your firewall rule to expose the port of the EXTERNAL listener in order to connect to it from an external machine.
Note: It's important to restrict access to authorized clients only. You can use network firewall rules to restrict access. This guidance applies to scenarios that involve both RFC 1918 and public IP addresses; however, when using public IP addresses, it is even more important to secure your Kafka endpoint because anyone can access it.
Taken from Google solutions.
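A quick, hedged way to verify the EXTERNAL listener from the local machine, reusing the console consumer from the question (the hostname, port, and topic are the placeholders used above):
bin/kafka-console-consumer.sh --bootstrap-server kafkabroker-n.mydomain.com:9037 --topic test --from-beginning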
I am attempting to run a simple driver to write some data to an Accumulo 1.5 instance running on AWS, on a single-node cluster managed by CDH 4.7. The client successfully connects to ZooKeeper but then fails with the following message:
2015-06-26 12:12:13 WARN ServerClient:163 - Failed to find an available server in the list of servers: [172.31.13.210:10011:9997 (120000)]
I tried applying the solution listed here, but it has not resolved the issue. The IP that is set for the master/slave is the internal AWS IP of the server.
Other than the warning message, I have not been able to find anything else in the Accumulo logs that indicates what is preventing the connection to the master server. Any suggestions on where to look next?
--EDIT--
It looks like ZooKeeper is returning connectors to the remote client that contain references to the internal IP of the AWS server. The remote client cannot use these connectors because it does not know about the internal IP. When I changed the internal IPs in the Thrift connector objects to the public IP, the connection worked fine. In essence, I can't figure out how to get ZooKeeper to return public IPs, and not AWS-internal ones, to remote clients.
172.31.13.210:10011:9997
This looks really strange. This should be an IP/hostname and a port, but it looks like you have two ports somehow.
Did you list ports in the slaves file in ACCUMULO_CONF_DIR? This file should contain only the hostname/IP. If you want to change the port that a TabletServer listens on, you need to change tserver.port.client, as sketched below.
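A hedged sketch of that separation, using the IP from the warning above and assuming 10011 was the intended client port:
# $ACCUMULO_CONF_DIR/slaves -- hostnames/IPs only, no ports
172.31.13.210
<!-- $ACCUMULO_CONF_DIR/accumulo-site.xml -->
<property>
  <name>tserver.port.client</name>
  <value>10011</value>
</property>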
I'm trying to install a WSO2 EMM server on an Amazon EC2 instance, but I have a problem with it. The EC2 instance has two IP addresses: one is an internal Amazon address like 172.32.x.x, the other is an external real IP.
If I set up the carbon.xml file with the real IP (or domain), I have a problem with the Thrift server, which can't open port 10500 on the real IP. If I use the internal IP, the server runs fine, but from the application I can't reach the identity server (because it's a gray IP, of course).
I tried some tricks with the /etc/hosts file, for example setting 0.0.0.0 as my domain. In that case the server runs without errors, and I can see with netstat that port 10500 is open, but the web application is not redirected to the identity server.
Maybe there are solutions to this problem?
I have updated the EMM server to version 1.1.0 and all is working now.
Thanks all!
In the carbon.xml, change the HostName and MgtHostName to the real IP and start the server.
For example, if the real IP is 172.32.x.x, then HostName and MgtHostName in the carbon.xml should change to:
<HostName>172.32.x.x</HostName>
<MgtHostName>172.32.x.x</MgtHostName>