E/DefaultBootstrapChannel: Failed to receive operation servers list {} - amazon-web-services

I have deployed Kaa on AWS, but whenever I try to run the sample project it shows this error:
E/DefaultBootstrapChannel: Failed to receive operation servers list {}
org.apache.http.conn.HttpHostConnectException: Connection to http://ec2-old IP address.compute-1.amazonaws.com:9889 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:248)
Please help me change the IP address in Kaa on AWS.

Looks like I have figured out the solution.
Use the command line below:
sudo /usr/lib/kaa-sandbox/bin/change_kaa_host.sh your-IP
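For example, assuming the Sandbox's new public DNS name is ec2-54-1-2-3.compute-1.amazonaws.com (a made-up value), the call would be:
sudo /usr/lib/kaa-sandbox/bin/change_kaa_host.sh ec2-54-1-2-3.compute-1.amazonaws.com
After the script finishes, you will probably also need to regenerate/re-download the sample application's SDK so it picks up the new bootstrap address instead of the old one from the error above.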

Related

SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: (null) when communicating with Chef Automate server

I am having difficulty connecting to my Chef Automate server, hosted on AWS OpsWorks.
I usually connect to it at least once per day, but since the start of the week I have been unable to.
There is some weekly maintenance performed on the server on a Friday, however this usually goes unnoticed.
When I try to communicate with the server I get the following error:
knife environment from file environments/production.json
ERROR: SSL Validation failure connecting to host: crmpicco-production-lay0vgyp4ighjsxv.us-east-1.opsworks-cm.io - SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: (null)
ERROR: SSL Error connecting to https://crmpicco-production-lay0vgyp4ighjsxv.us-east-1.opsworks-cm.io/organizations/rfc1872/environments/production, retry 1/5
In the events, I can see the following:
2022-08-26T12:25:26Z Maintenance completed successfully
2022-08-26T12:24:54Z Updating stack arn:aws:cloudformation:us-east-1:367114569123:stack/aws-opsworks-cm-instance-mc-prod-chef-1661515433111/27c16c50-2537-22ed-80ab-12a4e5696267 to associate EIP 2.51.125.211
2022-08-26T12:24:23Z Updating stack arn:aws:cloudformation:us-east-1:367114569123:stack/aws-opsworks-cm-instance-mc-prod-chef-1660910626222/fad95750-1fb6-22ed-817f-0aca43928f1d to disassociate EIP 2.51.125.211
2022-08-26T12:24:11Z Checking health of new instance
I have tried a knife ssl fetch, but that is also unable to communicate with the server.
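Since the maintenance events show the EIP being disassociated and re-associated, a check along these lines (illustrative commands, same hostname as in the error) should show what the endpoint currently resolves to and which certificate it presents:
nslookup crmpicco-production-lay0vgyp4ighjsxv.us-east-1.opsworks-cm.io
openssl s_client -connect crmpicco-production-lay0vgyp4ighjsxv.us-east-1.opsworks-cm.io:443 -servername crmpicco-production-lay0vgyp4ighjsxv.us-east-1.opsworks-cm.io </dev/null | openssl x509 -noout -subject -dates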

Redislabs UI logging error when number of nodes more than one

I am new to Redis Enterprise and can't fix this problem:
I have a Redis Enterprise cluster (v6.0) in AWS with two nodes. With only one node I can enter the UI, but after adding the other (second) node, it always throws me back to the login page after I enter my credentials. Meanwhile, the cluster works fine (according to rladmin).
In what direction should I investigate the issue?
P.S.: Could this error from the logs be causing the issue?
ERROR redis_mgr MainThread: Connect failed: connect: connection failed: Error 2 connecting to unix socket: /var/opt/redislabs/run/ccs.sock. No such file or directory.: retrying
Possibly this solution will help somebody:
The reason was that the ALB in front of the UI didn't use sticky sessions.
The solution was to enable sticky sessions, and now it works.
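As a minimal sketch, assuming the UI sits behind an ALB target group (the ARN below is a placeholder), stickiness can be turned on with the AWS CLI:
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/redis-ui/0123456789abcdef \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=86400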

Bootstrap IP to internal address conversion

I am new to Kafka and I set up an instance in AWS. It runs well.
Then I created another AWS instance and ran this code:
See image here
It can print out the messages that I published to Kafka.
If I run the same code on the Kafka server itself, I can also get messages.
However, if I run the same code on my own laptop, I can't get anything.
I thought it might be my code, so I used Kafka's own console consumer on my laptop:
bin/kafka-console-consumer.sh --topic test22 --bootstrap-server 34.215.180.111:9092
Now I got an error:
[2021-05-11 16:21:32,252] WARN [Consumer clientId=consumer-console-consumer-94326-1, groupId=console-consumer-94326] Error connecting to node ip-172-31-29-222.us-west-2.compute.internal:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
ip-172-31-29-222.us-west-2.compute.internal
This hostname is actually the AWS instance's internal address:
See image here
Then I thought it might be an Amazon issue, so I repeated the whole process on Google Cloud and got the same result:
[2021-05-11 17:15:34,840] WARN [Consumer clientId=consumer-console-consumer-2377-1, groupId=console-consumer-2377] Error connecting to node instance-1.us-central1-a.c.seventh-seeker-267203.internal:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
These internal addresses cannot be accessed from external computers at all.
Can anybody help? Thanks!
The logs are showing you the advertised.listeners of the brokers. If you want clients to connect using a different address, you'll need to modify that property so that the brokers advertise addresses that the clients can resolve.
https://www.confluent.io/blog/kafka-listeners-explained/
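As a sketch of what that means for the broker config in the question (the external IP is the one from the bootstrap-server argument; treat the exact values as assumptions):
# server.properties on the broker
# bind on all interfaces inside the instance
listeners=PLAINTEXT://0.0.0.0:9092
# the address returned to clients in metadata; must be reachable/resolvable from the laptop
advertised.listeners=PLAINTEXT://34.215.180.111:9092
After restarting the broker, the console consumer pointed at that external address should no longer be redirected to the compute.internal hostname.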

Postman Monitor / error : getaddrinfo ENOTFOUND

I'm configuring a Postman Monitor to schedule nightly executions.
However, I'm facing the following error in the monitor console log:
Error: getaddrinfo ENOTFOUND ...
Note that I'm working on my company's private network.
When I send the request manually, without the monitor, it works fine.
Could you please help me to fix this issue?
It is possible that, in the headers, the server responds with a different name than the one you put in the URL. That name must also be resolvable via DNS or the hosts file.
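Also note that Postman Monitors run from Postman's cloud infrastructure, not from your machine, so a hostname that only resolves inside your company's private network will fail there with getaddrinfo ENOTFOUND even though the manual request works. A quick way to test the DNS side (the hostname below is a placeholder) is to run this from a machine outside the private network:
nslookup internal-api.example.corp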

How to deploy Kafka on Google cloud

I deployed Kafka on Google Cloud, and I changed listeners to
PLAINTEXT://[internal ip address]:9092
And when I try
sudo ./bin/kafka-topics.sh --list --zookeeper [external IP address]:2181
I can get the topic on the broker. However, when I try to produce a message to the Kafka broker with
sudo ./bin/kafka-console-producer.sh --broker-list [external IP address]:9092
--topic test
the following error shows up:
ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1506 ms has passed since batch creation plus linger time
I wonder which property I set wrong, and how can I fix it?
You need to set advertised.listeners to the external IP so that clients can correctly connect to it. Otherwise, they'll try to connect to the internal IP (since advertised.listeners defaults to listeners unless explicitly set).
Ref: https://kafka.apache.org/documentation/#brokerconfigs
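A minimal sketch of the relevant server.properties lines, keeping the bracketed placeholders from the question (the broker keeps listening on the internal address but advertises the external one to clients):
listeners=PLAINTEXT://[internal ip address]:9092
advertised.listeners=PLAINTEXT://[external IP address]:9092
After restarting the broker, the console-producer command above, pointed at [external IP address]:9092, should stop timing out.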