How to programmatically notify WSO2 ESB about a disconnected endpoint?

I have an ESB that interacts with a core module via TCP. For high availability, I deployed two core module nodes and configured them in the ESB behind a Load-balance Endpoint. I also wrote my own TCP sender, extending the AbstractTransportSender class, to send messages to the core nodes.
My question is: when my TCP sender detects that a node is disconnected, how can it programmatically notify the ESB to stop sending messages to that node after several failed attempts?
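In Axis2/Synapse the usual mechanism is not a direct "node down" notification: the transport sender raises a fault, and the endpoint configuration reacts to it. Below is a minimal sketch (the class name and payload handling are illustrative, not the asker's actual code) of a custom sender built on AbstractTransportSender that converts a connection failure into an AxisFault:

import java.io.IOException
import java.net.{Socket, URI}

import org.apache.axis2.context.MessageContext
import org.apache.axis2.transport.OutTransportInfo
import org.apache.axis2.transport.base.AbstractTransportSender

// Illustrative sketch only: a real sender would use the configured
// message formatter rather than writing the raw envelope.
class MyTcpSender extends AbstractTransportSender {

  override def sendMessage(msgCtx: MessageContext,
                           targetEPR: String,
                           outTransportInfo: OutTransportInfo): Unit = {
    val uri = new URI(targetEPR)
    try {
      val socket = new Socket(uri.getHost, uri.getPort)
      try {
        val out = socket.getOutputStream
        out.write(msgCtx.getEnvelope.toString.getBytes("UTF-8"))
        out.flush()
      } finally {
        socket.close()
      }
    } catch {
      case e: IOException =>
        // handleException (from the transport base class) logs the error
        // and rethrows it as an AxisFault. Synapse maps that fault to a
        // transport error code and runs the endpoint's onFault handling,
        // which is what marks the member as failed.
        handleException(s"Could not send message to $targetEPR", e)
    }
  }
}

Once the fault propagates, the "stop sending" part comes from the endpoint configuration rather than the sender: give each child endpoint of the Load-balance Endpoint a <suspendOnFailure> block listing the relevant transport error codes, along with an <initialDuration> and <progressionFactor>, and Synapse will suspend the failing node and retry it with back-off instead of routing new messages to it.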

Related

How can I create a persistent WebSocket connection to Kinesis? (I assume using EC2)

How would I go about creating a WebSocket listener and streaming to Kinesis? I have seen API call examples with Lambdas -> Kinesis, but I am looking for something with a persistent connection.
I am the client of the WebSocket and have to send an API token within 3 seconds, but beyond that I do not have to communicate with the socket; I am the listener. I do not control the WebSocket.
I can use a WebSocket connection package and have a script written to connect and authenticate; I'm just not sure how exactly this would be designed (i.e. it would never stop running, so there's no while loop, unless I am wrong).
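You're right that there is no while loop: the client is event-driven, so you register callbacks and simply keep the main thread alive. A rough sketch assuming the JDK 11 java.net.http.WebSocket client and the AWS SDK v2 Kinesis client; the URL, token, and stream name are placeholders:

import java.net.URI
import java.net.http.{HttpClient, WebSocket}
import java.util.concurrent.{CompletionStage, CountDownLatch}

import software.amazon.awssdk.core.SdkBytes
import software.amazon.awssdk.services.kinesis.KinesisClient
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest

object WsToKinesis extends App {
  val kinesis = KinesisClient.create()
  val done    = new CountDownLatch(1) // keeps the JVM alive; no while loop

  val listener = new WebSocket.Listener {
    override def onOpen(ws: WebSocket): Unit = {
      // The token must be sent within 3 seconds of connecting,
      // so send it immediately on open.
      ws.sendText("""{"token":"<api-token>"}""", true)
      ws.request(1)
    }

    override def onText(ws: WebSocket, data: CharSequence,
                        last: Boolean): CompletionStage[_] = {
      // Forward each message to Kinesis. For real traffic you would
      // batch with putRecords and handle partial frames (last == false).
      kinesis.putRecord(PutRecordRequest.builder()
        .streamName("<stream-name>")
        .partitionKey("ws")
        .data(SdkBytes.fromUtf8String(data.toString))
        .build())
      ws.request(1)
      null
    }

    override def onError(ws: WebSocket, error: Throwable): Unit = {
      error.printStackTrace()
      done.countDown() // exit; let a supervisor (systemd, ECS, ...) restart us
    }
  }

  HttpClient.newHttpClient()
    .newWebSocketBuilder()
    .buildAsync(URI.create("wss://example.com/feed"), listener)

  done.await()
}

Reconnection is the one thing this sketch punts on: exiting on error and letting a process supervisor restart the script is the simplest design that never "stops running".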

Diagnosing Kafka Connection Problems

I have tried to build as much diagnostic logic into my Kafka connection setup as possible, but it still leads to mystery problems. In particular, the first thing I do is use the Kafka AdminClient to get the clusterId, because if this operation fails, nothing else is likely to succeed.
def getKafkaClusterId(describeClusterResult: DescribeClusterResult): Try[String] = {
  try {
    // futureTimeout is presumably a scala.concurrent.duration.Duration
    // defined elsewhere; half of it bounds this first metadata call.
    val clusterId = describeClusterResult.clusterId()
      .get(futureTimeout.length / 2, futureTimeout.unit)
    Success(clusterId)
  } catch {
    case cause: Exception => Failure(cause)
  }
}
In testing this usually works, and everything is fine. It generally only fails when the endpoint is not reachable somehow. It fails because the future times out, so I have no other diagnostics to go by. To test these problems, I usually telnet to the endpoint, for example
$ telnet blah 9094
Trying blah...
Connected to blah.
Escape character is '^]'.
Connection closed by foreign host.
Generally if I can telnet to a Kafka broker, I can connect to Kafka from my server. So my questions are:
What does it mean if I can reach the Kafka brokers via telnet, but cannot connect via the Kafka AdminClient?
What other diagnostic techniques are there to troubleshoot Kafka broker connection problems?
In this particular case, I am running Kafka on AWS, via a Docker Swarm, and trying to figure out why my server cannot connect successfully. I can see in the broker logs when I try to telnet in, so I know the brokers are reachable. But when my server tries to connect to any of 3 brokers, the logs are completely silent.
This is a good article that explains the steps that happen when you first connect to a Kafka broker:
https://community.hortonworks.com/articles/72429/how-kafka-producer-work-internally.html
If you can telnet to the bootstrap server then it is listening for client connections and requests.
However, clients don't know which brokers are the leaders for each partition of a topic, so the first request they send to a bootstrap server is always a metadata request for the full topic metadata. The client uses the metadata response from the bootstrap server to learn where to open new connections: to each of the brokers holding the active leaders for the partitions of the topic it is trying to produce to.
That is where your misconfigured broker problem comes into play. When you misconfigure the advertised.listeners port, the results of that first metadata request redirect the client to unreachable IP addresses or hostnames. It's that second connection that is timing out, not the first one on the port you are telnetting into.
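A quick way to see this from the client side (a sketch, not from the original answer; it uses the standard Kafka AdminClient API and the same blah:9094 endpoint as the telnet test above): ask the bootstrap server for its metadata and print the host:port each broker advertises. If those addresses are unreachable from your server, you've found the silent failure.

import java.util.Properties

import scala.jdk.CollectionConverters._

import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}

object AdvertisedListenerCheck extends App {
  val props = new Properties()
  // Same host:port you can already reach with telnet.
  props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "blah:9094")

  val admin = AdminClient.create(props)
  try {
    // nodes() reflects what the brokers *advertise*, which is exactly
    // what your client will dial after the metadata request.
    admin.describeCluster().nodes().get().asScala.foreach { node =>
      println(s"broker ${node.id} advertises ${node.host}:${node.port}")
    }
  } finally {
    admin.close()
  }
}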
Another way to think of it is that you have to configure a Kafka server to work properly as both a bootstrap server and a regular pub/sub message broker, since it provides both services to clients. Yours are configured correctly as pub/sub servers but incorrectly as bootstrap servers, because the internal and external IP addresses are different in AWS (as they also are in Docker containers, or behind a NAT or a proxy).
It might seem counterintuitive in small clusters, where your bootstrap servers are often the same brokers the client eventually connects to, but it is actually a very helpful architectural design that allows Kafka to scale and fail over seamlessly, without you needing to provide a static list of 20 or more brokers in your bootstrap server list, or to maintain extra load balancers and health checks to decide which broker to redirect client requests to.
If you do not configure listeners and advertised.listeners correctly, Kafka effectively does not listen for clients. Even though telnet can connect on the ports you've configured, the Kafka client library fails silently.
I consider this a defect in the Kafka design which leads to unnecessary confusion.
Sharing Anand Immannavar's answer from another question:
Along with ADVERTISED_HOST_NAME, you need to add ADVERTISED_LISTENERS to the container environment.
ADVERTISED_LISTENERS - the broker registers this value in ZooKeeper, and when the outside world wants to connect to your Kafka cluster, it connects to the address you provide in the ADVERTISED_LISTENERS property.
example:
environment:
  - ADVERTISED_HOST_NAME=<Host IP>
  - ADVERTISED_LISTENERS=PLAINTEXT://<Host IP>:9092

Vertx Clustered EventBus not sending messages

[Diagram of setup]
I've set up TCP discovery using Hazelcast, with parts of the cluster running inside and outside the AWS cloud.
Inside AWS I can send and receive messages without a problem, but not externally.
Looking at the members, all 3 servers are in the list, but no messages are sent to server 3 on my local machine.
For testing, the AWS machines have their firewalls disabled, so the only thing I can think of is a firewall issue on my local network.
I tried creating a new instance of Vertx on all servers with the EventBus port set to 80, but that stopped all messages.
Servers 1 and 2 are not reporting any failed-to-send issues, and I'm not sure what the problem is.
Does anybody have any ideas as to why server 3 cannot send or receive messages despite being in the cluster?
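One thing worth checking, since "visible in the member list but no event bus traffic" is the classic NAT symptom: Hazelcast discovery and the event bus use separate TCP connections, and each member advertises an event bus address for the others to dial. When a member sits behind NAT (your local machine versus the AWS nodes), you typically need to set the cluster public host/port explicitly. A sketch assuming the Vert.x 3.x API; the addresses are placeholders:

import io.vertx.core.{AsyncResult, Vertx, VertxOptions}

object ClusteredNode extends App {
  val options = new VertxOptions()
    .setClusterHost("192.168.1.20")      // local interface the event bus binds to
    .setClusterPublicHost("203.0.113.7") // address the other members should dial
    .setClusterPublicPort(15701)         // must be forwarded through the router

  Vertx.clusteredVertx(options, (res: AsyncResult[Vertx]) =>
    if (res.succeeded()) println("clustered event bus up")
    else res.cause().printStackTrace())
}

The public port then has to be forwarded through your local router/firewall to the machine running server 3, otherwise servers 1 and 2 will fail to open connections to it.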

Passthrough ports in an ESB cluster

From the Carbon docs:
Non-blocking HTTP/S transport ports: used to accept message mediation requests. If you want to send a request to an API or a proxy service, for example, you must use these ports. They are configured in the <ESB_HOME>/repository/conf/axis2/axis2.xml file.
8243 - Passthrough or NIO HTTPS transport
8280 - Passthrough or NIO HTTP transport
But in a cluster scenario (1 manager and 2 workers), where am I supposed to send a request?
To the manager?
To one of the workers?
According to the documentation, those ports are not load balanced.
Thanks to anyone who can clarify.
Proxy and API requests are served by Worker nodes. Manager nodes are there to access UI and deploy artifacts only.
If you have 2 worker nodes, you can/should have a load balancer in front of them.

WSO2 ESB with CEP doesn't work

I'm using WSO2 ESB v4.8.1 and WSO2 CEP v3.1.0, and I want to integrate them with each other. The problem is that I fill in the IP address and protocol, disable the secure connection, set the authentication port to 7711 and the receiver port to 7611, and when an event arrives at the ESB to be sent on to CEP I get this error:
ERROR AsyncDataPublisher Reconnection failed for ssl://<ip_address>:<port>
even though the secure connection is disabled.
I turned off the firewall and also tried enabling the secure connection, but neither helps.
Does anyone know how to fix this?
I assume you are doing this by creating a BAM server profile and providing a CEP Thrift endpoint. I would suggest you try this out on a single machine with CEP running on a port offset; this will let you identify whether the issue is with your network. If you're running CEP with port offset 1, the receiver port should be 7612 and the authentication port should be 7712.
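For reference, the port offset mentioned above is set in <CEP_HOME>/repository/conf/carbon.xml; with an offset of 1, every Carbon port shifts up by one (7611 -> 7612, 7711 -> 7712):

<Ports>
    <!-- All Carbon server ports are shifted by this offset -->
    <Offset>1</Offset>
</Ports>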