Azure Relay hybrid connection listener not reestablishing when internet is disrupted

I have a custom Azure Relay hybrid connection listener service running on-premises, using the code below as suggested on MSDN, but the listener does not get reestablished when the on-premises internet connectivity is disrupted.
Only about 1 out of 10 times does the listener get reestablished with the code below after the on-premises internet connection is disrupted.
// Opening the listener establishes the control channel to
// the Azure Relay service. The control channel is continuously
// maintained, and is reestablished when connectivity is disrupted
await listener.OpenAsync(cts.Token);
// The delegate below is not getting called when the internet is
// unplugged from the machine running the listener.
listener.Offline += listener_Offline;
What changes are required so that the listener reestablishes its connection to the Azure hybrid connection 10 out of 10 times? Please advise.
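For reference, a minimal sketch of one approach: attach the Connecting/Online/Offline handlers before calling OpenAsync, and run a watchdog that replaces the listener if the control channel stays offline. The relay namespace, hybrid connection name, SAS key name, and one-minute polling interval below are placeholders and assumptions, not values from the question:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

var cts = new CancellationTokenSource();

async Task<HybridConnectionListener> OpenListenerAsync(CancellationToken token)
{
    // Placeholders: substitute your own relay namespace, hybrid connection name, and SAS key.
    var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
        "RootManageSharedAccessKey", "<sas-key>");
    var hcListener = new HybridConnectionListener(
        new Uri("sb://contoso.servicebus.windows.net/myconnection"), tokenProvider);

    // Attach the state handlers before OpenAsync so no transition is missed.
    hcListener.Connecting += (s, e) => Console.WriteLine("Relay control channel reconnecting...");
    hcListener.Online += (s, e) => Console.WriteLine("Relay control channel online.");
    hcListener.Offline += (s, e) => Console.WriteLine("Relay control channel offline.");

    await hcListener.OpenAsync(token);
    return hcListener;
}

// Watchdog loop: if the built-in reconnect never brings the control channel
// back online, close the current listener and open a fresh one.
var listener = await OpenListenerAsync(cts.Token);
while (!cts.IsCancellationRequested)
{
    await Task.Delay(TimeSpan.FromMinutes(1), cts.Token);
    if (!listener.IsOnline)
    {
        try { await listener.CloseAsync(CancellationToken.None); } catch { /* ignore and recreate */ }
        listener = await OpenListenerAsync(cts.Token);
    }
}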

Related

AWS VPN not releasing tunnel connection

I am trying to set up a VPN client in AWS using Keycloak and Google OAuth as my IdP. I can log into the VPN just fine and a connection is established, but once I disconnect from the AWS VPN Client I am unable to log back in. The connection is stuck on "re-establishing" and never seems to move from there. It almost seems as if the connection to the AWS tunnel is not released, as I am unable to access any AWS consoles. The only fix I have found is to disconnect from my WiFi and reconnect. Has anybody encountered this before? As a side point, we have a separate VPN endpoint that uses user/password authentication and I can log on and off to my heart's content. There is no noticeable difference between the two VPN endpoint configurations.

Exposing a Kestrel server deployed as a web job for external interaction

I have deployed an application hosting a Kestrel server bound to a specific port as a web job. I want to access that port in order to reach the APIs implemented in that application.
If I try to bind to port 443 it fails; on other ports the server starts but cannot interact with external requests. Is there any way I can expose this port to listen for incoming requests?
Azure Web Apps only support ports 443 and 80, and web jobs are hosted in Azure App Service.
After a lot of searching and experimenting, I can tell you with certainty that other ports cannot be used.
For more details, you can read the posts below:
Opening ports to Azure Web Job
Is it possible to use an Azure Web Job to listen on a public socket
Both describe the port restrictions in web jobs.
Since you want the web job to monitor and process incoming requests, my suggestion is that the web job monitors ports 443 and 80 instead of binding to another port. You can use a raw socket.
Monitor all requests, analyze whether the request content contains instructions that need to be executed, and then proceed with the next business operation.
If you already have a completed project, you can also choose a VM or Cloud Services.

How to trigger a listener on a server

I have a hardware device which is continuously sending data to a configured IP and port,
for example: 192.168.137.2:8080
If it were an AWS instance, then using the AWS console it would be possible to see the data coming from the device directly, without any web service or application.
So I want to know: is there any way to see the data coming from the device on a dedicated server without any application?
Is it possible to add a listener or something similar so that we can read the data on the dedicated server?
The problem was solved with TCP sockets.
I created a simple socket application that takes an IP address, listens on a port, and establishes the connection between the device and the server.
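A minimal sketch of what such a listener might look like, assuming the device connects to port 8080 and sends plain text (both assumptions, not details from the question):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Listen on all interfaces on port 8080, the port the device is configured to send to.
var listener = new TcpListener(IPAddress.Any, 8080);
listener.Start();
while (true)
{
    // Block until the device opens a connection, then print whatever it sends.
    using var client = listener.AcceptTcpClient();
    using var stream = client.GetStream();
    var buffer = new byte[4096];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
    }
}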

Diagnosing Kafka Connection Problems

I have tried to build as many diagnostics into my Kafka connection setup as possible, but it still leads to mystery problems. In particular, the first thing I do is use the Kafka Admin Client to get the clusterId, because if this operation fails, nothing else is likely to succeed.
def getKafkaClusterId(describeClusterResult: DescribeClusterResult): Try[String] = {
  try {
    val clusterId = describeClusterResult.clusterId().get(futureTimeout.length / 2, futureTimeout.unit)
    Success(clusterId)
  } catch {
    case cause: Exception =>
      Failure(cause)
  }
}
In testing this usually works, and everything is fine. It generally only fails when the endpoint is not reachable somehow. It fails because the future times out, so I have no other diagnostics to go by. To test these problems, I usually telnet to the endpoint, for example
$ telnet blah 9094
Trying blah...
Connected to blah.
Escape character is '^]'.
Connection closed by foreign host.
Generally if I can telnet to a Kafka broker, I can connect to Kafka from my server. So my questions are:
What does it mean if I can reach the Kafka brokers via telnet, but I cannot connect via the Kafka Admin Client?
What other diagnostic techniques are there to troubleshoot Kafka broker connection problems?
In this particular case, I am running Kafka on AWS, via a Docker Swarm, and trying to figure out why my server cannot connect successfully. I can see in the broker logs when I try to telnet in, so I know the brokers are reachable. But when my server tries to connect to any of 3 brokers, the logs are completely silent.
This is a good article that explains the steps that happen when you first connect to a Kafka broker:
https://community.hortonworks.com/articles/72429/how-kafka-producer-work-internally.html
If you can telnet to the bootstrap server then it is listening for client connections and requests.
However, clients don't know which real brokers are the leaders for each of the partitions of a topic, so the first request they always send to a bootstrap server is a metadata request to get the full topic metadata. The client uses the metadata response from the bootstrap server to know where it can then make new connections to each of the Kafka brokers hosting the active leaders for the partitions of the topic you are trying to produce to.
That is where your misconfigured broker problem comes into play. When you misconfigure the advertised.listeners port, the result of the first metadata request redirects the client to connect to unreachable IP addresses or hostnames. It's that second connection that is timing out, not the first one on the port you are telnetting into.
Another way to think of it is that you have to configure a Kafka server to work properly as both a bootstrap server and a regular pub/sub message broker, since it provides both services to clients. Yours are configured correctly as a pub/sub server but incorrectly as a bootstrap server, because the internal and external IP addresses are different in AWS (as they also are in Docker containers or behind a NAT or a proxy).
It might seem counterintuitive in small clusters, where your bootstrap servers are often the same brokers that the client eventually connects to, but it is actually a very helpful architectural design that allows Kafka to scale and to fail over seamlessly without needing to provide a static list of 20 or more brokers on your bootstrap server list, or to maintain extra load balancers and health checks to know which broker to redirect the client requests to.
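As a concrete illustration of that split, here is a hedged sketch of the relevant broker settings in server.properties when the internal bind address differs from the address clients can reach (the host name and port are placeholders):

# Where the broker actually binds inside the VM or container.
listeners=PLAINTEXT://0.0.0.0:9094
# The address returned to clients in the metadata response; it must be
# reachable from the client's network (e.g. a public DNS name or NAT'd IP).
advertised.listeners=PLAINTEXT://kafka-public.example.com:9094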
If you do not configure listeners and advertised.listeners correctly, Kafka basically just does not listen. Even though telnet can connect to the ports you've configured, the Kafka client library silently fails.
I consider this a defect in the Kafka design which leads to unnecessary confusion.
Sharing Anand Immannavar's answer from another question:
Along with ADVERTISED_HOST_NAME, you need to add ADVERTISED_LISTENERS to the container environment.
ADVERTISED_LISTENERS - the broker registers this value in ZooKeeper, and when the external world wants to connect to your Kafka cluster, it connects over the address you provide in the ADVERTISED_LISTENERS property.
example:
environment:
  - ADVERTISED_HOST_NAME=<Host IP>
  - ADVERTISED_LISTENERS=PLAINTEXT://<Host IP>:9092

Vertx Clustered EventBus not sending messages

[Diagram of setup]
I've set up TCP discovery using Hazelcast, where parts of the cluster exist inside and outside the AWS cloud.
Inside AWS I can send and receive messages without a problem, but not externally.
Looking at the members, all 3 servers are in the list, but no messages are sent to server 3 on my local machine.
For testing, the AWS machines have their firewalls disabled, so the only thing I can think of is a firewall issue on my local network.
I tried making a new instance of Vertx on all servers, setting the EventBus port to 80, but that stopped all messages.
Servers 1 and 2 are not reporting any failed-to-send issues, but I'm not sure what the problem is.
Does anybody have any ideas as to why server 3 cannot send or receive messages despite being in the cluster?