Amazon EC2 Outgoing Sockets (Python) - amazon-web-services

I can't seem to get AWS EC2 machines to send outgoing socket communications. I allowed all traffic, turned off the firewall, and set an Elastic IP; nothing is working. I'm using a Python socket for simplicity. I'm able to listen for and receive connections, and then send data over those connections fine. But whenever I try to connect outward, I get a timeout. I have searched far and wide for an answer to this question and none match my issue.
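A quick way to separate an application bug from a security-group or routing problem is a minimal outgoing-connection test. This is a sketch using only the stdlib; the host and port in the comment are placeholders:

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Try one outgoing TCP connection; True on success, False on
    timeout/refusal/unreachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timed-out, refused, and unreachable errors
        return False

# From the instance, e.g.:  print(can_connect("example.com", 80))
```

If this times out for every external host and port, the problem is in the network path (security group outbound rules, NACLs, route table), not in your application code.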

Related

TCP-level Information on EC2

I'm trying to get the TCP timestamp from packets for clock-skew purposes in my application, which is hosted on EC2. In my network I have an ALB.
So my question is: how do I get TCP-level packet information in my app, since the ALB filters out all OSI layers except the application layer (HTTP)?
If the only reason to get access to the TCP packets is to detect timestamps and correct clock drift, I would suggest configuring your EC2 instance to use an NTP time server instead.
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
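If you go the NTP route, the Amazon Time Sync Service described in that post is reachable from any EC2 instance at the link-local address 169.254.169.123. A minimal chrony configuration fragment (a sketch; the file path and choice of chrony over ntpd are assumptions about your setup):

```
# /etc/chrony.conf - sync against the Amazon Time Sync Service
server 169.254.169.123 prefer iburst
```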
That being said, the ALB is not "removing" TCP information from network packets. HTTP connections made to your application are still transported over IP and TCP. If you need low-level access to network packets from an app, I would suggest looking at the libpcap library, which is used by tcpdump and many other tools to capture network traffic on an interface.
https://www.tcpdump.org/
[UPDATED to include comments]
It is important to understand that the TCP connection between your client and the ALB is terminated at the ALB level. The ALB creates a second TCP connection to forward HTTP requests to your EC2 instance. The ALB does not remove information from TCP/IP; it just creates a second, independent, new connection. Usually the only information you want to propagate from the initial TCP connection is the source IP address. The ALB, like most load balancers and proxies, captures this information from the original connection (the one received from the client) and embeds it in an HTTP header called X-Forwarded-For.
This is documented at https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
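Since the original client address survives only in that header, recovering it in the application is a one-liner. A sketch; the sample addresses are illustrative:

```python
def client_ip(x_forwarded_for):
    """Return the original client IP from an X-Forwarded-For header value.
    The leftmost entry is the client; later entries are intermediate proxies."""
    return x_forwarded_for.split(",")[0].strip()

# client_ip("203.0.113.7, 10.0.0.12") gives the client address, "203.0.113.7"
```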
If you want to capture other information from the original connection, I am afraid it will not be possible using an ALB (but I would also be very curious about the use case, i.e. WHAT you're trying to achieve).

How to trigger a listener on a server

I have a hardware device which is continuously sending data to a configured IP and port,
for example: 192.168.137.2:8080
Actually, if it were an AWS instance, then using the AWS console it would be possible to see the data coming from the device directly, without any web service or application.
So I want to know: is there any way to see the data coming from the device on a dedicated server without any application?
Is it possible to add a listener or something similar so that we can read the data on the dedicated server?
The problem was solved with TCP sockets.
I created a simple socket application which took an IP, listened on a port, and established the connection between the device and the server.
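A listener along those lines can be sketched with Python's stdlib socket module. This is a minimal sketch; the host, port, and buffer size are placeholders:

```python
import socket

def receive_once(host, port, bufsize=4096):
    """Listen on (host, port), accept a single connection from the device,
    and return the first chunk of data it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            return conn.recv(bufsize)

# e.g.  data = receive_once("0.0.0.0", 8080)
```

A real collector would loop over `accept()` and keep reading until the peer closes, but this is enough to confirm the device's data is arriving.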

Best architecture to deploy a TCP/IP and UDP service on Amazon AWS (without EC2 instances)

I am trying to figure out the best way to deploy a TCP/IP and UDP service on Amazon AWS.
I did some prior research on this question and could not find anything. I found other protocols like HTTP and MQTT, but no TCP or UDP.
I need to refactor a GPS tracking service currently running on Amazon EC2. The GPS devices send position data using the UDP and TCP protocols. Every time a message is received, the server has to respond with an ACKNOWLEDGE message, giving a reception confirmation to the GPS device.
The problem I am facing right now, which is the motivation to refactor, is:
When traffic increases, the server is not able to keep up with all the messages.
I tried to solve this issue with a load balancer and autoscaling, but UDP is not supported.
I was wondering if there is something like API Gateway which would give me a TCP or UDP endpoint, leave the message on an SQS queue, and process it with a Lambda function.
Thanks in advance!
Your question really doesn't make a lot of sense - you are asking how to run a service without running a server.
If you have reached the limits of a single instance, and you need to grow, look at using the AWS Network Load Balancer with an autoscaled group of EC2 instances. However, this will not support UDP - if you really need that, then you may have to look at 3rd party support in the AWS Marketplace.
Edit: Serverless architectures are designed for HTTP-based applications, where you send a request and get a response. Since your app is TCP-based and uses persistent connections, most existing serverless implementations simply won't support it. You will need to rewrite your app to use HTTP, or use traditional server-based infrastructure that can support persistent connections.
Edit #2: As of Dec. 2018, API Gateway supports WebSockets. This probably doesn't help with the original question, but opens up other alternatives if you need to run Lambda code behind a long-running connection.
If you want to go more serverless, I think the ECS container service has instances that accept TCP and UDP. Also take a look at running Docker containers with Kubernetes. I am not sure if they support those protocols, but I believe they do.
If not, some EC2 instances with load balancing can be your best bet.
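For reference, the receive-then-ACKNOWLEDGE exchange the question describes is only a few lines on a traditional server. A minimal sketch; the `b"ACK"` payload, port, and buffer size are assumptions, since every GPS tracker protocol defines its own acknowledgement format:

```python
import socket

def serve_one_report(sock):
    """Receive one position datagram and send a reception confirmation
    back to the device that sent it."""
    data, device_addr = sock.recvfrom(2048)
    sock.sendto(b"ACK", device_addr)  # hypothetical ACK payload
    return data

# Typical setup:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 5055))
#   while True:
#       serve_one_report(sock)
```

Because each datagram is independent, this loop is easy to scale horizontally behind a UDP-capable load balancer once one exists for your stack.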

Diagnosing Kafka Connection Problems

I have tried to build as much diagnostics into my Kafka connection setup as possible, but it still leads to mystery problems. In particular, the first thing I do is use the Kafka Admin Client to get the clusterId, because if this operation fails, nothing else is likely to succeed.
def getKafkaClusterId(describeClusterResult: DescribeClusterResult): Try[String] = {
  try {
    val clusterId = describeClusterResult.clusterId().get(futureTimeout.length / 2, futureTimeout.unit)
    Success(clusterId)
  } catch {
    case cause: Exception =>
      Failure(cause)
  }
}
In testing this usually works, and everything is fine. It generally only fails when the endpoint is not reachable somehow. It fails because the future times out, so I have no other diagnostics to go by. To test these problems, I usually telnet to the endpoint, for example
$ telnet blah 9094
Trying blah...
Connected to blah.
Escape character is '^]'.
Connection closed by foreign host.
Generally if I can telnet to a Kafka broker, I can connect to Kafka from my server. So my questions are:
What does it mean if I can reach the Kafka brokers via telnet, but I cannot connect via the Kafka Admin Client?
What other diagnostic techniques are there to troubleshoot Kafka broker connection problems?
In this particular case, I am running Kafka on AWS, via a Docker Swarm, and trying to figure out why my server cannot connect successfully. I can see in the broker logs when I try to telnet in, so I know the brokers are reachable. But when my server tries to connect to any of 3 brokers, the logs are completely silent.
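The telnet check can be scripted across all brokers at once. A sketch using only the stdlib; the comma-separated input mirrors the format of Kafka's `bootstrap.servers` setting:

```python
import socket

def check_brokers(bootstrap_servers, timeout=3.0):
    """TCP-reachability check for a comma-separated 'host:port,host:port' list.
    Note: this only proves each port accepts connections -- it says nothing
    about advertised.listeners, which is what usually breaks the *second*
    connection the client makes after the metadata request."""
    results = {}
    for entry in bootstrap_servers.split(","):
        entry = entry.strip()
        host, _, port = entry.rpartition(":")
        try:
            socket.create_connection((host, int(port)), timeout=timeout).close()
            results[entry] = True
        except OSError:
            results[entry] = False
    return results
```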
This is a good article that explains the steps that happen when you first connect to a Kafka broker:
https://community.hortonworks.com/articles/72429/how-kafka-producer-work-internally.html
If you can telnet to the bootstrap server then it is listening for client connections and requests.
However, clients don't know which brokers are the leaders for each of the partitions of a topic, so the first request they send to a bootstrap server is always a metadata request for the full topic metadata. The client uses the metadata response from the bootstrap server to know where it can then make new connections to each of the Kafka brokers hosting the active leaders for each partition of the topic you are trying to produce to.
That is where your misconfigured broker problem comes into play. When you misconfigure the advertised.listeners port, the results of the first metadata request redirect the client to connect to unreachable IP addresses or hostnames. It's that second connection that is timing out, not the first one on the port you are telnet'ing into.
Another way to think of it is that you have to configure a Kafka server to work properly as both a bootstrap server and a regular pub/sub message broker since it provides both services to clients. Yours are configured correctly as a pub/sub server but incorrectly as a bootstrap server because the internal and external ip addresses are different in AWS (also in docker containers or behind a NAT or a proxy).
It might seem counterintuitive in small clusters, where your bootstrap servers are often the same brokers the client eventually connects to, but it is actually a very helpful architectural design that allows Kafka to scale and fail over seamlessly without needing to provide a static list of 20 or more brokers in your bootstrap server list, or maintain extra load balancers and health checks to know which broker to redirect client requests to.
If you do not configure listeners and advertised.listeners correctly, Kafka basically just does not work: even though something is listening on the ports you've configured (telnet connects), the Kafka client library silently fails.
I consider this a defect in the Kafka design which leads to unnecessary confusion.
Sharing Anand Immannavar's answer from another question:
Along with ADVERTISED_HOST_NAME, you need to add ADVERTISED_LISTENERS to the container environment.
ADVERTISED_LISTENERS - the broker registers this value in ZooKeeper, and when the external world wants to connect to your Kafka cluster, it connects over the network address you provide in the ADVERTISED_LISTENERS property.
example:
environment:
  - ADVERTISED_HOST_NAME=<Host IP>
  - ADVERTISED_LISTENERS=PLAINTEXT://<Host IP>:9092

Want to implement a VPN for just one application

I am looking to add VPN support to my software.
I know PPTP and OpenVPN; both make a system-wide binding, installing a TAP driver so that all applications route their traffic through them.
How could I implement VPN support for just my application? Is there any library, example, hint, or way to do it?
My software is written in C++/MFC, using the standard CAsyncSocket.
Forwarding incoming connections to your application is relatively easy:
stunnel allows you to forward traffic to specific ports through an SSL tunnel. It requires that you run it on both ends, though.
Most decent SSH clients, such as OpenSSH or PuTTY also support port forwarding, with the added advantage that any remote SSH server can usually act as the other end of the tunnel without any modifications.
You can also use OpenVPN and other VPN solutions, but this requires specific forwarding rules to be added to the remote server.
Forwarding outgoing connections, though, is trickier without modifying your application. The proper way to do it is to implement the SOCKS protocol, preferably SOCKS5. Alternatively, you can use an external application, such as FreeCap, to redirect any connections from your application.
After you do that, you can forward your connections to any SOCKS server. Most SSH clients, for example, allow you to use the SOCKS protocol to route outgoing connections through the remote server.
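For reference, the SOCKS5 wire format (RFC 1928) is simple enough to construct by hand. A sketch of the two messages a client sends to open a TCP connection through a proxy, shown in Python for brevity (the same bytes are easy to build from C++):

```python
import struct

def socks5_messages(host, port):
    """Build the two messages a SOCKS5 client (RFC 1928) sends to open a
    TCP connection through a proxy: the auth greeting, then a CONNECT
    request addressed by domain name."""
    greeting = b"\x05\x01\x00"  # ver 5, 1 method offered, method 0 = no auth
    name = host.encode("ascii")
    # ver=5, cmd=1 (CONNECT), rsv=0, atyp=3 (domain name), then
    # length-prefixed name and the port in network (big-endian) byte order
    request = b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)
    return greeting, request
```

After sending the greeting and reading the server's method-selection reply, the client sends the CONNECT request and then uses the same socket as if it were connected directly to the destination.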
As a sidenote, OpenVPN servers do not necessarily become the default gateway for all your traffic. Some do push such a route table entry to the clients, but it can be changed. In my own OpenVPN setup I only use the VPN to access the private network and do not route everything through it.
If you can force your application to bind all outgoing sockets to one or more specific ports, you could use IP filtering rules on your system to route any connections from those ports through the VPN.
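Binding the source port before connecting is straightforward in most socket APIs. A Python sketch of the idea (the port numbers are placeholders; in C++/MFC the equivalent is passing the port to `CAsyncSocket::Create` before calling `Connect`):

```python
import socket

def connect_from_port(dest_host, dest_port, local_port, timeout=5.0):
    """Open an outgoing TCP connection with a fixed source port, so that
    firewall/routing rules can match traffic coming from that port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.settimeout(timeout)
    s.bind(("0.0.0.0", local_port))  # pin the local (source) port
    s.connect((dest_host, dest_port))
    return s
```

Only one outgoing connection can use a given source port at a time, so in practice you would bind to a small range of ports and match the whole range in your routing rules.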
EDIT:
Tunneling UDP packets is somewhat more difficult. Typically you need a proxy process on both the remote server and the local client that will tunnel incoming and outgoing connections through a persistent TCP connection.
Your best bet would be a full SOCKS5 client implementation in your application, including the UDP-ASSOCIATE command for UDP packets. Then you will have to find a SOCKS5 proxy that supports tunnelling.
I have occasionally used Delegate which seems to be the Swiss pocket-knife of proxies. As far as I know, it supports the UDP-ASSOCIATE command in its SOCKS5 implementation and it also supports connecting two Delegate processes through a TCP connection. It is also available for both Linux and Windows. I don't remember if it can also encrypt that TCP connection, but you could always tunnel that one through stunnel or SSH if you need to.
If you have system administrator rights on a remote VPN server, however, you could probably have a simpler set-up:
Have your P2P application bind its outgoing UDP sockets to the client VPN interface. You may need to set up a secondary default route for that interface. This way your application's outgoing packets will go through the remote server.
Have the remote server forward incoming UDP packets to specific ports through the VPN connection back to you.
This should be a simpler set-up, although if you really care about anonymity you might be interested in ensuring your P2P application does not leak DNS or other requests that can be tracked.
Put SSH connectivity in your app or use SSL. You'll have to use a protocol/service instead of VPN technology. Good luck!
I think you simply need SSL: http://www.openssl.org/
OpenVPN is based on SSL - but it is a full VPN.
The question is: what do you need? If you need encryption (an application-private connection) and not a VPN (virtual private network), go for SSL.
Hints can be found here:
Adding SSL support to existing TCP & UDP code?
http://sctp.fh-muenster.de/dtls-samples.html
http://fixunix.com/openssl/152877-ssl-udp-traffic.html