To check the health of a server I run, I want to write a function I can call to check whether my service is online.
I used the command prompt to ping the server's IP address, but all of the packets were lost to request timeouts.
I'm guessing I don't need a dedicated function to handle being pinged, and I believe the failures are caused by the server's security rules denying the requests. Currently the server only allows inbound HTTP traffic, and I believe this is the problem.
For an AWS instance, what protocol rule do I need to add in order to accept ping requests?
In the Security Group for the EC2 instance you should allow inbound ICMP.
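If you manage the group with a script, a boto3 sketch along these lines would add that rule (the security group ID and region are placeholders; substitute your own):

    # Sketch: allow inbound ICMP on an EC2 security group with boto3.
    # The group ID and region below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[
            {
                "IpProtocol": "icmp",
                "FromPort": -1,   # -1 = all ICMP types
                "ToPort": -1,     # -1 = all ICMP codes
                "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "allow ping"}],
            }
        ],
    )

That said, since the instance already allows inbound HTTP, your health-check function could simply make an HTTP request to the service and treat a successful response as "online", without opening ICMP at all.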
In my application:
ASP.NET Core 3.1 with Kestrel
Running in AWS ECS + Fargate
Services run in a public subnet in the VPC
Tasks listen only on port 80
Public Network Load Balancer with SSL termination
I want to set the Security Group to allow inbound connections from anywhere (0.0.0.0/0) to port 80, and disallow any outbound connection from inside the task (except, of course, to respond to the allowed requests).
As Security Groups are stateful, the connection tracking should allow the egress of the response to the requests.
In my case, this connection tracking only works for responses without a body (headers only). When the response has a body (in my case, a >1 MB file), the requests fail. If I allow outbound TCP connections from port 80, they still fail. But if I allow outbound TCP connections for the full range of ports (0-65535), it works fine.
I guess this is because, when ASP.NET Core + Kestrel writes the response body, it initiates a new connection which is not recognized by the Security Group's connection tracking.
Is there any way I can allow only responses to requests, and no other type of outbound connection initiated by the application?
So we're talking about something like this?
Client 11.11.11.11 ----> AWS NLB/ELB public 22.22.22.22 ----> AWS ECS network router or whatever (kubernetes) --------> ECS server instance running a server application 10.3.3.3:8080 (kubernetes pod)
Do you configure the security group on the AWS NLB or on the AWS ECS? (I guess both?)
Security groups should allow incoming traffic if you allow 0.0.0.0/0 port 80.
They are indeed stateful. They will allow the connection to proceed both ways after it is established (meaning the application can send a response).
However, firewall state is typically not kept for more than 60 seconds (not sure what technology AWS is using), so the connection can be "lost" if the server takes more than a minute to reply. Does the HTTP server take a while to generate the response? If it's a websocket or TCP server instead, does it sometimes spend whole minutes without sending or receiving any traffic?
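If long idle periods are the issue, enabling TCP keepalives below the firewall's idle timeout can keep the state entry alive. A minimal Python stand-in is below; the same socket options exist in most languages (including .NET), and the interval values are assumptions to tune:

    # Sketch: enable TCP keepalives on a socket so an otherwise idle connection
    # still exchanges packets and stateful firewalls keep its tracking entry.
    # The interval values are assumptions; keep them below the firewall's idle timeout.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

    # Linux-specific knobs (not available on every platform):
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle seconds before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop

    sock.connect(("example.com", 80))  # placeholder endpoint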
The way I see it, we've got two stateful firewalls: the first at the NLB, the second at ECS.
ECS is an equivalent of Kubernetes; it must be doing a ton of iptables magic to distribute traffic and track connections. (For reference, regular Kubernetes relies heavily on iptables, and iptables has a bunch of very important settings like connection durations and timeouts.)
The good news is: if it breaks when you open only inbound 0.0.0.0:80, but it works when you open inbound 0.0.0.0:80 plus outbound 0.0.0.0:*, this is definitely an issue with the firewall dropping the connection, most likely from losing state (or it's not stateful in the first place, but I'm pretty sure security groups are stateful).
The drop could happen on either of the two firewalls. I've never had an issue with a single bare NLB/ELB, so my guess is the problem lies in ECS or in the interaction of the two together.
Unfortunately we can't debug that, and we have very little information about how this works internally. Your only option will be to work with AWS support to investigate.
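For reference, here is a boto3 sketch of the two rules discussed above: inbound TCP 80 from anywhere, plus the broad outbound TCP rule you found necessary as a workaround. The security group ID is a placeholder:

    # Sketch (boto3): inbound TCP 80 from anywhere, plus the wide outbound TCP
    # rule used as a workaround above. The security group ID is a placeholder.
    import boto3

    ec2 = boto3.client("ec2")  # region taken from your AWS config
    sg_id = "sg-0123456789abcdef0"  # placeholder

    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    ec2.authorize_security_group_egress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 0, "ToPort": 65535,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )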
I'm trying to get the TCP timestamp from packets for clock-skew detection purposes in my application, which is hosted on EC2. In my network I have an ALB.
So my question is: how do I get TCP-level packet information in my app, given that the ALB strips away every OSI layer except the application layer (HTTP)?
If the only reason you need access to TCP packets is to read timestamps and correct clock drift, I would suggest configuring your EC2 instance to use an NTP time server instead.
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
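If you want to sanity-check the offset from code, here is a minimal sketch using the third-party ntplib package against the Amazon Time Sync Service link-local endpoint (169.254.169.123, reachable from inside EC2). It only reports the offset; chrony/ntpd on the instance does the actual correction, as the post describes:

    # Sketch: measure the local clock offset against the Amazon Time Sync Service
    # using the third-party ntplib package (pip install ntplib). Reporting only;
    # configure chrony/ntpd to actually discipline the clock.
    import ntplib

    client = ntplib.NTPClient()
    response = client.request("169.254.169.123", version=4)
    print(f"clock offset vs. time sync service: {response.offset:.6f} s")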
That being said, the ALB is not "removing" TCP information from network packets. HTTP connections made to your application are still transported over IP and TCP. If you need low-level access to network packets from an app, I would suggest looking at the pcap library, which is used by tcpdump and many other tools to capture network traffic on an interface.
https://www.tcpdump.org/
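For example, a quick sketch with scapy (which sits on top of libpcap) prints the TCP Timestamp option of packets hitting the instance. Keep in mind that behind an ALB these timestamps belong to the ALB's own connection, not the original client's:

    # Sketch: print the TCP Timestamp option of packets arriving on the instance,
    # using scapy (pip install scapy). Requires root privileges to sniff.
    from scapy.all import sniff, IP, TCP

    def show_tcp_timestamp(pkt):
        if IP in pkt and TCP in pkt:
            for name, value in pkt[TCP].options:
                if name == "Timestamp":
                    tsval, tsecr = value
                    print(f"{pkt[IP].src}:{pkt[TCP].sport} TSval={tsval} TSecr={tsecr}")

    sniff(filter="tcp port 80", prn=show_tcp_timestamp, count=20)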
[UPDATED to include comments]
It is important to understand that the TCP connection between your client and the ALB is terminated at the ALB. The ALB creates a second TCP connection to forward HTTP requests to your EC2 instance. The ALB does not remove information from TCP/IP; it just creates a second, new, independent connection. Usually the only information you want to propagate from the initial TCP connection is the source IP address. The ALB, like most load balancers and proxies, captures this information from the original connection (the one received from the client) and embeds it in an HTTP header called X-Forwarded-For.
This is documented at https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
If you want to capture other information from the original connection, I am afraid it will not be possible using ALB. (but I also would be very curious about the use case, i.e. WHAT you're trying to achieve)
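For completeness, a minimal sketch of reading that header, using Python's standard http.server as a stand-in for your application:

    # Sketch: read the client IP the ALB forwards in X-Forwarded-For.
    # http.server here is only a stand-in for the real application.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The left-most entry is the original client, as seen by the ALB.
            forwarded = self.headers.get("X-Forwarded-For", "")
            client_ip = forwarded.split(",")[0].strip() if forwarded else self.client_address[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"client ip: {client_ip}\n".encode())

    HTTPServer(("0.0.0.0", 80), Handler).serve_forever()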
In the AWS console we must specify rules to make it possible to access EC2 instances from remote locations.
I mean rules like opening a port or allowing access from specific IP addresses.
And it is working for me now.
Consider the following scenario:
Let's assume that we have application A, which maintains a long-running connection, and everything is working because the security rules are properly set. Now,
(a) someone removes the rules allowing application A to connect to the EC2 instance (that is, the rules for the external IP address used by application A), or
(b) at some point the external IP address of the machine used by application A changes.
I'm wondering whether it is possible that a connection established before occurrence (a) or (b) keeps working. If yes, how is that possible?
Here's a pretty basic explanation for your questions. Of course, there's a lot more information on the matter, but I guess it is not of importance right now.
If you change a rule, let's say a firewall rule or an AWS Security Group rule, the connection will terminate, as the rule takes effect immediately.
Simply put, you are sending a stream of information packet by packet, so when the change is detected the packets will no longer be received and you will no longer get a response, i.e. the connection will terminate.
If you change your IP and you are using TCP connections, which I assume you are, they will also terminate, as TCP connections are based on IP:port combinations. BUT if you are using DNS rather than a bare IP, your traffic will be routed correctly; you might experience some downtime, but your service will get back to working soon enough.
EDIT: As noted by Michael, a security group change doesn't cut off existing connections; it blocks them the next time a new attempt is made.
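If you want to observe this for yourself, a rough sketch: keep a long-lived connection open with a small client like the one below, revoke the inbound rule in the console, and watch whether the established connection keeps working while a fresh connection from another terminal is refused. Host and port are placeholders:

    # Sketch: hold an established connection open and send periodic requests,
    # so you can revoke the inbound rule and see whether the existing connection
    # survives while new connections are blocked. Host/port are placeholders.
    import socket, time

    HOST, PORT = "203.0.113.10", 80  # placeholder public IP of the instance

    sock = socket.create_connection((HOST, PORT), timeout=10)
    while True:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example\r\nConnection: keep-alive\r\n\r\n")
        data = sock.recv(4096)
        print(time.strftime("%H:%M:%S"), "got", len(data), "bytes")
        time.sleep(20)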
I don't know if it's even possible to do so, but I will still ask. The thing is that I want to have (using ECS) one service A with tasks that do some job with the clients (create a TCP connection, then form a group from multiple players and notify each player that they have been placed in this group). Then I want these clients to make a request to some specific task (some ENI with a private IP, because I use awsvpc) from another service B behind an ALB (and then that task sends a response to those clients and starts working with them).
So my question is: "How can I forward multiple clients to the same specific ENI if that ENI is behind an ALB?" Maybe in service A's tasks I should use the AWS SDK to figure out the IPs of service B's tasks? But I still don't know how to reach that task by its private IP. Is it even possible to "tell" the ALB that I want to connect to some specific ENI?
Yes, you can configure the ALB to route to a specific IP. The listener on your ALB has routing rules that you can edit. Rules can be based on the domain name and path to which the HTTP request was sent.
Here is a detailed Tutorial on how to do that.
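If you would rather wire it up programmatically, here is a boto3 sketch along those lines: look up the private IPs of service B's tasks via the ECS API and register them in an IP-type target group behind the ALB. The cluster, service, port, and target group ARN are placeholders:

    # Sketch (boto3): discover the private IPs of service B's tasks and register
    # them as IP targets. Assumes the target group was created with target type "ip".
    import boto3

    ecs = boto3.client("ecs")
    elbv2 = boto3.client("elbv2")

    task_arns = ecs.list_tasks(cluster="my-cluster", serviceName="service-b")["taskArns"]
    tasks = ecs.describe_tasks(cluster="my-cluster", tasks=task_arns)["tasks"]

    # With the awsvpc network mode, each task's ENI private IP appears as an
    # attachment detail on the task description.
    private_ips = [
        detail["value"]
        for task in tasks
        for attachment in task["attachments"]
        if attachment["type"] == "ElasticNetworkInterface"
        for detail in attachment["details"]
        if detail["name"] == "privateIPv4Address"
    ]

    elbv2.register_targets(
        TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/service-b/placeholder",
        Targets=[{"Id": ip, "Port": 8080} for ip in private_ips],  # port is a placeholder
    )

Combined with a listener rule that matches a dedicated path or host name, this lets the ALB send matching requests only to those registered IPs.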
In AWS, our users (system admins) can access internal-zone DB servers by using SSH tunneling, without any local firewall restrictions.
As you know, to access an internal node a user must first go through the public-zone gateway server.
Because the gateway is effectively a passage, I wish to control the traffic from tunneled users on the gateway server.
For example, I want to get the currently connected IP addresses of all clients and to identify the internal path (e.g. the DB server IP) each user accessed; furthermore, I wish to control the connections of unauthorized users.
To make this dream come true, I think the idea below is ideal:
1) Change the sshd port to something other than 22 and restart the sshd daemon.
2) Place an SSH proxy (nginx, HAProxy, or something else) in front of sshd and let the proxy receive all SSH traffic from clients.
3) The SSH proxy routes the traffic to sshd.
4) Then I can see all users' activity by analyzing the SSH proxy log. That's it.
Is this a possible dream?
Clever, but with a critical flaw: you won't gain any new information.
Why? The first S in SSH: "secure."
The "ssh proxy" you envision would be unable to tell you anything about what's going on inside the SSH connections, which is where the tunnels are negotiated. The connections are encrypted, of course, and a significant point of SSH is that it can't be sniffed. The fact that the ssh proxy is on the same machine makes no difference. If it could be sniffed, it wouldn't be secure.
All your SSH proxy could tell you is that an inbound connection was made from a client computer, and syslog already tells you that.
In a real sense, it would not be an "ssh proxy" at all -- it would only be a naïve TCP connection proxy on the inbound connection.
So you wouldn't be able to learn any new information with this approach.
It sounds like what you need is for your ssh daemon, presumably openssh, to log the tunnel connections established by the connecting users.
This blog post (which you will, ironically, need to bypass an invalid SSL certificate in order to view) was mentioned at Server Fault and shows what appears to be a simple modification to the openssh source code to log the information you want: who set up a tunnel, and to where.
Or, enable some debug-level logging on sshd.
So, to me, it seems like the extra TCP proxy is superfluous -- you just need the process doing the actual tunnels (sshd) to log what it is doing or being requested to do.
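If you go the debug-logging route, something like the sketch below could pull the tunnel requests out of the sshd log. The regular expression and the log path are assumptions; the exact wording varies across OpenSSH versions and distros, so adjust them to whatever your sshd actually writes at debug level:

    # Sketch: scan an sshd debug log for direct-tcpip (tunnel) requests to see
    # who opened a tunnel and to where. The regex and log path are assumptions;
    # check what your sshd writes at LogLevel DEBUG1 and adapt accordingly.
    import re

    TUNNEL_RE = re.compile(
        r"direct[-_]tcpip.*?originator (?P<src>[\d.]+) port \d+, "
        r"target (?P<dst>\S+) port (?P<port>\d+)"
    )

    with open("/var/log/auth.log") as log:  # e.g. /var/log/secure on RHEL-family distros
        for line in log:
            match = TUNNEL_RE.search(line)
            if match:
                print(f"tunnel from {match.group('src')} to {match.group('dst')}:{match.group('port')}")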