Keeping a connection alive after removing a rule (or changing the IP address) in AWS security policies - amazon-web-services

In the AWS console we must specify rules in order to make it possible to access EC2 instances from remote locations.
I mean rules like opening some port or allowing access from certain IP addresses.
And it is working for me now.
Consider the following scenario:
Let's assume that we have an application A which maintains a long-running connection, and everything works because the
security rules are properly set. Now,
(a) someone removes the rule allowing application A to connect to the EC2 instance (i.e. the rule for the external IP address used by application A), or
(b) at some point the external IP address of the machine running application A changes.
Is it possible that a connection established before occurrence (a) or (b) keeps working? If so, how is that possible?

Here's a pretty basic explanation. Of course, there's a lot more information on the matter, but I guess it is not of importance right now.
If you change a rule, let's assume a firewall rule or an AWS Security Group rule, the connection will terminate, as the rule takes effect immediately.
Simply put, you are sending a stream of information packet by packet; once the change takes effect the packets will no longer be received and you will no longer get a response, i.e. the connection will terminate.
If you change your IP and you are using TCP connections, which I assume you are, they will also terminate, because TCP connections are identified by IP:port pairs. But if you address the service by DNS name rather than a raw IP, new traffic will be routed correctly; you might experience some downtime, but your service will get back to working soon enough.
EDIT: As noted by Michael, a security group change doesn't cut off existing (tracked) connections. It only blocks them the next time a new connection attempt is made.
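The edit above is the key point: security groups are stateful, so already-tracked flows survive rule removal while new connection attempts are blocked. A minimal sketch of that behavior (the class and method names are illustrative, not an AWS API):

```python
# Sketch of stateful (connection-tracked) firewall behavior, analogous to
# how EC2 security groups treat established flows. Illustrative only.

class StatefulFirewall:
    def __init__(self):
        self.allowed_sources = set()   # ingress rules (allowed source IPs)
        self.tracked = set()           # established flows: (src, dst_port)

    def handle_syn(self, src, dst_port):
        """A NEW connection is admitted only if a current rule allows it."""
        if src in self.allowed_sources:
            self.tracked.add((src, dst_port))
            return True
        return False

    def handle_packet(self, src, dst_port):
        """Packets of an already-tracked flow pass even after the rule is gone."""
        return (src, dst_port) in self.tracked

fw = StatefulFirewall()
fw.allowed_sources.add("203.0.113.7")
assert fw.handle_syn("203.0.113.7", 443)      # new connection allowed by rule

fw.allowed_sources.remove("203.0.113.7")      # operator deletes the rule
assert fw.handle_packet("203.0.113.7", 443)   # existing flow keeps working
assert not fw.handle_syn("203.0.113.7", 443)  # but new connections are blocked
```

This also explains scenario (b) in the question: after a client IP change, the old tracked flow no longer matches the packets' new source address, so the connection effectively dies regardless of the rules.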

Related

Should I disable EC2 to access external network to improve safety?

I want to use Kubernetes on some cloud (maybe Amazon, Google, etc.). Should I disallow my EC2 machines from accessing the external network? My guesses are as follows; are they correct?
I should disallow EC2 from accessing the external network. Otherwise, hackers can attack my machines more easily. (true?)
How to do it: I should use a dedicated load balancer (maybe Ingress) with the external IP that my domain name is bound to. The load balancer will then talk with my actual application (which has no external IP and can only access internal network). (true?)
Sorry I am new to Ops, and thanks for any help!
Allowing or disallowing your EC2 instances from accessing external networks, i.e. keeping or removing the rule that allows all outgoing traffic in your security group, won't be of much use for keeping hackers out; that's what the incoming traffic rules are for. It will, however, prevent unwanted traffic from going out after a hacker has reached your instance, installed some malicious software on it, and tried to initiate outgoing communication.
That outgoing traffic rule is usually kept to allow things like getting software installs and updates, but it won't affect how your instances respond to incoming requests (legitimate or not).
It is a good idea to have a load balancer in front of your instances and have it be the only allowed point of entry to your services. It's a good pattern to follow, and your instances will not need to have an external IP address.
A bastion host is a good idea as well, used to manage the instances themselves. I would also recommend Systems Manager's Session Manager for this task.
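As a sketch, the pattern above can be expressed as security groups chained by reference rather than by IP address (all group names, ports, and CIDRs here are illustrative assumptions):

```
sg-alb      inbound: 443 from 0.0.0.0/0        # the load balancer is the only public entry point
sg-app      inbound: 80  from sg-alb           # instances accept traffic only from the balancer
sg-bastion  inbound: 22  from 198.51.100.0/24  # office range; unnecessary if you use Session Manager
```

Referencing `sg-alb` as the source in `sg-app` means the instances never need a public IP, and the allowed path cannot drift when the balancer's addresses change.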

How do I tell which ELB sent me traffic?

I have 2 clustered EC2 instances with 2 Classic ELB's one is for the DNS xxx.com and one for xxx.net. Both point to the same cluster of EC2 instances. My application needs to know whether the incoming request came from the .com or the .net URL.
The problem is that technically the ELB forwards the request, so I lose that in the header. I can get the IP address of the ELB, but Amazon will occasionally drop the ELB and give us a new one with a different IP, so it works for a while, then breaks out of nowhere.
Does Amazon offer something like a "static" ELB? I can't find anything so I assume not.
Is there any other way around this?
My application needs to know whether the incoming request came from the .com or the .net URL.
Read this from the incoming HTTP Host header. If it doesn't contain one of the two expected values, throw an error (503, 421, whatever makes the most sense) and don't render anything.
The problem is that technically the ELB forwards the request, so I lose that in the header.
I don't know what this statement is intended to convey. The Host header is set by the user agent and is not modified by the ELB so your application can read it from the incoming request.
You should not be looking at the internal IP address of the ELB for anything except to log it for correlation/troubleshooting if needed.
Of course, you also don't actually need 2 Classic ELBs for this application. You can use a single classic balancer with a single certificate containing both domains, or a single Application Load Balancer with either a combo cert or two separate certs.
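The Host-header approach described above can be sketched as follows. The function name and the fallback handling are illustrative; xxx.com and xxx.net are the two domains from the question:

```python
# Sketch: decide which site a request was for by reading the HTTP Host
# header, which the user agent sets and the ELB passes through unchanged.

ALLOWED_HOSTS = {"xxx.com", "xxx.net"}  # the two domains from the question

def site_for_host(host_header):
    """Return the matched domain, or None if the Host header is unexpected."""
    if host_header is None:
        return None
    host = host_header.split(":", 1)[0].lower()  # strip any :port suffix
    return host if host in ALLOWED_HOSTS else None

assert site_for_host("xxx.com") == "xxx.com"
assert site_for_host("xxx.net:443") == "xxx.net"
assert site_for_host("evil.example") is None   # respond 421/503 in this case
```

In a real application this check would run per request in whatever framework you use; the point is that it depends only on the header, never on the ELB's IP address.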

Is there a limit on outbound TCP connections through a EC2 NAT Instance?

Our setup is as follows:
VPC (with about 30-50 ec2 instances) -> EC2 Nat Instance -> Internet.
Since Dec 13, we have been seeing issues where connections randomly started to be refused. No such issue was seen earlier. The only change is that the volume of URLs processed via the API has increased (in other words, more TCP connections are being initiated and worked on). An API request (POST/GET/PUT, it doesn't matter) from an EC2 instance within the VPC via the NAT instance to the Internet fails at random.
I enabled VPC Flow Logs, and in them I see an entry showing ACCEPT OK for the TCP transmission (pic attached - https://ibb.co/dwe3X6). However, a tcpdump capture on one specific EC2 instance within the VPC shows TCP retransmission failures (where traffic is going through the NAT instance) (pic attached - https://ibb.co/npqozm). They are from the same time and the same EC2 instance.
Basically, the SYN packet gets initiated, but then the actual handshake doesn't go through. Note, that this doesn't happen all the time.
The tcp retransmission failures are random. Sometimes it works and sometimes it doesn't. So this is leading me to believe there is some sort of a queue or buffer in NAT instance which is hitting the limit and I am not sure how to get to root of this.
This suggests a problem out on the Internet or at the distant end.
ACCEPT means the connection (the instantiation of the flow) was allowed by the security group and network ACLs; it tells you nothing about whether the handshake succeeded. OK in the flow logs means the log entry itself is intact.
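For reference, those two fields are the last two columns of a default-format flow log record; a sketch of reading them (the record below is a made-up example, and the helper name is illustrative):

```python
# Sketch: interpreting the `action` and `log-status` fields of a
# default-format VPC Flow Log record. ACCEPT/REJECT is the security-group /
# NACL decision; OK/NODATA/SKIPDATA describes the log record itself,
# not the TCP handshake.

def interpret(record):
    fields = record.split()
    action, log_status = fields[12], fields[13]
    verdict = "flow permitted by SG/NACL" if action == "ACCEPT" else "flow blocked"
    return verdict, log_status

# version account-id interface-id src dst srcport dstport proto pkts bytes start end action log-status
line = ("2 123456789010 eni-0a1b2c3d 10.0.0.5 203.0.113.9 "
        "49152 443 6 3 180 1418530010 1418530070 ACCEPT OK")
verdict, status = interpret(line)
assert verdict == "flow permitted by SG/NACL"
assert status == "OK"   # the record is intact; says nothing about the handshake
```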
There is no reason to believe that the NAT instance is limiting this, because the SYN packets are indeed shown in wireshark leaving the instance headed for the Internet and the flow log suggests that it is indeed making its way out of the NAT instance successfully.
You used the term "refuse", but the wireshark entries are consistent with a Connection timed out error rather than Connection refused. A refusal is an active rejection by the far end (or, less commonly, by an intermediate firewall) due to the lack of a listening service on the destination port, which causes the destination to respond with a TCP RST.
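The distinction shows up directly at the socket level: a refusal fails instantly with an error, while a silently dropped SYN only fails after the timeout expires. A small sketch (the helper name is illustrative):

```python
# Sketch: distinguishing "connection refused" (an active TCP RST from the
# far end) from "connection timed out" (SYNs simply went unanswered).

import socket

def classify_connect(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        return "refused"       # something answered with a TCP RST
    except socket.timeout:
        return "timed out"     # SYNs unanswered: silent drop somewhere en route

# A closed port on localhost answers immediately with a RST:
assert classify_connect("127.0.0.1", 1) == "refused"
```

In the scenario above, "timed out" is what you would expect to see from the instances, which is why the symptom points past the NAT instance rather than at it.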
If you can duplicate the problem with a NAT Gateway, then you can be confident that it is not in any way related to the NAT instance itself, which is simply a Linux instance using iptables ... -j MASQUERADE.
The only thing the network infrastructure throttles is outbound connections to destination port 25, because of spam. Everything else is bounded only by the capabilities of the instance itself. With a t2.micro, you should have (iirc) in excess of 125 Mbits/sec of Ethernet bandwidth available, and the NAT capability is not particularly processor intensive, so unless you are exhausting the Ethernet bandwidth or the CPU credit balance of the instance, it seems unlikely that the NAT instance could be the cause.

Request time out when pinging server on AWS

In order to check the health of a server I have, I want to write a function I can call in order to check whether my service is online.
I used command prompt to ping the IP address of the server, however all of the packets were lost due to request time outs.
I'm guessing I don't need a dedicated function to handle being pinged, and I believe the failure is due to the server's security rules denying the request. Currently the server only allows inbound HTTP traffic, and I believe this to be the problem.
For an AWS instance, what protocol rule do I need to add in order to accept ping requests?
In the Security Group for the EC2 instance you should allow inbound ICMP. Ping is an ICMP Echo Request, not TCP or UDP, so opening ports won't help.
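As a sketch, the rule in the console looks roughly like this (the source CIDR is an assumption; narrow it to the range you actually ping from):

```
Type:     All ICMP - IPv4   (or just Echo Request, ICMP type 8)
Protocol: ICMP
Port:     N/A               (ICMP has no ports)
Source:   203.0.113.0/24    (0.0.0.0/0 would answer pings from anyone)
```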

How to setup EC2 Security Group to allow working with Firebase?

I am preparing a system of EC2 workers on AWS that use Firebase as a queue of tasks they should work on.
My app in node.js that reads the queue and works on tasks is done and working and I would like to properly setup a firewall (EC2 Security Group) that allows my machines to connect only to my Firebase.
Each rule of that Security Group contains:
protocol
port range
and destination (IP address with mask, so it supports whole subnets)
My question is - how can I set up this rule for Firebase? I suppose that the IP address of my Firebase is dynamic (it resolves to different IPs from different instances). Is there a list of possible addresses, or how would you address this issue? Could some kind of proxy be a solution that would not slow down my Firebase drastically?
Since using node to interact with Firebase is outbound traffic, the default security group should work fine (you don't need to allow any inbound traffic).
If you want to lock it down further for whatever reason, it's a bit tricky. As you noticed, there are a bunch of IP addresses serving Firebase. You could get a list of them all with "dig -t A firebaseio.com" and add all of them to your security group rules. That would work today, but new servers could be added next week and you'd be broken. To be a bit more general, you could perhaps allow all of 75.126.*.*, but that is probably overly permissive and could still break if new Firebase servers were added in a different data center or something.
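The same lookup can be done programmatically; a small sketch (the function name is illustrative) that returns whatever A records a name resolves to at that moment, which is exactly why IP-based egress rules for Firebase are brittle:

```python
# Sketch: enumerate the IPv4 addresses a hostname currently resolves to
# (the same information as `dig -t A`). The set can differ from one call,
# or one week, to the next, so rules pinned to it eventually break.

import socket

def resolve_ips(hostname):
    """Return the set of IPv4 addresses the name resolves to right now."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

# e.g. resolve_ips("firebaseio.com") -> a set of addresses that may change
# at any time, outside your control.
assert "127.0.0.1" in resolve_ips("localhost")
```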
FWIW, I wouldn't worry about it. Blocking inbound traffic is generally much more important than outbound (since to generate outbound traffic, someone has to have already managed to run software on the box).