Why not use the direct IP, but a security group instead? - amazon-web-services

I was doing the question in the image below and the right answer blew my mind:
In my opinion, putting the ALB IP address as the source would work, but the suggested correct answer says I should associate the ALB with a security group and tell the target instance that the ALB's security group is the source.
Why?
Is it related to the fact that the target instance is inside a VPC?
I answered the question thinking that simply putting the ALB IP as the source would be the correct answer.

First, 192.168.0.0/10 is not the ALB IP Address, but rather the CIDR block of the entire VPC.
Second, even if the actual ALB IP address were among the answers, it wouldn't be the best answer. The docs explain why:
The IP addresses for Classic Load Balancers and Application Load Balancers change over time. Avoid using this information to statically configure your applications to point to these IP addresses.

Whitelisting the VPC CIDR would effectively mean whitelisting the entire IP range defined by the CIDR, which could possibly include resources other than the load balancer.
Since the question asks how to ensure that only traffic coming from the load balancer is allowed, the right answer is indeed to allow the security group associated with the load balancer.
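As a rough sketch (not part of the original question), here is what that rule might look like with boto3; the two security group IDs and port 80 are placeholders:

# Sketch: allow inbound HTTP on the target instance's security group only when
# the traffic originates from the ALB's security group.
# sg-0target1234567890 and sg-0alb1234567890 are placeholder IDs, not real resources.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0target1234567890",  # SG attached to the target instances
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Reference the ALB's security group instead of an IP or CIDR,
            # so the rule keeps working even when the ALB's IPs change.
            "UserIdGroupPairs": [{"GroupId": "sg-0alb1234567890"}],
        }
    ],
)

Because the rule references the group rather than addresses, it automatically covers whatever private IPs the load balancer nodes happen to use at any moment.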

Related

In GCP, how to create firewall rules to isolate subnets by their IP ranges?

We are using a shared VPC with two subnets (10.65.0.0/16 and 10.66.0.0/16). The shared VPC has a connection to an on-prem network, so both subnets can access resources hosted on-prem. Since we use one subnet for the DEV environment and the other for the PROD environment, we want to block all traffic between the two subnets. I don't want to manage those firewall rules using tags or service accounts of each instance hosted in those subnets, since the owners of the projects hosted in those two subnets may not always follow the rules, which would cause extra communication to clarify things. Ideally, I want to create firewall rules that block the traffic using just the two subnet IP ranges. To isolate the subnets from each other, I would need to create a "deny" firewall rule with source "10.65.0.0/16" and destination "10.66.0.0/16", and another with source "10.66.0.0/16" and destination "10.65.0.0/16". From what I saw, both egress and ingress firewall rules only allow IP ranges to be set on either the source or the destination, but not both. It looks like there is no way to set both a source and a destination CIDR in a single firewall rule.
I know that using peered networks can easily cut the traffic between VPCs/subnets, but there is a limitation in VPC: routing across 2+ layers of peering is terrible, and Google-managed resources already involve one layer of peering, so if possible I don't want to add another layer of peered networks. If there are no better ideas, I will probably have to use either tags or service accounts to create the firewall rules one by one.
Please share your ideas, or any other way to resolve my problem.
Thank you
I consulted Google tech support about this question. Their answer is no surprise: it cannot be done by setting both a source IP CIDR and a destination CIDR. Their suggestion is to use "Tags" + "resource IP ranges".
e.g. allow all [ingress], target tags: vmGroup-1, source IPv4: CIDR of vmGroup-1
Basically, going this way instead of creating "deny" firewall rules comes down to one advantage of VPC and one limitation of VPC. The advantage: in a VPC, traffic between instances is blocked by default, even if they are in the same subnet; firewall rules are created at the VPC level but act on each instance individually, as if each instance had its own firewall. The limitation: so far, VPC firewall rules cannot be created with both source and destination IP ranges defined.
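For illustration only, here is a minimal sketch of what such a tag-plus-source-range rule could look like via the Compute Engine API using google-api-python-client; the project name, network name, tag, and CIDR are all placeholders (note that real network tags must be lowercase):

# Sketch: an ingress "allow" rule that only admits traffic whose source is the
# CIDR of one VM group and that only applies to instances carrying that group's tag.
# "my-project", "my-shared-vpc", "vmgroup-1" and 10.65.0.0/16 are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

firewall_body = {
    "name": "allow-vmgroup-1-internal",
    "network": "global/networks/my-shared-vpc",
    "direction": "INGRESS",
    "priority": 1000,
    "allowed": [{"IPProtocol": "all"}],
    "sourceRanges": ["10.65.0.0/16"],   # the group's own subnet CIDR
    "targetTags": ["vmgroup-1"],        # rule applies only to instances with this tag
}

compute.firewalls().insert(project="my-project", body=firewall_body).execute()

Because ingress is denied by default in the VPC, one such allow rule per group is enough: traffic from the other subnet simply matches no allow rule and is dropped.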

8 free IP addresses in the public subnet specified for AWS Elastic Load Balancer?

The relevant section of the official AWS documentation states the following requirement:
When you create a load balancer, you must specify one public subnet
from at least two Availability Zones. You can specify only one public
subnet per Availability Zone.
To ensure that your load balancer can scale properly, verify that each
subnet for your load balancer has a CIDR block with at least a /27
bitmask (for example, 10.0.0.0/27) and has at least 8 free IP
addresses. Your load balancer uses these IP addresses to establish
connections with the targets.
However, I don't understand the bit about requiring 8 free IP addresses. Can someone offer an explanation? Thanks in advance!
I googled a bit and could not find a good explanation. I think understanding this requirement may help me understand how ELB works (I did read the chapter on "How ELB works", but I am still confused).
AWS Elastic Load Balancers can scale up and down to meet the traffic demands for your site. The scaling up uses private IP addresses from your subnet. AWS is not very forthcoming with how that works. The best I can find is vague references to it.
load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault tolerant
The 8-free-IP-addresses requirement is vaguely addressed below; in general, it allows the ELB to scale horizontally.
If subnets in your VPC run out of available IP addresses, AWS resources, such as load balancers, might not respond successfully to increased traffic.
It's a best practice to keep at least eight IP addresses in each subnet available for use. There are two ways to free up or add additional IP addresses for use with load balancers.
FYI, if you try to create an ELB in a subnet without eight free IP addresses, the creation will fail with an error about there being insufficient available IP addresses in the subnet.
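If you want to verify this before creating a load balancer, a quick sketch with boto3 (the subnet IDs are placeholders) could look like this:

# Sketch: confirm each subnet intended for the load balancer still has
# at least 8 free IP addresses. The subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222"]

response = ec2.describe_subnets(SubnetIds=subnet_ids)
for subnet in response["Subnets"]:
    free_ips = subnet["AvailableIpAddressCount"]
    status = "OK" if free_ips >= 8 else "TOO FEW"
    print(f'{subnet["SubnetId"]}: {free_ips} free IPs ({status})')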
References
https://aws.amazon.com/elasticloadbalancing/
https://aws.amazon.com/premiumsupport/knowledge-center/subnet-insufficient-ips/

AWS NLB in public subnets with EC2 in private subnets

Has anyone configured an NLB in the public subnets of a VPC to route traffic to EC2 instances that are in private subnets?
When using an ELB, a good solution is to create a security group for the ELB and then create another security group for the private EC2 instances, allowing incoming traffic from that ELB security group, as explained here:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
"You can also add a rule on the instance’s security group to allow traffic from the security group assigned to the load balancer. For example, if the security group on the load balancer is sg-1234567a, make the following changes on the security group associated with the private instances"
Since you cannot associate a Security Group to a NLB, how could you accomplish this with the same type of security?
Thanks!
Since you cannot associate a Security Group to a NLB, how could you
accomplish this with the same type of security?
The security aspect does not change.
An NLB is a different beast; it is not the same as a Classic Load Balancer. For Classic Load Balancers, from the point of view of your instances, traffic does appear to come from inside the VPC. From outside, traffic goes to a (random and mutating) list of IP addresses, resolved by the DNS record that AWS provides to you.
Network Load Balancers are completely different. From the point of view of your instances, they are completely invisible. If it is an external network load balancer, traffic appears to be coming from instances on the internet directly (even though this is an illusion). Therefore, if you want to talk to everyone on the internet, 0.0.0.0/0 is what you open it to.
This is, in fact, what the documentation says:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#target-security-groups
Recommended Rules (Inbound)

Source              | Port Range        | Comment
Client IP addresses | instance listener | Allow traffic from clients on the instance listener port
VPC CIDR            | health check      | Allow traffic from the load balancer on the health check port
Client IP addresses is whatever your client IPs are. If they are on the open internet, 0.0.0.0/0 it is. Adding the NLB private IP address, as I saw in other responses, accomplishes nothing. Traffic is not coming from there, as far as the instances are concerned.
On the security angle, nothing changes. Since your instances are in private subnets, traffic cannot flow directly to them, as there is a NAT gateway in the middle. It can only flow from them to the internet (through NAT gateway, then internet gateway). Even if you specify all traffic is allowed from everywhere, traffic still won't come. It will have to come through another way. In your case, that way is the NLB, which has a fixed number of ports it listens to, and only sends traffic to the destination ports on the instances you specify.
If you are moving from Classic Load Balancers to NLBs, move the security group rules from the load balancer to your instances. Or better yet, since you can have multiple security groups, just add the SG you currently have for the Classic LB to the instances (and update any ASGs as needed). Your security posture will be exactly the same, with the added benefit that your applications no longer need things like proxy protocol to figure out where traffic is coming from, since it is no longer obfuscated by the load balancer.
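As a hedged sketch of the two recommended rules above with boto3 (the security group ID, listener port 80, health check port 8080, and the 10.0.0.0/16 VPC CIDR are all placeholders):

# Sketch: security group rules for NLB targets, matching the recommended-rules table.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0target1234567890",
    IpPermissions=[
        {   # Clients on the open internet reach the listener port directly.
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "clients"}],
        },
        {   # Health checks come from the NLB nodes inside the VPC.
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "NLB health checks"}],
        },
    ],
)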
That is indeed true, as per the AWS documentation:
Network Load Balancers do not have associated security groups.
Therefore, the security groups for your targets must use IP addresses
to allow traffic from the load balancer.
So if you do not want to grant access to the entire VPC CIDR, you can grant access to the private IP addresses used by the load balancer nodes. There is one IP address per load balancer subnet.
In the EC2 console, under Network Interfaces, there is one network interface per load balancer subnet; from there:
On the Details tab for each network interface, copy the address from
Primary private IPv4 IP.
You can then add this private IP address to the security group of the EC2 instances.
Please refer to the AWS documentation.
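If you go this route, one possible way to look those addresses up programmatically is to filter the ENIs by description with boto3; the load balancer name below is a placeholder, and the "ELB net/<name>/<id>" description pattern is an assumption based on how NLB network interfaces are typically labeled:

# Sketch: list the private IPv4 addresses of an NLB's nodes (one per subnet)
# so they can be added as /32 rules on the target security group.
# "my-nlb" is a placeholder load balancer name.
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": ["ELB net/my-nlb/*"]}]
)
for eni in response["NetworkInterfaces"]:
    print(eni["PrivateIpAddress"])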
Tail your HTTP access logs and you will see that the network load balancer does not change the source IP address, which means you need to allow 0.0.0.0/0 on the endpoint's security group if the internet needs access to your endpoint.
This is only OK if you use a private subnet, so be careful if you have this server in a public subnet, as this solution would not be advisable there. In that case, just use an Application Load Balancer. You can still set up the same listener and configure a target group by instance as well. If you tail the access logs, you will see that the Application Load Balancer rewrites the source IP address to its own private address. The advantage of this is that you only need to allow HTTPS traffic to the ALB, and then you can accept HTTP from the load balancer to the target group if you like.

AWS: security groups ignoring traffic from elastic IP

I have 2 AWS instances, i-1 and i-2. They are each on a different security group: sg-1 and sg-2, respectively. Both machines have elastic IPs.
sg-2 is configured to allow all traffic from sg-1, regardless of port, source IP or protocol.
When i-1 tries to talk to i-2 its traffic is being blocked. It seems AWS doesn't account for the fact that i-1's traffic is actually coming from its elastic IP.
Is this expected? Is there anything I can do to work around it, apart from manually adding i-1's elastic IP to sg-2?
sg-2 is configured to allow all traffic from sg-1
When you do this, only traffic from the private IP address is allowed. However, since you are using an EIP, you need to explicitly allow traffic from that IP address.
Read this: https://forums.aws.amazon.com/thread.jspa?messageID=414060
Quoting from above link:
Out of curiosity, are you perhaps connecting using a public IP address? When you use a rule with a security group as the source, it will only match when connecting over the internal network. The private IP address can change though. If you have an Elastic IP associated with the instance, the public DNS name happens to be static and will always resolve to the current private IP address when used from within the same EC2 region. That allows you to easily connect internally without worrying about any address changes.
You haven't really provided enough information to diagnose the problem, but there are a few things to check:
Is I-1 definitely in SG-1? If you've got the instances muddled, the SG rules would be around the wrong way.
Does the machine in SG-2 have a firewall running that might be blocking incoming traffic even though the SG rules are allowing it?
You've tagged this with the VPC tag - do you have any network ACL settings that might be preventing traffic flow? Are the machines private, using a NAT appliance to get out to the Internet, or public, routing through the standard AWS gateway? Can I-1 see the Internet? If you're routing through a NAT, assigning an EIP to a machine effectively cuts it off from the Internet because EIP and NAT are mutually incompatible, and although I haven't tried it this might also screw up SG routing.
Does SG-1 have any egress rules that might be preventing traffic from leaving?
The answer to your question is likely to be found in the resolution of one of these questions if the answer to any of them is 'Yes'.
As previously stated by slayedbylucifer, you will need to explicitly allow traffic from the EIP.
Here's the reasoning from the official AWS documentation about Security Groups:
When you specify a security group as the source for a rule, traffic is allowed from the network interfaces that are associated with the source security group for the specified protocol and port. Incoming traffic is allowed based on the private IP addresses of the network interfaces that are associated with the source security group (and not the public IP or Elastic IP addresses).
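A minimal sketch of that explicit workaround with boto3; the security group ID and the Elastic IP below are placeholders:

# Sketch: explicitly allow i-1's Elastic IP on sg-2, since the SG-as-source
# rule only matches traffic arriving from i-1's private address.
# The group ID and 203.0.113.10 are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0222222222222",
    IpPermissions=[
        {
            "IpProtocol": "-1",  # all protocols, mirroring the existing "all traffic" rule
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "i-1 Elastic IP"}],
        }
    ],
)

The simpler alternative, per the quote above, is to have i-1 connect to i-2 using i-2's public DNS name, which resolves to the private address from within the region, so the existing SG-as-source rule matches.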

AWS Load Balancer with a static IP address

I have a set-up running on Amazon cloud with a couple of EC2 Instances running through a load balancer.
It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in 3rd-party APIs which only accept requests made from IPs that have been added to their whitelist.
So basically unless we can give these 3rd parties a static IP or range of IPs that the requests from the site will always come from then we would be unable to make any calls to them.
Does anyone know how to achieve this? I know that Elastic IPs are not compatible with load balancers.
If I were to look up the IP of the load balancer DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200) would that IP be Static?
Any help/advise is greatly appreciated guys.
The IP addresses of your load balancer are not static. In any event, your incoming load balancer IP wouldn't be used for outgoing connections.
You could assign Elastic IPs to the actual instances behind the load balancer, which would then be used for outgoing requests. You get five Elastic IPs per region by default, and I believe you can apply for more if you need them.
Additionally, if you are using a VPC and your instances are in a private subnet, they will only be able to access the internet via the NAT instance(s) you set up, and you can of course assign an Elastic IP to the NAT instances.
This is an old question, but things have changed now.
Now you can create a Network ELB to get a LB with a static IP.
from https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Support for static IP addresses for the load balancer. You can also
assign one Elastic IP address per subnet enabled for the load
balancer.
https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/
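As a rough sketch with boto3 (the name, subnet IDs, and EIP allocation IDs are placeholders), creating an NLB with one Elastic IP per subnet looks roughly like this:

# Sketch: create an internet-facing Network Load Balancer and pin one
# Elastic IP (by allocation ID) to each subnet. All IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_load_balancer(
    Name="my-static-ip-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111", "AllocationId": "eipalloc-0aaa1111"},
        {"SubnetId": "subnet-0bbb2222", "AllocationId": "eipalloc-0bbb2222"},
    ],
)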
You can attach an additional ENI (Elastic Network Interface) to an instance in your VPC. This way, the ELB (Elastic Load Balancer) routes the incoming internet requests to the web server, and the additional ENI is used for your 3rd-party (or internal) requests (a management network).
You can see more details about this in the VPC documentation.
Really the only way I am aware of doing this is by setting up your instances within a VPC and having dedicated NAT instances by which all outbound traffic is routed.
Here is a link to the AWS documentation on how to set up NAT instances:
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
You CAN attach an elastic IP to the instances BUT NOT to the ELB (which is what the client sees).
You could use a full reverse proxy layer 7 load balancer like HAProxy, or a commercial implementation like Loadbalancer.org or Riverbed (Zeus). Both are available in the AWS Marketplace.
Your outbound requests to your 3rd-party APIs will NOT go out via the ELB/ALB; that's for incoming connections. If you need an inbound static IP, you'll probably need to forgo the load balancer (or figure out how to implement Anshu's suggestion to attach an Elastic IP to the load balancers; the doc is light on details). Update: I found some documentation suggesting that ALBs use static addresses (and I just tried binding an Elastic IP to one to be sure, and that failed).
If you're talking about outbound connections:
If your server is deployed in a public subnet, you can attach an Elastic IP to that host; outbound communications will go out over that address (a sketch follows below).
If your server is deployed in a private subnet, there is a NAT gateway attached to it, and all outbound traffic from your private subnet will go out over that interface.
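For the public-subnet case, here is a minimal boto3 sketch (the instance ID is a placeholder) of allocating and attaching an Elastic IP so outbound traffic leaves from a fixed address:

# Sketch: allocate an Elastic IP and associate it with an instance in a
# public subnet; outbound connections will then use this fixed address.
# i-0abc1234567890def is a placeholder instance ID.
import boto3

ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0abc1234567890def",
    AllocationId=allocation["AllocationId"],
)
print("Static outbound IP:", allocation["PublicIp"])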
You could use, as already mentioned, the Loadbalancer.org appliance in AWS. It would replace the AWS NAT instance and give greater functionality, covering both Layer 4 and Layer 7, along with SSL termination and a WAF.
Best of all, you get free support during your 30-day trial in AWS to help you get up and running.
Yes, I am biased, as I work for Loadbalancer.org; however, I would say nothing ventured, nothing gained.
You can use a DNS service like DNSMadeeasy that supports "ANAME" records. These act like an A record but can point at an FQDN or IP, so in this case you can point one at the ELB's DNS name.
Dave