Multiple ENIs on EC2 use case - amazon-web-services

I created 2 private subnets, PRIVATEA and PRIVATEB, in a custom VPC. The subnets are in different Availability Zones. I added an EC2 instance in PRIVATEA; the instance already has an ENI eth0 attached to it. Next I created an ENI in the other subnet, PRIVATEB, and attached it to the EC2 instance. The setup was successful. Basically I followed a blog tutorial for this setup. It said that the secondary interface would allow traffic for another group, i.e. management.
But I am not able to relate any use case to it. Could anyone please explain when we would use such a setup? Is this the correct forum for this question?
Thanks

An Elastic Network Interface (ENI) is a virtual network card that connects an Amazon EC2 instance to a subnet in an Amazon VPC. In fact, ENIs are also used by Amazon RDS databases, AWS Lambda functions, Amazon Redshift databases and any other resource that connects to a VPC.
Additional ENIs can be attached to an Amazon EC2 instance. These extra ENIs can be attached to different subnets, as long as they are in the same Availability Zone as the instance. The operating system can then route traffic out through the different ENIs.
Security Groups are actually associated with ENIs (not instances). Thus, different ENIs can have different rules about traffic that goes in/out of an instance.
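As a rough illustration, a boto3 sketch of creating an ENI in a second subnet with its own security group and attaching it as a second interface might look like this (all resource IDs below are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create an ENI in a second subnet, with its own security group
# (subnet and group IDs are hypothetical).
eni = ec2.create_network_interface(
    SubnetId="subnet-0managementsub0",
    Groups=["sg-0managementgrp0"],
    Description="management interface",
)

# Attach it to the instance as the second interface (device index 1,
# i.e. eth1 from the operating system's point of view).
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterface"]["NetworkInterfaceId"],
    InstanceId="i-0exampleinstance0",  # hypothetical instance ID
    DeviceIndex=1,
)

Note that the operating system typically still needs its routing configured so that traffic actually leaves via the new interface.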
One example of using multiple ENIs is to create a DMZ, which acts as a perimeter through which traffic must pass:
Internet --> DMZ --> App Server
In this scenario, all traffic must pass through the DMZ, where traffic is typically inspected before being passed on to the server. This can be implemented by using multiple ENIs, where one ENI connects to a public subnet to receive traffic and another ENI connects to a private subnet to send traffic. The Network ACLs on the subnets can be configured to disallow traffic passing between the subnets, so that the only way traffic can flow from the public subnet to the private subnet is via the DMZ instance, since it is connected to both subnets.
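As a rough sketch, a single deny entry on the private subnet's Network ACL could block the direct path (the NACL ID and the public subnet's CIDR below are assumptions):

import boto3

ec2 = boto3.client("ec2")

# Deny all traffic arriving at the private subnet directly from the
# public subnet's CIDR. Lower rule numbers are evaluated first, so this
# fires before a default allow rule. Traffic forwarded by the DMZ
# instance is unaffected: it enters through the DMZ's private-subnet
# ENI, inside the same subnet, and never crosses this subnet boundary.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0privatesubnet0",  # hypothetical NACL ID
    RuleNumber=90,
    Protocol="-1",                       # all protocols
    RuleAction="deny",
    Egress=False,                        # ingress rule
    CidrBlock="10.0.1.0/24",             # assumed public subnet CIDR
)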
Another use-case is software that ties a license to a MAC address. Some software products do this because MAC addresses are (meant to be) unique across all networking devices (although some devices allow the MAC address to be changed). Thus, they register their software against the MAC address of a secondary ENI. If that instance needs to be replaced, the secondary ENI can be moved to another instance without the MAC address changing.
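Moving such an ENI is just a detach followed by a re-attach; a rough boto3 sketch (the ENI and instance IDs are hypothetical) could be:

import boto3

ec2 = boto3.client("ec2")
eni_id = "eni-0licensedmac0"  # hypothetical ENI carrying the licensed MAC

# Find the current attachment and detach the ENI from the old instance.
eni = ec2.describe_network_interfaces(
    NetworkInterfaceIds=[eni_id]
)["NetworkInterfaces"][0]
ec2.detach_network_interface(AttachmentId=eni["Attachment"]["AttachmentId"])

# Wait until the ENI is free, then attach it to the replacement instance;
# the MAC address travels with the ENI.
ec2.get_waiter("network_interface_available").wait(
    NetworkInterfaceIds=[eni_id]
)
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0replacement0000",  # hypothetical replacement instance
    DeviceIndex=1,
)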

Related

Several concept questions about AWS VPC

I'm preparing for the AWS Associate certification and have some concept questions about AWS VPC.
For Elastic Network Interfaces (ENIs), the main text in the study guide says, and I quote:
It’s possible to attach additional ENIs to an instance. Those ENIs may be in a different subnet, but they must be in the same availability zone as the instance. As always, any addresses associated with the ENI must come from the subnet to which it is attached.
while the summary of the same chapter, still in the study guide, says:
Any additional ENI you attach to an instance must be in the same subnet as the primary ENI.
1.1 Are the bold parts in the above two statements contradictory? One says additional ENIs must be in the same subnet, while the other says only in the same AZ. Which one is right?
1.2 How should I interpret the relationship between the bold part and the italic part in the first statement? Is it saying that the ENI can be located in another subnet, but the address should point to the instance it has been attached to? Sounds kind of weird.
About the difference between a NAT Gateway and a NAT Instance:
The book states that a NAT Instance can connect to instances that don't have a public IP, while a NAT Gateway cannot. Just to clarify: does this "instance" mean the destination instance on the Internet, rather than the source instance within the VPC? After all, the reason to adopt a NAT device (whether gateway or instance) is that the source instance in the private subnet doesn't have a public IP.
Thanks!
An instance can have multiple ENIs, each in a different subnet (within the same Availability Zone). I recommend that you try it yourself to confirm. In fact, that is good advice for everything you do in AWS because the Certification is meant to prove that you have the knowledge and experience (rather than just having read a Study Guide).
All you'll need to know about NAT in a VPC is:
A NAT Gateway is a managed service that resides in a single subnet and AZ. An Elastic IP address is assigned to the NAT Gateway and all traffic coming from it to the Internet will 'appear' to be coming from that Elastic IP address.
A NAT Instance is an EC2 instance configured as a NAT. It can be assigned an Elastic IP address, or a normal (random) public IP address.
Again, I highly recommend you create both types, then configure and use them in a VPC. That way, you are actually increasing your own knowledge that would be useful for a future employer (rather than just getting a certification).
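If you want to script that experiment rather than use the console, a minimal boto3 sketch of standing up a NAT Gateway for a private subnet might look like this (the subnet and route table IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create a NAT Gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
ngw = ec2.create_nat_gateway(
    SubnetId="subnet-0public000000",     # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
ngw_id = ngw["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available, then route the private subnet's
# Internet-bound traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[ngw_id])
ec2.create_route(
    RouteTableId="rtb-0private00000",    # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=ngw_id,
)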

EKS Kubernetes outbound traffic

I have recently set up an EKS cluster in AWS for my company's new project. Before I get into my issue, here is some info about my setup. There are two nodes (at the moment) in the node group, with auto-scaling. At first I created the nodes in a private subnet, as I thought it was more secure. But my supervisor told me that we would need the ability to SSH to the nodes directly, so I recreated the nodes in a public subnet so that we can SSH to them using a public key.
We have two CMS instances sitting in AWS (for example aws.example.com) and DigitalOcean (for example do.example.com) that contain some data. When the containers in the EKS cluster start, some of them need to access those instances using the URL aws.example.com or do.example.com. If a container in EKS fails to access the instances, the container will still run but the app in it won't. So I need to whitelist the public IP of every EKS node on the two CMS instances in order for the app to work. And it works.
I am using an Ingress in my EKS cluster. When I created the Ingress object, AWS created an Application Load Balancer for me. All the inbound traffic is handled by the ALB, which is great. But here comes the issue. When more containers are created, auto-scaling spins up new nodes in my EKS cluster (with a different public IP every time), and then I have to go to the two CMS instances to whitelist the new public IP address.
So, is there any way to configure things so that all the nodes use a single fixed IP address for outbound traffic? Or maybe configure them to use the ALB created by the Ingress for outbound traffic as well? Or do I need to create a server for that? I am very lost right now.
Here is what I have tried:
When the cluster was created, it seems a private subnet was created as well, even though I specified that the nodes be placed in the public subnet. There is a NAT gateway (ngw-xxxxxx) created for the private subnet and it comes with an Elastic IP (for example 1.2.3.4). The route table of the public subnet is as below:
192.168.0.0/16 local
0.0.0.0/0 igw-xxxxxx
So I thought that by changing igw-xxxxxx to ngw-xxxxxx, all the outbound traffic would go through the ngw-xxxxxx and reach the outside world from IP address 1.2.3.4, meaning I would only need to whitelist 1.2.3.4 on my two CMS instances. But right after I applied the change, all containers were terminated and everything stopped working.
Exactly, as @Marcin mentioned in the comment above.
You should move your node group to the private subnet.
Private subnet
As the docs say:
Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won't be connected to the internet
The idea of a private subnet is to forbid direct access from the internet to the resources inside it.
You can read a really good part of the AWS documentation here: https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html
For a private subnet you need to set up outgoing traffic through your NAT Gateway in the route table (read here: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).
Public subnet
If you need your cluster in the public subnet for some reason, even though it is bad practice, you can use the following trick:
You can route traffic from the public subnet via a NAT Gateway to one specific server only (your CMS).
Your public subnet route table may look like:
Destination    Target
10.0.0.0/16    local
W.X.Y.Z/32     nat-gateway-id
0.0.0.0/0      igw-id
Where W.X.Y.Z/32 is your CMS IP address.
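That extra route can be added with a single boto3 call; a hypothetical sketch (using a documentation address in place of your real CMS IP, and placeholder resource IDs):

import boto3

ec2 = boto3.client("ec2")

# Send only CMS-bound traffic through the NAT Gateway; everything else
# in the public subnet keeps using the Internet Gateway route.
ec2.create_route(
    RouteTableId="rtb-0publicsubnet0",       # hypothetical route table ID
    DestinationCidrBlock="203.0.113.10/32",  # stand-in for your CMS IP
    NatGatewayId="nat-0examplegw0000",       # hypothetical NAT Gateway ID
)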
Some hints
Moreover, a good practice is to allocate a pool of Elastic IPs and attach them to the NAT Gateway, to be sure the addresses do not change in the future.
When you later want to modify the infrastructure and build a more sophisticated NAT (e.g. to filter traffic at layer 7), you can create High Availability NAT instances and attach those IPs to the NAT instances instead of the NAT Gateway.
In that situation you will avoid having to email third-party API providers to whitelist your new IPs.

Creating a management network with ENIs vs just using security groups

Looking at the AWS docs, they lay out a use case for ENIs: creating a management network.
So my primary ENI is for public traffic, but I create a second ENI for SSH via my private subnet.
But I could just use an ACL to allow SSH traffic only from my company's IP. And if I really wanted a private VPC, I could use a route table for that instead of a second ENI on each instance.
Is there an advantage to 2 ENIs for a management network that I am missing?
I think you can achieve the same result by creating a bastion host. Here's the official quickstart: link
You can also attach a security group to the ENI which allows SSH traffic only from a certain subnet.
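For instance, a small boto3 sketch of such a rule, applied to the management ENI (the group ID, ENI ID and CIDR are all hypothetical), could look like:

import boto3

ec2 = boto3.client("ec2")

# Allow SSH (port 22) only from the management subnet's CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-0managementssh0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.2.0/24",
                      "Description": "management subnet only"}],
    }],
)

# Associate that group with the management ENI rather than with eth0.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0management00",  # hypothetical ENI ID
    Groups=["sg-0managementssh0"],
)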

Restrict access to Cross-region ec2 instances amazon

I need to design a cross-region, cross-VPC architecture and I am not sure how I can restrict access to my resources.
The requirement is that I need to run my web app in one region and my database in another region.
Both servers are inside private subnets. The web app has an Auto Scaling group and a load balancer attached. The database server in the other region should only be accessible from this web app. I cannot use IP-based restrictions because the IP of the load balancer changes over time. What other options do I have?
The IP address of your Load Balancer is irrelevant because the Load Balancer is only used for incoming connections to your web server.
First, I should point out that having your database remote from your application is a poor architectural decision, which will slow down your application. Please reconsider it if possible!
You have not provided much information, so I will make the following assumptions:
VPC in Region A contains:
A Load Balancer in a public subnet
Web servers in a private subnet
VPC in Region B contains:
A database in a private subnet
In this situation, you wish to communicate between two private subnets in different VPCs that reside in different regions. For this, you could consider creating a private VPN connection via Amazon EC2 instances located in the public subnets of each VPC. This would use a software VPN such as OpenVPN or OpenSwan.
You should also consider how to achieve a High Availability option for this solution. A well-architected VPC would have your web servers deployed across multiple Availability Zones in Region A, with your database preferably in a multi-AZ configuration in Region B (assuming that you are using Amazon RDS). You should similarly design your VPN solution to be highly-available in case of failure.
An alternative is to put a NAT Server in the public subnet of the VPC in Region A and configure the private Route Table to send traffic through the NAT Server. This means that traffic going from the web servers to the Internet would all come from the public IP address associated with the NAT Server (not the Load Balancer).
However, the database is in a private subnet, so the traffic cannot be routed directly to the database, which makes this only half a solution. It would then require either putting the database in a public subnet (with a Security Group that only accepts connections from the NAT Server) or placing some type of proxy server in the public subnet that would forward traffic to the database. This would become far too complex compared to the software VPN option.

AWS - Locking down ports

This has probably been answered elsewhere but I can't seem to find it!
I have a number of AWS EC2 instances that I am using as part of a project being built, and I am now looking into securing the setup a bit. I want to lock down access to some of the ports.
For example, I want one of the instances to act as a database server (hosting MySQL). I want this to be closed to public access but open to access from my other EC2 instances via their private IPs.
I also use AWS auto scaling to add/remove instances as required, and I need these to be able to access the DB server without having to manually add their IPs to a list.
Similarly, if possible, I want to lock down some instances so that they only accept traffic from an AWS Load Balancer. So port 80 would be open on the instance, but only for traffic coming from the Load Balancer.
I've looked at specifying the IPs using CIDR notation but can't seem to get it working. From the look of the private IPs being assigned to my instances, the first two octets remain the same and the last two vary. But opening it up to all instances with the same first two octets doesn't seem that secure either!
Thanks
What you want to do is all pretty standard stuff, and it is extensively documented in the AWS Virtual Private Cloud (VPC) documentation. If your EC2 instances are not running in a VPC, they should be.
The link below should help, it seems to be your scenario:
Scenario 2: VPC with Public and Private Subnets (NAT)
The configuration for this scenario includes a VPC with a public subnet and private subnet, and a network address translation (NAT) instance in the public subnet. A NAT instance enables instances in the private subnet to initiate outbound traffic to the Internet. We recommend this scenario if you want to run a public-facing web application, while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
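One building block worth calling out explicitly: a security group rule can use another security group as its source instead of a CIDR range, which covers both of your requirements without tracking IPs. A rough boto3 sketch (all group IDs below are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2")

# On the DB server's group: allow MySQL (3306) only from instances that
# carry the web-tier security group. No IP ranges are involved, so
# instances added by the auto-scaler are covered automatically.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dbserver0000",   # hypothetical DB security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0webtier00000"}],
    }],
)

# On the web tier's group: open port 80 only to the Load Balancer's group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0webtier00000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-0loadbal00000"}],
    }],
)

Auto-scaled instances then only need to be launched with the web-tier security group attached; no per-instance IP maintenance is required.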