When starting an EC2 instance from an image in AWS Marketplace, it requests Subnet Settings:
And says:
Ensure you are in the selected VPC above
It gives 2 options:
I am not sure what this means.
Is it asking me to identify which AWS "subnet" (in this case either ap-southeast-2b or ap-southeast-2a) that my laptop is currently in, and tell AWS via this drop down? I don't understand why it would want this information, nor what to give it. I've used thousands of EC2s and never needed to specify anything more granular than region. But today I am starting the EC2 from a marketplace image and it requires this additional information.
Whenever you launch an instance, you have to choose a VPC and a subnet. When you launch your instance, a default VPC with default subnets is usually pre-selected.
The default VPC and its subnets are usually public, which makes your instances accessible from the internet. Often, for security reasons, this is not desired. In that case a custom VPC and/or subnets are created. This allows you to create private subnets shielded from direct access from the internet. One such architecture is a VPC with public and private subnets (NAT).
The NAT in the above setup allows instances in private subnets to access the internet, without allowing direct access to the instances from the internet.
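As a rough boto3 sketch of where that choice shows up when launching programmatically (the AMI ID and subnet ID below are placeholders, not values from the question):

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Launch a single instance into an explicit subnet, which also pins the
# VPC and the Availability Zone. Both IDs here are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # the Marketplace AMI you selected
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234def567890",  # e.g. a subnet in ap-southeast-2a
)

print(response["Instances"][0]["InstanceId"])
```

The console simply asks you for the same SubnetId up front; it has nothing to do with where your laptop is.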
Related
I created two private subnets, PRIVATEA and PRIVATEB, in a custom VPC. These subnets are in different Availability Zones. I added an EC2 instance in PRIVATEA. The instance already has an ENI, eth0, attached to it. Next I created an ENI in the other subnet, PRIVATEB, and attached it to the EC2 instance. The setup was successful. Basically I followed a blog tutorial for this setup. It said that the secondary interface would allow traffic for another group, i.e. Management.
But I am not able to relate any use case to it. Could anyone please explain when such a setup is used? Is this the correct forum for this question?
Thanks
An Elastic Network Interface (ENI) is a virtual network card that connects an Amazon EC2 instance to a subnet in an Amazon VPC. In fact, ENIs are also used by Amazon RDS databases, AWS Lambda functions, Amazon Redshift databases and any other resource that connects to a VPC.
Additional ENIs can be attached to an Amazon EC2 instance. These extra ENIs can be attached to different subnets in the same Availability Zone. The operating system can then route traffic out to different ENIs.
Security Groups are actually associated with ENIs (not instances). Thus, different ENIs can have different rules about traffic that goes in/out of an instance.
An example of using multiple ENIs is to create a DMZ, which acts as a perimeter through which traffic must pass. For example:
Internet --> DMZ --> App Server
In this scenario, all traffic must pass through the DMZ, where it is typically inspected before being passed on to the server. This can be implemented by using multiple ENIs, where one ENI connects to a public subnet to receive traffic and another ENI connects to a private subnet to send traffic. The Network ACLs on the subnets can be configured to disallow traffic passing between the subnets, so that the only way traffic can flow from the public subnet to the private subnet is via the DMZ instance, since it is connected to both subnets.
Another use-case is software that ties a license to a MAC address. Some software products do this because MAC addresses are (meant to be) unique across all networking devices (although some devices allow them to be changed). Thus, they register their software under the MAC address attached to a secondary ENI. If that instance needs to be replaced, the secondary ENI can be moved to another instance without the MAC address changing.
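A minimal boto3 sketch of the secondary-ENI idea described above (the subnet, security group, and instance IDs are placeholders; the ENI must live in a subnet in the same Availability Zone as the instance):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a secondary ENI in another subnet of the same Availability Zone,
# with its own security group (rules apply per ENI, not per instance).
eni = ec2.create_network_interface(
    SubnetId="subnet-0aaa1111bbbb22223",   # e.g. a "management" subnet
    Groups=["sg-0ccc3333dddd44445"],       # rules specific to this interface
    Description="management interface",
)
eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

# Attach it to an existing instance as its second interface (device index 1,
# which the OS typically sees as eth1).
ec2.attach_network_interface(
    NetworkInterfaceId=eni_id,
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
```

Because the ENI is a separate object, it can later be detached and attached to a replacement instance, which is what makes the MAC-address licensing trick work.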
I am not an expert in networking, so I want to get a clearer picture. I have a running AWS instance, and its local network is 172.31.16.0/20. I know that Amazon uses the 172.31.0.0/16 CIDR to manage private addresses.
If someone does a scan on 172.31.0.0/16, could he/she discover my instance?
I tried it from another instance of mine and it detects it, but I am not sure whether it would work for an instance I don't own, because of this notion of a VPC that I don't really understand.
Simply no. This CIDR is for a VPC, and your VPC is different from another AWS user's VPC.
To allow another AWS user to access your VPC network, you need to share it manually, so if you do not share it, it is not possible for other users to detect your instance by a brute force query.
For public IP addresses, you definitely can be discovered.
For internal IP addresses, as far as I know, it is a virtual network, and it is isolated from other VPCs.
Traffic for private RFC1918 addresses is not routable over the Internet. No one can hit your 172.31 address across the Internet. Not from outside AWS and not from another VPC (yours or anyone else's).
VPCs are per account and are isolated from each other. You can, however, share subnets of your VPC with another AWS account within the same AWS Organization, if you choose to. You can also peer VPCs, if you choose to.
Other instances within your VPC can reach an instance in the same VPC, of course, assuming the default routing and NACLs, as can anyone on your VPC's extended network, for example if you have a VPN connection into your VPC (but I assume that's not relevant here).
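For completeness, a hedged boto3 sketch of the opt-in peering mentioned above (the VPC and account IDs are placeholders; the peer account still has to accept the request, and both sides need routes for each other's CIDR):

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from your VPC to a VPC in another account.
# Nothing is reachable until the other side accepts and routes are added.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111222233334",        # your VPC
    PeerVpcId="vpc-0bbb5555666677778",    # the other account's VPC
    PeerOwnerId="111122223333",           # the other AWS account ID
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
```

Without an explicit step like this (or shared subnets within an Organization), the two VPCs simply cannot see each other's private addresses.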
We have a Lambda function in one account and we would like to access an EC2 instance (via HTTP) in another VPC which has a public IP attached. I was wondering what would be the best way to perform this communication. I am new to Lambda and I just got to know of VPC Lambdas. Which CIDRs do I need to open on the Security Group of the EC2 instance? Can I have a specific set of public IPs picked in the source VPC, so that I can whitelist that range in the SG?
Does VPC peering seem like too much overhead for this case, or is it the only possible solution?
It all depends on your requirements. However, peering those VPCs is the best way to keep the traffic within your trusted private subnets, inside your internal trust boundaries, which satisfies security best practices (threat/security models and cloud/network architecture).
If there is a strict enterprise policy that communication shall not be routed via a DMZ or the public internet, and keeping things within the trusted boundary of internal routes is a must, I don't see any choice but to go with VPC peering: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/peer-with-vpc-in-another-account.html
However, if the requirements are less strict and you can use the public internet to forward your traffic out via a DMZ, it is possible to achieve this without sacrificing too much security (assuming that the EC2 instance with the public IP in the other account provides its service over SSL/TLS, so that your Lambda can communicate with it over an encrypted channel while validating the EC2 instance's certificate).
This could be achieved by having the Lambda function associated with a private subnet of your VPC and talking to the other account's EC2 instance via its public IP: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Yes, you can still keep your Lambda function inside the private subnet. But you need a NAT gateway, and you must update the route table of the Lambda function's subnet to point to the NAT gateway (which should be assigned an Elastic IP); the NAT gateway in turn points to your internet gateway. This ensures that the Lambda function, located in your VPC's private subnet, can talk to the outside world, i.e. to the EC2 instance with a public IP in the other account. You can then whitelist a single IP in the security group of the EC2 instance in the other account: the Elastic IP of your NAT gateway, which the Lambda function (and any other internal components in that subnet) will use to find their way out to the internet.
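A minimal boto3 sketch of that whitelisting step on the other account's side (the security group ID and the NAT gateway's Elastic IP below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# In the EC2 instance's account: allow HTTPS only from the NAT gateway's
# Elastic IP, which is the source address the Lambda's traffic exits from.
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234def567890",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "203.0.113.10/32", "Description": "NAT gateway EIP"}
            ],
        }
    ],
)
```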
When creating an EC2 instance (or some other kind of resource) on AWS, a default VPC appears.
Also, as another option, a VPC can be created beforehand and selected during EC2 instance creation, etc.
So, in which use cases should we create a new VPC instead of using the default one?
The AWS Documentation does a pretty good job describing how they create the default VPC.
When we create a default VPC, we do the following to set it up for you:
Create a VPC with a size /16 IPv4 CIDR block (172.31.0.0/16). This provides up to 65,536 private IPv4 addresses.
Create a size /20 default subnet in each Availability Zone. This provides up to 4,096 addresses per subnet, a few of which are reserved for our use.
Create an internet gateway and connect it to your default VPC.
Create a main route table for your default VPC with a rule that sends all IPv4 traffic destined for the internet to the internet gateway.
Create a default security group and associate it with your default VPC.
Create a default network access control list (ACL) and associate it with your default VPC.
Associate the default DHCP options set for your AWS account with your default VPC.
This is great for simple applications and proofs of concept, but not for production deployments. A DB instance, for example, should not be publicly available, and should therefore be placed in a private subnet, something the default VPC does not have. You would then create some sort of backend instance that connects to the DB and exposes a REST interface to the public.
Another reason may be that you are running multiple environments (DEV, QA, PROD) that are all copies of each other. In this case you would want them to be totally isolated from each other, so as not to risk a bad deployment in the DEV environment accidentally affecting the PROD environment.
The list can go on with other reasons, and there are probably some that are better than those that I have presented you with today.
If you understand VPC reasonably well, then you should always create your own VPC. This is because you have full control over the structure.
I suspect that the primary reason for providing a Default VPC is so that people can launch an Amazon EC2 instance without having to understand VPCs first.
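If you do decide to create your own VPC, here is a hedged boto3 sketch of the bare bones (the CIDRs and Availability Zone are arbitrary examples; a production setup would add private subnets, a NAT gateway, security groups, and so on):

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# A custom VPC with one public subnet, mirroring a small part of what the
# default VPC gives you, but under your control.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    CidrBlock="10.0.0.0/24",
    AvailabilityZone="ap-southeast-2a",
)["Subnet"]

# Internet gateway plus a route table that sends 0.0.0.0/0 to it, which is
# what makes the subnet "public".
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"]
)

rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(
    RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"]
)
```

A private subnet is simply one whose route table has no such 0.0.0.0/0 route to an internet gateway.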
I am creating infrastructure for one of my web applications on AWS. That app needs a MySQL RDS instance. Now I am wondering whether I should simply create the RDS instance in a public subnet and set Publicly Accessible=No, or whether I have to create the RDS instance in a private subnet for better security. I am confused about whether either option provides better security than the other.
I have also read that simply assigning a security group to an instance acts as a firewall, so I could have a Publicly Accessible=True RDS instance with a security group allowing access only from my application EC2 instance. So basically I have the three options mentioned below.
Publicly Accessible = True RDS instance in a public subnet with a security group allowing access only from the EC2 application instance.
Publicly Accessible = False RDS instance in public subnet.
RDS instance in private subnet.
Can anyone explain pros and cons in terms of security for above approaches?
You are correct that Security Groups can provide sufficient protection for your database, and also for Amazon EC2 instances.
So why does AWS provide public/private subnets? It's because many customers want them, since that is how enterprises typically organised their networks prior to using the cloud. Traditional firewalls only act between subnets, whereas Security Groups apply to each instance individually.
So, if you understand how to correctly configure Security Groups, there is no actual need to use Private Subnets at all! Some people, however, feel more comfortable putting resources in private subnets because it provides an additional layer of security.
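As an illustration of option 1, a hedged boto3 sketch of a database security group that admits MySQL traffic only from instances carrying the application's security group (both group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL (3306) into the RDS security group only from members of the
# application's security group; no CIDR-based access at all.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db00000000000000",               # attached to the RDS instance
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0app0000000000000"}  # the app EC2's security group
            ],
        }
    ],
)
```

Referencing the application's security group as the source, rather than an IP range, means the rule keeps working even as application instances are replaced and change addresses.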