When to set up a nondefault VPC in AWS?

When creating an EC2 instance (or certain other resources) on AWS, a default VPC is offered.
Alternatively, a VPC can be created beforehand and selected during EC2 instance creation.
So, in which use cases should we create a new VPC instead of using the default one?

The AWS Documentation does a pretty good job describing how they create the default VPC.
When we create a default VPC, we do the following to set it up for you:
Create a VPC with a size /16 IPv4 CIDR block (172.31.0.0/16). This provides up to 65,536 private IPv4 addresses.
Create a size /20 default subnet in each Availability Zone. This provides up to 4,096 addresses per subnet, a few of which are reserved for our use.
Create an internet gateway and connect it to your default VPC.
Create a main route table for your default VPC with a rule that sends all IPv4 traffic destined for the internet to the internet gateway.
Create a default security group and associate it with your default VPC.
Create a default network access control list (ACL) and associate it with your default VPC.
Associate the default DHCP options set for your AWS account with your default VPC.
This is great for simple applications and proofs of concept, but not for production deployments. A DB instance, for example, should not be publicly reachable and should therefore be placed in a private subnet, something the default VPC does not have. You would then create some sort of backend instance that connects to the DB and exposes a REST interface to the public.
Another reason may be that you are running multiple environments (DEV, QA, PROD) that are all copies of each other. In this case you would want them to be totally isolated from each other, so that a bad deployment in the DEV environment cannot accidentally affect the PROD environment.
The list can go on with other reasons, and there are probably some that are better than those I have presented here.
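For illustration, here is a minimal boto3 sketch of such a custom setup: a VPC with one public and one private subnet, where the private subnet could hold the DB instance. The CIDR blocks, Availability Zone, and resource names are placeholders, not anything prescribed by AWS.

    import boto3

    ec2 = boto3.client("ec2")

    # Custom VPC with a CIDR block we choose ourselves (placeholder range).
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    vpc_id = vpc["VpcId"]

    # Public subnet: will receive a route to an internet gateway.
    public_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="eu-west-1a"
    )["Subnet"]

    # Private subnet: no route to the internet gateway, e.g. for the DB instance.
    private_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="eu-west-1a"
    )["Subnet"]

    # Internet gateway plus a route table for the public subnet only.
    igw = ec2.create_internet_gateway()["InternetGateway"]
    ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

    public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
    ec2.create_route(
        RouteTableId=public_rt["RouteTableId"],
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw["InternetGatewayId"],
    )
    ec2.associate_route_table(
        RouteTableId=public_rt["RouteTableId"], SubnetId=public_subnet["SubnetId"]
    )
    # The private subnet keeps the VPC's main route table, which only routes locally.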

If you understand VPC reasonably well, then you should always create your own VPC. This is because you have full control over the structure.
I suspect that the primary reason for providing a Default VPC is so that people can launch an Amazon EC2 instance without having to understand VPCs first.

How to determine which subnet I am in on AWS?

When starting an EC2 instance from an image in the AWS Marketplace, it requests Subnet Settings and says:
Ensure you are in the selected VPC above
It gives two options (subnets in ap-southeast-2a and ap-southeast-2b). I am not sure what this means.
Is it asking me to identify which AWS "subnet" (in this case either ap-southeast-2b or ap-southeast-2a) my laptop is currently in, and to tell AWS via this drop-down? I don't understand why it would want this information, nor what to give it. I've used thousands of EC2 instances and never needed to specify anything more granular than region. But today I am starting the EC2 instance from a Marketplace image and it requires this additional information.
Whenever you launch an instance, you have to choose a VPC and a subnet. When you launch your instance, a default VPC with default subnets is usually pre-selected.
The default VPC and its subnets are public, which makes your instances reachable from the internet. For security reasons this is often not desired, in which case a custom VPC and/or custom subnets are created. This allows you to create private subnets shielded from direct access from the internet. One such architecture is a VPC with public and private subnets (NAT).
The NAT gateway in that setup allows instances in private subnets to reach the internet, without allowing direct access to those instances from the internet.
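For reference, a rough boto3 sketch of the NAT piece of that architecture, assuming an existing VPC with one public and one private subnet; all IDs below are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder IDs from an existing VPC with one public and one private subnet.
    vpc_id = "vpc-0abc..."
    public_subnet_id = "subnet-0pub..."
    private_subnet_id = "subnet-0priv..."

    # Allocate an Elastic IP and create the NAT gateway in the *public* subnet.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=public_subnet_id, AllocationId=eip["AllocationId"]
    )["NatGateway"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

    # Route table for the private subnet: outbound internet traffic goes via the NAT,
    # but nothing from the internet can initiate a connection to those instances.
    private_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
    ec2.create_route(
        RouteTableId=private_rt["RouteTableId"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGatewayId"],
    )
    ec2.associate_route_table(
        RouteTableId=private_rt["RouteTableId"], SubnetId=private_subnet_id
    )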

Alternative to AWS's Security groups in GCP?

Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation which I have:
A basic Node.js server running on Cloud Run as a Docker image.
A PostgreSQL database on GCP.
A Redis instance on GCP.
What I want to do is create a 'security group' of sorts so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
In AWS we do this by allowing only services that are part of the same security group to access each other.
I'm not very sure but I guess in GCP I need to make use of Firewall rules (not sure at all).
If I'm correct could someone please guide me as to how to go about this? And if I'm wrong could someone suggest the correct method?
GCP has VPC firewall rules that work similarly to AWS security groups; more details can be found in the GCP documentation on VPC firewall rules. You can place your PostgreSQL database, Redis instance, and Node.js server inside a GCP VPC.
Make the Node.js server available to the public via DNS.
Use the default-allow-internal rule (or an equivalent allow-internal rule in a custom VPC), so that only the services inside the VPC can access each other (blocking public access to the DB and Redis).
As an alternative approach, you could also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the solution above is recommended.
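As a rough sketch of such an allow-internal rule using the google-cloud-compute Python client: the project ID, network path, ports, and source range below are placeholders, and managed Cloud SQL/Memorystore instances reached over private IP are additionally governed by their own access settings.

    from google.cloud import compute_v1

    project_id = "my-project"                                # placeholder
    network = "projects/my-project/global/networks/my-vpc"   # placeholder

    # Allow TCP on the Postgres and Redis ports, but only from addresses
    # inside the VPC's internal range (placeholder CIDR).
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = ["5432", "6379"]

    rule = compute_v1.Firewall()
    rule.name = "allow-internal-db-redis"
    rule.network = network
    rule.direction = "INGRESS"
    rule.source_ranges = ["10.128.0.0/9"]
    rule.allowed = [allowed]

    client = compute_v1.FirewallsClient()
    operation = client.insert(project=project_id, firewall_resource=rule)
    operation.result()  # wait for the insert operation to complete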
Security groups in AWS are instance-attached, firewall-like components. You can have a security group at the instance level, similar to configuring iptables on a regular Linux host.
Google's firewall rules, on the other hand, operate at the network level. If you want to reproduce that instance-level granularity on the hosts themselves, your alternatives are to use one of the following:
firewalld
nftables
iptables
Note that in AWS the subnet-level construct is the network ACL; security groups themselves attach to instances (their network interfaces), not to subnets. GCP firewall rules are defined per VPC network, but they can be scoped to specific instances using network tags or service accounts, which gives instance-level granularity comparable to security groups alongside the network-wide protection.
Thanks @amsh for the solution to the problem. But there were a few more things that needed to be done, so I will list them here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run add the created VPC connector in the Connection section.
Create the PostgreSQL and Redis instances in the same region as the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, otherwise they won't connect via the Private IP.
Avoid changing the firewall rules: Leave the firewall rules in place unless you specifically need them to behave differently.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
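For the last step, the application simply connects using those private IPs. A minimal illustration (written here in Python rather than Node.js, with placeholder IPs and credentials, assuming the psycopg2 and redis client libraries):

    import psycopg2   # PostgreSQL client
    import redis      # Redis client

    # Private IPs assigned inside the VPC network (placeholders).
    PG_PRIVATE_IP = "10.8.0.3"
    REDIS_PRIVATE_IP = "10.8.0.4"

    # Connect to the PostgreSQL instance over its private IP.
    pg_conn = psycopg2.connect(
        host=PG_PRIVATE_IP,
        port=5432,
        dbname="appdb",         # placeholder
        user="appuser",         # placeholder
        password="app-secret",  # placeholder; use a secret manager in practice
    )

    # Connect to the Redis instance over its private IP.
    redis_client = redis.Redis(host=REDIS_PRIVATE_IP, port=6379)

    with pg_conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())

    redis_client.set("healthcheck", "ok")
    print(redis_client.get("healthcheck"))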

AWS RDS "Publicly Accessible = No" vs instance in private subnet

I am creating the infrastructure for one of my web applications on AWS. That app needs a MySQL RDS instance. Now I am wondering whether I should simply create the RDS instance in a public subnet and set Publicly Accessible = No, or whether I should create the RDS instance in a private subnet for better security. I am confused about whether either option provides better security than the other.
I have also read that simply assigning a security group to the instance acts as a firewall, so I could have a Publicly Accessible = True RDS instance with a security group allowing access only from my application's EC2 instance. So basically I have the three options mentioned below.
Publicly Accessible = True RDS instance in a public subnet, with a security group allowing access only from the EC2 application instance.
Publicly Accessible = False RDS instance in a public subnet.
RDS instance in a private subnet.
Can anyone explain the pros and cons, in terms of security, of the above approaches?
You are correct that Security Groups can provide sufficient protection for your database, and also for Amazon EC2 instances.
So why does AWS provide public/private subnets? Because many customers want them: that is how enterprises typically organised their networks before moving to the cloud. Traditional firewalls act only between subnets, whereas security groups apply to each instance individually.
So, if you understand how to correctly configure Security Groups, there is no actual need to use Private Subnets at all! Some people, however, feel more comfortable putting resources in private subnets because it provides an additional layer of security.
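For illustration, a minimal boto3 sketch of that security-group-only approach, allowing MySQL traffic to the database only from the application's security group; the IDs below are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    vpc_id = "vpc-0abc..."    # placeholder
    app_sg_id = "sg-0app..."  # placeholder: the EC2 application's security group

    # Security group for the RDS instance.
    db_sg = ec2.create_security_group(
        GroupName="mysql-db-sg",
        Description="Allow MySQL only from the application security group",
        VpcId=vpc_id,
    )

    # Inbound rule: port 3306, with the *application security group* as the source
    # instead of an IP range, so only instances in that group can reach the DB.
    ec2.authorize_security_group_ingress(
        GroupId=db_sg["GroupId"],
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 3306,
                "ToPort": 3306,
                "UserIdGroupPairs": [{"GroupId": app_sg_id}],
            }
        ],
    )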

Default AWS VPC vs A new one?

Should I use the AWS Default VPC, or should I create a new one?
What are the differences, and what are the advantages of creating a new one?
Or, in which situations should I choose one over the other?
The default VPC is a public VPC. It is designed to make it easy to get going with EC2, RDS, and other related AWS services. It has an internet gateway and public subnets with a corresponding route table. So it's a good way to go if you don't know how to set up a VPC, if you only need publicly accessible resources, or if you're playing around or quickly prototyping something.
However, for production or environments in which you need to keep parts of your network private, I would recommend creating your own. This allows you to set up exactly what you need. It is more complicated than just using the default, but if you already know how to set up a VPC, it's the recommended approach.
There would be no real problem with using the default VPC and adding a private subnet, but this is certainly not ideal. The default VPC is designed so that you can quickly deploy resources without having to think about the underlying network. If you are just doing a very basic deployment then it works great, but you are locked into the network model that comes with the default VPC. So if you decide that 172.31.0.0/16 won't work for you, then the default VPC is no longer an option. By creating a custom VPC you can tailor your network exactly the way you want it and avoid overlapping IP addresses if you plan to connect to an on-premises environment or to peer VPCs together. If you don't mind the restrictions on the network, feel free to use the default VPC.

Creating Multiple domains in one VPC, in Amazon AWS

Is it possible to create multiple domains in a single Amazon VPC (Virtual Private Cloud)?
The domain-name and domain-name-servers are part of a DHCP options set, and a single VPC can only have one DHCP options set associated with it at a time, as visible in the AWS Management Console and documented, for example, for ec2-associate-dhcp-options:
After you associate the options with the VPC, any existing instances and all new instances that you launch in that VPC use the options. [...]
However, if your use case allows, you could create additional VPCs instead - by default you can create 5 VPCs per region, see Amazon VPC Limits.
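For illustration, a minimal boto3 sketch of that one-options-set-per-VPC model, where each additional domain gets its own VPC and its own DHCP options set; the domain name and VPC ID below are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # One DHCP options set per domain; each gets associated with its own VPC.
    opts = ec2.create_dhcp_options(
        DhcpConfigurations=[
            {"Key": "domain-name", "Values": ["corp.example.internal"]},  # placeholder
            {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
        ]
    )["DhcpOptions"]

    vpc_id = "vpc-0abc..."  # placeholder: the VPC that should use this domain

    # Associating a new options set replaces the one currently attached to the VPC.
    ec2.associate_dhcp_options(DhcpOptionsId=opts["DhcpOptionsId"], VpcId=vpc_id)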
Assuming you mean multiple IPs as well, the answer to this question is quoted below.
Quoting:
You can now create and attach an additional network interface, known as an elastic network interface (ENI), to any Amazon EC2 instance in your VPC for a total of two network interfaces per instance. More information here:
http://aws.typepad.com/aws/2011/12/new-elastic-network-interfaces-in-the-virtual-private-cloud.html
Yours is a good question if, for instance, you are trying to broker various web services.