Default AWS VPC vs. a new one?

Should I use the AWS Default VPC, or should I create a new one?
What are the differences, and what are the advantages of creating a new one?
Or, in which situations should I choose one over the other?

The default VPC is a public VPC. It is designed to make it easy to get going with EC2/RDS and other related AWS services. It has an internet gateway and public subnets with a corresponding route table. So, it's a good way to go if you don't know how to set up a VPC, you only need publicly accessible resources, or you're just playing around or quickly prototyping something.
However, for production, or environments in which you need to keep parts of your network private, I would recommend creating your own. This allows you to set up exactly what you need. It is more complicated than just using the default, but if you already know how to set up a VPC, it's the recommended approach.

There would be no real problem with using the default VPC and adding a private subnet, but it is certainly not ideal. The default VPC is designed so that you can quickly deploy resources without having to think about the underlying network. If you are just doing a very basic deployment, it works great. But you are locked into the network model that comes with the default VPC, so if you decide that 172.31.0.0/16 won't work for you, the default VPC is no longer an option. By creating a custom VPC you can tailor your network exactly the way you want it and avoid overlapping IP ranges if you plan to connect to an on-premises environment or to peer VPCs together. If you don't mind the restrictions on the network, then feel free to use the default VPC.
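For illustration, here's a minimal boto3 sketch of the kind of custom VPC described above, with one public and one private subnet. The region, AZ, and CIDR ranges are placeholder assumptions; pick ranges that won't overlap with any network you plan to peer or connect with.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # A CIDR chosen to avoid the default VPC's 172.31.0.0/16.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # One public and one private subnet in the same AZ.
    public_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
    )["Subnet"]["SubnetId"]
    private_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a"
    )["Subnet"]["SubnetId"]

    # An internet gateway plus a 0.0.0.0/0 route makes the public subnet
    # public; the private subnet simply never gets such a route.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet)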

Related

Bastion Server for all AWS instances

I have more than 30 production Windows servers across all AWS regions. I would like to connect to all servers from one base bastion host. Can anyone please let me know which is a good choice? How can I set up one bastion host to communicate with all servers that are in different regions and different VPCs? Kindly, can anyone give advice on this?
First of all, I would question what you are trying to achieve with a single bastion design. For example, if all you want is to execute automation commands or patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager, respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with strong security principles built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
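As a rough sketch of the Run Command approach (the instance ID and the PowerShell command are placeholders), something like this inspects or patches Windows instances without any bastion, assuming they are SSM-managed:

    import time

    import boto3

    # SSM is regional, so create one client per region you operate in.
    ssm = boto3.client("ssm", region_name="us-east-1")

    # AWS-RunPowerShellScript is a built-in document for Windows targets.
    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
    )
    command_id = response["Command"]["CommandId"]

    # Give the invocation a moment to register, then fetch the result.
    time.sleep(2)
    result = ssm.get_command_invocation(
        CommandId=command_id, InstanceId="i-0123456789abcdef0"
    )
    print(result["Status"], result.get("StandardOutputContent", ""))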
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs (across regions), one way to connect them all and access them from your bastion's VPC would be to set up inter-region Transit Gateway peering. Among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise, it will be impossible to properly arrange the routing tables.
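A hedged sketch of that wiring with boto3 (all IDs, the account number, and regions below are placeholders) might look like the following; each region gets its own transit gateway, and the two gateways are then peered:

    import boto3

    # Transit gateways are regional: create one in each region involved.
    ec2_use1 = boto3.client("ec2", region_name="us-east-1")
    ec2_euw1 = boto3.client("ec2", region_name="eu-west-1")

    tgw_use1 = ec2_use1.create_transit_gateway(Description="hub us-east-1")[
        "TransitGateway"]["TransitGatewayId"]
    tgw_euw1 = ec2_euw1.create_transit_gateway(Description="hub eu-west-1")[
        "TransitGateway"]["TransitGatewayId"]
    # (In practice, wait for each gateway to become "available" first.)

    # Attach a VPC to its regional gateway (placeholder VPC/subnet IDs).
    ec2_use1.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_use1,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
    )

    # Peer the two gateways across regions; the peer side must accept the
    # attachment, and both sides still need routes to each other's CIDRs.
    ec2_use1.create_transit_gateway_peering_attachment(
        TransitGatewayId=tgw_use1,
        PeerTransitGatewayId=tgw_euw1,
        PeerAccountId="111111111111",  # placeholder account ID
        PeerRegion="eu-west-1",
    )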
Secondly, you need to make sure that access from your bastion is allowed in the inbound rules of your target's security group. Since you are dealing with peered VPCs, you will need to allow that inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all instances in private subnets. From private subnets you can still reach the internet via NAT Gateways (or NAT instances) while staying protected from unauthorized external access attempts. Therefore, if your bastion is in a private subnet, you can use SSM's capability to establish a port-forwarding session to your local machine. That way you can connect even though the bastion itself stays secured in a private subnet.
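For that port-forwarding idea, an interactive tunnel is normally opened with the AWS CLI plus the Session Manager plugin, but the underlying API call looks roughly like this (the bastion instance ID and ports are placeholders):

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Forward local port 13389 to RDP (3389) on a bastion in a private
    # subnet. This call only creates the session; actually streaming the
    # tunnel requires the Session Manager plugin, e.g.:
    #   aws ssm start-session --target i-0123456789abcdef0 \
    #       --document-name AWS-StartPortForwardingSession \
    #       --parameters portNumber=3389,localPortNumber=13389
    session = ssm.start_session(
        Target="i-0123456789abcdef0",  # placeholder bastion instance ID
        DocumentName="AWS-StartPortForwardingSession",
        Parameters={"portNumber": ["3389"], "localPortNumber": ["13389"]},
    )
    print(session["SessionId"])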
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges to your AWS account. So it would be wise to consider what practical problem you are trying to solve (it is not shared in the question). Afterwards, you could evaluate whether there is an applicable managed service, like SSM, already provided by AWS. In the end, from a security perspective, granting access to all instances from a single bastion might not be best practice: if your bastion is compromised for whatever reason, you have effectively compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of the potential solution.

Alternative to AWS's Security groups in GCP?

Is there an alternative to AWS's security groups in the Google Cloud Platform?
The following is my situation:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want to do is make a 'security group' of sorts so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
In AWS, only services that are part of the same security group can access each other.
I'm not very sure, but I guess in GCP I need to make use of firewall rules.
If I'm correct could someone please guide me as to how to go about this? And if I'm wrong could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups. More details can be found in the GCP firewall documentation. You can place your PostgreSQL database, Redis instance, and Node.js server inside the GCP VPC.
Make the Node.js server available to the public via DNS.
Set a default-allow-internal rule so that only the services inside the VPC can access each other (blocking public access to the DB and Redis).
As an alternative approach, you may also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the solution above is recommended.
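As a sketch of the recommended internal-only rule with the google-cloud-compute Python client (the project, network name, and source range are assumptions), allowing only in-VPC traffic to the database ports could look like this:

    from google.cloud import compute_v1  # pip install google-cloud-compute

    project = "my-project"  # placeholder project ID
    network = "my-vpc"      # placeholder VPC network name

    rule = compute_v1.Firewall()
    rule.name = "allow-internal-db"
    rule.network = f"projects/{project}/global/networks/{network}"
    rule.direction = "INGRESS"

    # Only allow the PostgreSQL and Redis ports...
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = ["5432", "6379"]
    rule.allowed = [allowed]
    # ...and only from addresses inside the VPC (placeholder range).
    rule.source_ranges = ["10.128.0.0/9"]

    client = compute_v1.FirewallsClient()
    operation = client.insert(project=project, firewall_resource=rule)
    operation.result()  # wait for the rule to be created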
Security groups inside AWS are instance-attached, firewall-like components. So, for example, you can have a security group at the instance level, similar to configuring iptables on a regular Linux host.
On the other hand, Google firewall rules apply at the network level. To match the instance-level granularity of a security group, your alternatives are host-based firewalls such as one of the following:
firewalld
nftables
iptables
The thing is that in AWS, network ACLs play a similar role at the subnet level, so like Google firewall rules they apply to the network rather than to individual instances. Still, security groups provide a bit more granularity, since you can attach different security groups to different instances, while in GCP firewall rules are defined per network. At that level, protection should come from firewall rules on the network and its subnets.
Thanks @amsh for the solution to the problem. But there were a few more things that needed to be done, so I'll list them out here in case anyone needs them in the future:
Create a VPC network and add a subnet for a particular region (Eg: us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network in the same region.
In Cloud Run, add the created VPC connector in the Connections section.
Create the PostgreSQL and Redis instance in the same region as that of the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
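The server in the question is Node.js, but as a language-neutral illustration (using Python clients; the IPs, database name, and credentials are placeholders), connecting over those private IPs is just an ordinary client connection:

    import os

    import psycopg2  # pip install psycopg2-binary
    import redis     # pip install redis

    # Private IPs assigned inside the VPC network (placeholders).
    PG_HOST = os.environ.get("PG_PRIVATE_IP", "10.8.0.3")
    REDIS_HOST = os.environ.get("REDIS_PRIVATE_IP", "10.8.0.4")

    # Both connections ride over the VPC connector, so neither instance
    # needs a public IP.
    pg = psycopg2.connect(
        host=PG_HOST, port=5432, dbname="app",
        user="app", password=os.environ["PG_PASSWORD"],
    )
    r = redis.Redis(host=REDIS_HOST, port=6379)

    with pg.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
    print(r.ping())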
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, or else they won't connect via the private IP.
Avoid changing the firewall rules: leave the default rules alone unless you specifically need different behavior.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.

Best way to connect AWS EC2 instances to avoid failed connections on IP change

I have four EC2 instances: three of them running API services and another running a user interface (UI). The UI instance obtains its data over API calls to the other instances. Right now everything works fine because I'm using the public IP provided for each EC2 instance for the API calls. But my concern is: what happens if the public IP of a service changes (for any reason)? Then my application goes down, because the UI cannot get the data from the services. After a little research I found what appears to be a solution: use a VPC to connect the EC2 instances over private IPs (because they are static) and associate the UI instance with an Elastic IP (no problem here). So, I have some questions:
1) I ran a test putting all instances in the same VPC (and subnet), but when I ping from one to another the pings fail. Is my approach right, or am I missing something?
2) I read about a couple of other options but I'm not sure which is best: maybe I have to use an API Gateway? Or a NAT Gateway?
3) What is the standard practice for communicating between EC2 instances privately?
1) I ran a test putting all instances in the same VPC (and subnet), but when I ping from one to another the pings fail. Is my approach right, or am I missing something?
Security groups deny all inbound traffic by default, which includes ICMP, so pings fail until you allow them. Please enable ping traffic (ICMP) in the security group from the IPs you are trying to connect from; allowing the entire CIDR block of the VPC for all traffic will make your life a lot easier, but please make sure you do that in a test environment only.
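For reference, a minimal boto3 sketch of that rule (the security group ID and CIDR are placeholders); for ICMP, FromPort/ToPort carry the ICMP type and code, with -1 meaning all:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow ping (all ICMP types/codes) from the VPC's CIDR block.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "icmp",
            "FromPort": -1,  # ICMP type: -1 = all
            "ToPort": -1,    # ICMP code: -1 = all
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # placeholder VPC CIDR
        }],
    )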
2) I read about a couple of other options but I'm not sure which is best: maybe I have to use an API Gateway? Or a NAT Gateway?
As you mentioned, your concern is that the public IP of an instance will change (it definitely will if the instance stops and starts for any reason). You could use an Elastic IP for each of your instances, but with that approach all of your instances would be exposed to the internet, so going with private IPs is the best option.
3) What is the standard practice for communicating between EC2 instances privately?
It depends on the use case. If your instances are in the same VPC, no extra configuration is required; you only need to make sure the security groups, network ACLs, and firewall configuration are correct.
If your instances are in different VPCs, you can use VPC peering or a Transit Gateway.
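One way to make the private-IP wiring robust is to look the addresses up at startup instead of hard-coding them. A small boto3 sketch (the tag key/value here are hypothetical) could be:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Find the running API instances by a hypothetical "role" tag and
    # collect their private IPs, which stay stable across stop/start.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:role", "Values": ["api"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    private_ips = [
        instance["PrivateIpAddress"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]
    print(private_ips)  # e.g. ["10.0.2.15", "10.0.2.16"]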
1.) You need to update your security groups to permit ICMP traffic.
Go to your VPC -> Select Security Groups -> Select the relevant security group -> Add an inbound/outbound rule for all traffic with the CIDR of the instance subnet.
2.) The internal network is the better way as long as all your traffic is going to stay internal.
Thanks

When to set up a nondefault VPC in AWS?

When creating an EC2 instance (or some other kind of resource) on AWS, a default VPC appears as an option.
Also, as another option, a VPC can be created beforehand and selected during EC2 instance creation, etc.
So, in which use cases should we create a new VPC instead of using the default one?
The AWS Documentation does a pretty good job describing how they create the default VPC.
When we create a default VPC, we do the following to set it up for you:
Create a VPC with a size /16 IPv4 CIDR block (172.31.0.0/16). This provides up to 65,536 private IPv4 addresses.
Create a size /20 default subnet in each Availability Zone. This provides up to 4,096 addresses per subnet, a few of which are reserved for our use.
Create an internet gateway and connect it to your default VPC.
Create a main route table for your default VPC with a rule that sends all IPv4 traffic destined for the internet to the internet gateway.
Create a default security group and associate it with your default VPC.
Create a default network access control list (ACL) and associate it with your default VPC.
Associate the default DHCP options set for your AWS account with your default VPC.
This is great for simple applications and proofs of concept, but not for production deployments. A DB instance, for example, should not be publicly available, and should therefore be placed in a private subnet, something the default VPC does not have. You would then create some sort of backend instance that connects to the DB and exposes a REST interface to the public.
Another reason may be if you are running multiple environments (DEV, QA, PROD) that are all copies of each other. In this case you would want them to be totally isolated from each other, so as not to risk a bad deployment in the DEV environment accidentally affecting the PROD environment.
The list can go on with other reasons, and there are probably some that are better than those that I have presented you with today.
If you understand VPC reasonably well, then you should always create your own VPC. This is because you have full control over the structure.
I suspect that the primary reason for providing a Default VPC is so that people can launch an Amazon EC2 instance without having to understand VPCs first.
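If you want to check which of your VPCs is the default one described above, a quick boto3 lookup works; isDefault is a documented DescribeVpcs filter:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The default VPC is flagged with isDefault=true.
    vpcs = ec2.describe_vpcs(
        Filters=[{"Name": "isDefault", "Values": ["true"]}]
    )["Vpcs"]

    for vpc in vpcs:
        print(vpc["VpcId"], vpc["CidrBlock"])  # typically 172.31.0.0/16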

Connecting from GCP to AWS instances behind VPN

I am trying to find a simple solution to the following problem. I have 2 microservices in AWS on machines with static IPs (which won't change) behind a VPN (so they're visible to other AWS instances in the same security group), and then I have another microservice on GCP (Kubernetes) which needs to access them (basically for very simple and very occasional HTTP POST requests). What would be the easiest way to do so? I was thinking about adding the IP addresses of my Kubernetes pool instances to the inbound rules of the AWS security group for those two microservices, but that is dangerous because of the dynamic nature of those addresses...
I found some solutions using tunnels and so on, but most of the guides were either outdated or didn't suit my needs. They require, for example, creating a new VPC, while I want to reuse the existing one. I'm sure that would work, but it seems like huge overkill to me. Couldn't I e.g. somehow leverage Ingress or some simple proxy container?
Thanks!
I solved it by using two proxies.