Just starting to move our first application to AWS. We have one public subnet (web servers) and two private subnets (for our app instances and databases). My question is where we should put our log management (Graylog) and monitoring instances: should they go into the app subnet, or should we create another private subnet?
Yes, you can use the same app subnet, but it is good practice to have a separate subnet per class of service. It helps in scenarios like debugging, where you can immediately tell from the CIDR range whether a request came from the app subnet or the monitoring subnet, and it gives you the flexibility to block traffic with Network ACLs, which apply at the subnet level.
You can very well place the logging/monitoring servers in one of the existing private subnets, as long as the subnet's CIDR block is sized with enough IP addresses.
Refer to this documentation for VPC/subnet sizing:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4
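For quick sizing checks, note that AWS reserves the first four and the last IP address in every subnet, so a subnet's usable capacity is its total address count minus five. A small sketch with Python's standard ipaddress module (the CIDR blocks are just examples):

```python
import ipaddress

def usable_ips(cidr: str) -> int:
    """Usable addresses in an AWS subnet: total minus the 5 AWS reserves."""
    return ipaddress.ip_network(cidr).num_addresses - 5

# Common subnet sizes:
for cidr in ("10.0.1.0/28", "10.0.1.0/24", "10.0.2.0/23"):
    print(cidr, "->", usable_ips(cidr), "usable IPs")
# 10.0.1.0/28 -> 11, 10.0.1.0/24 -> 251, 10.0.2.0/23 -> 507
```

Even a /24 leaves plenty of room for a small logging/monitoring subnet, so dedicating one costs very little address space.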
Question
Can we make a route table which has both igw-id (Internet gateway ID) and vgw-id (VPN gateway ID)? If we can't/shouldn't do it, why?
Example
10.0.0.0/16 --> Local
172.16.0.0/12 --> vgw-id
0.0.0.0/0 --> igw-id
Reference
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario3.html
(I mainly refer to "Overview" section and "Alternate routing" section.)
Explanation
The page above shows a scenario where one VPC has one public subnet, one private subnet and one VPN gateway. In this example, the VPN gateway is only reached from instances in the private subnet (meaning the private subnet's route table has no "igw-id" entry). I wonder why one route table doesn't have both "igw-id" and "vgw-id".
Yes, we can have both igw and vgw. In fact, the example above would be perfect for a public subnet which can connect to your corporate network through Direct Connect or a site-to-site VPN, and also have internet access and be accessible from the internet.
Now, whether you would want this or not is an architectural decision. In the example scenario given by AWS, they segregate subnets by having a public subnet (with the igw) that can contain services accessible from the internet, and a private subnet for other backend services (example: databases). These backend services can be accessed from the corporate network using a site-to-site VPN, which is why that subnet has the vgw.
Yes, you can have a route table with the 3 routes you specified. However, bear in mind that with a 0.0.0.0/0 --> igw-id route, hosts on the internet can initiate connections to the instances in that subnet. Typically, you would want to secure the instances in any subnet that has a route to your on-premises network, and not expose them to the internet. If those instances need outbound internet access, AWS recommends NAT devices for your VPC.
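The reason all three routes can coexist is that VPC routing uses longest-prefix match: the most specific route covering the destination wins. A small Python sketch of that lookup, using the example table above (the targets are just labels, not real gateway IDs):

```python
import ipaddress

# Route table from the example: destination CIDR -> target
routes = {
    "10.0.0.0/16": "local",
    "172.16.0.0/12": "vgw-id",
    "0.0.0.0/0": "igw-id",
}

def lookup(dest_ip: str) -> str:
    """Return the target of the most specific matching route."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(c) for c in routes
               if dest in ipaddress.ip_network(c)]
    best = max(matches, key=lambda n: n.prefixlen)  # longest prefix wins
    return routes[str(best)]

print(lookup("10.0.4.20"))     # stays in the VPC -> local
print(lookup("172.16.9.1"))    # corporate network -> vgw-id
print(lookup("93.184.216.34")) # everything else -> igw-id
```

So traffic for the corporate range is captured by the /12 before the catch-all 0.0.0.0/0 route ever applies; the routes don't conflict, which is why the question is architectural rather than technical.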
While it's technically possible, the main reason not to do that is a concept called network segmentation. It follows a "defense in depth" approach in which network layers are separated into a "public" and a "private" part (more zones are also often used, e.g. a third tier for data storage and a fourth for the infrastructure used to manage the other three tiers).
Since your public subnet is directly exposed to the internet, it's the most likely to be breached when you misconfigure something. If you had routes in your public subnet to your VPN gateway, a single breach would enable an attacker to attack your on-prem environment as well.
This is why it's best practice to add one or two network tiers to your architecture. Even when the public tier is compromised, an attacker still needs to get into an instance in the private tier before they can attack your on-prem environment.
It's recommended that your public subnet contain only an elastic load balancer or similar when possible. It's a pretty good way to reduce the attack surface of your application servers (less chance that you expose an unwanted port, etc.).
A nice guide on this can be found here. It doesn't include VPNs, but you can consider them part of the app layer.
I created 2 private subnets, PRIVATEA and PRIVATEB, in a custom VPC. These subnets are in different Availability Zones. I added an EC2 instance in PRIVATEA; the instance already has an ENI, eth0, attached to it. Next I created an ENI in the other subnet, PRIVATEB, and attached it to the EC2 instance. The setup was successful. Basically I followed a blog tutorial for this setup, which said that the secondary interface will allow traffic for another group, i.e. Management.
But I am not able to relate any use case to it. Could anyone please explain when we would use such a setup? Is this the correct forum for this question?
Thanks
An Elastic Network Interface (ENI) is a virtual network card that connects an Amazon EC2 instance to a subnet in an Amazon VPC. In fact, ENIs are also used by Amazon RDS databases, AWS Lambda functions, Amazon Redshift databases and any other resource that connects to a VPC.
Additional ENIs can be attached to an Amazon EC2 instance. These extra ENIs can be attached to different subnets in the same Availability Zone. The operating system can then route traffic out to different ENIs.
Security Groups are actually associated with ENIs (not instances). Thus, different ENIs can have different rules about traffic that goes in/out of an instance.
An example for using multiple ENIs is to create a DMZ, which acts as a perimeter through which traffic must pass. For example:
Internet --> DMZ --> App Server
In this scenario, all traffic must pass through the DMZ, where traffic is typically inspected before being passed onto the server. This can be implemented by using multiple ENIs, where one ENI connects to a public subnet to receive traffic and another ENI connects to a private subnet to send traffic. The Network ACLs on the subnets can be configured to disallow traffic passing between the subnets, so that the only way traffic can flow from the public subnet to the private subnet is via the DMZ instance, since it is connected to both subnets.
Another use-case is software that attaches a license to a MAC address. Some software products do this because MAC addresses are (meant to be) unique across all networking devices (although some devices allow it to be changed). Thus, they register their software under the MAC address attached to a secondary ENI. If that instance needs to be replaced, the secondary ENI can be moved to another instance without the MAC address changing.
I'm new to setting up an AWS VPC for a 3-tier web application. I created a VPC with the CIDR block 10.0.0.0/16. What is the best practice for subnet segmentation in an AWS VPC for a 3-tier web application? I have an ELB with 2 EC2 instances, and RDS and S3 in the backend.
Please advise!! Thanks.
A common pattern you will find is:
VPC with /16 (eg 10.0.0.0/16, which gives all 10.0.x.x addresses)
Public subnets with /24 (eg 10.0.5.0/24, which gives all 10.0.5.x addresses)
Private subnets with /23 (eg 10.0.6.0/23, which gives all 10.0.6.x and 10.0.7.x addresses) -- this is larger because most resources typically go into private subnets and it's a pain to have to make them bigger later
Of course, you can change these sizes to whatever you want within the allowed limits.
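As a sanity check on a layout like this, Python's ipaddress module can verify that the chosen blocks fit inside the VPC and don't overlap (the specific ranges below are illustrative):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Two /24 public subnets and two /23 private subnets (one of each per AZ):
public = [ipaddress.ip_network("10.0.0.0/24"), ipaddress.ip_network("10.0.1.0/24")]
private = [ipaddress.ip_network("10.0.2.0/23"), ipaddress.ip_network("10.0.4.0/23")]

# Sanity checks: everything fits inside the VPC block, and nothing overlaps.
all_subnets = public + private
assert all(s.subnet_of(vpc) for s in all_subnets)
assert not any(a.overlaps(b) for i, a in enumerate(all_subnets)
               for b in all_subnets[i + 1:])

for s in all_subnets:
    print(s, "-", s.num_addresses - 5, "usable IPs (AWS reserves 5 per subnet)")
```

Running checks like this before creating the subnets catches overlap mistakes that are tedious to fix once resources are launched.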
Rather than creating a 3-tier structure, consider a 2-tier structure:
1 x Public Subnet per AZ for the Load Balancer (and possibly a Bastion/Jump Box)
1 x Private Subnet per AZ for everything else — application, database, etc.
There is no need to have apps and databases in separate private subnets unless you are super-paranoid. You can use Security Groups to provide the additional layer of security without using separate subnets. This also means fewer IP addresses are wasted (eg in a partially-used subnet).
Of course, you could just use Security Groups for everything and just use one tier, but using private subnets gives that extra level of assurance that things are configured safely.
The way we do it:
We create a VPC that is a /16, e.g. 172.20.0.0/16. Do not use the default VPC.
Then we create a set of subnets for each application “tier”.
Public - Anything with a public IP. Load balancers and NAT gateways are pretty much the only things here.
Web DMZ - Web servers go here. Anything that is a target for the load balancer.
Data - Resources responsible for storing and retrieving data: RDS instances, EC2 database servers, ElastiCache instances.
Private - For resources that are truly isolated from Internet traffic. Management and reporting. You may not need this in your environment.
Subnets are all /24. One subnet per availability zone. So there would be like 3 Public subnets, 3 Web DMZ subnets, etc.
Network ACLs control traffic between the subnets. Public subnets can talk to Web DMZ. Web DMZ can talk to Data. Data subnets can talk to each other to facilitate clustering. Private subnets can’t talk to anybody.
I intentionally keep things very coarse in the Network ACL. I do not restrict specific ports/applications. We do that at the Security Group level.
Pro tip: Align the subnet groups on a /20 boundary to simplify your Network ACL rules. Instead of listing each Data subnet individually, you can list a single /20 that encompasses all Data subnets.
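The /20 alignment trick can be illustrated with Python's ipaddress module: if the Data subnets are carved out of one /20 block, a single NACL rule for that /20 covers them all (the addresses here are illustrative):

```python
import ipaddress

# One /20 block reserved for the Data tier; a single NACL rule covers it.
data_block = ipaddress.ip_network("172.20.32.0/20")   # 172.20.32.0 - 172.20.47.255

# Three /24 Data subnets (one per AZ), all carved from that block.
data_subnets = [ipaddress.ip_network(f"172.20.{third}.0/24")
                for third in (32, 33, 34)]

assert all(s.subnet_of(data_block) for s in data_subnets)
# A subnet from another tier falls outside the block:
assert not ipaddress.ip_network("172.20.48.0/24").subnet_of(data_block)
```

A /20 holds sixteen /24s, so the block also leaves headroom to add more Data subnets later without touching the NACL rule.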
Some people would argue this level of separation is excessive. However I find it useful because it forces people to think about the logical structure of the application. It guards against someone doing something stupid with a Security Group. It’s not bulletproof, but it is a second layer of defense. Also, we sometimes get security audits from customers that expect to see a traditional structure like you would find in an on-prem network.
When setting up an ELB, it would say the following:
You must specify subnets from at least two Availability Zones to increase the availability of your load balancer.
I currently have two VPCs:
WebVPC
public-subnet-us-east-1a
private-subnet-us-east-1b
DatabaseVPC
public-subnet-us-east-1a
private-subnet-us-east-1b
The ELB is only meant for the WebVPC (to serve web traffic). I currently only have one public and one private subnet per VPC, which means I can only provide the ELB with one public subnet from my WebVPC.
Does this mean it is best practice to have at least two public and at least two private subnets?
Your architecture is not Highly Available. It is best practice to replicate services across multiple Availability Zones (AZs) in case there is a failure in one AZ (effectively, if a data center fails).
Also, it is typically best to keep all related services for an application in the same VPC unless you have a particular reason to keep them separate.
Also, security is improved by putting your application in private subnets, with only your load balancer in the public subnets.
Therefore, the recommended architecture would be:
One VPC
A public subnet in AZ-a
A public subnet in AZ-b
A load balancer connected to both public subnets
A private subnet in AZ-a
A private subnet in AZ-b
Your web application running simultaneously in both private subnets (assuming that it can run on multiple Amazon EC2 instances)
Your database running in one of the private subnets, with the ability to fail-over to the other private subnet. Amazon RDS can do this automatically with the Multi-AZ option (additional charges apply).
To learn more about architecting highly scalable solutions, I recommend the ARC201: Scaling Up to Your First 10 Million Users session from the AWS re:Invent conference in 2016 (YouTube, SlideShare, Podcast).
Yes. It is best practice to provide at least two Availability Zones.
If EC2 instances are launched in a private subnet, the load balancer should be launched in a public subnet, i.e. one whose route table has a route to the VPC's internet gateway.
The load balancer receives traffic through the internet gateway and forwards it to the private IPs of the EC2 instances. Only registered EC2 instances will receive traffic from the load balancer.
In your case:
As per best practice, launch the database in a private subnet, not a public one. Both the web tier and the database tier can be in the same VPC. If you have different environments like Dev, Test and Prod, they should each be launched in a different VPC; you can use VPC Peering to connect the VPCs.
Likewise, instead of launching EC2 instances in a public subnet, it is better to launch them in a private subnet, since the load balancer will forward network traffic to them.
This has probably been answered elsewhere but I can't seem to find it!
I have a number of AWS EC2 instances that I am using as part of a project being built and I am now looking into securing the setup a bit. I want to lock down access to some of the ports.
For example, I want one of the instances to act as a database server (hosting MySQL). I want this to be closed to public access but open to access from my other EC2 instances on their private IPs.
I also use AWS auto scaling to add/remove instances as required, and need these to be able to access the DB server without having to manually add their IPs to a list.
Similarly if possible I want to lock down some instances so that they can only accept traffic from an AWS Load Balancer. So port 80 is open on the instance but only for traffic coming from the Load Balancer.
I've looked at specifying the IPs using CIDR notation but can't seem to get it working. From the look of the private IPs being assigned to my instances, the first two octets remain the same and the last two vary. But opening access to all instances that share the first two octets doesn't seem that secure either!
Thanks
What you want to do is all pretty standard stuff, and is extensively documented in the AWS VPC documentation for Virtual Private Clouds. If your EC2 instances are not running in a VPC, they should be.
The link below should help, it seems to be your scenario:
Scenario 2: VPC with Public and Private Subnets (NAT)
The configuration for this scenario includes a VPC with a public subnet and private subnet, and a network address translation (NAT) instance in the public subnet. A NAT instance enables instances in the private subnet to initiate outbound traffic to the Internet. We recommend this scenario if you want to run a public-facing web application, while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
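A standard way to solve both problems above is to reference a security group, rather than a CIDR range, as the traffic source: any instance carrying the referenced group, including ones the auto scaler launches later, is allowed automatically. As a sketch, this is the shape of the ingress-rule structure the EC2 API (for example, boto3's authorize_security_group_ingress call) accepts; the group IDs below are hypothetical placeholders:

```python
# Allow MySQL (3306) only from instances that carry the web tier's
# security group, instead of whitelisting IP ranges.
# Group IDs are hypothetical placeholders, not real resources.
db_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,
    "ToPort": 3306,
    "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],  # web tier SG
}

# The same pattern locks port 80 down to the load balancer's SG:
web_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 80,
    "ToPort": 80,
    "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # ELB SG
}
```

With rules like these, the database group admits MySQL traffic only from members of the web tier's group, and the web group admits HTTP only from the load balancer's group, with no IP lists to maintain as instances come and go.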