Can we have two Elastic Beanstalk applications along with RDS database instances in one VPC?
What I am trying to do is the following:
1) EB App1: Web tier, which handles web requests
2) EB App2: Worker (application) tier, which performs the processing
3) RDS DB instances: the database tier
I want to put each of the above in one VPC and assign each its own VPC security group, thereby controlling the flow of traffic between the tiers.
Also, can I span these security groups across multiple availability zones?
Do Elastic Beanstalk and VPC allow the proposed design? Is it a good design, or am I overcomplicating things?
Thanks
MHF
I want to put each of the above in one VPC and assign each its own
VPC security group, thereby controlling the flow of traffic between
the tiers.
Yes of course, that's exactly how a VPC works.
Also, can I span these security groups across multiple availability
zones?
Security groups are VPC-wide; they automatically span all availability zones. To narrow a security group to a specific availability zone, you would have to create security group rules that specify that zone's subnet IP ranges.
Do Elastic Beanstalk and VPC allow the proposed design? Is it a good
design, or am I overcomplicating things?
Yes, this is just a normal AWS VPC configuration. What you are proposing is the normal way to do this.
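As a sketch of that layout, here is how per-tier security groups might reference one another in a CloudFormation template. All resource names (`MyVpc`, `WebSecurityGroup`, etc.) and port numbers are illustrative assumptions, not part of the question:

```yaml
# Each tier gets its own security group, and each group only admits
# traffic from the tier directly above it.
WebSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Web tier - accepts HTTP from the Internet
    VpcId: !Ref MyVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0

WorkerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Worker tier - accepts traffic only from the web tier
    VpcId: !Ref MyVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8080
        ToPort: 8080
        SourceSecurityGroupId: !Ref WebSecurityGroup

DbSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Database tier - accepts MySQL only from the worker tier
    VpcId: !Ref MyVpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306
        SourceSecurityGroupId: !Ref WorkerSecurityGroup
```

Because the rules reference security group IDs rather than subnet CIDRs, they apply across all availability zones automatically.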
I am reading up on AWS Auto Scaling Groups and trying to understand (from a network-perspective) how the following resources all fit together:
Auto Scaling Group (ASG)
Application Load Balancer (ALB)
Individual EC2 instances sitting behind the ALB
ALB Listeners
ALB Target Groups
Security Group(s) enforcing which IPs/ports are allowed access to the EC2 instances
I understand what each of these does in theory, but in practice I'm having trouble seeing the forest for the trees in terms of how they all snap together. For example: do I configure the EC2 instances to be members of the Security Group? Or do I do that at the load-balancer level? If I attach the ALB to the Auto Scaling Group, then why would I need to do any additional configuration with an ALB Target Group? When it comes to routing, do I route port 80 traffic to the ALB or the Auto Scaling Group?
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start. Thanks for any help!
do I configure the EC2 instances to be members of the Security Group?
Or do I do that at the balancer-level?
The EC2 instances should be a member of one security group. The Load Balancer should be a member of another security group. The Load Balancer's security group should allow incoming traffic from the Internet. The EC2 instances should allow incoming traffic from the load balancer.
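A minimal CloudFormation sketch of that chaining, assuming hypothetical resource names (`AlbSecurityGroup`, `InstanceSecurityGroup`) and HTTP on port 80:

```yaml
AlbSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: ALB - accepts HTTP from the Internet
    VpcId: !Ref Vpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0    # open to the world

InstanceSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: EC2 instances - accept traffic only from the ALB
    VpcId: !Ref Vpc
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        SourceSecurityGroupId: !Ref AlbSecurityGroup  # not 0.0.0.0/0
```

The instance group references the ALB's group ID as its source, so only traffic forwarded by the load balancer reaches the instances.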
If I attach the ALB to the Auto Scaling Group, then why would I need
to do any additional configuration with an ALB Target Group?
If you are using an auto-scaling group to create the instances, then you don't have to do any manual updates to the target group, the auto-scaling group will handle those updates for you.
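Sketching this in CloudFormation (resource names are assumptions): attaching the target group to the Auto Scaling group via `TargetGroupARNs` is what makes the registration automatic.

```yaml
WebAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "2"
    MaxSize: "6"
    VPCZoneIdentifier:          # subnets the instances launch into
      - !Ref PrivateSubnetA
      - !Ref PrivateSubnetB
    LaunchTemplate:
      LaunchTemplateId: !Ref WebLaunchTemplate
      Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
    TargetGroupARNs:
      - !Ref WebTargetGroup     # ASG registers/deregisters instances here
```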
When it comes to routing, do I route port 80 traffic to the ALB or the
Auto Scale Group?
An Auto Scaling group is not a resource that exists in your network. It is a construct within AWS that creates and removes EC2 instances for you based on metrics. The traffic goes to the load balancer, and the load balancer sends it to the EC2 instances in the target group.
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start.
It's a bit much to ask somebody here to spend their free time creating a diagram for you. I suggest looking at the AWS WordPress reference implementations, which AWS tends to use to demonstrate auto-scaled web server environments.
See the "WordPress scalable and durable" CloudFormation template example here.
See the AWS WordPress Reference Architecture project here, which includes a diagram.
Why should I configure an AWS ECS service or an EC2 instance with two or more private subnets from the same VPC? What would be the benefit of doing this instead of configuring it within just one subnet? Is it for availability? I've read the documentation, but it was not clear about this.
Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html
This is generally to distribute your ECS service across multiple availability zones, allowing your service to maintain high availability.
A subnet is bound to a single AZ, so it is assumed each subnet is in a different AZ.
By splitting across multiple subnets, during an outage load can be shifted to launch containers entirely in other subnets (assuming they're in different AZs).
This is generally encouraged for all services that support multiple availability zones.
More information on Amazon ECS availability best practices is available on the AWS blog.
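For illustration, this is roughly where multiple subnets appear in an ECS service definition (CloudFormation, with assumed resource names); listing one subnet per AZ is what lets the scheduler spread tasks across zones:

```yaml
MyEcsService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref EcsCluster
    TaskDefinition: !Ref TaskDef
    DesiredCount: 3
    LaunchType: FARGATE
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: DISABLED
        SecurityGroups:
          - !Ref ServiceSecurityGroup
        Subnets:                    # one per AZ so tasks spread across zones
          - !Ref PrivateSubnetA     # e.g. in us-east-1a
          - !Ref PrivateSubnetB     # e.g. in us-east-1b
          - !Ref PrivateSubnetC     # e.g. in us-east-1c
```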
I have been reading and watching everything [1] I can related to designing highly available VPCs. I have a couple of questions. For a typical 3-tier application (web, app, db) that needs HA within a single region it looks like you need to do the following:
Create one public subnet in each AZ.
Create one web, app, and db private subnet in each AZ.
Ensure your web, app and db EC2 instances are split evenly between AZs (for this post assume the DBs are running hot/hot and the apps are stateless).
Use an ALB / autoscaling to distribute load across the web tier. From what I read ALBs provide HA across AZs within the same region.
Utilize Internet gateways to provide a target route for Internet traffic.
Use NAT gateways to source-NAT the private-subnet instances so they can get out to the Internet.
With this approach, do you need to deploy one Internet gateway and one NAT gateway per AZ? If you only deploy one, what happens when you have an AZ outage? Are these services AZ-aware? (I can't find a good answer to this question.) Any and all feedback (glad to RTFM) is welcomed!
Thank you,
- Mick
[1] Last two resources I reviewed
Deploying production grade VPCs
High Availability Application Architectures in Amazon VPC
You need a NAT gateway in each AZ, as its redundancy is limited to a single AZ. Here is a snippet from the official documentation:
Each NAT gateway is created in a specific Availability Zone and
implemented with redundancy in that zone.
You need just a single Internet gateway per VPC, as it is redundant across AZs and is a VPC-level resource. Here is a snippet from the official Internet gateway documentation:
An internet gateway is a horizontally scaled, redundant, and highly
available VPC component that allows communication between instances in
your VPC and the internet. It therefore imposes no availability risks
or bandwidth constraints on your network traffic.
Here is a highly available architecture diagram showing a NAT gateway per AZ and the Internet gateway as a VPC-level resource:
Image source: https://aws.amazon.com/quickstart/architecture/vpc/
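A sketch of the per-AZ wiring in CloudFormation (resource names are assumptions): one NAT gateway per AZ, each private route table defaulting to its local AZ's NAT gateway, and a single Internet gateway for the whole VPC.

```yaml
InternetGateway:            # just one per VPC; redundant across AZs
  Type: AWS::EC2::InternetGateway

NatGatewayA:                # one per AZ, placed in that AZ's public subnet
  Type: AWS::EC2::NatGateway
  Properties:
    SubnetId: !Ref PublicSubnetA
    AllocationId: !GetAtt ElasticIpA.AllocationId

PrivateDefaultRouteA:       # AZ-a private subnets route out via AZ-a's NAT
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTableA
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGatewayA
# ...repeat NatGatewayB / PrivateDefaultRouteB for each additional AZ
```

Keeping each AZ's default route on its own NAT gateway means an outage in one AZ does not break outbound traffic from the others.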
I'm new to AWS VPC setup for a 3-tier web application. I created a VPC with the CIDR block 10.0.0.0/16. What is the best practice for subnet segmentation in an AWS VPC for a 3-tier web application? I have an ELB with 2 EC2 instances, plus RDS and S3 in the backend.
Please advise! Thanks.
A common pattern you will find is:
VPC with /16 (eg 10.0.0.0/16, which gives all 10.0.x.x addresses)
Public subnets with /24 (eg 10.0.5.0/24, which gives all 10.0.5.x addresses)
Private subnets with /23 (eg 10.0.6.0/23, which gives all 10.0.6.x and 10.0.7.x) -- this is larger because most resources typically go into private subnets and it's a pain to have to make it bigger later
Of course, you can change these above sizes to whatever you want within allowed limits.
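As a quick sanity check on those sizes, Python's standard `ipaddress` module can verify how the example ranges nest (the specific CIDRs are just the ones above, not a recommendation):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# A /24 public subnet: 256 addresses (AWS reserves 5 of them per subnet)
public = ipaddress.ip_network("10.0.5.0/24")
print(public.num_addresses)      # 256

# A /23 private subnet is twice the size: it spans 10.0.6.x AND 10.0.7.x
private = ipaddress.ip_network("10.0.6.0/23")
print(private.num_addresses)     # 512
hosts = list(private.hosts())
print(hosts[0], hosts[-1])       # 10.0.6.1 10.0.7.254

# Both ranges fit inside the VPC's /16
print(public.subnet_of(vpc), private.subnet_of(vpc))  # True True
```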
Rather than creating a 3-tier structure, consider a 2-tier structure:
1 x Public Subnet per AZ for the Load Balancer (and possibly a Bastion/Jump Box)
1 x Private Subnet per AZ for everything else — application, database, etc.
There is no need to have apps and databases in separate private subnets unless you are super-paranoid. You can use security groups to provide that additional layer of security without using separate subnets. This also means fewer IP addresses are wasted (eg in a partially-used subnet).
Of course, you could just use Security Groups for everything and just use one tier, but using private subnets gives that extra level of assurance that things are configured safely.
The way we do it:
We create a VPC that is a /16, e.g. 172.20.0.0/16. Do not use the default VPC.
Then we create a set of subnets for each application “tier”.
Public - Anything with a public IP. Load balancers and NAT gateways are pretty much the only thing here.
Web DMZ - Web servers go here. Anything that is a target for the load balancer.
Data - Resources responsible for storing and retrieving data: RDS instances, EC2 database servers, ElastiCache instances.
Private - For resources that are truly isolated from Internet traffic. Management and reporting. You may not need this in your environment.
Subnets are all /24. One subnet per availability zone. So there would be like 3 Public subnets, 3 Web DMZ subnets, etc.
Network ACLs control traffic between the subnets. Public subnets can talk to Web DMZ. Web DMZ can talk to Data. Data subnets can talk to each other to facilitate clustering. Private subnets can’t talk to anybody.
I intentionally keep things very coarse in the Network ACL. I do not restrict specific ports/applications. We do that at the Security Group level.
Pro tip: Align the subnet groups on a /20 boundary to simplify your Network ACL rules. Instead of listing each data subnet individually, you can list a single /20 that encompasses all the data subnets.
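To illustrate the /20 alignment, here is a short sketch using Python's standard `ipaddress` module (the 172.20.32.0/20 block is a made-up example consistent with the /16 above):

```python
import ipaddress

# Suppose the data "tier" is allotted the /20 block 172.20.32.0/20,
# and each AZ's data subnet is a /24 carved out of it.
data_block = ipaddress.ip_network("172.20.32.0/20")
data_subnets = list(data_block.subnets(new_prefix=24))[:3]
print([str(s) for s in data_subnets])
# ['172.20.32.0/24', '172.20.33.0/24', '172.20.34.0/24']

# A single NACL rule on 172.20.32.0/20 covers every data subnet:
print(all(s.subnet_of(data_block) for s in data_subnets))  # True
```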
Some people would argue this level of separation is excessive. However, I find it useful because it forces people to think about the logical structure of the application. It guards against someone doing something stupid with a Security Group. It's not bulletproof, but it is a second layer of defense. Also, we sometimes get security audits from customers who expect to see a traditional structure like you would find in an on-prem network.
I am trying to deploy Spring Boot Application with AWS Elastic Beanstalk. Instead of using default settings for the environment, I modified something under "VPC". After picking availability zone and one of the security groups for the VPC, I created the environment.
However when I looked at the instance detail after it is created, I noticed it is tied to two security groups. Other than the one I chose sg-98c031f3, it has another newly-generated security group sg-72b94919.
Why does it create two security groups for the environment when I selected only one? Is there a way to remove one of them, since one security group is enough to handle all the rules?
Elastic Beanstalk will always create one security group and attach it to the EC2 instances. This group is managed by Elastic Beanstalk, and its primary purpose is to allow inbound connections from your load balancer.
(It also has a secondary purpose of allowing inbound SSH connections if you have selected a keypair for your EC2 instances)
Elastic Beanstalk also allows you to select zero or more additional security groups to attach to your EC2 instances; you do not need to select any if you don't want to. This lets you add additional inbound/outbound rules for your EC2 instances without needing to modify the EB-managed one.
Some reasons why you might want to add additional security groups:
To allow more inbound ports (for example, RDP)
To allow outbound network connections (for example, NTP)
To act as sources and targets for other security group rules (for example, allow connections from your selected security group into your RDS instances)
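If you want the extra group managed alongside the environment rather than picked in the console, the same setting can be expressed in an `.ebextensions` config file (the sg- ID below is the one from the question, standing in for your own):

```yaml
# .ebextensions/securitygroups.config
option_settings:
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-98c031f3   # additional group(s); EB still adds its own
```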