AWS solution architecture - amazon-web-services

A workload in an Amazon VPC consists of a single web server launched from a custom AMI. Session state is stored in a database. How should the Solutions Architect modify this workload to be both highly available and scalable?
I am hesitating between A and C. Lots of my friends, including me, prefer C. But why is the answer A according to an unofficial testing agency? Is the agency wrong?
A. Create a launch configuration with a desired capacity of two web servers across multiple Availability Zones. Create an Auto Scaling group with the AMI ID of the web server image. Use Amazon Route 53 latency-based routing to balance traffic across the Auto Scaling group.
B. Create a launch configuration with the AMI ID of the web server image. Create an Auto Scaling group using the newly-created launch configuration, and a desired capacity of two web servers across multiple regions. Use an Application Load Balancer (ALB) to balance traffic across the Auto Scaling group.
C. Create a launch configuration with the AMI ID of the web server image. Create an Auto Scaling group using the newly-created launch configuration, and a desired capacity of two web servers across multiple Availability Zones. Use an ALB to balance traffic across the Auto Scaling group.
D. Create a launch configuration with the AMI ID of the web server image. Create an Auto Scaling group using the newly-created launch configuration, and a desired capacity of two web servers across multiple Availability Zones. Use Route 53 weighted routing to balance traffic across the Auto Scaling group.

Choice A cannot be correct. Where did you see this question (and answer)?
This question requires knowledge of how Auto Scaling (AS) works:
AS launch configurations have an AMI id
AS groups have desired capacity and a launch configuration
AS groups contain EC2 instances from multiple AZs within a region, but not multiple regions
So, applying that knowledge to the available choices:
A is incorrect because a launch configuration does not have a desired capacity
B is incorrect because AS groups are cross AZ, not cross region
So, the only possible correct choices are C and D. At that point you need to decide if cross-AZ ALB is correct vs Route 53 weighted routing.
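For reference, here is a rough boto3 sketch of the shape choice C describes; the AMI ID, VPC ID, subnet IDs, and resource names below are all placeholders, and the ALB and its listener would be created separately:

```python
import boto3

autoscaling = boto3.client("autoscaling")
elbv2 = boto3.client("elbv2")

# Launch configuration: this is where the AMI ID lives (no desired capacity here).
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",   # placeholder custom AMI
    InstanceType="t3.micro",
)

# Target group that the ALB forwards to.
tg = elbv2.create_target_group(
    Name="web-tg", Protocol="HTTP", Port=80, VpcId="vpc-0123456789abcdef0"
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Auto Scaling group: desired capacity of two, spread across subnets in
# different AZs, registered with the ALB's target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # one subnet per AZ
    TargetGroupARNs=[tg_arn],
)
```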

Is this an exam question??
The correct answer is C.
The architecture looks like: Internet -> Route 53 (pointing to the ALB endpoint) -> ALB -> ASG instances.
The other choices cannot be correct:
A) This option is wrong. It doesn't make any sense to use latency-based routing when you use only one ALB.
B) You cannot use multiple regions in the same Auto Scaling group.
D) Same as A: weighted routing across a single Auto Scaling group doesn't make sense either.

Related

Auto-scaling load balanced EC2 instances by example

I am reading up on AWS Auto Scaling Groups and trying to understand (from a network perspective) how the following resources all fit together:
Auto Scaling Group (ASG)
Application Load Balancer (ALB)
Individual EC2 instances sitting behind the ALB
ALB Listeners
ALB Target Groups
Security Group(s) enforcing which IPs/ports are allowed access to the EC2 instances
I understand what each of these does in theory, but in practice, I'm having trouble seeing the forest for the trees with how they all snap together. For example: do I configure the EC2 instances to be members of the Security Group? Or do I do that at the balancer level? If I attach the ALB to the Auto Scaling Group, then why would I need to do any additional configuration with an ALB Target Group? When it comes to routing, do I route port 80 traffic to the ALB or the Auto Scaling Group?
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start. Thanks for any help!
do I configure the EC2 instances to be members of the Security Group?
Or do I do that at the balancer-level?
The EC2 instances should be members of one security group, and the Load Balancer should be a member of another security group. The Load Balancer's security group should allow incoming traffic from the Internet. The EC2 instances' security group should allow incoming traffic only from the Load Balancer's security group.
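A rough boto3 sketch of that two-security-group arrangement (the VPC ID and group names are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

# ALB security group: open to the Internet on port 80.
alb_sg = ec2.create_security_group(
    GroupName="alb-sg", Description="ALB security group", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=alb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Instance security group: only accepts traffic from the ALB's security group.
web_sg = ec2.create_security_group(
    GroupName="web-sg", Description="Web instance security group", VpcId=VPC_ID
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": alb_sg}],
    }],
)
```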
If I attach the ALB to the Auto Scaling Group, then why would I need to do any additional configuration with an ALB Target Group?
If you are using an Auto Scaling group to create the instances, then you don't have to do any manual updates to the target group; the Auto Scaling group will handle those updates for you.
When it comes to routing, do I route port 80 traffic to the ALB or the Auto Scaling Group?
An Auto-scaling group is not a resource that exists in your network. It is a construct within AWS that just creates/removes EC2 servers for you based on metrics. The traffic goes to the load balancer, and the load balancer sends it to the EC2 instances in the target group.
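Concretely, the piece that sends port 80 traffic arriving at the ALB on to the instances is a listener; a rough boto3 sketch (the ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Port 80 traffic goes to the ALB; this listener tells the ALB to forward
# it to the target group, which the Auto Scaling group keeps populated.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/web-alb/...",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web-tg/...",
    }],
)
```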
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start.
It's a bit much to ask somebody on here to spend their free time creating a diagram for you. I suggest looking at the AWS reference WordPress implementations, which AWS tends to use to demonstrate auto-scaled web server environments.
See the "WordPress scalable and durable" CloudFormation template example here.
See the AWS WordPress Reference Architecture project here, which includes a diagram.

Do ELBs increase the availability of an AWS architecture?

I'm working my way through a practice exam for an AWS certification. One of the questions is as follows:
The web tier for an application is running on 6 EC2 instances spread across 2 AZs behind a Classic ELB. The data tier is a MySQL database running on an EC2 instance. What changes will increase the availability of the application? (select TWO)
A: Turn on CloudTrail in the AWS account
B: Migrate the MySQL database to a Multi-AZ RDS MySQL database instance
C: Turn on cross-zone load balancing on the ELB
D: Launch the web tier EC2 instances in an Auto Scaling Group
E: Increase the instance size of the web tier EC2 instance
Correct answers are B and D. My question is, why is C NOT a correct answer? The instructor (an Amazon employee) glosses over C, explaining that "enabling cross-zone load balancing would have little to no effect on availability." But the way I'm looking at it, if the ELB can't send traffic to both AZs, then we're effectively making our 6-instance system into a 3-instance system (assuming there are 3 in each AZ). And a single-AZ system is never considered a highly available architecture, since if that one AZ fails, your whole system is unavailable.
Enabling cross-zone load balancing does not impact availability because ELBs can send traffic to all configured AZs without the feature enabled. That's not what cross-zone balancing means.
An ELB configured in two availability zones always has at least two balancer nodes, one in each AZ. You can't see this, directly, but if you look under "Network Interfaces" in the EC2 console, you can find the Elastic Network Interfaces (ENIs) attached to the balancer nodes. Each node has one ENI. The service determines how many nodes a balancer has, based on load. This is managed automatically, and you are not billed based on node count.
Cross-zone load balancing controls what each node can do. "Enabled" means the balancer node in zone A can send traffic to instances in zone A or B, instead of just to instances in zone A, and the same for the balancer node in zone B.
This doesn't improve availability because if an availability zone is lost, then the balancer node in that zone is also lost, so the fact that it could have sent traffic to instances in the other zone is immaterial.
Cross-zone load balancing helps ensure that the workload is spread as evenly as possible across all instances behind the balancer. That helps if you have asymmetry -- such as 3 application instances in one AZ and 2 in the other (in this case, the zone with 2 would see proportionally more traffic per instance than the zone with 3) -- or other cases where the instances are not seeing evenly-balanced workloads. Uneven workloads are more likely when the number of instances behind the balancer is small, or when there is wide variation in request processing time due to the complexity of certain requests compared to others.
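For reference, cross-zone load balancing is just a balancer attribute; on a Classic ELB it can be toggled roughly like this with boto3 (the balancer name is a placeholder):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Cross-zone load balancing changes how each balancer node distributes
# traffic across AZs; it does not change which AZs the balancer covers.
elb.modify_load_balancer_attributes(
    LoadBalancerName="web-clb",  # placeholder
    LoadBalancerAttributes={"CrossZoneLoadBalancing": {"Enabled": True}},
)
```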
What changes will increase the availability of the application?
Increased availability means there are fewer periods of time during which the application is unable to serve requests.
(B) A Multi-AZ database will certainly help, because if one AZ fails, RDS will automatically promote the standby database server in the other AZ.
(D) Auto Scaling will certainly help because failed instances will be replaced.
Cross-zone load balancing would help where there are no healthy instances available in an AZ but traffic is being handled by the ELB in that AZ. It is an unlikely scenario, especially with 3 instances in an AZ, but I could understand an argument for it. However, the other two answers are much stronger.
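Going back to change B, a rough illustration of the Multi-AZ RDS instance it calls for (the identifier, instance class, and credentials below are placeholders):

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True keeps a synchronous standby replica in a second AZ and
# fails over to it automatically if the primary becomes unavailable.
rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",       # placeholder
    Engine="mysql",
    DBInstanceClass="db.m5.large",          # placeholder
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder
    MultiAZ=True,
)
```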
It's worth mentioning that official AWS Certification questions go through several levels of technical review and shouldn't leave such ambiguity in a question. Sample exam questions (be it in an AWS course or otherwise) probably haven't gone through such detailed scrutiny.

How is "Target Groups" different from "Auto-Scaling Groups" in AWS?

I'm a little confused about these terms and their usage. Can you please help me understand how they are used with Load Balancers?
I referred to the AWS docs in vain for this :(
Target groups are just a group of EC2 instances. Target groups are closely associated with the ELB, not the ASG.
ELB -> TG -> Group of Instances
We can just use an ELB and target groups to route requests to EC2 instances. With this setup, there is no autoscaling, which means instances cannot be added or removed automatically when your load increases/decreases.
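In that setup you would register the instances with the target group yourself; a rough boto3 sketch (the ARN and instance IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Without an ASG, you register (and later deregister) instances manually.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/web-tg/...",
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],  # placeholder instance IDs
)
```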
ELB -> TG -> ASG -> Group of Instances
If you want autoscaling, you can attach a TG to an ASG, which in turn gets associated with the ELB. Now with this setup, you get request routing and autoscaling together. Real-world use cases follow this pattern. If you detach the target group from the Auto Scaling group, the instances are automatically deregistered from the target group.
Hope this helps.
What is a target group?
A target group contains EC2 instances to which a load balancer distributes workload.
A load balancer paired with a target group does NOT yet have auto scaling capability.
What is an Auto Scaling Group (ASG)?
This is where auto scaling comes in. An auto scaling group (ASG) can be attached to a load balancer.
We can attach auto scaling rules to an ASG. Then, when thresholds are met (e.g. CPU utilization), the number of instances will be adjusted programmatically.
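For example, a rough boto3 sketch of such a rule as a target-tracking policy (the group and policy names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: the ASG adds or removes instances to keep
# average CPU utilization near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # placeholder
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```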
How to attach an ASG to a load balancer?
For a Classic Load Balancer, link the ASG with the load balancer directly
For an Application Load Balancer, link the ASG with the target group (which itself is attached to the load balancer). Both attachment calls are sketched below.
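Roughly, in boto3 terms (the names and ARN are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Classic Load Balancer: the ASG is linked to the balancer by name.
autoscaling.attach_load_balancers(
    AutoScalingGroupName="web-asg",   # placeholder
    LoadBalancerNames=["web-clb"],    # placeholder
)

# Application Load Balancer: the ASG is linked to the ALB's target group.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web-tg/..."],
)
```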
An Auto Scaling Group is just a group of identical instances that AWS can scale out (add a new one) or in (remove) automatically based on some configuration you've specified. You use this to ensure that, at any point in time, there is a specific number of instances running your application, and when a threshold is reached (like CPU utilization), it scales up or down.
A Target Group is a way of getting network traffic routed via specified protocols and ports to specified instances. It's basically load balancing at the port level. This is used mostly to allow access to many applications running on different ports but on the same instance.
Then there are the Classic Load Balancers, where network traffic is routed between instances.
The doc you referred to is about attaching load balancers (either a Classic Load Balancer or a target group) to an Auto Scaling group. This is done so scaling instances can be auto-managed (by the Auto Scaling group) while network traffic is still routed to these instances based on the load balancer.
Target groups
They listen to HTTP/S requests from a Load Balancer
They are the Load Balancer's targets, available to handle HTTP/S requests from any kind of client (browser, mobile, Lambda, etc.). A target group has a specific purpose, like mobile API processing or web app processing. Further, these target groups can contain instances with any kind of characteristics.
AWS Docs
Each target group is used to route requests to one or more registered targets. When you create each listener rule, you specify a target group and conditions. When a rule condition is met, traffic is forwarded to the corresponding target group. You can create different target groups for different types of requests. For example, create one target group for general requests and other target groups for requests to the microservices for your application. Reference
So, a Target Group provides a set of instances to process specific HTTP/S requests.
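As a rough sketch of that idea in boto3 (the ARNs are placeholders), a listener rule can send matching requests to a dedicated target group:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Rule: requests whose path matches /api/* are forwarded to a separate
# target group for the API microservice; everything else falls through
# to the listener's default target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/web-alb/.../...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api-tg/...",
    }],
)
```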
AutoScaling groups
They are a set of instances started up to handle a specific workload, e.g. HTTP requests, SQS messages, jobs to process any kind of task, etc.
On this side, these groups are a set of instances started up because a metric exceeded a specific threshold and triggered an alarm. The main difference is that an Auto Scaling group's instances are temporary: they are available to process anything, from HTTP/S requests to SQS messages, and they can be terminated at any time according to the configured metric. Likewise, the instances in an Auto Scaling group share the same characteristics because they follow something called a launch configuration.
AWS Docs
An Auto Scaling group contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management. For example, if a single application operates across multiple instances, you might want to increase the number of instances in that group to improve the performance of the application or decrease the number of instances to reduce costs when demand is low. Reference
So, an Auto Scaling group will be able not only to process HTTP/S requests but also to handle backend work, like jobs to send emails, jobs to process tasks, etc.
As I understand it, Target Groups are the connection between the ELB and the EC2 instances: a kind of service-discovery rule. This layer is what allows Target Groups to be used for ECS services, for instance, where there can be more than one container per instance.
Auto Scaling Groups are an abstraction for aggregating EC2 metrics and taking some actions based on that data.
Also, bear in mind that the option of attaching Auto Scaling Groups directly to an ELB comes from the previous generation of ELBs. You may compare the first generation and the second one in the CloudFormation docs.

Is there a way to put AutoScalingGroup instances on standby via CloudFormation?

I'm thinking of using CloudFormation as a means of blue-green deployment.
Part of it is putting instances of the Auto Scaling group on standby.
Is that possible?
Amazon EC2 Auto Scaling groups are responsible for launching and terminating Amazon EC2 instances. Note that instances are launched as new instances -- they are not kept on standby.
You can certainly do blue-green deployments by using two separate CloudFormation stacks, each with their own Auto Scaling group and, presumably, Elastic Load Balancer.
Both Auto Scaling groups would be 'operating', but not necessarily receiving traffic. You would then need some mechanism to 'switch' between the blue/green groups, such as changing a DNS entry in Route 53 to point to the other Load Balancer.
Adding to John's answer: on Route 53 you can use weighted routing to route traffic. It allows you to send a percentage of the traffic to each setup.
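A rough boto3 sketch of that weighted switch (the hosted zone ID, record name, and ALB DNS names/zone IDs are placeholders); shifting the weights moves traffic between the blue and green stacks:

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, alb_dns, alb_zone_id, weight):
    """Upsert one weighted alias record pointing at a stack's load balancer."""
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000000000",  # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": identifier,
                "Weight": weight,  # weights are relative, not strict percentages
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,  # the ALB's canonical zone ID
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )

# Send most traffic to blue, a little to green (placeholder DNS names / zone ID).
set_weight("blue", "blue-alb-123.us-east-1.elb.amazonaws.com", "Z00000000000000000000", 90)
set_weight("green", "green-alb-456.us-east-1.elb.amazonaws.com", "Z00000000000000000000", 10)
```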

Auto Scaling - Across regions?

I hope you can provide a quick response to my question.
Is it possible to create an Auto Scaling group which spans across regions? Consider this scenario: let's say all the Availability Zones in the west are unavailable. Can we configure Auto Scaling so that if the instances in us-west are down, it creates an instance in an east region?
I don't think it is possible, because we need to specify the region for AWS_AUTO_SCALING_URL while using the command line scripts, which restricts the creation of launch configs and Auto Scaling groups to within that region only.
So we can only hope all the AZs in that region are not down, or move to a VPC, is that right?
Elastic Load Balancing and Elastic IP addresses are both region-specific, so I would assume that Auto Scaling is region-specific as well and only works between the Availability Zones in that region. The white paper on building fault-tolerant applications doesn't explicitly state that you can auto-scale across regions, but it does say that you can across zones.
"Auto Scaling can work across multiple Availability Zones in an AWS Region, making it easier to automate increasing and decreasing of capacity."
I would believe that if they supported multi-region, they would explicitly say so.
Thinking about this further, I'm not so sure it's even a good idea to auto-scale across regions. Auto Scaling is geared more toward a specific tier of your application.
For example, if a region were to go down, you would not want some of your web servers to use services across a slow link to another region (potentially) across the country.
Instead you would want Route 53 to route the traffic to an autonomous stack running its own auto-scaled layers in a separate region.
See this hosting chart: everything from the ELB down is region-specific.
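For illustration only (this isn't from the answer above), the Route 53 side of that design could look roughly like this in boto3, using a failover routing policy; the hosted zone ID, record name, health check ID, and ELB DNS names/zone IDs are placeholders:

```python
import boto3

route53 = boto3.client("route53")

def upsert_failover_record(role, elb_dns, elb_zone_id, health_check_id=None):
    """Upsert a PRIMARY or SECONDARY failover alias record for one region's stack."""
    record = {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,                 # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": elb_zone_id,  # the ELB's canonical zone ID (placeholder)
            "DNSName": elb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000000000",  # placeholder hosted zone
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

# Primary stack in us-west with a health check; autonomous standby stack in us-east.
upsert_failover_record("PRIMARY", "west-elb.us-west-2.elb.amazonaws.com",
                       "Z00000000000000000000", "11111111-2222-3333-4444-555555555555")
upsert_failover_record("SECONDARY", "east-elb.us-east-1.elb.amazonaws.com",
                       "Z00000000000000000000")
```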
An Auto Scaling group can contain EC2 instances from multiple Availability Zones within the same region. However, an Auto Scaling group can't contain EC2 instances from multiple regions.
Read the note in this AWS document.
EC2 Auto Scaling groups are regional constructs. They can span Availability Zones, but not AWS regions.
PS. AWS Documentation
https://aws.amazon.com/ec2/autoscaling/faqs/