I need to perform simple health checks on an EC2 instance that does not have access to the internet. The instance is behind another EC2 instance running Apache as the frontend.
I cannot use a load balancer, nor can I give the instance access to the internet.
I looked at Route 53 health checks as an alternative, but they also need an internet connection.
I know I can do it with a Lambda function, but I would like to know if there is any other ('AWS managed') way to do it.
What are you trying to check on the instance? EC2 instances come with status checks by default for general health.
If you want to check something specific, you might run a script on the instance (e.g. through cron) and use the AWS CLI (or a similar API) to report the metrics to CloudWatch; you can also set alarms there.
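As a rough sketch of that idea (assuming the instance can reach the CloudWatch API, e.g. through a VPC interface endpoint since it has no internet access, that its instance role allows cloudwatch:PutMetricData, and that the local app exposes a /health path; the namespace and metric name below are made up for illustration):

#!/bin/bash
# Probe the local application and push 1 (healthy) or 0 (unhealthy) to CloudWatch.
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost/health || echo 000)
VALUE=0
[ "$STATUS" = "200" ] && VALUE=1
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws cloudwatch put-metric-data \
  --namespace "Custom/Health" \
  --metric-name "AppHealthy" \
  --dimensions InstanceId="$INSTANCE_ID" \
  --value "$VALUE"

A CloudWatch alarm on AppHealthy dropping below 1 can then notify you or trigger an action when the check fails.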
Why not use a load balancer for the health check and just not route any traffic to it? Make sure your security groups allow traffic from the load balancer (assuming ALB) to the EC2 instance, but you can remove any inbound access to the load balancer's security group for added security.
I am reading up on AWS Auto Scaling Groups and trying to understand (from a network-perspective) how the following resources all fit together:
Auto Scaling Group (ASG)
Application Load Balancer (ALB)
Individual EC2 instances sitting behind the ALB
ALB Listeners
ALB Target Groups
Security Group(s) enforcing which IPs/ports are allowed access to the EC2 instances
I understand what each of these does in theory, but in practice I'm having trouble seeing the forest for the trees with how they all snap together. For example: do I configure the EC2 instances to be members of the Security Group? Or do I do that at the balancer level? If I attach the ALB to the Auto Scaling Group, then why would I need to do any additional configuration with an ALB Target Group? When it comes to routing, do I route port 80 traffic to the ALB or the Auto Scaling Group?
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start. Thanks for any help!
do I configure the EC2 instances to be members of the Security Group?
Or do I do that at the balancer-level?
The EC2 instances should be members of one security group. The Load Balancer should be a member of another security group. The Load Balancer's security group should allow incoming traffic from the Internet. The EC2 instances' security group should allow incoming traffic only from the load balancer's security group.
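As a rough CLI sketch of that layout (the VPC id and port are placeholders; the same rules can of course be expressed in a CloudFormation template instead):

# Security group for the ALB, open to the internet on port 80
ALB_SG=$(aws ec2 create-security-group --group-name web-alb-sg --description "ALB" \
  --vpc-id vpc-0123456789abcdef0 --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$ALB_SG" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Security group for the instances, accepting traffic only from the ALB's group
EC2_SG=$(aws ec2 create-security-group --group-name web-instance-sg --description "Web instances" \
  --vpc-id vpc-0123456789abcdef0 --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$EC2_SG" \
  --protocol tcp --port 80 --source-group "$ALB_SG"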
If I attach the ALB to the Auto Scaling Group, then why would I need to do any additional configuration with an ALB Target Group?
You still need the target group: the ALB's listener forwards traffic to it, and you attach the target group to the auto-scaling group. After that you don't have to make any manual updates to the target group; the auto-scaling group will register and deregister instances for you.
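With the CLI, that attachment could look like this (the group name and target group ARN are placeholders):

aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/1234567890abcdef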
When it comes to routing, do I route port 80 traffic to the ALB or the Auto Scale Group?
An Auto-scaling group is not a resource that exists in your network. It is a construct within AWS that just creates/removes EC2 servers for you based on metrics. The traffic goes to the load balancer, and the load balancer sends it to the EC2 instances in the target group.
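Concretely, port 80 is handled by a listener on the ALB that forwards to the target group, roughly like this (the ARNs are placeholders):

aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/1234567890abcdef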
I know these are lots of small questions, so the main question here is: how do all of these snap together to provide a load balanced web server hosted on EC2 instances? Ultimately I need to configure all of this inside a CloudFormation template, but a diagram or explanation to help me configure everything manually is probably the best place for me to start.
It's a bit much to ask somebody here to spend their free time creating a diagram for you. I suggest looking at the AWS reference WordPress implementations, which AWS tends to use as reference implementations of auto-scaled web server environments.
See the "WordPress scalable and durable" CloudFormation template example here.
See the AWS WordPress Reference Architecture project here, which includes a diagram.
I have a simple Java application listening on port 8443. I've deployed it as a Docker image into Fargate, it has a public IP address and I can access it through the IP address just fine.
The problem is every time I redeploy the image, it gets a new IP address.
I would like to have a static hostname. For example, when I use Elastic Beanstalk and deploy a website, it will get a hostname. How do I get the same thing?
I've been following the documentation for one whole day and didn't make any progress. I've created load balancers, targets, listeners, accelerators; nothing seems to work. For example, when creating a load balancer, it doesn't tell me what the hostname is.
I'm pretty sure this is supposed to be something really easy, but I just cannot figure it out. What am I doing wrong?
You may want to create an Application Load Balancer and register your Fargate service into a Target Group for the load balancer. You have to register the service only once; if you redeploy newer versions afterwards, the new tasks will be added to the Target Group automatically.
The Application Load Balancer will provide a publicly accessible hostname: its DNS name, which is shown on the load balancer's description page in the EC2 console.
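If you don't spot it in the console, you can also look the hostname up with the CLI (a sketch, assuming the load balancer is named my-alb):

aws elbv2 describe-load-balancers --names my-alb \
  --query 'LoadBalancers[0].DNSName' --output text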
For your load balancer to be reachable, it needs to be in a public subnet. It also needs a security group which allows traffic from the public internet and also allows traffic to the registered targets.
Steps to create an ALB for your ECS cluster: AWS docs
Registering ECS services into a Target Group: AWS docs
Update:
The problem is that when I create a Target Group I cannot associate it with the service.
When you create the cluster, the AWS console asks you if you want to deploy your containers in a VPC. You have to select yes and create a VPC.
Afterwards, get the id of the VPC (for example, in my case: vpc-0e6...), go into your EC2 console, and create a new Application Load Balancer, placing it into that VPC.
Now, when you create a new Fargate service, you should see the Application Load Balancer available to select.
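For reference, attaching the load balancer when creating the service can also be done with the CLI, roughly like this (cluster, task definition, subnet, security group, container name and target group ARN are placeholders; port 8443 matches the question):

aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task:1 \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/1234567890abcdef,containerName=app,containerPort=8443"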
I have created a classic load balancer and an auto scaling policy which launches 2 instances successfully. Now I logged in through SSH to the load balancer:
ssh -i "mykeypair.pem" ec2-user@my-load-balancer-1222.us-east-1.elb.amazonaws.com
We logged in, and the terminal shows:
[ec2-user@ip-10-0-1-86 ~]$   << this IP belongs to one of the instances created by auto scaling
Now I want to check the security group of the ELB with the curl http://169.254.169.254/latest/meta-data/security-groups command, but it displays the instance's security group name, not the ELB's security group.
My question is: how can we check the ELB's security group?
It seems you have SSH'ed into one of the 2 instances behind the load balancer (I doubt you can SSH into the ELB itself), so that's why you're seeing the security group of that instance.
I believe the way to check the ELB's security group is by using the AWS CLI (or one of their SDKs), with a command like the following:
aws elb describe-load-balancers --load-balancer-names my-load-balancer
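If you only want the security group IDs out of that response, a JMESPath query narrows it down, for example:

aws elb describe-load-balancers --load-balancer-names my-load-balancer \
  --query 'LoadBalancerDescriptions[0].SecurityGroups' --output text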
You can find more details in the docs
Note: of course, if you wanted to run this command from within the EC2 instance you SSH'ed into, you would need to make sure you have permission to make that call. See here for more info on getting set up.
You should not SSH into an instance via a Load Balancer.
An SSH session is persistent -- you wish to continue talking to the same server. This clashes with the concept of a Load Balancer, which distributes traffic across multiple servers.
I have created two instances in AWS (one is Live & other is Backup). My website is hosted on Live Instance. I have configured Route 53, Health checks & Hosted zones on default settings. Also have added both Instances to load balancer, and the status is "InService" for both the Instances.
For the Live Instance, Public IP & Elastic IP are the same. For Backup Instance, Public IP is different from live, and Elastic IP is null.
What I want to achieve is, when my Live Instance "status check" or "Health check" fails, then Backup Instance should get activated.
Currently, when I manually stop my Live Instance for testing purposes, the Backup Instance should get activated, but it doesn't. Please let me know if I am missing any steps.
You need to implement a health-check REST API that the ELB can call. Your backup instance can return a non-200 HTTP status; the moment it is activated, it starts returning HTTP 200. While the backup is inactive, this tells the ELB to route calls only to the primary.
Meanwhile, your Route53 should point at the ELB, not directly at the instances.
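As a sketch, an alias record pointing www.example.com at the ELB could be created like this (both hosted zone ids and the DNS name are placeholders; describe-load-balancers returns the real values for your ELB):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2ELBEXAMPLE",
        "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}'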
Generally speaking, however, you want to keep both primary and backup instances hot for optimal performance and failover. You get the most out of AWS if you don't rely on your instances being in a particular state -- primary vs backup in this case. I would devise a strategy to keep both instances in use.
My question is simple. Does it make sense to have an Amazon Elastic Load Balancer (ELB) with just one EC2 instance?
If I understood right, ELB will switch traffic between EC2 instances. However, I have just one EC2 instance. So, does it make sense?
On the other hand, I'm using Route 53 to route my domain requests example.com and www.example.com to my ELB, and I don't see how to redirect directly to my EC2 instance. So, do I need an ELB for routing purposes?
Using an Elastic Load Balancer with a single instance can be useful. It can provide your instance with a front-end to cover for a disaster situation.
For example, if you use an auto-scaling group with min=max=1 instance, with an Elastic Load Balancer, then if your instance is terminated or otherwise fails:
auto-scaling will launch a new replacement instance
the new instance will appear behind the load balancer
your user's traffic will flow to the new instance
This will happen automatically: no need to change DNS, no need to manually re-assign an Elastic IP address.
Later on, if you need to add more horsepower to your application, you can simply increase your min/max values in your autoscaling group without needing to change your DNS structure.
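A sketch of that setup with the CLI, assuming an ALB target group (launch template name, subnet and target group ARN are placeholders):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name single-web-asg \
  --launch-template "LaunchTemplateName=my-web-template,Version=\$Latest" \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0 \
  --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/1234567890abcdef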
It's much easier to configure your SSL on an ELB than an EC2, just a few clicks in the AWS console. You can even hand pick the SSL protocols and ciphers.
It's also useful that you can associate different security groups with the actual EC2 instance and the forefront ELB. You can leave the ELB in the DMZ and protect your EC2 instance from being publicly accessible and potentially vulnerable to attacks.
There is no need to use a Load Balancer if you are only running a single Amazon EC2 instance.
To point your domain name to an EC2 instance:
In the EC2 Management Console, select Elastic IP
Allocate New Address
Associate the address with your EC2 instance
Copy the Elastic IP address and use it in your Route 53 sub-domain
The Elastic IP address can be re-associated with a different EC2 instance later if desired.
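The same steps as a CLI sketch (the instance id is a placeholder):

ALLOC_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id "$ALLOC_ID"
aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" --query 'Addresses[0].PublicIp' --output text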
Later, if you wish to balance between multiple EC2 instances:
Create an Elastic Load Balancer
Add your instance(s) to the Load Balancer
Point your Route 53 sub-domain to the Load Balancer
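Again as a CLI sketch for a classic load balancer (name, subnet, security group and instance id are placeholders; with an Application Load Balancer you would use the elbv2 commands and a target group instead):

aws elb create-load-balancer --load-balancer-name my-load-balancer \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
aws elb register-instances-with-load-balancer --load-balancer-name my-load-balancer \
  --instances i-0123456789abcdef0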
With no ELB:
Less secure (DoS attacks are possible because HTTP port 80 will be open to all, instead of being open only to the ELB)
You won't have the freedom of terminating an instance to save EC2 hours without worrying about remapping your Elastic IP (not a big deal, though)
If you don't use an ELB and your EC2 instance becomes unhealthy, terminates, or goes down:
Your site will remain down (it would remain up if you used an ELB + scaling policies)
You will have to remap your Elastic IP
You pay for the time your Elastic IP is not pointing to an instance, around $0.005/hr
You get 750 hours of Elastic Load Balancing plus 15 GB of data processing with the free tier, so why not use it along with a min=1, max=1 scaling policy?
On top of the answer about making SSL support easier by putting a load balancer in front of your EC2 instance, another potential benefit is HTTP/2. An Application Load Balancer (ALB) will automatically handle HTTP/2 traffic and convert up to 128 parallel requests to individual HTTP/1.1 requests across all healthy targets.
For more information, see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration
It really depends on what you are running in the EC2 instance.
While with only one EC2 instance it's not necessary to use an ELB (all your traffic will go to that instance anyway), if your EC2 service has to scale in the near future, it's not a bad idea to invest some time now and get familiar with ELB.
This way, when you need to scale, it's just a matter of firing up additional instances, because you have the ELB part done.
If your EC2 service won't scale in the near future, don't worry too much!
About the second part, you definitely can route directly to your EC2 instance; you just need the EC2 instance's IP. Take a look at the Amazon Route 53 docs. Mind that if your IP is not static (you don't set up an Amazon Elastic IP), you'd need to change the IP mapping every time the EC2 IP changes.
You can also use an ELB in front of EC2 if, for example, you want it to be publicly reachable without having to use up an Elastic IP address. As said previously, they also work well with ASGs.