I recently took over architecture from a 3rd party to help a client. I'm new to AWS, so this is probably simple, and I just couldn't find it in the docs or on Stack Overflow. They had an existing EC2 instance that had both a Node app and a React app deployed, from different repos. Each was deployed using its own pipeline. The source, build, and deploy steps were working for both, and I verified the artifacts were being generated and stored in S3. The load balancer had a target group that hit a single machine in one subnet. The app was running just fine until this morning, and I'm trying to figure out if it's something I did.
My goal this morning was to spin up a new EC2 instance (for which I have the keys, so I can connect directly), a new load balancer that pointed to my machine, and space in S3 for new pipelines I created to store artifacts. I created an AMI from their EC2 instance with the running app and used it to provision my own on the same subnet as their instance. I used the existing security group for my machine. I created a target group to target my machine for use with my load balancer. I created a load balancer to route traffic to this new machine. I then created two pipelines, similar to theirs, but with different artifact locations in S3, and a source of my own repo where I have a copy of the code. I got deployments through the pipeline to work. Everything was great until I was about to test my system, when I was informed their app was down.
I tried hitting it and got a 502 Bad Gateway. I checked the load balancer: it was seeing traffic coming in, but it returned a 502 for every request. I checked the target group and it's now showing their EC2 instance as unhealthy. I tried rebooting the machine, but it's still unhealthy. I then tried creating another copy of their machine in another subnet and made sure it was targeted by the target group, but the new instance showed up as unhealthy as well. I can't SSH into the machine because I don't have the key used to create the EC2 instance. If anyone knows where I should look to bring it back online, I'd be forever in your debt.
I undid everything I created this morning, stopping my EC2 instance, and deleting my load balancer, but their app is still returning a 502, showing the instance as unhealthy in their target group.
These are some things to help you debug:
You first need to access the EC2 instance directly, not through the load balancer, and check that the application is running. If the instance is in a private subnet, you can start another EC2 instance with a public IP in the same VPC and use it as a bastion host.
You will need to have SSH access to the EC2 machine at some point, so that you can look at the logs. This question has answers on how to replace the key pair.
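In the meantime, the target group itself will tell you why it marks the instance unhealthy, and a bastion host lets you probe the health-check port directly. A rough sketch with the AWS CLI and curl; the target group ARN, private IP, port, and path below are placeholders, not values from the original setup:

# Ask the target group why it considers the instance unhealthy; the reason
# code/description distinguishes timeouts from bad response codes.
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/their-tg/abc123

# From a bastion host in the same VPC, check the app is actually listening
# on the health-check port and path (assumed here to be 3000 and /health).
curl -v http://10.0.1.25:3000/health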
I'm having an issue with AWS. I deployed an application using Terraform, but when I try to destroy it, the process never finishes because of a subnet. That subnet was associated with an EC2 instance that doesn't exist anymore.
If I try to remove the subnet via the AWS console, it says there is a network interface using it. OK, but when I try to remove the network interface, it says it is in use, even though the only thing that could be using it, the EC2 instance, was terminated. Would you know how I can get rid of this network interface?
Thanks in advance!
I did try to remove the components individually in the AWS console, without success.
I think I figured out what happened. When I first ran terraform apply, I had set up two availability zones. But then I decided to use just one availability zone, because I only wanted to run one instance of the application. The catch is that an Application Load Balancer must be attached to subnets in at least two availability zones, so when I ran terraform apply with the new configuration, it only applied the change partially and left the ALB behind, along with the network interfaces it keeps in its subnets.
After removing the ELB from the Terraform configuration, everything worked fine!
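For anyone hitting the same dangling-ENI symptom, the AWS CLI can show what the interface belongs to and, if it turns out to be a leftover load balancer, remove it. A minimal sketch; the ENI ID and load balancer ARN below are placeholders:

# See who owns the interface; for an ELB/ALB the description and interface
# type in the output point back at the load balancer.
aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0

# List load balancers and delete the leftover one; its network interfaces
# are cleaned up automatically shortly after deletion.
aws elbv2 describe-load-balancers
aws elbv2 delete-load-balancer \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/leftover-alb/abc123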
I have a simple Java application listening on port 8443. I've deployed it as a Docker image into Fargate; it has a public IP address, and I can access it through that IP address just fine.
The problem is every time I redeploy the image, it gets a new IP address.
I would like to have a static hostname. For example, when I use Elastic Beanstalk and deploy a website, it will get a hostname. How do I get the same thing?
I've been following the documentation for one whole day and haven't made any progress. I've created load balancers, targets, listeners, accelerators; nothing seems to work. For example, when creating a load balancer, it doesn't tell me what the hostname is.
I'm pretty sure this is supposed to be something really easy, but I just cannot figure it out. What am I doing wrong?
You may want to create an Application Load Balancer and register your Fargate service into a Target Group for the load balancer. You only have to register the service once; if you redeploy newer versions afterwards, the new tasks will be added to the Target Group automatically.
The Application Load Balancer will provide a publicly accessible hostname (its DNS name), which looks something like my-alb-1234567890.us-east-1.elb.amazonaws.com.
For your load balancer to be reachable, it needs to be in a public subnet. It also needs a security group that allows traffic from the public internet and also allows traffic to the registered targets.
Steps to create an ALB for your ECS cluster: AWS docs
Registering ECS services into a Target Group: AWS docs
Update:
The problem is that when I create a Target Group I cannot associate it with the service.
When you create the cluster, the AWS console asks whether you want to deploy your containers in a VPC. You have to select yes and create a VPC.
Afterwards, get the ID of the VPC (for example, in my case: vpc-0e6...) and go into your EC2 console and create a new Application Load Balancer, placing it into that VPC.
Now, when you create a new Fargate service, you should see the Application Load Balancer available in the service's load balancing configuration.
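For reference, the same flow can also be scripted with the AWS CLI. This is only a rough sketch, and every name, ID, and ARN below is a placeholder; the target group protocol should match whatever your container actually speaks on port 8443:

# 1. Create the ALB in public subnets of the cluster's VPC.
aws elbv2 create-load-balancer --name my-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0

# 2. Create a target group with target type "ip" (required for Fargate tasks).
aws elbv2 create-target-group --name my-tg --protocol HTTP --port 8443 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip

# 3. Add a listener that forwards incoming traffic to the target group.
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# 4. Create the Fargate service attached to the target group; new tasks are
#    registered into the target group automatically on every redeploy.
aws ecs create-service --cluster my-cluster --service-name my-service \
  --task-definition my-task --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers targetGroupArn=<target-group-arn>,containerName=my-container,containerPort=8443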
The EC2 machines running behind the ELB are all launched from the same AMI.
My requirement: currently there are 5 EC2 instances running behind the ELB (this is the minimum count in my Auto Scaling group), and I've associated Elastic IPs with them so it's easy to deploy code to them via Ansible. But when traffic goes up, Auto Scaling adds more machines behind the same ELB, and it's a real headache to manually add each newly added machine's public IP to the Ansible hosts file.
How can I get all the machines IP to my Ansible host?
That's the classic use of dynamic inventory. Ansible docs even call out this specific use case :)
They also provide a working example. Check this link
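If you just want to see what the dynamic inventory would pick up, the AWS CLI can list the public IPs of every running instance in the Auto Scaling group; the inventory plugin performs essentially the same lookup on every run. The group name web-asg below is a placeholder:

# Instances launched by an Auto Scaling group carry the
# aws:autoscaling:groupName tag, so filter on it and print the public IPs.
aws ec2 describe-instances \
  --filters "Name=tag:aws:autoscaling:groupName,Values=web-asg" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" --output text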
So, I'm running an AWS Elastic Beanstalk environment with a single instance.
This particular app is a background job app, and in order to deploy changes to my database, I need to pause the app during the deployment process. I'm running into a couple of problems with this:
I can stop the EC2 instance for that Elastic Beanstalk environment; however, the environment eventually terminates that instance and spins up a new one that immediately starts running (I don't want this; I want to control when the environment starts again).
When the new instance starts up, the Elastic IP I had associated with the previous instance is disassociated and is not automatically associated with the new EC2 instance (this is a problem because my database has an IP firewall, so I need the same IP before and after pausing).
I read that putting my Elastic Beanstalk environment in a VPC might solve the IP issue, but I can't figure out how to do that. My configuration says "This environment is not part of a VPC.", but there isn't an option to make the environment part of a VPC.
Ideally, I'd love to just "pause" the instance, so that it stops and can be re-started without me losing that instance or the IP configuration of that instance.
Can anyone help me to solve these problems, or provide some other method of configuring this setup?
I'm not so experienced with Beanstalk, but you can use .ebextensions to get a script run at instance start, right? Then use that script to call the AWS API to look up your Elastic IP and associate it with the instance itself.
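As a rough sketch of that idea, a script run from .ebextensions or user data could look something like the following. The allocation ID and region are placeholders, the instance profile needs permission to call ec2:AssociateAddress, and if IMDSv2 is enforced the metadata call also needs a session token:

# Find out which instance we are running on via the instance metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Attach the pre-allocated Elastic IP to this instance, reclaiming it even if
# it is still recorded against the old, terminated instance.
aws ec2 associate-address --instance-id "$INSTANCE_ID" \
  --allocation-id eipalloc-0123456789abcdef0 --allow-reassociation \
  --region us-east-1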
I have a personal website hosted on AWS EC2 behind an ELB. Today I started my EC2 instances (I had turned them off due to non-usage and, of course, to save some cost) and tried to load my website via the Elastic Load Balancer's public DNS URL, but it wasn't coming up in my browser; instead of the webpage I got a blank white page. So I checked my EC2 instances and the ELB.
In the Elastic Load Balancer section, I can see that the status message shows the registered EC2 instances as "Out of Service"! I tried changing the health check parameter values; nothing happened. So I deregistered the EC2 instances from the load balancer and registered them again. After a few minutes the instances came up as "In Service". It took some time because the EC2 instances have to register with the load balancer and pass the health check. Finally I brought my website back up.
Solution tried:
"If you have launched your instance in EC2-VPC, by default, the IP address associated with your instance does not change when you stop and then start the instance. However, when you stop and then start your EC2-VPC instance, your load balancer might take some time to recognize that the stopped instance has started. During this time your load balancer is not connected to the restarted instance. I recommend that you reregister your restarted instance with the load balancer."
My instance is in EC2-VPC and I tried the above; when I re-register, the instance comes back into the load balancer, but otherwise I'm just waiting to no avail. Any idea why?
This is a very common issue with AWS ELB. What you can do is add the following lines at the end of your /etc/rc.local (assuming you are running a Linux box):
# Deregister this instance from the ELB, then register it again so the load
# balancer picks the instance up after a stop/start (legacy ELB CLI tools).
elb-deregister-instances-from-lb <load_balancer_name> --instances <instance-id>
elb-register-instances-with-lb <load_balancer_name> --instances <instance-id>
This first deregisters your instance from the ELB and then registers it again.
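Those elb-* commands come from the legacy ELB CLI tools; if only the modern AWS CLI is installed, the rough equivalent for a Classic Load Balancer (names below are placeholders) would be:

# Deregister and re-register the instance with a Classic Load Balancer.
aws elb deregister-instances-from-load-balancer \
  --load-balancer-name <load_balancer_name> --instances <instance-id>
aws elb register-instances-with-load-balancer \
  --load-balancer-name <load_balancer_name> --instances <instance-id>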