CodeDeploy Health Constraint Error with EC2 & GitHub - amazon-web-services

I'm trying to launch a single EC2 instance and connect it to my GitHub repository using the CodeDeploy service on AWS. I'm having trouble with the actual deployment of a revision to my EC2 instance. It seems that something is wrong with the recognition of my EC2 instance's health. The instance, however, is running perfectly, the CodeDeploy agent is running on it, and the IAM roles are configured appropriately. I've been at this for hours and I can't seem to figure out what is wrong.
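(For anyone debugging the same thing, a minimal sketch of the checks worth running on the instance itself; the service name and log paths below are the documented defaults for the CodeDeploy agent, and the exact output will differ per deployment.)

```
# Sanity checks on the instance itself (paths are the CodeDeploy agent defaults).

# Is the agent running and able to poll CodeDeploy for deployments?
sudo service codedeploy-agent status

# The agent log usually states why a deployment or health evaluation failed.
sudo tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log

# Per-deployment lifecycle-script output lives under the deployment root.
sudo ls /opt/codedeploy-agent/deployment-root/deployment-logs/
```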

Related

EC2 Instance Health Using ELB

I recently took over architecture from a 3rd party to help a client. I'm new to AWS, so this is probably simple, and I just couldn't find it in the docs/Stack Overflow. They had an existing EC2 instance that had both a Node app and a React app deployed, from different repos. Each was deployed using its own pipeline. The source, build, and deploy steps were working for both, and I verified the artifacts were being generated and stored in S3. The load balancer had a target group that hit a single machine in one subnet. The app was running just fine until this morning, and I'm trying to figure out if it's something I did.
My goal this morning was to spin up a new EC2 instance (for which I have the keys, so I can connect directly), a new load balancer that pointed to my machine, and space in S3 where the new pipelines I created could store artifacts. I created an AMI from their EC2 instance with the running app and used it to provision my own instance on the same subnet as theirs. I used the existing security group for my machine. I created a target group to target my machine for use with my load balancer. I created a load balancer to route traffic to this new machine. I then created two pipelines, similar to theirs, but with different artifact locations in S3 and with my own repo (where I have a copy of the code) as the source. I got deployments through the pipeline to work. Everything was great until I was about to test my system, when I was informed their app was down.
I tried hitting it and got a 502 Bad Gateway. I checked the load balancer and it was seeing traffic come in, but it returned a 502 for every response. I checked the target group and it's now showing their EC2 instance as unhealthy. I tried rebooting the machine, but it's still unhealthy. Then I tried creating another version of their machine in another subnet and ensured it was targeted by the target group, but the new instance showed up as unhealthy as well. I can't SSH into the machine because I don't have the key used to create the EC2 instance. If anyone knows where I should look to bring it back online, I'd be forever in your debt.
I undid everything I created this morning, stopping my EC2 instance and deleting my load balancer, but their app is still returning a 502 and their target group still shows the instance as unhealthy.
Here are some things to help you debug:
You first need to access the EC2 instance directly and not through the load balancer. Check that the application is running. If the EC2 instance is in a private VPC, you can start an EC2 instance with a public IP and use it as a bastion host.
You will need SSH access to the EC2 machine at some point so that you can look at the logs. This question has answers on how to replace the key pair.
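A minimal sketch of what that looks like with OpenSSH's ProxyJump option; the host addresses, user name, key file, and health-check port below are placeholders for your setup:

```
# Jump through a public bastion host to reach the private instance.
# BASTION_IP, PRIVATE_IP, key.pem, and ec2-user are placeholders.
ssh -i key.pem -J ec2-user@BASTION_IP ec2-user@PRIVATE_IP

# On the instance, check whether the app is actually listening on the
# port/path the target group's health check uses (values are placeholders).
sudo ss -tlnp
curl -i http://localhost:3000/health
```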

How to automatically deploy Docker to an EC2 instance without ECS / Is it possible to SSH to an EC2 instance using the post-build commands of a build script?

I am using AWS ECS to automatically deploy my server in a Docker container to my EC2 instance; the only problem is that I have to use an Elastic Load Balancer (ELB). This is for a school project, but it also uses a Telegram bot, so I needed an HTTPS endpoint to receive updates from Telegram. An ELB is completely overkill for this and is also costing me more than I would like, considering everything else is under the free tier that I am using. Does anyone know how to set up automatic deployment of a Docker container to EC2 without an ELB/ECS, OR does anyone know if it is possible to SSH to an EC2 instance during a build, since that could be a way to run a deployment script on the instance automatically from the build? Thanks!
You don't need ECS to run Docker. I have run Docker containers from an EC2 user data script, so that it does a docker run command at launch. Works great.
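For illustration, a minimal user data sketch along those lines; it assumes an AMI that already has Docker installed and uses systemd, and the image name and port mapping are placeholders:

```
#!/bin/bash
# EC2 user data: start Docker and run the app container at launch.
systemctl enable --now docker

# --restart unless-stopped keeps the container up across crashes and reboots.
docker run -d --restart unless-stopped --name app -p 80:3000 IMAGE
```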

How to deploy a build to an EC2 instance from a GitLab pipeline

I have been working on a React web app and I need to deploy it now. I have the codebase in GitLab and I'm using a GitLab pipeline to run the tests, create the build, and deploy. For deployment I need to push it to an EC2 instance. My pipeline runs well until creating the build. Now the problem is how to push that created build to the EC2 instance. Can someone help me here? I tried the following way.
Gitlab CI deploy AWS EC2
It showed me a connection timed out message instead of connecting to the EC2 instance. After that I allowed all IPs to access the instance over SSH using the security group, and then it worked fine for me. But the problem is that it's not secure to allow all IPs to access SSH. How can I solve this problem?
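One common workaround (a sketch only, not the only option) is to open SSH to the runner's current IP just for the deploy job and revoke the rule afterwards. The security group ID, key file, host, and target path below are placeholders, and it assumes the runner has AWS credentials allowed to modify that security group:

```
# Allow SSH from this runner only, deploy, then revoke the rule again.
SG_ID=sg-0123456789abcdef0                         # placeholder
MY_IP="$(curl -s https://checkip.amazonaws.com)/32"

aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"

scp -i key.pem -r build/ ec2-user@EC2_HOST:/var/www/app/   # placeholders

aws ec2 revoke-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$MY_IP"
```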

AWS ECS services are moving to another cluster when instances fail health check

We have started using ECS and we are not quite sure if the behaviour we are experiencing is the correct one, and if it is, how to work around it.
We have set up a Beanstalk Multicontainer Docker environment, which uses ECS in the background to manage everything; that has been working just fine. Yesterday, we created a standalone cluster in ECS, "ecs-int", a task definition "ecs-int-task", and a service "ecs-int-service" associated with a load balancer "ecs-int-lb", and we added one instance to the cluster.
When the service first ran, it worked fine and we were able to reach the Docker service through the load balancer. While we were playing with the instance security group associated with the cluster "ecs-int", we mistakenly removed the rule for the port the container was running on, and the health check started failing on the LB, resulting in the instance being drained out of it. When that happened, to our surprise the service "ecs-int-service" and the task "ecs-int-task" automatically moved to the Beanstalk cluster and started running there, creating an issue for our Beanstalk app.
While setting up the service, we set the placement rule to "AZ Balanced Spread".
Should the service move between clusters? Shouldn't the service be attached only to the cluster it was originally created in? If this is the normal behaviour, how can we set a rule so that the service sticks to the same cluster even if its instances fail the health check for some reason?
Thanks
I re-created all the infrastructure and the problem went away. As I suspected, services created in one cluster should not move to a different cluster when instance(s) fail.
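For reference, an ECS service is created against one specific cluster, so the cluster binding and the AZ-spread placement can be made explicit when recreating it. A minimal AWS CLI sketch using the names from the question (other values are placeholders):

```
# The service is scoped to whatever --cluster names; it should never be
# scheduled onto a different cluster. Names come from the question above.
aws ecs create-service \
  --cluster ecs-int \
  --service-name ecs-int-service \
  --task-definition ecs-int-task \
  --desired-count 1 \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone
```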

How does AWS charge me for my Elastic Beanstalk applications when they are not doing anything

I created an Elastic Beanstalk environment and it created an EC2 instance. Then I thought I don't actually need this yet, so I'll stop the EC2 instance, but then it seemed to start another one.
So my question is: if I have an EB environment, will I be charged by the hour for the underlying EC2 instance all the time, or only when the service it provides is being accessed via the public Elastic IP? And if I'm charged all the time, is there a way to halt an Elastic Beanstalk application, or can I only delete it and instantiate a new environment later?
The auto scaling feature of Elastic Beanstalk will automatically start another instance if a current instance continues to fail a health check. Stopping individual instances outside of the environment will cause failed health checks and trigger a new instance to be spun up.
You will be charged while the components within the environment are running, as stated by Amazon here:
There is no additional charge for Elastic Beanstalk – you only pay for the underlying AWS resources (e.g. Amazon EC2, Amazon S3) that your application consumes.
You can completely stop an environment through the CLI. I gave this answer to a previous question about starting and stopping Elastic Beanstalk:
The EB command line interface has an eb stop command. Here is a little bit about what the command actually does:
The eb stop command deletes the AWS resources that are running your application (such as the ELB and the EC2 instances). It does, however, leave behind all of the application versions and configuration settings that you had deployed, so you can quickly get started again. eb stop is ideal when you are developing and testing your application and don't need the AWS resources running overnight. You can get going again by simply running eb start.
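As a quick usage sketch (these commands belong to the older EB CLI that the description above refers to; newer versions of the EB CLI dropped eb stop in favour of eb terminate, which tears down the whole environment):

```
# Tear down the running AWS resources (ELB, EC2 instances) but keep the
# application versions and configuration, so work can resume later.
eb stop

# Later, recreate the resources and pick up where you left off.
eb start
```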