How to set default state to my aws ecs service? - amazon-web-services

I'm new to AWS.
About 30 minutes ago, I launched ECS to deploy my Docker container.
Everything looked fine.
After finishing my work, I deleted the cluster and the task definition.
But in my EC2 console, new EC2 instances keep launching every 2 minutes, indefinitely.
I deleted every resource related to it.
Why do they keep launching automatically?
Is there any way to fully clean up the AWS ECS configuration?
Thanks.

As per your confirmation, recreating the associated Auto Scaling group, which was responsible for spinning up the instances, solved your problem.
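For anyone hitting the same issue, a rough way to confirm and clean this up from the CLI (region and group name are placeholders; the group name will be whatever ECS created for your cluster) is to list the Auto Scaling groups and delete the one that keeps replacing instances:
aws autoscaling describe-auto-scaling-groups --region <region> --query "AutoScalingGroups[].[AutoScalingGroupName,DesiredCapacity]"
# --force-delete removes the group together with the instances it keeps launching
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name <asg_name> --force-delete --region <region>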

Related

EC2 implementation of ECS doesn't stop

I currently have a Docker container running on Fargate that works well, automatically turning on and off to run a workload. Due to memory restrictions (I want more than 30 GB), I wanted to move to the EC2 version of the ECS task. The task runs on the EC2 instance that gets created, but the instance doesn't turn off after the task is completed.
I want to know how to configure this automatically using ECS.
ECS Cluster Auto Scaling should be able to scale your EC2 cluster back to 0 instances if you have no tasks running on it. Have a look at this blog.
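As a rough sketch of that setup (all names and ARNs below are placeholders), you would set the ASG's minimum size to 0 and attach it to the cluster through a capacity provider with managed scaling enabled, so ECS can scale the group down when no tasks are running:
aws ecs create-capacity-provider --name <cp_name> \
    --auto-scaling-group-provider "autoScalingGroupArn=<asg_arn>,managedScaling={status=ENABLED,targetCapacity=100},managedTerminationProtection=DISABLED"
aws ecs put-cluster-capacity-providers --cluster <cluster_name> \
    --capacity-providers <cp_name> \
    --default-capacity-provider-strategy capacityProvider=<cp_name>,weight=1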

How to configure AWS Fargate task to run a container that will create a cloudwatch custom metric

I need to set up monitoring in an AWS account to ping certain servers from outside the account and create a custom CloudWatch metric with the packet loss, and I need to deploy the solution without any EC2 instance.
My first choice was Lambda, but it seems that Lambda does not allow pinging from it.
My second choice was a container, as Fargate can run containers without any EC2 instance. The thing is, I'm able to run the task definition and I see the task in the RUNNING state in the cluster, but the CloudWatch metric is never received.
If I use a normal EC2 cluster, the container works perfectly, so I assume I have some error in the configuration, but I'm lost as to why. I have added admin rights to the ECS task execution role and opened all ports in the security group.
I have tried public/private subnets with no success.
Could anyone please help me?
Here you can see that the task is certainly RUNNING; however, the app doesn't generate any further action.
So I solved the problem. There was a problem inside the container. It seems Fargate doesn't like cron, so I removed the cron schedule from the container and used a CloudWatch Events rule instead, and it works perfectly.
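For anyone doing the same, a minimal sketch of that schedule with the CLI (every name and ARN below is a placeholder) is to create the rule and point it at the Fargate task definition; inside the container, the metric itself can then be published with a plain put-metric-data call, which requires cloudwatch:PutMetricData on the task role (not just the execution role):
aws events put-rule --name <rule_name> --schedule-expression "rate(5 minutes)"
aws events put-targets --rule <rule_name> --targets '[{"Id":"1","Arn":"<cluster_arn>","RoleArn":"<events_role_arn>","EcsParameters":{"TaskDefinitionArn":"<task_def_arn>","TaskCount":1,"LaunchType":"FARGATE","NetworkConfiguration":{"awsvpcConfiguration":{"Subnets":["<subnet_id>"],"AssignPublicIp":"ENABLED"}}}}]'
# inside the container, after the ping run finishes
aws cloudwatch put-metric-data --namespace <namespace> --metric-name PacketLoss --value <loss_percentage>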

AWS Beanstalk Restarts Instance

I have created a pipeline using AWS CodePipeline, GitHub, Jenkins, and AWS Elastic Beanstalk (Docker) running a Node.js application. Every time a build is triggered in AWS CodePipeline and a deployment is done on the Elastic Beanstalk environment, its corresponding EC2 instance is terminated and another one is created afresh, but we only want the app to be deployed without termination of the EC2 instance. What could be the cause of the termination on every build/deployment?
How many instances do you have in your Beanstalk environment, and what deployment method are you using: All at Once, Rolling, Rolling with an Additional Batch, or Immutable?
With those answers, we can continue investigating.
I switched to Immutable deployments and stopped experiencing the issue, as explained here: Difference between rolling, rolling with additional batch and immutable deployments in AWS?
It turns out that Rolling deployments can cause the timeouts, especially since I only needed a single instance.
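For reference, a sketch of switching an existing environment to Immutable deployments from the CLI (the environment name is a placeholder):
aws elasticbeanstalk update-environment --environment-name <env_name> \
    --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Immutable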

Multiple Aws CodeDeploy applications in a newly added instance

I think I've done something wrong while designing my AWS infrastructure.
Currently I have one Auto Scaling group with one EC2 instance.
On this instance there are 6 Laravel projects associated with 6 applications in AWS CodeDeploy, so when I want to update a version I simply deploy it using CodeDeploy.
The issue comes when the Auto Scaling group adds instances to the group: all my CodeDeploy applications are deployed to the newly created instance, and the deployment fails with this message:
One or more lifecycle events did not run and the deployment was unsuccessful. Possible causes include:
(1) Multiple deployments are attempting to run at the same time on an instance;
So... what's the best way to get this to work?
AWS recommends associating a single deployment group with an ASG and consolidating deployments into a single deployment for proper scale-out. Each deployment group associates a lifecycle hook with the ASG, through which the ASG notifies the deployment group when scale-out events occur. Parallel deployments (in your case 6) are prone to CodeDeploy timeouts (5-60 min), and the CodeDeploy agent running on the EC2 instance can handle only one command at a time.
If each of your apps deploys quickly enough (<60 mins in total), you may want to consolidate them into a single application and deploy via CodeDeploy hooks. Otherwise, I would suggest using a different ASG per app.
Refer: https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/
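Before consolidating, it can help to see which deployment groups are tied to the ASG; a rough way to check from the CLI (application and group names are placeholders):
aws deploy list-deployment-groups --application-name <app_name> --region <region>
aws deploy get-deployment-group --application-name <app_name> --deployment-group-name <dg_name> --query "deploymentGroupInfo.autoScalingGroups" --region <region>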
list lifecycle hooks:
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name <asg_name> --region <region>
If the launch of new EC2 instances goes into an infinite loop of terminate and launch, you can remove the lifecycle hook:
aws autoscaling delete-lifecycle-hook --lifecycle-hook-name <lifecycleName> --auto-scaling-group-name <asg_name> --region <region>

For AWS ASG, how to set up custom readiness check for new instances?

We have an AutoScaling Group that runs containers (using ECS). When we add OR replace EC2 instances in the ASG, they don't have the docker images we want on them. So, we run a few docker pull commands using cloud-init to fetch the images when they boot up.
However, the ASG thinks that the new instance is ready, and terminates an old instance. But in reality, this new instance isn't ready until all docker images have been pulled.
E.g.
Let's say my ASG's desired count is 5, and I have to pull 5 containers using cloud-init. Now, I want to replace all EC2 instances in my ASG.
As new instances start to boot up, the ASG will terminate old instances. But due to the docker pull lag, there will be a window during the deploy when the number of actually ready instances drops well below the desired count (to 3 or 2, say).
How can I "mark an instance ready" only when cloud-init is finished?
Note: I think CloudFormation can bridge this communication gap using cfn-bootstrap, but I'm not using CloudFormation.
What you're looking for is Auto Scaling lifecycle hooks. You can keep an instance in the Pending:Wait state until your docker pull has completed, and then move the instance to InService. All of this can be done with the AWS CLI, so it should be achievable with an Auto Scaling command before and after your docker commands.
The link to the documentation I have provided explains this feature in detail and provides great examples on how to use it.
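As a rough sketch of what that can look like (hook and group names are placeholders): create a launching lifecycle hook on the ASG, and at the end of your cloud-init/user data, after the docker pull commands succeed, signal that the instance may move to InService:
aws autoscaling put-lifecycle-hook --lifecycle-hook-name <hook_name> \
    --auto-scaling-group-name <asg_name> \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --heartbeat-timeout 900 --default-result ABANDON
# run at the end of cloud-init, after the docker pulls have finished
aws autoscaling complete-lifecycle-action --lifecycle-hook-name <hook_name> \
    --auto-scaling-group-name <asg_name> \
    --lifecycle-action-result CONTINUE \
    --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"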