ECS service restart after deploying a new version of a Docker image - amazon-web-services

Hi guys,
I have an EC2-backed ECS cluster with a service and an instance. The task is based on the latest version of a Docker image stored in ECR. Now I'm looking for the simplest way to finish my pipeline with an automatic "refresh" of the service whenever the latest image has been deployed. I can't find any AWS feature that solves this problem, but I found this: https://github.com/fdfk/ecsServiceRestart but unfortunately it doesn't work (it can't communicate with my service). Still, the approach inspired me a lot: according to the author, the solution duplicates the service before the update, so it provides something like HA with no downtime. Guys, is it possible to go through these steps without any downtime at all?
deploy a new version of the image,
the service detects the new version,
the service automatically refreshes, rolling out the new version

Finally I found the best way to achieve my goal, and it was very easy: I just used ecs-deploy https://github.com/fabfuel/ecs-deploy, which I adopted into my pipeline. I set a longer timeout with the no-warnings flag, and this script does exactly what I need. In my example I have one cluster with 3 instances and 1 service with two running tasks (two identical nodes behind a load balancer). When I update my Docker image in ECR, ecs-deploy automatically updates the first instance and, following a blue-green-style deployment, updates the remaining instances one by one, keeping the load balancer registrations in sync. So in this way I achieved fully automated deployment after accepting a merge request (of course I skipped a few steps in this description). I hope this will be helpful for somebody. Cheers!
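For reference, the pipeline step might look roughly like this (cluster and service names are placeholders, and flag names can vary between ecs-deploy versions, so check ecs deploy --help):

    # install the tool inside the CI job
    pip install ecs-deploy
    # redeploy the service with the image already pushed to ECR;
    # --timeout extends the wait for the rollout to stabilize, and
    # --ignore-warnings keeps transient warnings from failing the step
    ecs deploy my-cluster my-service --timeout 600 --ignore-warnings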

Related

How can I update a container image with the imageDigest parameter in an AWS Fargate cluster with the AWS CLI

My cluster is up and a task is running in it.
I want to update the container image in the running task in the cluster; how can I do that?
My image uses the latest tag, and every time new changes come in they are pushed to ECR under the latest tag.
Deploying with the tag latest isn't a best practice because you lose a lot of visibility into what you are running (e.g. scale-out events where you deploy more tasks as part of a service will all end up using latest but will effectively be running different versions of the code, etc.).
This pontificating aside, you didn't say whether you started your task(s) standalone using the run-task API or as part of a service.
If the former, you need to stop your task and run it again. If the latter, you need to redeploy your service using the --force-new-deployment flag.
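For reference, the corresponding AWS CLI calls might look like this (cluster, service, and task definition names are placeholders):

    # task(s) started as part of a service: force a new deployment so
    # the image behind the tag is pulled again
    aws ecs update-service --cluster my-cluster --service my-service \
        --force-new-deployment

    # standalone task(s) started via run-task: stop and start again
    aws ecs stop-task --cluster my-cluster --task <task-id>
    aws ecs run-task --cluster my-cluster --task-definition my-task-def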

AWS ECS Restart service if not running latest image

I'm setting up automated deployments using Azure DevOps to deploy to AWS ECS running services on an EC2 instance (not Fargate). Docker images are loaded into ECR, and ECS is fully set up using CloudFormation templates.
After deployments are done (new image loaded and/or any CloudFormation changes applied) I need to restart the service only if it's not running the latest version of the image that the Task Definition uses. If CloudFormation updated the service or task definition then it will automatically restart and I don't want to restart it again. Also, if the docker image was not deployed then I don't want to restart it. So, is there a way to check what version of an image a service task is currently running?
I can't check if the version of a task definition changed because most of the time it would just be the image that changes and not the task definition. I am loading the image using the Latest tag so I can just call ForceNewDeployment on the service to have it deploy the new version, but if it's possible I don't want to do that if nothing has changed.
I initially thought of trying to keep track of it in the deployment process, but due to how our release pipeline is set up and some of the limits in Azure DevOps, that's not something I'll be able to do.
I think the best solution, if it's possible, would be to check what version of an image is currently running for each task in a service and, if an image has been updated, do a ForceNewDeployment on the service. But... is there a way to accomplish that? Or would there be a better way to do what I'm trying to do?
Edit: When I load the Docker image to ECR I'm adding two tags: "Latest" and the DevOps release number ("1234"). I was initially hoping to just query the ECR images at the end of a release and if one has a tag with the current release number then I know the image has been updated. The issue is that I don't know if the service was already restarted due to a CloudFormation change.
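A sketch of that digest comparison with the AWS CLI, assuming placeholder cluster, service, and repository names (note that describe-tasks only reports imageDigest with reasonably recent container agents):

    # digest of the image currently tagged "latest" in ECR
    ECR_DIGEST=$(aws ecr describe-images --repository-name my-repo \
        --image-ids imageTag=latest \
        --query 'imageDetails[0].imageDigest' --output text)

    # digest the service's running tasks were started from
    TASK_ARNS=$(aws ecs list-tasks --cluster my-cluster \
        --service-name my-service --query 'taskArns' --output text)
    RUNNING_DIGEST=$(aws ecs describe-tasks --cluster my-cluster \
        --tasks $TASK_ARNS \
        --query 'tasks[0].containers[0].imageDigest' --output text)

    # restart only if the image actually changed
    if [ "$ECR_DIGEST" != "$RUNNING_DIGEST" ]; then
        aws ecs update-service --cluster my-cluster \
            --service my-service --force-new-deployment
    fi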

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale, ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deployment of a Docker Compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which lives in a git repo on a production branch; that's how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better to make a CloudFormation template for deploying your EC2 resources, with all the required groups, auto scaling, and the other pieces.
Then create an AMI with Docker Compose installed, plus anything else your EC2 environment requires.
Then you can use a CodeDeploy pipeline; AWS also provides a private container registry (ECR) that you may want to use.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance, run the
docker-compose up
command, and you are done.
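A minimal sketch of that last step, assuming SSH access to the instance (host, user, and paths are placeholders):

    # copy the compose file to the instance and (re)start the stack
    scp docker-compose.yml ec2-user@my-ec2-host:/srv/app/
    ssh ec2-user@my-ec2-host 'cd /srv/app && docker-compose pull && docker-compose up -d'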
Let me know if you want more help; I'm open to discussion.
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, so you can use swarm mode.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name); see the sketch after this list. This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes you need to open a few ports between them (swarm uses 2377/tcp for cluster management, 7946/tcp+udp for node communication, and 4789/udp for overlay network traffic).
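A rough sketch of steps 2-4 as pipeline commands (registry, host, and stack names are placeholders):

    # 2 - build and push the images referenced by the compose file
    docker build -t registry.example.com/myapp/web:latest .
    docker push registry.example.com/myapp/web:latest

    # 3 - copy the compose file to a swarm manager node
    scp docker-compose.yml user@manager-node:/srv/app/

    # 4 - deploy/update the stack; swarm performs rolling updates
    ssh user@manager-node 'docker stack deploy -c /srv/app/docker-compose.yml my_app'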
I see you mentioned that ECS might seem impractical at such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take you much time.

Capistrano and Auto-Scaling AWS

We're trying to figure out the best way to deploy to an auto-scaling AWS setup using Capistrano, and we're stuck on how to ensure new servers automatically get the latest code without having to rely on AMIs.
Any ideas?
Using User Data, you can have your EC2 instances pull the latest code each time a new instance is launched.
More info on user data here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
TL;DR: user data is pretty much a shell script that's executed when your EC2 instance launches. You can get it to pull the latest code and run it.
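For illustration, a minimal user data script, assuming an Amazon Linux AMI and a placeholder repository:

    #!/bin/bash
    # user data runs as root on first boot: fetch the latest code
    # from the production branch and start the app
    yum install -y git
    git clone -b production https://github.com/example/myapp.git /srv/myapp
    cd /srv/myapp && ./start.sh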
@Moe's answer (or something like it) is the right one. But just as another thought, you could write some Ruby which queries AWS on deploy to fetch the list of servers to which Capistrano will deploy. The issue with this approach is that you would have to manually deploy to all servers every time auto-scaling adds a server, which kind of defeats the purpose.

How to deploy to autoscaling group with only one active node without downtime

There are two questions about AWS autoscaling + deployment that I cannot clearly answer:
I'm currently trying to figure out what the best strategy is to deploy, without downtime, to an EC2 instance behind an ELB which is the only member of an autoscaling group.
Right now the EC2 setup is done with Puppet, including the deployment of the application, triggered after a successful build by Jenkins.
The best solution I have found is to check, via a script, how many instances are registered at the ELB. If a single one is registered, spawn a new one that runs Puppet on startup (so the new node will be up to date), and kill the old node.
How to deploy (autoscaling EC2 behind an ELB) without delivering two different versions of the application?
Possible solution: check via a script how many EC2 instances are registered with the ELB, spawn the same number of instances, register all the new ones, and deregister all the old ones.
My experience with AWS teaches me that AWS has a service for everything. So are there any services out there that accomplish my requirements, making my solutions unnecessary?
You can create an entirely new environment with its own ELB, and when it's ready and checked, you switch the DNS record to the new ELB.
Anyway, for a brief time (60 seconds or so, depending on the TTL of your DNS record) some users will see your old version while others will see the new one.
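If Route 53 hosts the zone, the switch itself can be scripted; a sketch with a placeholder zone ID and DNS names:

    # point the record at the new ELB; lowering the TTL ahead of time
    # shortens the window where both versions are visible
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z123EXAMPLE \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "www.example.com",
              "Type": "CNAME",
              "TTL": 60,
              "ResourceRecords": [{"Value": "new-elb-1234.us-east-1.elb.amazonaws.com"}]
            }
          }]
        }'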
In the end there were two possible solutions. Both of them would temporarily deliver two versions of the app.
Use AWS CodeDeploy to perform a sequential deployment (one instance after another). This solution offers the possibility to roll back to a previous state and visually shows the state and results of the deployment.
Create a Python script to get the registered nodes (using Boto) and run the appropriate Puppet script on them (using Fabric); a rough CLI equivalent is sketched below. This solution offers more control over the deployment but requires some time to build these scripts. Also, there can be bugs.
For now I chose AWS CodeDeploy because it's already available and, hopefully, well tested.
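As a rough illustration of the second option, an equivalent sketch using the AWS CLI and SSH in place of Boto and Fabric (the load balancer name, SSH user, and Puppet invocation are assumptions):

    # fetch the instances currently registered with the classic ELB
    INSTANCE_IDS=$(aws elb describe-load-balancers \
        --load-balancer-names my-elb \
        --query 'LoadBalancerDescriptions[0].Instances[].InstanceId' \
        --output text)

    # resolve each instance's DNS name and run Puppet on it in turn
    for ID in $INSTANCE_IDS; do
        HOST=$(aws ec2 describe-instances --instance-ids $ID \
            --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
        ssh admin@$HOST 'sudo puppet agent --test'
    done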