DinD and AWS Fargate CI?

Regarding Fargate: since it seems we can't run containers in privileged mode and also can't mount /var/run/docker.sock, has anyone figured out a good solution for building/publishing Docker images inside Fargate tasks?

You probably want AWS CodeBuild.
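If you go that route, the key detail is that CodeBuild can run its build container in privileged mode, which is what allows docker build/docker push inside the build. Below is a minimal boto3 sketch of creating such a project; the project name, role ARN, repository and image are placeholders, not anything from the original question.

    import boto3

    codebuild = boto3.client("codebuild", region_name="us-east-1")

    # Hypothetical project; all names, ARNs and locations are placeholders.
    codebuild.create_project(
        name="image-builder",
        serviceRole="arn:aws:iam::123456789012:role/codebuild-image-builder",
        source={
            "type": "GITHUB",
            "location": "https://github.com/example/app.git",
            "buildspec": "buildspec.yml",  # the docker build/tag/push commands live here
        },
        artifacts={"type": "NO_ARTIFACTS"},
        environment={
            "type": "LINUX_CONTAINER",
            "computeType": "BUILD_GENERAL1_SMALL",
            "image": "aws/codebuild/standard:7.0",
            "privilegedMode": True,  # required so the build can talk to a Docker daemon
        },
    )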

I came upon this question after trying to run Jenkins build slaves in Fargate. Previously they ran in ECS on EC2 instances with docker.sock mounted.
I considered trying DinD, but with Fargate currently having a maximum storage size of 10 GB I will abandon this idea for slaves. We would simply want to be able to cache more data before pruning or recycling a slave.
In my opinion, storage size is also a factor when considering building Docker containers in Fargate.

Related

EC2 implementation of ECS doesn't stop

I currently have a Docker container running on Fargate that works well, automatically turning on and off to run a workload. Due to memory restrictions (I want more than 30 GB), I wanted to move to the EC2 version of an ECS task. The task runs in the EC2 instance that is created, but the instance doesn't turn off after the task is completed.
I want to know how to configure this automatically using ECS.
ECS Cluster Auto Scaling should be able to scale your EC2 cluster back to 0 instances if you have no tasks running on it. Have a look at this blog.
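For reference, here is a rough boto3 sketch of the capacity-provider setup that enables this scale-to-zero behaviour (the Auto Scaling group should have a minimum size of 0); the names, cluster and ASG ARN are placeholders.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Capacity provider with managed scaling over an existing Auto Scaling group
    # (placeholder ARN). With MinSize=0 on the ASG, ECS can scale the cluster to zero.
    ecs.create_capacity_provider(
        name="ec2-batch-capacity",
        autoScalingGroupProvider={
            "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:"
                                   "autoScalingGroup:11111111-2222-3333-4444-555555555555:"
                                   "autoScalingGroupName/ecs-batch-asg",
            "managedScaling": {"status": "ENABLED", "targetCapacity": 100},
            "managedTerminationProtection": "DISABLED",
        },
    )

    # Attach it to the cluster and make it the default strategy.
    ecs.put_cluster_capacity_providers(
        cluster="my-cluster",
        capacityProviders=["ec2-batch-capacity"],
        defaultCapacityProviderStrategy=[{"capacityProvider": "ec2-batch-capacity", "weight": 1}],
    )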

Jenkins setup on EC2 vs ECS

Currently we have Jenkins running on-premises (VMware) and are planning to move to the cloud (AWS). What would be the best approach to install Jenkins: on EC2 or on ECS?
The best way would be running it on EC2. Make sure you have granular control over your instance's security group and network ACLs. I would recommend using Terraform to build your environment, as you can write it as code and also version control it. https://www.terraform.io/downloads.html
Have you previously containerized your Jenkins, on VMware itself? If not, and if you don't have experience with containers, go for EC2. It will be as easy as running on any other VM. For reproducing the infrastructure, use Terraform or CloudFormation.
I would recommend dockerizing your on-premises Jenkins first. See how much effort is required to implement and administer/scale it, then go for ECS.
Otherwise, shift to EC2 and see how much admin overhead + cost you are billed. Then, if required, go for ECS.
Another point to consider is how your Jenkins is architected. Are you using master-slave? Are you running builds continuously so that VMs are never idle? Do you want easy scaling such that the build environment is created and destroyed per build execution?
If you have no experience with running containers then create it on EC2. Before running on ECS make sure you really understand containers and container orchestration.
Just want to complement the other answers by providing a link to the official AWS whitepaper:
Jenkins on AWS
It might be of special interest, as it discusses both options (EC2 and ECS) in detail:
In this section we discuss two approaches to deploying Jenkins on AWS. First, you could use the traditional deployment on top of Amazon Elastic Compute Cloud (Amazon EC2). Second, you could use the containerized deployment that leverages Amazon EC2 Container Service (Amazon ECS). Both approaches are production-ready for an enterprise environment.
There is also AWS sample solution for Jenkins on AWS for ECS:
https://github.com/aws-samples/jenkins-on-aws:
This project will build and deploy an immutable, fault tolerant, and cost effective Jenkins environment in AWS using ECS. All Jenkins images are managed within the repository (pulled from upstream) and fully configurable as code. Plugin installation is automated, including versioning, as well as configured through the Configuration as Code plugin.

docker-compose on AWS

I would like to run a web application on AWS. Locally, I run it on an Ubuntu VirtualBox VM with docker-compose; it requires 2-4 cores, 8 GB RAM, and 30-40 GB of disk. Do you think it will run on AWS? Should I install docker-compose and the app on an EC2 instance? Elastic Beanstalk, ECS (https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ecs-cli-supports-docker-compose-version-3/), or something else?
I am wary because my attempts to run it on a KVM managed by the IT department failed.
What resources are best to request for either solution?
At the moment it is more of a proof of concept/demo, but eventually I hope to deploy production on a Kubernetes cluster.
I'm looking for, in order of decreasing importance:
Simplicity and a good chance of succeeding with the deployment ASAP
Costs
Stability, QoS
You may want to consider using AWS Fargate. This lets you run container-based applications without having to manage the underlying EC2 instances. You can use Fargate with either ECS or EKS.
The ECS CLI that you link to in your question also helps you create your application and should make it easy to get started.
You can look at ecsworkshop.com for an introduction to using ECS and Fargate.
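To make the Fargate suggestion concrete, here is a minimal boto3 sketch of a task definition sized roughly like the VM described in the question (2 vCPU / 8 GB) and a one-off run_task call. The image, role, cluster, subnet and security group are all placeholders; the ECS CLI linked in the question can generate much of this from the compose file instead.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Fargate-compatible task definition (2 vCPU, 8 GB), placeholder image and role.
    ecs.register_task_definition(
        family="demo-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="2048",
        memory="8192",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions=[{
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest",
            "essential": True,
        }],
    )

    # Run it once on Fargate in a placeholder subnet/security group.
    ecs.run_task(
        cluster="demo-cluster",
        launchType="FARGATE",
        taskDefinition="demo-app",
        count=1,
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }},
    )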

Schedule Docker image to be run periodically on AWS ECS?

How do I schedule a Docker image to be run periodically (hourly) using ECS, without having to use a continually running EC2 instance + cron? I have a Docker image containing third-party binaries and a Python project.
The latter approach is not viable long-term, as it's expensive to keep the instance running 24/7 while it's only used for a small fraction of the day, given that each invocation of the script only lasts ~3 minutes.
For an AWS ECS cluster, it is recommended to have at least one EC2 server running 24x7. Have you looked at whether AWS Fargate can run your Docker container? Or AWS Batch? If Fargate and AWS Batch are not possible, then for your requirement I would recommend something like this, without ECS:
Build an EC2 AMI with Docker and the required software and libraries pre-installed.
Have the AWS Instance Scheduler spin up an EC2 server every hour and, as part of the user data, start a Docker container with the image you mentioned.
https://aws.amazon.com/answers/infrastructure-management/instance-scheduler/
If you know your task execution time (say about 5 minutes), bring the server down with the scheduler after 8 or 10 minutes.
The above approach will blindly start an EC2 instance and stop it without knowing whether your Python work finished successfully. We can still improve on it with a combination of Lambda and CloudFormation templates. Let me know your thoughts :)
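If you did replace the Instance Scheduler with your own Lambda functions, a bare-bones sketch might look like the following; the instance ID is a placeholder for an instance built from the Docker AMI, and the two handlers would be wired to an hourly schedule and a +10-minute schedule respectively.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: instance built from the pre-baked Docker AMI

    def start_handler(event, context):
        # Triggered hourly; the instance's user data runs `docker run ...` on boot.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])

    def stop_handler(event, context):
        # Triggered ~8-10 minutes later, once the ~3 minute job has had time to finish.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])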
Actually, it's possible to schedule the launch directly in CloudWatch by defining a rule, as explained in
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduled_tasks.html
This solution is cleaner because you will not need to worry about the execution time: once finished, the task will just terminate, and a new one will be spawned on the next cycle.
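A small boto3 sketch of the same kind of scheduled-task rule is below; the rule name and the cluster, task-definition and role ARNs are placeholders, and the console wizard in the linked docs sets up the equivalent.

    import boto3

    events = boto3.client("events", region_name="us-east-1")

    # Hourly schedule (placeholder rule name).
    events.put_rule(
        Name="hourly-ecs-task",
        ScheduleExpression="rate(1 hour)",
        State="ENABLED",
    )

    # Target the ECS cluster so the rule launches one task per invocation.
    events.put_targets(
        Rule="hourly-ecs-task",
        Targets=[{
            "Id": "run-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",  # must allow ecs:RunTask
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-job:1",
                "TaskCount": 1,
            },
        }],
    )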

Deployment methods for a Docker-based microservices architecture on AWS

I am working on a project using a microservices architecture.
Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.
It is my understanding that AWS recently announced support for multi-container Docker environments in Elastic Beanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop, just like Docker Compose.
However, it seems I then only have the option to deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.
I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?
We are currently using Amazon ECS to accomplish exactly what you are trying to achieve. You can define your Docker container as a task definition and then create an ECS service, which will handle the number of instances, scaling, etc.
One thing to note is that Amazon uses the word "container" a lot in the documentation; they may be talking about the EC2 instances used in the cluster that host your Docker containers.
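As a rough boto3 illustration of that per-service pattern (names, images and the cluster are placeholders, not from the original answer): each microservice gets its own task definition family and ECS service, so releasing a new image version only rolls that one service.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # New image version for one microservice -> new task definition revision.
    td = ecs.register_task_definition(
        family="orders-service",
        containerDefinitions=[{
            "name": "orders",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:1.4.2",
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 8080}],
        }],
    )

    # Roll only this service to the new revision; the other services are untouched.
    ecs.update_service(
        cluster="my-cluster",
        service="orders-service",
        taskDefinition=td["taskDefinition"]["taskDefinitionArn"],
    )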