Mount a volume in AWS copilot task run

I'm using copilot to execute containers that test our code.
Currently I can make the tests run with the following command:
copilot task run -n <app_name> --default \
--image <image_from_ecr> \
--command <test_file> --cpu 1024 --memory 2048
This creates the task based on the app image and executes <test_file>. If I use the --follow flag I can see the execution, and all goes well.
What I need now, is to be able to read the test outputs that nightwatch writes inside the container.
I used to mount a volume when I was executing the tests with docker run.
But now I don't know how to mount a volume with the copilot task run command.
If there is another way to get the generated files, any help would be appreciated.

It is not possible to mount a volume to a task executed this way.
The solution for my case is:
Execute the task with an IAM role capable of writing to an S3 bucket.
Upload the test output data to the bucket after the tests finish. This is done from the task itself.
Download the test outputs from the S3 bucket in the script that controls the test executions on the host. That script runs the tasks using copilot, so it knows all the parameters and options and can download the appropriate folder from the bucket.
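As a rough sketch of steps 2 and 3 (the bucket name, run id and output folder are placeholders; nightwatch's output folder depends on your configuration):
# inside the container, after the tests finish (assumes the task's IAM role can write to the bucket)
aws s3 cp ./tests_output s3://<results-bucket>/<run-id>/ --recursive
# on the host, from the script that launched the task
aws s3 cp s3://<results-bucket>/<run-id>/ ./test-results/ --recursive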

Related

Interactive shell in Docker image with Amazon ECS with `aws ecs run-task` followed by `aws ecs execute-command`

I would like to launch an interactive shell into a public Docker image on my AWS ECS/Fargate cluster to run network/connectivity tests from inside the cluster.
It seems the official way to do this is with aws ecs run-task followed by aws ecs execute-command [1][2].
I'd like to use existing, public Docker Hub images rather than build custom images if possible.
If I do run-task with no command or the default command, the task exits, and execute-command won't work on an exited task:
"Essential container in task exited"
If I set a Docker command of sleep 10000, I get:
"CannotStartContainerError: ResourceInitializationError: failed to create new container runtime task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: \"sleep 10000\": executable file not found in $PATH: unknown",
Ideally, run-task and execute-command would be combined in one step. I don't want a background task running indefinitely, I want a shell to run a few commands interactively, that is cleaned up when I'm finished. How would I achieve this?
[1] https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html
I had the same issue. I was finally able to get a container to sit "idle" with the following command inside the Task Definition:
"tail", "-F", "/dev/null"
Then I could connect in with an interactive execute-command.
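For reference, a rough sketch of the two calls, with the idle command passed as an override so no custom image is needed (the cluster, task definition, subnet and container names are placeholders, and ECS Exec must be allowed by the task role):
aws ecs run-task \
  --cluster <cluster-name> \
  --task-definition <task-def> \
  --launch-type FARGATE \
  --enable-execute-command \
  --network-configuration 'awsvpcConfiguration={subnets=[<subnet-id>],assignPublicIp=ENABLED}' \
  --overrides '{"containerOverrides":[{"name":"<container-name>","command":["tail","-F","/dev/null"]}]}'
aws ecs execute-command \
  --cluster <cluster-name> \
  --task <task-id-from-run-task> \
  --container <container-name> \
  --interactive \
  --command "/bin/sh"
The key detail is that the command is supplied as separate array elements rather than the single string "sleep 10000", which is why the exec error above complained that no such executable exists.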

Why is my Docker container stuck in a state of "Created"?

I am trying to deploy to EC2 using Bitbucket Pipelines and AWS CodeDeploy. I have everything set up so that the upload step for the CodeDeploy agent works as it should; it's just that when I try running the statement in the deploy step, my script.sh command /usr/local/bin/docker-compose -f /home/ec2-user/my-app/docker-compose.yml run --rm composer install fails.
Everything else works, and if I remove this step it deploys successfully. If I execute this command manually it also works: the container runs and then exits as it should. I've checked permissions, changed my IAM setup and done everything I can think of before coming here.
So after a LONG time searching, running countless pipeline deployments and hammering my build minutes for weeks on end, I finally got to the bottom of the problem and am hoping this may help anyone with the same problem.
It was a permissions issue that prevented commands such as docker-compose run from executing. With AWS CodeDeploy, we run the scripts for the lifecycle hooks in appspec.yml, usually as root. However, the AWSCodeDeployRole needs full permissions; in my case this was for EC2, so it was missing the AmazonEC2FullAccess policy, which needs to be attached to the AWSCodeDeployRole. Also add ec2.amazonaws.com to the JSON trust policy. This is what worked for me... 2 months later!
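If you prefer to apply the same fix from the CLI instead of the console, something along these lines should work (the role name is a placeholder for your CodeDeploy service role):
# attach the missing managed policy
aws iam attach-role-policy \
  --role-name <CodeDeployServiceRole> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# trust policy allowing both CodeDeploy and EC2 to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": ["codedeploy.amazonaws.com", "ec2.amazonaws.com"] },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam update-assume-role-policy \
  --role-name <CodeDeployServiceRole> \
  --policy-document file://trust-policy.json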

How to automatically start, execute and stop EC2?

I want to test my Python library in GPU machine once a day.
I decided to use AWS EC2 for testing.
However, the fee for a GPU machine is very high, so I want to stop the instance after the test ends.
Thus, I want to do the following once a day, automatically:
Start EC2 instance (which is setup manually)
Execute command (test -> push logs to S3)
Stop EC2 (not remove)
How to do this?
It is very simple...
Run script on startup
To run a script automatically when the instance starts (every time it starts, not just the first time), put your script in this directory:
/var/lib/cloud/scripts/per-boot/
Stop instance when test has finished
Simply issue a shutdown command to the operating system at the end of your script:
sudo shutdown -h now
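A minimal sketch of such a per-boot script (the test command, log path and bucket name are placeholders for your own setup):
#!/bin/bash
# /var/lib/cloud/scripts/per-boot/run-tests.sh  (make it executable with chmod +x)
# run the test suite and capture its output
cd /home/ec2-user/my-library
./run_gpu_tests.sh > /tmp/test.log 2>&1
# push the logs to S3
aws s3 cp /tmp/test.log s3://<log-bucket>/$(date +%F)/test.log
# stop the instance when finished
sudo shutdown -h now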
You can push script logs to custom CloudWatch namespaces. For example, when the process ends, publish a state metric to CloudWatch. In CloudWatch, create an alarm based on the state of the process, so that a completed state triggers an AWS Lambda function that stops the instance after your job finishes.
Also, if you want to start and stop at specific times, you can use the EC2 Instance Scheduler to start/stop instances. It works like a cron job at specific intervals.
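Publishing such a completion state to a custom namespace could look like this (the namespace and metric name are arbitrary placeholders):
aws cloudwatch put-metric-data \
  --namespace "MyTests" \
  --metric-name "TestJobCompleted" \
  --value 1
A CloudWatch alarm on that metric can then trigger the Lambda function that stops the instance, as described above.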
You can use the AWS CLI.
To start an instance you would do the following:
aws ec2 start-instances --instance-ids i-1234567890abcdef0
and to stop the instance you would do the following:
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
To execute commands inside the machine, you will need to SSH into it and run the commands that you need; then you can use the AWS CLI to upload files to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
I suggest reading the AWS CLI documentation; you will find most, if not all, of what you need to automate AWS commands there.
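Putting those pieces together, a rough sketch of the daily cycle could look like this (the instance ID, host and script names are placeholders):
aws ec2 start-instances --instance-ids i-1234567890abcdef0
aws ec2 wait instance-running --instance-ids i-1234567890abcdef0
# run the tests over SSH and push the logs from inside the instance
# (SSH may need a little extra time after the instance reports "running")
ssh ec2-user@<instance-public-ip> './run_tests.sh && aws s3 cp test.log s3://mybucket/test.log'
aws ec2 stop-instances --instance-ids i-1234567890abcdef0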
I created a shell script to start an EC2 instance (if not already running), connect via SSH and, if you want, run a command.
https://gist.github.com/jotaelesalinas/396812f821785f76e5e36cf928777a12
You can use it in three different ways:
./ec2-start-and-ssh.sh -i <instance id> -s
will show status information about your instance: running state and private and public IP addresses.
./ec2-start-and-ssh.sh -i <instance id>
will connect and leave you inside the default shell.
./ec2-start-and-ssh.sh -i <instance id> <command>
will run whatever command you specify, e.g.:
./ec2-start-and-ssh.sh -i <instance id> ./run.sh
./ec2-start-and-ssh.sh -i <instance id> sudo poweroff
I use the last two commands to run periodic jobs while minimizing billing costs.
I hope this helps!

How to add a wait time before executing a step in Bitbucket Pipeline

I have a Bitbucket pipeline that creates AWS resources using CloudFormation and deploys a website to them. But the deployment fails even though CloudFormation creates the stack correctly. I think the issue is that, by the time the deployment happens, the CloudFormation S3 bucket creation may not have finished.
I have a Hugo website and I have created a Bitbucket pipeline to deploy it to the server. It creates an S3 bucket using CloudFormation to host the website and then uploads the Hugo site to it. When I run the steps of the pipeline manually in a terminal, with a delay between each step, everything succeeds. But when it runs in the Bitbucket pipeline, it gives an error saying the S3 bucket I'm trying to upload content to is not available. When I check in AWS, the bucket is actually there, which means CloudFormation has worked correctly. My assumption is that when the files start to copy, the bucket is not yet available for uploads. Is there a workaround for this? When doing it locally I can wait between the CloudFormation creation and the file copy, but how do I handle this in the Bitbucket pipeline environment? Following is my pipeline code.
pipelines:
  pull-requests:
    '**':
      - step:
          script:
            - aws cloudformation create-stack --stack-name demo-web --template-body file://cloudformation.json --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
            - hugo
            - aws s3 cp public/ s3://demo-web/ --recursive
How do I handle this scenario in the correct way? Is there a workaround for this situation, or is the problem I have identified not the actual problem?
First, to wait in Bitbucket Pipelines you should be able to just use sleep x, where x is the number of seconds you want to sleep.
On a different note, bear in mind that after the first run of this pipeline, the deployment will potentially fail the next time, as you are using create-stack, which will fail if the stack already exists...
Using the AWS CloudFormation API for this is best practice.
There is a wait command just as there is a create-stack command; the wait command and its options block your processing until the stack is in CREATE_COMPLETE status before continuing.
Option 1:
Put your stack creation command into a shell script, and as a second step in that shell script invoke the wait stack-create-complete command.
Then invoke that shell script in the pipeline instead of the direct command.
Option 2:
In your Bitbucket pipeline, right after your create-stack command, invoke aws cloudformation wait stack-create-complete before the hugo/upload commands.
I prefer option one because it allows you to manipulate and trace the creation as you wish.
See documentation: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/wait/stack-create-complete.html
Using the CloudFormation wait command is great because you don't have to guess how long to wait; the bucket is guaranteed to be available before you continue.
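A minimal sketch of the commands involved (they can live directly in the pipeline as in option 2, or in the shell script from option 1), using the stack and bucket names from the question:
aws cloudformation create-stack \
  --stack-name demo-web \
  --template-body file://cloudformation.json \
  --parameters ParameterKey=S3BucketName,ParameterValue=demo-web
# block until the stack (and therefore the bucket) is fully created
aws cloudformation wait stack-create-complete --stack-name demo-web
hugo
aws s3 cp public/ s3://demo-web/ --recursive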

AWS ECS - Ways to deploy containers

The use case is: a developer makes some code changes, and the below things happen automatically -
a build runs, the application artifact is created, a Docker image is generated with the artifact, the image is pushed to the Docker registry, and the AWS ECS tasks and ECS services are updated.
I want to know what the ways are to achieve the above automation of updating AWS ECS services. Until now I have implemented the AWS ECS update from a Jenkins build using:
1. post-build AWS CLI scripts run from Jenkins to update ECS
2. a post-build action or pipeline step that invokes an AWS Lambda function. I have created a Lambda function in Java to implement that.
Please let me know the other ways we can achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, Events rule and target (for scheduling).
Then you just call this script in your terminal with ./setup.sh and it should execute all your commands at once.
aws ecr create-repository \
--repository-name tasks-${TASK_NAME}-${TASK_ENV} \
;
or, if you want to set up your resources via CloudFormation templates, you can launch them using this command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
--stack-name stack-name \
--capabilities CAPABILITY_IAM \
--template-body file://name.yml \
--parameters ParameterKey=ParamName,ParameterValue=${PARAM_NAME} \
;
Take a look at Codefresh - https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline
Build Step
Push to Registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: ECS Repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before this script.
The -m 50 tells it that it can deploy even if the number of nodes drops to 50%. Ideally you would have an extra node to do deployments, but if you can't afford that setting this would ensure that deployments continue to happen.
If you are also using an ELB (load balancer), then the default deregistration delay for target groups is 5 minutes which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your docker container. You should lower this by going to the Target Groups in EC2 dashboard and click the Edit Attributes to edit it. Otherwise your deployments may take forever.
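If you would rather make that change from the CLI than the EC2 dashboard, something like the following should work (the target group ARN is a placeholder):
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30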
I think nobody has mentioned CodePipeline from AWS; it really integrates easily with many AWS services, including ECS and CodeCommit:
Push commit to CodeCommit Repo, triggering the pipeline execution.
(Optional) Configure a Manual Approval step that needs you to take an action before Build.
Run a CodeBuild Project that builds your Dockerfile and push the image to an ECR Repo.
Run a "Deploy" step that deploys to a specific ECS Service. It updates the services with a new Task Definition that points to the new ECR Image.
I have used this flow with Bitbucket as well; just configure a Bitbucket pipeline that pushes all new code to a CodeCommit repo as a previous step.
Exactly as in @minamiyojo's and @astav's answers, we ended up gluing ecs-deploy to a template engine to power up our CD pipeline with some reusable components, which we just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; I hope this helps your scenario too.