Why is my Docker container stuck in a state of "Created"?

I am trying to deploy to EC2 using Bitbucket Pipelines and AWS CodeDeploy. I have everything set up so that the upload step for the CodeDeploy agent works as it should; it's just that when the deploy step runs, the command in my script.sh, /usr/local/bin/docker-compose -f /home/ec2-user/my-app/docker-compose.yml run --rm composer install, fails.
Everything else works, and if I remove this step, it deploys successfully. If I execute the command manually, it also works: the container runs and then exits as it should. I've checked permissions, changed my IAM setup, and tried everything I can think of before coming here.

So after a LONG time searching, countless pipeline deployments, and weeks of hammering my build minutes, I finally got to the bottom of the problem and am hoping this may help anyone facing the same issue.
It was a permissions issue: the lifecycle hook scripts did not have sufficient rights to execute commands such as docker-compose run. With AWS CodeDeploy, the scripts for the lifecycle hooks in appspec.yml usually run as root, but the service role itself still needs full permissions for the target platform. In my case the target was EC2, so the AmazonEC2FullAccess policy was missing and had to be attached to the AWSCodeDeployRole. Also add ec2.amazonaws.com to the role's JSON trust policy. This is what worked for me...2 months later!
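To make the fix concrete, here is a sketch of the two changes (the role name AWSCodeDeployRole follows the answer above; the file name trust-policy.json is just an example):

```shell
# Write a trust policy that lets both CodeDeploy and EC2 assume the role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": ["codedeploy.amazonaws.com", "ec2.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Apply the trust policy and attach AmazonEC2FullAccess
# (these two calls need AWS credentials, so they are shown commented out):
#   aws iam update-assume-role-policy --role-name AWSCodeDeployRole \
#       --policy-document file://trust-policy.json
#   aws iam attach-role-policy --role-name AWSCodeDeployRole \
#       --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
```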

Related

sam build botocore.exceptions.NoCredentialsError: Unable to locate credentials

I have been trying to deploy my machine learning model with SAM for a couple of days and I am getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine; the "aws s3 ls" command works for me. Any help will be useful, thanks in advance.
I've read through this issue, which seems to have been addressed in v1.53: SAM Accelerate issue
Reading it seemed to imply that it might be worth trying
sam deploy --guided --profile mark
--profile mark is the new part and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as thoroughly as intended, but at least --profile seems to solve it for me.
If you are using Linux, this error can be caused by a misalignment between a docker root installation and user-level AWS credentials.
Amazon documentation recommends adding credentials using the aws configure command without sudo. However, docker on Linux is installed and managed at root level. This ultimately forces the user to run the SAM CLI build and deploy commands with sudo, which leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage docker. If you use this method, you will not need to use sudo for your SAM CLI commands. This fix can be accomplished by using the following commands:
sudo groupadd docker
sudo usermod -aG docker $USER
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.
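If you go with the first option, a quick sanity check like the following sketch can confirm the group change has actually taken effect (membership is only picked up after you log out and back in, or run newgrp docker):

```shell
# Check whether the current user is in the "docker" group; until it is,
# SAM CLI commands that drive docker will still need sudo.
if id -nG | grep -qw docker; then
  echo "docker group OK - SAM CLI should work without sudo"
else
  echo "not in docker group yet - re-login, run 'newgrp docker', or fall back to 'sudo aws configure'"
fi
```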

Mount a volume in AWS copilot task run

I'm using copilot to execute containers that test our code.
Currently I can make the tests run with the following command:
copilot task run -n <app_name> --default \
--image <image_from_ecr> \
--command <test_file> --cpu 1024 --memory 2048
This creates the service based on the app image and executes the <test_file>. If I use the --follow flag I can see the execution, and all goes well.
What I need now, is to be able to read the test outputs that nightwatch writes inside the container.
I used to mount a volume when I was executing the tests with docker run.
But now I don't know how to mount a volume with the copilot task run command.
And if there is another way to get the files generated any help would be appreciated.
It is not possible to mount a volume to a task executed this way.
The solution for my case is:
Execute the task with an IAM role capable of writing to a S3 bucket.
Upload the output testing data to the bucket after the tests are finished. This is done from the task itself.
Download the tests outputs from the S3 bucket from the script that controls the test executions in the host.
This script runs the tasks using copilot, so it knows all the parameters and options and can download the appropriate folder from the bucket.
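The steps above can be sketched as a pair of shell helpers (the bucket name my-test-results and the container output directory /app/tests_output are assumptions; adjust them to your setup):

```shell
# Shared run identifier so the container and the host agree on the S3 prefix.
RUN_ID="${RUN_ID:-$(date +%Y%m%d-%H%M%S)}"

# Inside the container, after nightwatch finishes
# (the task's IAM role must allow s3:PutObject on the bucket):
upload_results() {
  aws s3 sync /app/tests_output "s3://my-test-results/${RUN_ID}/"
}

# On the host, in the script that invoked `copilot task run`:
download_results() {
  aws s3 sync "s3://my-test-results/${RUN_ID}/" "./tests_output/${RUN_ID}/"
}
```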

AWS CodePipeline + CodeDeploy to EC2 with docker-compose

Hi, I've been trying to get auto-deployment working on AWS. Whenever there's a merge or commit to the repo, CodePipeline detects it and has CodeDeploy update the tagged EC2 instance with the new changes. The app is a simple node.js app that I want to start with docker-compose. I have already installed docker and docker-compose on the EC2 instance and enabled the CodeDeploy agent.
I tested the whole process and it mostly works, except that CodeDeploy fails the deployment because it is unable to run the command docker-compose up -d in the ApplicationStart section of my appspec.yml. I get an error that docker-compose cannot be found, which is odd because the BeforeInstall script downloads and installs docker + docker-compose and sets all the permissions. Is there something I'm missing, or is this just not meant to happen with CodeDeploy and EC2?
I can confirm that when I SSH into the EC2 instance and run docker-compose up -d in the project root directory it works, but as soon as I try to run the docker-compose command from the script section of the appspec.yml it fails.
The project repo is here, just in case there's anything I missed: https://github.com/c3ho/simple_crud

Is it possible to execute deployment of AWS CDK asynchronously?

The command cdk deploy ... is used to deploy one or more CloudFormation stacks. When it executes, it displays messages resulting from the deployment of the various stacks and this can take some time.
The deploy command supports the --notification-arns parameter, which is an array of ARNs of SNS topics that CloudFormation will notify with stack related events.
Is it possible to execute cdk deploy and not have it report to the console its progress (i.e. the command exits immediately after uploading the new CloudFormation assets) and simply rely on the SNS topic as a means of getting feedback on the progress of a deployment?
A quick and dirty way (untested) would be to use nohup
$ nohup cdk ... --require-approval never > /dev/null 2>&1 &
The --require-approval never flag simply means it won't stop to ask for approval of security-sensitive changes, and nohup (together with the trailing &) lets the command keep running in the background without being terminated.
It's the only quick solution I can think of.
Another, longer-term solution would be to use the CdkToolkit to create your own deployment script. Take a look at the cdk command to get an idea. This has been something I've wanted from aws-cdk for a while - I want custom deploy scripts rather than using the shell.
I found a solution to asynchronously deploy a cdk app via the --no-execute flag:
cdk deploy StackX --no-execute
change_set_name=$(aws cloudformation list-change-sets --stack-name StackX --query "Summaries[0].ChangeSetId" --output text)
aws cloudformation execute-change-set --change-set-name "$change_set_name"
For my case this works because I use this method to deploy new stacks only, so there will only ever be exactly one change set for the stack and I can retrieve it with the query for the entry at index 0. If you wish to update existing stacks, you will have to select the correct change set from the list returned by the list-change-sets command.
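For the update case, here is a hedged sketch that selects the newest change set by CreationTime instead of assuming index 0 (the function name is made up for illustration; StackX-style stack names are passed as an argument):

```shell
# Deploy asynchronously: create the change set, then execute the newest one.
deploy_latest_change_set() {
  stack="$1"
  cdk deploy "$stack" --no-execute --require-approval never
  change_set_name=$(aws cloudformation list-change-sets \
      --stack-name "$stack" \
      --query "sort_by(Summaries, &CreationTime)[-1].ChangeSetId" \
      --output text)
  aws cloudformation execute-change-set --change-set-name "$change_set_name"
}
```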
I had a similar issue - I didn't want to keep a console open while waiting for a long-running init script on an EC2 instance to finish. I hit Ctrl-C after publishing completed and the change set had been created; the deployment kept running and I checked its status in the CloudFormation console. Not perfect and not automation-friendly, but I didn't need to keep the process running on my machine.

AWS Code Deploy Deployment Failed

I have been using AWS CodeDeploy for the past 3 months. Everything went fine. Then suddenly, when I tried to deploy code to the EC2 servers today, I got this strange error (after it had been trying to deploy for more than 20 minutes):
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
I don't understand what happened. I haven't changed anything in AWS at all; I just tried to deploy code as I always do. What could be the reason?
Before starting the deployment, make sure to check the deployment group to see if there is any healthy instance listed.
Two potential reasons are possible here:
You might have missed installing the CodeDeploy agent on your EC2 instance(s), in which case the below set of commands will install it:
sudo yum update
sudo yum install ruby aws-cli
cd /home/ec2-user
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
sudo ./install auto
Note that the exact commands depend on the platform you are using: if you are using Amazon's Linux AMI you are good to go, but for other platforms they may vary.
There might be an error in your appspec.yml; if that is the case, you will be able to see in which lifecycle event the error occurred. To identify it, go to Deployments => select one of the failed deployments => go to Events => here you will see the error => clicking on it will display the reason.
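The same console walk-through can also be done from the CLI; a rough sketch, assuming your application and deployment group names are passed in (the function name is made up):

```shell
# Fetch the most recent failed deployment and print its error information.
show_last_failure() {
  app="$1"; group="$2"
  deployment_id=$(aws deploy list-deployments \
      --application-name "$app" --deployment-group-name "$group" \
      --include-only-statuses Failed \
      --query "deployments[0]" --output text)
  aws deploy get-deployment --deployment-id "$deployment_id" \
      --query "deploymentInfo.errorInformation"
}
```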
If you want to understand in detail how it works, kindly go through my blog here
Please let me know if it doesn't fix your problem.