Elastic Beanstalk Docker deploy fails with "no space left on device" - amazon-web-services

I am following a tutorial to deploy a Flask application with Docker to AWS Elastic Beanstalk (EB). I created an AWS Elastic Container Registry (ECR) and ran some commands which successfully pushed the Docker image to the ECR:
docker build -t app-backend .
docker tag app-backend:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
Then I tried to deploy to EB:
eb init (selecting a Docker EB application I created on the AWS GUI)
eb deploy
On "eb init" I get the error "Cannot setup CodeCommit because there is no Source Control setup, continuing with initialization", but I assume this can be ignored as it otherwise looked fine. On "eb deploy" though, the deployment fails. In "eb-engine.log" (found in the AWS GUI), I see error messages like:
[ERROR] An error occurred during execution of command [app-deploy] - [Docker Specific Build Application]. Stop running the command. Error: failed to pull docker image: Command /bin/sh -c docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest failed with error exit status 1. Stderr:failed to register layer: Error processing tar file(exit status 1): write /root/.cache/pip/http/5/e/7/3/b/[long number]: no space left on device
When I manually run the pull command referenced in the error (locally, not on the EB instance), it responds as expected:
docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
What could be causing this deployment failure?
My Dockerrun.aws.json file looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 5000
    }
  ]
}

I solved this by following "how to prevent error "no space left on device" when deploying multi container docker application on AWS beanstalk?".
Basically, you find your Elastic Beanstalk instance in the EC2 console and modify its volume to add space. Then, following the link in that Stack Overflow post, you grow the partition: SSH into the instance with eb ssh and use commands like df -H and lsblk to see how much space is in each partition. Then run commands like:
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /
to grow the partition and filesystem so they use all the new space you added in the EC2 console. You can check with df -H and lsblk to confirm the resize gave you more space.
Then the eb deploy command should work. If SSH isn't set up yet, you may have to run eb ssh --setup first.
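For reference, here is a condensed sketch of the whole resize flow; the device and partition names (/dev/xvda, partition 1) are assumptions, so check lsblk on your own instance first:
eb ssh --setup              # only needed once, if SSH access is not configured yet
eb ssh                      # connect to the Beanstalk EC2 instance
df -H && lsblk              # confirm which partition is full (often /dev/xvda1)
sudo growpart /dev/xvda 1   # grow partition 1 to fill the enlarged EBS volume
sudo xfs_growfs -d /        # grow the XFS filesystem (use resize2fs for ext4 volumes)
df -H                       # verify the extra space is now available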

Related

How can I install aws cli, from WITHIN the ECS task?

Question:
How can I install the AWS CLI from WITHIN the ECS task?
DESCRIPTION:
I'm using a Docker container to run the Logstash application (it is part of the Elastic family).
The Docker image name is "docker.elastic.co/logstash/logstash:7.10.2".
This Logstash application needs to write to S3, so it needs the AWS CLI installed.
If aws is not installed, it crashes.
# STEP 1 #
To avoid the crash, when I used this application only as a Docker container, I delayed the Logstash start until after the container had started.
I did this by adding a "sleep" command to an external docker-entrypoint file, before it starts Logstash.
This is how it looks in the docker-entrypoint file:
sleep 120
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
  exec logstash "$@"
else
  exec "$@"
fi
# EOF
# STEP 2 #
Run the container with the "--entrypoint" flag so it uses my entrypoint file:
docker run \
-d \
--name my_logstash \
-v /home/centos/DevOps/psifas_logstash_docker-entrypoint:/usr/local/bin/psifas_logstash_docker-entrypoint \
-v /home/centos/DevOps/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /home/centos/DevOps/logstash.yml:/usr/share/logstash/config/logstash.yml \
--entrypoint /usr/local/bin/psifas_logstash_docker-entrypoint \
docker.elastic.co/logstash/logstash:7.10.2
# STEP 3 #
Install and configure the AWS CLI from the server hosting the container:
docker exec -it -u root <DOCKER_CONTAINER_ID> yum install awscli -y
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_access_key_id <MY_aws_access_key_id>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set aws_secret_access_key <MY_aws_secret_access_key>
docker exec -it <DOCKER_CONTAINER_ID> aws configure set region <MY_region>
This worked for me.
Now I want to "translate" this flow into an AWS ECS task.
In ECS I will use parameters instead of running the above 3 "aws configure" commands.
MY QUESTION
How can I do my 3rd step, installing the AWS CLI, from WITHIN the ECS task? (Meaning not running it on the EC2 server hosting the ECS cluster.)
When I was working with plain Docker I also considered these options for using the AWS CLI:
find an official Elastic Docker image containing both Logstash and the AWS CLI. <-- I did not find one.
create such an image myself and use it. <-- I prefer not to, because I want to avoid the maintenance of building new custom images whenever needed (e.g. when a new version of the Logstash image is available).
Eventually I chose the 3 steps above, but I'm open to suggestions.
Also, my tests showed that running 2 containers within the same ECS task:
logstash
awscli (image "amazon/aws-cli")
with the Logstash container using the AWS CLI container, does not work.
THANKS A LOT IN ADVANCE :-)
Your option #2, creating the image yourself, is really the best way to do this. Anything else is going to be a "hack". Also, you shouldn't be running aws configure for an image running in ECS; you should be assigning an IAM role to the task, and the AWS CLI will pick that up and use it.
Mark B, your answer helped me solve this. Thanks!
Writing the solution here in case it helps somebody else.
There is no need to install the AWS CLI in the Logstash container running inside the ECS task.
The Logstash container (from image "docker.elastic.co/logstash/logstash:7.10.2") already contains an AWS SDK for connecting to S3.
The only thing required is to allow the ECS task execution role access to S3.
(I attached the AmazonS3FullAccess policy.)
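If you manage the role from the CLI, a minimal sketch looks like this (the role name is a hypothetical placeholder; use the task or execution role attached to your ECS task):
# Attach the managed S3 policy to the role the ECS task uses (hypothetical role name)
aws iam attach-role-policy \
  --role-name my-logstash-task-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess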

How to run a simple Docker container when an EC2 is launched in an AWS auto-scaling group?

$ terraform version
Terraform v0.14.4
I'm using Terraform to create an AWS autoscaling group, and it successfully launches an EC2 via a launch template, also created by the same Terraform plan. I added the following user_data definition in the launch template. The AMI I'm using already has Docker configured, and has the Docker image that I need.
user_data = filebase64("${path.module}/docker_run.sh")
and the docker_run.sh file simply contains
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
However, when I ssh to the EC2 instance, the container is NOT running. What am I missing?
Update:
Per Marcin's comment, I see the following in /var/log/cloud-init-output.log
Jan 11 22:11:45 cloud-init[3871]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'docker run -p 80:3000 -d...'
From the AWS docs and what you've posted, the likely reason is that your docker_run.sh is missing the #!/bin/bash shebang:
User data shell scripts must start with the #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Thus your docker_run.sh should be:
#!/bin/bash
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
If this still fails, please check /var/log/cloud-init-output.log on the instance for errors.
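If it still fails after that, a few commands worth running on the instance (a generic debugging sketch, not specific to this setup):
sudo less /var/log/cloud-init-output.log   # errors from cloud-init / user data execution
docker ps -a                               # did the container start and then exit?
docker logs <container-id>                 # application-level errors from the container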

AWS CodePipeline + CodeDeploy to EC2 with docker-compose

Hi, I've been trying to get auto-deployment working on AWS. Whenever there's a merge or commit to the repo, CodePipeline detects it and has CodeDeploy update the tagged EC2 instance with the new changes. The app is a simple Node.js app which I want to start with docker-compose. I have already installed docker-compose and Docker on the EC2 instance and enabled the CodeDeploy agent.
I tested the whole process and it is mostly working, except for the part where CodeDeploy fails the deployment because it is unable to run the command docker-compose up -d in the ApplicationStart portion of my appspec.yml. I get an error that docker-compose cannot be found, which is kind of weird because in the BeforeInstall script I download and install Docker + docker-compose and set all the permissions. Is there something I'm missing, or is it just not meant to happen with CodeDeploy and EC2?
I can confirm that when I SSH into the EC2 instance and run docker-compose up -d in the project root directory it works, but as soon as I try to run the docker-compose command in the script portion of the appspec.yml it fails.
The project repo is here, just in case there's anything I missed: https://github.com/c3ho/simple_crud
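A common cause of this symptom is that CodeDeploy runs lifecycle hooks as root with a minimal PATH that often does not include /usr/local/bin, which is where docker-compose is typically installed. A sketch of an ApplicationStart hook that works around that; the script name and paths are assumptions for illustration:
#!/bin/bash
# Hypothetical scripts/start_app.sh referenced from the ApplicationStart hook in appspec.yml
set -e
export PATH=$PATH:/usr/local/bin   # CodeDeploy hooks run with a minimal PATH
cd /home/ec2-user/simple_crud      # wherever the appspec.yml files section copies the app
docker-compose up -d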

Showing docker container log in aws ECS cluster

I have trouble showing logs from my Docker container in ECS.
What I did:
1) ssh into an ec2 instance of the cluster.
2) docker logs my service
Then this message is showing up:
FATA[0000] Error executing 'logs': Failed to get log configuration: Container 'my-container': Must specify log driver as awslogs
What I am trying to do is show the logs in the console.
What I don't understand is that for some containers the docker logs command works fine.
Open the Docker daemon config file at /etc/docker/daemon.json and add the log driver:
{
"log-driver": "awslogs"
}
and restart docker with sudo systemctl restart docker
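To confirm the daemon picked up the change after the restart, something like this should print the new default driver:
docker info --format '{{.LoggingDriver}}'   # expected output: awslogs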
You should set the Docker log driver to json-file when creating the task definition revision in ECS if you want to see the logs with the docker logs <container-id> command. You can get the container ID from the docker ps command.
But if you want to push container logs to CloudWatch Logs, then you have to choose awslogs as the log driver.
For some containers it works fine because they have the json-file log driver set in their task definition.
How to create a task definition?
Reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html
Do let me know if you are still facing the issue.
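To check which log driver each container in a task definition is using, a CLI call like this works (the task definition name is a hypothetical placeholder):
aws ecs describe-task-definition \
  --task-definition my-task-def \
  --query 'taskDefinition.containerDefinitions[].logConfiguration'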

AWS code deploy + bitbucket = failed (Error code HEALTH_CONSTRAINTS)

I've set up everything according to this article:
https://aws.amazon.com/tw/blogs/apn/announcing-atlassian-bitbucket-support-for-aws-codedeploy/
Here is my env:
Instance (free tier with amazon linux)
- apache 2.4 installed
Security group
- only ports 22 (restricted to my IP) and 80 are open
Iptables stopped
2 roles are set
- one for linking S3 <-> bitbucket
(attached custom policy)
- one role is for deployment group
(attached AWSCodeDeployRole policy)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "codedeploy.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The sample app I tried to deploy is
https://s3.amazonaws.com/aws-codedeploy-us-east-1/samples/latest/SampleApp_Linux.zip
Permission
/var/www/* is owned by ec2-user with 755 permission
Agent
service codedeploy-agent status =
The AWS CodeDeploy agent is running as PID 7200
Clues:
A zip file is uploaded to my S3 bucket for every deploy.
Error code: HEALTH_CONSTRAINTS
Does anyone have an idea what causes the deployment to fail?
Update 1: After I re-launched the instance with the IAM profile, the application can be deployed. But it still fails; when I click View events, there is a log as follows:
Error Code: ScriptFailed
Script Name: scripts/install_dependencies
Message: Script at specified location: scripts/install_dependencies run as user root failed with exit code 1
Log Tail: LifecycleEvent - BeforeInstall
Script - scripts/install_dependencies
[stdout]Loaded plugins: priorities, update-motd, upgrade-helper
[stdout]Resolving Dependencies
[stdout]--> Running transaction check
[stdout]---> Package httpd.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Dependency: httpd-tools = 2.2.31-1.8.amzn1 for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Processing Dependency: apr-util-ldap for package: httpd-2.2.31-1.8.amzn1.x86_64
[stdout]--> Running transaction check
[stdout]---> Package apr-util-ldap.x86_64 0:1.4.1-4.17.amzn1 will be installed
[stdout]---> Package httpd-tools.x86_64 0:2.2.31-1.8.amzn1 will be installed
[stdout]--> Processing Conflict: httpd24-2.4.23-1.66.amzn1.x86_64 conflicts httpd < 2.4.23
[stdout]--> Processing Conflict: httpd24-tools-2.4.23-1.66.amzn1.x86_64 conflicts httpd-tools < 2.4.23
[stdout]--> Finished Dependency Resolution
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
[stdout] You could try running: rpm -Va --nofiles --nodigest
Does anyone know what the problem is?
The error code HEALTH_CONSTRAINTS means that more instances failed than the deployment configuration allows.
For more information about why the deployment failed, open the deployment console at https://region.console.aws.amazon.com/codedeploy/home?region=region#/deployments and click the failed deployment ID. That takes you to the deployment details page, which lists every instance included in the deployment along with its lifecycle events. Click View Events, and if there is a View Logs link you can see why the deployment failed on that instance.
If the console doesn't have enough information for what you need, the log on the instance can be viewed with less /var/log/aws/codedeploy-agent/codedeploy-agent.log. It contains the logs for the most recent deployments.
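For a quick look from the CLI instead of the console, these calls also show the failure details (the deployment ID is a hypothetical placeholder):
aws deploy get-deployment --deployment-id d-EXAMPLE123
aws deploy list-deployment-instances --deployment-id d-EXAMPLE123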
This happens because CodeDeploy checks the health of the EC2 instances by hitting them. Before deployment, run the bash script below on the instances and check that it worked. The httpd service must be started. Then reboot the instance.
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
yum install -y ruby
yum install -y aws-cli
cd ~
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
./install auto
echo 'hello world' > /var/www/html/index.html
hostname >> /var/www/html/index.html
chkconfig httpd on
service httpd start
It depends on your deployment configuration, but basically the deployment failed on one or more instances.
HEALTH_CONSTRAINTS: The deployment failed on too many instances to be
successfully deployed within the instance health constraints specified
http://docs.aws.amazon.com/codedeploy/latest/APIReference/API_ErrorInformation.html
Check your deployment configuration settings. The overall failure/success of the deployment is based on these settings. Try the CodeDeployDefault.AllAtOnce, and dial in as needed.
Also, double check AWS CodeDeploy Instance Health settings, especially minimum-healthy-hosts
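If you want to change the deployment configuration from the CLI, a sketch like this works (application and group names are hypothetical placeholders):
aws deploy update-deployment-group \
  --application-name MyApp \
  --current-deployment-group-name MyDeploymentGroup \
  --deployment-config-name CodeDeployDefault.AllAtOnce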
It seems there is a conflict between one of the dependencies your appspec.yml scripts install and the httpd24 packages already installed on the instance.
[stderr]Error: httpd24-tools conflicts with httpd-tools-2.2.31-1.8.amzn1.x86_64
[stderr]Error: httpd24 conflicts with httpd-2.2.31-1.8.amzn1.x86_64
[stdout] You could try using --skip-broken to work around the problem
So try to solve the dependency installation problem. You can install the dependencies manually on your EC2 instance to find a solution for the conflict, and once you've solved it, bring the fix into your appspec.yml scripts so the dependencies are installed via CodeDeploy.
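A hedged sketch of what scripts/install_dependencies could look like on Amazon Linux 1, where the preinstalled httpd24 packages conflict with the sample's plain httpd package (the package names are inferred from the log above, not taken from your repo):
#!/bin/bash
set -e
# Install the Apache 2.4 package that matches what is already on the instance,
# instead of the httpd 2.2 package the sample script tries to install.
yum install -y httpd24
# Alternatively: yum remove -y httpd24 httpd24-tools && yum install -y httpd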