I have a single Docker container that I need to deploy to AWS using AWS ECR with Elastic Beanstalk. I'm using a Dockerrun.aws.json file to provide the repository details. I have pushed my image to both Docker Hub and the Elastic Container Registry.
When the image comes from Docker Hub, ECS pulls it and starts the container without any issues, and the app works as expected. However, when the image is pulled from the AWS ECR repository for the same application, the container gets stopped and the deployment fails with the reason: Essential container in task exited
Dockerrun.aws.json
{
  "containerDefinitions": [
    {
      "essential": true,
      "image": "01234567891.dkr.ecr.us-east-1.amazonaws.com/app:1",
      "memory": 512,
      "name": "web",
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 80
        }
      ]
    }
  ],
  "family": "",
  "volumes": [],
  "AWSEBDockerrunVersion": "2"
}
I logged into the instance and tried to get the logs of the container, but I got this error:
standard_init_linux.go:211: exec user process caused "exec format error"
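(For context, the kind of inspection that surfaces this error on the instance, with the container ID as a placeholder, looks like this:)
docker ps -a                                                 # list all containers, including the stopped one
docker logs <container-id>                                   # the exec format error shows up here
docker inspect --format '{{json .State}}' <container-id>     # exit code and state details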
Dockerfile
FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
It seems like there is a dependent Docker container in the task definition or the docker-compose file.
This error occurs when you have a container B that depends on a container A, and A is essential for the service; when A exits, container B will automatically exit as well.
You need to debug why A is exiting.
Essential container in task exited
If a container marked as essential in the task definition exits or dies, that can cause the task to stop. When an essential container exiting is the cause of a stopped task, Step 6 can provide more diagnostic information as to why the container stopped.
stopped-task-errors
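Since the version 2 Dockerrun runs on an ECS cluster under the hood, one hedged way to see the recorded stop reason (cluster name and task ID are placeholders) is:
aws ecs describe-tasks --cluster <cluster-name> --tasks <task-id> --query 'tasks[0].stoppedReason'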
The problem was in the AWS CodeBuild project. I had mistakenly selected the wrong architecture for the build, so the Docker image was built for one architecture and then run on a different one during deployment. After changing the build to the same architecture used for deployment, both the Docker Hub image and the ECR image work fine.
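As a hedged illustration of how to catch this kind of mismatch, the image's architecture can be compared against the instance's, and the build can be pinned to the target platform (assumes a Docker version with buildx; the platform value below is an example):
docker image inspect 01234567891.dkr.ecr.us-east-1.amazonaws.com/app:1 --format '{{.Os}}/{{.Architecture}}'   # architecture the image was built for
uname -m                                                                                                      # architecture of the machine that will run it
docker buildx build --platform linux/amd64 -t 01234567891.dkr.ecr.us-east-1.amazonaws.com/app:1 .             # rebuild for the deployment platform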
Related
I am following a tutorial to deploy a Flask application with Docker to AWS Elastic Beanstalk (EB). I created an AWS Elastic Container Registry (ECR) and ran some commands which successfully pushed the Docker image to the ECR:
docker build -t app-backend .
docker tag app-backend:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
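(For completeness: the push above assumes the Docker client is already logged in to ECR; with a recent AWS CLI, the usual authentication step, region as an example, looks like this.)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin [URL_ID].dkr.ecr.us-east-1.amazonaws.com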
Then I tried to deploy to EB:
eb init (selecting a Docker EB application I created on the AWS GUI)
eb deploy
On "eb init" I get the error "Cannot setup CodeCommit because there is no Source Control setup, continuing with initialization", but I assume this can be ignored as it otherwise looked fine. On "eb deploy" though, the deployment fails. In "eb-engine.log" (found in the AWS GUI), I see error messages like:
[ERROR] An error occurred during execution of command [app-deploy] - [Docker Specific Build Application]. Stop running the command. Error: failed to pull docker image: Command /bin/sh -c docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest failed with error exit status 1. Stderr:failed to register layer: Error processing tar file(exit status 1): write /root/.cache/pip/http/5/e/7/3/b/[long number]: no space left on device
When I manually run the pull command the error references (locally, not from the EB instance), the command seems to respond as expected:
docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
What could be causing this deployment failure?
My Dockerrun.aws.json file looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 5000
    }
  ]
}
I solved this by following "How to prevent error 'no space left on device' when deploying multi container docker application on AWS beanstalk?".
Basically, you find your Elastic Beanstalk instance in the EC2 console and modify its volumes to add space to the instance. Then you follow the link in that Stack Overflow post to repartition the EB instance: SSH into it with eb ssh, use commands like df -H and lsblk to see how much space is in each partition, and then use commands like:
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /
to repartition the drive so that it uses all the new space you added in the EC2 console. You can check with df -H and lsblk to confirm the repartitioning gave you more space.
Then the eb deploy command should work. If SSH isn't set up yet, you may have to run eb ssh --setup first.
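Putting the steps together, a rough consolidated sketch (the device name /dev/xvda and partition number 1 are assumptions; check the lsblk output on your own instance first):
eb ssh                      # then, on the instance:
df -H                       # confirm which filesystem is full
lsblk                       # find the root device and its partition
sudo growpart /dev/xvda 1   # grow the partition into the newly added space
sudo xfs_growfs -d /        # grow the XFS filesystem to fill the partition
df -H                       # verify the extra space is now available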
I've deployed an Aurelia application to AWS Elastic Beanstalk via AWS ECR and have run into some difficulty. The docker container, when run locally, works perfectly (see below for Dockerfile).
FROM nginx:1.15.8-alpine
COPY dist /usr/share/nginx/html
The deployment works quite well, however when I navigate to the AWS provided endpoint http://docker-tester.***.elasticbeanstalk.com/ I get 502 Bad Gateway
nginx/1.12.1.
I can't figure out what might be the issue. The docker container in question is a simple Hello World example created via the au new command; it's nothing fancy at all.
Below is my Dockerrun.aws.json file
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.dkr.ecr.eu-central-1.amazonaws.com/tester:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
My Elastic Beanstalk configuration is rather small with an EC2 instance type of t2.micro. I'm using the free tier as an opportunity to learn.
I greatly appreciate any help, or links to some reading that may point in the right direction.
It has nothing to do with your Aurelia application. You are missing the EXPOSE statement (which is mandatory) in your Dockerfile. You can change it like this:
FROM nginx:1.15.8-alpine
EXPOSE 80
COPY dist /usr/share/nginx/html
If you try to run it without EXPOSE, you will get an error
ERROR: ValidationError - The Dockerfile must list ports to expose on the Docker container. Specify at least one port, and then try again.
You should test your application before pushing it to Elastic Beanstalk.
Install the EB CLI (assuming you have pip; if not, you need to install it as well):
pip install awsebcli --upgrade --user
then initialize the local repository for deployment:
eb init -p docker <application-name>
and you can test it:
eb local run --port <port-number>
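Alternatively, a quick sanity check with plain Docker before pushing to ECR (the tag, container name, and host port are placeholders):
docker build -t tester .
docker run --rm -d -p 8080:80 --name tester-local tester
curl http://localhost:8080      # should return the Aurelia index page
docker stop tester-local        # --rm removes the container once it stops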
I'm attempting to deploy a docker image from AWS ECR to Elastic Beanstalk. I've set up all required permissions for Elastic Beanstalk to both S3 and ECR. Communication between these services seems fine, however I get the following errors when attempting to fire up an Elastic Beanstalk environment:
No Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment.
[Instance: i-01cf0bac1863e4eda] Command failed on instance. Return code: 1 Output: No Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I'm uploading a single Dockerrun.aws.json which points to the image on ECR. Below is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "1",
  "containerDefinitions": {
    "Name": "***.eu-central-1.amazonaws.com/***:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
The docker image does exist on ECR at the location specified in the containerDefinitions Name field.
Am I missing something here?
It turns out containerDefinitions is not applicable in this situation. I'm not sure where I found it (maybe from a Dockerrun sample somewhere). The actual property name is Image, as below:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.eu-central-1.amazonaws.com/***:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
You are not missing anything. Had the same problem. It was because of Dockerfile encoding. Use UTF-8 instead of UTF-8-BOM. More details here:
https://github.com/verygood-ops/eb_docker/blob/master/elasticbeanstalk/hooks/appdeploy/pre/03build.sh#L58
FROM_IMAGE=`cat Dockerfile | grep -i ^FROM | head -n 1 | awk '{ print $2 }' | sed $'s/\r//'`
...
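A hedged way to check for and strip the BOM locally (GNU file and sed assumed):
file Dockerfile                          # reports "UTF-8 Unicode (with BOM) text" if a BOM is present
sed -i '1s/^\xEF\xBB\xBF//' Dockerfile   # remove the three BOM bytes from the start of the file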
I encountered this error when trying to use the AWSEBDockerrunVersion 1 schema on an environment running "Docker running on 64bit Amazon Linux 2" as the platform. The error message gives nothing away.
Creating a new environment as "Docker running on 64bit Amazon Linux" and redeploying my original Dockerrun.aws.json solved the issue for me. You could also migrate your Dockerrun.aws.json to the version 2 schema.
So I am using Travis CI to automatically deploy my application to an AWS Elastic Beanstalk environment. I have an issue where I need to update the nginx.conf file that is located on the host machine.
I'm running a single-container Docker image inside that host machine.
How can I copy or link the nginx.conf file from the Docker container to the host machine's nginx.conf file?
Currently my Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some:image:url:here",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8001"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/etc/nginx/nginx.conf",
      "ContainerDirectory": "/home/node/app/nginx.conf"
    }
  ]
}
When I tried to use Dockerrun version 2, the build gave me an error that the version is wrong.
How can I link those two files in a single-container Docker application?
The "Volumes" key is used to map full volumes, not individual files. See Dockerrun.aws.json file specifications for an explanation.
I know of 2 ways you can solve this problem: 1) Custom AMI or 2) use a Dockerfile with your Dockerrun.aws.json.
1. Build a Custom AMI
The idea behind building a custom AMI is to launch an instance from one of Amazon's existing AMIs and make the changes you need to it (in your case, changing nginx.conf). Finally, you create a new AMI from this instance, and it will be available to you when you create your environment in Elastic Beanstalk. Here are the detailed steps to create your own AMI and use it with Elastic Beanstalk.
2. Use a Dockerfile with your Dockerrun.aws.json
If you don't build your own AMI, you can copy your conf file with the help of a Dockerfile. A Dockerfile is a text file that provides the commands Elastic Beanstalk runs to build your custom image. The Dockerfile reference details the commands that can be added to a Dockerfile to build your image. You are going to need to use the COPY command, or, if the file is simple, you can use RUN and echo to build it like in the example here.
Once you create your Dockerfile, you will need to put the Dockerfile and your Dockerrun.aws.json into a directory and create a zip file with both. Provide this to Elastic Beanstalk as your source bundle. Follow this guide to build the source bundle correctly.
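As a rough sketch of that bundle (the archive name and the extra nginx.conf entry are assumptions for this example), the layout and zip step could look like this:
ls
# Dockerfile  Dockerrun.aws.json  nginx.conf
zip app-bundle.zip Dockerfile Dockerrun.aws.json nginx.conf   # files at the top level of the archive, no wrapping folder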
I have a Node.js application packaged in a Docker image hosted in a public repository.
I have deployed that image in an AWS Beanstalk Docker application successfully.
The problem is that I was expecting the Beanstalk application to be updated automatically when I update the image in the public repository, as the following configuration suggests.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "peveuve/dynamio-payment-service",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/log/dynamio"
}
The Dockerfile is very simple:
FROM node:4.2.1-onbuild
# Environment variables
ENV NODE_ENV test
ENV PORT 8000
# expose application port outside
EXPOSE $PORT
The Amazon documentation is pretty clear on that:
Optionally include the Update key. The default value is "true" and
instructs Elastic Beanstalk to check the repository, pull any updates
to the image, and overwrite any cached images.
But I have to update the Beanstalk application manually by uploading a new version of the Dockerrun.aws.json descriptor. Did I miss something? Is it supposed to work like that?
You can use the aws command-line tool to trigger the update:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
You specify the version that contains the Dockerrun.aws.json file, so that a new version won't be added to the application. In this case the Dockerrun file works as the "source" for the application, but it only tells AWS to pull the Docker image, so it would be redundant to create new versions for the application in Elastic Beanstalk (unless you use specifically tagged Docker images in the Dockerrun file).
Links:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_UpdateEnvironment.html
The documentation could be clearer. What they are saying is that with Update=true:
Elastic Beanstalk will do a docker pull before it does a docker run when the application is first started. It will not continually poll Docker Hub.
In contrast, issuing a docker run without first doing a docker pull will always use the locally stored version of the image on that machine, which may not always be the latest.
To achieve what you want, you'll need to set up a webhook on Docker Hub that calls an application you control, which then redeploys your Elastic Beanstalk app.
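For illustration only (the names are placeholders, and this simply reuses the update-environment call from the earlier answer), the application behind that webhook would essentially trigger a redeploy of the existing version so Elastic Beanstalk pulls the image again:
aws elasticbeanstalk update-environment \
    --application-name my-app \
    --environment-name my-env \
    --version-label current-dockerrun-version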