I'm currently trying to figure out how to run a container on Elastic Beanstalk in privileged mode. I've read the documentation, but I can't find a way to do it.
I'm assuming you're launching to Docker running in ECS.
ECS uses task definitions to define how a Docker container should start up. Specifically, the task definition property privileged is what you're looking for.
Elastic Beanstalk uses the Dockerrun.aws.json file to generate a task definition. According to the documentation for v2 of the file, you can add this flag to any of the objects in the containerDefinitions block.
So, something like this should work:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "some:app",
      "essential": true,
      "memory": 128,
      "privileged": true
    }
  ]
}
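Under the hood this maps to Docker's --privileged flag, so you can sanity-check that the image behaves as expected with elevated privileges before deploying (a quick sketch, reusing the image name from the example above):
# Run the container with elevated privileges, mirroring what ECS
# does when "privileged": true is set in the container definition
docker run --privileged some:app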
I have a single Docker container that I have to deploy to AWS using ECR with Elastic Beanstalk. I'm using a Dockerrun.aws.json file to provide the repository details. I have pushed my image to both Docker Hub and Elastic Container Registry.
Using Docker Hub in ECS, it pulls the Docker image from Docker Hub and starts the container without any issues, and the app works as expected. On the other hand, the container gets stopped when the image is pulled from the AWS ECR repository for the same application. The deployment fails with the reason: Essential container in task exited
Dockerrun.aws.json
{
  "containerDefinitions": [
    {
      "essential": true,
      "image": "01234567891.dkr.ecr.us-east-1.amazonaws.com/app:1",
      "memory": 512,
      "name": "web",
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 80
        }
      ]
    }
  ],
  "family": "",
  "volumes": [],
  "AWSEBDockerrunVersion": "2"
}
I logged into the instance and tried to get the logs of the containers, but I got this error:
standard_init_linux.go:211: exec user process caused "exec format error"
Dockerfile
FROM python:3.4-alpine
# Copy the application code into the image
ADD . /code
WORKDIR /code
# Install Python dependencies
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
It seems like there is a dependency between Docker containers in the task definition or the docker-compose file.
This error occurs when you have a container B that depends on a container A which is marked essential for the service; when A exits, container B will automatically exit as well.
You need to debug why A is exiting.
Essential container in task exited
If a container marked as essential in a task definition exits or dies, that can cause the task to stop. When an essential container exiting is the cause of a stopped task, Step 6 can provide more diagnostic information as to why the container stopped.
stopped-task-errors
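To actually get at that diagnostic information, you can inspect the stopped task's stoppedReason and per-container exit codes from the CLI (a sketch; the cluster name and task ARN are placeholders):
# Stopped tasks stay visible for a while; ask ECS why one stopped
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks <task-arn> \
  --query 'tasks[0].{reason: stoppedReason, containers: containers[].{name: name, exitCode: exitCode, reason: reason}}'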
The problem was in my AWS CodeBuild project: I had mistakenly selected the wrong architecture for the build. The Docker image was built on one architecture and then run on a different architecture at deployment. After changing the build to the same architecture used for deployment, both the Docker Hub image and the ECR image work fine.
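For anyone hitting the same "exec format error", the mismatch can be confirmed and avoided from the shell (a sketch; the image name is a placeholder):
# Check which architecture an image was actually built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' myrepo/app:latest

# Build explicitly for the architecture the ECS instances run on
# (requires BuildKit; linux/amd64 here is an assumption)
docker build --platform linux/amd64 -t myrepo/app:latest .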
I am using AWS Elastic Beanstalk to run a multi-container Docker deployment, and have run into issues getting my private Docker repository to work.
I have created a "dockercfg.json" file to hold my auth, thus:
{"https://index.docker.io/v1/":{"auth":"59...22","email":"ra...#...com"}}
and uploaded it to an S3 bucket in the same region as my EB instance, and created a Dockerrun.aws.json file thus:
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "hayl-docker",
    "key": "dockercfg.json"
  },
  "containerDefinitions": [
    {
      "name": "hayl",
      "image": "raddishiow/hayl-docker:uwsgi",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 443,
          "containerPort": 443
        }
      ]
    }
  ]
}
but I keep getting errors like this:
STOPPED, Reason CannotPullContainerError: Error response from daemon: pull access denied for raddishiow/hayl-docker, repository does not exist or may require 'docker login'
I've verified that AWS is able to access the "dockercfg.json" file. I'm not sure it's actually using the credentials, though...
I have changed the docker repository to public briefly and it pulls successfully, but that's not an option really as the image contains sensitive code that I don't want in the public domain.
The auth token I'm using was created through the Docker website, as my local Docker config file doesn't store my login details...
I've tried manually base64-encoding my password the way Docker would store it in the config file, but this doesn't work either.
Any help would be greatly appreciated, as I've been tearing my hair out for days over this now.
Turns out the "auth" token must be generated from your username and password in the format "username:password", encoded into base64 (not just the password on its own).
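A quick way to generate that value for dockercfg.json (a sketch; the credentials are placeholders):
# Base64-encode "username:password"; -n keeps the newline out of the token
echo -n 'raddishiow:my-secret-password' | base64
# Paste the output into the "auth" field:
# {"https://index.docker.io/v1/":{"auth":"<output>","email":"you@example.com"}}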
I'm having problems using the latest tag in an ECS task definition, where the image parameter has a value like XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest.
I'm expecting this task definition to pull an image with latest tag from ECR once a new service instance (task) is run on the container instance (an EC2 instance registered to the cluster).
However in my case when I connect to the container instance remotely and list docker images, I can see that it has not pulled the latest release image from ECR.
The latest tag there is two release versions behind the current one, ever since I updated the task definition to use the latest tag instead of explicitly defining the version tag, i.e. :v1.05.
I have just one container instance on this cluster.
It's possible there is some quirk in my process, but this question is mainly about how the latest tag should behave in this kind of scenario?
My docker image build and tagging, ECR push, ECS task definition update, and ECS service update process:
# Build the image with multiple tags
docker build -t reponame/web:latest -t reponame/web:v1.05 .
# Tag the image with the ECR repo URI
docker tag ${imageId} XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web
# Push both tags separately
docker push XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:v1.05
docker push XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest
# Run only if the definition file's contents has been updated
aws ecs register-task-definition --cli-input-json file://web-task-definition.json
# Update the service with force-new-deployment
aws ecs update-service \
--cluster my-cluster-name \
--service web \
--task-definition web \
--force-new-deployment
With a task definition file:
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest",
      "essential": true,
      "memory": 768,
      "memoryReservation": 512,
      "cpu": 768,
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 80
        }
      ],
      "entryPoint": [
        "yarn", "start"
      ],
      "environment": [
        { "name": "HOST", "value": "0.0.0.0" },
        { "name": "NUXT_HOST", "value": "0.0.0.0" },
        { "name": "NUXT_PORT", "value": "5000" },
        { "name": "NODE_ENV", "value": "production" },
        { "name": "API_URL", "value": "/api" }
      ]
    }
  ]
}
Turned out the problem was with my scripts: I was using a different variable that still had an old value stored in my terminal session.
I've verified that using the latest tag in the task definition's image URL does make a newly started service instance pull the latest-tagged image from ECR, without needing to register a new revision of the task definition.
As a sidenote, one needs to be careful handling the latest tag. In this scenario it works out, but in many other cases it would be error-prone: Ref1, Ref2
You must tag and push latest when you build a new image, otherwise the tag will not be updated in the registry.
There is also an option to force a pull when running an image, so that the Docker host will not assume that just because it pulled latest yesterday, the same image is still latest today.
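On ECS container instances specifically, that pull behavior is controlled by the ECS agent rather than by a docker run flag; a sketch of pinning it in /etc/ecs/ecs.config (the cluster name is a placeholder):
# /etc/ecs/ecs.config on the container instance
ECS_CLUSTER=my-cluster-name
# "always" pulls the image fresh on every task start, so a stale
# locally cached :latest is never reused
ECS_IMAGE_PULL_BEHAVIOR=always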
I'm attempting to deploy a docker image from AWS ECR to Elastic Beanstalk. I've set up all required permissions for Elastic Beanstalk to both S3 and ECR. Communication between these services seems fine, however I get the following errors when attempting to fire up an Elastic Beanstalk environment:
No Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment.
[Instance: i-01cf0bac1863e4eda] Command failed on instance. Return code: 1 Output: No Docker image specified in either Dockerfile or Dockerrun.aws.json. Abort deployment. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I'm uploading a single Dockerrun.aws.json which points to the image on ECR. Below is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "1",
  "containerDefinitions": {
    "Name": "***.eu-central-1.amazonaws.com/***:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
The docker image does exist on ECR at the location specified in the containerDefinitions Name field.
Am I missing something here?
Turns out containerDefinitions is not applicable in this situation. I'm not sure where I found it (maybe in a Dockerrun sample somewhere). The actual property name is Image, as below:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.eu-central-1.amazonaws.com/***:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
You are not missing anything. I had the same problem. It was because of the Dockerfile encoding: use UTF-8 instead of UTF-8 with BOM. More details here:
https://github.com/verygood-ops/eb_docker/blob/master/elasticbeanstalk/hooks/appdeploy/pre/03build.sh#L58
FROM_IMAGE=`cat Dockerfile | grep -i ^FROM | head -n 1 | awk '{ print $2 }' | sed $'s/\r//'`
...
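If your Dockerfile does start with a BOM, the FROM line will never match that grep. One way to check for and strip it (a sketch using common Unix tools; GNU sed assumed for the in-place edit):
# A UTF-8 BOM shows up as the bytes ef bb bf
head -c 3 Dockerfile | xxd

# Strip the BOM from the first line in place
sed -i '1s/^\xEF\xBB\xBF//' Dockerfile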
I have encountered this error when trying to use the AWSEBDockerrunVersion 1 schema on an environment running "Docker running on 64bit Amazon Linux 2" as the platform. The error message gives nothing away.
Creating a new environment on "Docker running on 64bit Amazon Linux" and redeploying my original Dockerrun.aws.json solved the issue for me. You could also migrate your Dockerrun.aws.json to the version 2 schema, as sketched below.
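For reference, a rough v2 equivalent of the corrected file above might look like this (the name and memory values are assumptions, since the v2 schema requires them; the image name keeps the redaction from the question):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app",
      "image": "***.eu-central-1.amazonaws.com/***:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "containerPort": 5000
        }
      ]
    }
  ]
}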
I have a docker image which runs with this command
docker run -it -p 8118:8118 -p 9050:9050 -d dperson/torproxy
It requires the port mappings as arguments.
What I tried:
I pushed this image to an ECR repo and created a task definition for it. Then I created a service behind a Network Load Balancer. But the server does not respond when I try to GET the DNS name of the Network Load Balancer.
I think this is because I didn't configure the ports for the container.
How can I do this?
Port mappings are part of the task definition's container definitions.
This can be done through the UI (Add Container) or via the CLI / SDK (RegisterTaskDefinition):
{
  "containerDefinitions": [
    {
      ...
      "portMappings": [
        {
          "containerPort": number,
          "hostPort": number,
          "protocol": "string"
        }
      ],
      ...
    }
  ]
}
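Applied to the torproxy image above, a registration call that mirrors the two -p flags could look like this (a sketch; the family name, memory value, and image URI are placeholders):
aws ecs register-task-definition \
  --family torproxy \
  --container-definitions '[
    {
      "name": "torproxy",
      "image": "XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/torproxy:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "containerPort": 8118, "hostPort": 8118, "protocol": "tcp" },
        { "containerPort": 9050, "hostPort": 9050, "protocol": "tcp" }
      ]
    }
  ]'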