Hi, I have a small Docker container that I want to ship to Elastic Beanstalk.
This is my configuration:
The Dockerfile I am using to build the image:
FROM mwaaas/sms_packages:0.0.2
WORKDIR /root
# create a folder code where we will put our code
RUN mkdir code
# add code to the folder
ADD . code
WORKDIR /root/code
CMD gunicorn --bind=0.0.0.0:3000 --env DJANGO_SETTINGS_MODULE=sms_platform.settings sms_platform.wsgi
EXPOSE 3000
and the Dockerrun.aws.json is configured this way:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "name": "sms_platform_app",
      "image": "mwaaas/sms_platform:{{version}}",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ]
    }
  ]
}
If I check the logs:
ddef82a9bc59 mwaaas/sms_platform:staging_ "/bin/sh -c 'gunicor
For some reason I want the gunicorn command to be in the Dockerrun.aws.json,
so I removed the CMD instruction from the Dockerfile and
moved it into the Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "name": "sms_platform_app",
      "image": "mwaaas/sms_platform:production_32b7",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ],
      "command": "gunicorn --bind=0.0.0.0:3000 --env DJANGO_SETTINGS_MODULE=sms_platform.settings sms_platform.wsgi"
    }
  ]
}
Somehow Elastic Beanstalk is not running the command.
When I check the logs I see:
67798ba99a94 mwaaas/sms_platform:staging_b68d "/sbin/my_init"
I am expecting the command to be gunicorn, but instead it is /sbin/my_init.
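(For reference, in the ECS container-definition syntax that Dockerrun.aws.json version 2 reuses, "command" is normally written as a JSON array in exec form rather than as a single string; the container command overrides the image's CMD. I can't say for certain that the string form is why the override is ignored here, but an array version of the same command, kept otherwise identical to the question, would look like:

"command": [
  "gunicorn",
  "--bind=0.0.0.0:3000",
  "--env",
  "DJANGO_SETTINGS_MODULE=sms_platform.settings",
  "sms_platform.wsgi"
]
)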
Related
We are running the clair and clair-db containers in the same Fargate task. Below is a snippet of our task definition.
{
  "family": "clair",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "db",
      "image": "<REPO_URL>/clairdb:v1.0",
      "essential": true,
      "command": [
        "sh",
        "-c",
        "echo clair db runs"
      ],
      "portMappings": [
        {
          "containerPort": 5432,
          "hostPort": 5432,
          "protocol": "tcp"
        }
      ],
    },
    {
      "name": "clair",
      "image": "<REPO_URL>/clair:v1.0",
      "essential": true,
      "command": [
        "sh",
        "-c",
        "echo clair runs"
      ],
      "portMappings": [
        {
          "containerPort": 6060,
          "hostPort": 6060,
          "protocol": "tcp"
        }
      ],
As per the AWS Fargate docs, localhost can be used for communication between the two containers of a single task in awsvpc mode. We have set the following option in Clair's config.yaml:
clair:
  database:
    type: pgsql
    options:
      source: host=localhost port=5432 user=postgres password=xxxx sslmode=disable statement_timeout=60000
So, as per this, Clair should be able to connect to the clair-db container running on localhost:5432 on the same network. The clair-db container runs fine in Fargate, but the clair container is failing with the log below:
{"Event":"pgsql: could not open database: dial tcp 127.0.0.1:5432: connect: connection refused","Level":"fatal","Location":"main.go:97","Time":"2021-03-23 13:26:38.737437"}
In Docker terms, this is how we link these two containers:
docker run -p 5432:5432 -d --name db arminc/clair-db:2017-05-05
docker run -p 6060:6060 --link db:postgres -d --name clair arminc/clair-local-scan:v2.0.0-rc.0
Are we missing anything here? Any idea why the connection to localhost isn't working between the Fargate containers for Clair?
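One thing worth checking (this is an assumption, not a confirmed cause): "connection refused" can simply mean Postgres is not yet accepting connections when Clair starts. ECS container definitions support dependsOn ordering between containers in the same task, so a sketch like the following, added to the clair container definition (container name taken from the task definition above), would delay its start until the db container has started:

"dependsOn": [
  {
    "containerName": "db",
    "condition": "START"
  }
]

For a stronger guarantee, condition HEALTHY can be used together with a healthCheck on the db container, which needs a sufficiently recent Fargate platform version.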
I have set up a Django server on AWS Elastic Beanstalk. On that server, I have the following services:
Application server
Celery Worker
Celery Beat
I am using Docker to deploy my application, which means I build my Docker image and use that image to run all three services. My Dockerrun.aws.json file is below.
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "command": [
        "sh",
        "-c",
        "./entry_point.sh && gunicorn Project.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent"
      ],
      "environment": [],
      "essential": true,
      "image": "695189796512.dkr.ecr.us-west-2.amazonaws.com/Project-20181107174734",
      "name": "app",
      "memory": 500,
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 80
        }
      ]
    },
    {
      "command": [
        "celery",
        "-A",
        "Project",
        "beat",
        "--loglevel=info",
        "--uid",
        "django"
      ],
      "environment": [],
      "essential": true,
      "image": "695189796512.dkr.ecr.us-west-2.amazonaws.com/Project-20181107174734",
      "memory": 200,
      "name": "celery-beat"
    },
    {
      "command": [
        "celery",
        "-A",
        "Project",
        "worker",
        "--loglevel=info",
        "--uid",
        "django"
      ],
      "environment": [],
      "essential": true,
      "image": "695189796512.dkr.ecr.us-west-2.amazonaws.com/Project-20181107174734",
      "memory": 200,
      "name": "celery-worker"
    }
  ],
  "family": "",
  "volumes": []
}
Problem:
This configuration works fine, but the problem is that all three services run on every node. When load increases, my environment scales up to multiple nodes and all three services run on each of them. Multiple Celery workers are acceptable because they all consume the same task queue (I am using SQS), but I want only one Celery Beat service across all my nodes, because multiple Beat instances can add the same task to the queue multiple times.
What I want is to run Beat as a single, centralized service, and I would like help with how to achieve this in my current setup.
I also want to know whether there is any problem with running multiple instances of the Celery worker on my server.
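(One pattern that addresses this, sketched under the assumption that a second, single-instance Elastic Beanstalk environment is acceptable: keep the app and worker containers in the autoscaled environment and deploy Beat on its own single-instance environment, so exactly one Beat process exists. The hypothetical Dockerrun.aws.json for that second environment would contain only the beat container, e.g.:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "command": ["celery", "-A", "Project", "beat", "--loglevel=info", "--uid", "django"],
      "essential": true,
      "image": "695189796512.dkr.ecr.us-west-2.amazonaws.com/Project-20181107174734",
      "memory": 200,
      "name": "celery-beat"
    }
  ]
}

Running multiple Celery workers against the same SQS queue is generally fine, since SQS delivers each message to only one consumer at a time.)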
I am trying to run a dockerised Jenkins and a Postgres database on AWS Elastic Beanstalk in a multi-container t2.micro environment:
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "postgres-db",
      "image": "postgres:9.5-alpine",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 5432,
          "containerPort": 5432
        }
      ]
    },
    {
      "name": "jenkins-blueocean",
      "image": "<account_id>.dkr.ecr.ap-southeast-2.amazonaws.com/<image>:latest",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "jenkins-data",
          "containerPath": "/var/jenkins_home"
        }
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "postgres-db"
      ]
    }
  ],
  "volumes": [
    {
      "name": "jenkins-data",
      "host": {
        "sourcePath": "/var/jenkins-data"
      }
    }
  ]
}
AWS shows it deploys fine, but the log for the jenkins-blueocean container shows this error:
/var/log/containers/jenkins-blueocean-7ce78063214b-stdouterr.log
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
Am I missing something to allow jenkins access to the volume?
Thanks in advance!
Not 100% sure if this is the right path, but we ended up following the .ebextensions method of running commands to set up the volume path so that the jenkins user from the jenkins-blueocean container has full access to do its thing.
mkdir -p /var/jenkins-data
chmod 777 /var/jenkins-data
This was because the permissions on that location in the Docker instance gave r-x rights to other users, with the root user having rwx.
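For anyone wanting the concrete shape of that, a minimal sketch of the .ebextensions config we mean (the file name is arbitrary, e.g. .ebextensions/01-jenkins-volume.config):

commands:
  01_create_jenkins_data_dir:
    command: mkdir -p /var/jenkins-data
  02_open_permissions:
    command: chmod 777 /var/jenkins-data

These commands run on the EC2 host during deployment, so the mounted /var/jenkins_home path ends up writable by the jenkins user inside the container.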
I am following the instructions at https://docs.docker.com/compose/django/ to get a basic dockerized Django app going. I can run it locally without a problem, but I am having trouble deploying it to AWS using Elastic Beanstalk. After reading here, I figured that I need to translate docker-compose.yml into Dockerrun.aws.json for it to work.
The original docker-compose.yml is
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and here is what I have translated so far:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "db"
    },
    {
      "name": "web"
    }
  ],
  "containerDefinitions": [
    {
      "name": "db",
      "image": "postgres",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "db",
          "containerPath": "/var/app/current/db"
        }
      ]
    },
    {
      "name": "web",
      "image": "web",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/app/current/web"
        }
      ],
      "portMappings": [
        {
          "hostPort": 8000,
          "containerPort": 8000
        }
      ],
      "links": [
        "db"
      ],
      "command": "python manage.py runserver 0.0.0.0:8000"
    }
  ]
}
but it's not working. What am I doing wrong?
I was struggling to get the ins and outs of the Dockerrun format. Check out Container Transform: "Transforms docker-compose, ECS, and Marathon configurations"... it's a life-saver. Here is what it outputs for your example:
{
  "containerDefinitions": [
    {
      "essential": true,
      "image": "postgres",
      "name": "db"
    },
    {
      "command": [
        "python",
        "manage.py",
        "runserver",
        "0.0.0.0:8000"
      ],
      "essential": true,
      "mountPoints": [
        {
          "containerPath": "/code",
          "sourceVolume": "_"
        }
      ],
      "name": "web",
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "."
      },
      "name": "_"
    }
  ]
}
Container web is missing required parameter "image".
Container web is missing required parameter "memory".
Container db is missing required parameter "memory".
That is, in this new format you must tell it how much memory to allot to each container. Also, you need to provide an image; there is no option to build. As mentioned in the comments, you want to build the image and push it to Docker Hub or ECR, then point to that location: e.g. [org name]/[repo]:latest on Docker Hub, or the repository URL for ECR. But container-transform does the mountPoints and volumes for you; it's amazing.
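To make the output deployable, the missing pieces on the web container would look roughly like this (the ECR URL and memory figure below are placeholders, not values from the question):

"essential": true,
"image": "<account_id>.dkr.ecr.<region>.amazonaws.com/web:latest",
"memory": 256,
"name": "web"

plus a "memory" value on the db container as well.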
You have a few issues.
1) 'web' doesn't appear to be an 'image'; you define it as 'build .' in your docker-compose. Remember, the Dockerrun.aws.json will have to pull the image from somewhere (easiest is to use ECS's repositories).
2) 'command' should be an array, so you'd have:
"command": ["python", "manage.py", "runserver", "0.0.0.0:8000"]
3) your mountPoints are correct, but the volume definition at the top is wrong.
{
  "name": "web",
  "host": {
    "sourcePath": "/var/app/current/db"
  }
}
I'm not 100% certain, but that path works for me.
If you have the Dockerrun.aws.json file with a directory called /db next to it, then that will be the mount location.
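In other words, the "name" in the top-level volumes entry is what the container's "sourceVolume" refers to; a minimal sketch of how the two halves pair up (the paths here are illustrative, not taken from the question):

"volumes": [
  {
    "name": "web",
    "host": {
      "sourcePath": "/var/app/current"
    }
  }
],
"containerDefinitions": [
  {
    "name": "web",
    "mountPoints": [
      {
        "sourceVolume": "web",
        "containerPath": "/code"
      }
    ]
  }
]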
I have an environment with a few containers, some of which are linked. When I run the environment with "docker-compose up -d", it creates entries in /etc/hosts for the linked containers. When I run it with "eb local run", no entries are created. Why is that?
My Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "api",
      "image": "php7",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 80
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "api",
          "containerPath": "/var/www/html/"
        }
      ]
    },
    {
      "name": "nodeapi",
      "image": "nodejs",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 5000,
          "containerPort": 5000
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "nodeapi",
          "containerPath": "/var/www/app/"
        }
      ],
      "Logging": "/var/eb_log"
    },
    {
      "name": "proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "links": [
        "api",
        "nodeapi"
      ],
      "portMappings": [
        {
          "hostPort": 8443,
          "containerPort": 80
        }
      ]
    }
  ]
}
This generates the following docker-compose.yml:
api:
  image: php7
  ports:
    - 8080:80
nodeapi:
  image: nodejs
  ports:
    - 5000:5000
proxy:
  image: nginx
  links:
    - api:api
    - nodeapi:nodeapi
  ports:
    - 8443:80
Docker switched to DNS based lookups a while back instead of adding entries to /etc/hosts. Linking is also discouraged in favor of using a common network for the containers.
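For example, with a user-defined bridge network the containers can resolve each other by name without links (the network, image, and container names here are only placeholders):

docker network create appnet
docker run -d --name db --network appnet postgres
docker run -d --name web --network appnet myapp:latest
# inside "web", the hostname "db" now resolves via Docker's embedded DNS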
OK, this was a local issue. I upgraded Docker and the EB CLI to the latest versions, and that solved it. I'm not sure why the EB CLI previously failed to add aliases to /etc/hosts, but after the upgrade it does. Now I get the same results with either "docker-compose up" or "eb local run". All linked containers are linked now and work as expected.