Elastic Beanstalk multicontainer will not pick up the latest image from ECR

I have a multi-container Python server setup with a Dockerrun.aws.json file that picks up the images from ECR:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"host": {
"sourcePath": "API"
},
"name": "_Api"
}
],
"containerDefinitions": [
{
"essential": true,
"Update": true,
"memory": 128,
"name": "my_api",
"image": "xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-api:test1",
"mountPoints": [
{
"containerPath": "/code",
"sourceVolume": "_Api"
}
]
},
{
"essential": true,
"memory": 128,
"name": "nginx",
"image": "xxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/dashboard-nginx:test1",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"links": [
"my_api"
]
},
{
"essential": true,
"memory": 128,
"name": "redis",
"image": "redis:latest",
"portMappings": [
{
"containerPort": 6379,
"hostPort": 6379
}
]
}
]
}
I have made some modifications to the containers and wish to test them locally using the eb local run command.
But no matter what I do, it keeps using the original old images.
I have a parallel docker-compose.yml I was using before EB, which works as expected:
version: '3'
services:
  my_api:
    image: xxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-api:test1
    build: ./API
    expose:
      - "5555"
    volumes:
      - ./API:/code
    depends_on:
      - redis
  redis:
    image: redis:latest
    networks:
      - service
    ports:
      - "6379:6379"
    expose:
      - "6379"
  nginx:
    image: xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-nginx:test1
    build:
      context: ./API
      dockerfile: Dockerfile-nginx
    ports:
      - 80:80
    depends_on:
      - my_api
I tried building and pushing with docker-compose and with docker, with a new tag, and more, but I still get the same behavior.
The .elasticbeanstalk/docker-compose.yml file seems to get updated, but even running docker-compose up --build with it still uses the older image.
I tried running docker system prune -a to make eb pull the newly tagged image, but still, somehow, I got the old images again.
Even deploying to AWS acts the same.
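For reference, the commands I would expect to force a local refresh (a sketch only; eb local may regenerate its own compose file on the next run) are:

cd .elasticbeanstalk
docker-compose pull
docker-compose up --build --force-recreate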
When I run docker ps -a, I can see that the containers differ only in their names but use different image IDs:
89b258852e84 xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-nginx:test1 "nginx -g 'daemon of…" 9 minutes ago Exited (0) 5 minutes ago dashboard-nginx
23196d6e8016 xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-api:test1 "uwsgi --ini app.ini" 9 minutes ago Exited (0) 5 minutes ago dashboard-api
95b4473bc38f redis:latest "docker-entrypoint.s…" 9 minutes ago Exited (0) 5 minutes ago live_dashboard_redis_1
32be5539e905 xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-nginx:test1 "nginx -g 'daemon of…" 10 minutes ago Exited (0) 6 minutes ago elasticbeanstalk_nginx_1
51d89fcdfd94 redis:latest "docker-entrypoint.s…" 10 minutes ago Exited (0) 6 minutes ago elasticbeanstalk_redis_1
e10715455525 xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/my-api:test1 "uwsgi --ini app.ini" 10 minutes ago Exited (0) 6 minutes ago elasticbeanstalk_myapi_1
What have I missed or not fully understood?
Is there any way to make eb local rebuild and use the latest images locally?
Why doesn't EB pull the latest version of an image when deploying?
Any help or suggestion would be greatly appreciated.
EDIT 1
Some more info I gathered:
When I inspect the docker-compose container and the eb one, I can see they have a different Mounts section, which can explain the code differences:
docker-compose
"Mounts": [
{
"Type": "bind",
"Source": "/host_mnt/c/Workspaces/PRJ/DevOps/Tools/proj/API",
"Destination": "/code",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
]
eb:
"Mounts": [
{
"Type": "volume",
"Name": "API",
"Source": "/var/lib/docker/volumes/API/_data",
"Destination": "/code",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
]
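For anyone wanting to reproduce this, the Mounts sections above come from docker inspect; something like:

docker inspect --format '{{json .Mounts}}' <container-id>

with the container IDs from the docker ps -a output above.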
Strange, as I am working on Windows.

After a few days with AWS Support on the phone, we finally got an answer.
So if anyone faces this in the future: you need to check your mount setup.
Make the volume pick up from /var/app/current/API:
"volumes": [
{
"host": {
"sourcePath": "/var/app/current/API"
},
"name": "_Api"
}
],
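My reading of why this works (an assumption on my part, not something AWS Support spelled out): a relative sourcePath such as "API" gets treated as a Docker-managed named volume, whose stale contents keep shadowing the /code mount, while an absolute path under /var/app/current binds to the freshly deployed application bundle. Locally, removing the stale named volume should also force it to be repopulated:

docker volume ls
docker volume rm API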

Related

Error while deploying web app to AWS ElasticBeanStalk

I am getting the below error while deploying to AWS Elastic Beanstalk from Travis CI.
Service:AmazonECS, Code:ClientException, Message:Container list cannot be empty., Class:com.amazonaws.services.ecs.model.ClientException
.travis.yml:
sudo: required
language: generic
services:
  - docker
before_install:
  - docker build -t sathishpskdocker/react-test -f ./client/Dockerfile.dev ./client
script:
  - docker run -e CI=true sathishpskdocker/react-test npm test
after_success:
  - docker build -t sathishpskdocker/multi-client ./client
  - docker build -t sathishpskdocker/multi-nginx ./nginx
  - docker build -t sathishpskdocker/multi-server ./server
  - docker build -t sathishpskdocker/multi-worker ./worker
  # Log in to the docker CLI
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  # Take those images and push them to docker hub
  - docker push sathishpskdocker/multi-client
  - docker push sathishpskdocker/multi-nginx
  - docker push sathishpskdocker/multi-server
  - docker push sathishpskdocker/multi-worker
deploy:
  provider: elasticbeanstalk
  region: 'us-west-2'
  app: 'multi-docker'
  env: 'Multidocker-env'
  bucker_name: elasticbeanstalk-us-west-2-194531873493
  bucker_path: docker-multi
  On:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": 2,
"containerDefintions": [
{
"name": "client",
"image": "sathishpskdocker/multi-client",
"hostname": "client",
"essential": false,
"memory": 128
},
{
"name": "server",
"image": "sathishpskdocker/multi-server",
"hostname": "api",
"essential": false,
"memory": 128
},
{
"name": "worker",
"image": "sathishpskdocker/multi-worker",
"hostname": "worker",
"essential": false,
"memory": 128
},
{
"name": "nginx",
"image": "sathishpskdocker/multi-nginx",
"hostname": "nginx",
"essential": true,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"links": ["client", "server"],
"memory": 128
}
]
}
The deploy step alone is failing with the error:
Service:AmazonECS, Code:ClientException, Message:Container list cannot be empty., Class:com.amazonaws.services.ecs.model.ClientException
Ah, never mind, it's my mistake. There was a typo in the Dockerrun config file, which wrongly read containerDefintions instead of containerDefinitions.
Thanks to everyone who took a look at my question. Cheers!
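For anyone else hitting this, a quick sanity check that would have caught the typo (assuming jq is installed) is:

jq 'has("containerDefinitions")' Dockerrun.aws.json

which prints false when the key is misspelled.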

Getting error `repository does not exist or may require 'docker login': denied: requested access to the resource is denied` in Elastic Beanstalk

While deploying a dotnet app as Docker with the Multicontainer option in Elastic Beanstalk, I am getting an error like:
2021-05-20 01:26:55 ERROR ECS task stopped due to: Task failed to start. (traveltouchapi: CannotPullContainerError: Error response from daemon: pull access denied for traveltouchapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
postgres_image: )
2021-05-20 01:26:58 ERROR Failed to start ECS task after retrying 2 times.
2021-05-20 01:27:00 ERROR [Instance: i-0844a50e307bd8b23] Command failed on instance. Return code: 1 Output: .
Environment details for: TravelTouchApi-dev3
Application name: TravelTouchApi
Region: ap-south-1
Deployed Version: app-c1ba-210520_065320
Environment ID: e-i9t6f6vszk
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Multi-container Docker running on 64bit Amazon Linux/2.26.0
Tier: WebServer-Standard-1.0
CNAME: TravelTouchApi-dev3.ap-south-1.elasticbeanstalk.com
Updated: 2021-05-20 01:23:27.384000+00:00
My Dockerfile is
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
# Install Node.js
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get install -y \
nodejs \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /src/TravelTouchApi
COPY ["TravelTouchApi.csproj", "./"]
RUN dotnet restore "TravelTouchApi.csproj"
COPY . .
WORKDIR "/src/TravelTouchApi"
RUN dotnet build "TravelTouchApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TravelTouchApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TravelTouchApi.dll"]
My docker-compose.yml is
version: '3.4'
networks:
  traveltouchapi-dev:
    driver: bridge
services:
  traveltouchapi:
    image: traveltouchapi:latest
    depends_on:
      - "postgres_image"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    environment:
      DB_CONNECTION_STRING: "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
    networks:
      - traveltouchapi-dev
  postgres_image:
    image: postgres:latest
    ports:
      - "5432"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "bloguser"
      POSTGRES_PASSWORD: "bloguser"
      POSTGRES_DB: "blogdb"
    networks:
      - traveltouchapi-dev
volumes:
  db_volume:
My Dockerrun.aws.json
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"environment": [
{
"name": "POSTGRES_USER",
"value": "bloguser"
},
{
"name": "POSTGRES_PASSWORD",
"value": "bloguser"
},
{
"name": "POSTGRES_DB",
"value": "blogdb"
}
],
"essential": true,
"image": "postgres:latest",
"memory": 200,
"mountPoints": [
{
"containerPath": "/var/lib/postgresql/data",
"sourceVolume": "Db_Volume"
}
],
"name": "postgres_image",
"portMappings": [
{
"containerPort": 5432
}
]
},
{
"environment": [
{
"name": "DB_CONNECTION_STRING",
"value": "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
}
],
"essential": true,
"image": "traveltouchapi:latest",
"name": "traveltouchapi",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 200
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "db_volume"
},
"name": "Db_Volume"
}
]
}
I think you are missing the login step before deploying the application.
Can you try using this command before deploying?
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, the image name in Dockerrun.aws.json must contain the full repo/tag name, e.g. 'natheesh/traveltouchapi:latest'.
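In other words, something along these lines, where <account-id> is a placeholder for your own account and the repository name is assumed to already exist in ECR:

docker tag traveltouchapi:latest <account-id>.dkr.ecr.ap-south-1.amazonaws.com/traveltouchapi:latest
docker push <account-id>.dkr.ecr.ap-south-1.amazonaws.com/traveltouchapi:latest

Then reference that full URI in the "image" field of Dockerrun.aws.json.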

AWS Elastic Beanstalk gives "could not translate host name "db" to address" error

I've been trying to deploy my Docker setup, consisting of Django, PostgreSQL and Nginx. It works fine when I do sudo docker-compose up. However, when I deploy it on AWS EB, it gives me
could not translate host name "db" to address: Name or service not known
What I've done is push my image to Docker Hub using sudo docker build -t myname/dockername -f Dockerfile . and then simply run eb deploy.
File Structure
myproject
  myproject
    settings.py
    urls.py
    ...
  Dockerfile
  Dockerrun.aws.json
  manage.py
  requirements.txt
  ...
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
EXPOSE 8000
CMD ["sh", "on-container-start.sh"]
Dockerrun.aws.json
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "myname/dockername:latest",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8000"
}
]
}
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    hostname: db
    networks:
      - some_network
  web:
    restart: always
    build: .
    volumes:
      - .:/code
    hostname: web
    expose:
      - "8000"
    depends_on:
      - db
    links:
      - db:db
    networks:
      - some_network
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
    networks:
      - some_network
networks:
  some_network:
One thing I realize is that when I use docker-compose up on my machine, I get 3 different containers running. However on EB, I see only one container running.
I think it's because I'm fetching the image I built from Docker Hub, and that somehow caused these 3 containers to be merged into one, which is messing up host name resolution? I am still not quite sure. Help will be greatly appreciated. Thanks!
Dockerrun.aws.json should correlate with docker-compose.yml
The reason the host name "db" could not be translated to an address is that the docker-compose.yml and Dockerrun.aws.json files describe different architectures:
There are 3 containers in docker-compose.yml
There is only 1 container in Dockerrun.aws.json
Therefore, the application tries to resolve the db hostname and cannot find it, because db is not declared in Dockerrun.aws.json.
Fix Dockerrun.aws.json
So, update your Dockerrun.aws.json. You can do it either manually or using the convenient tool micahhausler/container-transform:
a) either update it manually
You can use samples, such as:
k2works/aws-eb-docker-multi-container-sample
b) or update it using micahhausler/container-transform
You can try micahhausler/container-transform:
Transforms docker-compose, ECS, and Marathon configurations
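If you don't have the tool yet, it is distributed on PyPI (assuming a Python environment is available):

pip install container-transform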
Here is what it outputs for your case:
$ container-transform docker-compose.yml > Dockerrun.aws.json
Dockerrun.aws.json
{
"containerDefinitions": [
{
"essential": true,
"image": "postgres",
"name": "db"
},
{
"essential": true,
"image": "nginx",
"mountPoints": [
{
"containerPath": "/etc/nginx/conf.d",
"sourceVolume": "_ConfigNginx"
}
],
"name": "nginx",
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
]
},
{
"essential": true,
"links": [
"db:db"
],
"mountPoints": [
{
"containerPath": "/code",
"sourceVolume": "_"
}
],
"name": "web"
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "."
},
"name": "_"
},
{
"host": {
"sourcePath": "./config/nginx"
},
"name": "_ConfigNginx"
}
]
}
Note: Of course, you should fix missing settings, such as memory for the db and nginx containers.
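For example, the db definition could become the following; the 128 MB figure is an arbitrary placeholder, so size it for your workload:

{
"essential": true,
"image": "postgres",
"memory": 128,
"name": "db"
}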
You can omit networks altogether
According to Networking in Compose | Docker Documentation:
For example, suppose your app is in a directory called myapp, and your docker-compose.yml looks like this:
docker-compose.yml
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
So, since all your containers are linked to the same some_network, you can omit it.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    hostname: db
  web:
    restart: always
    build: .
    volumes:
      - .:/code
    hostname: web
    expose:
      - "8000"
    depends_on:
      - db
    links:
      - db:db
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
And $ container-transform docker-compose.yml > Dockerrun.aws.json will produce:
Dockerrun.aws.json
{
"containerDefinitions": [
{
"essential": true,
"image": "postgres",
"name": "db"
},
{
"essential": true,
"image": "nginx",
"mountPoints": [
{
"containerPath": "/etc/nginx/conf.d",
"sourceVolume": "_ConfigNginx"
}
],
"name": "nginx",
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
]
},
{
"essential": true,
"links": [
"db:db"
],
"mountPoints": [
{
"containerPath": "/code",
"sourceVolume": "_"
}
],
"name": "web"
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "."
},
"name": "_"
},
{
"host": {
"sourcePath": "./config/nginx"
},
"name": "_ConfigNginx"
}
]
}

Docker swarm worker node cannot serve the nginx service it is hosting

As a learning exercise, I'm trying to set up a docker swarm on two test AWS EC2 instances, but I'm running into a problem when I try to access the service from the IP address of the worker node.
On the master server, I ran docker swarm init. Then, on the worker, I took the output token and ran docker swarm join --token <token> <Master Private IP>:2377
Then I did a simple docker service create -p 80:80 --name nginx nginx on the master, followed by a docker service scale nginx=2. Now, checking with docker service ps nginx gives the following:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
idux5dftj9oj nginx.1 nginx:latest ip-172-31-13-2 Running Running 12 minutes ago
2nwfw3fncybj nginx.2 nginx:latest ip-172-31-14-130 Running Running 38 seconds ago
I've opened the inbound ports on the security groups according to this guide, specifically:
TCP port 2377
TCP and UDP port 7946
UDP port 4789
The master and worker servers have the same security group, so I just set the source to itself.
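For reference, granting one of these rules to the group itself looks roughly like this (sg-xxxxxxxx is a placeholder; repeat per port and protocol):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 7946 --source-group sg-xxxxxxxx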
When I run curl http://localhost on the master, it gives me this, which proves it works:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
<!-- Omitting this for brevity -->
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<!-- Omitting this for brevity -->
</body>
But on the worker, I just get curl: (7) Failed to connect to localhost port 80: Connection refused
A docker ps on the worker gives me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b37770b153db nginx:latest "nginx -g 'daemon of…" 34 minutes ago Up 34 minutes 80/tcp nginx.2.2nwfw3fncybjj7qzeierlx0xr
Running docker service inspect nginx on the master gives:
[
{
"ID": "887xm47oavn367w0o4bo1nmce",
"Version": {
"Index": 652
},
"CreatedAt": "2019-05-19T07:50:54.491113206Z",
"UpdatedAt": "2019-05-19T08:02:53.454804111Z",
"Spec": {
"Name": "nginx",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx:latest#sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68",
"Init": false,
"StopGracePeriod": 10000000000,
"DNSConfig": {},
"Isolation": "default"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
},
{
"OS": "linux"
},
{
"Architecture": "arm64",
"OS": "linux"
},
{
"Architecture": "386",
"OS": "linux"
},
{
"Architecture": "ppc64le",
"OS": "linux"
},
{
"Architecture": "s390x",
"OS": "linux"
}
]
},
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 2
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 80,
"PublishMode": "ingress"
}
]
}
},
"PreviousSpec": {
"Name": "nginx",
"Labels": {},
"TaskTemplate": {
"ContainerSpec": {
"Image": "nginx:latest#sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68",
"Init": false,
"DNSConfig": {},
"Isolation": "default"
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"Placement": {
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
},
{
"OS": "linux"
},
{
"Architecture": "arm64",
"OS": "linux"
},
{
"Architecture": "386",
"OS": "linux"
},
{
"Architecture": "ppc64le",
"OS": "linux"
},
{
"Architecture": "s390x",
"OS": "linux"
}
]
},
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 80,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 80,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 80,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "6scdvoeno2tviu4zgyldmq6b4",
"Addr": "10.255.0.82/16"
}
]
}
}
]
Here's the master's docker info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 4
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: q4h5ahgxf1xwuyi2aotyt20iy
Is Manager: true
ClusterID: r88oqh59x74bl1kqrcg5od2qd
Managers: 1
Nodes: 2
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 172.31.13.2
Manager Addresses:
172.31.13.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-1021-aws
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.945GiB
Name: ip-172-31-13-2
ID: RM34:I2IM:EJ2V:W74X:ECSD:ABCC:ZB4T:B7UO:OIWW:SUQ2:ILDB:HQLQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
And here's the worker's docker info
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 4
Server Version: 18.09.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: slya32xwjmklumhm23bt7xs6m
Is Manager: false
Node Address: 172.31.14.130
Manager Addresses:
172.31.13.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-1021-aws
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.945GiB
Name: ip-172-31-14-130
ID: X7FI:3VCW:OCVI:5XSX:HJ24:2NOD:NQYU:SEYL:JVIJ:J4DI:F5UL:NKZT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: bizmd
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
As far as I've read, there should not be any problems after adding the worker to the swarm and creating a service. Despite that, the worker cannot access the nginx service that it is already hosting.
What could be causing this issue?
I had the idea to check which ports were actually open on my worker server (as opposed to just which were opened on the firewall).
netstat -tulpn showed me:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::9443 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
udp 19968 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 172.31.14.130:68 0.0.0.0:* -
udp 0 0 0.0.0.0:4789 0.0.0.0:* -
I noticed that no process was consuming 7946, which is one of the ports that needed to be opened up. So I restarted the docker service: sudo service docker restart
After the restart finished, I saw a process start up and consume the port. Sure enough, I was then able to execute curl localhost against either node.
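For anyone verifying the same thing, an equivalent check and restart (ss is the modern replacement for netstat; the commands assume Ubuntu with systemd) would be:

sudo ss -tulpn | grep -E '2377|7946|4789'
sudo systemctl restart docker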

CannotPullContainerError on Deploying Multi-container App on ElasticBeanstalk

I have a multi-container app which I want to deploy on ElasticBeanstalk. Below are my files.
Dockerfile
FROM python:2.7
WORKDIR /app
ADD . /app
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y \
apt-utils \
git \
python \
python-dev \
libpcre3 \
libpcre3-dev \
python-setuptools \
python-pip \
nginx \
supervisor \
default-libmysqlclient-dev \
python-psycopg2 \
libpq-dev \
sqlite3 && \
pip install -U pip setuptools && \
rm -rf /var/lib/apt/lists/*
RUN pip install -r requirements.txt
EXPOSE 8000
RUN chmod +x entry_point.sh
docker-compose.yml
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
It works fine when I run it on my local system. But when I upload it to Elastic Beanstalk, it gives me the following error.
ECS task stopped due to: Essential container in task exited. (celery:
db: cbeat: web: CannotPullContainerError: API error (404): pull access
denied for docker_test, repository does not exist or may require
'docker login' redis: )
I transformed docker-compose.yml to Dockerrun.aws.json using container-transform. For the above file, my Dockerrun.aws.json is the following.
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"command": [
"celery",
"beat",
"--loglevel=info"
],
"essential": true,
"image": "docker_test",
"links": [
"db",
"redis"
],
"memory": 10,
"name": "cbeat"
},
{
"command": [
"celery",
"-A",
"docker_test",
"worker",
"-l",
"info"
],
"essential": true,
"image": "docker_test",
"links": [
"db",
"redis"
],
"memory": 10,
"name": "celery"
},
{
"environment": [
{
"name": "POSTGRES_NAME",
"value": "postgres"
},
{
"name": "POSTGRES_USER",
"value": "postgres"
},
{
"name": "POSTGRES_PASSWORD",
"value": "postgres"
},
{
"name": "POSTGRES_DB",
"value": "docker_test"
}
],
"essential": true,
"image": "postgres:9.6",
"memory": 10,
"name": "db"
},
{
"essential": true,
"image": "redis:3.0",
"memory": 10,
"name": "redis"
},
{
"command": [
"gunicorn",
"docker_test.wsgi:application",
"-w",
"2",
"-b",
":8000",
"--timeout",
"120",
"--graceful-timeout",
"120",
"--worker-class",
"gevent"
],
"entryPoint": [
"./entry_point.sh"
],
"environment": [
{
"name": "DATABASE",
"value": "db"
},
{
"name": "POSTGRES_NAME",
"value": "postgres"
},
{
"name": "POSTGRES_USER",
"value": "postgres"
},
{
"name": "POSTGRES_PASSWORD",
"value": "postgres"
},
{
"name": "POSTGRES_DB",
"value": "docker_test"
}
],
"essential": true,
"image": "docker_test",
"memory": 500,
"name": "web",
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
]
}
],
"family": "",
"volumes": []
}
How can I resolve this problem?
Please push the image "docker_test" to either Docker Hub or ECR for Beanstalk to pull the image from. Currently, it's on your local machine and the ECS agent doesn't know about it.
Tag and push the docker_test image to a registry like Docker Hub or ECR (roughly as sketched below).
Update the image repo URL in Dockerrun.aws.json.
Allow Beanstalk to pull the image.
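A rough sketch of the ECR route; the account ID (123456789012) and region are placeholders:

aws ecr create-repository --repository-name docker_test
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag docker_test:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/docker_test:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/docker_test:latest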
I'm not that familiar with EB, but I am pretty familiar with ECR and ECS.
I usually get that error when I try to pull an image from an empty repo on ECR; in other words, the ECR repo was created but you haven't pushed any Docker images to it yet.
This can also happen when you try to pull an image from ECR and it can't find the version number of the image in the tag. I suggest that you change your docker-compose.yml file to use the latest version of the images. This will mean that everywhere you mention the image docker_test you will need to suffix it with ":latest".
Something like this:
image: docker_test:latest
I will post the whole docker-compose.yml I made for you at the end of this reply.
I would suggest that you have a look at this doc: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html and see the section "Using Images from an Amazon ECR Repository"; it explains how you can resolve the docker login issue.
I hope that helps. Please reply if you have any questions regarding this.
version: "2"
services:
db:
restart: always
container_name: docker_test-db
image: postgres:9.6
expose:
- "5432"
mem_limit: 10m
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
redis:
restart: always
image: redis:3.0
expose:
- "6379"
mem_limit: 10m
web:
# replace username/repo:tag with your name and image details
restart: always
build: .
image: docker_test:latest
container_name: docker_test-container
ports:
- "8000:8000"
environment:
- DATABASE=db
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=docker_test
mem_limit: 500m
depends_on:
- db
- redis
entrypoint: ./entry_point.sh
command: gunicorn docker_test.wsgi:application -w 2 -b :8000 --timeout 120 --graceful-timeout 120 --worker-class gevent
celery:
image: docker_test
container_name: docker_test-celery
command: celery -A docker_test worker -l info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web
cbeat:
image: docker_test:latest
container_name: docker_test-cbeat
command: celery beat --loglevel=info
links:
- db
- redis
mem_limit: 10m
depends_on:
- web