Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously this worked fine, and all the ECS resources (cluster, task definitions, services, load balancer) were created. For some reason it no longer works, although I have not changed my docker-compose.yml file.
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context beforehand (docker context use my-aws-context), and I have updated to the latest versions of Docker Desktop for Windows and the AWS CLI.
Has anyone run into a similar problem?

From the message it appears that you are trying to compose up a stack that already exists (on AWS), so it's trying to update the CFN stack. Can you check if this is the case? You have a couple of options if that is what's happening: 1) delete the CFN stack (either in AWS or with docker compose down), or 2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice). By default Compose will use the directory name as the project name, so if you compose up twice it will try to work on the same stack.
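For example (my-other-stack is an arbitrary name, assuming the AWS context is active):
docker compose down                              # option 1: tear down the existing CFN stack
docker compose --project-name my-other-stack up  # option 2: deploy under a different stack name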

Related

Docker creates a new volume every time I do docker compose up

I have a docker-compose file that spins up several services. I just got an error stating I ran out of disk space, so I typed "docker system df" and saw I have 21 volumes. If I have 3 docker containers, each with a volume attached, why is it showing me a total count of 21 volumes?
I am using AWS EC2. This is my compose file; is there anything wrong with the way I have the volumes set? The postgres data got persisted when I shut down and restarted. I am just confused about the volume size and the message that I cannot rebuild due to no space on a T2 Large instance.
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '443:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
A Docker image's Dockerfile can contain a VOLUME directive. This is an instruction to Docker that tells it that some container directory contains data that needs to be persisted, and Docker should always ensure a volume of some sort is mounted on that directory.
More specifically, the postgres image declares
VOLUME /var/lib/postgresql/data
Your Compose setup doesn't mount anything on that specific directory. Because of this, Docker creates an anonymous volume and mounts it there for you. This isn't specific to the postgres image and other containers in your stack may have similar local data directories. Those anonymous volumes are what you're seeing in the docker system df output (docker volume ls will also show them).
In a later question you also note that Compose has trouble finding these anonymous volumes, and it's better to not rely on this functionality. Make sure you're mounting a host directory or named volume for these data directories via Compose volumes:.
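A minimal sketch of that fix for the postgres service (pgdata is an arbitrary volume name; note the image's declared data directory is /var/lib/postgresql/data, not /data/postgres):
services:
  postgres:
    container_name: postgres
    image: postgres:latest
    volumes:
      - pgdata:/var/lib/postgresql/data  # named volume instead of an anonymous one
volumes:
  pgdata: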
If your main containers are up and running, run
docker volume prune
and it should remove any volumes that are detached or unused by any container.
I make it a habit to periodically run the following on my AWS instance:
docker system prune
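If you want to automate that habit, a crontab entry is one option (a sketch; the weekly schedule is arbitrary, and -f skips the confirmation prompt):
0 3 * * 0 docker system prune -f  # prune unused containers, networks, and images every Sunday at 03:00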

No connection in a multicontainer docker environment

I am trying to deploy an application in Docker running on 64-bit Amazon Linux 2. I am using a pipeline, which publishes images to a private repository on Docker Hub. Elastic Beanstalk uses docker-compose to run the containers, but so far I've had no success in accessing the application. I am not using a dockerrun.aws.json file, as v3 does not support any container configuration, and as far as I know it's not needed for docker compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication in the bridge network.
In the eb-stdouterr.log I see that there are errors while establishing the connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications, which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration > Software > Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use a proper host name.
Yet, I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file:  # <-- these
    - .env   # <-- 2 lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.
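For illustration (a hedged sketch, not copied from an actual deployment): the generated .env is a plain KEY=value file, so with the environment property above it would contain a line such as:
RABBIT_HOSTNAME=rabbitmq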

Docker : compose file is incompatible with Amazon ECS

I am trying to deploy my docker image in AWS ECS. I have created the ECR repository and completed all the required steps up to pushing the image to ECR.
My docker-compose.yaml looks like this
version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    networks:
      - nginx_network
      - database1_network
    depends_on:
      - database1
  nginx:
    image: nginx:1.13
    ports:
      - 80:5000
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - nginx_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  nginx_network:
    driver: bridge
  database1_network:
    driver: bridge
volumes:
  database1_volume:
  static_volume: # <-- declare the static volume
  media_volume: # <-- declare the media volume
I am trying to run the command:
docker ecs compose -n abc up
And i get the following error:
WARN[0000] services.build: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
ERRO[0000] published port can't be set to a distinct value than container port: incompatible attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] services.env_file: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
compose file is incompatible with Amazon ECS
I am using the latest version of Docker, i.e. 19.03.08, and the latest aws-cli/2.0.39.
I ran into the same trouble and was able to get past it by removing all attributes in docker-compose.yaml except for image.
In your djangoapp service, under the image attribute, you set the value to a URI with the comment "the uri after pushing the image". Presumably this is the URI of a locally built docker image of djangoapp that was pushed to an ECR repository.
Since you already built and pushed the djangoapp image to ECR, just leave the image attribute and comment out all other attributes listed in the error message from docker-compose.yaml:
build
volumes
...
In my case it helped.
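For reference, a minimal sketch of the trimmed file (only the image attribute kept; the URI is the one from the question):
version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest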
The list of supported attributes: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html
Docker decided not to allow this. This is what they said in a Slack conversation: "We decided to not support this as this would only apply to ingress traffic, not service-to-service, which would be both confusing and inconsistent with local development".
I'm not sure what you're trying to achieve, but it looks like you want to have a django service backed by a postgres database, and you want to use nginx as a reverse proxy to forward requests to django?
If you are trying to do that, then just use a single network for your cluster and get rid of the driver option(s) for the networks. Removing those driver options will get rid of the networks.driver errors. Networks in docker compose map to EC2 security groups; they're not like the private/bridged/NAT networks you would use in VM management tools like VMware or Hyper-V. Putting all of your services on the same network will allow them to communicate with each other, if that's what you're looking for. They will also be able to communicate out to the internet, but only those that have ports set will be reachable directly from the internet.
Regarding ports, only symmetrical port mapping is supported in ECS. This means you can't map an external port to a different internal port; they have to be the same. So your nginx configuration must use either 80 or 5000, not both. Fixing that will get rid of your third error.
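A sketch of the fix, assuming you keep nginx listening on its default port 80 inside the container:
nginx:
  image: nginx:1.13
  ports:
    - "80:80" # host and container ports must match on ECS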
As @Maksym mentioned, you don't need the build option under your djangoapp service. You already have the image built and are just deploying it.
For your volumes, you're trying to mount a host path with .:/opt/services/djangoapp/src. When you use docker compose to deploy to ECS, it uses CloudFormation to deploy your stack, and each of your services runs "serverless", so there is no host to mount a path from. In your case, since you're building the djangoapp image yourself, just update your Dockerfile to copy the desired contents into the /opt/services/djangoapp/src folder as part of the image build. Do the same with your nginx service: create your own nginx image that includes the files you want in /etc/nginx/conf.d, push it to ECR, and then use that image in your compose yaml. One of those files should be a configuration that reverse proxies port 80 to djangoapp:5000.
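A minimal sketch of such a custom nginx image (the my-nginx repository name and the config/nginx/conf.d path are assumptions carried over from the compose file):
# Dockerfile.nginx -- bakes the reverse-proxy config into the image
FROM nginx:1.13
COPY config/nginx/conf.d /etc/nginx/conf.d
Built and pushed with something like (after authenticating to ECR):
docker build -f Dockerfile.nginx -t xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest .
docker push xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest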
For your other errors, I'm not sure. The configuration for env_file looks fine, as do your named volume mappings like static_volume:/opt/services/djangoapp/static. I don't know if updating the version heading at the top of your file to 3.7 will help.
In the end, your file should look similar to this:
version: '3.7'
services:
djangoapp:
image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest #uri after pushing the image
volumes:
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
networks:
- app_network
depends_on:
- database1
nginx:
image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest
ports:
- 80
volumes:
- static_volume:/opt/services/djangoapp/static # <-- bind the static volume
- media_volume:/opt/services/djangoapp/media # <-- bind the media volume
depends_on:
- djangoapp
networks:
- app_network
database1:
image: postgres:10
env_file:
- config/db/database1_env
networks:
- app_network
volumes:
- database1_volume:/var/lib/postgresql/data
networks:
app_network:
name: app_network
volumes:
database1_volume:
name: app_database1_volume
static_volume:
name: app_static_volumne
media_volume:
name: app_media_volume

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file and I can connect to the pod and use psql to see the schema but none of the data entered from docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem inside the VM.
Hint: you can run docker against minikube's Docker daemon if you first run eval $(minikube docker-env)
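For example (the image name is arbitrary):
eval $(minikube docker-env)  # point the local docker CLI at minikube's daemon
docker build -t myimage .    # the image is now built inside the minikube VM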
For docker stacks, run docker service inspect; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
Fixed in the last Docker Edge update.

Selenium Grid Setup using Docker Compose on AWS ECS

Context:
I am trying to set up a selenium grid to run my UI tests on CI. CI is Jenkins 2.0 and it runs on AWS ECS. When I create a selenium grid using docker compose and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container and using a bridge network I have added the container to the same network as the grid.
The docker compose looks something like this :
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
unset DISPLAY
This helped me solve the problem. It helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
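For example, a sketch assuming a bash login shell for that user:
echo 'unset DISPLAY' >> ~/.bash_profile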
Can you please try this:
- no_proxy=""