No connection in a multicontainer docker environment - amazon-web-services

I am trying to deploy an application in Docker running on 64bit Amazon Linux 2. I am using a pipeline which publishes images to a private repository on Docker Hub. Elastic Beanstalk uses docker-compose to run the containers, but so far I've had no success in accessing the application. I am not using a Dockerrun.aws.json file, as v3 does not support any container configuration, and as far as I know it's not needed for Docker Compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication on the bridge network.
In eb-stdouterr.log I see errors while establishing a connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration/Software/Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use a proper host name.
Yet, I get exceptions. Any advice?

It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file:   # <-- these
    - .env    # <-- two lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.

Related

Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously, this worked fine and all the ECS resources (cluster, task definitions, services, load balancer) had been created.
For some reason, now, this does not work anymore (although I have not changed my docker-compose.yml file).
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context before (docker context use my-aws-context).
And I have updated to the latest version of Docker Desktop for Windows and AWS CLI.
Did someone already have a similar problem?
From the message it appears that you are composing up a stack that already exists (on AWS), so Compose is trying to update the existing CFN stack. Can you check whether this is the case? If that is what's happening, you have a couple of options: 1) delete the CFN stack (either in AWS or with docker compose down), or 2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice). By default Compose will use the directory name as the project name, so if you compose up twice it will try to work on the same stack.
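As a sketch, the two options above look like this on the command line (the project name is a placeholder of your choice):

```shell
# Option 1: tear down the existing CloudFormation stack first,
# then deploy again from scratch.
docker compose down
docker compose up

# Option 2: deploy under a different project (and therefore stack) name,
# leaving the existing stack untouched.
docker compose --project-name my-second-stack up
```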

Unable to access application deployed on AWS app runner

I am unable to access a Spring Boot application that I have deployed on AWS App Runner using ECR. I don't see any errors in either the application logs or the event logs, and the App Runner status is green.
On App Runner, the port is configured to 8080.
docker-compose file:
services:
  my-app:
    image: my-app
    ports:
      - "8080:8080"
    environment:
      - DATABASE_HOST=${DATABASE_HOST}
      - DATABASE_USER=${DATABASE_USER}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_NAME=${DATABASE_NAME}
      - DATABASE_PORT=3306
    networks:
      - dummy-mysql
    depends_on:
      mysqldb:
        condition: service_healthy
  mysqldb:
    image: mysql:8
    ports:
      - '3307:3306'
    networks:
      - dummy-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_DATABASE=${DATABASE_NAME}
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
networks:
  dummy-mysql:

Docker-compose: db connection from web container to neo4j container using bolt

I'm working on a Django project with a Neo4j db, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images everything seems fine, but any connection from the web container to the db over Bolt is refused, although I can access the Neo4j db from the browser over HTTP, and even from the local machine over Bolt.
This is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17
this is my docker-compose file:
version: '3'
services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
and here's the connection config in the Django settings:
NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance.
To connect from one container to another (inside the same docker-compose project), you should use the container name of the target container instead of localhost (or 127.0.0.1). In your case that is neo4j_db.
When connecting from another container, you should use the internal port, in your case 7687.
In the neo4j service, the bolt.listen_address should be 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection URL should be:
bolt://neo4j:pass@neo4j_db:7687
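In the Django settings this amounts to the following sketch (the env-var fallback mirrors the settings line from the question; "pass" stands in for the real password):

```python
import os

# Inside the compose network, the service name "neo4j_db" resolves to the
# db container, and 7687 is the container-internal Bolt port.
NEOMODEL_NEO4J_BOLT_URL = os.environ.get(
    'NEO4J_BOLT_URL',
    'bolt://neo4j:pass@neo4j_db:7687',
)
```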

Docker : compose file is incompatible with Amazon ECS

I am trying to deploy my docker image to AWS ECS. I have created the ECR repository and completed all required steps, up to pushing the image to ECR.
My docker-compose.yaml looks like this:
version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    networks:
      - nginx_network
      - database1_network
    depends_on:
      - database1
  nginx:
    image: nginx:1.13
    ports:
      - 80:5000
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - nginx_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  nginx_network:
    driver: bridge
  database1_network:
    driver: bridge
volumes:
  database1_volume:
  static_volume: # <-- declare the static volume
  media_volume: # <-- declare the media volume
I am trying to run the command:
docker ecs compose -n abc up
And I get the following error:
WARN[0000] services.build: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
ERRO[0000] published port can't be set to a distinct value than container port: incompatible attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] services.env_file: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
compose file is incompatible with Amazon ECS
I am using the latest version of Docker, i.e. 19.03.08, and the latest aws-cli/2.0.39.
I am facing the same trouble, and I was able to work around it by removing all attributes in docker-compose.yaml except for image.
In your djangoapp service under the image attribute, you set the value to a URI with the comment "the uri after pushing the image". Presumably this is the URI to a locally-built docker image of djangoapp, which was pushed to a ECR repository.
Since you already built and pushed the djangoapp image to ECR, just leave the image attribute and comment out all other attributes listed in the error message from docker-compose.yaml:
build
volumes
...
In my case it helped.
The list of supported attributes: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html
Docker decided not to allow this. This is what they said in a Slack convo: "We decided to not support this as this would only apply to ingress traffic, not service-to-service, which would be both confusing and inconsistent with local development".
I'm not sure what you're trying to achieve, but it looks like you want to have a django service backed by a postgres database, and you want to use nginx as a reverse proxy to forward requests to django?
If you are trying to do that, then just use a single network for your cluster, and get rid of the driver option(s) for the networks. Removing those driver options will get rid of the networks.driver errors. The networks in docker compose map to EC2 security groups; they're not like the private/bridged/NAT networks you would use in VM management tools like VMware or Hyper-V. Putting all of your services on the same network will allow them to communicate with each other, if that's what you're looking for. They will also be able to communicate out to the internet, but only those that have ports set will be reachable directly from the internet.
Regarding ports, only symmetrical port mapping is supported in ECS. This means you can't map an external port to a different internal port; they have to be the same. So your nginx configuration must use either 80 or 5000, not both. Fixing that will get rid of your third error.
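For example, a sketch of the two forms (assuming nginx is made to listen on 5000 inside the container):

```yaml
# Rejected by the ECS integration: host and container ports differ.
# ports:
#   - "80:5000"

# Accepted: symmetric mapping; pick one port and use it on both sides.
ports:
  - "5000:5000"
```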
As @Maksym specified, you don't need the build option under your djangoapp service. You already have the image built and are just deploying it.
For your volumes, you're trying to mount a host path with .:/opt/services/djangoapp/src. When you use docker compose to deploy to ECS, it uses CloudFormation to deploy your stack, and each of your services runs "serverless", so there isn't a host to mount a path from. In your case, since you're building the djangoapp image yourself, just update your Dockerfile to copy the desired contents into the /opt/services/djangoapp/src folder as part of the image build. Do the same with your nginx service: create your own nginx image that includes the files you want in /etc/nginx/conf.d, push it to ECR, and then use that image in your compose yaml. One of those files should be a configuration that reverse proxies port 80 to your djangoapp:5000 port.
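A minimal sketch of such a custom nginx image (the COPY source path is taken from the compose file above; the rest is an assumption about your project layout):

```dockerfile
FROM nginx:1.13

# Bake the reverse-proxy configuration into the image instead of
# bind-mounting ./config/nginx/conf.d at deploy time.
COPY config/nginx/conf.d/ /etc/nginx/conf.d/
```

Build it, push it to ECR, and reference that image in the compose yaml instead of the stock nginx:1.13.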
For your other errors, I'm not sure. The configuration for env_file looks fine, as do your named volume mappings like static_volume:/opt/services/djangoapp/static. I don't know if updating the version heading at the top of your file to 3.7 will help.
In the end, your file should look similar to this:
version: '3.7'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    volumes:
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    networks:
      - app_network
    depends_on:
      - database1
  nginx:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest
    ports:
      - 80
    volumes:
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - app_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - app_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  app_network:
    name: app_network
volumes:
  database1_volume:
    name: app_database1_volume
  static_volume:
    name: app_static_volume
  media_volume:
    name: app_media_volume

Selenium Grid Setup using Docker Compose on AWS ECS

Context:
I am trying to set up a Selenium grid to run my UI tests on CI. CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using the docker compose and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container, and using a bridge network I have added that container to the same network as the grid.
The docker compose looks something like this:
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
unset DISPLAY helped me solve the problem.
This helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
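As a sketch, for the current shell and, optionally, for future logins of such a user (the .bash_profile path is the conventional one):

```shell
# Clear DISPLAY for the current shell so Java/AWT stops trying to reach X11.
unset DISPLAY

# Optionally persist it for a dedicated app-server/tools user.
echo 'unset DISPLAY' >> "$HOME/.bash_profile"
```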
Can you please try this:
- no_proxy=""