I am unable to access a Spring Boot application. I have deployed the application on AWS App Runner using ECR. I don't see any errors in either the application logs or the event logs, and the App Runner status is green.
In App Runner, the port is configured as 8080.
docker-compose file:
services:
  my-app:
    image: my-app
    ports:
      - "8080:8080"
    environment:
      - DATABASE_HOST=${DATABASE_HOST}
      - DATABASE_USER=${DATABASE_USER}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_NAME=${DATABASE_NAME}
      - DATABASE_PORT=3306
    networks:
      - dummy-mysql
    depends_on:
      mysqldb:
        condition: service_healthy
  mysqldb:
    image: mysql:8
    ports:
      - '3307:3306'
    networks:
      - dummy-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DATABASE_PASSWORD}
      - MYSQL_DATABASE=${DATABASE_NAME}
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
networks:
  dummy-mysql:
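For reference, a minimal sketch of how the container might pick these variables up on the Spring Boot side; the actual application.yml isn't shown in the question, so the property names below are assumptions:

# hypothetical application.yml -- not part of the original question
server:
  port: 8080                      # must match the port configured in App Runner
spring:
  datasource:
    url: jdbc:mysql://${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

One thing worth keeping in mind with this setup: App Runner deploys a single image from ECR, so the mysqldb service from the compose file does not run next to it; DATABASE_HOST must point at a database that is reachable from the App Runner service (e.g. RDS).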
Related
I am uploading the .yml multi-container file to EB from the AWS console and everything works fine the first time, but the next time I deploy the same .yml, EB starts a new MySQL database; it seems it doesn't detect the volumes. Any ideas? Thanks!
Here is my .yml file:
version: '3.3'
services:
  database:
    image: mysql:5.7.25
    environment:
      - MYSQL_ROOT_PASSWORD=testpass
      - MYSQL_DATABASE=testdb
    container_name: local_database
    ports:
      - '3306:3306'
    volumes:
      - './mysql_data:/var/lib/mysql'
  nest:
    image: rsteercen/nest_local:latest
    container_name: nest
    command: sh -c "sleep 10 && npm run migrations:run && node dist/src/main"
    environment:
      - DATABASE_TYPE=mysql
      - DATABASE_HOST=local_database
      - DATABASE_PORT=3306
      - PORT=3000
      - DATABASE_USER=root
      - DATABASE_PASS=testpass
      - DATABASE_NAME=testdb
    ports:
      - '80:3000'
    depends_on:
      - database
How is your volume defined?
If you're just using docker-compose, you should also define the volume there, e.g. like:
volumes:
  db-data:
    driver_opts:
      lifecycle_policy: AFTER_30_DAYS

and then attach it to the service:

services:
  database:
    volumes:
      - db-data:/mysql
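Putting the two fragments together for the file from the question, a minimal sketch could look like this (note that the MySQL image keeps its data under /var/lib/mysql, so that is the path the named volume should be mounted on; db-data is the name used above):

version: '3.3'
services:
  database:
    image: mysql:5.7.25
    environment:
      - MYSQL_ROOT_PASSWORD=testpass
      - MYSQL_DATABASE=testdb
    volumes:
      - db-data:/var/lib/mysql   # named volume instead of the ./mysql_data bind mount
volumes:
  db-data: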
I am trying to deploy an application in Docker running on 64-bit Amazon Linux 2. I am using a pipeline that publishes images to a private repository on Docker Hub. Elastic Beanstalk uses docker-compose to run the containers, but so far I've had no success in accessing the application. I am not using a dockerrun.aws.json file, as v3 does not support any container configuration and, as far as I know, it isn't needed for docker-compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication on the bridge network.
In eb-stdouterr.log I see errors while establishing the connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications that use Beanstalk's environment variables to determine the name of the broker service. In the Configuration/Software/Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use the proper host name.
Yet I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file:    # <-- these
    - .env     # <-- 2 lines
for each service. Only after doing this were the environment properties from AWS Beanstalk passed to the containers.
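Applied to the rest of the compose file, the same two lines are repeated on each service, for example (a sketch; the image references are elided as in the original):

services:
  Some.API:
    image: ...
    env_file:
      - .env
  Another.API:
    image: ...
    env_file:
      - .env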
I'm working on a Django project with a Neo4j DB, using neomodel and django-neomodel.
I'm trying to containerize it using docker-compose.
When I build the images, everything seems fine, but any connection from the web container to the DB over Bolt is refused, although I can access the Neo4j DB from the browser over HTTP, and even from the local machine over Bolt.
This is the error I get:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7688) (reason 111)
I'm using the following configs:
Django==3.1.1
neo4j==4.1.0
neomodel==3.3.0
neobolt==1.7.17
This is my docker-compose file:
version: '3'
services:
  backend:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - neo4j_db
    networks:
      - mynetwork
    links:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"
    expose:
      - 7474
      - 7687
    volumes:
      - ./db/dbms:/data/dbms
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - dbms.connector.bolt.listen_address=:7688
      - dbms.connector.bolt.advertised_address=:7688
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge
And here are the connection configs in the Django settings:
NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL', 'bolt://neo4j:pass#123@127.0.0.1:7688')
Thanks in advance.
To connect from one container to another (inside the same docker-compose project), you should use the container name of the target container instead of localhost (or 127.0.0.1). In your case that is neo4j_db.
When connecting from another container, you should use the internal port, in your case 7687.
In the neo4j service, the bolt listen_address should stay at 7687 instead of 7688 (honestly, I'm not sure why you are changing the default port).
To wrap up, the connection URL should be:
bolt://neo4j:pass@neo4j_db:7687
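In compose terms that amounts to dropping the port overrides and pointing the backend at the service name, roughly like this (a sketch of only the relevant parts; the credentials are the ones from the URL above):

services:
  backend:
    environment:
      - NEO4J_BOLT_URL=bolt://neo4j:pass@neo4j_db:7687   # service name + internal port
    depends_on:
      - neo4j_db
  neo4j_db:
    image: neo4j:3.5.17-enterprise
    ports:
      - "7474:7474"
      - "7688:7687"   # the host can still use 7688; inside the network it stays 7687
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      # no listen_address/advertised_address overrides -- Bolt listens on 7687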
I'm trying to set up the nginx proxy on my Amazon AWS Docker instance together with a pimcore instance. This is my compose file:
version: '3.4'
services:
  nginx-proxy:
    image: codesuki/ecs-nginx-proxy
    ports:
      - "80:80"
  pimcore-jcii:
    image: ****/pimcore5:current
    ports:
      - "8000:80"
    links:
      - "db"
    volumes:
      - efs-storage:/data
  db:
    image: mariadb
    restart: always
    ports:
      - "3306:3306"
volumes:
  efs-storage:
    driver: cloudstor:aws
    driver_opts:
      backing: shared
If I deploy this stack, the nginx proxy container cannot start. The following error message appears:
task: non-zero exit (1)
I got the error message via "docker inspect ".
What am I doing wrong? And where can I gather more information about the state of the container?
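One detail worth checking in the stack above, independent of the proxy error: the stock mariadb image refuses to start unless a root-password option is supplied, so the db service needs something along these lines (a sketch; the value is a placeholder):

db:
  image: mariadb
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=changeme   # placeholder; MYSQL_RANDOM_ROOT_PASSWORD=yes also works
  ports:
    - "3306:3306"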
Context:
I am trying to set up a Selenium grid to run my UI tests on CI. The CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using docker-compose and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me:
java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container, and using a bridge network I have added that container to the same network as the grid.
The docker-compose file looks something like this:
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
unset DISPLAY

This helped me to solve the problem.
This helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
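In this compose setup, the equivalent is to make sure no DISPLAY value reaches the test container, or to force the JVM into headless mode with an environment entry, for example (a sketch; JAVA_TOOL_OPTIONS is a standard JVM variable, not something from the original post):

testservice:
  build:
    context: .
    dockerfile: DockerfileTest
  environment:
    - JAVA_TOOL_OPTIONS=-Djava.awt.headless=true   # the JVM picks this up automatically
  networks:
    - grid_network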
Can you please try this:

- no_proxy=""
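Presumably this is meant as an entry in the test service's environment list, i.e. something like (an assumption, since the answer doesn't say which service it belongs to):

testservice:
  environment:
    - no_proxy=""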