For reasons unknown, I ran into a Docker error when I tried to run docker-compose up on my project this morning.
My web container can't connect to the db host, and nc keeps returning
web_1 | nc: bad address 'db'
Here is the relevant part of my docker-compose definition:
version: '3.2'
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
  mailhog:
    # mailhog declaration
volumes:
  postgres_data:
I suspected the network was broken, and it actually is. This is what I get when I inspect the Docker network for this project:
(docker network inspect my_docker_network)
[
    {
        "Name": "my_docker_network",
        "Id": "f09c148d9f3253d999e276c8b1061314e5d3e1f305f6124666e2e32a8e0d9efd",
        "Created": "2020-11-18T13:30:29.710456682-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {}, // <=== This is empty!
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "my-project"
        }
    }
]
Versions:
Docker: 18.09.1, build 4c52b90
Docker Compose: 1.21.0, build unknown
I was able to fix it by running docker-compose down && docker-compose up, but that could be pretty bad if your down removed all your volumes, and with them, your data... (Note that docker-compose down only removes named volumes when passed -v/--volumes.)
The network inspection now looks fine:
[
    {
        "Name": "my_docker_network",
        "Id": "236c45042b03c3a2922d9a9fabf644048901c66b3c1fd15507aca2c464c1d7ef",
        "Created": "2020-12-04T12:04:40.765889533-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0939787203f2e222f2db380e8d5b36928e95bc7242c58df56b3e6e419efdd280": {
                "Name": "my_docker_db_1",
                "EndpointID": "af206a7e957682d3d9aee2ec0ffae2c51638cbe8821d3b20eb786165a0159c9d",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "ae90bd27539e89d0b26e0768aec765431ee623f45856e13797f3ba0262cca3f2": {
                "Name": "my_docker_web_1",
                "EndpointID": "09b5cefed6c5b49d31497419fd5784dcd887a23875e6c998209615c7ec8863f4",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "f2d3e46ab544b146bdc0aafba9fddb4e6c9d9ffd02c2015627516c7d6ff17567": {
                "Name": "my_docker_mailhog_1",
                "EndpointID": "242a693e6752f05985c377cd7c30f6781f0576bcd5ffede98f77f82efff8c78f",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "my_docker_project"
        }
    }
]
But: does anyone have any idea what happened, and how to prevent this problem from reappearing?
I had the same problem, but with the rabbitmq service in my compose file. At first I solved it by deleting all existing containers and volumes on my machine (but the problem kept coming back now and then); later I updated the rabbitmq image version to latest in docker-compose.yml:
image: rabbitmq:latest
and the problem did not reappear afterwards...
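For what it's worth, the opposite approach can also help: pinning an explicit tag means the image only changes when you bump it deliberately. A minimal sketch (the tag below is illustrative, not from the original post):

```yaml
services:
  rabbitmq:
    # an explicit tag instead of :latest; update it deliberately
    image: rabbitmq:3.8-management
```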
Related
In Docker Compose, the name of a service is used for communication between containers. For example, docker-compose.yml might define
depends_on:
  - database
and that dependency can then be used in a connection string:
"server=database;uid=root;pwd=root;database=database"
In short, the service names defined in docker-compose.yml act as hostnames. I use AWS Elastic Beanstalk to deploy my microservices architecture to the cloud, and when I do a local run via the Dockerrun.aws.json generated by container-transform, this dependency is not available.
My question is: am I doing something wrong?
Is a dependency like the one in Docker Compose available on AWS Elastic Beanstalk?
Here are the relevant parts of my real docker-compose.yml:
version: '3'
services:
  rabbitmq: # login guest:guest
    image: rabbitmq:3-management
    hostname: "rabbitmq"
    labels:
      NAME: "rabbitmq"
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  xms.accounts:
    image: ditrikss/accounts
    build: ./Microservices/Account/Xms
    restart: always
    ports:
      - 6001:80
    depends_on:
      - xdb.accounts
      - rabbitmq
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
  xdb.accounts:
    image: mysql/mysql-server
    restart: always
    environment:
      MYSQL_DATABASE: 'xdb_accounts'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'root'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - '6002:3306'
    volumes:
      - "./Databases/Scripts/xdb_Accounts/Create/1_accounts.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/2_passwords.sql:/docker-entrypoint-initdb.d/2.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/3_channel_features.sql:/docker-entrypoint-initdb.d/3.sql"
      - "./Databases/Scripts/xdb_Accounts/Create/4_streaming_features.sql:/docker-entrypoint-initdb.d/4.sql"
And the corresponding part of the Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "dockerLabels": {
        "NAME": "rabbitmq"
      },
      "essential": true,
      "image": "rabbitmq:3-management",
      "name": "rabbitmq",
      "portMappings": [
        {
          "containerPort": 4369,
          "hostPort": 4369
        },
        {
          "containerPort": 5671,
          "hostPort": 5671
        },
        {
          "containerPort": 5672,
          "hostPort": 5672
        },
        {
          "containerPort": 25672,
          "hostPort": 25672
        },
        {
          "containerPort": 15671,
          "hostPort": 15671
        },
        {
          "containerPort": 15672,
          "hostPort": 15672
        }
      ]
    },
    {
      "environment": [
        {
          "name": "MYSQL_DATABASE",
          "value": "xdb_accounts"
        },
        {
          "name": "MYSQL_USER",
          "value": "root"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "root"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "root"
        }
      ],
      "essential": true,
      "image": "mysql/mysql-server",
      "mountPoints": [
        {
          "containerPath": "/docker-entrypoint-initdb.d/1.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate1_Accounts_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/2.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate2_Passwords_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/3.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate3_Channel_Features_Sql"
        },
        {
          "containerPath": "/docker-entrypoint-initdb.d/4.sql",
          "sourceVolume": "_DatabasesScriptsXdb_AccountsCreate4_Streaming_Features_Sql"
        }
      ],
      "name": "xdb.accounts",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 6002
        }
      ]
    },
    {
      "environment": [
        {
          "name": "ASPNETCORE_ENVIRONMENT",
          "value": "Production"
        }
      ],
      "essential": true,
      "image": "ditrikss/accounts",
      "name": "xms.accounts",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 6001
        }
      ]
    }
  ]
}
Thanks in advance!
According to the Dockerrun.aws.json v2 reference, you should add a links section in your Dockerrun.aws.json file:
Definition of links:
List of containers to link to. Linked containers can discover
each other and communicate securely.
Example usage:
{
  "name": "nginx-proxy",
  "image": "nginx",
  "essential": true,
  "memory": 128,
  "portMappings": [
    {
      "hostPort": 80,
      "containerPort": 80
    }
  ],
  "links": [
    "php-app"
  ],
  "mountPoints": [
    {
      "sourceVolume": "php-app",
      "containerPath": "/var/www/html",
      "readOnly": true
    }
  ]
}
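Applied to the question's file, the xms.accounts container definition would pick up a links entry mirroring its depends_on. A sketch (the memory value is an illustrative addition, since the v2 format also requires one):

```json
{
  "essential": true,
  "image": "ditrikss/accounts",
  "name": "xms.accounts",
  "memory": 512,
  "links": [
    "xdb.accounts",
    "rabbitmq"
  ]
}
```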
I'm using eb local run to locally test out a multicontainer application that I plan on deploying to Elastic Beanstalk. Before deciding to move to EB, I was using docker-compose to spin up the containers. Here's what my docker-compose.yml looked like.
docker-compose.yml:
version: '3.7'
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    image: <ECR-Repo>:web
    command: gunicorn oddstracker_admin.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    env_file:
      - .env.staging
  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    image: <ECR-Repo>:nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy
volumes:
  certs:
  html:
  vhost:
An important aspect (at least I think it's important) is that the nginx-proxy service depends on web, and the nginx-proxy-letsencrypt service depends on nginx-proxy. This ensured that each container would not spin up until the previous one was ready.
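One caveat worth noting: depends_on only orders container startup; it does not wait for the dependency to actually be ready. Newer Compose versions (the Compose Specification) can gate on a health check. A sketch, assuming nginx-proxy defines one (the healthcheck command below is illustrative, not from the original file):

```yaml
services:
  nginx-proxy:
    # hypothetical readiness probe for the proxy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
  nginx-proxy-letsencrypt:
    depends_on:
      nginx-proxy:
        condition: service_healthy
```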
When I moved to EB, I was forced to write a Dockerrun.aws.json file in order to run eb local run. Here it is.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "command": [
        "gunicorn",
        "oddstracker_admin.wsgi:application",
        "--bind",
        "0.0.0.0:8000"
      ],
      "essential": true,
      "image": "<ECR-Repo>:web",
      "name": "web"
    },
    {
      "essential": true,
      "image": "<ECR-Repo>:nginx-proxy",
      "mountPoints": [
        {
          "containerPath": "/etc/nginx/certs",
          "sourceVolume": "Certs"
        },
        {
          "containerPath": "/usr/share/nginx/html",
          "sourceVolume": "Html"
        },
        {
          "containerPath": "/etc/nginx/vhost.d",
          "sourceVolume": "Vhost"
        },
        {
          "containerPath": "/tmp/docker.sock",
          "sourceVolume": "VarRunDocker_Sock"
        }
      ],
      "name": "nginx-proxy",
      "portMappings": [
        {
          "containerPort": 443,
          "hostPort": 443
        },
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "links": [
        "web"
      ]
    },
    {
      "essential": true,
      "image": "jrcs/letsencrypt-nginx-proxy-companion",
      "mountPoints": [
        {
          "containerPath": "/var/run/docker.sock",
          "sourceVolume": "VarRunDocker_Sock"
        },
        {
          "containerPath": "/etc/nginx/certs",
          "sourceVolume": "Certs"
        },
        {
          "containerPath": "/usr/share/nginx/html",
          "sourceVolume": "Html"
        },
        {
          "containerPath": "/etc/nginx/vhost.d",
          "sourceVolume": "Vhost"
        }
      ],
      "name": "nginx-proxy-letsencrypt",
      "links": [
        "nginx-proxy"
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "certs"
      },
      "name": "Certs"
    },
    {
      "host": {
        "sourcePath": "html"
      },
      "name": "Html"
    },
    {
      "host": {
        "sourcePath": "vhost"
      },
      "name": "Vhost"
    },
    {
      "host": {
        "sourcePath": "/var/run/docker.sock"
      },
      "name": "VarRunDocker_Sock"
    }
  ]
}
When it autogenerates a docker-compose.yml to run the application, there is no way to provide a depends_on flag for the various services. Specifically, when I run eb local run, nginx-proxy-letsencrypt starts before nginx-proxy has finished coming up, which fails the entire process.
So my question: Has anyone found a way to solve this issue, possibly with an additional set of commands within their Dockerrun.aws.json?
When I moved to EB, I was forced to write a Dockerrun.aws.json ?
I don't see a reason not to use docker-compose.yml. EB supports Docker Compose:
Docker Compose features. This platform will allow you to leverage the features provided by the Docker Compose tool to define and run multiple containers. You can include the docker-compose.yml file to deploy to Elastic Beanstalk.
So if you have a working docker-compose file, my recommendation would be to try to use it.
I'm trying to deploy a simple Node and MySQL server/database with Elastic Beanstalk. I've successfully containerized the app, so I know my docker-compose.yml file is working. Now I'm trying to convert that file to a Dockerrun.aws.json file and deploy using Elastic Beanstalk. I get the same error every time I use eb create:
ERROR No ecs task definition (or empty definition file) found in environment
So first I tried writing the Dockerrun.aws.json file manually, but to no avail. Then I found the container-transform package, which supposedly converts these files between formats. I had a coworker run it for me, since on my Mac I have issues with certain packages defaulting to Python 2.7 instead of 3. Anyway, I then took the file container-transform gave me and modified a few values to match other examples of Dockerrun.aws.json files I've seen, such as memory allotment and the correct images.
This is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "${NODE_ENV}"
        },
        {
          "name": "MYSQL_URI",
          "value": "${mysqlcontainer}"
        }
      ],
      "essential": true,
      "image": "node:latest",
      "links": [
        "mysqlcontainer"
      ],
      "memory": 512,
      "name": "fec-api",
      "mountPoints": [
        {
          "containerPath": "/var/lib/mysql",
          "sourceVolume": "My-Db"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ]
    },
    {
      "environment": [
        {
          "name": "MYSQL_DATABASE",
          "value": "fec2"
        },
        {
          "name": "MYSQL_USER",
          "value": "fec"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "fecpassword"
        },
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "rootpassword"
        }
      ],
      "essential": true,
      "image": "mysql:5.7",
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/var/lib/mysql",
          "sourceVolume": "My-Db"
        }
      ],
      "name": "mysqlcontainer",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 4306
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "my-db"
      },
      "name": "My-Db"
    }
  ]
}
This is my docker-compose.yml file, which works locally:
version: '3'
services:
  mysqlcontainer:
    container_name: mysqlcontainer
    build: ./db/
    # image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'fec2'
      MYSQL_USER: 'fec'
      MYSQL_PASSWORD: 'fecpassword'
      MYSQL_ROOT_PASSWORD: 'rootpassword'
    ports:
      - '4306:3306'
    # expose:
    #   - '3306'
    # Where our data will be persisted
    volumes:
      - my-db:/var/lib/mysql
      # - /usr/src/init:/docker-entrypoint-initdb.d/
  fec-api:
    container_name: fec-api
    build: ./server/
    ports:
      - '3000:3000' # expose ports - HOST:CONTAINER
    environment:
      - NODE_ENV=${NODE_ENV}
      - MYSQL_URI=${mysqlcontainer}
    depends_on:
      - "mysqlcontainer"
    links:
      - "mysqlcontainer"
volumes:
  my-db:
I expected to get a green checkmark on the Elastic Beanstalk dashboard after running eb init and eb create. When I run eb init, it does recognize that I have Docker configurations, so it must see the Dockerrun.aws.json file in the root directory; but when I run eb create, I get the error. Any idea how to diagnose this issue, or how to properly compose the Dockerrun.aws.json equivalent of the docker-compose.yml file?
I'm deploying an Angular - Django app on a Digital Ocean droplet. It's composed of 3 Docker containers:
cards_front: the Angular front-end
cards_api: the django rest framework back-end
cards_db: the postgres database
They're all on the same network:
[
    {
        "Name": "ivan_cards_api_network",
        "Id": "ddbd3524e02a7c918f6e09851731e015fdb7e8647358c5ed0c4cd949cf651fd9",
        "Created": "2018-10-09T23:44:33.293036243Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0d3144b27eaf6d7320357b6d703566e489f672b09b61dba0caf311c6e1c4711c": {
                "Name": "cards_front",
                "EndpointID": "47b1f8f42c4d18afeafeb9da502fd0197e726f29bd6d3d3c2960b44737bd579a",
                "MacAddress": "02:42:ac:16:00:04",
                "IPv4Address": "172.22.0.4/16",
                "IPv6Address": ""
            },
            "3e9233f4bfc023632aaf13a146d1a50f75b4944503d9f226cf81140e92ccb532": {
                "Name": "cards_api",
                "EndpointID": "34d4780dc6f907a8cb9621223d6effe0a0aac1662d5272ae4a5104ba7f3808c4",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "e5e208a20523c2d41433b850dc64db175de8ee7d0d156e2917c12fd8ebdf97ab": {
                "Name": "cards_db",
                "EndpointID": "8a8f44bbcdf2f95e716e2763e33bed31e1d2bdbfae7f6d78c8dee33de426a7ef",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "cards_api_network",
            "com.docker.compose.project": "ivan",
            "com.docker.compose.version": "1.22.0"
        }
    }
]
ALLOWED_HOSTS in the Django settings is set to ['*']
When I test the Angular front-end in the browser, I get this in Chrome's developer tools:
GET http://localhost:8000/themes net::ERR_CONNECTION_RESET
So the Angular container is failing to communicate with the Django container.
But if I do a curl localhost:8000/themes from inside the DO droplet, I get a response.
I know there's something missing in the network configuration, but I can't figure out what it is.
Thank you
EDIT:
If I do a curl from inside the Angular container to the Django container, I get a response (an empty array):
root#90cea47dd13d:/# curl 172.22.0.3:8000/themes
[]
I am following the instructions at https://docs.docker.com/compose/django/ to get a basic dockerized Django app going. I can run it locally without a problem, but I am having trouble deploying it to AWS using Elastic Beanstalk. After reading here, I figured that I need to translate docker-compose.yml into Dockerrun.aws.json for it to work.
The original docker-compose.yml is
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and here is what I translated so far
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "db"
    },
    {
      "name": "web"
    }
  ],
  "containerDefinitions": [
    {
      "name": "db",
      "image": "postgres",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "db",
          "containerPath": "/var/app/current/db"
        }
      ]
    },
    {
      "name": "web",
      "image": "web",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/app/current/web"
        }
      ],
      "portMappings": [
        {
          "hostPort": 8000,
          "containerPort": 8000
        }
      ],
      "links": [
        "db"
      ],
      "command": "python manage.py runserver 0.0.0.0:8000"
    }
  ]
}
but it's not working. What am I doing wrong?
I was struggling to get the ins and outs of the Dockerrun format. Check out Container Transform: "Transforms docker-compose, ECS, and Marathon configurations"... it's a life-saver. Here is what it outputs for your example:
{
  "containerDefinitions": [
    {
      "essential": true,
      "image": "postgres",
      "name": "db"
    },
    {
      "command": [
        "python",
        "manage.py",
        "runserver",
        "0.0.0.0:8000"
      ],
      "essential": true,
      "mountPoints": [
        {
          "containerPath": "/code",
          "sourceVolume": "_"
        }
      ],
      "name": "web",
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "."
      },
      "name": "_"
    }
  ]
}
Container web is missing required parameter "image".
Container web is missing required parameter "memory".
Container db is missing required parameter "memory".
That is, in this new format you must tell it how much memory to allot to each container. Also, you need to provide an image: there is no option to build. As mentioned in the comments, you want to build and push to Docker Hub or ECR, then give it that location, e.g. [org name]/[repo]:latest on Docker Hub, or the URL for ECR. But container-transform does the mountPoints and volumes for you; it's amazing.
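Concretely, the web container definition would need those fields filled in before EB accepts it. A sketch (the image location and memory figure below are illustrative, not from the original files):

```json
{
  "essential": true,
  "image": "myorg/myrepo:web",
  "memory": 256,
  "name": "web",
  "portMappings": [
    {
      "containerPort": 8000,
      "hostPort": 8000
    }
  ]
}
```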
You have a few issues.
1) 'web' doesn't appear to be an 'image'; you define it as 'build .' in your docker-compose.yml. Remember, Dockerrun.aws.json will have to pull the image from somewhere (easiest is to use ECS's repositories).
2) 'command' is an array. So you'd have:
"command": ["python", "manage.py", "runserver", "0.0.0.0:8000"]
3) Your mountPoints are correct, but the volume definition at the top is wrong:
{
  "name": "web",
  "host": {
    "sourcePath": "/var/app/current/db"
  }
}
I'm not 100% certain, but that path works for me.
If you have the Dockerrun.aws.json file and, next to it, a directory called /db, then that will be the mount location.
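Putting that together, a volume definition and its mount point would pair up roughly like this (a sketch; the paths follow the /var/app/current convention from this answer and are illustrative):

```json
{
  "volumes": [
    {
      "name": "web",
      "host": {
        "sourcePath": "/var/app/current/web"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/app/current/web"
        }
      ]
    }
  ]
}
```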