This is my first time using Docker, and I'm trying to add the directives for MariaDB to my Compose file. Once the file is written, running docker compose up gives me the following error: yaml: line 8: did not find expected key. Has anyone run into the same problem? How can I solve it? Thanks so much.
Below is my docker-compose.yaml file
version: '3'
services:
  backend:
    build: ./
    restart: always
    volumes:
      - ./application:/var/www/html
    ports: [80:80]
  mariadb:
    image: 'bitnami/mariadb:latest'
    ports:
      - '3306:3306'
    volumes:
      - './mariadb_data:/bitnami/mariadb'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_DATABASE=db_test
      - MARIADB_USER=test_user
      - MARIADB_PASSWORD=password
      - MARIADB_ROOT_HOST='%'
volumes:
  application:
    driver: local
  mariadb_data:
    driver: local
It happens when we write our own yml file for Docker: every nested entry has to be indented consistently, two spaces under its parent key, as in:
version: '1'
services:
  mariadb-ikg:
    image: bitnami/mariadb:10.3
    ports:
      - 3306:3306
    volumes:
      - D:/docker/bitnami-mariadb/databases:/bitnami/mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=123456
  phpfpm-ikg:
    image: wyveo/nginx-php-fpm:php80
    ports:
      - 80:80
    volumes:
      - D:/docker/wyveo-nginx-php-fpm/wordpress:/usr/share/nginx/html
    depends_on:
      - mariadb-ikg
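A quick way to double-check the result after editing is to run docker compose config (or docker-compose config on the older CLI), which parses the file and either prints the fully resolved configuration or reports the line where the YAML is invalid.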
I was learning Fargate services using the AWS ecs-cli.
I was trying to execute ecs-cli compose --file docker-compose.yml service start.
But the error message said:
ERRO[0000] Unable to open ECS Compose Project error="services.nginx.depends_on.0 must be a string"
FATA[0000] Unable to create and read ECS Compose Project error="services.nginx.depends_on.0 must be a string"
How can I solve this problem? Here is my docker-compose.yml:
#docker-compose.yml
version: '3'
services:
  nginx:
    essential: true
    build: ./nginx
    image: [ECR Image URI]
    restart: always
    ports:
      - "80:80"
    volumes:
      - /srv/docker-server
      - /var/log/nginx
    depends_on:
      - container_name: django
  django:
    essential: true
    build: ./fastcampus_test
    image: [ECR Image URI]
    restart: always
    command: uwsgi --ini uwsgi.ini
    volumes:
      - /srv/docker-server
      - /var/log/uwsgi
    depends_on:
      - django
In YAML, lines that start with - can be thought of as list items, and lines with a colon are key/value maps. In your case depends_on must be a list of plain strings, not a list of nested maps (your error is describing offset 0 of your depends_on list).
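For example, the nginx service should reference the django service by its service name directly:

depends_on:
  - django

rather than nesting a container_name mapping under the list item.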
I have developed a project with Django/Docker/PostgreSQL and use docker-compose to deploy it on a remote Linux server.
I want to deploy 2 apps based on the same code (and the same settings file), preprod and demo, with two distinct PostgreSQL databases (the databases are not dockerized): ecrf_covicompare_preprod and ecrf_covicompare_demo, respectively for preprod and demo.
The apps will be tested by different teams.
I have:
2 docker-compose files, docker-compose.preprod.yml and docker-compose.demo.yml, respectively for preprod and demo
.env files, .env.preprod and .env.preprod.demo, respectively for preprod and demo
The database connection parameters are set in these .env files.
But my 2 apps connect to the same database (ecrf_covicompare_preprod).
If I connect to my 'web demo' container and print the environment variables, I get SQL_DATABASE=ecrf_covicompare_demo, which is correct.
docker-compose.preprod.yml
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_covicompare_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_covicompare_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_covicompare_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_covicompare_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_covicompare_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
.env.preprod
SQL_DATABASE=ecrf_covicompare_preprod
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
docker-compose.demo.yml (simplified)
version: '3.7'
services:
  demo_web:
    container_name: ecrf_covicompare_web_demo
    //
    env_file:
      - ./.env.preprod.demo
    //
  demo_redis:
    container_name: ecrf_covicompare_redis_demo
    image: "redis:alpine"
  demo_celery:
    container_name: ecrf_covicompare_celery_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_celery-beat:
    container_name: ecrf_covicompare_celery-beat_demo
    //
    env_file:
      - ./.env.preprod.demo
    depends_on:
      - demo_web
      - demo_redis
  demo_nginx:
    container_name: ecrf_covicompare_nginx_demo
    //
    ports:
      - 1380:80
    depends_on:
      - demo_web
.env.preprod.demo
SQL_DATABASE=ecrf_covicompare_demo
SQL_USER=user_preprod
DATABASE=postgres
DJANGO_SETTINGS_MODULE=core.settings.preprod
I'm new to all the docker compose stuff, but to me your configuration looks fine. A few ideas I had:
you mention two different PostgreSQL databases. Are those hosted on the same PostgreSQL server or on two different servers? In both .env files you set DATABASE=postgres. If they are running on the same server instance, I could imagine this leading to them using the same database, depending on how this variable is used later on.
are you sure that the env variables are set in time? When you manually check them from inside the container they are set correctly, but are they also set while your containers are booting up? I'm no expert on how docker compose handles these files, but you could try printing the env variables during container initialization from within some script (see the sketch after this list).
Are you completely sure it's not hardcoded somewhere? Maybe try searching all source files for the DB name they both connect to. I have failed with this far too often not to check it.
Hope this helps. It's a bit of a guess, but your configuration looks fine to me otherwise.
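For the second point, a minimal way to check is to dump the relevant variables right before the real start command. This is only a sketch and assumes the demo web service is started with the same Gunicorn command as the preprod one:

demo_web:
  env_file:
    - ./.env.preprod.demo
  # print the SQL_* variables at startup, then launch the app as usual
  command: sh -c "printenv | grep '^SQL_' && gunicorn core.wsgi:application --bind 0.0.0.0:8000"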
I am trying to install the Mayan EDMS image alongside a Django app and a Postgres database using docker-compose, but every time I run docker-compose up it gives an error.
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 8, column 3
expected <block end>, but found '<block mapping start>'
in "./docker-compose.yml", line 29, column 4
Here is my docker-compose.yml; it contains postgres:11.4-alpine, redis:5.0-alpine and mayanedms/mayanedms:3.
version: "3"
networks:
bridge:
driver: bridge
services:
app:
container_name: django
restart: always
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
environment:
- DB_NAME=app
- DB_USER=insights
- DB_HOST=db
- DB_PORT=5432
depends_on:
- db
command: >
sh -c "mkdir -p logs media &&
python manage.py wait_for_db &&
python manage.py runserver 0.0.0.0:8000"
db:
image: postgres:11.4-alpine
container_name: postgres
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=insights
- POSTGRES_DB=app
redis:
command:
- redis-server
- --appendonly
- "no"
- --databases
- "2"
- --maxmemory
- "100mb"
- --maxclients
- "500"
- --maxmemory-policy
- "allkeys-lru"
- --save
- ""
- --tcp-backlog
- "256"
- --requirepass
- "${MAYAN_REDIS_PASSWORD:-mayanredispassword}"
image: redis:5.0-alpine
networks:
- bridge
restart: unless-stopped
volumes:
- redis_data:/data
mayanedms:
image: mayanedms/mayanedms:3
container_name: mayanedms
restart: unless-stopped
ports:
- "80:8000"
depends_on:
- db
- redis
volumes:
- mayanedms_data:/var/lib/mayan
environment: &mayan_env
MAYAN_CELERY_BROKER_URL: redis://:${MAYAN_REDIS_PASSWORD:-mayanredispassword}#redis:6379/0
MAYAN_CELERY_RESULT_BACKEND: redis://:${MAYAN_REDIS_PASSWORD:-mayanredispassword}#redis:6379/1
MAYAN_DATABASES: "{'default':{'ENGINE':'django.db.backends.postgresql','NAME':'${MAYAN_DATABASE_DB:-mayan}','PASSWORD':'${MAYAN_DATABASE_PASSWORD:-mayandbpass}','USER':'${MAYAN_DATABASE_USER:-mayan}','HOST':'postgresql'}}"
MAYAN_DOCKER_WAIT: "db:5432 redis:6379"
networks:
- bridge
background_tasks:
restart: always
container_name: process_tasks
build:
context: .
depends_on:
- app
- db
environment:
- DB_NAME=app
- DB_USER=insights
- DB_HOST=db
- DB_PORT=5432
volumes:
- ./app:/app
command: >
sh -c "python manage.py process_tasks --sleep=3 --log-std --traceback"
volumes:
postgres_data:
redis_data:
mayanedms_data:
Thank you for the help.
The YAML indentation in your docker-compose.yml is wrong. YAML files rely on space indentation to define structure, but the service db is indented with 3 spaces where app uses 2 - when parsing your file, Compose interprets db (3 spaces) as a sub-component of app (2 spaces), as if you were writing:
services:
  app:
    ...
    db:
      ...
Or an equivalent in json:
"services": {
"app": {
"db": {
...
}
}
}
Where what you need is:
services:
  app:
    ...
  db:
    ...
Equivalent in json:
"services": {
"app": {
...
},
"db": {
...
}
}
The same issue applies to all the other service definitions and to volumes: volumes must be a top-level element, but with a leading space it is read as a sub-component of services.
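For reference, a corrected excerpt of your file (content unchanged, just realigned: every service at two spaces and volumes back at the top level) would look like:

services:
  app:
    container_name: django
    ...
  db:
    image: postgres:11.4-alpine
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=insights
      - POSTGRES_DB=app
  redis:
    image: redis:5.0-alpine
    ...
volumes:
  postgres_data:
  redis_data:
  mayanedms_data: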
I am new to Docker. I am having trouble deploying multiple containers at the same time; a race condition occurs. Every time I run docker-compose up --build, elasticsearch or redis starts first, then the database starts and exits with error code 0, as do celery and nginx. I tried using a "sleep" command, but no luck (maybe I missed something). Here is my docker-compose.yml file:
version: "3"
services:
db:
image: postgres:9.6-alpine
container_name: myblogdb
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
- POSTGRES_DB=mydb
volumes:
- myblogdb_data:/var/lib/postgresql/data/
ports:
- "4949:5432"
web:
build: ./app
command: sh -c "gunicorn djangoApp.wsgi:application --bind 0.0.0.0:8000"
volumes:
- ./app:/usr/src/app/
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "8000:8000"
depends_on:
- db
- redis
- es
nginx:
restart: always
build: ./nginx
volumes:
- my_blog_static_volume:/usr/src/app/djangoApp/settings/staticfiles
- my_blog_media_volume:/usr/src/app/mediafiles
ports:
- "1337:80"
depends_on:
- web
redis:
image: "redis:alpine"
es:
image: elasticsearch:5.6.15-alpine
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms256M -Xmx256M"
volumes:
- my_blog_esdata:/usr/share/elasticsearch/data/
ports:
- "9200:9200"
celery:
restart: always
build: ./app
command: sh -c "celery -A djangoApp worker -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
celery-beat:
restart: always
build: ./app
command: sh -c "celery -A djangoApp beat -l info"
volumes:
- ./app:/usr/src/app/
depends_on:
- db
- redis
- web
volumes:
myblogdb_data:
my_blog_static_volume:
my_blog_media_volume:
my_blog_esdata:
Please let me know if I'm missing something here. Thanks
You need to add a script like wait-for-it or wait-for in order to control startup order in Compose; it basically makes a service wait until another service accepts connections before running its own start command.
So if you want Django to wait for PostgreSQL, the command in docker-compose would be:
["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
There is a full explanation in the following answer; it describes the setup for MySQL and Golang, but the same concept applies to your case.
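In the compose file that would look roughly like this; this is only a sketch and assumes the wait-for script has been copied into the image next to the application code:

web:
  build: ./app
  # wait for PostgreSQL on db:5432, then start Gunicorn as before
  command: ["./wait-for", "db:5432", "--", "gunicorn", "djangoApp.wsgi:application", "--bind", "0.0.0.0:8000"]
  depends_on:
    - db
    - redis
    - es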
I am testing my app with docker-compose; the setup includes DynamoDB as one of the containers. The docker-compose file is below:
version: '2'
services:
  appName:
    mem_limit: 1024m
    build:
      dockerfile: dockerfile.test
      context: .
    ports:
      - "8090:8090"
    env_file:
      - env/test.env
    depends_on:
      - redis
      - postgres
      - dynamodb
      - memcached
    entrypoint: "./bin/entrypoint.sh"
  redis:
    image: "redis:alpine"
  postgres:
    image: "postgres:9.6-alpine"
  dynamodb:
    image: "tutum/dynamodb:latest"
    ports:
      - "8000:8000"
    hostname: dynamodb
  memcached:
    image: "memcached:alpine"
On building the code I am getting the following error:
java.lang.IncompatibleClassChangeError: Class com.amazonaws.http.conn.ssl.SdkTLSSocketFactory does not implement the requested interface org.apache.http.conn.scheme.SchemeSocketFactory
    at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:165)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
    at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:837)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
    at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
    at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2000)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1970)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:1329)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:433)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:496)
    at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:400)
The fix required to get it working was changing the hostname used for dynamodb from dynamodb to http://dynamodb in the Docker file.
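As a rough sketch of what that can look like in the compose setup (the variable name DYNAMODB_ENDPOINT is purely illustrative; use whatever key your application actually reads, for example via env/test.env):

appName:
  environment:
    # hypothetical variable the app would read to build its DynamoDB client endpoint
    - DYNAMODB_ENDPOINT=http://dynamodb:8000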