I'm having a hard time deploying my app, built with Django, Postgres, Django Q, Redis and Elasticsearch, to AWS Elastic Beanstalk using a docker-compose.yml.
I used the EB CLI (eb init, eb create), and it reports that the environment launched successfully, but I still have the following problems.
On the EC2 instance, the db (postgres), django-q and es containers defined in the docker-compose file below are never created; only the django, redis and nginx containers are present on the instance.
The environment variables I specified in the docker-compose.yml file aren't being passed to the django container on EC2, so I can't run Django there.
I'm pretty lost and not sure where to even start fixing this. Any insight will be very much appreciated.
version: '3'
services:
django:
build:
context: .
dockerfile: docker/Dockerfile
command: gunicorn --bind 0.0.0.0:5000 etherscan_project.wsgi:application
env_file: .env
volumes:
- $PWD:/srv/app/:delegated
depends_on:
- redis
- db
- es
django-q:
build:
context: .
dockerfile: docker/Dockerfile
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py qcluster"
env_file: .env
volumes:
- $PWD:/srv/app/:delegated
depends_on:
- redis
- db
- django
- es
db:
image: postgres:latest
expose:
- 5432
env_file: .env
volumes:
- ./docker/volumes/postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $POSTGRES_DB"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:latest
expose:
- 6379
ports:
- 6379:6379
volumes:
- ./docker/volumes/redis:/data
nginx:
container_name: nginx
image: nginx:1.13
ports:
- 80:80
depends_on:
- db
- django
- redis
volumes:
- ./docker/nginx:/etc/nginx/conf.d
- $PWD:/srv/app/:delegated
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
ports:
- 9200:9200
- 9300:9300
expose:
- 9200
- 9300
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- ./docker/volumes/es:/usr/share/elasticsearch/data
volumes:
app-files:
driver_opts:
type: nfs
device: $PWD
o: bind
Can you confirm that your environment variables are actually being picked up? A common mistake with EB and docker-compose is assuming that your .env file works the same way on EB as it does locally with docker-compose, when it does not. I have made that mistake before. Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables
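One quick way to check (a sketch only: the variable names below are placeholders, and whether they get values depends on what you set as environment properties on the EB environment versus what only exists in your local .env) is to list the variables explicitly under environment: using compose substitution, then docker exec into the django container on the instance and run env to see what actually arrived:
services:
  django:
    environment:
      # placeholder names; locally these resolve from your shell or .env,
      # on EB they should resolve from the environment properties you configured
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
If a variable shows up empty inside the container, it was never provided on the EB side.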
Related
I'm trying to import a PostgreSQL dump into a Docker container, but it doesn't work.
Dockerfile:
FROM postgres
COPY postgres.sql /docker-entrypoint-initdb.d/
docker-compose.yml:
version: "3.9"
services:
db:
build: ./DB
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=gamenews
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=321678
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=gamenews
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=321678
depends_on:
- db
The project structure and the docker compose up logs were attached as screenshots (not reproduced here).
I would suggest that you use a proper Postgres image:
postgres:
image: postgres:13
volumes:
- '.:/app:rw'
- 'postgres:/var/lib/postgresql/data'
Here's a list of all the tags you can use: https://hub.docker.com/_/postgres
Just spin that up; the volume maps the data to your hard drive, so it isn't ephemeral inside the container. Then you can run pg_restore on your dump file.
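Alternatively, if you want to keep the /docker-entrypoint-initdb.d approach from your Dockerfile, here is a minimal sketch that mounts the dump straight into the official image instead of building a custom one (it assumes the dump lives at ./DB/postgres.sql next to your Dockerfile, and reuses the credentials from your question). Note that the init scripts only run when the data directory is empty, so remove ./data/db first if the database was already initialised:
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=gamenews
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=321678
    volumes:
      # the official image runs any *.sql in this directory on first init
      - ./DB/postgres.sql:/docker-entrypoint-initdb.d/postgres.sql:ro
      - ./data/db:/var/lib/postgresql/data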
I am completely new to Docker. I am currently using an open-source tool that ships as a Docker application, and I need to make some changes to it for my requirements and have those changes reflected when it runs. After a lot of searching I found that Docker volumes can help with this, but I haven't been able to follow the articles on the web or the documentation. Any suggestions will be appreciated.
docker-compose.yml:
version: "3.3"
services:
cvat_db:
container_name: cvat_db
image: postgres:10-alpine
networks:
default:
aliases:
- db
restart: always
environment:
POSTGRES_USER: root
POSTGRES_DB: cvat
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- cvat_db:/var/lib/postgresql/data
cvat_redis:
container_name: cvat_redis
image: redis:4.0-alpine
networks:
default:
aliases:
- redis
restart: always
cvat:
container_name: cvat
image: cvat/server
restart: always
depends_on:
- cvat_redis
- cvat_db
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy: nuclio,${no_proxy}
socks_proxy:
USER: "django"
DJANGO_CONFIGURATION: "production"
TZ: "Etc/UTC"
CLAM_AV: "no"
environment:
DJANGO_MODWSGI_EXTRA_ARGS: ""
ALLOWED_HOSTS: '*'
CVAT_REDIS_HOST: "cvat_redis"
CVAT_POSTGRES_HOST: "cvat_db"
volumes:
- cvat_data:/home/django/data
- cvat_keys:/home/django/keys
- cvat_logs:/home/django/logs
- cvat_models:/home/django/models
cvat_ui:
container_name: cvat_ui
image: cvat/ui
restart: always
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
dockerfile: Dockerfile.ui
networks:
default:
aliases:
- ui
depends_on:
- cvat
cvat_proxy:
container_name: cvat_proxy
image: nginx:stable-alpine
restart: always
depends_on:
- cvat
- cvat_ui
environment:
CVAT_HOST: localhost
ports:
- "8080:80"
volumes:
- ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.28.0.0/24
volumes:
cvat_db:
cvat_data:
cvat_keys:
cvat_logs:
cvat_models:
Docker volumes are mostly used as a way to keep data outside of your container. If you mount a volume and store data in it, the data will not be erased when the container is destroyed. To mount a volume, you add -v <directory on your machine>:<directory in your container> to your docker run command. That may fulfil your requirements, but it most likely won't be enough.
If your assignment requires you to change, for instance, the behaviour of the application, then you have to rebuild the Docker image and use the rebuilt image in your docker-compose.yml.
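In docker-compose terms, that -v flag becomes a volumes: entry on the service. As a rough sketch only (./my-changes and /home/django/overrides are made-up paths, not real CVAT locations; check where the code you want to change actually lives inside the image first):
services:
  cvat:
    volumes:
      # hypothetical host and container paths, shown only to illustrate the mapping
      - ./my-changes:/home/django/overrides:rw
For behaviour changes to the server itself, the more reliable route is to edit the source, run docker-compose build cvat, and then docker-compose up -d so the cvat service is recreated from your rebuilt image.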
I don't know what I am missing, but Celery is not connecting to Redis when I run docker-compose up --build. The error is:
error: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Here is my file docker-compose.yml
version: '3'
services:
web:
build: .
image: resolution
depends_on:
- db
- redis
- celery
command: bash -c "python3 /code/manage.py migrate && python3 /code/manage.py initialsetup && python3 /code/manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8000:8000"
links:
- db:db
- redis:redis
- celery:celery
restart: always
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- PGHOST=trust
- PGPORT=5432
db:
image: postgres:latest
environment:
POSTGRES_DB: 'postgres'
POSTGRES_PASSWORD: 'postgres'
POSTGRES_USER: 'postgres'
POSTGRES_HOST: 'trust'
redis:
image: "redis:alpine"
ports:
- "6379:6379"
restart: on-failure
celery:
image: resolution
command: celery -A mayan worker -l info
environment:
- DJANGO_SETTINGS_MODULE=mayan.settings.production
volumes:
- .:/code
depends_on:
- db
- redis
links:
- redis:redis
restart: on-failure
Celery and Redis are running in different containers.
According to the error message you shared, your Celery worker is most likely trying to reach Redis on localhost, which is not where Redis is running.
Search for the Celery configuration that contains the CELERY_BROKER_URL and CELERY_RESULT_BACKEND values. Most likely they look like this:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
They should look like this, pointing to the redis service name that you defined in your compose file:
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
If you don't have such a config, search directly for the place where the Celery instance is initialized and make sure it looks like this:
app = Celery('server', broker='redis://redis:6379/0')
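Another option (a sketch; CELERY_BROKER_URL and CELERY_RESULT_BACKEND are assumed variable names here, and your settings module would have to read them, e.g. with os.environ.get, for this to take effect) is to inject the broker URL from the compose file, so the same settings work both inside and outside Docker:
services:
  celery:
    image: resolution
    command: celery -A mayan worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=mayan.settings.production
      # assumed variable names; point at the redis service name, not localhost
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0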
I'm following this guide and currently trying to run my compose app using docker ecs compose up but I'm getting this error:
% docker ecs compose up
service "feature-test-web" refers to undefined volume : invalid compose project
Here's my docker-compose.yml:
version: '3.7'
x-web:
&web
build: ./web
volumes:
- ./web:/app
- /app/node_modules
x-api:
&api
build: ./api
volumes:
- ./api:/app
env_file:
- ./api/.env
depends_on:
- postgres
- redis
links:
- mailcatcher
services:
web:
<< : *web
environment:
- API_HOST=http://localhost:3000
ports:
- "1234:1234"
depends_on:
- api
api:
<< : *api
ports:
- "3000:3000"
stdin_open: true
tty: true
postgres:
image: postgres:11.2-alpine
volumes:
- ./pgdata:/var/lib/postgresql/data
environment:
- POSTGRES_USER=portal
- POSTGRES_PASS=portal
ports:
- 8000:5432
restart: on-failure
healthcheck:
test: "exit 0"
redis:
image: redis:5.0.4-alpine
ports:
- '6379:6379'
sidekiq:
build: ./api
env_file:
- ./api/.env
depends_on:
- postgres
- redis
mailcatcher:
image: schickling/mailcatcher
ports:
- '1080:1080'
feature-test-api:
<< : *api
depends_on:
- selenium
- feature-test-web
feature-test-web:
<< : *web
environment:
- API_HOST=http://feature-test-api:3210
selenium:
image: selenium/standalone-chrome-debug:3.141.59-neon
volumes:
- /dev/shm:/dev/shm
ports:
- 5901:5900
What am I missing? Running docker compose up by itself works, and I can go to localhost:1234 and see the app running. I'm trying to deploy this to AWS, but it's been very difficult. If I'm doing this wrong, any pointers to the right way would be much appreciated as well.
As mentioned in the comments section, the volume mounts won't work on ECS, because the cluster won't have a local copy of your code.
So as a first step, remove the entire volumes section.
Second, you'll need to build a Docker image out of your code, push it to a Docker registry of your liking, and then reference it in your docker-compose as follows:
x-api:
&api
image: 12345.abcd.your-region.amazonaws.com/your-docker-repository
env_file:
- ./api/.env
depends_on:
- postgres
- redis
My answer here could give you more insight into how I deploy to ECS.
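The web side would change the same way. As a sketch (the registry URL and tag are placeholders for wherever you push the image built from ./web):
x-web:
  &web
  image: 12345.abcd.your-region.amazonaws.com/your-web-repository:latest
Services such as web and feature-test-web keep extending the anchor with << : *web, but they now pull a prebuilt image instead of building and mounting local code.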
I'm running a RESTful Django project on port 8000 and a React project on port 3000.
For development I've had all my frontend URLs written as
href='localhost:8000/api/name' or href='localhost:8000/api/address'.
Now that I'm going into production, I want my hrefs to be href='mysite.com/api/name' or href='mysite.com/api/address', and I can't figure out how to do this. How do I access my RESTful data, which lives in another container?
I found this article, but I don't think it's right for production.
docker-compose.yml
version: "3.2"
services:
backend:
build: ./backend
volumes:
- ./backend:/app/backend
ports:
- "8000:8000"
stdin_open: true
tty: true
command: python3 manage.py runserver 0.0.0.0:8000
depends_on:
- db
- cache
links:
- db
frontend:
build: ./frontend
volumes:
- ./frontend:/app
#One-way volume to use node_modules from inside image
- /app/node_modules
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
depends_on:
- backend
tty: true
command: npm start
db:
image: mysql:latest
command: --default-authentication-plugin=mysql_native_password
volumes:
- "./mysql:/var/lib/mysql"
- "./.data/conf:/etc/mysql/conf.d"
ports:
- "3306:3306"
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: temp
MYSQL_USER: root
MYSQL_PASSWORD: root
volumes:
mysql: {}
You can pass something like an API_URL environment variable ("mysite.com/api/" for production, "localhost:8000/api/" for development) to the React container and use it to build the URLs.
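A minimal sketch of that in the compose file (REACT_APP_API_URL is an assumed name; if this is Create React App, only variables prefixed with REACT_APP_ are exposed to the frontend code, and they are baked in at build time, so a production build would need the value supplied as a build argument rather than a runtime variable):
services:
  frontend:
    build: ./frontend
    environment:
      # dev value; a production build would use https://mysite.com/api/ instead
      - REACT_APP_API_URL=http://localhost:8000/api/
In production you can also serve the frontend and the API behind the same domain with a reverse proxy, so the frontend only needs relative /api/... paths and no environment-specific URL at all.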