Every time I restart my EC2 server I have to run:
sudo systemctl start docker and then docker-compose up -d to launch all my containers.
Is there a way to run these two commands automatically when the instance starts?
I have read this answer, and ideally I would like to do what it suggests:
Create a systemd service and enable it. All the enabled systemd
services will be started on power-up.
Do you know how to create such a systemd service?
[EDIT 1]: Following Chris Williams' comment, here is what I have done:
Thanks Chris, so I created a docker_boot.service with the following content:
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=simple
Restart=always
RestartSec=1
User=ec2-user
ExecStart=/usr/bin/docker-compose -f docker-compose.yml up
[Install]
WantedBy=multi-user.target
I created it in the /etc/systemd/system folder.
I then ran:
sudo systemctl enable docker
sudo systemctl enable docker_boot
When I turn on the server, the only containers running are certbot/certbot and telethonkids/shinyproxy.
Please find below the content of my docker-compose.yml file.
Do you see what is missing so that all images are up and running?
version: "3.5"
services:
  rstudio:
    environment:
      - USER=username
      - PASSWORD=password
    image: "rocker/tidyverse:latest"
    build:
      context: ./Docker_RStudio
      dockerfile: Dockerfile
    volumes:
      - /home/ec2-user/R_and_Jupyter_scripts:/home/maxence/R_and_Jupyter_scripts
    working_dir: /home/ec2-user/R_and_Jupyter_scripts
    container_name: rstudio
    ports:
      - 8787:8787
  jupyter:
    image: 'jupyter/datascience-notebook:latest'
    ports:
      - 8888:8888
    volumes:
      - /home/ec2-user/R_and_Jupyter_scripts:/home/joyvan/R_and_Jupyter_scripts
    working_dir: /home/joyvan/R_and_Jupyter_scripts
    container_name: jupyter
  shiny:
    image: "rocker/shiny:latest"
    build:
      context: ./Docker_Shiny
      dockerfile: Dockerfile
    container_name: shiny
    ports:
      - 3838:3838
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: on-failure
    networks:
      - net
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    ports:
      - 80:80
      - 443:443
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    depends_on:
      - shinyproxy
  certbot:
    image: certbot/certbot
    container_name: certbot
    restart: on-failure
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  shinyproxy:
    image: telethonkids/shinyproxy
    container_name: shinyproxy
    restart: on-failure
    networks:
      - net
    volumes:
      - ./application.yml:/opt/shinyproxy/application.yml
      - /var/run/docker.sock:/var/run/docker.sock
    expose:
      - 8080
  cron:
    build:
      context: ./cron
      dockerfile: Dockerfile
    container_name: cron
    volumes:
      - ./Docker_Shiny/app:/home
    networks:
      - net
networks:
  net:
    name: net
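For reference, when only some of the services come up like this, the output of the compose run captured by systemd can be inspected (a sketch, assuming the unit is named docker_boot as above):
# Show the unit's state and the most recent log lines
sudo systemctl status docker_boot
# Show everything the unit logged since the current boot
sudo journalctl -u docker_boot -b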
Using Amazon Linux 2, I tried to replicate the issue. Obviously, I don't have all the dependencies to run your exact docker-compose.yml, so I used the docker-compose.yml from here for my verification. The file sets up WordPress with MySQL.
The steps I took were the following (executed as ec2-user in the home folder):
1. Install docker
sudo yum update -y
sudo yum install -y docker
sudo systemctl enable docker
sudo systemctl start docker
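Optionally, you can confirm the daemon is enabled and running before moving on:
# Both are standard systemctl queries
sudo systemctl is-enabled docker   # should print "enabled"
sudo systemctl is-active docker    # should print "active"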
2. Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
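A quick sanity check that the binary is on the PATH and executable:
docker-compose --version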
3. Create docker-compose.yml
mkdir myapp
Create file ./myapp/docker-compose.yml:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
4. Create docker_boot.service
The file is different from yours, as there were a few potential issues in your file:
it does not use absolute paths
ec2-user may not have permission to run docker
Create file ./myapp/docker_boot.service:
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/ec2-user/myapp
ExecStart=/usr/bin/docker-compose -f /home/ec2-user/myapp/docker-compose.yml up -d --remove-orphans
[Install]
WantedBy=multi-user.target
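If you also want the stack to shut down cleanly when the service stops (for example at shutdown), an ExecStop line can be added to the [Service] section; a minimal sketch using the same paths as above:
# Runs on "systemctl stop docker_boot" and at shutdown
ExecStop=/usr/bin/docker-compose -f /home/ec2-user/myapp/docker-compose.yml down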
5. Copy docker_boot.service to systemd
sudo cp -v ./myapp/docker_boot.service /etc/systemd/system
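After copying (or later editing) a unit file, make systemd reload its configuration so the new unit is picked up:
sudo systemctl daemon-reload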
6. Enable and start docker_boot.service
sudo systemctl enable docker_boot.service
sudo systemctl start docker_boot.service
Note: the first start may take some time, as it will pull all the required Docker images. Alternatively, run docker-compose manually once beforehand to avoid this.
7. Check status of the docker_boot.service
sudo systemctl status docker_boot.service
8. Check if the wordpress is up
curl -L localhost:8000
9. Reboot
Check if the docker_boot.service is running after the instance reboots by logging in to the instance and using sudo systemctl status docker_boot.service and/or curl -L localhost:8000.
To have the Docker service itself start at boot, run sudo systemctl enable docker.
For it to then run your docker-compose up -d command, you need to create a new service for that specific action and enable it, with contents similar to the below (note that systemd expects an absolute path in ExecStart; the compose file path below is a placeholder).
[Unit]
After=docker.service
Description=Docker compose
[Service]
ExecStart=/usr/bin/docker-compose -f /path/to/docker-compose.yml up -d
[Install]
WantedBy=multi-user.target
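After saving the unit under /etc/systemd/system, reload systemd and enable the new service; the file name below is a hypothetical example:
# "docker-compose-app.service" is just an illustrative name for the unit above
sudo systemctl daemon-reload
sudo systemctl enable docker-compose-app.service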
More information for this is available in this post.
Related
I'm new to Docker.
I have a docker-compose.yml file like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm deploying this on a server with a sh script running these commands :
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
However, supervisorctl doesn't run correctly, even though the console tells me "Successfully built dc10bd26b175" after the Docker build.
When I run docker-compose ps or docker ps -a, I don't see any containers.
docker-compose ps asks me for a docker-compose.yml file, and if I do docker-compose ps -f path_to/docker-compose.yml the console shows me the help text:
List containers.
Usage: ps [options] [SERVICE...]
Options:
    -q, --quiet          Only display IDs
    --services           Display services
    --filter KEY=VAL     Filter services by a property
    -a, --all            Show all stopped containers (including those created by the run command)
How come I don't see my containers?
It seems your containers are not started.
With your line sudo docker-compose -f /app/docker-compose.yml build you are only building your containers, as the console message tells you.
I do not know exactly what sudo supervisorctl restart react-wagtail-project does, but to me it does not look like a command to START your newly built containers.
Try to explicitly start your containers by adding
./path_to_compose/docker-compose up or
./path_to_compose/docker-compose up -d to your script.
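For example, the deploy script above could end with an explicit up (a sketch reusing the path already present in the script):
sudo docker-compose -f /app/docker-compose.yml build
# Actually start the freshly built containers in the background
sudo docker-compose -f /app/docker-compose.yml up -d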
I'm working on a Django project that is dockerized. I've deployed the application to an Amazon EC2 instance, which currently serves it over HTTP. I want to switch to HTTPS, so I created a CloudFront distribution that redirects to my EC2 instance, but unfortunately I'm getting the following error.
error:
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
Request ID: Pa0WApol6lU6Ja5uBuqKVPVTJFBpkcnJQgtXMYzQP6SPBhV4CtMOVw==
docker-compose.yml
version: "3.8"
services:
  db:
    container_name: db
    image: "postgres"
    restart: always
    volumes:
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres-data:/var/lib/postgresql/data/
    env_file:
      - prod.env
  app:
    container_name: app
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    depends_on:
      - db
    env_file:
      - prod.env
  proxy:
    container_name: proxy
    build:
      context: ./proxy
    restart: always
    depends_on:
      - app
    ports:
      - 80:8000
    volumes:
      - static-data:/vol/static
volumes:
  postgres-data:
  static-data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# installing nano and cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install -y nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# install dependencies and manage assets
RUN pip install -r requirements.txt && \
mkdir -p /vol/web/static && \
mkdir -p /vol/web/media
# files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# start cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]
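Since the error says CloudFront cannot reach the origin, a quick first check (not part of the original post) is whether the EC2 instance itself answers over plain HTTP; the hostname below is a placeholder:
# Expect an HTTP status line if the proxy container is reachable on port 80
curl -I http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/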
I am trying to run celery in a separate docker container alongside a django/redis docker setup.
When I run docker-compose up -d --build, my logs via docker-compose logs --tail=0 --follow show the celery_1 container spamming the console repeatedly with
Usage: nc [OPTIONS] HOST PORT  - connect
       nc [OPTIONS] -l -p PORT [HOST] [PORT]  - listen
    -e PROG   Run PROG after connect (must be last)
    -l        Listen mode, for inbound connects
    -lk       With -e, provides persistent server
    -p PORT   Local port
    -s ADDR   Local address
    -w SEC    Timeout for connects and final net reads
    -i SEC    Delay interval for lines sent
    -n        Don't do DNS resolution
    -u        UDP mode
    -v        Verbose
    -o FILE   Hex dump traffic
    -z        Zero-I/O mode (scanning)
I am able to get celery working correctly by removing the celery service from docker-compose.yaml and manually running docker exec -it backend_1 celery -A proj worker -l info after docker-compose up -d --build. How do I replicate the functionality of this manual process within docker-compose.yaml?
My docker-compose.yaml looks like
version: '3.7'
services:
  backend:
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./backend/app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis
    links:
      - db:db
  celery:
    build: ./backend
    command: celery -A proj worker -l info
    volumes:
      - ./backend/app/:/usr/src/app/
    depends_on:
      - db
      - redis
  redis:
    image: redis:5.0.6-alpine
    command: redis-server
    expose:
      - "6379"
  db:
    image: postgres:12.0-alpine
    ports:
      - 5432:5432
    volumes:
      - /tmp/postgres_data:/var/lib/postgresql/data/
I found out the problem was that my celery service could not resolve the SQL host. This was because my SQL host is defined in .env.dev, which the celery service did not have access to. I added
env_file:
  - ./.env.dev
to the celery service and everything worked as expected.
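The resulting celery service block then looks roughly like this (combining the service definition above with the added env_file):
  celery:
    build: ./backend
    command: celery -A proj worker -l info
    env_file:
      - ./.env.dev
    volumes:
      - ./backend/app/:/usr/src/app/
    depends_on:
      - db
      - redis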
I have a docker-compose.yml file which I am trying to run inside of Google Cloud Container-Optimized OS (https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os). Here's my docker-compose.yml file:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
When I run docker-compose up, I eventually get the error:
Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/my-repo': mkdir /rootfs: read-only file system
Reading further, it appears that /rootfs is locked down (https://cloud.google.com/container-optimized-os/docs/concepts/security), with only a few paths writeable. I'd like to mount all my volumes to one of these directories such as /home, any suggestions on how I can do this? Is it possible to mount all my volumes to /home/xxxxxx by default without having to change my docker-compose.yml file, such as passing a flag to docker-compose up?
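As far as I know there is no docker-compose flag that remaps all bind-mount sources, but docker-compose does substitute environment variables inside the file, so one possible workaround (a sketch, with PROJECT_ROOT as a made-up variable name) is to parameterize the mount sources and point them at a writable path such as /home:
# export PROJECT_ROOT=/home/jeremy/my-repo   (or set it in an .env file)
  client:
    build: ./client
    volumes:
      - ${PROJECT_ROOT:-.}/client:/usr/src/app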
Good friends, I am developing an application in Django 1.11 with Docker on Windows. I recently updated the project's git repository and also made some changes to the Docker containers.
The problem is that the main page and some other URLs work fine, but when I try to log in to the admin, the Django container shuts down, and I don't get any error in the browser, the console, or the logs.
Example:
These requests are fine:
GET / 200 OK
POST / 403 Forbidden
GET /api/auth/ 405 Method Not Allowed
But when I request these, the Docker container shuts down (proyect_django_1 exited with code 0) without showing any message:
GET /admin (no answer)
POST /api/auth/ (no answer)
My docker-compose.yml:
version: '3'
services:
  db:
    build: docker/postgres
    volumes:
      - ./docker/data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=project
  redis:
    image: redis:3.2-alpine
    volumes:
      - ./docker/data/redis:/data
  rabbit:
    image: rabbitmq:3-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
  django:
    build:
      context: .
      args:
        - REQUIREMENTS=development.txt
    command: python3.6 manage.py runserver 0.0.0.0:8008
    volumes:
      - ./:/code
    working_dir: /code/project
    env_file: ./docker/DevelopmentEnv
    ports:
      - "8008:8008"
    links:
      - db
      - rabbit
      - redis
    depends_on:
      - db
  celeryworker:
    build:
      context: .
      args:
        - REQUIREMENTS=development.txt
    working_dir: /code/project
    volumes:
      - ./:/code
    env_file: ./docker/DevelopmentEnv
    links:
      - db
      - rabbit
    command: celery -A config worker -l INFO -Q celery
  frontend:
    image: node:8.4-alpine
    volumes:
      - ./:/code
    working_dir: /code/frontend
    command: ash -c "yarn install --no-bin-links && yarn run build"
  socketio:
    image: node:8.4-alpine
    volumes:
      - ./:/code
    working_dir: /code/sockets
    command: ash -c "yarn install --no-bin-links && yarn start"
    ports:
      - "3000:3000"
    links:
      - redis
      - django
    depends_on:
      - redis
My Dockerfile:
FROM python:3.6.2-alpine3.6
ARG REQUIREMENTS
RUN apk update
RUN apk add postgresql-dev postgresql-client
RUN apk add libffi-dev gcc
RUN apk add musl-dev zlib-dev jpeg-dev
RUN apk add --no-cache --virtual .build-deps-testing \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/testing \
gdal-dev
RUN mkdir /code
ADD ./ /code/
WORKDIR /code
RUN pip3.6 install -r requirements/$REQUIREMENTS
WORKDIR /code/project
You could add restart: always to your django service definition. This will start a new Django container whenever the previous one exits, for any reason.
You should be getting some logs about why the process is exiting. Try running docker inspect <container-name> to see if there are any clues about why the process exits. There is probably a bug in your Python code, triggered by certain URLs, that causes the process to exit.
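For example (the container name is taken from the exit message quoted in the question):
# Exit code and any runtime error recorded by Docker
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' proyect_django_1
# Last lines the process wrote before exiting
docker logs --tail 50 proyect_django_1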