Traefik2 route multiple services to multiple subpaths in flask

I have an issue with multiple services run by Flask.
If I comment out service app1 or app2, everything works great.
But when I run the two services simultaneously and "curl localhost/app1", I get a 404 error.
"curl localhost/app2" works great.
Here's the docker-compose.yml.
version: "2.4"
services:
traefik:
image: "traefik:v2.1.3"
container_name: "traefik"
command:
#- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
app1:
image: pytorch:1.4.0
build:
context: .
target: prod
dockerfile: ./app1/Dockerfile
labels:
- traefik.enable=true
- "traefik.http.routers.app1.rule=Path(`/app1`)"
- "traefik.http.routers.app1.entrypoints=web"
app2:
image: pytorch:1.4.0
build:
context: .
target: prod
dockerfile: ./app2/Dockerfile
labels:
- traefik.enable=true
- "traefik.http.routers.app2.rule=Path(`/app2`)"
- "traefik.http.routers.app2.entrypoints=web"

Related

How to use Docker Volumes?

I am completely new to Docker. Right now I am using an open-source tool, which is a Docker application, and I need to make some changes to the existing application and have them reflected. I did a lot of searching and found that we can do this with the help of Docker volumes, but I was unable to follow any of the articles on the web or the documentation. Any suggestions will be appreciated.
docker-compose.yml:
version: "3.3"
services:
cvat_db:
container_name: cvat_db
image: postgres:10-alpine
networks:
default:
aliases:
- db
restart: always
environment:
POSTGRES_USER: root
POSTGRES_DB: cvat
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- cvat_db:/var/lib/postgresql/data
cvat_redis:
container_name: cvat_redis
image: redis:4.0-alpine
networks:
default:
aliases:
- redis
restart: always
cvat:
container_name: cvat
image: cvat/server
restart: always
depends_on:
- cvat_redis
- cvat_db
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy: nuclio,${no_proxy}
socks_proxy:
USER: "django"
DJANGO_CONFIGURATION: "production"
TZ: "Etc/UTC"
CLAM_AV: "no"
environment:
DJANGO_MODWSGI_EXTRA_ARGS: ""
ALLOWED_HOSTS: '*'
CVAT_REDIS_HOST: "cvat_redis"
CVAT_POSTGRES_HOST: "cvat_db"
volumes:
- cvat_data:/home/django/data
- cvat_keys:/home/django/keys
- cvat_logs:/home/django/logs
- cvat_models:/home/django/models
cvat_ui:
container_name: cvat_ui
image: cvat/ui
restart: always
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
dockerfile: Dockerfile.ui
networks:
default:
aliases:
- ui
depends_on:
- cvat
cvat_proxy:
container_name: cvat_proxy
image: nginx:stable-alpine
restart: always
depends_on:
- cvat
- cvat_ui
environment:
CVAT_HOST: localhost
ports:
- "8080:80"
volumes:
- ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
default:
ipam:
driver: default
config:
- subnet: 172.28.0.0/24
volumes:
cvat_db:
cvat_data:
cvat_keys:
cvat_logs:
cvat_models:
Docker volumes are mostly used as a way to save data outside of your container. If you mount a volume and store data in it, the data will not be erased when the container is destroyed. In order to mount a volume, you have to add -v <directory on your machine>:<directory in your container> to your docker run command. It may fulfill your requirements, but it most likely won't be enough.
If your assignment requires you to change, for instance, the behaviour of the application, then you have to rebuild the Docker image and use it in your docker-compose.yml.
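For instance, here is a minimal sketch of both kinds of mounts in a docker-compose.yml (the image name and paths are made up for illustration):
version: "3"
services:
  app:
    image: some-open-source-tool        # the image you are customizing
    volumes:
      # bind mount (<host dir>:<container dir>): edits made in ./my-changes
      # on the host are immediately visible inside the container
      - ./my-changes:/opt/tool/config
      # named volume: its contents survive container removal
      - tool_data:/var/lib/tool
volumes:
  tool_data: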

Undefined volume when deploying docker container to ECS

I'm following this guide and currently trying to run my compose app using docker ecs compose up but I'm getting this error:
% docker ecs compose up
service "feature-test-web" refers to undefined volume : invalid compose project
Here's my docker-compose.yml:
version: '3.7'
x-web:
  &web
  build: ./web
  volumes:
    - ./web:/app
    - /app/node_modules
x-api:
  &api
  build: ./api
  volumes:
    - ./api:/app
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
  links:
    - mailcatcher
services:
  web:
    <<: *web
    environment:
      - API_HOST=http://localhost:3000
    ports:
      - "1234:1234"
    depends_on:
      - api
  api:
    <<: *api
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true
  postgres:
    image: postgres:11.2-alpine
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=portal
      - POSTGRES_PASS=portal
    ports:
      - 8000:5432
    restart: on-failure
    healthcheck:
      test: "exit 0"
  redis:
    image: redis:5.0.4-alpine
    ports:
      - '6379:6379'
  sidekiq:
    build: ./api
    env_file:
      - ./api/.env
    depends_on:
      - postgres
      - redis
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - '1080:1080'
  feature-test-api:
    <<: *api
    depends_on:
      - selenium
      - feature-test-web
  feature-test-web:
    <<: *web
    environment:
      - API_HOST=http://feature-test-api:3210
  selenium:
    image: selenium/standalone-chrome-debug:3.141.59-neon
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - 5901:5900
What am I missing? Running docker compose up by itself works, and I'm able to go to localhost:1234 to see the app running. I'm trying to deploy this to AWS, but it's been very difficult; if I'm doing this wrong, any pointers to the right way would be much appreciated as well.
As mentioned in the comments section, the volume mounts won't work on ECS, as the cluster won't have a local copy of your code.
So as a first step, remove the entire volumes section.
Second, you'll need to build a Docker image out of your code, push it to a Docker registry of your liking, and then reference it in your docker-compose as follows:
x-api:
  &api
  image: 12345.abcd.your-region.amazonaws.com/your-docker-repository
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
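The web anchor would need the same treatment once its image is built and pushed, for example (again with a placeholder repository URL):
x-web:
  &web
  image: 12345.abcd.your-region.amazonaws.com/your-web-repository   # placeholder, not a real registry URL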
My answer here could give you more insight into how I deploy to ECS.

How to serve static files using Traefik and Nginx in docker-compose

I am trying to serve static files using Traefik and Nginx, all in Docker. My Django application works well, I can access all pages, but I can't set up static file serving. When I open site.url/static/ it redirects me to the 404 page. For the code skeleton, I am using cookiecutter-django.
Here is my docker configuration:
django:
  build:
    context: .
    dockerfile: ./compose/production/django/Dockerfile
  image: dreamway_team_production_django
  depends_on:
    - postgres
    - redis
  env_file:
    - ./.envs/.production/.django
    - ./.envs/.production/.postgres
  command: /start
postgres:
  **
traefik:
  build:
    context: .
    dockerfile: ./compose/production/traefik/Dockerfile
  image: dreamway_team_production_traefik
  depends_on:
    - django
    - nginx
  volumes:
    - production_traefik:/etc/traefik/acme
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
redis:
  **
nginx:
  image: nginx:1.17.4
  depends_on:
    - django
  volumes:
    - ./config/nginx.conf:/etc/nginx/conf.d/default.conf
    - ./dreamway_team/static:/static
and my config for traefik:
log:
  level: INFO
entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: "mail"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
http:
  routers:
    web-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web
      middlewares:
        - redirect
        - csrf
      service: django
    web-secure-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web-secure
      middlewares:
        - csrf
      service: django
      tls:
        certResolver: letsencrypt
  middlewares:
    redirect:
      redirectScheme:
        scheme: https
        permanent: true
    csrf:
      headers:
        hostsProxyHeaders: ["X-CSRFToken"]
  services:
    django:
      loadBalancer:
        servers:
          - url: http://django:5000
providers:
  file:
    filename: /etc/traefik/traefik.yml
    watch: true
Any help would be appreciated! Thanks!
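One detail that stands out in the Traefik config above: every router sends traffic to the django service, and nothing routes /static/ to the nginx container. A sketch of the likely missing piece, assuming nginx listens on its default port 80 (the router name below is made up):
http:
  routers:
    static-router:
      rule: "Host(`[DOMAIN_NAME]`) && PathPrefix(`/static`)"
      entryPoints:
        - web-secure
      service: nginx       # send /static to nginx instead of django
      tls:
        certResolver: letsencrypt
  services:
    nginx:
      loadBalancer:
        servers:
          - url: http://nginx:80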

Data from postgres mysteriously getting deleted

I am using cookiecutter-django (https://github.com/pydanny/cookiecutter-django) for one of my live projects, and over the last few days I have been observing the database data randomly getting deleted in parts. I checked the logs but found nothing, and I don't know how to approach the issue to resolve it; I will really appreciate any guidance. I am using a Docker setup with Traefik, Postgres, Redis, Celery and Django. The code is deployed on a Digital Ocean bucket, and only I have access to it (which rules out the possibility of another person doing this).
version: '3'
volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: fancy_tsunami_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: fancy_tsunami_production_postgres
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - ./.envs/.production/.postgres
  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: fancy_tsunami_production_traefik
    depends_on:
      - django
    volumes:
      - production_traefik:/etc/traefik/acme
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
  redis:
    image: redis:3.2
  celeryworker:
    <<: *django
    image: fancy_tsunami_production_celeryworker
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: fancy_tsunami_production_celerybeat
    command: /start-celerybeat
  flower:
    <<: *django
    image: fancy_tsunami_production_flower
    ports:
      - "5555:5555"
    command: /start-flower

In my Dockerized Django application, my Celery task does not update the SQLite database (in another container). What should I do?

This is my docker-compose.yml.
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx_airport
ports:
- "8080:8080"
volumes:
- ./:/app
- ./docker_nginx:/etc/nginx/conf.d
- ./timezone:/etc/timezone
depends_on:
- web
rabbit:
image: rabbitmq:latest
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=asdasdasd
ports:
- "5672:5672"
- "15672:15672"
web:
build:
context: .
dockerfile: Dockerfile
command: /app/start_web.sh
container_name: django_airport
volumes:
- ./:/app
- ./timezone:/etc/timezone
expose:
- "8080"
depends_on:
- celerybeat
celerybeat:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celerybeat.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- celeryd
celeryd:
build:
context: .
dockerfile: Dockerfile
command: /app/start_celeryd.sh
volumes:
- ./:/app
- ./timezone:/etc/timezone
depends_on:
- rabbit
Normally, I have a task that executes every minute and updates the database located in "web". Everything works fine in the development environment. However, "celerybeat" and "celeryd" don't update my database when run via docker-compose. What went wrong?
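One avenue worth checking, under the assumption that the SQLite file lives somewhere beneath /app: SQLite is just a file on disk, so every container that reads or writes it must see the same copy, and the Django settings used by "celerybeat" and "celeryd" must point at the same absolute database path as "web". An explicit named volume for the database directory makes that sharing visible (volume and path names below are hypothetical):
services:
  web:
    volumes:
      - sqlite_data:/app/db    # same named volume in every service that touches the DB
  celerybeat:
    volumes:
      - sqlite_data:/app/db
  celeryd:
    volumes:
      - sqlite_data:/app/db
volumes:
  sqlite_data: {}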