VueJS + Django Rest Framework in dockers - django

I have a VueJS front end and a Django Rest Framework backend which are independent (Django does not serve my VueJS app).
Locally they work very well together, but after deploying them to the server with docker-compose they no longer communicate. I can see my frontend, but the axios requests get a timeout.
Here is how it's set up in my docker-compose:
version: '3'
networks:
  intern:
    external: false
  extern:
    external: true
services:
  backend:
    image: #from_registry
    container_name: Backend
    env_file:
      - ../.env
    depends_on:
      - db
    networks:
      - intern
    volumes:
      - statics:/app/static_assets/
      - medias:/app/media/
    expose:
      - "8000"
  db:
    image: "postgres:latest"
    container_name: Db
    environment:
      POSTGRES_PASSWORD: ****
    networks:
      - intern
    volumes:
      - pgdb:/var/lib/postgresql/data
  frontend:
    image: from_registry
    container_name: Frontend
    volumes:
      - statics:/home/app/web/staticfiles
      - medias:/home/app/web/mediafiles
    env_file:
      - ../.env.local
    depends_on:
      - backend
    networks:
      - intern
      - extern
    labels:
      - traefik.http.routers.site.rule=Host(`dev.x-fantasy.com`)
      - traefik.http.routers.site.tls=true
      - traefik.http.routers.site.tls.certresolver=lets-encrypt
      - traefik.port=80
volumes:
  pgdb:
  statics:
  medias:
In my AxiosConfiguration I put:
baseURL = "http://backend:8000"
My frontend tries to access this URL but gets a timeout error. In the console I have an error:
xhr.js:177 POST https://backend:8000/api/v1/token/login net::ERR_TIMED_OUT
It seems that there is an https in place of the http. Can the problem come from there?
Any idea how to make them communicate?
Thanks
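For reference, a minimal sketch of how the axios base URL could be made environment-dependent. This is an assumption, not the poster's actual setup: VUE_APP_API_URL is an invented variable name (Vue CLI exposes VUE_APP_-prefixed variables to the browser bundle), and the relevant point is that axios requests likely run in the visitor's browser, which cannot resolve the Docker-internal hostname "backend".

```javascript
// Sketch (assumed names): pick the API base URL from the build environment.
// Requests sent by axios from the browser cannot resolve the Docker-internal
// hostname "backend" -- only other containers on the same network can.
function resolveBaseURL(env) {
  // Public URL injected at build time for the browser bundle.
  if (env.VUE_APP_API_URL) {
    return env.VUE_APP_API_URL;
  }
  // Fallback for container-to-container calls.
  return "http://backend:8000";
}

// Usage with axios would then be:
//   const api = axios.create({ baseURL: resolveBaseURL(process.env) });
console.log(resolveBaseURL(process.env));
```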

Related

AWS ElasticBeanstalk failed to deploy Django/Postgres app

I'm having a hard time deploying my app built with Django, Postgres, DjangoQ, Redis and ES on AWS Elastic Beanstalk, using docker-compose.yml.
I've used the EB CLI (eb init, eb create) to do it, and it shows the environment is successfully launched, but I still have the following problems:
On the EC2 instance, the postgres, django-q and es containers are not built as specified in the docker-compose file below. Only the django, redis and nginx containers are found on the EC2 instance.
The environment variables that I specified in the docker-compose.yml file aren't being passed to the django container on EC2, so I can't run Django there.
I'm pretty lost and am not sure where to even start fixing these problems. Any insight will be very much appreciated.
version: '3'
services:
  django:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: gunicorn --bind 0.0.0.0:5000 etherscan_project.wsgi:application
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - es
  django-q:
    build:
      context: .
      dockerfile: docker/Dockerfile
    command: >
      sh -c "python manage.py makemigrations &&
             python manage.py migrate &&
             python manage.py qcluster"
    env_file: .env
    volumes:
      - $PWD:/srv/app/:delegated
    depends_on:
      - redis
      - db
      - django
      - es
  db:
    image: postgres:latest
    expose:
      - 5432
    env_file: .env
    volumes:
      - ./docker/volumes/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:latest
    expose:
      - 6379
    ports:
      - 6379:6379
    volumes:
      - ./docker/volumes/redis:/data
  nginx:
    container_name: nginx
    image: nginx:1.13
    ports:
      - 80:80
    depends_on:
      - db
      - django
      - redis
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - $PWD:/srv/app/:delegated
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    ports:
      - 9200:9200
      - 9300:9300
    expose:
      - 9200
      - 9300
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./docker/volumes/es:/usr/share/elasticsearch/data
volumes:
  app-files:
    driver_opts:
      type: nfs
      device: $PWD
      o: bind
Can you confirm that your environment variables are being used correctly? A common mistake with EB and docker-compose is assuming that your .env file works the same way in EB as it does in docker-compose, when it does not. I have made that mistake before. Check out the docs: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-env-cfg.env-variables
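As a sketch of the difference (variable names assumed, not taken from the poster's files): docker-compose substitutes ${VAR} references from a .env file sitting next to the compose file, whereas on EB the values have to be supplied as environment properties for the environment, per the linked docs. Making the variables explicit in the compose file shows exactly what must be provided:

```yaml
# Sketch (assumed variable names): spell out what the container expects, so it
# is obvious which values EB must supply as environment properties.
services:
  django:
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
```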

Undefined volume when deploying docker container to ECS

I'm following this guide and am currently trying to run my compose app using docker ecs compose up, but I'm getting this error:
% docker ecs compose up
service "feature-test-web" refers to undefined volume : invalid compose project
Here's my docker-compose.yml:
version: '3.7'
x-web: &web
  build: ./web
  volumes:
    - ./web:/app
    - /app/node_modules
x-api: &api
  build: ./api
  volumes:
    - ./api:/app
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
  links:
    - mailcatcher
services:
  web:
    <<: *web
    environment:
      - API_HOST=http://localhost:3000
    ports:
      - "1234:1234"
    depends_on:
      - api
  api:
    <<: *api
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true
  postgres:
    image: postgres:11.2-alpine
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=portal
      - POSTGRES_PASS=portal
    ports:
      - 8000:5432
    restart: on-failure
    healthcheck:
      test: "exit 0"
  redis:
    image: redis:5.0.4-alpine
    ports:
      - '6379:6379'
  sidekiq:
    build: ./api
    env_file:
      - ./api/.env
    depends_on:
      - postgres
      - redis
  mailcatcher:
    image: schickling/mailcatcher
    ports:
      - '1080:1080'
  feature-test-api:
    <<: *api
    depends_on:
      - selenium
      - feature-test-web
  feature-test-web:
    <<: *web
    environment:
      - API_HOST=http://feature-test-api:3210
  selenium:
    image: selenium/standalone-chrome-debug:3.141.59-neon
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - 5901:5900
What am I missing? Running docker compose up by itself works, and I'm able to go to localhost:1234 to see the app running. I'm trying to deploy this to AWS, but it's been very difficult; if I'm doing this wrong, any pointers to the right way would be much appreciated as well.
As mentioned in the comments section, the volume mounts won't work on ECS, as the cluster won't have a local copy of your code.
So as a first step, remove the entire volumes section.
Second, you'll need to build a Docker image from your code, push it to a Docker registry of your liking, and then reference it in your docker-compose file as follows:
x-api: &api
  image: 12345.abcd.your-region.amazonaws.com/your-docker-repository
  env_file:
    - ./api/.env
  depends_on:
    - postgres
    - redis
My answer here could give you more insight into how I deploy to ECS.

Docker: How do I access a service that's in another container from the frontend?

I'm running a RESTful Django project on port 8000 and a React project on port 3000.
For development I've had all my URLs on the frontend as
href='localhost:8000/api/name' or href='localhost:8000/api/address'.
Now that I'm going into production, I want my hrefs to be href='mysite.com/api/name' or href='mysite.com/api/address'. I can't figure out how to do this. How do I access my RESTful data, which is in another container?
I found this article but don't think it's correct for production.
docker-compose.yml
version: "3.2"
services:
  backend:
    build: ./backend
    volumes:
      - ./backend:/app/backend
    ports:
      - "8000:8000"
    stdin_open: true
    tty: true
    command: python3 manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
      - cache
    links:
      - db
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app
      # One-way volume to use node_modules from inside image
      - /app/node_modules
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      - backend
    tty: true
    command: npm start
  db:
    image: mysql:latest
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - "./mysql:/var/lib/mysql"
      - "./.data/conf:/etc/mysql/conf.d"
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: temp
      MYSQL_USER: root
      MYSQL_PASSWORD: root
volumes:
  mysql: {}
You can pass, for example, an API_URL ("mysite.com/api/" for prod and "localhost:8000/api/" for dev) as an environment variable to React, and use it to build the URLs.
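As a concrete sketch of that suggestion. REACT_APP_API_URL is an assumed variable name (Create React App only exposes variables prefixed with REACT_APP_ to the browser bundle), not something from the poster's setup:

```javascript
// Sketch (assumed variable name): build API URLs from an environment variable
// instead of hard-coding localhost:8000.
function apiUrl(path, base = process.env.REACT_APP_API_URL || "http://localhost:8000/api") {
  // Join base and path without doubling the slash between them.
  return base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// hrefs then become apiUrl("name"), apiUrl("address"), etc.
console.log(apiUrl("name"));
```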

How to serve static files using Traefik and Nginx in docker-compose

I am trying to serve static files using Traefik and Nginx, also in Docker. My Django application works well, and I can access all pages, but I can't set up static file serving. When I open site.url/static/ it redirects me to the 404 page. For the code skeleton, I am using cookiecutter-django.
Here is my docker configuration:
django:
  build:
    context: .
    dockerfile: ./compose/production/django/Dockerfile
  image: dreamway_team_production_django
  depends_on:
    - postgres
    - redis
  env_file:
    - ./.envs/.production/.django
    - ./.envs/.production/.postgres
  command: /start
postgres:
  **
traefik:
  build:
    context: .
    dockerfile: ./compose/production/traefik/Dockerfile
  image: dreamway_team_production_traefik
  depends_on:
    - django
    - nginx
  volumes:
    - production_traefik:/etc/traefik/acme
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
redis:
  **
nginx:
  image: nginx:1.17.4
  depends_on:
    - django
  volumes:
    - ./config/nginx.conf:/etc/nginx/conf.d/default.conf
    - ./dreamway_team/static:/static
and my config for traefik:
log:
  level: INFO
entryPoints:
  web:
    address: ":80"
  web-secure:
    address: ":443"
certificatesResolvers:
  letsencrypt:
    acme:
      email: "mail"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
http:
  routers:
    web-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web
      middlewares:
        - redirect
        - csrf
      service: django
    web-secure-router:
      rule: "Host(`[DOMAIN_NAME]`)"
      entryPoints:
        - web-secure
      middlewares:
        - csrf
      service: django
      tls:
        certResolver: letsencrypt
  middlewares:
    redirect:
      redirectScheme:
        scheme: https
        permanent: true
    csrf:
      headers:
        hostsProxyHeaders: ["X-CSRFToken"]
  services:
    django:
      loadBalancer:
        servers:
          - url: http://django:5000
providers:
  file:
    filename: /etc/traefik/traefik.yml
    watch: true
Any help would be appreciated! Thanks!
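Since the nginx.conf itself isn't shown, a minimal default.conf that matches the volume mounts above might look like the following (an assumption, not the actual file); Traefik would also need a router that sends /static/ traffic to the nginx service rather than to django:

```nginx
server {
    listen 80;

    location /static/ {
        # Matches the ./dreamway_team/static:/static mount in the nginx service
        alias /static/;
    }
}
```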

I cannot log in from my Dockered Django web application. What is wrong?

I successfully put my web application in a Docker container. Looking through...
I can register users. When I try to register the same user again, my web application says that the user is already registered.
I cannot log in as any user.
I can go to the admin login panel.
I cannot log in as an admin.
Other than that, everything works fine.
I am using SQLite. And this is my docker-compose.yml:
version: "3"
services:
  nginx:
    image: nginx:latest
    container_name: nginx_airport
    ports:
      - "8080:8080"
    volumes:
      - ./:/app
      - ./nginx:/etc/nginx/conf.d
      - ./static:/app/static
    depends_on:
      - web
  rabbit:
    hostname: rabbit_airport
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=asdasdasd
    ports:
      - "5673:5672"
  web:
    build: ./
    container_name: django_airport
    volumes:
      - ./:/app
      - ./static:/app/static
    expose:
      - "8080"
    links:
      - rabbit
    depends_on:
      - rabbit
I don't think I need a separate container for SQLite, do I?