Django: CloudFront wasn't able to connect to the origin

I'm working on a dockerized Django project deployed on an Amazon EC2 instance. The EC2 instance currently serves the app over HTTP, and I want to serve it over HTTPS, so I created a CloudFront distribution that points to the EC2 instance as its origin. Unfortunately, I'm getting the following error:
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
Request ID: Pa0WApol6lU6Ja5uBuqKVPVTJFBpkcnJQgtXMYzQP6SPBhV4CtMOVw==
docker-compose.yml
version: "3.8"
services:
db:
container_name: db
image: "postgres"
restart: always
volumes:
- ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql
- postgres-data:/var/lib/postgresql/data/
env_file:
- prod.env
app:
container_name: app
build:
context: .
restart: always
volumes:
- static-data:/vol/web
depends_on:
- db
env_file:
- prod.env
proxy:
container_name: proxy
build:
context: ./proxy
restart: always
depends_on:
- app
ports:
- 80:8000
volumes:
- static-data:/vol/static
volumes:
postgres-data:
static-data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# installing nano and cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install -y nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# install dependencies and manage assets
RUN pip install -r requirements.txt && \
mkdir -p /vol/web/static && \
mkdir -p /vol/web/media
# files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# start cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]

Related

mysql file not found in $PATH error using Docker and Django?

I'm working on using Django and Docker with MariaDB on Mac OS X.
I'm trying to grant privileges to Django to set up a test db. For this, I have a script that executes this command: sudo docker exec -it $container mysql -u root -p. When I run it after spinning up, instead of being prompted for the database password, I get this error message:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "mysql": executable file not found in $PATH: unknown
On an Ubuntu machine, I can delete the data folder for the database and spin up, spin down, and run the command without the error, but on my Mac, which is my primary machine that I'd like to use, this fix doesn't work. Any ideas? I've had a peer pull code and try it on their Mac and it does the same thing to them! Docker is magic to me.
Here's my docker-compose.yml.
version: "3.3"
networks:
django_db_net:
external: false
services:
django:
build: ./docker/django
restart: 'unless-stopped'
depends_on:
- db
networks:
- django_db_net
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./src:/src
working_dir: /src/vger
command: ["/src/wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
ports:
- "${DJANGO_PORT}:8000"
db:
image: mariadb:latest
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./data:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=this_is_a_bad_password
- MYSQL_USER=django
- MYSQL_PASSWORD=django
- MYSQL_DATABASE=vger
networks:
- django_db_net
And my Dockerfile
FROM python:latest
ENV PYTHONUNBUFFERED=1
RUN pip3 install --upgrade pip && \
    pip3 install django mysqlclient
ENV MYSQL_MAJOR 8.0
RUN apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys 8C718D3B5072E1F5 && \
    echo "deb http://repo.mysql.com/apt/debian/ buster mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list && apt-get update && \
    apt-get -y --no-install-recommends install default-libmysqlclient-dev
WORKDIR /src
I fixed it!
This is really silly, but OS X doesn't like the "$container" variable, so if you explicitly write the name of the database container instead, it works like a charm!
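For example, if the database container shows up in docker ps as something like myproject_db_1 (the exact name depends on your compose project; it is only a placeholder here), the working command is: sudo docker exec -it myproject_db_1 mysql -u root -p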

Docker django runs server but browser doesn't show landing page

I have successfully built the Docker image and the server runs without errors, but when I browse to the website it doesn't show anything.
Here are the configuration files I'm using:
.env.dev
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
Dockerfile
FROM python:3.9.1-slim-buster
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# add app
COPY . .
docker-compose.yml
version: '3.8'
services:
  movies:
    build: ./app
    command: python core/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8009:8000
    env_file:
      - ./app/.env.dev
Any idea why it isn't reachable in the browser?
When you put localhost or 127.0.0.1 as allowed hosts, you refer to the container, not the host machine. So, even if you link the ports of the container to those of the host, the server will not accept the connections since they are not coming from the container IP.
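For completeness, the space-separated DJANGO_ALLOWED_HOSTS value in .env.dev only takes effect if settings.py splits it into a list; a minimal sketch, assuming that convention (it is not shown in the original post):
# settings.py (sketch): turn the space-separated env var into the ALLOWED_HOSTS list.
import os

ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "").split()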

Django, and React inside Docker inside a single Digital Ocean Droplet: 400 Bad Request for Django, React works fine

I have Django and React running from the same docker-compose.yml, and this setup runs inside a Digital Ocean Droplet running Ubuntu. When I navigate to http://my_ip_address:3000, which is the React app, it works just fine, but when I navigate to http://my_ip_address:8000, which is the Django app, I get a 400 Bad Request error from the server.
project/back-end/Dockerfile
FROM python:3.7
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /nerdrich
COPY Pipfile Pipfile.lock /nerdrich/
RUN pip install pipenv && pipenv install --system
COPY . /nerdrich/
EXPOSE 8000
project/front-end/Dockerfile
# official node.js runtime for Docker
FROM node:12
# Create and set the directory for this container
WORKDIR /app/
# Install Application dependencies
COPY package.json yarn.lock /app/
RUN yarn install --no-optional
# Copy over the rest of the project
COPY . /app/
# Set the default port for the container
EXPOSE 3000
CMD yarn start
project/docker-compose.yml
version: "3"
services:
web:
build: ./back-end
command: python /nerdrich/manage.py runserver
volumes:
- ./back-end:/nerdrich
ports:
- "8000:8000"
stdin_open: true
tty: true
client:
build: ./front-end
volumes:
- ./front-end:/app
- /app/node_modules
ports:
- '3000:3000'
stdin_open: true
environment:
- NODE_ENV=development
depends_on:
- "web"
command:
yarn start
project/back-end/nerdrich/.env
ALLOWED_HOSTS=['165.227.82.162']
I can provide any additional information if needed.
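One detail worth a second look when ALLOWED_HOSTS comes from an .env file: the value arrives in Django as a plain string, so a Python-list-style value like the one above only matches the Host header if settings.py parses it. A minimal sketch, assuming the value is rewritten without brackets (e.g. ALLOWED_HOSTS=165.227.82.162) and read from the environment:
# settings.py (sketch): environment variables are strings, so split them explicitly.
import os

ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")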

Cannot access running django server?

I've created a Docker image for a Django REST project with the following Dockerfile and docker-compose file.
Dockerfile
FROM python:3
# Set environment variables
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /
# Install dependencies.
RUN pip install -r /requirements.txt
# Set work directory.
RUN mkdir /app
WORKDIR /app
# Copy project code.
COPY . /app/
EXPOSE 8000
docker-compose file
version: "3"
services:
dj:
container_name: dj
build: django
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./django:/app
ports:
- "8000:8000"
The docker-compose up command brings up the server, but in the web browser I can't access it; the browser says ERR_ADDRESS_INVALID.
Docker version 18.09.2
0.0.0.0 is IPv4 for "everywhere"; you can't usually make outbound connections to it. If you have a Docker Desktop application, try http://localhost:8000; if it's Docker Toolbox, you'll need the docker-machine ip address, usually http://192.168.99.100:8000.
Thanks to David Maze, the problem is solved.

Auto-reloading of code changes with Django development in Docker with Gunicorn

I'm using a Docker container for Django development, and the container runs Gunicorn with Nginx. I'd like code changes to auto-load, but the only way I can get them to load is by rebuilding with docker-compose (docker-compose build). The problem with "build" is that it re-runs all my pip installs.
I'm using the Gunicorn --reload flag, which is apparently supposed to do what I want. Here are my Docker config files:
## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
nginx:
  restart: always
  build: ./config/nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
I've tried some of the other Docker commands (docker-compose restart, docker-compose up), but the code won't refresh.
What am I missing?
Thanks to kikicarbonell, I looked into having a volume for my code, and after looking at the Docker Compose recommended Django setup, I added volumes: - .:/code to my web container in docker-compose.yml, and now any code changes I make automatically apply.
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
Update: for a thorough example of using Gunicorn and Django with Docker, check out this example project from Rackspace, which also shows how to use docker-machine to launch the setup on remote servers like Rackspace Cloud.
Caveat: currently, this method does not work when your code is stored locally and the docker host is remote (e.g., on a cloud provider like Digital Ocean or Rackspace). This also applies to virtual machines if your local file system is not mounted on the VM. Note that there are separate volume drivers (e.g., flocker), and there might be something out there to address this need. For now, the "fix" is to rsync/scp your files up to a directory on the remote docker host. Then, the --reload flag will auto-reload gunicorn after any scp/rsync. Update: If pushing code to remote docker host, I find it far easier to just rebuild the docker container (e.g., docker-compose build web && docker-compose up -d). This can be slower though than the rsync approach if your src folder is large.
You have another problem: Docker caches each layer that it builds, so you shouldn't have to re-run pip install every time!
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
This is your problem: Docker checks every ADD statement to see if any files have changed, and if they have, it invalidates the cache for that step and every later step. The correct way to do this is...
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/
Which will only invalidate your pip install line if your requirements file changes!
It seems like you need to match the WORKDIR/COPY paths in your Dockerfile with the volume paths in your docker-compose.yml when creating the volume. Here is an example:
Dockerfile
WORKDIR /app
COPY . /app
docker-compose.yml
app:
  # other commands
  volumes:
    - ./app:/app
I faced a very similar problem trying to configure auto-reload of the project with a slightly different setup. I set up volumes, but it did not work anyway. After an hour of googling and a thorough examination of my code, I figured out that the volume paths in my Dockerfile and docker-compose.yml simply did not match. Make sure that they are the same.
My Dockerfile
FROM python:3.6.9-alpine3.10
COPY ./requirements/local.txt /app/requirements/local.txt
RUN set -ex \
    && apk add --no-cache --virtual .build-deps postgresql-dev git gcc libgcc musl-dev jpeg-dev zlib-dev build-base \
    && python -m venv /env \
    && /env/bin/pip install --upgrade pip \
    && /env/bin/pip install --no-cache-dir -r /app/requirements/local.txt \
    && runDeps="$(scanelf --needed --nobanner --recursive /env \
        | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
        | sort -u \
        | xargs -r apk info --installed \
        | sort -u)" \
    && apk add --virtual rundeps $runDeps \
    && apk del .build-deps
### Here is the path to the project
COPY . /app
WORKDIR /app/project
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
EXPOSE 8088
My docker-compose.yml
version: '3'
services:
  web:
    build:
      context: ../..
      dockerfile: compose/local/Dockerfile
    restart: on-failure
    command: python manage.py runserver 0.0.0.0:8088 --settings=project.settings.local
    volumes:
      # - .:/var/www/app # messed up path
      - .:/app # correct path
    env_file:
      - ../../.env.local
    depends_on:
      - db
    ports:
      - "8000:8000"
Since I never found a desirable solution, consider this interesting hack. Posting it here, I wanted to see if anyone has similar/good/bad experiences with this workaround.
To make code reload locally for development, I simply created a view that immediately calls exit(). The exit will crash Django, and a reload will occur once the code changes are in place. The reboot takes a fraction of a second and can be triggered from a browser tab, a requests.get call, or anything similar. The reload is not automatic, but it skips any Docker lag such as a restart.
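A minimal sketch of such a view (assuming a recent Django; the URL path and names are illustrative, not taken from the original setup):
# views.py / urls.py sketch: hitting /dev/reload/ kills the current Gunicorn worker,
# and the master immediately boots a fresh one that picks up the new code.
# Development-only hack.
import sys

from django.http import HttpResponse
from django.urls import path

def reload_worker(request):
    sys.exit(0)             # the worker exits here; Gunicorn restarts it
    return HttpResponse()   # never reached

urlpatterns = [
    path("dev/reload/", reload_worker),
]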
When the exit is called you will see the PID increment (if tailing logs):
web | [2019-07-15 18:29:52 +0000] [22] [INFO] Worker exiting (pid: 22)
web | [2019-07-15 18:29:52 +0000] [24] [INFO] Booting worker with pid: 24
I hope this helps others and/or gets feedback on this approach.
If you use docker-compose:
Dockerfile: when you build the image from the Dockerfile, you need to set a directory to hold your code (in my case /api/):
WORKDIR /api/   # important
COPY . .        # important
docker-compose: your docker-compose file has an app service whose image was just built from that Dockerfile; now you need to add a volume that mounts to the same WORKDIR you used in the Dockerfile:
volumes:
  - .:/api   # important
And that's all.