Cannot launch my Django project with Gunicorn inside Docker

I'm new to Docker.
Visual Studio Code has an extension named Remote - Containers, and I'm using it to dockerize a Django project.
As the first step, the extension creates a Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10.5
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
# File wsgi.py was not found. Please enter the Python path to wsgi file.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
Then it adds Development Container Configuration file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/docker-existing-dockerfile
{
"name": "django-4.0.5",
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerFile": "../Dockerfile"
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created - for example installing curl.
// "postCreateCommand": "apt-get update && apt-get install -y curl",
// Uncomment when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
// Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
// "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
// Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
// "remoteUser": "vscode"
}
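A side note on the configuration above: "forwardPorts" is left commented out. It is a standard devcontainer.json option, and explicitly forwarding port 8000 is worth trying if the host browser can't reach the app; a minimal sketch of the uncommented line:
// forward the container's port 8000 to the local machine
"forwardPorts": [8000]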
And finally I run the command Rebuild and Reopen in Container.
After a few seconds the container is running and I see a command prompt. Considering that at the end of the Dockerfile there's this line:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
...the Django application should be running, but it isn't: http://127.0.0.1:8000 refuses to connect.
But if I run the same command at the command prompt:
gunicorn --bind 0.0.0.0:8000 project.wsgi
it works just fine.
Why? All the files were created by the VS Code extension, and the same instructions are posted on the official VS Code website, yet it's not working.
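A quick way to check what the container is actually running is to ask Docker from the host (standard CLI commands):
# show the command each running container was started with
docker ps --format '{{.Names}}\t{{.Command}}'
# or print the full command line of one container
docker inspect --format '{{.Path}} {{.Args}}' <container-name>
If the command shown is a sleep loop rather than gunicorn, that matches the Dockerfile's own comment that the entry point is overridden during debugging; the devcontainer.json option "overrideCommand" (setting it to false) controls whether the extension replaces the image's CMD.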
P.S.:
When the container loaded for the first time, I created a project called 'project', so the folder structure is like this:
(screenshot of the folder structure omitted)
Related

Django Fargate. The requested resource was not found on this server

I have seen "The requested resource was not found on this server" like a thousand times, but none of the answers match my problem.
I made this app from scratch and added a Dockerfile for it.
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /www
COPY requirements.txt /www/
RUN pip install -r requirements.txt
COPY . /www/
EXPOSE 80
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80" ]
Then I run this:
docker build -t pasquin-django-mainapp .
docker run pasquin-django-mainapp
I did that in my local environment: not working. I pushed this Dockerfile to ECR and then used it in ECS + Fargate, with the same bad result. I don't know what else to do.
Somebody, help! Thanks!
P.S.: From docker-compose it works just marvelously!
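One difference worth checking, given that compose works: a docker-compose.yml usually declares a ports: mapping, while the docker run command above never publishes port 80, and EXPOSE alone doesn't make the app reachable from the host. A minimal sketch (image name taken from the question):
docker run -p 80:80 pasquin-django-mainapp
This publishes container port 80 on host port 80, which is what a compose ports: entry would otherwise do.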

How to translate flask run to Dockerfile?

So I'm trying to learn how to containerize Flask apps, and so far I've understood two ways of firing up a Flask app locally:
One is to have this code in the main file:
if __name__ == '__main__':
    APP.run(host='0.0.0.0', port=8080, debug=False)
and run with
python3 main.py
The other is to remove the above code from main.py, and just define an environment variable and do flask run:
export FLASK_APP=main.py
flask run
I tried to convert both of these methods into a Dockerfile:
ENTRYPOINT ["python3", "main.py"]
which works quite well for the first one. However, when I try to do something like:
ENV FLASK_APP "./app/main.py"
ENTRYPOINT ["flask", "run"]
I am not able to reach my server via the browser. The container starts up fine; it's just not reachable. One trick that does work is adding the host address in the ENTRYPOINT, like so:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
I am not sure why I have to add --host to the ENTRYPOINT when locally I can do without it. Another funny thing I noticed: if I set the host to --host=127.0.0.1, it still doesn't work.
Can someone explain what is really happening here? Either I don't understand ENTRYPOINT correctly, or maybe Flask... or maybe both.
EDIT: The whole Dockerfile for reference is:
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ENV FLASK_APP "/app/main.py"
ENTRYPOINT ["flask", "run", "--host=127.0.0.1"]
Try defining the FLASK_APP env var via an absolute path.
Or put this line in your Dockerfile:
WORKDIR /path/to/dir/that/contains/main.py/file
Oh, sorry. The host in the ENTRYPOINT statement must be 0.0.0.0:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
And do not forget to publish port 5000 to the outside via the -p option:
docker run -p 5000:5000 <your container>
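The reason the host must be 0.0.0.0: inside the container, 127.0.0.1 is the container's own loopback interface, and Docker's port publishing forwards traffic to the container's external interface. A server bound only to the container's loopback is unreachable through -p, even though the same binding works when running flask run directly on your machine. A quick demonstration (standard docker and curl commands; the image name is hypothetical):
# Flask bound to 0.0.0.0 inside the container: reachable from the host
docker run -d -p 5000:5000 my-flask-image
curl http://localhost:5000/
# Flask bound to 127.0.0.1 inside the container: the same curl fails,
# even though docker ps shows the port as published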
I believe this task would be better accomplished using CMD as opposed to ENTRYPOINT.
(You should also define the working directory before running the COPY command.)
For example, your Dockerfile should look something like:
FROM python:stretch
WORKDIR /app
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
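The practical difference between the two directives: CMD supplies a default command that is trivially replaced at run time, while ENTRYPOINT stays fixed unless you pass --entrypoint. For instance (standard docker CLI; image name hypothetical):
# with CMD, the default is swapped out just by naming another command
docker run my-flask-image flask shell
# with ENTRYPOINT, you must override it explicitly
docker run --entrypoint bash my-flask-image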

Django running fine outside Docker but never starts within Docker, how to find issue?

I had a working Django project that had built and deployed many images over the last few weeks. The Django project kept working fine when run as "python manage.py runserver", and the Docker images kept being built fine (all successful builds). However, the Django app now doesn't deploy. What could be the issue, and where should I start looking for it? I've tried the logs, but they only say "Starting Django" without actually starting the service.
I use GitHub and have gone back to previous versions of the code, and none of them work now, even though the code is exactly the same. It also fails to deploy the Django server on AWS Elastic Beanstalk infrastructure, which is my ultimate goal with this code.
start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Dockerfile:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# COPY startup script into known file location in container
COPY start.sh /start.sh
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server.
CMD ["/start.sh"]
Commands:
sudo docker run -d tn-api
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d28115a919f9 tn-api "/start.sh" 11 seconds ago Up 8 seconds 8000/tcp festive_darwin
sudo docker logs [container id]
Starting Django
(it doesn't print the whole:
Performing system checks...
System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.)
I changed several things, and although it doesn't work locally, it seems to work fine when deployed to AWS. I still don't get the feedback I used to get, such as below, but that's OK. I can hit the server and it works. Thank you all for your help.
Performing system checks...
System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
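A plausible explanation for the missing startup output (an assumption; it isn't confirmed in the question): Python buffers stdout when it isn't attached to a TTY, so runserver's banner can sit in the buffer instead of reaching docker logs. The usual fix is to disable buffering in the Dockerfile:
# force Python to flush stdout/stderr straight to the container log
ENV PYTHONUNBUFFERED=1
or to start the process with python -u.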
It looks like the path is wrong for the manage.py script in /start.sh.
Your start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Seeing that you set WORKDIR to your project directory in the Dockerfile, the start.sh script is actually run from inside the project directory, which means it is actually doing this:
cd /TN_API # WORKDIR directive in Dockerfile
echo Starting Django # from the start.sh script
cd /TN_API/TN_API # looking for TN_API within your current pwd
exec python /TN_API/manage.py runserver 0.0.0.0:8000 # goes up a level (..) to look for manage.py
So it could be that your context for running runserver is off.
You can avoid this path jumping by rewriting your Dockerfile to include a CMD directive as follows:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here python manage.py runserver 0.0.0.0:8000 will work, since you have already set WORKDIR to your project directory. So you wouldn't necessarily need the start.sh script.
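One more observation from the question's commands (not part of the original answer): sudo docker run -d tn-api never publishes port 8000, and the docker ps output accordingly shows "8000/tcp" with no host mapping. EXPOSE only documents the port; to reach the server from the host you would also run:
sudo docker run -d -p 8000:8000 tn-api
after which docker ps shows a mapping like "0.0.0.0:8000->8000/tcp".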

'docker exec' environment different from 'docker run'/'docker attach' environment?

I recently tried creating a Docker environment for developing Django projects.
I've noticed that files created or copied via the Dockerfile are only accessible when using 'docker run' or 'docker attach'; with the latter, you can go in and see whatever files you've moved, in their appropriate places.
However, when running 'docker exec', the container doesn't seem to show any trace of those files whatsoever. Why is this happening?
Dockerfile
############################################################
# Dockerfile to run a Django-based web application
# Based on an Ubuntu Image
############################################################
# Set the base image to Ubuntu
FROM ubuntu:14.04
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=hello_django
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/srv
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/srv/hello_django
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN sudo mkdir media static logs
# Copy application source code to DOCKYARD_SRVPROJ
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Port to expose
EXPOSE 8000
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
With the above Dockerfile, my source folder hello_django is copied to its appropriate place in /srv/hello_django, and my docker-entrypoint.sh is moved to the root successfully as well.
I can confirm this in 'docker run -it bash' / 'docker attach', but I can't access any of those files through 'docker exec -it bash', for example.
(Dockerfile obtained from this tutorial).
Problem Demonstration (screenshot omitted)
I'm running docker through vagrant, if that's relevant.
Vagrantfile
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 8000, host: 8000
config.vm.provision "docker" do |d|
  d.pull_images "django"
end
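A point of Docker semantics that may explain the difference (a general note, not a diagnosis of this specific setup): docker run creates a brand-new container from an image, while docker exec runs a command inside an already-running container. Files baked in at build time exist in every container created from that image, so if exec shows a different filesystem, the exec is most likely targeting a different container, or one whose paths are shadowed by a volume mount. For example (container/image names hypothetical):
# new container from the image: the baked-in files are there
docker run -it my-django-image bash
# runs bash inside an EXISTING container; what you see depends
# entirely on which container this is and what is mounted over it
docker exec -it <container-name-or-id> bash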

Files created inside docker are write protected on host

I am using Docker containers for Rails and Ember. I am mounting the source from my local machine into the container. All the changes I make locally are reflected in the container.
Now I want to use generators to create files. The files are created, but they are write-protected on my machine.
When I do docker-compose run frontend bash, I get a root@061e4159d4ef:/frontend# superuser prompt inside the container. I can create files in this mode, but those files are write-protected on my host.
I have also tried docker-compose run --user "$(id -u):$(id -g)" frontend bash, which gives me I have no name!@31bea5ae977c:/frontend$; I am unable to create any file in this mode. Below is the error message that I get.
I have no name!@31bea5ae977c:/frontend$ ember g template about
/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:90
throw err0;
^
Error: EACCES: permission denied, mkdir '/.config'
at Error (native)
at Object.fs.mkdirSync (fs.js:916:18)
at sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:71:13)
at Function.sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:77:24)
at Object.create.all.get (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:39:13)
at Object.Configstore (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:28:44)
at clientId (/frontend/node_modules/ember-cli/lib/cli/index.js:22:21)
at module.exports (/frontend/node_modules/ember-cli/lib/cli/index.js:65:19)
at /usr/local/lib/node_modules/ember-cli/bin/ember:26:3
at /usr/local/lib/node_modules/ember-cli/node_modules/resolve/lib/async.js:44:21
Here is my Dockerfile:
FROM node:6.2
ENV INSTALL_PATH /frontend
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Copy package.json separately so it's recreated when package.json
# changes.
COPY package.json ./package.json
RUN npm install
COPY . $INSTALL_PATH
RUN npm install -g phantomjs bower ember-cli ;\
bower --allow-root install
EXPOSE 4200
EXPOSE 49152
CMD [ "ember", "server" ]
Here is my docker-compose.yml file; please note it is not in the current directory, but in the parent.
frontend:
  build: "frontend/"
  dockerfile: "Dockerfile"
  environment:
    - EMBER_ENV=development
  ports:
    - "4200:4200"
    - "49152:49152"
  volumes:
    - ./frontend:/frontend
I want to know: how can I use generators? I am new to learning Docker. Any help is appreciated.
You get the I have no name! because of this: $(id -u):$(id -g)
The user ID and group ID on your host are not linked to any user in your container.
Solution:
Execute chown -R UID:GID /frontend inside the container if it's already running and you cannot stop it for some reason. Otherwise, you could just run the chown command on the host and then start your container again. Note that UID and GID must belong to a user inside the container.
Example: chown -R 101:101 /frontend, where 101:101 is the UID:GID of www-data.
If there is no user other than root in your container, you will have to create a new one. To do so, create a Dockerfile containing something like this:
FROM your_image_name
RUN useradd -ms /bin/bash newuser
More information about Dockerfiles can be found here or just by googling it.
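An extension of that idea, which often resolves exactly the EACCES: permission denied, mkdir '/.config' error above (a sketch assuming your host user is UID 1000; check with id -u and id -g): create the container user with the same UID as your host user, so files generated inside the bind mount come out owned by you.
FROM node:6.2
# hypothetical user; 1000 is assumed to match the host user's UID
RUN useradd -m -u 1000 -s /bin/bash newuser
# give the user a writable HOME so tools like configstore can mkdir ~/.config
ENV HOME=/home/newuser
USER newuser
The mkdir '/.config' failure under --user "$(id -u):$(id -g)" happens because the anonymous user's HOME is /, so configstore tries to write to /.config; even just passing -e HOME=/tmp to docker-compose run is a common workaround.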