How to translate flask run to Dockerfile?

So I'm trying to learn how to containerize flask apps and so far I've understood two ways of firing up a flask app locally:
One is to have this code in the main file:
if __name__ == '__main__':
    APP.run(host='0.0.0.0', port=8080, debug=False)
and run with
python3 main.py
The other is to remove the above code from main.py, and just define an environment variable and do flask run:
export FLASK_APP=main.py
flask run
I tried to convert both of these methods into a Dockerfile:
ENTRYPOINT["python3", "main.py"]
which works quite well for the first. However when I try to do something like:
ENV FLASK_APP "./app/main.py"
ENTRYPOINT ["flask", "run"]
I am not able to reach my server via the browser. The container starts up fine; it's just not reachable. One trick that does work is adding the host address to the entrypoint, like so:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
I am not sure why I have to pass --host to the entrypoint when locally I can do without it. Another odd thing I noticed was that if I set the host to --host=127.0.0.1, it still doesn't work.
Can someone explain what is really happening here? Either I don't understand ENTRYPOINT correctly, or maybe Flask... or maybe both.
EDIT: The whole Dockerfile for reference is:
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ENV FLASK_APP "/app/main.py"
ENTRYPOINT ["flask", "run", "--host=127.0.0.1"]

Try defining the FLASK_APP env variable via an absolute path.
Or put this line in your Dockerfile:
WORKDIR /path/to/dir/that/contains/main.py/file
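For example, a sketch combining both suggestions, with paths taken from the question's layout:
ENV FLASK_APP /app/main.py
WORKDIR /app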

Oh, sorry. The host in the ENTRYPOINT statement must be 0.0.0.0:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
Inside the container, 127.0.0.1 is the container's own loopback interface, which the host cannot reach; binding to 0.0.0.0 makes Flask listen on all of the container's interfaces, including the one Docker forwards published ports to. That is also why --host isn't needed locally: there, your browser and the server share the same loopback.
And do not forget to publish port 5000 outside via the -p option:
docker run -p 5000:5000 <your image>
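For completeness, a minimal sketch of the corrected Dockerfile from the question (flask run listens on port 5000 by default):
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ENV FLASK_APP /app/main.py
# bind to all interfaces so Docker's published port can reach the server
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]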

I believe this task would be better accomplished using CMD as opposed to ENTRYPOINT.
(You should also define the working directory before running the COPY command.)
For example, your Dockerfile should look something like:
FROM python:stretch
WORKDIR /app
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
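To try it out, something like this (the image tag flask-demo is arbitrary; main.py binds to port 8080 as in the question):
docker build -t flask-demo .
docker run -p 8080:8080 flask-demo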

Related

Cannot launch my Django project with Gunicorn inside Docker

I'm new to Docker.
Visual Studio Code has an extension named Remote - Containers, and I use it to dockerize a Django project.
As the first step, the extension creates a Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10.5
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
# File wsgi.py was not found. Please enter the Python path to wsgi file.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
Then it adds a Development Container configuration file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/docker-existing-dockerfile
{
"name": "django-4.0.5",
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerFile": "../Dockerfile"
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created - for example installing curl.
// "postCreateCommand": "apt-get update && apt-get install -y curl",
// Uncomment when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
// Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
// "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
// Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
// "remoteUser": "vscode"
}
And finally I run the command Rebuild and Reopen in Container.
After a few seconds the container is running and I see a command prompt. Considering that at the end of the Dockerfile there's this line:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
...the Django application should be running, but it isn't, and http://127.0.0.1:8000 refuses to connect.
But if I run the same command at the command prompt:
gunicorn --bind 0.0.0.0:8000 project.wsgi
it works just fine.
Why? All the files were created by the VS Code extension, and the same instructions are posted on the official VS Code website, yet it's not working.
P.S:
When the container loaded for the first time, I created a project called 'project', so the folder structure is like this:
[screenshot of the folder structure omitted]

Django Fargate. The requested resource was not found on this server

I have seen "The requested resource was not found on this server" like a thousand times, but none of the answers match my problem.
I made this app from scratch and added a Dockerfile for it.
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /www
COPY requirements.txt /www/
RUN pip install -r requirements.txt
COPY . /www/
EXPOSE 80
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80" ]
Then I run this:
docker build -t pasquin-django-mainapp .
docker run pasquin-django-mainapp
I did that in my local environment; not working. I pushed this image to ECR and then used it in ECS + Fargate, with the same bad result. I don't know what else to do.
Somebody, help! Thanks!
PS: From docker-compose it works just marvelously!

ModuleNotFoundError: No module named 'flask_sqlalchemy' error observed while running the docker image

I am getting ModuleNotFoundError: No module named 'flask_sqlalchemy' when I docker run the built Docker image. The same Flask app runs fine from the terminal; why not in Docker?
FROM python:3.6
ADD . /app
WORKDIR /app
RUN pip install flask gunicorn
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app"]
Good practice is to save a file with the installed pip packages, e.g. in requirements.txt (preferably with only the needed packages, e.g. when you are using pyenv).
This is done with pip freeze > requirements.txt (run on the local machine, in a terminal).
Then, all you should do is replace RUN pip install flask gunicorn with RUN pip install -r requirements.txt, and all the packages installed on the local machine will be installed in the Docker image too.
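A sketch of the adjusted Dockerfile, assuming requirements.txt was generated by pip freeze and lists flask_sqlalchemy and gunicorn:
FROM python:3.6
ADD . /app
WORKDIR /app
# installs everything the app imports, including flask_sqlalchemy
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app"]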

Do I need to use virtual environment while using docker?

I don't know what the benefit of Docker is over a virtual environment. Is it necessary to use a virtual environment and Docker at the same time?
Until today I basically used a virtual environment to create Django projects, but today my friend recommended that I use Docker. I'm confused about what I should use.
I used this command to create virtual environment
python3 -m venv virtual_environment_name
Is this the best way to create a virtual environment, or should I use another way?
I would rather use pipenv to replace virtualenv in the local development environment, and Docker without a virtual environment in production. Here is my Dockerfile (it runs Django with gunicorn):
FROM python:3.6
ENV PYTHONUNBUFFERED 1
# switch system download source
RUN python -c "s='mirrors.163.com';import re;from pathlib import Path;p=Path('/etc/apt/sources.list');p.write_text(re.sub(r'(deb|security)\.debian\.org', s, p.read_text()))"
RUN apt-get update
# aliyun source for pip
RUN python -c "s='mirrors.aliyun.com';from pathlib import Path;p=Path.home()/'.pip';p.mkdir();(p/'pip.conf').write_text(f'[global]\nindex-url=https://{s}/pypi/simple\n[install]\ntrusted-host={s}\n')"
# Optional: install and conf vim, install ipython
RUN apt-get install -y vim
RUN wget https://raw.githubusercontent.com/waketzheng/carstino/master/.vimrc
RUN pip install ipython
# copy source code to docker image
WORKDIR /carrot
ADD . .
# required packages for carrot
RUN apt-get install -y ruby-sass
# install gunicorn and Pipfile
RUN pip install pipenv gunicorn
RUN pipenv install --system
RUN python manage.py collectstatic --noinput
# database name and rpc server ip
ENV POSTGRES_HOST=db
ENV RPC_SERVER_IP=172.21.0.2
EXPOSE 9000
# the PASSWORD env should be replaced with a real one
CMD ["gunicorn", "--bind", ":9000", "--env", "PASSWORD=123456", "--error-logfile", "gunicorn.error", "--log-file", "gunicorn.log", "carrot.wsgi:application"]

Docker error with pipenv on django app: Warning: --system is intended to be used for pre-existing Pipfile

When I was trying to dockerize my django app, I followed a tutorial telling me to structure my Dockerfile like this
FROM python:3.6
ENV PYTHONUNBUFFERED 1
COPY . /code/
WORKDIR /code/
RUN pip install pipenv
RUN pipenv install --system
EXPOSE 8000
After I saved that and ran docker build .,
the system threw this error:
Warning: --system is intended to be used for pre-existing Pipfile
installation,not installation of specific packages. Aborting.
I think it is complaining about the --system flag above, but the tutorial says it's crucial to have it so that my packages are applied to the entire Docker container. I'm new to Docker and even to pipenv, because I took over a previous person's code, and I'm not sure where their Pipfile is or even whether they have one. If you have any insights on how to fix this error, thank you in advance.
pipenv --rm
This helped me! I was starting "Django for Beginners" and at the very beginning got this error (I had accidentally deleted Pipfile & Pipfile.lock).
Your warning is telling you that there is no Pipfile in your project dir.
--system is intended to be used for pre-existing Pipfile.
So before running
docker build .
run
pipenv install
in your project folder
The above solution didn't work for me.
After installing in the virtual env, I also had to explicitly copy Pipfile and Pipfile.lock in my Dockerfile:
COPY Pipfile* .
# Install dependencies
RUN pip install pipenv && pipenv install --system
Then rebuild with docker compose:
docker-compose build
You can find more info in this thread
If you get this error from pipenv:
ERROR: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
try:
pipenv check or python3 -m pipenv check
Be careful when using Docker bind mounts!
Summary: In my case, I was using bind mounts in my dev environment, and mounting a Docker bind mount onto a non-empty directory overwrites (shadows) the contents of the container's directory, removing the Pipfile and Pipfile.lock, which caused the error mentioned above when running the container.
Explanation
Directory structure on the host
> ls project/
docker-compose.yml Dockerfile Pipfile Pipfile.lock app/
Dockerfile
My Dockerfile would copy the contents of the project and then install the dependencies with pipenv, like this:
FROM python:3.8
# ...
COPY Pipfile Pipfile.lock /app/
RUN pipenv install --deploy --ignore-pipfile
COPY ./app /app/
CMD ["pipenv", "run", "uvicorn", "etc..", "--reload"]
Pipfile, Pipfile.lock and the code of ./app would all be in the same /app directory inside the container.
docker-compose.yml
I wanted uvicorn to hot-reload, so I mounted the host's ./app directory into the container's /app directory.
services:
  app:
    # ...
    volumes:
      - type: bind
        source: ./app
        target: /app
This meant that when I changed the code in /app, the code in the container's /app directory would also change.
Effects
The side effect of this bind mount is that the content mounted on /app "obscured" the content previously copied in there.
Container's content with the bind mount:
> ls app/
code1.py code2.py
Container's content without the bind mount:
> ls app/
Pipfile Pipfile.lock code1.py code2.py
Solution
Either make sure that you include the Pipfile and Pipfile.lock as well when mounting the bind mount, or make sure that you COPY these 2 files to a directory that won't get overwritten by a bind mount.
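For example, a sketch of the first option in docker-compose.yml, mounting the two files into place next to the code (paths follow the directory structure above; Docker supports single-file bind mounts):
services:
  app:
    # ...
    volumes:
      - type: bind
        source: ./app
        target: /app
      - type: bind
        source: ./Pipfile
        target: /app/Pipfile
      - type: bind
        source: ./Pipfile.lock
        target: /app/Pipfile.lock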