I have a project which was decomposed into 3 different microservices, let's call them A, B, and C.
Each of services A and B has its own Pipfile and Dockerfile (and the corresponding Pipfile.lock).
For example:
Service A Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[packages]
pandas = "*"
nltk = "*"
Service B Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[packages]
pandas = "*"
tensorflow = "*"
and in each of the separate Dockerfiles (one for A and one for B) we deploy using:
RUN pipenv install --deploy --system --ignore-pipfile
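For context, a minimal sketch of what one of those per-service Dockerfiles might look like around that line (the base image and paths here are illustrative assumptions, not the real files):
FROM python:3.8
WORKDIR /A/
# copy only the dependency manifests first
COPY Pipfile Pipfile.lock ./
RUN pip install pipenv
# install the locked dependencies system-wide in the image
RUN pipenv install --deploy --system --ignore-pipfile
# then copy the service's source code
COPY . .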
Now, service C uses the code of both services (A and B), so it holds only a Dockerfile, which contains the lines:
WORKDIR ./A/
RUN pipenv install --deploy --system --ignore-pipfile
WORKDIR ./B/
RUN pipenv install --deploy --system --ignore-pipfile
which installs the dependencies of both services.
However, this architecture seems wrong: the pandas version must be the same in both Pipfiles, since the two code bases meet in C, and in real life there are a lot of shared packages like pandas.
One solution I thought of is to pin the pandas version in each of the Pipfiles, but then why should I write it twice? It goes against every coder instinct I have.
Another solution is to create a Pipfile for C which contains the joined dependencies (pandas), and in the Dockerfiles of both A and B deploy the corresponding Pipfile.lock together with C's Pipfile.lock. But that means that during development pipenv would need to work with two Pipfiles and two Pipfile.locks, which is not supported.
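For illustration, the deploy side of that second idea might look roughly like this in service A's Dockerfile (just a sketch; the base image and the C/ and A/ paths are assumptions):
FROM python:3.8
RUN pip install pipenv
# shared dependencies, locked once in C's Pipfile
WORKDIR /C/
COPY C/Pipfile C/Pipfile.lock ./
RUN pipenv install --deploy --system --ignore-pipfile
# A's own dependencies on top of the shared ones
WORKDIR /A/
COPY A/Pipfile A/Pipfile.lock ./
RUN pipenv install --deploy --system --ignore-pipfile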
Any ideas?
After running a Docker image of my Django app, I noticed that downloading files no longer works. The only thing I get is a copy of the website page I am currently on, not the requested file.
Locally, it works fine.
Here is how my code is organized:
In main/views.py :
path_to_report = f"media/Reports/{request.user.username}/{request.user.username}{now.hour}{now.minute}{now.second}.txt"
return render(request, "main/result.html", {"dic_files": dic_files, "nbr":len(files), "dic_small":dic_small, "dic_projects":dic_projects, "path_to_report":f"/app/{path_to_report}"})
In main/result.html
<a href=/{{path_to_report}} download>
<button class="btn btn-success" name="rapport" value="rapport"> Télécharger votre rapport</button>
</a>
Here is my Dockerfile:
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
EXPOSE 8000
## apt-transport-https installation
RUN apt-get update && apt-get install -y apt-transport-https ca-certificates gnupg
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip3 install --upgrade pip setuptools wheel
RUN pip3 install -r requirements.txt
RUN python manage.py makemigrations
RUN python manage.py migrate
RUN python manage.py collectstatic --no-input
ENTRYPOINT ["gunicorn", "myteam.wsgi:application", "--bind=0.0.0.0:8000", "--workers=4", "--timeout=300", "--log-level=debug"]
In your main/views.py, try replacing:
"path_to_report":f"/app/{path_to_report}"
with:
"path_to_report":path_to_report
The /app prefix is the app's path inside the container's filesystem (APP_HOME), not part of a URL the browser can request, and your template already prepends the leading /.
I didn't know what the benefit of Docker is over a virtual environment. Is it necessary to use a virtual environment and Docker at the same time?
Before today, I basically used a virtual environment to create Django projects. But today my friend recommended that I use Docker. I'm confused: what should I use?
I used this command to create virtual environment
python3 -m venv virtual_environment_name
Is this the best way to create a virtual environment, or should I use another way?
I would rather use pipenv to replace virtualenv in the local development environment, and Docker without a virtual environment in production. Here is my Dockerfile (running Django with gunicorn):
FROM python:3.6
ENV PYTHONUNBUFFERED 1
# switch system download source
RUN python -c "s='mirrors.163.com';import re;from pathlib import Path;p=Path('/etc/apt/sources.list');p.write_text(re.sub(r'(deb|security)\.debian\.org', s, p.read_text()))"
RUN apt-get update
# aliyun source for pip
RUN python -c "s='mirrors.aliyun.com';from pathlib import Path;p=Path.home()/'.pip';p.mkdir();(p/'pip.conf').write_text(f'[global]\nindex-url=https://{s}/pypi/simple\n[install]\ntrusted-host={s}\n')"
# Optional: install and conf vim, install ipython
RUN apt-get install -y vim
RUN wget https://raw.githubusercontent.com/waketzheng/carstino/master/.vimrc
RUN pip install ipython
# copy source code to docker image
WORKDIR /carrot
ADD . .
# required packages for carrot
RUN apt-get install -y ruby-sass
# install gunicorn and Pipfile
RUN pip install pipenv gunicorn
RUN pipenv install --system
RUN python manage.py collectstatic --noinput
# database name and rpc server ip
ENV POSTGRES_HOST=db
ENV RPC_SERVER_IP=172.21.0.2
EXPOSE 9000
# the PASSWORD env should be replaced with a real one
CMD ["gunicorn", "--bind", ":9000", "--env", "PASSWORD=123456", "--error-logfile", "gunicorn.error", "--log-file", "gunicorn.log", "carrot.wsgi:application"]
When I was trying to dockerize my Django app, I followed a tutorial telling me to structure my Dockerfile like this:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
COPY . /code/
WORKDIR /code/
RUN pip install pipenv
RUN pipenv install --system
EXPOSE 8000
After I saved that and ran docker build ., the system threw this error:
Warning: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I think it is complaining about the --system flag above, but the tutorial says it's crucial to have it so that my packages are applied to the entire Docker container. I'm new to Docker and even to pipenv, because I took over a previous person's code and I'm not sure where their Pipfile is, or even whether they have one. If you have any insights on how to fix this error, thank you in advance.
pipenv --rm
This helped me! I was starting "Django for Beginners" and at the very beginning got this error (I had accidentally deleted the Pipfile & Pipfile.lock).
Your warning is telling you that there is no Pipfile in your project dir.
--system is intended to be used for pre-existing Pipfile.
So before running
docker build .
run
pipenv install
in your project folder
The above solution didn't work for me.
After installing in the virtual env, I also had to explicitly include the Pipfile and Pipfile.lock in my Dockerfile:
COPY Pipfile* .
# Install dependencies
RUN pip install pipenv && pipenv install --system
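For context, here is a minimal sketch of where those lines fit, reusing the base image and layout from the question's Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
WORKDIR /code/
# copy the dependency files first so pipenv finds a pre-existing Pipfile
COPY Pipfile* ./
# Install dependencies
RUN pip install pipenv && pipenv install --system
# copy the rest of the project afterwards, keeping the dependency layer cached
COPY . /code/
EXPOSE 8000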
Then rebuild with docker compose:
docker-compose build
You can find more info in this thread
I had this error in pipenv:
ERROR:: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
Try:
pipenv check or python3 -m pipenv check
Be careful when using Docker bind mounts!
Summary: In my case, I was using bind mounts in my dev environment, and mounting a bind mount onto a non-empty directory hides the contents of the container's directory. That removed the Pipfile and Pipfile.lock from sight, which produced the error mentioned above when running the container.
Explanation
Directory structure on the host
> ls project/
docker-compose.yml Dockerfile Pipfile Pipfile.lock app/
Dockerfile
My Dockerfile would copy the contents of the project and then install the dependencies with pipenv, like this:
FROM python:3.8
# ...
COPY Pipfile Pipfile.lock /app/
RUN pipenv install --deploy --ignore-pipfile
COPY ./app /app/
CMD ["pipenv", "run", "uvicorn", "etc..", "--reload"]
Pipfile, Pipfile.lock and the code of ./app would all be in the same /app directory inside the container.
docker-compose.yml
I wanted uvicorn to hot-reload, so I bind-mounted the host's ./app directory onto the container's /app directory.
services:
  app:
    # ...
    volumes:
      - type: bind
        source: ./app
        target: /app
This meant that when I changed the code in /app, the code in the container's /app directory would also change.
Effects
The side effect of this bind mount is that the content mounted on /app "obscured" the content previously copied in there.
Container's content with the bind mount:
> ls app/
code1.py code2.py
Container's content without the bind mount:
> ls app/
Pipfile Pipfile.lock code1.py code2.py
Solution
Either make sure that you include the Pipfile and Pipfile.lock as well when mounting the bind mount, or make sure that you COPY these 2 files to a directory that won't get overwritten by a bind mount.
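A rough sketch of the second option, keeping the dependency files out of the bind-mounted directory (the /project path and the pip install pipenv step are assumptions):
FROM python:3.8
# the dependency files live in /project, which is never bind-mounted
WORKDIR /project
COPY Pipfile Pipfile.lock ./
RUN pip install pipenv && pipenv install --deploy --ignore-pipfile
# only /project/app gets bind-mounted in development, so nothing hides the Pipfile
COPY ./app ./app
CMD ["pipenv", "run", "uvicorn", "etc..", "--reload"]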
I am absolutely new to Docker. I have an existing Django REST project whose structure looks like the following:
My requirements.txt:
django==1.8.8
djangorestframework
markdown
django-filter
django-rest-auth
django-cors-headers
django-secure
django-sslserver
django-rest-auth[extras]
Normally I create a virtual env > activate it > run pip install -r requirements.txt, and additionally I need easy_install mysql-python to get started.
I want to dockerize this project. Can someone help me build a simple Dockerfile for this project?
As @DriesDeRydt suggests in his comment, in the provided link there is a very simple example of a Dockerfile which installs requirements:
Add the following content to the Dockerfile.
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
This Dockerfile starts with a Python 2.7 base image. The base image is
modified by adding a new code directory. The base image is further
modified by installing the Python requirements defined in the
requirements.txt file.
You can change the image to fit the Python version you need by pulling the corresponding Python image. For example, change:
FROM python:2.7 to FROM python:3.5 or FROM python:latest
But as the above Dockerfile stands, and assuming that you place it inside the server folder, it will work for a test case.
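Since the question also mentions needing mysql-python, here is a hedged sketch of how that same Dockerfile could be extended to cover it (the apt package names are assumptions for a Debian-based Python image):
FROM python:2.7
ENV PYTHONUNBUFFERED 1
# system libraries needed to compile the MySQL-python package
RUN apt-get update && apt-get install -y gcc default-libmysqlclient-dev
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
# replaces the manual easy_install mysql-python step
RUN pip install MySQL-python
ADD . /code/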
Here is the Dockerfile documentation for further reading.
I have a Django project running in docker.
When I add some packages to my requirements.txt file, they don't get downloaded when I run docker-compose up.
Here are the relevant commands from my Dockerfile:
ADD ./evdc/requirements.txt /opt/evdc-venv/
ADD ./env-requirements.txt /opt/evdc-venv/
# Activate venv and install requirements
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/requirements.txt
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/env-requirements.txt
It seems docker is using a cached version of my requirements.txt file, as when I shell into the container, the requirements.txt file in /opt/evdc-venv/requirements.txt does not include the new packages.
Is there some way I can delete this cached version of requirements.txt?
Dev OS: Windows 10
Docker: 17.03.0-ce
docker-compose: 1.11.2
docker-compose up doesn't build a new image unless you have the build section defined with your Dockerfile and you pass it the --build parameter. Without that, it will reuse the existing image.
If your docker-compose.yml does not include the build section, and you build your images with docker build ..., then after you recreate your image, a docker-compose up will recreate the impacted containers.