Unable to find user django-user: no matching entries in passwd file - django

I was following a tutorial on Docker and Django.
It's not really hard, but at one point I hit this error. I understand the error given by Docker, but I don't understand what is wrong in the Dockerfile.
Dockerfile:
FROM python:3.9-alpine
COPY ./requirements.txt /requirements.txt
COPY ./app /app
WORKDIR /app
RUN python -m venv /py && \
/py/bin/pip install --upgrade pip && \
/py/bin/pip install -r /requirements.txt && \
adduser --disabled-password --no-create-home django-user
ENV PATH="/py/bin:$PATH"
USER django-user
And here is the error:
Creating django-deploy-gap_app_run ... done
Error response from daemon: unable to find user django-user: no matching entries in passwd file
Any help is welcome.
Thanks!

By removing the image and running docker-compose again, the error disappeared.
I suppose I did something wrong the first time, and as long as I kept composing with the old image the problem just kept popping up.
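If anyone else hits this, forcing a clean rebuild is a reasonable sketch of the fix (service and image names depend on your docker-compose.yml):
# Remove the stale image, then rebuild from scratch
docker-compose down --rmi all
docker-compose build --no-cache
docker-compose up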

/entrypoint.sh: line 8: syntax error: unexpected end of file

The Docker project was created on a Linux machine; I'm running Windows and I can't get docker-compose up to work. I've read through Stack Overflow's answers and so far I've tried the following (none have worked):
Using Visual Studio Code, I saved the file as "LF" instead of CRLF
Deleted the file entirely, created a new one using Visual Studio Code, and typed the contents in by hand
Cut the entire file, pasted it into Notepad so that formatting gets cleared, then copied and pasted it back
Added various forms of #!/bin/bash to the start of entrypoint.sh
Changed the Dockerfile to use COPY instead of ADD
At this point I'm not sure what else to try. Any ideas?
Edit
entrypoint.sh
if [ "$1" == 'celery' ]; then
celery -A vicmun worker -l info --uid=celery --gid=celery
else
./../wait_for_it.sh db:5433 --timeout=10
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
fi
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
ARG APP_ENV=${APP_ENV}
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
ADD entrypoint-${APP_ENV}.sh /entrypoint.sh
ADD wait_for_it.sh /wait_for_it.sh
RUN addgroup --system celery && adduser --system --ingroup celery celery
RUN ["chmod", "+x", "/wait_for_it.sh"]
RUN apt-get -y update
RUN apt-get -y install ffmpeg
RUN pip install -r requirements.txt
ENTRYPOINT ["bash", "/entrypoint.sh"]
I don't know your config, but I could resolve that problem by adding a CMD instruction.
In my case, I could execute a script with Docker as follows:
Dockerfile
FROM python:3.10-alpine3.15
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apk update \
&& apk add --no-cache gcc musl-dev postgresql-dev python3-dev libffi-dev \
&& pip install --upgrade pip
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD [ "sh", "entrypoint.sh" ]
entrypoint.sh
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
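With that in place, the container can be built and started roughly like this (the image name is a placeholder):
docker build -t django-app .
docker run --rm -p 8000:8000 django-app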
Well, I feel the cringe for this. Turns out the solution was something I had already done, but it didn't go through until I rebuilt with the --no-cache option.
The solution was to:
save the file as "LF" instead of CRLF in Visual Studio Code
and run docker-compose build --no-cache
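For anyone who prefers to fix the line endings from the command line instead of the editor, a minimal sketch (run from Git Bash or WSL; assumes entrypoint.sh sits next to the Dockerfile):
# Strip the carriage returns that break sh/bash parsing
sed -i 's/\r$//' entrypoint.sh
docker-compose build --no-cache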

Dataflow with python flex template - launcher timeout

I'm trying to run my Python Dataflow job with a flex template. The job works fine locally when I run it with the direct runner (without the flex template); however, when I try to run it with the flex template, the job gets stuck in the "Queued" status for a while and then fails with a timeout.
Here are some of the logs I found in the GCE console:
INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/local/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', '/dataflow/template/requirements.txt', '--exists-action', 'i', '--no-binary', ':all:'
Shutting down the GCE instance, launcher-202011121540156428385273524285797, used for launching.
Timeout in polling result file: gs://my_bucket/staging/template_launches/2020-11-12_15_40_15-6428385273524285797/operation_result.
Possible causes are:
1. Your launch takes too long time to finish. Please check the logs on stackdriver.
2. Service my_service_account@developer.gserviceaccount.com may not have enough permissions to pull container image gcr.io/indigo-computer-272415/samples/dataflow/streaming-beam-py:latest or create new objects in gs://my_bucket/staging/template_launches/2020-11-12_15_40_15-6428385273524285797/operation_result.
3. Transient errors occurred, please try again.
For 1, I see no useful log. For 2, the service account is the default service account, so it should have all the permissions.
How can I debug this further?
Here is my Dockerfile:
FROM gcr.io/dataflow-templates-base/python3-template-launcher-base
ARG WORKDIR=/dataflow/template
RUN mkdir -p ${WORKDIR}
WORKDIR ${WORKDIR}
ADD localdeps localdeps
COPY requirements.txt .
COPY main.py .
COPY setup.py .
COPY bq_field_pb2.py .
COPY bq_table_pb2.py .
COPY core_pb2.py .
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="${WORKDIR}/requirements.txt"
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/main.py"
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py"
RUN pip install -U --no-cache-dir -r ./requirements.txt
I'm following this guide - https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates
A possible cause of this issue can be found within the requirements.txt file. If you are trying to install apache-beam within the requirements file, the flex template will experience exactly the issue you are describing: jobs stay some time in the Queued state and finally fail with "Timeout in polling result".
The reason is that they are affected by this issue. It only affects flex templates; the jobs run properly locally or with standard templates.
The solution is to install apache-beam separately in the Dockerfile:
RUN pip install -U apache-beam==<your desired version>
RUN pip install -U -r ./requirements.txt
Download the requirements to speed up launching the Dataflow job.
FROM gcr.io/dataflow-templates-base/python3-template-launcher-base
ARG WORKDIR=/dataflow/template
RUN mkdir -p ${WORKDIR}
WORKDIR ${WORKDIR}
COPY . .
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/main.py"
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py"
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="${WORKDIR}/requirements.txt"
RUN apt-get update \
# Upgrade pip and install the requirements.
&& pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \
# Download the requirements to speed up launching the Dataflow job.
&& pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE
# Since we already downloaded all the dependencies, there's no need to rebuild everything.
ENV PIP_NO_DEPS=True
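Once the image builds cleanly, the template can be created and launched roughly like this (bucket, project, image path, and region are placeholders):
gcloud dataflow flex-template build gs://my_bucket/templates/streaming-beam-py.json \
    --image gcr.io/my-project/samples/dataflow/streaming-beam-py:latest \
    --sdk-language PYTHON
gcloud dataflow flex-template run "streaming-beam-$(date +%Y%m%d-%H%M%S)" \
    --template-file-gcs-location gs://my_bucket/templates/streaming-beam-py.json \
    --region us-central1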

Docker error when containerizing app in Google Cloud Run

I am trying to run transformers from Hugging Face in Google Cloud Run.
My first idea was to build one of the Dockerfiles provided by Hugging Face, but it seems that is not possible.
Any ideas on how to get around this error?
Step 6/9 : WORKDIR /workspace
---> Running in xxx
Removing intermediate container xxx
---> xxx
Step 7/9 : COPY . transformers/
---> xxx
Step 8/9 : RUN cd transformers/ && python3 -m pip install --no-cache-dir .
---> Running in xxx
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
ERROR: (gcloud.builds.submit) build xxx completed with status "FAILURE"
Dockerfile from Hugging Face:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
LABEL maintainer="Hugging Face"
LABEL repository="transformers"
RUN apt update && \
apt install -y bash \
build-essential \
git \
curl \
ca-certificates \
python3 \
python3-pip && \
rm -rf /var/lib/apt/lists
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
mkl \
tensorflow
WORKDIR /workspace
COPY . transformers/
RUN cd transformers/ && \
python3 -m pip install --no-cache-dir .
CMD ["/bin/bash"]
.dockerignore file from Google Cloud Run documentation:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
---- Edit:
Managed to get it working based on the answer from Dustin. I basically:
left the Dockerfile in the root folder, together with the transformers folder;
updated the COPY line in the Dockerfile to:
COPY . ./
The error is:
Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
This is due to these two lines in your Dockerfile:
COPY . transformers/
RUN cd transformers/ && \
python3 -m pip install --no-cache-dir .
This attempts to copy the local directory containing the Dockerfile into the container, and then install it as a Python project.
It looks like the Dockerfile expects to be run from the repository root of https://github.com/huggingface/transformers. You should clone the repo, move the Dockerfile you want to build into the root, and then build again.
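A minimal sketch of those steps (the project ID and the in-repo path to the Dockerfile are assumptions; pick the Dockerfile you actually need):
git clone https://github.com/huggingface/transformers.git
cd transformers
# Copy the chosen Dockerfile to the repository root so COPY . transformers/ sees the sources
cp docker/transformers-gpu/Dockerfile .
gcloud builds submit --tag gcr.io/my-project/transformers .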

Can't modify files created in docker container

I've got a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
The files are created successfully and synchronized between host and container.
However in my host machine I am not able to modify any file created in container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
apk update; \
apk upgrade; \
apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
apk add --virtual .build-deps build-base linux-headers; \
apk del .build-deps; \
pip install pip -U; \
pip --no-cache-dir install -r requirements.txt; \
rm -rf /var/cache/apk/*; \
adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time I create it.
For some reason there is one project in which files created in the container are editable from the host machine, but I cannot find any difference between the two projects.
By default, Docker containers run as root. This has two issues:
In development, as you can see, the files are owned by root, which is often not what you want.
In production, this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the container's user to your host user.
For production, you'll want to create a user inside the image; see the page linked above for details.
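A minimal sketch of that production approach (the user name and UID are placeholders):
FROM python:3.8-alpine3.10
# Create an unprivileged user and hand the app files to it
RUN adduser -D -u 1000 appuser
COPY --chown=appuser:appuser . /app
WORKDIR /app
USER appuser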
Usually, files created inside a Docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could also add this as a RUN instruction at the end of your Dockerfile.
Edit:
If you are using docker-compose, you can set the user for your container:
services:
  container:
    user: ${CURRENT_HOST_USER}
and have CURRENT_HOST_USER be equal to $(id -u):$(id -g).
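For example (a sketch; the variable name matches the compose snippet above):
export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up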
The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh

Docker image command python returning non-zero code

So I'm trying to build a new Docker image with Python 2.7 and pip for Python 2.7; however, I'm getting "The command '/bin/sh -c pip2 install -r requirements.txt' returned a non-zero code: 1" when trying to build the image.
FROM colstrom/python:legacy
MAINTAINER **REDACTED**
RUN pip2 install -r requirements.txt
CMD ["python2.7", "parser.py"]
Any ideas?
You have to COPY/ADD or mount your data (at least requirements.txt and parser.py) into the container.
Assuming your Dockerfile resides at the root directory of your project:
FROM colstrom/python:legacy
MAINTAINER **REDACTED**
COPY . .
RUN pip2 install -r requirements.txt
CMD ["python2.7", "parser.py"]
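With requirements.txt and parser.py sitting next to the Dockerfile, building and running then look roughly like this (the image tag is a placeholder):
docker build -t py27-parser .
docker run --rm py27-parser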