Files created inside Docker are write-protected on host - ember.js

I am using Docker containers for Rails and Ember. I am mounting the source from my local machine into the container, so all the changes I make locally are reflected in the container.
Now I want to use generators to create files. The files are created, but they are write-protected on my host machine.
When I do docker-compose run frontend bash, I get a root@061e4159d4ef:/frontend# superuser prompt inside the container. I can create files in this mode, but these files are write-protected on my host.
I have also tried docker-compose run --user "$(id -u):$(id -g)" frontend bash, which gives me the prompt I have no name!@31bea5ae977c:/frontend$, and I am unable to create any files in this mode. Below is the error message that I get.
I have no name!@31bea5ae977c:/frontend$ ember g template about
/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:90
throw err0;
^
Error: EACCES: permission denied, mkdir '/.config'
at Error (native)
at Object.fs.mkdirSync (fs.js:916:18)
at sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:71:13)
at Function.sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:77:24)
at Object.create.all.get (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:39:13)
at Object.Configstore (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:28:44)
at clientId (/frontend/node_modules/ember-cli/lib/cli/index.js:22:21)
at module.exports (/frontend/node_modules/ember-cli/lib/cli/index.js:65:19)
at /usr/local/lib/node_modules/ember-cli/bin/ember:26:3
at /usr/local/lib/node_modules/ember-cli/node_modules/resolve/lib/async.js:44:21
Here is my Dockerfile:
FROM node:6.2
ENV INSTALL_PATH /frontend
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Copy package.json separately so it's recreated when package.json
# changes.
COPY package.json ./package.json
RUN npm install
COPY . $INSTALL_PATH
RUN npm install -g phantomjs bower ember-cli ;\
bower --allow-root install
EXPOSE 4200
EXPOSE 49152
CMD [ "ember", "server" ]
Here is my docker-compose.yml file; please note it is not in the current directory, but in the parent.
frontend:
  build: "frontend/"
  dockerfile: "Dockerfile"
  environment:
    - EMBER_ENV=development
  ports:
    - "4200:4200"
    - "49152:49152"
  volumes:
    - ./frontend:/frontend
I want to know: how can I use generators? I am new to Docker, so any help is appreciated.

You get the I have no name! prompt because of this: $(id -u):$(id -g)
The user id and group id from your host are not mapped to any user in your container.
Solution:
Execute chown UID:GID -R /frontend inside the container if it is already running and you cannot stop it for some reason. Otherwise you could just run the chown command on the host and then start your container again. Note that UID and GID must belong to a user inside the container.
Example: chown 101:101 -R /frontend, where 101:101 is the UID:GID of www-data.
If there is no user other than root in your container, you will have to create a new one. To do so you must create a Dockerfile and put something like this in it:
FROM your_image_name
RUN useradd -ms /bin/bash newuser
More information about Dockerfiles can be found in the official Dockerfile reference or just by googlin' it.
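Applied to the Ember image above, a minimal sketch of that approach could look like this (the 1000:1000 uid/gid is an assumption; check yours with id -u and id -g on the host):

FROM node:6.2
# Create a user whose uid/gid match the host user (1000:1000 is an assumption;
# if the base image already ships a user with that uid, e.g. "node", reuse it instead).
RUN groupadd --gid 1000 ember \
    && useradd --create-home --uid 1000 --gid 1000 --shell /bin/bash ember
# ... the rest of the original Dockerfile (npm install, COPY, bower, etc.) ...
USER ember

Because this user exists in /etc/passwd, HOME should resolve to /home/ember, so ember-cli's configstore can write its .config data into a writable home directory instead of /, and files generated inside the mounted /frontend will be owned by 1000:1000 on the host.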

Related

Cannot launch my Django project with Gunicorn inside Docker

I'm new to Docker.
Visual Studio Code has an extension named Remote - Containers and I use it to dockerize a Django project.
For the first step the extension creates a Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10.5
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
# File wsgi.py was not found. Please enter the Python path to wsgi file.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
Then it adds Development Container Configuration file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/docker-existing-dockerfile
{
    "name": "django-4.0.5",
    // Sets the run context to one level up instead of the .devcontainer folder.
    "context": "..",
    // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
    "dockerFile": "../Dockerfile"
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
    // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
    // Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
    // "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    // "remoteUser": "vscode"
}
And finally I run the command Rebuild and Reopen in Container.
After a few seconds the container is running and I see a command prompt. Considering that at the end of Dockerfile there's this line:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
...the Django application should be running, but it isn't, and http://127.0.0.1:8000 refuses to connect.
But if I run the same command at the command prompt:
gunicorn --bind 0.0.0.0:8000 project.wsgi
it works just fine.
Why? All of the files were created by the VS Code extension, and the same instructions are posted on the official VS Code website, yet it's not working.
P.S.:
When the container loaded for the first time, I created a project called 'project', so the folder structure is like this:
[screenshot of the folder structure, not included here]

Setting USER in Dockerfile prevents saving file fields (eg. ImageField) in Django

I am trying to containerize Django with the Dockerfile and docker-compose.yml defined below. I built the Dockerfile as fiifidev/postgres:test for the compose file. Everything works fine. However, anytime I try to save a model with a file field (eg. ImageField or FileField), I get a permission error: PermissionError: [Errno 13] Permission denied.
I suspect I am not setting the appropriate permissions when creating the user (useradd) in the Dockerfile (not sure). But when I remove the USER instruction, everything works fine.
How can I fix this? Any help will be much appreciated.
FROM python:3.10-slim-bullseye as base
# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
FROM base AS python-deps
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc
# Install python dependencies in /.venv
COPY Pipfile .
COPY Pipfile.lock .
RUN PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy
FROM base AS runtime
# Copy virtual env from python-deps stage
COPY --from=python-deps /.venv /.venv
ENV PATH="/.venv/bin:$PATH"
# Create and switch to a new user
RUN useradd --create-home appuser
WORKDIR /home/appuser/src
USER appuser
# Install application into container
COPY --chown=appuser:appuser . .
version: "3.9"
services:
web:
image: fiifidev/postgres:test
command: sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py initiate_admin &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/home/appuser/src
networks:
postgres-network:
env_file:
- .env
ports:
- ${APP_PORT}:8000
restart: "on-failure"
networks:
postgres-network:
external: true
From Dockerfile:
WORKDIR /home/appuser/src
USER appuser
# Install application into container
COPY --chown=appuser:appuser . .
Here you are creating a src directory and copying your code into it. This is baked into the resulting image.
From docker-compose.yml:
volumes:
- .:/home/appuser/src
Here you are mounting the current directory on your host on top of the src directory. A mount takes precedence over the image's idea of what a directory contains, so this effectively means your COPY and chown have no effect. (Those files are still there in your image, but the mount hides them; they are not available.)
The behavior as far as permissions here will vary by platform. On Windows, I don't know what would happen. Using Docker Desktop on Mac it would "just work", because Docker Desktop more or less ignores the permissions in this case and makes everything happy.
On Linux, however, the file ownership inside the container and outside the container must match for you to be able to write to it. What's probably happening is that the files in your mounted directory are owned by a different uid (yours) than the 'appuser' in the container has. Therefore, you get permission errors because you don't have permission to write to the files/directory.
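You can see the mismatch directly; for example (a sketch, using the web service from the compose file above):

# On the host: show numeric ownership of the project files
ls -ln .
# Inside the container: show appuser's uid and gid
docker-compose run web id appuser

If the uid printed by ls -ln differs from appuser's uid, writes from inside the container will be denied on Linux.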
There are two solutions you could try:
1. Chown the files in the host directory (outside the container) so that they have the same uid that appuser has inside the container.
2. Or specify the uid of appuser when creating it (during the image build); give appuser the same UID as your user on the Linux host. It wouldn't matter that the Linux user has a different name; as long as the uid number is the same, you will have permission. (A sketch of this option follows below.)
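A minimal sketch of that second option, assuming your host user's uid is 1000 (check with id -u on the host):

# Create appuser with an explicit uid that matches the host user (1000 is an assumption).
RUN useradd --create-home --uid 1000 appuser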
EDIT, incorporating the comments: the OP ended up solving this in docker-compose.yml by adding:
user: "${UID}:${GID}"
This would set the user/group for the service to the UID and GID from the environment of the user calling docker-compose. That is, assuming that the shell has set the UID and GID environment variables; not all shells or OSes will behave in the exact same manner.
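Note that docker-compose only substitutes variables it can see in its environment or in an .env file next to docker-compose.yml, and bash in particular sets UID without exporting it. A hedged workaround is to pin the values in .env (1000 is an assumption; check id -u and id -g):

# .env (docker-compose reads this file automatically for substitution)
UID=1000
GID=1000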

Docker container not updating media folder from AWS S3 bucket

I'm having trouble getting a docker container to update the images (like .png's) on my local system.
The Docker container has a script in the Dockerfile that copies the images into a folder in the container. Then that folder gets copied into a new directory that is set up as a shared volume. This process is split between the Dockerfile for this container and a "command" entry in the docker-compose.yaml.
Everything seems to run fine; the output of the copy command looks right. I don't get any errors, and once the command stops running the container stops.
I've tried destroying the container and image completely and recreating it, but I still see the old images in the application. I'm guessing that it's not overwriting the existing images, but I don't know why.
Dockerfile:
FROM ubuntu:18.04
# Install packages
RUN apt-get update
RUN apt-get install -y apt-utils
RUN apt-get install -y curl unzip
# Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Load AWS Access Keys
COPY ./config /root/.aws/
COPY ./credentials /root/.aws/
RUN chmod 600 /root/.aws/config /root/.aws/credentials
# Download media
RUN mkdir -p /var/media
RUN aws s3 cp --recursive s3://images/ /var/media
Snippet from the docker-compose.yaml:
version: '3.1'
services:
  client:
    [[other stuff here]]
  media:
    image: [[our own image stored with AWS]]
    command: 'cp -R -v -f /var/media/images/ /var/media-access/'
    volumes:
      - ./src/media:/var/media-access
  [[other stuff here]]
Any advice is appreciated!

docker-compose doesn't mount volumes correctly for django container

Running Docker on Windows 10 with WSL 2 Ubuntu on it. I have the following Dockerfile:
FROM ubuntu
#base directory
ENV HOME /root
#subdirectory name for the REST project
ENV PROJECT_NAME django_project
#subdirectory name of the users app
ENV APP_NAME users
#set the working directory
WORKDIR $HOME
#install Python 3, the Django REST framework and the Cassandra Python driver
RUN apt-get update
RUN apt -y install python3-pip 2> /dev/null
RUN pip3 install djangorestframework
RUN pip3 install cassandra-driver
#initialize the project (blank project) and creates a folder called $PROJECT_NAME
#with manager.py on its root directory
RUN django-admin startproject $PROJECT_NAME .
#install an app in the project and create a folder named after it
RUN python3 manage.py startapp $APP_NAME
ENV CASSANDRA_SEEDS cas1
ENTRYPOINT ["python3","manage.py", "runserver", "0.0.0.0:8000"]
I build an image with docker build -t django-img . and then I have the following .yaml:
version: '3'
services:
  django_c:
    container_name: django_c
    image: django-img
    environment:
      - CASSANDRA_SEEDS='cas1'
    ports:
      - '8000:8000'
    volumes:
      - /mnt/c/Users/claud/docker-env/django/django_project:/django_project
When I run docker-compose up -d inside the django-project folder (the .yml and Dockerfile are there), I get the container running, but on the host I can't see any files from the container. If I run ls in the container, however, I see that all the files are there:
[screenshot of the container's file listing, not included here]
How am I supposed to edit the container files using an editor on my host?
p.s.: I've already tested the volume slashes ("/") with another container and they work fine since I'm using WSL.
ADDITION
Here is the content of my container folders using relative paths. I tried
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
but it still did not show the files on the host.
I think the issue is that your volume mount refers to the absolute path /django_project, but you specify WORKDIR $HOME which is /root in your Dockerfile. An additional clue is that you see your files when you ls -la ./django_project in the container using a relative path.
I'll bet you can fix the problem by updating your docker-compose.yml django_c service definition to specify /root/django_project as your volume mount instead:
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
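If it helps to confirm the fix, you can probe the mount from both sides once the container is up (probe.txt is just a hypothetical file name):

# Inside the container: list the mounted project directory
docker-compose exec django_c ls -la /root/django_project
# On the host: create a file and check that it shows up in the listing above
touch /mnt/c/Users/claud/docker-env/django/django_project/probe.txt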

Setting up docker for django, vue.js, rabbitmq

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what directory it's trying to find here. Have I set this all up right for my project structure?
For the Django part you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to use the python alpine image instead of ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these (the nice part of the python-alpine image is that you don't need to install the python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000, we use non-default but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need the /code prefix below since WORKDIR effectively changes directory
RUN chmod +x run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
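For reference, a sketch of what such a run-django.sh might look like (an assumption about the script's contents; adjust to your project):

#!/bin/bash
# Apply database migrations, then start the development server.
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000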
Long story short, you weren't copying the code from back into /code.
Also, in your docker-compose you don't have a build context for the django service like you do for the vue service.
As for your rabbitmq container failure: you need to stop the service on your host that is already bound to that port. I get this error if I'm trying to expose a postgresql or redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, so that there are no collisions on that service's default port.
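For example (a sketch; exact service names vary by distro and install method):

# See which process is already bound to RabbitMQ's management port
sudo lsof -i :15672
# Stop a host-level RabbitMQ service if one is running
sudo /etc/init.d/rabbitmq-server stop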