'docker exec' environment different from 'docker run'/'docker attach' environment? - django

I recently tried creating a Docker environment for developing Django projects.
I've noticed that files created or copied via the Dockerfile are only accessible when using 'docker run' or 'docker attach'; with the latter, I can go in and see the files I've moved sitting in their appropriate places.
However, when running 'docker exec', the container doesn't show any trace of those files whatsoever. Why is this happening?
Dockerfile
############################################################
# Dockerfile to run a Django-based web application
# Based on an Ubuntu Image
############################################################
# Set the base image to Ubuntu
FROM ubuntu:14.04
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=hello_django
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/srv
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/srv/hello_django
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Port to expose
EXPOSE 8000
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
With the above Dockerfile, my source folder hello_django is copied to its appropriate place in /srv/hello_django, and my docker-entrypoint.sh is copied to the root directory successfully as well.
I can confirm this in 'docker run -it bash' / 'docker attach', but I can't access any of those files through 'docker exec -it bash', for example.
(Dockerfile obtained from this tutorial).
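For concreteness, the commands being compared look roughly like this; the image tag and container name below are placeholders for illustration, not taken from the original post:
# build the image from the Dockerfile above (the tag is an assumption)
docker build -t hello_django .
# 'docker run' starts a NEW container from that image; the files copied at build time are visible here
docker run -it --name hello_django_c hello_django bash
# 'docker exec' opens a shell in an ALREADY RUNNING container, so it has to target
# the container started above (or whichever container you actually mean)
docker ps                                # list running containers and their names
docker exec -it hello_django_c bash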
Problem Demonstration
I'm running docker through vagrant, if that's relevant.
Vagrantfile
config.vm.box = "hashicorp/precise64"
config.vm.network "forwarded_port", guest: 8000, host: 8000
config.vm.provision "docker" do |d|
  d.pull_images "django"
end
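Because Docker is provisioned inside the Vagrant VM here, the containers live in that VM rather than on the host; a quick way to see what actually exists there (a sketch, assuming the default Vagrant setup):
# run from the directory containing the Vagrantfile
vagrant ssh -c "docker images"   # images present in the VM, e.g. the pulled "django" image
vagrant ssh -c "docker ps -a"    # all containers; 'docker exec' only works against a running one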

Related

Cannot launch my Django project with Gunicorn inside Docker

I'm new to Docker.
Visual Studio Code has an extension named Remote - Containers, and I am using it to dockerize a Django project.
As the first step, the extension creates a Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10.5
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
# File wsgi.py was not found. Please enter the Python path to wsgi file.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
Then it adds Development Container Configuration file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/docker-existing-dockerfile
{
    "name": "django-4.0.5",
    // Sets the run context to one level up instead of the .devcontainer folder.
    "context": "..",
    // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
    "dockerFile": "../Dockerfile"
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],
    // Uncomment the next line to run commands after the container is created - for example installing curl.
    // "postCreateCommand": "apt-get update && apt-get install -y curl",
    // Uncomment when using a ptrace-based debugger like C++, Go, and Rust
    // "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
    // Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-from-docker.
    // "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
    // Uncomment to connect as a non-root user if you've added one. See https://aka.ms/vscode-remote/containers/non-root.
    // "remoteUser": "vscode"
}
And finally I run the command Rebuild and Reopen in Container.
After a few seconds the container is running and I see a command prompt. Considering that at the end of the Dockerfile there's this line:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "project.wsgi"]
...the Django application should be running, but it isn't, and http://127.0.0.1:8000 refuses to connect.
But if I run the same command at the command prompt:
gunicorn --bind 0.0.0.0:8000 project.wsgi
it works just fine.
Why? All of these files were generated by the VS Code extension, and the same instructions are posted on the official VS Code website, yet it doesn't work.
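One way to see what the dev container is actually running (the container name is whatever docker ps reports; these commands are illustrative):
# on the host
docker ps --format '{{.Names}}\t{{.Command}}'   # the command each container was started with
docker top <container-name>                     # processes inside it; shows whether gunicorn ever started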
P.S.:
When the container loaded for the first time, I created a project called 'project', so the folder structure is like this:

Google Cloud Run inaccessible even on successful build

My Google Cloud Run image was built successfully using Cloud Build via a GitHub repo. I don't see anything concerning in the build logs.
This is my Dockerfile:
# Use the official lightweight Node.js 17 image.
# https://hub.docker.com/_/node
FROM node:17-slim
RUN set -ex; \
    apt-get -y update; \
    apt-get -y install ghostscript; \
    apt-get -y install pngquant; \
    rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
# If you add a package-lock.json speed your build by switching to 'npm ci'.
RUN npm ci --only=production
# RUN npm install --production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
But when I try to access the service through its public URL, I see:
Oops, something went wrong…
Continuous deployment has been set up, but your repository has failed to build and deploy.
This revision is a placeholder until your code successfully builds and deploys to the Cloud Run service myapi in asia-east1 of the GCP project myproject.
What's next?
From the Cloud Run service page, click "Build History".
Examine your build logs to understand why it failed.
Fix the issue in your code or Dockerfile (if any).
Commit and push the change to your repository.
It appears that the node app did not run. What am I doing wrong?
It turns out that cloudbuild.yaml is not really optional. Adding the file with the following content resolved the issue:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA", "."]
  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "myapi"
      - "--image"
      - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"
      - "--region"
      - "asia-east1"
images:
  - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"

docker-compose doesn't mount volumes correctly for django container

Running Docker on Windows 10 with WSL 2 Ubuntu on it. I have the following Dockerfile:
FROM ubuntu
#base directory
ENV HOME /root
#subdirectory name for the REST project
ENV PROJECT_NAME django_project
#subdirectory name of the users app
ENV APP_NAME users
#set the working directory
WORKDIR $HOME
#install Python 3, the Django REST framework and the Cassandra Python driver
RUN apt-get update
RUN apt -y install python3-pip 2> /dev/null
RUN pip3 install djangorestframework
RUN pip3 install cassandra-driver
#initializes the project (blank project) and creates a folder called $PROJECT_NAME
#with manage.py in its root directory
RUN django-admin startproject $PROJECT_NAME .
#install an app in the project and create a folder named after it
RUN python3 manage.py startapp $APP_NAME
ENV CASSANDRA_SEEDS cas1
ENTRYPOINT ["python3","manage.py", "runserver", "0.0.0.0:8000"]
I build an image with docker build -t django-img . and then I have the following .yaml:
version: '3'
services:
  django_c:
    container_name: django_c
    image: django-img
    environment:
      - CASSANDRA_SEEDS='cas1'
    ports:
      - '8000:8000'
    volumes:
      - /mnt/c/Users/claud/docker-env/django/django_project:/django_project
When I run docker-compose up -d inside the django-project folder (the .yml and Dockerfile are there), the container starts, but I can't see any of the container's files on the host. If I run ls in the container, however, I see that all the files are there.
How am I supposed to edit the container files using an editor on my host?
p.s.: I've already tested the volume slashes ("/") with another container and they work fine since I'm using WSL.
ADDITION
Here is the content of my container folders, listed using relative paths. I tried
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
but it still did not show the files on the host.
I think the issue is that your volume mount refers to the absolute path /django_project, but you specify WORKDIR $HOME which is /root in your Dockerfile. An additional clue is that you see your files when you ls -la ./django_project in the container using a relative path.
I'll bet you can fix the problem by updating your docker-compose.yml django_c service definition to specify /root/django_project as your volume mount instead:
volumes:
  - /mnt/c/Users/claud/docker-env/django/django_project:/root/django_project
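After changing the mount, one way to double-check what the running container actually has mounted (the container name comes from the compose file above):
docker inspect -f '{{ json .Mounts }}' django_c   # prints the source/destination of each bind mount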

Cloud Run needs NGINX or not?

I am using cloud run for my blog and a work site and I really love it.
I have deployed Python APIs and Vue/Nuxt apps by containerising them according to the Google tutorials.
One thing I don't understand is why there is no need to have NGINX in front.
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
RUN npm run build
CMD [ "npm", "start" ]
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc
RUN pip install -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn -b :$PORT --workers=4 main:server
All of this works without my ever using NGINX.
But I've read a lot of articles in which people bundle NGINX in their container, so I would like some clarity: are there any downsides to what I am doing?
One considerable advantage of using NGINX or a static file server is the size of the container image. When serving SPAs (without SSR), all you need is to get the bundled files to the client. There's no need to bundle build dependencies or runtime that's needed to compile the application.
Your first image copies the whole source tree, dependencies included, into the image, while all you need (if not running SSR) are the compiled files. NGINX gives you a "static site server" that serves only your build, and it is a lightweight solution.
Regarding Python, unless you can bundle it somehow, it looks fine to run it without NGINX.
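As a rough illustration of the "static site server" idea for the SPA case; the nginx image and the dist/ path are assumptions standing in for whatever your build step outputs:
npm run build
docker run --rm -p 8080:80 -v "$PWD/dist:/usr/share/nginx/html:ro" nginx:alpine
# nginx serves only the compiled bundle; no node_modules or source files are shipped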

Files created inside docker are write protected on host

I am using Docker containers for Rails and Ember. I am mounting the source from my local machine into the container, and all the changes I make locally are reflected in the container.
Now I want to use generators to create files. The files are created, but they are write-protected on my machine.
When I do docker-compose run frontend bash, I get a root@061e4159d4ef:/frontend# superuser prompt inside the container. I can create files in this mode, but those files are write-protected on my host.
I have also tried docker-compose run --user "$(id -u):$(id -g)" frontend bash, which gives me an I have no name!@31bea5ae977c:/frontend$ prompt, and I am unable to create any files in this mode. Below is the error message that I get.
I have no name!@31bea5ae977c:/frontend$ ember g template about
/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:90
throw err0;
^
Error: EACCES: permission denied, mkdir '/.config'
at Error (native)
at Object.fs.mkdirSync (fs.js:916:18)
at sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:71:13)
at Function.sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:77:24)
at Object.create.all.get (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:39:13)
at Object.Configstore (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:28:44)
at clientId (/frontend/node_modules/ember-cli/lib/cli/index.js:22:21)
at module.exports (/frontend/node_modules/ember-cli/lib/cli/index.js:65:19)
at /usr/local/lib/node_modules/ember-cli/bin/ember:26:3
at /usr/local/lib/node_modules/ember-cli/node_modules/resolve/lib/async.js:44:21
Here is my Dockerfile:
FROM node:6.2
ENV INSTALL_PATH /frontend
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Copy package.json separately so it's recreated when package.json
# changes.
COPY package.json ./package.json
RUN npm install
COPY . $INSTALL_PATH
RUN npm install -g phantomjs bower ember-cli ;\
bower --allow-root install
EXPOSE 4200
EXPOSE 49152
CMD [ "ember", "server" ]
Here is my docker-compose.yml file; please note it is not in the current directory, but in the parent.
frontend:
  build: "frontend/"
  dockerfile: "Dockerfile"
  environment:
    - EMBER_ENV=development
  ports:
    - "4200:4200"
    - "49152:49152"
  volumes:
    - ./frontend:/frontend
I want to know: how can I use generators? I am new to Docker. Any help is appreciated.
You get the I have no name! prompt because of this: $(id -u):$(id -g)
The user ID and group ID on your host are not linked to any user in your container.
Solution:
Execute chown -R UID:GID /frontend inside the container if it's already running and you cannot stop it for some reason. Otherwise you could just run the chown command on the host and then start your container again. Note that UID and GID must belong to a user inside the container.
Example: chown -R 101:101 /frontend, where 101:101 is the UID:GID of www-data.
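Put as commands, the two variants described above might look like this (the UID:GID is the example value from the answer, and the host-side chown target is an assumption: your own UID:GID; adjust both to users that actually exist):
# in-container variant: open a shell in the container, then change ownership of the mounted source
docker-compose run frontend bash      # run from the directory with docker-compose.yml
chown -R 101:101 /frontend            # run inside the container
# host-side variant: fix ownership on the host, then start the container again
sudo chown -R "$(id -u):$(id -g)" ./frontend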
If there is no user other than root in your container, you will have to create a new one. To do so, create a Dockerfile and put something like this in it:
FROM your_image_name
RUN useradd -ms /bin/bash newuser
More information about Dockerfiles can be found here or just by googling it.