Setting up docker for django, vue.js, rabbitmq

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory is that it's trying to find. Have I set this all up correctly for my project structure?

For the Django part you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to use the Python Alpine image instead of Ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever Python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these
# (the nice part of the python-alpine image is that you don't need any of those Python-specific
# packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port your Django app needs (the default is 8000; we use a non-default port, but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# no /code prefix needed here, since WORKDIR has already changed directory into it
RUN chmod +x run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["./run-django.sh"]
We have a similar run-django.sh script that calls python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
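For reference, a minimal sketch of what such a script could look like (the exact contents are an assumption; adapt the commands to your project):
#!/bin/sh
# run-django.sh (sketch): apply migrations, then start the dev server
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000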
Long story short, you weren't copying your code from back into /code.
Also, in your docker-compose.yml the django service doesn't have a build context like your vue service does.
As for your rabbitmq container failure, you need to stop the RabbitMQ service that is already running on your host. I get this error when I'm trying to expose a postgresql or redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, so there are no collisions on that service's default port.
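For example, on a Linux host you can check what is holding the port and stop the host's RabbitMQ service first (the exact service name and init path are assumptions and vary by distro):
# see which process is listening on the management port
sudo lsof -i :15672
# stop the host RabbitMQ service so the container can bind the port
sudo /etc/init.d/rabbitmq-server stop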


CI/CD with BitBucket, Docker Image and Azure

EDITED
I am learning CI/CD and Docker. So far I have managed to successfully create a docker image using the code below:
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
My code is on BitBucket and I have a pipeline file as follows:
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io
            - docker build -t xxx.azurecr.io .
            - docker push xxx.azurecr.io
With xxx being the container registry on Azure. When the pipeline job runs, I am getting a denied: requested access to the resource is denied error on BitBucket.
What did I not do correctly?
Thanks.
The Edit
Changes in docker-compose.yml and bitbucket-pipelines.yml
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    image: xx.azurecr.io/myticket
    container_name: xx
    command: python manage.py runserver 0.0.0.0:80
    ports:
      - 80:80
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
            - docker build -t xx.azurecr.io/xx .
            - docker push xx.azurecr.io/xx
You didn't specify CMD or ENTRYPOINT in your Dockerfile.
There are stages when building a Dockerfile: first you pull a base image, then you package your requirements, and so on. Those stages are executed while the image is building. You are missing the last stage, which executes a command inside the container once it's up.
An ENTRYPOINT or CMD is what tells the container which process to run when it starts.
For it to work you must add a CMD or ENTRYPOINT at the bottom of your Dockerfile.
Change your files accordingly and try again.
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
# Execute commands inside the container
CMD python manage.py runserver 0.0.0.0:8000
Check that you are able to build and run the image by going to its directory and running
docker build -t app .
docker run -d -p 8000:8000 app
docker ps
and see if your container is running.
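If it isn't, these standard Docker CLI commands will usually show why (app is the image tag from the build command above):
# include stopped containers, so you can spot one that exited immediately
docker ps -a
# inspect the logs of the failing container
docker logs <container-id>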
Next
Update the image property in the docker-compose file.
Prefix the image name with the login server name of your Azure container registry, <registry-name>.azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property is then myregistry.azurecr.io/azure-vote-front.
Change the ports mapping to 80:80. Save the file.
The updated file should look similar to the following:
docker-compose.yml
version: '3'
services:
  foo:
    build: .
    image: foo.azurecr.io/atlassian/default-image:2
    container_name: foo
    ports:
      - "80:80"
By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.
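As a rough sketch, the same tag-and-push flow done by hand looks like this (using the myregistry example name from above):
docker login myregistry.azurecr.io
# build the image with the registry-prefixed tag
docker build -t myregistry.azurecr.io/azure-vote-front .
# push it to the Azure container registry
docker push myregistry.azurecr.io/azure-vote-front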
More in the documentation

Docker-compose executes django twice

I am running on Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an
app name. Please try another name.
Seems like the command is maybe executed twice? Not sure why?
Docker file
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install
RUN apt-get install -y \
libpq-dev \
gcc \
&& apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
Docker-compose
version: "3.9"
compute:
container_name: compute
build: ./backend
# command: python manage.py runserver 0.0.0.0:8000
# volumes:
# - ./backend/app:/app
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
depends_on:
- db
Try running your image without any arguments; you are already using the command keyword in your docker-compose.yml. Or just remove that line from the file.
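For example (this assumes the service name app from your commands; note that your compose file actually calls the service compute):
# no trailing arguments, so the image's default command (or the compose command:) is used
docker-compose run --rm app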

Running "/usr/local/bin/gunicorn" in a docker build says " stat /usr/local/bin/gunicorn: no such file or directory"

From the toplevel maps directory, I'm able to install the gunicorn extension ...
(venv) localhost:maps davea$ pip3 install gunicorn
Collecting gunicorn
Downloading gunicorn-20.0.4-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 1.2 MB/s
Requirement already satisfied: setuptools>=3.0 in ./web/venv/lib/python3.7/site-packages (from gunicorn) (45.1.0)
Installing collected packages: gunicorn
Successfully installed gunicorn-20.0.4
Below is my docker-compose.yml file
version: '3'
services:
web:
restart: always
build: ./web
ports: # to access the container from outside
- "8000:8000"
environment:
DEBUG: 'true'
command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
apache:
restart: always
build: ./apache/
ports:
- "80:80"
#volumes:
# - web-static:/www/static
links:
- web:web
mysql:
restart: always
image: mysql:5.7
environment:
MYSQL_DATABASE: 'maps_data'
# So you don't have to use root, but you can if you like
MYSQL_USER: 'chicommons'
# You can use whatever password you like
MYSQL_PASSWORD: 'password'
# Password for root access
MYSQL_ROOT_PASSWORD: 'password'
ports:
- "3406:3406"
volumes:
- my-db:/var/lib/mysql
volumes:
my-db:
And then I have web/Dockerfile as follows ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
RUN pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
However, when I build/start my docker instance, I'm told it can't find my "gunicorn" command ...
(venv) localhost:maps davea$ docker-compose up
Starting maps_web_1 ...
Starting maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
Your Docker container is a totally isolated environment. Nothing you install on the host is visible inside the container; nothing that happens inside the container is accessible on the host. There are ways to bridge this boundary (with docker run -v bind mounts), but that's not possible during the docker build phase.
In this example your local source tree has a requirements.txt file that lists the packages that need to be installed when the container is created. (The RUN pip freeze line has no effect; the COPY on the line after it copies the file from your local source tree.) It's enough to add the dependency to the requirements.txt file:
gunicorn
In your development environment, you can re-run pip install -r requirements.txt to update the packages installed in your virtual environment. When you re-run docker build, having this line in the requirements.txt file will cause gunicorn to be installed when the image is built.
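Concretely, the round trip looks something like this (the web service name comes from the compose file above; the web/requirements.txt path is an assumption based on its build context):
# update the packages in your local virtual environment
pip install -r web/requirements.txt
# rebuild the image so the container picks up gunicorn as well
docker-compose build web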
You can clean up the Dockerfile a little bit. The resulting Dockerfile would be a pretty typical one for Python packages with C dependencies:
# Start from a totally clean environment with Python installed,
# but no non-system libraries and nothing from your host system.
FROM python:3.7-slim
# Install C dependencies.
# It's important to do apt-get update and install in the
# same command. It's more efficient to only do it once.
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
gcc \
libmariadb-dev \
libmariadb-dev-compat
# Update pip
RUN python -m pip install --upgrade pip
# Create the application directory and point there
# (WORKDIR will implicitly create it)
WORKDIR /app/
# Install all of the Python dependencies. These are
# listed, one to a line, in the requirements.txt file,
# possibly with version constraints. Having this as
# a separate block allows Docker to not repeat it if
# only your application code changes.
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# Copy in the rest of the application.
COPY . .
# Specify what port your application uses, and the
# default command to use when launching the container.
EXPOSE 8000
CMD /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000

What is wrong with my Gulp Watch task with Docker? [duplicate]

This question already has an answer here:
Docker bound mount - can not see changes on browser
(1 answer)
Closed 3 years ago.
EDIT: Gulp "Watch" doesn't work on windows with a mounted volumes because no "file change" event is sent. My current solution is to run Docker Windows Volume Watcher on my local machine while I see if I can integrate this solution into my code.
I'm trying to run a gulp watch task in my dockerfile and gulp isn't catching when my files are getting changed.
Quick Notes:
This set up works when I use it for my locally hosted wordpress installs
The file changes reflect in my docker container according to pycharm's docker service
Running the "styles" gulp task works, it's just the file watching that does not
It's clear to me that there's some sort of disconnect between how gulp watches for changes, and how Docker is letting that happen.
Github link
Edit: It looks possible to do what I want; here's a link to someone doing it slightly differently.
gulpfile excerpt:
export const watchForChanges = () => {
watch('scss-js/scss/**/*.scss', gulp.series('styles'));
watch('scss-js/js/**/*.js', scripts);
watch('scss-js/scss/*.scss', gulp.series('styles'));
watch('scss-js/js/*.js', scripts);
// Try absolute path to see if it works
watch('scss-js/scss/bundle.scss', gulp.series('styles'));
}
...
// Compile SCSS through styles command
export const styles = () => {
// Want more than one SCSS file? Just turn the below string into an array
return src('scss-js/scss/bundle.scss')
// If we're in dev, init sourcemaps. Any plugins below need to be compatible with sourcemaps.
.pipe(gulpif(!PRODUCTION, sourcemaps.init()))
// Throw errors
.pipe(sass().on('error', sass.logError))
// In production use auto-prefixer, fix general grid and flex issues.
.pipe(
gulpif(
PRODUCTION,
postcss([
autoprefixer({
grid: true
}),
require("postcss-flexbugs-fixes"),
require("postcss-preset-env")
])
)
)
.pipe(gulpif(PRODUCTION, cleanCss({compatibility:'ie8'})))
// In dev write source maps
.pipe(gulpif(!PRODUCTION, sourcemaps.write()))
// TODO: Update this source folder
.pipe(dest('blog/static/blog/'))
.pipe(server.stream());
}
...
export const dev = series(parallel(styles, scripts), watchForChanges);
Docker-Compose:
version: "3.7"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "8002:8000"
- "3001:3001"
- "3000:3000"
volumes:
- ./django_project:/django_project
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
restart: always
depends_on:
- db
db:
image: postgres
environment:
POSTGRES_PASSWORD: example1
ports:
- "5432:5432"
restart: always
Dockerfile:
FROM python:3.8-buster
MAINTAINER Austin
ENV PYTHONUNBUFFERED 1
# Install node
RUN apt-get update && apt-get -y install nodejs
RUN apt-get install npm -y
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update Node
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
wget
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.14.0
WORKDIR $NVM_DIR
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# What PIP installs need to get done?
COPY django_project/requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Copy local directory to target new docker directory
RUN mkdir -p /django_project
WORKDIR /django_project
COPY ./django_project /django_project
# Make Postgres Work
EXPOSE 5432/tcp
WORKDIR /django_project
RUN npm install gulp-cli -g
What do you think could be going on?
My guess is you are running on Windows, right?
If so, take a look at the following answer:
https://stackoverflow.com/a/58969398/12153397
Below is the gist of the linked answer.
Issue identified
Bind mounting does not actually work for Docker Toolbox: file change events in mounted folders of the host are not propagated to the container by Docker for Windows.
Solution
This script is intended to be the answer to this issue: docker-windows-volume-watcher.
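If you go that route, the watcher is installed and run on the host, roughly like this (the package and command names are assumptions; check the linked project's README):
pip install docker-windows-volume-watcher
# relay host file-change events to the containers' mounted folders
docker-volume-watcher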

Dockerizing an already existing app and database

I am trying to Dockerize an app that is already created (database included).
I've got the proper files in place:
docker-compose.yml
dockerfile
requirements.txt
I'm having trouble with the database part -
How do I configure the docker-compose.yml file to point to the database that is already created?
Here's why I ask: my understanding of Docker is that you create your base app, then "Dockerize" it, packaging it into an image that you can distribute. I'm a beginner at this, so that may be why I'm not understanding.
Here is my current docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=qwerty
      - POSTGRES_DB=ar_db
    ports:
      - "5433:5433"
  web:
    build: .
    command: python2.7 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and dockerfile:
############################################################
# Dockerfile to run a Django-based web application
# Based on an Ubuntu Image
############################################################
# Set the base image to use to Ubuntu
FROM debian:8.8
# Set the file maintainer (your name - the file's author)
MAINTAINER HeatherJ
# Update the default application repository sources list
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python python-pip libpq-dev python-dev
#install git
RUN apt-get update && apt-get install -y --no-install-recommends \
git&& rm -rf /var/lib/apt/lists/*
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV DOCKYARD_SRC=EPIC_AR
# Directory in container for all project files
ENV DOCKYARD_SRVHOME=/EPIC_AR
# Directory in container for project source files
ENV DOCKYARD_SRVPROJ=/home/epic/EPIC_AR/EPIC_AR
# Create application subdirectories
WORKDIR $DOCKYARD_SRVHOME
RUN mkdir media static logs
VOLUME ["$DOCKYARD_SRVHOME/media/", "$DOCKYARD_SRVHOME/logs/"]
# Copy application source code to SRCDIR
COPY $DOCKYARD_SRC $DOCKYARD_SRVPROJ
# Install Python dependencies
RUN pip install -r $DOCKYARD_SRVPROJ/requirements.txt
# Copy entrypoint script into the image
WORKDIR $DOCKYARD_SRVPROJ
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]