Unable to run docker-compose build - Django

I'm getting this error when I try to run docker-compose build against my docker-compose.yml file:
In file './docker-compose.yml' service 'version' doesn't have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.
docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   78d1802
 Built:        Tue Jan 31 23:47:34 2017
 OS/Arch:      linux/amd64
Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   78d1802
 Built:        Tue Jan 31 23:47:34 2017
 OS/Arch:      linux/amd64
docker --version
Docker version 1.12.6, build 78d1802
docker-compose --version
docker-compose version 1.5.2, build unknown
Is this because of the "build unknown"?
docker-compose.yml:
version: "2"
services:
postgres:
image: postgres:9.6
volumes:
- pgdata:/var/lib/data/postgres
backend:
build: .
command: gosu app bash
volumes:
- .:/app
- pyenv:/python
links:
- postgres:postgres
ports:
- 8000:8000
volumes:
pyenv:
pgdata:

Try upgrading the docker-compose version. Version 2 files are supported by Compose 1.6.0+ and require a Docker Engine of version 1.10.0+.
Install the latest docker-compose:
$ sudo curl -o /usr/local/bin/docker-compose -L "https://github.com/docker/compose/releases/download/1.15.0/docker-compose-$(uname -s)-$(uname -m)"
$ sudo chmod +x /usr/local/bin/docker-compose
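After installing, it's worth confirming that your shell now picks up the new binary and reports the upgraded version:
$ docker-compose --version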
Ref-
https://docs.docker.com/compose/compose-file/compose-versioning/#version-2

You should install docker-compose using the official documentation: https://docs.docker.com/compose/install/
If you are using Linux, I have found that the apt-installed docker-compose shows some weird behavior, so uninstall docker-compose and reinstall it using the official documentation above:
sudo apt-get purge docker-compose
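For example, besides the curl-based download, the install page has also documented installing Compose as a Python package; roughly (check the linked page for the method currently recommended for your platform):
$ sudo pip install docker-compose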

Related

Docker compose can't find entry point, but docker run can?

I'm getting this error when trying to run docker compose up:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./Nssams": stat ./Nssams: no such file or directory: unknown
This is my compose file
version: '3.5'
services:
  eventlogger:
    container_name: nssams-eventlogger_
    image: nssams-eventlogger
    environment:
      - MQTT_HOST=mqtt
    depends_on:
      - mqtt
    command: python run.py
  mqtt:
    container_name: nssams-mqtt_
    image: eclipse-mosquitto:1.6.15
    ports:
      - 1883:1883
  ebcmos:
    image: nssams-ebcmos
    container_name: ebcmos
    networks:
      - default
    volumes:
      - ./capture/ebcmos:/app/capture:wo
      - ./logs/ebcmos:/app/
      - /dev/ttyUSB0:/dev/ttyUSB0
      - ./xmp/ebcmos:/app/xmp/ebcmos
    environment:
      - CAMERA_NAME=ebcmos
    privileged: true
    depends_on:
      - mqtt
      - eventlogger
networks:
  default:
    driver: bridge
    name: nssams_bridge
Here is my Dockerfile:
FROM ubuntu:18.04 AS builder
# Install dependencies for building mqtt client from source.
RUN apt-get update && apt-get -y install build-essential git gcc make cmake cmake-gui cmake-curses-gui doxygen
# removed a bunch of irrelevant installs for stack overflow
RUN mkdir -p /app/build \
    && cd build \
    && cmake .. \
    && cmake --build .
# Production Image
FROM debian:latest AS prod
LABEL maintainer=redacted
WORKDIR /app
# Copy NSSAMS executable to prod image.
COPY --from=builder /app/build/Nssams .
CMD [ "./Nssams" ]
But if I run the below command:
docker run -it --privileged --volume {volume} {image_name} bash
I can exec into the container and see my Nssams executable in /app. I then just run ./Nssams in a shell, and the application starts up.
Why can't docker compose do the same?

mysql file not found in $PATH error using Docker and Django?

I'm working on using Django and Docker with MariaDB on Mac OS X.
I'm trying to grant privileges to Django to set up a test DB. For this, I have a script that runs sudo docker exec -it $container mysql -u root -p. When I do this after spinning up, instead of the password prompt for the database, I get this error message:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "mysql": executable file not found in $PATH: unknown
On an Ubuntu machine, I can delete the data folder for the database and spin up, spin down, and run the command without the error, but on my Mac, which is my primary machine that I'd like to use, this fix doesn't work. Any ideas? I've had a peer pull code and try it on their Mac and it does the same thing to them! Docker is magic to me.
Here's my docker-compose.yml.
version: "3.3"
networks:
django_db_net:
external: false
services:
django:
build: ./docker/django
restart: 'unless-stopped'
depends_on:
- db
networks:
- django_db_net
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./src:/src
working_dir: /src/vger
command: ["/src/wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
ports:
- "${DJANGO_PORT}:8000"
db:
image: mariadb:latest
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./data:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=this_is_a_bad_password
- MYSQL_USER=django
- MYSQL_PASSWORD=django
- MYSQL_DATABASE=vger
networks:
- django_db_net
And my Dockerfile
FROM python:latest
ENV PYTHONUNBUFFERED=1
RUN pip3 install --upgrade pip & \
    pip3 install django mysqlclient
ENV MYSQL_MAJOR 8.0
RUN apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys 8C718D3B5072E1F5 & \
    echo "deb http://repo.mysql.com/apt/debian/ buster mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list & apt-get update & \
    apt-get -y --no-install-recommends install default-libmysqlclient-dev
WORKDIR /src
I fixed it!
This is really silly, but OS X doesn't like the "$container" variable, so if you explicitly write out the name of the database container instead, it works like a charm!
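A rough sketch of the fix (the container name below is hypothetical; Compose normally prefixes it with the project name, so check the real one first):
# list the actual container names
$ docker ps --format '{{.Names}}'
# then use the explicit db container name instead of $container
$ sudo docker exec -it <project>_db_1 mysql -u root -p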

Mounted a local directory in my Docker image, but it can't read a file from that directory

I'm trying to build a Docker setup with MySQL, Django, and Apache images. I have set up this docker-compose.yml ...
version: '3'
services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306"
    volumes:
      - my-db:/var/lib/mysql
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "9090:80"
    links:
      - web:web
volumes:
  my-db:
I would like to mount a directory on my local machine into my Django container so that local edits are reflected in the container, which is why I have this
volumes:
  - ./web/:/app
in my "web" service. This is the web/Dockerfile I'm using ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
However, when I run things using "docker-compose up", I get this error ...
chmod: cannot access '/app/entrypoint.sh': No such file or directory
Even though when I look in my local directory, I can see the file ...
localhost:maps davea$ ls -al web/entrypoint.sh
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
I sense I haven't mapped/mounted things properly, but not sure where the issue is.
It seems that your docker-compose and Dockerfiles are set up correctly.
However, one thing I notice is that your ENTRYPOINT ["/app/entrypoint.sh"] executes the file /app/entrypoint.sh, which you do not have permission to execute according to the ls -al output:
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
There are two simple solutions for this:
Give execute permission to the entrypoint.sh file:
chmod a+x web/entrypoint.sh
Or, if you do not want to grant this permission, you can update your entrypoint to ENTRYPOINT ["bash", "/app/entrypoint.sh"].
Note that, in either case, this is not a problem with your docker-compose mounting but with your Dockerfile, and hence you will need to rebuild your Docker image after making the changes:
docker-compose up -d --build
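After rebuilding, you can confirm the container actually gets past the entrypoint and inspect any remaining errors (the service name web is taken from the compose file above):
$ docker-compose ps
$ docker-compose logs web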

Installing Geospatial libraries in Docker

Django's official documentation lists 3 dependencies needed to start developing a PostGIS application, in a table of requirements that depends on the database.
I use Docker for my local development and I am confused about which of those packages should be installed in the Django container and which in the PostgreSQL container. I am guessing some of them should be on both.
I would appreciate your help with this.
You need to install the geospatial libraries only in the Django container, because they are used by Django for interacting with a spatially enabled DB (such as PostgreSQL with PostGIS). For the DB itself you can use a ready-made image that already bundles PostGIS, such as kartoza/postgis.
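As a minimal sketch, the packages the GeoDjango install docs list for PostgreSQL/PostGIS on Debian/Ubuntu (binutils, libproj-dev, gdal-bin) can be added to the Django container's Dockerfile like this (base image and version pinning are up to you):
FROM python:3.6-slim
# geospatial libraries GeoDjango needs in order to talk to a PostGIS-enabled database
RUN apt-get update \
    && apt-get install -y --no-install-recommends binutils libproj-dev gdal-bin \
    && rm -rf /var/lib/apt/lists/*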
Here is a nice example of a Dockerfile that uses python:3.6-slim as a base and builds the GDAL dependencies into the container. The part of the Dockerfile that you need for that is the following:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
# Add unstable repo to allow us to access latest GDAL builds
# Existing binutils causes a dependency conflict; the correct version will be installed when GDAL gets installed
RUN echo deb http://deb.debian.org/debian testing main contrib non-free >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get remove -y binutils && \
    apt-get autoremove -y
# Install GDAL dependencies
RUN apt-get install -y libgdal-dev g++ --no-install-recommends && \
    pip install pipenv && \
    pip install whitenoise && \
    pip install gunicorn && \
    apt-get clean -y
# Update C env vars so compiler can find gdal
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
ENV LC_ALL="C.UTF-8"
ENV LC_CTYPE="C.UTF-8"
You can deploy both the Django app and the DB using docker-compose, using the following docker-compose.yaml (from the same repo as the Dockerfile):
# Sample compose file for a django app and postgis
version: '3'
services:
  postgis:
    image: kartoza/postgis:9.6-2.4
    volumes:
      - postgis_data:/var/lib/postgresql
    environment:
      ALLOW_IP_RANGE: 0.0.0.0/0
      POSTGRES_PASS: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: postgis
  web:
    image: intelligems/geodjango:latest
    command: python manage.py runserver 0.0.0.0:8000
    environment:
      DEBUG: "True"
      SECRET_KEY: ${SECRET_KEY}
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgis:5432/postgis
      SENTRY_DSN: ${SENTRY_DSN}
    ports:
      - 8000:8000
    depends_on:
      - postgis
volumes:
  postgis_data: {}
In this repository, you can find more info and interesting bits of configuration for your issue: https://github.com/intelligems/docker-library/tree/master/geodjango (the Dockerfile snippet above is from that repo).
As a note:
If you want a PostGIS-enabled PostgreSQL database as a "local DB" to interact with your local Django, you can deploy the previously mentioned kartoza/postgis image:
Create a volume:
$ docker volume create postgresql_data
Deploy the container:
$ docker run \
    --name=postgresql-with-postgis -d \
    -e POSTGRES_USER=user_name \
    -e POSTGRES_PASS=user_pass \
    -e ALLOW_IP_RANGE=0.0.0.0/0 -p 5433:5432 \
    -v postgresql_data:/var/lib/postgresql \
    --restart=always \
    kartoza/postgis:9.6-2.4
Connect to the default DB (postgres) of the container and create your DB:
$ psql -h localhost -p 5433 -U user_name -d postgres
CREATE DATABASE database_name;
Enable the PostGIS extension on the database:
\connect database_name
CREATE EXTENSION postgis;
This will result in a DB named database_name listening on port 5433 of your localhost, and you can connect to it from your local Django app.
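As a quick sanity check before pointing Django at it, you can confirm the PostGIS extension is active (the user name, database name, and port mirror the hypothetical values used above):
$ psql -h localhost -p 5433 -U user_name -d database_name -c "SELECT PostGIS_Version();"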

Can Circle CI use docker-compose to build the environment?

I currently have a few services, such as db and web, in a Django application, and docker-compose is used to string them together.
The web service has configuration like this...
web:
  restart: always
  build: ./web
  expose:
    - "8000"
The Dockerfile in web is based on python:2.7-onbuild, so it uses the requirements.txt file to install all the necessary dependencies.
I am now using Circle CI for integration and have a circle.yml file like this...
....
dependencies:
  pre:
    - pip install -r web/requirements.txt
....
Is there any way I could avoid the dependencies clause in the circle.yml file? I would like Circle CI to use docker-compose.yml instead, if that makes sense.
Yes, using docker-compose in the circle.yml file can be a nice way to run tests, because it can mirror one's dev environment very closely. This is an extract from our working tests on an AngularJS project:
---
machine:
  services:
    - docker
dependencies:
  override:
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - sudo pip install --upgrade docker-compose==1.3.0
test:
  pre:
    - docker-compose pull
    - docker-compose up -d
    - docker-compose run npm install
    - docker-compose run bower install --allow-root --config.interactive=false
  override:
    # grunt runs our karma tests
    - docker-compose run grunt deploy-build compile
Notes:
The docker login is only needed if you have private images in Docker Hub.
When we wrote our circle.yml file, only docker-compose 1.3 was available. This has probably been updated by now.
I haven't tried this myself, but based on the info at https://circleci.com/docs/docker I guess it may work:
# circle.yml
machine:
  services:
    - docker
dependencies:
  pre:
    - pip install docker-compose
test:
  pre:
    - docker-compose up -d
Unfortunately, CircleCI installs an old version of Docker (1.9.1) by default, which is not compatible with the latest version of docker-compose. In order to get the more recent Docker 1.10.0, you should:
machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
    - pip install docker-compose
  services:
    - docker
test:
  pre:
    - docker-compose up -d
Read more: https://discuss.circleci.com/t/docker-1-10-0-is-available-beta/2100
Update: CircleCI 2.0 has native Docker support.
Read more about how to switch to the new CircleCI version here: https://circleci.com/docs/2.0/migrating-from-1-2/
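For CircleCI 2.0, a minimal config that runs docker-compose on a machine executor might look roughly like this (the job layout and the test command are assumptions; adapt them to your project and to the service names in your docker-compose.yml):
# .circleci/config.yml
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker-compose up -d
      # assumes the Django service is called "web", as in the question
      - run: docker-compose run web python manage.py test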