I'm currently setting up a build/test pipeline for my app (Django) using Google Cloud Build (and testing with cloud-build-local).
In order to run the tests properly I need to start a MySQL dependency (I use docker-compose for this). The issue is that when running docker-compose in a Cloud Build step, the database init scripts are not run properly and I get:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
(running docker-compose outside of Google Cloud Build works properly)
Here's my docker-compose file:
version: '3.3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'dev'
      MYSQL_USER: 'dev'
      MYSQL_PASSWORD: 'dev'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - reports-db:/var/lib/mysql-reports
      - ./dev/databases/init.sql:/docker-entrypoint-initdb.d/0-init.sql
      - ... (other init scripts)
volumes:
  reports-db:
And cloudbuild.yaml:
steps:
  ...
  - id: 'tests-dependencies'
    name: 'docker/compose:1.24.1'
    args: ['up', '-d']
  ...
Files are organized like this:
parent_dir/
  dev/
    databases/
      init.sql
  cloudbuild.yaml
  docker-compose.yml
  ...
(all commands are run from parent_dir/)
When I run
cloud-build-local --config=cloudbuild.yaml --dryrun=false .
I get:
...
Step #2 - "tests-dependencies": mysql_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
Step #2 - "tests-dependencies": mysql_1 | ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
...
Knowing that running docker-compose up directly works properly, I suspect that the way the volumes are mounted is incorrect, but I can't find why/how.
If anyone has any input on this it will be really useful :)
Thanks in advance.
Looks like it's an issue specific to cloud-build-local; it works properly on GCP.
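If anyone wants to dig further, one way to test the volume-mount suspicion under cloud-build-local is a throwaway debug step (the step id below is made up) that lists what actually ends up in /docker-entrypoint-initdb.d:
steps:
  ...
  - id: 'debug-init-scripts'
    name: 'docker/compose:1.24.1'
    args: ['run', '--rm', 'mysql', 'ls', '-la', '/docker-entrypoint-initdb.d']
  ...
If 0-init.sql shows up as a directory in that listing, the bind mount is the culprit.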
Related
I am working on Windows 10 and running commands in Git Bash. I have the following docker-compose.yml file:
services:
  db:
    image: postgres:latest
    user: 1000:1000
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER='postgres'
      - POSTGRES_PASSWORD='postgres'
    ports:
      - 8000:8000
volumes:
  postgres-data:
    external: True
I created the postgres-data volume in the terminal by running docker volume create postgres-data, then I typed docker-compose up. I have read on the Internet that I need to use a named volume to run Postgres.
However, there is still an error:
initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
On top of that, once the Postgres db works in Docker, I want to add a web component. I have been following this tutorial: https://github.com/docker/awesome-compose/tree/master/official-documentation-samples/django/. What is missing in the docker-compose.yml?
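For reference, the web service in that tutorial looks roughly like the sketch below (the command, paths and port are taken from the sample and may need adjusting for your project); it would sit next to db under services: in the same docker-compose.yml:
services:
  ...
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db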
I develop in a local k8s cluster with minikube and skaffold, using Django and DRF for the API.
I'm working on a number of models.py files, and one thing that is starting to get annoying is that any time I run a ./manage.py command (showmigrations, makemigrations, etc.) it triggers a skaffold rebuild of the API nodes. It takes less than 10 seconds, but it's annoying nonetheless.
What should I exclude/include specifically from my skaffold.yaml to prevent this?
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
    - image: postgres
      context: postgres
      sync:
        manual:
          - src: "**/*.sql"
            dest: .
      docker:
        dockerfile: Dockerfile.dev
    - image: api
      context: api
      sync:
        manual:
          - src: "**/*.py"
            dest: .
      docker:
        dockerfile: Dockerfile.dev
  local:
    push: false
deploy:
  kubectl:
    manifests:
      - k8s/ingress/development.yaml
      - k8s/postgres/development.yaml
      - k8s/api/development.yaml
    defaultNamespace: development
It seems that ./manage.py must be recording some state locally, and thus triggering a rebuild. You need to add these state files to your .dockerignore.
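As a sketch (the exact entries depend on your project), a .dockerignore in the api build context along these lines keeps the usual local Django by-products out of Skaffold's watch:
# api/.dockerignore (hypothetical example)
__pycache__/
*.pyc
*.sqlite3
*.log
.pytest_cache/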
Skaffold normally logs at a warning level, which suppresses details of what triggers sync or rebuilds. Run Skaffold with -v info and you'll see more detail:
$ skaffold dev -v info
...
[node] Example app listening on port 3000!
INFO[0336] files added: [backend/src/foo]
INFO[0336] Changed file src/foo does not match any sync pattern. Skipping sync
Generating tags...
- node-example -> node-example:v1.20.0-8-gc9335b0ad-dirty
INFO[0336] Tags generated in 80.293621ms
Checking cache...
- node-example: Not found. Building
INFO[0336] Cache check completed in 1.844615ms
Found [minikube] context, using local docker daemon.
Building [node-example]...
I want to deploy my server to an AWS EC2 instance. When I enter sudo docker-compose up in the SSH console, I get the following error:
ERROR: for nginx Cannot start service nginx: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\"/home/ubuntu/site/nginx/default.conf\\" to rootfs \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged\\" at \\"/var/lib/docker/overlay2/b24f64910c6ab7727a4cb08afac0d034bb759baa4bfd605466ca760359f411c2/merged/etc/nginx/conf.d/default.conf\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
This is my docker-compose.yml file:
version: '2'
networks:
  SRVR:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: SRVR_nginx
    ports:
      - "8080:80"
    volumes:
      - ./code:/code
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/logs:/var/log/nginx
    depends_on:
      - php
    networks:
      - SRVR
  php:
    build: ./php
    container_name: SRVR_php
    volumes:
      - ./code:/code
    ports:
      - "9000:9000"
    networks:
      - SRVR
The same docker-compose.yml works fine on my local computer, which runs Ubuntu. The EC2 instance also runs Ubuntu.
Your problem is with ./nginx/default.conf: Docker recognises that path as a folder, while /etc/nginx/conf.d/default.conf inside the container is a file.
I was too hasty to ask. Here's what's going on: the 'default.conf' file has to be created manually on the host before bringing the containers up. If it isn't, bringing the nginx service up creates a folder called 'default.conf' in its place. Once I manually copied my original 'default.conf' file to the appropriate location, everything worked fine.
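For anyone hitting the same thing, a minimal recovery sequence on the host looks roughly like this (the source path of the config file is a placeholder):
ls -ld ./nginx/default.conf     # a leading "d" means Docker created it as a directory
sudo docker-compose down
rm -rf ./nginx/default.conf
cp /path/to/your/default.conf ./nginx/default.conf   # put the real file in place
sudo docker-compose up -d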
I'm setting up CircleCI to automatically build/deploy to AWS ECR & ECS.
But the build fails because there is no Dockerfile.
Maybe this is because I use docker-compose for multiple Docker images.
But I don't know how to resolve this issue.
Is there no way to avoid creating a Dockerfile and just use docker-compose?
front: React
backend: Golang
ci-tool: circle-ci
db: mysql
article
├ .circleci
├ client
├ api
└ docker-compose.yml
I set up .circleci/config.yml as follows:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code on GitHub:
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on CircleCI:
Build docker image
Exit code: 1
#!/bin/bash -eo pipefail
docker build \
\
-f Dockerfile \
-t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1
You must use a Dockerfile; check out the documentation for the orb you are using. Please read through it here. Also, docker-compose ≠ docker, so to confirm: one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS, I recommend deploying a MySQL instance on the free tier; please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove your database from CI, which is recommended, as ECS is not the ideal service for running a MySQL server.
Nginx Loadbalancer
Because you are using ECS, this is not required: AWS handles the load balancing for you, so the Nginx container is redundant.
Client App
Because this is a React application, you shouldn't deploy it to ECS; that is not cost effective. You would rather deploy it to Amazon S3. There are many resources on how to do this. You may follow this guide, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost, and it makes more sense than running an entire Docker container just to serve static files.
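As a rough sketch (the bucket name is a placeholder, and this assumes a standard Create React App build in ./client), deploying the static build to S3 boils down to:
cd client
npm install
npm run build                      # outputs static files to build/
aws s3 sync build/ s3://your-react-app-bucket --delete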
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your CircleCI config as follows, assuming we are using the same Dockerfile as in your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load balance your API service; please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change from what I've seen.
I want to stress yet again that you must load balance your API server according to these docs.
You do not need to edit your docker-compose.yml
I just tried switching from docker-compose to docker stacks/kubernetes. In Compose I was able to specify where the Postgres data volume was, and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file; I can connect to the pod and use psql to see the schema, but none of the data entered from docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem inside the VM.
Hint: you can run docker against docker daemon of minikube if you first run eval $(minikube docker-env)
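For example (assuming minikube is already running):
eval $(minikube docker-env)           # point this shell's docker CLI at minikube's daemon
docker ps                             # now lists the containers running inside the minikube VM
eval $(minikube docker-env --unset)   # switch back to the host daemon when done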
For Docker stacks, run the docker service inspect command; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
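Note that docker service inspect takes a service name, and in a stack the service is prefixed with the stack name. For example, assuming the stack was deployed as mystack:
docker stack deploy -c stack.yml mystack
docker service inspect \
  --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' \
  mystack_db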
Fixed in the last Docker Edge update.