How to fix "unable to prepare context: unable to evaluate symlinks in Dockerfile path" error in CircleCI - amazon-web-services

I'm setting up CircleCI to automatically build and deploy to AWS ECR & ECS.
But the build fails because no Dockerfile is found.
Maybe this is because I use docker-compose to build multiple Docker images.
But I don't know how to resolve this issue.
Do I have to create a Dockerfile instead of using docker-compose?
front: React
backend: Golang
ci-tool: circle-ci
db: mysql
article
 ├ .circleci
 ├ client
 ├ api
 └ docker-compose.yml
Here is my .circleci/config.yml:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code on GitHub:
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on CircleCI:
Build docker image
Exit code: 1
#!/bin/bash -eo pipefail
docker build \
\
-f Dockerfile \
-t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1

You must use a Dockerfile; check out the documentation for the orb you are using and read through it. Also, docker-compose ≠ docker, so I will confirm that one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS, I recommend running a MySQL instance on the RDS free tier; please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove your database from CI, which is recommended, as ECS is not the ideal service for running a MySQL server.
Nginx Loadbalancer
Because you are using ECS, this is not required: AWS handles the load balancing for you, so the Nginx container is redundant.
Client App
Because this is a React application, you shouldn't deploy it to ECS; that is not cost effective. You would rather deploy it to Amazon S3 as a static site. There are many resources on how to do this; you may follow one of the guides on hosting a React app on S3, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost, and it makes more sense than running an entire Docker container just to serve static files.
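As a rough sketch only (the bucket name and build commands below are assumptions about your client, not something taken from your repository), the deploy step can be as simple as building the React app and syncing the output to S3 with the AWS CLI:

# build the React client (assumes a standard create-react-app setup that outputs to build/)
cd client
npm install
npm run build

# upload the static files; <your-bucket-name> is a placeholder, and this assumes the
# AWS CLI is installed and AWS credentials are available in the environment
aws s3 sync build/ s3://<your-bucket-name> --delete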
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your CircleCI config as follows, assuming we are using the same Dockerfile.dev referenced in your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load balance your API service; please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change, from what I've seen.
I want to stress, yet again, that you must load balance your API server according to the docs above.
You do not need to edit your docker-compose.yml.

Related

Deploying an ECS application to AWS using docker compose

I am following the AWS tutorial on deploying an ECS application using docker compose.
When I run docker compose up, I only receive the message docker UpdateInProgress User Initiated, but nothing else happens:
[+] Running 0/0
- docker UpdateInProgress User Initiated 0.0s
Previously, this worked fine and all the ECS resources (cluster, task definitions, services, load balancer) had been created.
For some reason it no longer works (although I have not changed my docker-compose.yml file).
docker-compose.yml:
version: '3'
services:
  postgres:
    image: ${AWS_DOCKER_REGISTRY}/postgres
    networks:
      - my-network
    ports:
      - "5432:5432"
    volumes:
      - postgres:/data/postgres
  server:
    image: ${AWS_DOCKER_REGISTRY}/server
    networks:
      - my-network
    env_file:
      - .env
    ports:
      - "${PORT}:${PORT}"
    depends_on:
      - postgres
    entrypoint: "/server/run.sh"
  pgadmin:
    image: ${AWS_DOCKER_REGISTRY}/pgadmin
    networks:
      - my-network
    depends_on:
      - postgres
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:${PGADMIN_PORT:-5050}"
networks:
  my-network:
    #driver: bridge
volumes:
  postgres:
  pgadmin:
I also switched to the correct Docker context beforehand (docker context use my-aws-context).
And I have updated to the latest versions of Docker Desktop for Windows and the AWS CLI.
Has anyone had a similar problem?
From the message it appears that you are trying to compose up a stack that already exists (on AWS), so it's trying to update the CloudFormation (CFN) stack. Can you check if this is the case? You have a couple of options if that is what's happening: 1) delete the CFN stack (either in AWS or with docker compose down), or 2) launch docker compose up with the flag --project-name string (where string is an arbitrary name of your choice). By default compose uses the directory name as the project name, so if you compose up twice it will try to work on the same stack.
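For example (a sketch; my-app is an arbitrary placeholder name), either of these avoids updating the existing stack:

# option 1: tear down the existing CloudFormation stack first, then bring it up again
docker compose down
docker compose up

# option 2: deploy under a different project (and therefore CloudFormation stack) name
docker compose --project-name my-app up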

Cannot properly init mysql db (using docker-compose) on google cloud build

I'm currently setting up a build/test pipeline for my app (django) using Google Cloud Build (and testing using cloud-build-local).
In order to properly run the tests I need to start a MySQL dependency (I use docker-compose for this). The issue is that when running docker-compose in a Cloud Build step, the database init scripts are not run properly; I get:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
(running docker-compose outside of Google Cloud Build works properly)
Here's my docker-compose file:
version: '3.3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'dev'
      MYSQL_USER: 'dev'
      MYSQL_PASSWORD: 'dev'
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - reports-db:/var/lib/mysql-reports
      - ./dev/databases/init.sql:/docker-entrypoint-initdb.d/0-init.sql
      - ... (other init scripts)
volumes:
  reports-db:
And cloudbuild.yaml:
steps:
  ...
  - id: 'tests-dependencies'
    name: 'docker/compose:1.24.1'
    args: ['up', '-d']
  ...
The files are organized like this:
parent_dir/
  dev/
    databases/
      init.sql
  cloudbuild.yaml
  docker-compose.yml
  ...
(all commands are run from parent_dir/)
When I run
cloud-build-local --config=cloudbuild.yaml --dryrun=false .
I get a
...
Step #2 - "tests-dependencies": mysql_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/0-init.sql
Step #2 - "tests-dependencies": mysql_1 | ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
...
Knowing that running docker-compose up directly works properly, I suspect that the way the volumes are mounted is incorrect, but I can't find why/how.
If anyone has any input on this it would be really useful :)
Thanks in advance.
Looks like it's an issue specific to cloud-build-local; it works properly on GCP.

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to docker stacks/kubernetes. In compose I was able to specify where the postgres data volume was and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file; I can connect to the pod and use psql to see the schema, but none of the data entered via docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, keep in mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem inside the VM.
Hint: you can run docker against minikube's Docker daemon if you first run eval $(minikube docker-env), for example:
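(A sketch; the image tag and Dockerfile path below are taken from the stack.yml above.)

# point the local docker CLI at minikube's Docker daemon for this shell session
eval $(minikube docker-env)
# images built now end up inside the minikube VM, where the cluster can use them
docker build -t image -f docker/Dockerfile .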
For Docker stacks, run the docker service inspect command below; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
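For example, if the stack was deployed with docker stack deploy -c stack.yml mystack (the stack name here is an assumption), the Postgres service would be named mystack_db:

# print the host path backing the db service's volume mount
docker service inspect \
  --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' \
  mystack_db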
Fixed in the last Docker Edge update.

Docker for AWS and Selenium Grid - Connection refused / No route to host (Host unreachable)

What I am trying to achieve is a scalable and on-demand test infrastructure using Selenium Grid.
I can get everything up and running, but what I end up with are the errors in the title: connection refused / no route to host (host unreachable).
Here are all the pieces:
Docker for AWS (CloudFormation Stack)
docker-selenium
Docker compose file (below)
The "implied" software used are:
Docker swarm
Stacks
Here is what I can accomplish:
Create, log into, and ping all hosts & nodes within the stack, following the guidelines here: deploy Docker for AWS
Deploy using the compose file at the end of this inquiry by running:
docker stack deploy -c docker-compose.yml grid
View the Selenium Grid console using the public-facing DNS name automatically provided by AWS (upon successful creation of the stack). Here is a helpful entry on the subject: Docker Swarm Mode.
Here are the contents of the compose file I am using:
version: '3'
services:
  hub:
    image: selenium/hub:3.4.0-chromium
    ports:
      - 4444:4444
    networks:
      - selenium
    environment:
      - JAVA_OPTS=-Xmx1024m
    deploy:
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints: [node.role == manager]
  chrome:
    image: selenium/node-chrome:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
  firefox:
    image: selenium/node-firefox:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  selenium:
Any guidance on this issue will be greatly appreciated. Thank you.
I have also tried opening up ports across the swarm:
swarm-exec docker service update --publish-add 5555:5555 grid
A quick Google brought up https://github.com/SeleniumHQ/docker-selenium/issues/255. You need to add the following to the Chrome and Firefox nodes:
entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
This is because the containers have two IP addresses in Swarm Mode and the nodes are picking up the wrong address and advertising that to the hub. This change will have the nodes advertise their hostname so the hub can find the nodes by DNS instead.
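Applied to the compose file above, the chrome service would look roughly like this (the firefox service gets the same entrypoint line); this is only a sketch based on the linked issue:

  chrome:
    image: selenium/node-chrome:3.4.0-chromium
    # advertise the container hostname instead of the wrong overlay-network IP
    entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]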

Symfony Application on AWS ECS with a data-only container - Is this the right direction?

I have a dockerized Symfony2 Application consisting of four containers:
php-fpm
nginx
mysql
code (data container with a volume)
On my local machine this setup runs without problem with docker-compose:
code:
  image: ebc9f7b635b3
nginx:
  build: docker/nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - code
php:
  build: docker/php
  volumes_from:
    - code
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "5000:3306"
  command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  environment:
    - MYSQL_ROOT_PASSWORD=xyz
    - MYSQL_DATABASE=xyz
    - MYSQL_USER=xyz
    - MYSQL_PASSWORD=xyz
I wanted to deploy my application on AWS ECS, so I prebuilt all the images, pushed them to the AWS container registry, created a new cluster with a new service, and translated my local docker-compose.yml into a task definition.
I have been trying to get it running since yesterday, but after following the official documentation
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
and searching for hours, I can't find a way to get it working.
Either the service gets stuck in a PENDING state without bringing up any containers (except for the mysql container), or, if I attach a volume to the task definition, the containers come up but the data is not mapped.
Do I have to reference the data-only container with a special syntax in the volumesFrom section of the task definition?
Is there a solution for this by now other than using EFS?
Thanks!