Deploying Docker Compose on AWS using ecs-cli gives me errors

I get an internal error when I deploy my Docker Compose file to AWS using ecs-cli. The console reports that the service is up and running, and the AWS GUI shows the same, but when I try to open the link I get an internal error. (Screenshots of the AWS console view and the failing link not included.)
Dockerfile.txt
FROM clojure:openjdk-8-lein
RUN apt update && apt install -y git make python3 && apt clean
WORKDIR /opt
RUN mkdir my-project && cd my-project && git clone https://github.com/ThoughtWorksInc/infra-problem.git && cd infra-problem && make libs && make clean all
docker-compose.yml
version: "3"
services:
quotes:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: quotes
command: java -jar ./my-project/infra-problem//build/quotes.jar
environment:
- APP_PORT=9200
ports:
- 9200:9200
newsfeed:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: newsfeed
command: java -jar ./my-project/infra-problem/build/newsfeed.jar
environment:
- APP_PORT=5000
ports:
- 5000:5000
assets:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
container_name: assets
command: python3 ./my-project/infra-problem/front-end/public/serve.py
ports:
- 8000:8080
front-end:
image: selmensh/newsfeeds
build:
context: .
dockerfile: ./Dockerfile.txt
command: java -jar ./my-project/infra-problem/build/front-end.jar
environment:
- APP_PORT=8081
- STATIC_URL=http://assets:8000
- QUOTE_SERVICE_URL=http://quotes:9200
- NEWSFEED_SERVICE_URL=http://newsfeed:5000
- NEWSFEED_SERVICE_TOKEN=T1&eWbYXNWG1w1^YGKDPxAWJ#^et^&kX
depends_on:
- quotes
- newsfeed
ports:
- 80:8081
I also noticed that ECS does not support build, so I made an image and pushed it to Docker Hub. However, I see that this might have some security issues, since I clone the code in the Dockerfile. The reason I do this is that the code has a folder called utilities which is common to, and required by, all the other services.
Is there a better approach?
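One possible alternative is a multi-stage build: clone and compile in a throwaway builder stage, and let the runtime image receive only the built jars, so no git metadata or build tooling ships in the final image. This is a sketch, not the project's actual setup; the runtime base image openjdk:8-jre-slim is an assumption, and the assets service would still need a Python-capable image.
# Builder stage: clone and build once; only explicitly copied
# artifacts make it into the final image.
FROM clojure:openjdk-8-lein AS builder
RUN apt update && apt install -y git make && apt clean
WORKDIR /opt
RUN git clone https://github.com/ThoughtWorksInc/infra-problem.git \
    && cd infra-problem && make libs && make clean all

# Runtime stage: a slim JRE image with just the build output.
FROM openjdk:8-jre-slim
WORKDIR /opt
COPY --from=builder /opt/infra-problem/build /opt/build
CMD ["java", "-jar", "/opt/build/quotes.jar"]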

It would help if you could share more details; this looks like a problem inside your application. One tool I can point you to is Terraform, through which you can manage your infrastructure better (idempotently).

Related

Docker compose can't find entry point, but docker run can?

I'm getting this error when trying to run docker compose up:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./Nssams": stat ./Nssams: no such file or directory: unknown
This is my compose file
version: '3.5'
services:
  eventlogger:
    container_name: nssams-eventlogger_
    image: nssams-eventlogger
    environment:
      - MQTT_HOST=mqtt
    depends_on:
      - mqtt
    command: python run.py
  mqtt:
    container_name: nssams-mqtt_
    image: eclipse-mosquitto:1.6.15
    ports:
      - 1883:1883
  ebcmos:
    image: nssams-ebcmos
    container_name: ebcmos
    networks:
      - default
    volumes:
      - ./capture/ebcmos:/app/capture:wo
      - ./logs/ebcmos:/app/
      - /dev/ttyUSB0:/dev/ttyUSB0
      - ./xmp/ebcmos:/app/xmp/ebcmos
    environment:
      - CAMERA_NAME=ebcmos
    privileged: true
    depends_on:
      - mqtt
      - eventlogger
networks:
  default:
    driver: bridge
    name: nssams_bridge
Here is my Dockerfile:
FROM ubuntu:18.04 AS builder
# Install dependencies for building mqtt client from source.
RUN apt-get update && apt-get -y install build-essential git gcc make cmake cmake-gui cmake-curses-gui doxygen
# removed a bunch of irrelevant installs for stack overflow
RUN mkdir -p /app/build \
    && cd /app/build \
    && cmake .. \
    && cmake --build .
# Production Image
FROM debian:latest AS prod
LABEL maintainer=redacted
WORKDIR /app
# Copy NSSAMS executable to prod image.
COPY --from=builder /app/build/Nssams .
CMD [ "./Nssams" ]
but if I run the below command:
docker run -it --privileged --volume {volume} {image_name} bash
I can exec into it and see my Nssams executable in /app. I then just run ./Nssams in a shell, and the application starts up.
Why can't docker compose do the same?
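One way to narrow this down (a sketch of a debugging step, not a confirmed diagnosis): open the same shell through Compose, which applies the compose file's volume mounts, and check whether the binary is still visible. A bind mount like ./logs/ebcmos:/app/ mounts a host directory over /app and can hide files that were copied into the image at build time.
# Open a shell in the ebcmos service with the same mounts Compose uses
docker compose run --rm --entrypoint bash ebcmos

# Inside the container: if Nssams is missing and only log files are present,
# the ./logs/ebcmos:/app/ bind mount is shadowing the image's /app
ls -la /app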

docker-compose ps doesn't show me my containers

I'm new to Docker.
I have a docker-compose.yml file like this:
version: '3.7'
services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:
  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:
networks:
  spa_network_sarahmaso:
volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm deploying this on a server with a shell script running these commands:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
However, the supervisorctl command doesn't run correctly, even though the console tells me "Successfully built dc10bd26b175" after the docker build.
When I run docker-compose ps or docker ps -a I don't see any containers.
docker-compose ps asks me for a docker-compose.yml file, and if I do docker-compose ps -f path_to/docker-compose.yml the console shows me the help text:
List containers.
Usage: ps [options] [SERVICE...]
Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the run command)
How come I don't see my containers?
It seems your containers are not started.
With your line sudo docker-compose -f /app/docker-compose.yml build you are only building your images, as the console message tells you.
I do not know exactly what sudo supervisorctl restart react-wagtail-project does, but to me it does not look like a command to start your newly built containers.
Try to explicitly start your containers by adding
docker-compose -f /app/docker-compose.yml up -d
to your script (drop the -d if you don't want the containers detached).
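Putting that together, the deploy script might look like this (a sketch; the supervisor program name and the ufw port placeholder are carried over from the question):
#!/bin/sh
# Unpack the project
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app

# Build the images, then actually start the containers detached
sudo docker-compose -f /app/docker-compose.yml build
sudo docker-compose -f /app/docker-compose.yml up -d

sudo supervisorctl restart react-wagtail-project
sudo ufw allow port   # "port" is a placeholder from the question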

How to set environmental variables properly Gitlab CI/CD and Docker

I am new to Docker and to CI/CD with GitLab CI/CD. I have a .env file in the root directory of my Django project which contains my environment variables, e.g. SECRET_KEY=198191891. The .env file is included in .gitignore. I have set up these variables in the GitLab settings for CI/CD. However, the environment variables set in the GitLab CI/CD settings seem to be unavailable.
Also, how should the GitLab CI/CD automation process create a user and DB to connect to in order to run the tests? When creating the DB and its user on my local machine, I logged into the container with docker exec -it <postgres_container_name> /bin/sh and created the Postgres user and DB.
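For reference, those manual steps look roughly like this (a sketch; the user, password, and database names are placeholders, not the real values):
# open a shell in the running postgres container
docker exec -it <postgres_container_name> /bin/sh

# inside the container, create the test user and database (placeholder names)
psql -U postgres -c "CREATE USER writer_user WITH PASSWORD 'changeme';"
psql -U postgres -c "CREATE DATABASE writer_db OWNER writer_user;"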
Here are my relevant files.
docker-compose.yml
version: "3"
services:
postgres:
image: postgres
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data/
web:
build: .
command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
environment:
DEBUG: ${DEBUG}
DB_HOST: ${DB_HOST}
DB_NAME: ${DB_NAME}
DB_USER: ${DB_USER}
DB_PORT: ${DB_PORT}
DB_PASSWORD: ${DB_PASSWORD}
SENDGRID_API_KEY: ${SENDGRID_API_KEY}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_STORAGE_BUCKET_NAME: ${AWS_STORAGE_BUCKET_NAME}
depends_on:
- postgres
- redis
expose:
- "8000"
volumes:
- .:/writer-api
redis:
image: "redis:alpine"
celery:
build: .
command: celery -A writer worker -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
celery-beat:
build: .
command: celery -A writer beat -l info
volumes:
- .:/writer-api
depends_on:
- postgres
- redis
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
depends_on:
- web
volumes:
pgdata:
.gitlab-ci.yml
image: tmaier/docker-compose:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker-compose --version

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building the app"
    - docker-compose build

test:
  stage: test
  variables:
  script:
    - echo "Testing"
    - docker-compose run web coverage run manage.py test

deploy-staging:
  stage: deploy
  only:
    - develop
  script:
    - echo "Deploying staging"
    - docker-compose up -d

deploy-production:
  stage: deploy
  only:
    - master
  script:
    - echo "Deploying production"
    - docker-compose up -d
Here are my settings for my variables, and here is my failed pipeline job (screenshots not included).
The SECRET_KEY variable will be available to all your CI jobs, as configured. However, I don't see any references to it in your Docker Compose file to pass it to one or more of your services. For the web service to use it, you'd map it in like the other variables you already have.
web:
  build: .
  command: /usr/local/bin/gunicorn writer.wsgi:application -w 2 -b :8000
  environment:
    SECRET_KEY: ${SECRET_KEY}
    DEBUG: ${DEBUG}
    …
As for creating the database, you should wrap up whatever you currently run interactively in the postgres container in a SQL file or shell script, and then bind-mount it into the container's initialization scripts directory under /docker-entrypoint-initdb.d. See the Initialization scripts section of the postgres image's description for more details.
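For example, a minimal sketch that mounts an init script into the postgres service (the file name init-db.sql is an assumption, not part of the original setup):
postgres:
  image: postgres
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
    # any *.sql or *.sh file in this directory runs on first initialization
    - ./init-db.sql:/docker-entrypoint-initdb.d/init-db.sql
Note that these scripts run only when the container initializes an empty data directory; once the pgdata volume already holds a database, they are skipped.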
In my experience, the best way to pass environment variables to a Docker container is to create an environment file, which works for both development and production environments:
GitLab CI/CD variables
You must create an environment file on GitLab CI/CD. In your GitLab project, go to:
Settings > CI/CD > Variables
and create a variable named ENV_FILE.
Next, in your build stage in .gitlab-ci.yml, copy ENV_FILE to a local .env file:
.gitlab-ci.yml
build:
  stage: build
  script:
    - cp $ENV_FILE .env
    - echo "Building the app"
    - docker-compose build
Your Dockerfile can stay essentially as it is; it only needs to copy the .env file into the image:
Dockerfile
FROM python:3.8.6-slim
# Rest of setup goes here...
COPY .env .env
If the user is not in the docker group and you run Compose through sudo, you must add the -E flag to sudo so the environment variables are preserved for compose variable substitution:
script:
  - sudo -E docker-compose build

Docker compose could not open directory: permission denied

I am totally a newbie when it comes to Docker, and I am trying to understand it with a dummy project.
I have a Django project, and my Dockerfile is inside the Django project's root folder. My docker-compose.yml file is in the top-level folder, which contains the Django project folder and other config files.
my docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
and my Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /web
WORKDIR /web
ADD requirements.txt /web/
RUN pip install -r requirements.txt
ADD . /web/
I am trying to run the following commands
# stop and remove the existing containers
docker-compose stop
docker-compose rm -f
# up and run the container
docker-compose build
docker-compose up -d
docker-compose exec dummy_project bash
When I do docker-compose up -d, I see this error.
docker-compose up -d
dummy_project_postgres is up-to-date
Starting dummy_project ... done
warning: could not open directory 'data/db/': Permission denied
I know this question has been asked before, but I didn't quite get the solution I need, and I have been stuck for hours now.
EDIT: I have all the permissions for all the folders under the top folder.
EDIT2: sudo docker-compose up -d also results in the same error.
I solved it by adding ":z" to the end of the volume definition:
version: '3'
services:
  db:
    image: postgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data:z
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
What ":z" means
Labeling systems like SELinux require that proper labels are placed on
volume content mounted into a container. Without a label, the security
system might prevent the processes running inside the container from
using the content. By default, Docker does not change the labels set
by the OS.
To change the label in the container context, you can add either of
two suffixes :z or :Z to the volume mount. These suffixes tell Docker
to relabel file objects on the shared volumes. The z option tells
Docker that two containers share the volume content. As a result,
Docker labels the content with a shared content label. Shared volume
labels allow all containers to read/write content. The Z option tells
Docker to label the content with a private unshared label. Only the
current container can use a private volume.
https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container---volumes-from
what is 'z' flag in docker container's volumes-from option?
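For contrast, a minimal sketch of the private-label variant described in the quote above, for the case where only this one container should use the mount:
volumes:
  - ./data/db:/var/lib/postgresql/data:Z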
You're trying to mount ./data/db at /var/lib/postgresql/data, and you're executing docker-compose as a non-privileged user.
So there are two possibilities:
Problem with ./data/db permissions.
Problem with /var/lib/postgresql/data permissions.
The simplest solution is to execute docker-compose as a privileged user (root), but if you don't want to do that, you can try this:
Give permissions to ./data/db (I see from your EDIT that you've already done this).
Give permissions to /var/lib/postgresql/data.
How can you give /var/lib/postgresql/data permissions? Note that /var/lib/postgresql/data is auto-generated by the postgres image, so you need to define a new Dockerfile which modifies these permissions. After that, you also need to modify docker-compose.yml to use this new Dockerfile.
./docker-compose.yml
version: '3'
services:
  db:
    build:
      context: ./mypostgres
      dockerfile: Dockerfile_mypostgres
    container_name: dummy_project_postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
  event_planner:
    build: ./dummy_project
    container_name: dummy_project
    volumes:
      - .:/web
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:postgres
./dummy_project/Dockerfile --> without changes
./mypostgres/Dockerfile_mypostgres
FROM postgres
RUN mkdir -p /var/lib/postgresql/data \
    && chmod -R 777 /var/lib/postgresql/data
# the base image's ENTRYPOINT and CMD are inherited, so they do not need to be redeclared
This solution is for the case where your user is not in the docker group.
First, check whether your user is in the docker group:
grep 'docker' /etc/group
If the command returns nothing, create the docker group first:
sudo groupadd docker
If the group exists but your user is not listed in the output, add the user to the group:
sudo usermod -aG docker $USER
Reboot your system, then test it again:
docker run hello-world
Tip: remember to have the Docker service started.
If it works, try your docker-compose command again.

Volume share from container to host docker-compose

I'm trying to share data between a container and the host so that files created inside the container are persisted. The data must be shared from the container to the host.
My docker-compose.yml
version: "3.3"
services:
django:
image: python:slim
volumes:
- type: volume
source: ./env
target: /usr/local/lib/python3.6/site-packages
volume:
nocopy: true
- ./src:/usr/src/app
ports:
- '80:80'
working_dir: /usr/src/app
command: bash -c "pip install -r requirements.txt && python manage.py runserver"
When I run it, Docker throws this:
ERROR: for django Cannot create container for service django: invalid
bind mount spec
"/Users/gustavoopb/git/adv/env:/usr/local/lib/python3.6/site-packages:nocopy":
invalid volume specification:
'/Users/gustavoopb/git/adv/env:/usr/local/lib/python3.6/site-packages:nocopy':
invalid mount config for type "bind": field VolumeOptions must not be
specified ERROR: Encountered errors while bringing up the project.
https://docs.docker.com/compose/compose-file/#long-syntax-3
You're trying to use the named volume syntax with a bind mount. I'd switch your syntax to:
version: "3.3"
services:
django:
image: python:slim
volumes:
- type: bind
source: ./env
target: /usr/local/lib/python3.6/site-packages
- ./src:/usr/src/app
ports:
- '80:80'
working_dir: /usr/src/app
command: bash -c "pip install -r requirements.txt && python manage.py runserver"
Note the change in the type and the removal of the nocopy option. Copying files from the image to a host bind mount isn't supported; that is only available with named volumes.
My problem was keeping the Python environment when my container goes down. To do this, I needed to share the env directory inside the container with the host. I tried the Docker docs suggestion, but it wasn't working:
volume:
  nocopy: true
My solution: I created a named volume.
version: "2"
services:
django:
image: python:2.7
command: bash -c "pip install -r requirements.txt && python manage.py collectstatic --no-input && python manage.py migrate && python manage.py runserver 0.0.0.0:80"
env_file:
- .env
volumes:
- .:/app
- env:/Library/Python/2.7/site-packages
links:
- database
ports:
- "8000:80"
working_dir: /app
volumes:
env:
The volume sub-option shown in the docs applies to named volumes, which are declared at the top level rather than inline in the service. Try removing these lines from your volume definition:
volume:
  nocopy: true