Docker compose npm install took hours - amazon-web-services

I am using an AWS EC2 free-tier instance running Ubuntu 20.04. I am deploying nginx, Jenkins and a Nest.js application with Docker, but while building the Nest.js image, npm install glob rimraf took literally HOURS. Is it because the AWS free-tier instance is too small? When I monitored CPU usage from the EC2 console, it was almost 100%. However, glob and rimraf are not huge packages; they are only 56 kB and 70 kB. Yet downloading images like node is really fast.
Is there any way to speed up this process?
Here is my Dockerfile:
# development build stage (base image assumed to match the production stage below)
FROM node:12.19.0-alpine3.9 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install glob rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
services:
  proxy:
    image: nginx:latest # use the latest Nginx
    container_name: proxy # container name is proxy
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file
    restart: 'unless-stopped' # restart if the container dies because of an internal error
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: node dist/main
    # ports:
    #   - 3000:3000
    expose:
      - '3000' # open port 3000 to other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    ports:
      - 3000:3000
      - 9229:9229
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  jenkins:
    build:
      context: .
      dockerfile: ./jenkins/Dockerfile
    image: jenkins/jenkins
    restart: always
    container_name: jenkins
    user: root
    environment:
      - JENKINS_OPTS="--prefix=/jenkins"
      - TZ=Asia/Seoul
    ports:
      - 8080:8080
    expose:
      - '8080'
    networks:
      - nestjs-network
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  nestjs-network:
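Alternatively, would it make more sense to build the image on a stronger machine (or in a CI job), push it to a registry, and have the EC2 instance only pull it? Roughly like this, where your-registry/nestjs-api-prod is just a placeholder:
  prod:
    container_name: nestjs_api_prod
    # hypothetical pre-built image pushed from another machine;
    # with no build: section, docker compose up only pulls and never compiles on the instance
    image: your-registry/nestjs-api-prod:1.0.0
    command: npm run start:prod
    ports:
      - 3000:3000
    networks:
      - nestjs-network
    restart: unless-stopped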

Related

testdriven.io: The Definitive Guide to Celery and Django. All containers' logs show "Waiting for PostgreSQL to become available..."

I've followed each step of Part 1, Chapter 4 and literally copy-pasted the code as shown.
docker-compose is able to build the containers, but I always get "Waiting for PostgreSQL to become available..." logged from all the other containers, as shown below. (screenshot: docker-compose logs)
Following is the output of docker ps -a, from which I can see that all containers are running on their respective ports. (screenshot: docker ps -a)
I checked the logs of the db container and it shows it is running. (screenshot: postgres container logs)
But I'm unable to open the Django server on port 8010 or view the Flower server on port 5557, because the container logs keep showing "Waiting for PostgreSQL to become available..."
Someone please help; this issue is killing me. Each container's logs show it is running, yet I cannot reach the Django or Flower servers.
Let me know if you need more info.
Thanks!
What I have tried:
Checked that the DB is up and running on the correct port.
Checked that Redis is running.
Checked the logs of each running container; they all point to the same message, "Waiting for PostgreSQL to become available..."
The psycopg2-binary package was used.
Using psycopg2 instead worked for me.
This issue may be related to Mac M1.
I had the same problem with this tutorial (on a Mac M1) and it took me many hours. The solution for me was finally to use "alpine" instead of "buster"; here is my Dockerfile:
FROM alpine:3.17.1
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev py3-pip bash
# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./compose/local/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/local/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
requirements.txt:
django==4.1.4
celery==5.2.7
redis==4.3.4
flower==1.2.0
psycopg2-binary==2.9.5
And, though it may not be directly related, I added start-up conditions and health checks to the docker-compose file, so you don't need the Postgres "ready" polling in the entrypoint script.
Here is my docker-compose.yml:
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_web
    command: /start
    volumes:
      - .:/app
    ports:
      - 8010:8000
    env_file:
      - ./.env/.dev-sample
    depends_on:
      redis:
        condition: service_started
      db:
        condition: service_healthy
  db:
    image: postgres:14.6-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=hello_django
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hello_django"]
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    networks:
      - default
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    networks:
      - default
    depends_on:
      redis:
        condition: service_started
      db:
        condition: service_healthy
  celery_beat:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_beat
    command: /start-celerybeat
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    depends_on:
      redis:
        condition: service_started
      db:
        condition: service_healthy
  flower:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_flower
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    ports:
      - 5557:5555
    depends_on:
      redis:
        condition: service_started
      db:
        condition: service_healthy
volumes:
  postgres_data:

How do you make a synced volume with Docker, Django and Gunicorn?

Google's solutions do not help; I feel like the problem is that I am using Gunicorn as the local server. I just can't get my volume to sync and update when I change local files. How do I do that? Forcing a rebuild of the volume every time sounds highly inefficient.
I tried Watchtower as well but had no luck.
compose.yml
services:
  back:
    container_name: blog-django
    build: ./blog-master
    command: gunicorn blog.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    links:
      - db
    volumes:
      - .:/app
      - blog-django:/usr/src/app/
      - blog-static:/usr/src/app/static
    env_file: ./.env
    depends_on:
      db:
        condition: service_healthy
  nginx:
    container_name: blog-nginx
    build: ./nginx/
    ports:
      - "1337:80"
    volumes:
      - blog-static:/usr/src/app/static
    links:
      - back
    depends_on:
      - back
  db:
    container_name: blog-db
    image: postgres:14
    restart: always
    expose:
      - "5432"
    environment:
      - POSTGRES_DB=docker
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U docker"]
      interval: 5s
      timeout: 5s
      retries: 5
  mailhog:
    container_name: mailhog
    image: mailhog/mailhog
    #logging:
    #  driver: 'none' # disable saving logs
    expose:
      - 1025
    ports:
      - 1025:1025 # smtp server
      - 8025:8025 # web ui
volumes:
  blog-django:
  blog-static:
  pgdata:
Dockerfile
FROM python:3.9.6-alpine
WORKDIR /usr/src/app/
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN pip install --upgrade pip
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
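Would something like this fix it? The Dockerfile's WORKDIR is /usr/src/app/, but my bind mount points at /app and the named volume blog-django hides the code, so maybe I should bind-mount the source into the real working directory and run Gunicorn with --reload. A sketch (the ./blog-master path is my guess at where the Django code lives, based on the build context):
services:
  back:
    container_name: blog-django
    build: ./blog-master
    # --reload restarts the Gunicorn workers whenever a source file changes
    command: gunicorn blog.wsgi:application --bind 0.0.0.0:8000 --reload
    volumes:
      # bind-mount the host source into the directory the image actually runs from
      - ./blog-master:/usr/src/app/
      - blog-static:/usr/src/app/static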

Use a pre-existing network without modifying inbound ports

I have configured the following compose.yml:
version: "3.9"
x-aws-cluster: cluster
x-aws-vpc: vpc-A
x-aws-loadbalancer: balancerB
services:
backend:
image: backend:1.0
build:
context: ./backend
dockerfile: ./dockerfile
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8002"
networks:
- default
- allowedips
ports:
- "8002:8002"
frontend:
tty: true
image: frontend:1.0
build:
context: ./frontend
dockerfile: ./dockerfile
command: >
sh -c "npm i --save-dev vue-loader-v16
npm install
npm run serve"
ports:
- "8080:8080"
networks:
- default
- allowedips
depends_on:
- backend
networks:
default:
external: true
name: sg-1
allowedips:
external: true
name: sg-2
My intention was:
sg-1: the default security group
sg-2: a security group allowing specific IPs
If I run docker compose up -d, it runs without any problem and I can use the app.
My doubt is that the process creates:
Allowedips8002Ingress
Allowedips8080Ingress
Default8002Ingress
Default8080Ingress
I don't want this; I will already have the allowed-IP inbound rules in sg-2. How can I avoid this?

"error readlink /var/lib/docker/overlay2 invalid argument" when I run docker-compose up

I'm trying to dockerize a project written with Django and PostgreSQL. What I've already done:
I have environment variables stored in env_file:
SECRET_KEY=value
DEBUG=value
ALLOWED_HOSTS=['192.168.99.100']
DB_NAME=postgres
DB_USER=postgres
DB_PASSWORD=postgres
DB_HOST=db
DB_PORT=5432
My Dockerfile:
FROM python:3.7-stretch
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /code/
WORKDIR /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres:11-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_PASSWORD: postgres
    container_name: "my_postgres"
    ports:
      - "5432:5432"
  web:
    build: .
    command: python /code/cameo/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code/
    env_file:
      - env_file
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
Please help, I don't know what I'm doing wrong.
If you are running Docker on Windows, some files may have been corrupted.
The following command worked for me:
docker-compose down --rmi all

Why do the other Docker containers not see the applied migrations?

docker-compose.yml
version: '3'
services:
  # Django web server
  web:
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
      - "./phantomjs-2.1.1:/app/phantomjs"
    build:
      context: .
      dockerfile: dockerfile_django
    #command: python manage.py runserver 0.0.0.0:8080
    #command: ["uwsgi", "--ini", "/app/back/uwsgi.ini"]
    ports:
      - "8080:8080"
    links:
      - async
      - ws_server
      - mysql
      - redis
  async:
    volumes:
      - "./app/async_web:/app"
    build:
      context: .
      dockerfile: dockerfile_async
    ports:
      - "8070:8070"
  # Aiohttp web socket server
  ws_server:
    volumes:
      - "./app/ws_server:/app"
    build:
      context: .
      dockerfile: dockerfile_ws_server
    ports:
      - "8060:8060"
  # MySQL db
  mysql:
    image: mysql/mysql-server:5.7
    volumes:
      - "./db_mysql:/var/lib/mysql"
      - "./my.cnf:/etc/my.cnf"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: user_b520
      MYSQL_PASSWORD: buzz_17KN
      MYSQL_DATABASE: dev_NT_pr
      MYSQL_PORT: 3306
    ports:
      - "3300:3306"
  # Redis
  redis:
    image: redis:4.0.6
    build:
      context: .
      dockerfile: dockerfile_redis
    volumes:
      - "./redis.conf:/usr/local/etc/redis/redis.conf"
    ports:
      - "6379:6379"
  # Celery worker
  celery:
    build:
      context: .
      dockerfile: dockerfile_celery
    command: celery -A backend worker -l info --concurrency=20
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Celery beat
  beat:
    build:
      context: .
      dockerfile: dockerfile_beat
    command: celery -A backend beat
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    links:
      - redis
  # Flower monitoring
  flower:
    build:
      context: .
      dockerfile: dockerfile_flower
    command: celery -A backend flower
    volumes:
      - "./app/back:/app"
      - "../front/public/static:/app/static"
    ports:
      - "5555:5555"
    links:
      - redis
dockerfile_django
FROM python:3.4
RUN mkdir /app
WORKDIR /app
ADD app/back/requirements.txt /app
RUN pip3 install -r requirements.txt
# Apply migrations
CMD ["python", "manage.py", "migrate"]
#CMD python manage.py runserver 0.0.0.0:8080 & cron && tail -f /var/log/cron.log
CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]
In the web container the migrations are applied and everything is working.
I also added CMD ["python", "manage.py", "migrate"] to dockerfile_celery, dockerfile_flower and dockerfile_beat, but the migrations are not applied there.
I restart the containers using the command:
docker-compose up --force-recreate
How do I make the rest of the containers see the migrations?
log
flower_1 | File "/usr/local/lib/python3.4/site-packages/MySQLdb/connections.py", line 292, in query
flower_1 | _mysql.connection.query(self, query)
flower_1 | django.db.utils.OperationalError: (1054, "Unknown column 'api_communities.is_closed' in 'field list'")
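Could the cause be that a Dockerfile only executes the last CMD, so the CMD ["python", "manage.py", "migrate"] lines never run and the uwsgi/celery commands replace them? If so, would running the migration as part of the web service's command in docker-compose work, something like this sketch:
  web:
    build:
      context: .
      dockerfile: dockerfile_django
    # the shell form chains migrate and uwsgi; this command overrides every CMD in the Dockerfile
    command: sh -c "python manage.py migrate && uwsgi --ini /app/uwsgi.ini"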