I have configured the following compose.yml:
version: "3.9"
x-aws-cluster: cluster
x-aws-vpc: vpc-A
x-aws-loadbalancer: balancerB
services:
backend:
image: backend:1.0
build:
context: ./backend
dockerfile: ./dockerfile
command: >
sh -c "python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8002"
networks:
- default
- allowedips
ports:
- "8002:8002"
frontend:
tty: true
image: frontend:1.0
build:
context: ./frontend
dockerfile: ./dockerfile
command: >
sh -c "npm i --save-dev vue-loader-v16
npm install
npm run serve"
ports:
- "8080:8080"
networks:
- default
- allowedips
depends_on:
- backend
networks:
default:
external: true
name: sg-1
allowedips:
external: true
name: sg-2
I intended the networks to map like this:
sg-1: Default security group
sg-2: Allowed IPs access
If I run
docker compose up -d
it runs well without any problem and I can use the app.
My doubt is that the process creates these ingress rules:
Allowedips8002Ingress
Allowedips8080Ingress
Default8002Ingress
Default8080Ingress
I don't want this; I will manage the allowed-IP inbound rules in sg-2 myself. How can I avoid it?
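For reference, each of those names is a generated CloudFormation AWS::EC2::SecurityGroupIngress resource: the ECS integration emits one per published port for every attached security group. A hedged sketch of the x-aws-cloudformation overlay mechanism, which lets you override a generated resource by its logical ID (the group ID and CIDR below are assumptions, not values from the question):

x-aws-cloudformation:
  Resources:
    Allowedips8002Ingress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        GroupId: sg-2            # your external security group's actual ID
        IpProtocol: tcp
        FromPort: 8002
        ToPort: 8002
        CidrIp: 203.0.113.0/24   # hypothetical allowed range; replace with yours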
Related
I've been having issues connecting to my Elasticsearch container since day one.
First I used elasticsearch as the hostname, then I tried the container name web_elasticsearch_1, and finally I set a static IP address for the container and passed it in my configuration file.
PyPI packages:
django==3.2.9
elasticsearch==7.15.1
elasticsearch-dsl==7.4.0
docker-compose.yml
version: "3.3"
services:
web:
build:
context: .
dockerfile: local/Dockerfile
image: project32439/python
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
env_file:
- local/python.env
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:7.10.1
environment:
- xpack.security.enabled=false
- discovery.type=single-node
networks:
default:
ipv4_address: 172.18.0.10
settings.py
# Elasticsearch
ELASTICSEARCH_HOST = "172.18.0.10"
ELASTICSEARCH_PORT = 9200
service.py
from django.conf import settings
from elasticsearch import Elasticsearch, RequestsHttpConnection
es = Elasticsearch(
hosts=[{"host": settings.ELASTICSEARCH_HOST, "port": settings.ELASTICSEARCH_PORT}],
use_ssl=False,
verify_certs=False,
connection_class=RequestsHttpConnection,
)
traceback
HTTPConnectionPool(host='172.18.0.10', port=9200): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1973ebd6d0>, 'Connection to 172.18.0.10 timed out. (connect timeout=5)'))
By default Docker Compose uses a bridge network to provision inter-container communication. You can read more about this network at the Debian Wiki.
What matters for you is that, by default, Docker Compose creates a hostname equal to the service name in the docker-compose.yml file. So update your file:
version: "3.3"
services:
web:
build:
context: .
dockerfile: local/Dockerfile
image: project32439/python
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
env_file:
- local/python.env
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:7.10.1
environment:
- xpack.security.enabled=false
- discovery.type=single-node
And now you can connect to elasticsearch:9200 instead of 172.18.0.10 from your web container. For more info see this article.
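To make that concrete, here is a minimal compose sketch of the same idea; the ELASTICSEARCH_HOST/ELASTICSEARCH_PORT variables are assumptions (the question's settings.py hardcodes the values), but the point is that the hostname is simply the service name:

version: "3.3"
services:
  web:
    image: project32439/python
    environment:
      - ELASTICSEARCH_HOST=elasticsearch   # service name, resolved by Compose's built-in DNS
      - ELASTICSEARCH_PORT=9200
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node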
I am using an AWS EC2 free-tier instance running Ubuntu 20.04. I am deploying nginx, Jenkins, and a Nest.js application with Docker, but while building the Nest.js image, npm install glob rimraf took literally HOURS. Is it because the AWS free-tier instance is too small? When I monitored CPU usage from the EC2 console, it was almost 100%. However, glob and rimraf are not huge packages; they are only 56 kB and 70 kB... yet downloading images like node is really fast.
Is there any way to speed up this process?
Here are my Dockerfile and docker-compose.yaml
# development build stage (base image assumed to match the production stage below)
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install glob rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
services:
proxy:
image: nginx:latest # use the latest Nginx version
container_name: proxy # the container name is proxy
ports:
- '80:80' # map host port 80 to container port 80
networks:
- nestjs-network
volumes:
- ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file
restart: 'unless-stopped' # restart if the container dies from an internal error
dev:
container_name: nestjs_api_dev
image: nestjs-api-dev:1.0.0
build:
context: .
target: development
dockerfile: ./Dockerfile
command: node dist/main
# ports:
# - 3000:3000
expose:
- '3000' # open port 3000 to other containers
networks:
- nestjs-network
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
restart: unless-stopped
prod:
container_name: nestjs_api_prod
image: nestjs-api-prod:1.0.0
build:
context: .
target: production
dockerfile: ./Dockerfile
command: npm run start:prod
ports:
- 3000:3000
- 9229:9229
networks:
- nestjs-network
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
restart: unless-stopped
jenkins:
build:
context: .
dockerfile: ./jenkins/Dockerfile
image: jenkins/jenkins
restart: always
container_name: jenkins
user: root
environment:
  - JENKINS_OPTS="--prefix=/jenkins"
  - TZ=Asia/Seoul
ports:
- 8080:8080
expose:
- '8080'
networks:
- nestjs-network
volumes:
- ./jenkins_home:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
networks:
nestjs-network:
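Since the EC2 console shows CPU at almost 100% while image downloads are fast, the bottleneck is the instance's compute, not the network; npm spends its time resolving and compiling, which the free-tier instance throttles. One common direction, sketched under the assumption that a registry is available, is to stop building on the instance altogether: build and push the image from a faster machine or CI, and let the EC2 host only pull it. The registry path below is a placeholder:

services:
  prod:
    # Built and pushed from a faster machine, e.g.:
    #   docker compose build prod && docker push REGISTRY/nestjs-api-prod:1.0.0
    # The EC2 host then only pulls the finished image.
    image: REGISTRY/nestjs-api-prod:1.0.0   # placeholder registry path
    command: npm run start:prod
    ports:
      - 3000:3000
    networks:
      - nestjs-network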
My Django app is failing to connect to the psql container with the standard connection refused error. I used django-cookiecutter, which supplies the psql username and password automatically via environment variables; as I gather, this is then passed back into Django via a .env file that holds a DATABASE_URL string.
Error
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
When I set a breakpoint in django settings I can see that the DATABASE_URL seems to be converted appropriately into the standard db dict:
{'NAME': 'hustlestat', 'USER': 'HjhPLEwuVjUIIKEHebPqNG<redacted>', 'PASSWORD': 'I43443fR42wRkUaaQ8mkd<redacted>', 'HOST': 'postgres', 'PORT': 5432, 'ENGINE': 'django.db.backends.postgresql'}
When I exec into the psql container with psql hustlestat -U HjhPLEwuVjUIIKEHebPqN<redacted>, I can connect to the db using that username. I'm not 100% sure about the password, as it doesn't ask me for one when I try to connect.
Here is the docker compose which is generated automatically by cookie cutter:
version: '3'
volumes:
local_postgres_data: {}
local_postgres_data_backups: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: hustlestat_local_django
container_name: django
depends_on:
- postgres
- mailhog
volumes:
- .:/app:z
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: hustlestat_production_postgres
container_name: postgres
volumes:
- local_postgres_data:/var/lib/postgresql/data:Z
- local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
docs:
image: hustlestat_local_docs
container_name: docs
build:
context: .
dockerfile: ./compose/local/docs/Dockerfile
env_file:
- ./.envs/.local/.django
volumes:
- ./docs:/docs:z
- ./config:/app/config:z
- ./hustlestat:/app/hustlestat:z
ports:
- "7000:7000"
command: /start-docs
mailhog:
image: mailhog/mailhog:v1.0.0
container_name: mailhog
ports:
- "8025:8025"
redis:
image: redis:5.0
container_name: redis
celeryworker:
<<: *django
image: hustlestat_local_celeryworker
container_name: celeryworker
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celeryworker
celerybeat:
<<: *django
image: hustlestat_local_celerybeat
container_name: celerybeat
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celerybeat
flower:
<<: *django
image: hustlestat_local_flower
container_name: flower
ports:
- "5555:5555"
command: /start-flower
node:
build:
context: .
dockerfile: ./compose/local/node/Dockerfile
image: hustlestat_local_node
container_name: node
depends_on:
- django
volumes:
- .:/app:z
# http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
- /app/node_modules
command: npm run dev
ports:
- "3000:3000"
# Expose browsersync UI: https://www.browsersync.io/docs/options/#option-ui
- "3001:3001"
The only oddity I have noticed is that despite django being named in the docker-compose file, when I view the running containers it has a random name such as:
hustlestat_django_run_37888ff2c9ca
Not sure if that is relevant.
Thanks for any help!
Okay, I have figured this out. I set a DATABASE_URL environment variable because I was originally getting an error saying it was unset. After googling, I came across a cookiecutter doc that said to set it, but I didn't read it carefully enough to realise that the instruction was intended for non-Docker setups. Mine is Docker.
The reason I was getting that error is that I was exec'ing into the container and running management commands like this:
docker exec -it django bash then python manage.py migrate
The way this project and its environment variables are set up, you can't do that; you have to run the command from outside the container like this:
docker-compose -f local.yml run --rm django python manage.py migrate
I thought the two methods were interchangeable but they are not. Everything works now.
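To make the root cause concrete: inside a container, 127.0.0.1 refers to that container itself, not to the host or the database, so a hand-set DATABASE_URL pointing at 127.0.0.1 can never reach the postgres service. A sketch with placeholder credentials (cookiecutter normally builds this value from ./.envs/.local/.postgres rather than from a hand-set variable):

services:
  django:
    environment:
      # Wrong inside a container; 127.0.0.1 is the django container itself:
      #   DATABASE_URL: postgres://user:pass@127.0.0.1:5432/hustlestat
      # The compose service name is the reachable host:
      DATABASE_URL: postgres://user:pass@postgres:5432/hustlestat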
docker-compose.yml
version: '3'
services:
# Django web server
web:
volumes:
- "./app/back:/app"
- "../front/public/static:/app/static"
- "./phantomjs-2.1.1:/app/phantomjs"
build:
context: .
dockerfile: dockerfile_django
#command: python manage.py runserver 0.0.0.0:8080
#command: ["uwsgi", "--ini", "/app/back/uwsgi.ini"]
ports:
- "8080:8080"
links:
- async
- ws_server
- mysql
- redis
async:
volumes:
- "./app/async_web:/app"
build:
context: .
dockerfile: dockerfile_async
ports:
- "8070:8070"
# Aiohtp web socket server
ws_server:
volumes:
- "./app/ws_server:/app"
build:
context: .
dockerfile: dockerfile_ws_server
ports:
- "8060:8060"
# MySQL db
mysql:
image: mysql/mysql-server:5.7
volumes:
- "./db_mysql:/var/lib/mysql"
- "./my.cnf:/etc/my.cnf"
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: user_b520
MYSQL_PASSWORD: buzz_17KN
MYSQL_DATABASE: dev_NT_pr
MYSQL_PORT: 3306
ports:
- "3300:3306"
# Redis
redis:
image: redis:4.0.6
build:
context: .
dockerfile: dockerfile_redis
volumes:
- "./redis.conf:/usr/local/etc/redis/redis.conf"
ports:
- "6379:6379"
# Celery worker
celery:
build:
context: .
dockerfile: dockerfile_celery
command: celery -A backend worker -l info --concurrency=20
volumes:
- "./app/back:/app"
- "../front/public/static:/app/static"
links:
- redis
# Celery beat
beat:
build:
context: .
dockerfile: dockerfile_beat
command: celery -A backend beat
volumes:
- "./app/back:/app"
- "../front/public/static:/app/static"
links:
- redis
# Flower monitoring
flower:
build:
context: .
dockerfile: dockerfile_flower
command: celery -A backend flower
volumes:
- "./app/back:/app"
- "../front/public/static:/app/static"
ports:
- "5555:5555"
links:
- redis
dockerfile_django
FROM python:3.4
RUN mkdir /app
WORKDIR /app
ADD app/back/requirements.txt /app
RUN pip3 install -r requirements.txt
# Apply migrations
CMD ["python", "manage.py", "migrate"]
#CMD python manage.py runserver 0.0.0.0:8080 & cron && tail -f /var/log/cron.log
CMD ["uwsgi", "--ini", "/app/uwsgi.ini"]
In the web container the migrations are applied and everything works.
I also added CMD ["python", "manage.py", "migrate"] to dockerfile_celery/flower/beat, but the migrations are not applied.
I restart the container using the command:
docker-compose up --force-recreate
How can I make the rest of the containers see the migrations?
log
flower_1 | File "/usr/local/lib/python3.4/site-packages/MySQLdb/connections.py", line 292, in query
flower_1 | _mysql.connection.query(self, query)
flower_1 | django.db.utils.OperationalError: (1054, "Unknown column 'api_communities.is_closed' in 'field list'")
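One thing worth knowing here: a Dockerfile only honours the last CMD, so the migrate CMD in dockerfile_django (and the copies added to the celery/flower/beat files) is overridden by the CMD after it and never runs. A hedged sketch of one common arrangement, using a dedicated one-shot service (names assumed) so the migration runs once and the workers start afterwards:

services:
  migrate:
    build:
      context: .
      dockerfile: dockerfile_django
    command: python manage.py migrate --noinput   # one-shot, exits when done
  celery:
    build:
      context: .
      dockerfile: dockerfile_celery
    command: celery -A backend worker -l info --concurrency=20
    depends_on:
      - migrate   # caveat: classic v3 depends_on waits for start, not completion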
I'm setting up my project, and I configured BrowserSync to handle my front-end assets and refresh the browser.
For the back end I'm using Django, so I made a proxy between the two so they work at the same time:
gulpfile.js
// Start a server with BrowserSync to preview the site
function server(done) {
browser.init({
// server: PATHS.dist, //port: PORT
proxy: 'localhost:8000',
notify: false
});
done();
}
But it doesn't work when I bring the project up with Compose; it simply doesn't show me anything:
sass_1 | [BS] Proxying: http://localhost:8000
sass_1 | [BS] Access URLs:
sass_1 | -----------------------------------
sass_1 | Local: http://localhost:3000
sass_1 | External: http://172.18.0.7:3000
sass_1 | -----------------------------------
sass_1 | UI: http://localhost:3001
sass_1 | UI External: http://172.18.0.7:3001
sass_1 | -----------------------------------
sass_1 | [BS] Couldn't open browser (if you are using BrowserSync in a headless environment, you might want to set the open option to false)
It works fine when I run it from my computer without Docker, but with Docker it can't open my browser and I can't reach port 3000.
I had the same problem with django-debug-toolbar, but I solved that by adding the internal IPs taken from the Docker configuration.
I tried changing the port inside the gulpfile to the gateway IP (the fix for the previous bug), but it doesn't work.
My docker-compose file is:
version: '2'
services:
web:
build: .
image: uzman
command: python manage.py runserver 0.0.0.0:8000
ports:
- "3000:3000"
- "8000:8000"
volumes:
- .:/code
depends_on:
- npm
- bower
- sass
- migration
- db
- redis
- elasticsearch
db:
image: postgres:latest
volumes:
- .:/tmp/data/
npm:
image: uzman
command: npm install
volumes:
- ./uzman/static:/code/uzman/static
working_dir: /code/uzman/static
bower:
image: uzman
command: bower install --allow-root
volumes:
- ./uzman/static:/code/uzman/static
working_dir: /code/uzman/static
elasticsearch:
image: elasticsearch:latest
command: elasticsearch -Enetwork.host=0.0.0.0
ports:
- "9200:9200"
- "9300:9300"
redis:
image: redis:latest
sass:
image: uzman
command: npm start
volumes:
- ./uzman/static:/code/uzman/static
working_dir: /code/uzman/static
migration:
image: uzman
command: python manage.py migrate --noinput
volumes:
- .:/code
Can anyone help me with this?
The problem is that BrowserSync isn't running at the same time, so it's not possible to access it.
So publish the ports for BrowserSync:
ports:
- "3000:3000"
- "3001:3001"
- "8000:8000"
And execute npm:
docker exec -it uzman_web_1 bash -c "cd /code/uzman/static; exec npm start"
That's all.
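An alternative to exec'ing into the web container is to keep BrowserSync in its own service and point the gulpfile proxy at the web service by name; inside the sass container, localhost:8000 is the sass container itself, not Django. A sketch, assuming the gulpfile is changed to proxy 'web:8000':

sass:
  image: uzman
  command: npm start
  working_dir: /code/uzman/static
  volumes:
    - ./uzman/static:/code/uzman/static
  ports:
    - "3000:3000"   # BrowserSync proxied site
    - "3001:3001"   # BrowserSync UI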