Ember + Docker slow during transpilation - ember.js

I'm attempting to Dockerize my local development setup to make it much simpler to onboard new developers. Part of my setup is an Ember application. I've followed the instructions at this repository, but am running into huge delays when the Ember app starts up. It gets to the point where it says Serving on http://localhost:4200, and then there's a significant delay (on the order of tens of minutes) between that message and the output where Ember CLI reports how long everything took to compile. That said, the compilation time it reports is only a few minutes.
My docker-compose.yml file:
version: '2'
services:
  nginx:
    container_name: 'nginx'
    image: jwilder/nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
  frontend:
    container_name: 'frontend'
    env_file: .env
    depends_on:
      - nginx
      - api
    environment:
      - VIRTUAL_HOST=*.scout.dev
      - VIRTUAL_PORT=4200
    image: scoutforpets/ember-cli
    command: bash -c "npm i && GIT_DIR=/tmp bower i --allow-root && ember s --watcher polling"
    volumes:
      - ./app-business/:/app/
      - ./app-business/ssl/:/etc/nginx/certs/
    ports:
      - "4200:4200" # Default Port
      - "49152:49152" # Live Reload Port
  api:
    container_name: 'api'
    env_file: .env
    command: bash -c "npm i -s && npm run start-debug"
    image: node:6.3.1
    depends_on:
      - postgres
      - redis
    ports:
      - "3001:3001" # Default Port
      - "9229:9229" # Debug Port
    working_dir: /app/
    volumes:
      - ./api/:/app/
  postgres:
    container_name: 'postgres'
    image: scoutforpets/postgres
    ports:
      - "5432:5432"
  redis:
    container_name: 'redis'
    image: redis
    ports:
      - "6379:6379"
Note that my project is mounted from the file system (I'm running OS X Yosemite). I've heard some talk about mounted file systems being slow, but I'm having trouble finding a definitive answer.
If someone is successfully using Ember + Docker, I'd love to hear what you're doing!
Thanks!

If you're using Docker for Mac, there is a known issue with the ember build command being slow.
Docs: https://docs.docker.com/docker-for-mac/troubleshoot/#/known-issues
There are a number of issues with the performance of directories bind-mounted with osxfs. In particular, writes of small blocks, and traversals of large directories are currently slow. Additionally, containers that perform large numbers of directory operations, such as repeated scans of large directory trees, may suffer from poor performance. Applications that behave in this way include:
rake
ember build
Symfony
Magento
As a work-around for this behavior, you can put vendor or third-party library directories in Docker volumes, perform temporary file system operations outside of osxfs mounts, and use third-party tools like Unison or rsync to synchronize between container directories and bind-mounted directories. We are actively working on osxfs performance using a number of different techniques and we look forward to sharing improvements with you soon.
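
Building on that workaround, here is a minimal sketch of how the frontend service above could keep its dependency trees out of the osxfs bind mount by using named Docker volumes (the volume names are illustrative, not from the original setup):

frontend:
  image: scoutforpets/ember-cli
  command: bash -c "npm i && GIT_DIR=/tmp bower i --allow-root && ember s --watcher polling"
  volumes:
    - ./app-business/:/app/
    # keep heavy, small-file-dense directories in named volumes instead of the osxfs mount
    - frontend_node_modules:/app/node_modules
    - frontend_bower_components:/app/bower_components

volumes:
  frontend_node_modules:
  frontend_bower_components:

The tmp/ directory that ember-cli writes to during a build is another candidate for the same treatment, since it sees a lot of small writes.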

Related

Docker on Windows: Vmmem grows with each rebuild of container

I have a Django + Gunicorn + Nginx website running in a Docker container on Windows 10 that's been working wonderfully.
Every time that I update my source code I run the following command to rebuild the container:
docker-compose up -d --build [servicename]
This runs as expected: it builds the container detached and then swaps them out when complete; Nginx stays running, loses connection for a second, then boom, back up and running.
I'll do this maybe 1-5 times a week depending on what I'm adding/fixing and pushing to production.
The issue I just came across is that the Vmmem process grows by roughly the same amount each time: when I run docker-compose up for the first time, memory usage is about 3,000 MB. After my rebuild it grows to 6,000 MB, and on and on...
Since the system I'm running on has 32GB of RAM, I didn't notice the mass amount of buildup until a few months in of running.
Is this normal behavior for Docker? And if so, is there anything I can do to alleviate the build-up other than restarting the computer?
- Restarting Docker does not solve the issue
- Pruning does not solve the issue
Edit:
Here is the docker-compose.yml I'm currently using;
version: '3.8'
services:
  nginx:
    restart: always
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./config:/etc/nginx/conf.d
      - ./static:/static
      - ./media:/media
  site:
    restart: always
    build: ./
    container_name: site
    command: bash -c "python manage.py collectstatic --no-input && gunicorn --workers=8 --worker-class gevent --timeout 120 Site.wsgi -b 0.0.0.0:8080"
    volumes:
      - ./:/src
      - ./static:/static
      - ./media:/media
    expose:
      - "8080"
    ports:
      - "8080:8080"
I have also limited the wsl2 memory consumption to 8GB with
%UserProfile%\.wslconfig
[wsl2]
memory=8GB
This seems to work for capping consumption on the host machine, but Vmmem still grows to a max of 8,000 MB after each rebuild. I've heard that Linux likes to hold onto its RAM and only release it when needed, so could that be what I'm seeing here?
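
As a hedged sketch (behaviour varies between Docker Desktop and WSL2 versions), these commands often reclaim the memory that repeated --build runs leave behind, without rebooting the machine:

# drop dangling images and build cache accumulated by repeated "docker-compose up -d --build"
docker image prune -f
docker builder prune -f

# stop the WSL2 VM so the Vmmem process hands its memory back to Windows;
# Docker Desktop restarts the VM the next time it is needed
wsl --shutdown

Note that wsl --shutdown stops every WSL distribution, so anything else you have running under WSL2 will be stopped as well.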

Docker keeps saying "No space left on device" when there's space on the device

I have two volumes attached to my EC2 instance: /dev/sda1, the root volume, which is 8 GB, and /dev/sdb, which is 500 GB. I can see both volumes when I run sudo fdisk -l. I have a Django server running in a Docker container on this EC2 instance, and when I upload some data to the server, Docker reports "I/O error, no space left on device". How can I fix this problem?
EDIT
Following is my docker-compose.yml
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
  cvat_db:
    container_name: cvat_db
    image: postgres:10-alpine
    networks:
      default:
        aliases:
          - db
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_DB: cvat
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - cvat_db:/var/lib/postgresql/data
  cvat_redis:
    container_name: cvat_redis
    image: redis:4.0-alpine
    networks:
      default:
        aliases:
          - redis
    restart: always
  cvat:
    container_name: cvat
    image: cvat
    restart: always
    depends_on:
      - cvat_redis
      - cvat_db
    build:
      context: .
      args:
        http_proxy:
        https_proxy:
        no_proxy:
        socks_proxy:
        TF_ANNOTATION: "no"
        AUTO_SEGMENTATION: "no"
        USER: "django"
        DJANGO_CONFIGURATION: "production"
        TZ: "Etc/UTC"
        OPENVINO_TOOLKIT: "no"
    environment:
      DJANGO_MODWSGI_EXTRA_ARGS: ""
      ALLOWED_HOSTS: '*'
    volumes:
      - cvat_data:/home/django/data
      - cvat_keys:/home/django/keys
      - cvat_logs:/home/django/logs
      - cvat_models:/home/django/models
  cvat_ui:
    container_name: cvat_ui
    restart: always
    build:
      context: .
      args:
        http_proxy:
        https_proxy:
        no_proxy:
        socks_proxy:
      dockerfile: Dockerfile.ui
    networks:
      default:
        aliases:
          - ui
    depends_on:
      - cvat
  cvat_proxy:
    container_name: cvat_proxy
    image: nginx:stable-alpine
    restart: always
    depends_on:
      - cvat
      - cvat_ui
    environment:
      CVAT_HOST: ""
      ALLOWED_HOSTS: "*"
    ports:
      - "8080:80"
    volumes:
      - ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
    command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

volumes:
  cvat_db:
  cvat_data:
  cvat_keys:
  cvat_logs:
  cvat_models:
There are several possibilities. Check out this article and see if it helps: https://www.maketecheasier.com/fix-linux-no-space-left-on-device-error/
It says things like
Deleted File Reserved by Process
Occasionally, a file will be deleted, but a process is still using it.
Linux won’t release the storage associated with the file while the
process is still running. You just need to find the process and
restart it.
Not Enough Inodes
There is a set of metadata on filesystems called “inodes.” Inodes
track information about files. A lot of filesystems have a fixed
amount of inodes, so it’s very possible to fill the max allocation of
inodes without filling the filesystem itself. You can use df to check.
sudo df -i /
Compare the inodes used with the total inodes. If there’s no more
available, unfortunately, you can’t get more. Delete some useless or
out-of-date files to clear up inodes.
Bad Blocks
The last common problem is bad filesystem blocks. Filesystems get
corrupt and hard drives die. Your operating system will most likely
see those blocks as usable unless they’re otherwise marked. The best
way to find and mark those blocks is by using fsck with the -cc flag.
Remember that you can’t use fsck from the same filesystem that you’re
testing.
You might also want to post the question in a Linux exchange like https://unix.stackexchange.com/ or https://serverfault.com/
I think it's not related to your machine; it may be related to your container's volume size limits, since those can be restricted. Can you inspect your container using docker container inspect <container-Id> and check the volumes associated with it? You can also inspect each volume by its ID using the docker volume inspect <volume-Id> command.
After some research I'm more confident that this might be the issue; check this thread, it's somewhat related: https://github.com/moby/moby/issues/5151
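One more avenue worth checking, since the question mentions a 500 GB /dev/sdb but not that it is mounted anywhere: by default Docker keeps all images, containers and volumes under /var/lib/docker on the root filesystem, so the 8 GB root volume can fill up while the big one sits idle. A hedged sketch of how the spare volume could be put to use (this assumes /dev/sdb is empty; mkfs wipes it):

# see how much space Docker itself is using, and check the root filesystem and its inodes
docker system df
df -h /
df -i /

# format and mount the spare 500 GB volume (only if it holds no data!)
sudo mkfs -t xfs /dev/sdb
sudo mkdir -p /data/docker
sudo mount /dev/sdb /data/docker

# tell the Docker daemon to keep its data on the new mount, then restart it
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

Existing images and volumes are not moved automatically; they would need to be copied from /var/lib/docker to the new location (or re-pulled/recreated) before restarting the daemon, and the mount should be added to /etc/fstab so it survives reboots.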

Building Celery container in Django app

I want to learn how to set up a Django app with Celery. I am using the following resource.
I don't understand why we have to build the same image twice for both web and worker in this example. For example, see:
# Django web server
web:
  build:
    context: .
    dockerfile: Dockerfile
  hostname: web
  command: ./run_web.sh
  volumes:
    - .:/app # mount current directory inside container
  ports:
    - "8000:8000"
  # set up links so that web knows about db, rabbit and redis
  links:
    - db
    - rabbit
    - redis
  depends_on:
    - db

# Celery worker
worker:
  build:
    context: .
    dockerfile: Dockerfile
  command: ./run_celery.sh
  volumes:
    - .:/app
  links:
    - db
    - rabbit
    - redis
  depends_on:
    - rabbit
Does that mean that this docker-compose.yml will create two containers that are duplicates of each other? It seems a little excessive if all I need for worker is a Celery worker (why set up Django twice?). Maybe I am misunderstanding things. It feels like one option is to just replace the command in web with ./run_web.sh; ./run_celery.sh and set up the proper links; that way you could remove worker altogether.
Can someone enlighten me? Thanks for reading.
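
For what it's worth, Compose does create two containers here (one per service), but the second build of the identical context is normally served from the build cache, so the image layers end up shared rather than duplicated. A sketch of how the reuse can be made explicit, building once and referencing the image by name from the worker (the myproject_app tag is illustrative):

web:
  build:
    context: .
    dockerfile: Dockerfile
  image: myproject_app   # tag the image built from this context
  command: ./run_web.sh
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  depends_on:
    - db
worker:
  image: myproject_app   # reuse the image the web service built
  command: ./run_celery.sh
  volumes:
    - .:/app
  depends_on:
    - rabbit

Collapsing both into one container with ./run_web.sh; ./run_celery.sh does work, but you give up scaling and restarting the web server and the worker independently, which is the usual reason for keeping them as separate services.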

Selenium Grid Setup using Docker Compose on AWS ECS

Context :
I am trying to set up a Selenium grid to run my UI tests on CI. The CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using the docker-compose file and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container and using a bridge network I have added the container to the same network as the grid.
The docker compose looks something like this :
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
Running unset DISPLAY helped me to solve the problem.
This helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
Can you please try adding this to the environment section:
- no_proxy=""

Docker, Django and Selenium - Selenium unable to connect

I have Docker configured to run Postgres and Django using docker-compose.yml and it is working fine.
The trouble I am having is with Selenium not being able to connect to the Django liveserver.
Now it makes sense (to me at least) that django has to access selenium to control the browser and selenium has to access django to access the server.
I have tried using the docker 'ambassador' pattern using the following configuration for docker-compose.yml from here: https://github.com/docker/compose/issues/666
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
django-ambassador:
  container_name: django-ambassador
  image: cpuguy83/docker-grand-ambassador
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
  command: "-name django -name selenium"
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  links:
    - postgis
    - "django-ambassador:selenium"
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  links:
    - "django-ambassador:django"
When I check http://DOCKER-MACHINE-IP:4444/wd/hub/static/resource/hub.html
I can see that firefox starts, but all the tests fail as firefox is unable to connect to django
'Firefox can't establish a connection to the server at localhost:8081'
I also tried this solution here https://github.com/docker/compose/issues/1991
However, this is not working because I can't get django to connect to postgis and selenium at the same time:
'django.db.utils.OperationalError: could not translate host name "postgis" to address: Name or service not known'
I tried using the networking feature as listed below
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
  container_name: postgis
  net: appnet
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test foo
  container_name: django
  volumes:
    - .:/app
  ports:
    - "8000:8000"
    - "8081:8081"
  net: appnet
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
selenium:
  container_name: selenium
  image: selenium/standalone-firefox-debug
  ports:
    - "4444:4444"
    - "5900:5900"
  net: appnet
but the result is the same
'Firefox can't establish a connection to the server at localhost:8081'
So how can I get selenium to connect to django?
I have been playing around with this for days - would really appreciate any help.
More Info
Another weird thing is that when the test server runs outside Docker (using my old virtualenv setup), I can run ./manage.py test foo and access the server from any browser at http://localhost:8081 and get served web pages, but I can't reach the test server when I run the equivalent command under Docker. This is weird because I am mapping port 8081:8081 - is this related?
Note: I am using OSX and Docker v1.9.1
I ended up coming up with a better solution that didn't require me to hardcode the IP Address. Below is the configuration I used to run tests in django with docker.
Docker-compose file
# docker-compose base file for everything
version: '2'
services:
  postgis:
    build:
      context: .
      dockerfile: ./docker/postgis/Dockerfile
    container_name: postgis
    volumes:
      # If you are using boot2docker, postgres data has to live in the VM for now until #581 fixed
      # for more info see here: https://github.com/boot2docker/boot2docker/issues/581
      - /data/dev/docker_cookiecutter/postgres:/var/lib/postgresql/data
  django:
    build:
      context: .
      dockerfile: ./docker/django/Dockerfile
    container_name: django
    volumes:
      - .:/app
    depends_on:
      - selenium
      - postgis
    environment:
      - SITE_DOMAIN=django
      - DJANGO_SETTINGS_MODULE=settings.my_dev_settings
    links:
      - postgis
      - mailcatcher
  selenium:
    container_name: selenium
    image: selenium/standalone-firefox-debug:2.52.0
    ports:
      - "4444:4444"
      - "5900:5900"
Dockerfile (for Django)
ENTRYPOINT ["/docker/django/entrypoint.sh"]
In the entrypoint file
#!/bin/bash
set -e
# Now we need to get the ip address of this container so we can supply it as an environment
# variable for django so that selenium knows what url the test server is on
# Use below or alternatively you could have used
# something like "$@ --liveserver=$THIS_DOCKER_CONTAINER_TEST_SERVER"
if [[ "$*" == *"manage.py test"* ]] # only add if 'manage.py test' is in the args
then
    # get the container id
    THIS_CONTAINER_ID_LONG=`cat /proc/self/cgroup | grep 'docker' | sed 's/^.*\///' | tail -n1`
    # take the first 12 characters - that is the format used in /etc/hosts
    THIS_CONTAINER_ID_SHORT=${THIS_CONTAINER_ID_LONG:0:12}
    # search /etc/hosts for the line with the ip address which will look like this:
    #   172.18.0.4    8886629d38e6
    THIS_DOCKER_CONTAINER_IP_LINE=`cat /etc/hosts | grep $THIS_CONTAINER_ID_SHORT`
    # take the ip address from this
    THIS_DOCKER_CONTAINER_IP=`(echo $THIS_DOCKER_CONTAINER_IP_LINE | grep -o '[0-9]\+[.][0-9]\+[.][0-9]\+[.][0-9]\+')`
    # add the port you want on the end
    # Issues here include: django changing port if in use (I think)
    # and parallel tests needing multiple ports etc.
    THIS_DOCKER_CONTAINER_TEST_SERVER="$THIS_DOCKER_CONTAINER_IP:8081"
    echo "this docker container test server = $THIS_DOCKER_CONTAINER_TEST_SERVER"
    export DJANGO_LIVE_TEST_SERVER_ADDRESS=$THIS_DOCKER_CONTAINER_TEST_SERVER
fi
eval "$@"
In your django settings file
SITE_DOMAIN = 'django'
Then to run your tests
docker-compose run django ./manage.py test
Whenever you see localhost, try first to port-forward that port (at the VM level)
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8081,tcp,,8081,,8081"
VBoxManage controlvm "default" natpf1 "udp-port8081,udp,,8081,,8081"
(Replace default with the name of your docker-machine: see docker-machine ls)
This differs for port mapping at the docker host level (which is your boot2docker-based Linux host)
The OP luke-aus confirms in the comments:
entering the IP address for the network solved the problem!
I've been struggling with this as well, and I finally found a solution that worked for me. You can try something like this:
postgis:
  dockerfile: ./docker/postgis/Dockerfile
  build: .
django:
  dockerfile: ./docker/Dockerfile-dev
  build: .
  command: python /app/project/manage.py test my-app
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - postgis
    - selenium # django can access selenium:4444, selenium can access django:8081-8100
  environment:
    - SELENIUM_HOST=http://selenium:4444/wd/hub
    - DJANGO_LIVE_TEST_SERVER_ADDRESS=django:8081-8100 # this gives selenium the correct address
selenium:
  image: selenium/standalone-firefox-debug
  ports:
    - "5900:5900"
I don't think you need to include port 4444 in the selenium config. That port is exposed by default, and there's no need to map it to the host machine, since the django container can access it directly via its link to the selenium container.
[Edit] I've found you don't need to explicitly expose the 8081 port of the django container either. Also, I used a range of ports for the test server, because if tests are run in parallel, you can get an "Address already in use" error, as discussed here.