Docker on Windows: Vmmem grows with each rebuild of container - Django

I have a Django + Gunicorn + Nginx website running in a Docker container on Windows 10 that's been working wonderfully.
Every time that I update my source code I run the following command to rebuild the container:
docker-compose up -d --build [servicename]
This runs as expected: it builds the container detached and then swaps it out when complete. Nginx stays running, loses the connection for a second, and then boom, it's back up and running.
I'll do this maybe 1-5 times a week, depending on what I'm adding/fixing and pushing to production.
The issue I just came across is that the Vmmem process grows by roughly the same amount with each rebuild: when I run docker-compose up for the first time, memory usage is about 3,000 MB; after my first rebuild it grows to 6,000 MB, and on and on...
Since the system I'm running on has 32 GB of RAM, I didn't notice the buildup until a few months in.
Is this normal behavior for Docker? And if so, is there anything I can do to alleviate the buildup other than restarting the computer?
- Restarting Docker does not solve the issue.
- Pruning does not solve the issue.
Edit:
Here is the docker-compose.yml I'm currently using:
version: '3.8'
services:
  nginx:
    restart: always
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./config:/etc/nginx/conf.d
      - ./static:/static
      - ./media:/media
  site:
    restart: always
    build: ./
    container_name: site
    command: bash -c "python manage.py collectstatic --no-input && gunicorn --workers=8 --worker-class gevent --timeout 120 Site.wsgi -b 0.0.0.0:8080"
    volumes:
      - ./:/src
      - ./static:/static
      - ./media:/media
    expose:
      - "8080"
    ports:
      - "8080:8080"
I have also limited WSL 2's memory consumption to 8 GB via %UserProfile%\.wslconfig:
[wsl2]
memory=8GB
This seems to work for limiting consumption on the host machine, but Vmmem still grows to a maximum of 8,000 MB after each rebuild. I've heard that Linux likes to hold onto its RAM and release it only when needed, so could that be what I'm seeing here?
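For anyone hitting the same behavior, the remedies most commonly suggested are clearing the build cache that repeated --build runs accumulate and shutting down the WSL 2 VM itself (which is different from restarting Docker Desktop). A minimal sketch, run from PowerShell/CMD on the host:

# Reclaim dangling image layers and build cache left behind by repeated --build rebuilds
docker image prune -f
docker builder prune -f

# Stop the WSL 2 VM so Vmmem releases its memory back to Windows;
# Docker Desktop restarts its backend the next time it is used
wsl --shutdown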

Related

Docker-compose - Cannot connect to Postgres

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
I've tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP address with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I'm writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure docker perspective: I was right.
SOLUTION: My only suspect was related to the execution inside a Virtual Machine, so executing the same docker image on the host worked like a charm!
The networking issue was related to the VM (VirtualBox Ubuntu 20.04)
I do not know if there is a way to work with docker-compose inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
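A quick way to see the two networks side by side (the backend_ prefix assumes the Compose project directory is named backend, matching the backend_db_1 container above), and to run a one-off container that can actually resolve db:

# List the networks: the default bridge plus the Compose-created backend_default
docker network ls

# The Compose containers and their IP addresses live on this network
docker network inspect backend_default

# Attach a manually started container to the same network so "db" resolves
docker run -it --network backend_default --entrypoint /bin/sh backend_web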
There's no reason to start a container with an interactive shell and then start your application within it (any more than you would normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
  ./manage.py migrate

# For debugging: run an interactive shell
docker-compose run web bash

Django, DRF, nginx, JMeter: sample time becomes large even though there is no load on the CPU, etc.

I am using JMeter to load test the DRF app, and even though CPU and memory are not at 100%, throughput is low and response times are slow.
The Django + nginx and Postgres servers are separate, and both have the following specs: 4 CPUs, 4 GB memory.
For nginx I am using the https-portal Docker image, as shown below.
version: "3"
services:
https-portal:
image: steveltn/https-portal:1
ports:
- "80:80"
- "443:443"
environment:
DOMAINS: "my.domain.com -> http://backend:8000"
STAGE: "production"
volumes:
- https-portal-data:/var/lib/https-portal
- ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
- ./static:/static
depends_on:
- backend
restart: always
backend:
build: .
command: uwsgi --http :8000 --module myapp.wsgi --processes 4
volumes:
- .:/usr/src/app
- ./static:/usr/src/app/static
expose:
- 8000
env_file:
- .env
- .env.prod
restart: always
volumes:
https-portal-data:
Looking at the Django logs, there doesn't seem to be anything wrong with Django. What do you think could be causing this?
generated 8302 bytes in 29 msecs (HTTP/1.0 200) 7 headers in 208 bytes (1 switches on core 0)
Screenshots (not reproduced here) showed the JMeter setup, the Django + nginx server stats, the DB (Postgres) server stats, and the state after 5 minutes of continuous loading.
If there is any other information you need, please let me know.
Check your application performance with a profiler tool.
Depending on the outcome, you may also want to check PostgreSQL query logs and stats.
Don't run JMeter in GUI mode; you might be getting misleading results because JMeter cannot send requests fast enough and/or gets stuck doing garbage collection (see the command-line sketch after this list).
Follow other JMeter best practices and make sure the JMeter machine doesn't lack CPU, RAM, etc.
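For reference, a minimal sketch of running the same test plan in non-GUI mode (the .jmx and output names are illustrative):

# -n = non-GUI, -t = test plan, -l = results log, -e -o = generate an HTML report
jmeter -n -t load_test.jmx -l results.jtl -e -o report/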

Docker keeps saying "No space left on device" when there's space in device

I have two volumes attached to my EC2 instance. One is /dev/sda1, the root volume, which is 8 GB, while the other, /dev/sdb, is 500 GB. I can see both volumes when I run sudo fdisk -l. I have a Django server running in a Docker container on this EC2 instance, and when I upload some data to the server, Docker reports "I/O error, no space left on device". How can I fix this problem?
EDIT
Following is my docker-compose.yml:
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"
services:
cvat_db:
container_name: cvat_db
image: postgres:10-alpine
networks:
default:
aliases:
- db
restart: always
environment:
POSTGRES_USER: root
POSTGRES_DB: cvat
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- cvat_db:/var/lib/postgresql/data
cvat_redis:
container_name: cvat_redis
image: redis:4.0-alpine
networks:
default:
aliases:
- redis
restart: always
cvat:
container_name: cvat
image: cvat
restart: always
depends_on:
- cvat_redis
- cvat_db
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
TF_ANNOTATION: "no"
AUTO_SEGMENTATION: "no"
USER: "django"
DJANGO_CONFIGURATION: "production"
TZ: "Etc/UTC"
OPENVINO_TOOLKIT: "no"
environment:
DJANGO_MODWSGI_EXTRA_ARGS: ""
ALLOWED_HOSTS: '*'
volumes:
- cvat_data:/home/django/data
- cvat_keys:/home/django/keys
- cvat_logs:/home/django/logs
- cvat_models:/home/django/models
cvat_ui:
container_name: cvat_ui
restart: always
build:
context: .
args:
http_proxy:
https_proxy:
no_proxy:
socks_proxy:
dockerfile: Dockerfile.ui
networks:
default:
aliases:
- ui
depends_on:
- cvat
cvat_proxy:
container_name: cvat_proxy
image: nginx:stable-alpine
restart: always
depends_on:
- cvat
- cvat_ui
environment:
CVAT_HOST: ""
ALLOWED_HOSTS: "*"
ports:
- "8080:80"
volumes:
- ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
- ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
volumes:
cvat_db:
cvat_data:
cvat_keys:
cvat_logs:
cvat_models:
There are several possibilities. Check out this article and see if it helps: https://www.maketecheasier.com/fix-linux-no-space-left-on-device-error/
It says things like
Deleted File Reserved by Process
Occasionally, a file will be deleted, but a process is still using it.
Linux won’t release the storage associated with the file while the
process is still running. You just need to find the process and
restart it.
Not Enough Inodes
There is a set of metadata on filesystems called “inodes.” Inodes
track information about files. A lot of filesystems have a fixed
amount of inodes, so it’s very possible to fill the max allocation of
inodes without filling the filesystem itself. You can use df to check.
sudo df -i /
Compare the inodes used with the total inodes. If there’s no more
available, unfortunately, you can’t get more. Delete some useless or
out-of-date files to clear up inodes.
Bad Blocks
The last common problem is bad filesystem blocks. Filesystems get
corrupt and hard drives die. Your operating system will most likely
see those blocks as usable unless they’re otherwise marked. The best
way to find and mark those blocks is by using fsck with the -cc flag.
Remember that you can’t use fsck from the same filesystem that you’re
testing.
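For the "deleted file reserved by a process" case above, a quick illustrative check is:

# List open files whose on-disk entry has been deleted (link count < 1);
# the SIZE column shows how much space each one is still holding
sudo lsof +L1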
You might also want to post the question in a Linux exchange like https://unix.stackexchange.com/ or https://serverfault.com/
I think it's not related to your machine; it may be related to a container or volume size limit, as you can limit those. Can you inspect your container using docker container inspect <container-Id> and check the volumes associated with it? You can also inspect each volume by its ID using the docker volume inspect <volume-Id> command.
After some research I'm more confident that this might be the issue; check this thread, it's somewhat related: https://github.com/moby/moby/issues/5151
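Given that the root volume is only 8 GB, another thing worth checking is whether Docker is even using the 500 GB volume; by default everything lives under /var/lib/docker on the root filesystem. A hedged sketch (the /data mount point is illustrative; adjust to wherever /dev/sdb is actually formatted and mounted):

# Where is Docker storing its data, and how much is it using?
docker info --format '{{ .DockerRootDir }}'
docker system df

# Is the 500 GB volume mounted anywhere at all?
lsblk

# If it isn't being used, one option is to move Docker's data root onto it
sudo systemctl stop docker
sudo rsync -a /var/lib/docker/ /data/docker/
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json   # overwrites any existing daemon.json
sudo systemctl start docker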

Django Docker/Kubernetes Postgres data not appearing

I just tried switching from docker-compose to Docker stacks/Kubernetes. In Compose I was able to specify where the Postgres data volume was, and the data persisted nicely.
volumes:
  - ./postgres-data:/var/lib/postgresql/data
I tried doing the same thing with the stack file. I can connect to the pod and use psql to see the schema, but none of the data entered via docker-compose is there.
Any ideas why this might be?
Here's the stack.yml
version: '3.3'
services:
  django:
    image: image
    build:
      context: .
      dockerfile: docker/Dockerfile
    deploy:
      replicas: 5
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.local
      - SECRET_KEY=password
      - NAME=postgres
      - USER=postgres
      - HOST=db
      - PASSWORD=password
      - PORT=5432
    volumes:
      - .:/application
    command: ["gunicorn", "--bind 0.0.0.0:8000", "config.wsgi"]
    ports:
      - "8000:8000"
    links:
      - db
  db:
    image: mdillon/postgis:9.6-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
You didn't mention how your cluster is provisioned, where it is running, etc., so I will assume we're talking about local tests here. If so, you probably have local docker/docker-compose and minikube installed.
If that is the case, please mind that minikube runs in its own VM, so it will not be affected by changes you make on your host with e.g. docker, as it has its own filesystem in the VM.
Hint: you can run docker against minikube's Docker daemon if you first run eval $(minikube docker-env).
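A minimal sketch of what that hint looks like in practice (the image tag is made up):

# Point this shell's docker CLI at the Docker daemon inside the minikube VM
eval $(minikube docker-env)

# These now act on minikube's daemon, not the host's
docker ps
docker build -t myimage:dev .   # the image ends up inside minikube, visible to the cluster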
For Docker stacks, run the inspect command below; it should show the mount point of the Postgres container.
docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Mounts}} {{.Source}}{{end}}' <StackName>
Fixed in the last Docker Edge update.

Ember + Docker slow during transpilation

I'm attempting to Dockerize my local development setup to make it much simpler to onboard new developers. Part of my setup is an Ember application. I've followed the instructions at this repository, but am running into huge delays when the Ember app starts up. It gets to the point where it says Serving on http://localhost:4200, and then there's a significant delay (on the order of tens of minutes) between that message and the output where Ember CLI displays how long everything took to compile. That said, the compilation time displayed is only a few minutes.
My docker-compose.yml file:
version: '2'
services:
  nginx:
    container_name: 'nginx'
    image: jwilder/nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
  frontend:
    container_name: 'frontend'
    env_file: .env
    depends_on:
      - nginx
      - api
    environment:
      - VIRTUAL_HOST=*.scout.dev
      - VIRTUAL_PORT=4200
    image: scoutforpets/ember-cli
    command: bash -c "npm i && GIT_DIR=/tmp bower i --allow-root && ember s --watcher polling"
    volumes:
      - ./app-business/:/app/
      - ./app-business/ssl/:/etc/nginx/certs/
    ports:
      - "4200:4200"   # Default Port
      - "49152:49152" # Live Reload Port
  api:
    container_name: 'api'
    env_file: .env
    command: bash -c "npm i -s && npm run start-debug"
    image: node:6.3.1
    depends_on:
      - postgres
      - redis
    ports:
      - "3001:3001" # Default Port
      - "9229:9229" # Debug Port
    working_dir: /app/
    volumes:
      - ./api/:/app/
  postgres:
    container_name: 'postgres'
    image: scoutforpets/postgres
    ports:
      - "5432:5432"
  redis:
    container_name: 'redis'
    image: redis
    ports:
      - "6379:6379"
Note that my project is mounted from the host file system (I'm running OS X Yosemite). I've heard some talk about mounted file systems being slow, but am having trouble finding a definitive answer.
If someone is successfully using Ember + Docker, I'd love to hear what you're doing!
Thanks!
If you're using Docker for Mac, there is a known issue with the ember build command being slow.
Docs: https://docs.docker.com/docker-for-mac/troubleshoot/#/known-issues
There are a number of issues with the performance of directories bind-mounted with osxfs. In particular, writes of small blocks, and traversals of large directories are currently slow. Additionally, containers that perform large numbers of directory operations, such as repeated scans of large directory trees, may suffer from poor performance. Applications that behave in this way include:
rake
ember build
Symfony
Magento
As a work-around for this behavior, you can put vendor or third-party library directories in Docker volumes, perform temporary file system operations outside of osxfs mounts, and use third-party tools like Unison or rsync to synchronize between container directories and bind-mounted directories. We are actively working on osxfs performance using a number of different techniques and we look forward to sharing improvements with you soon.
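As a rough illustration of the first workaround (putting dependency directories in Docker volumes), the frontend service's node_modules could be masked with a named volume so npm writes stay inside the Linux VM instead of going through osxfs; the volume name below is made up:

# Create a named volume and mount it over the bind-mounted vendor directory
docker volume create frontend_node_modules
docker run --rm \
  -v "$PWD/app-business:/app" \
  -v frontend_node_modules:/app/node_modules \
  scoutforpets/ember-cli \
  bash -c "npm i && ember build"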