How do I expand volume size on docker image - amazon-web-services

The default size of the filesystem /dev/mapper/docker-XXX is 10GB. I followed other instructions to edit /etc/sysconfig/docker-storage and add --storage-opt dm.basesize=50G. Next I run:
sudo service docker restart
sudo service ecs restart
I can see
# ps -ef | grep docker | grep stor
root 5966 1 0 21:45 pts/0 00:00:01 /usr/bin/dockerd --default-ulimit nofile=1024:4096 --storage-driver devicemapper --storage-opt dm.basesize=50G --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true --storage-opt dm.fs=ext4
So it looks like it took effect; however, when I look inside the running Docker container it is still 10GB:
# docker exec -it 601f6a9e9418 bash
root@601f6a9e9418:/# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/docker-202:1-263443-880571d796b21f307753d4f4ecca2141b85119985fac00001ea2622ce643b45f 10190136 7295128 2354336 76% /
Any help is greatly appreciated.

Try this:
Link: How to increase Docker container default size?
(optional) If you have already downloaded any images via docker pull, you need to remove them first; otherwise they won't be resized:
docker rmi your_image_name
Edit the storage config
vi /etc/sysconfig/docker-storage
There should be something like DOCKER_STORAGE_OPTIONS="...", change it to DOCKER_STORAGE_OPTIONS="... --storage-opt dm.basesize=100G"
Restart the Docker daemon
service docker restart
Pull the image
docker pull your_image_name
(optional) verification
docker run -i -t your_image_name /bin/bash
df -h
I was struggling with this a lot until I found this link: http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/
It turns out you have to remove and re-pull the image after enlarging the basesize.
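As a quick sanity check before re-pulling, the new base size should be visible in docker info when the devicemapper storage driver is in use. A minimal sketch, assuming the dm.basesize=50G setting from the question and a placeholder image name:
# confirm the daemon picked up the new base device size (devicemapper only)
docker info | grep "Base Device Size"
# remove the old image and pull it again so new containers get the larger filesystem
docker rmi your_image_name
docker pull your_image_name
docker run --rm -it your_image_name df -h /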

Related

Shell script stops when calling SSH

I am attempting to automate a few things on AWS with one script.
log in and shut down docker-compose then remove all images
copy local files to server
log in and start docker-compose
My script is
#log in and shut down docker-compose then remove all images
ssh -i "~/Documents/AWS-Keys/mykey.pem" ubuntu@XX.XXX.XX.XXX
docker-compose down
docker image prune -f
exit
#copy local files to server
scp -r -i "~/Documents/AWS-Keys/mykey.pem" ./ubuntu ubuntu@XX.XXX.XX.XXX:/home
#log in and start docker-compose
ssh -i "~/Documents/AWS-Keys/mykey.pem" ubuntu@XX.XXX.XX.XXX
docker-compose up -d
exit
I have also tried logout instead of exit, same result.
Running
$ ./upload.sh
The output is:
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1038-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Tue Mar 2 21:52:40 UTC 2021
System load: 0.07
Usage of /: 66.0% of 7.69GB
Memory usage: 36%
Swap usage: 0%
Processes: 115
Users logged in: 1
IPv4 address for xxxxxxxxxxxxxxx: XXX.XX.X.X
IPv4 address for docker0: XXX.XX.X.X
IPv4 address for eth0: XXX.XX.X.XXX
* Introducing self-healing high availability clusters in MicroK8s.
Simple, hardened, Kubernetes for production, from RaspberryPi to DC.
https://microk8s.io/high-availability
3 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable
Last login: Tue Mar 2 21:51:47 2021 from XXX.XX.X.XXX
ubuntu@ip-XXX.XX.X.XXX:~$
After getting some feedback I also tried
ssh -i "~/Documents/AWS-Keys/mykey.pem" ubuntu@XX.XXX.XX.XXX
docker-compose down;
docker image prune -f;
exit
Same result.
My understanding is that you want to run the commands on the server; in that case, just write them after ssh:
ssh -i "~/Documents/AWS-Keys/mykey.pem" ubuntu@XX.XXX.XX.XXX "docker-compose down; docker image prune -f"
A longer script can be sent via a heredoc:
ssh -i "~/Documents/AWS-Keys/mykey.pem" ubuntu@XX.XXX.XX.XXX <<COMMANDS
docker-compose down
docker image prune -f
COMMANDS
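Putting this together with the original script, a hedged sketch of the full upload.sh might look like the following (key path, user, and host are the placeholders from the question):
#!/bin/bash
KEY=~/Documents/AWS-Keys/mykey.pem
HOST=ubuntu@XX.XXX.XX.XXX

# log in, shut down docker-compose, then remove all images
ssh -i "$KEY" "$HOST" <<COMMANDS
docker-compose down
docker image prune -f
COMMANDS

# copy local files to the server
scp -r -i "$KEY" ./ubuntu "$HOST":/home

# log in and start docker-compose
ssh -i "$KEY" "$HOST" "docker-compose up -d"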

AWS DOCKER dm.basesize in /etc/sysconfig/docker doesn't work

I want to change dm.basesize for my containers to 20GB. These are the block devices on the instance:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
xvdg 202:96 0 8G 0 disk
I have a shell script:
#cloud-boothook
#!/bin/bash
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
I executed this script
I stopped the Docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker stop
Redirecting to /bin/systemctl stop docker.service
[ec2-user@ip-172-31-41-55 ~]$
Then I started the Docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[ec2-user@ip-172-31-41-55 ~]$
But the container size doesn't change.
This is the /etc/sysconfig/docker file:
# The max number of open files for the daemon itself, and all
# running containers. The default value of 1048576 mirrors the value
# used by the systemd service unit.
DAEMON_MAXFILES=1048576
# Additional startup options for the Docker daemon, for example:
# OPTIONS="--ip-forward=true --iptables=true"
# By default we limit the number of open files per container
OPTIONS="--default-ulimit nofile=1024:4096"
# How many seconds the sysvinit script waits for the pidfile to appear
# when starting the daemon.
DAEMON_PIDFILE_TIMEOUT=10
I read in the AWS documentation that I can execute scripts on the AWS instance when I launch it. I don't want to restart my AWS instance because I would lose my data.
Is there a way to update my container size without restarting the AWS instance?
I followed this tutorial:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
but in the AWS documentation I can't find an example of how to set a script to run when I launch the instance.
UPDATED
I configured the file
/etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
When I start docker, I get
Error starting daemon: error initializing graphdriver: /dev/xdf is not available for use with devicemapper
How can I configure the parameter
dm.directlvm_device=/dev/xdf
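As a hedged sketch of how one might narrow this down (the /dev/xvdf name is an assumption based on the lsblk output above, where the attached volumes appear as xvdf and xvdg rather than xdf):
# list block devices and confirm the exact name of the spare volume
lsblk
# dm.directlvm_device expects an empty, unmounted block device; check what is on it
sudo file -s /dev/xvdf
# after correcting daemon.json, restart the daemon and inspect the storage driver
sudo systemctl restart docker
docker info | grep -A 5 "Storage Driver"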

How to create a new docker image from a running container on Amazon?

Here is my problem:
I have a task running a Docker image on Amazon ECS but I would like to make a new Docker image from the running instance of the container.
I can see the ID of the instance on Amazon ECS. I have made an AMI, but I would like to make a new Docker image that I can pull from Amazon.
Any ideas?
Regards and thanks.
To create an image from a container, execute the command below:
docker commit container_id imagename
You can run docker commit (docs) to save the container to an image, then push that image with a new tag to the registry.
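For the Amazon side specifically, a hedged sketch of committing and pushing to an ECR repository might look like this; the container ID, repository name, account ID, and region are placeholders, not values from the question:
# commit the running container to a local image
docker commit <container_id> myapp:snapshot
# log in to ECR (assumes the AWS CLI v2 is installed and configured)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# tag and push so the image can be pulled from Amazon
docker tag myapp:snapshot 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:snapshot
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:snapshot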
This can be easily done by using "docker commit".
Let's say you need an image, based on the latest from NGINX, with PHP, build-essential, and nano installed. I'll walk you through the process of pulling the image, running the container, accessing the container, adding the software, and committing the changes to a new image that can then be easily used as a base for your dev containers.
Pulling the image and running the container:
sudo docker pull nginx
sudo docker run -it --name nginx-template-base -p 8080:80 nginx
Modifying the container (refresh the package lists first so the installs can resolve):
apt-get update
apt-get install nano
apt-get install php5
Commit the changes:
sudo docker commit CONTAINER_ID nginx-template
The newly created template is ready, and you can run it using:
sudo docker run -it --name nginx-dev -p 8080:80 nginx-template
Apart from the answer provided by @Ben Whaley, I would personally suggest making use of the Docker APIs. To use the Docker APIs you need to configure the Docker daemon port; the procedure is explained here: configuring docker daemon port
Let's run a container using a base Ubuntu image and create a folder inside the container:
# docker run -it ubuntu:14.04 /bin/bash
root@58246867493d:/#
root@58246867493d:/# cd /root
root@58246867493d:~# ls
root@58246867493d:~# mkdir TEST_DIR
root@58246867493d:~# exit
Status of the exited container:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
58246867493d ubuntu:14.04 "/bin/bash" 2 minutes ago Exited (127) 57 seconds ago hungry_turing
JSON file which is an input for committing a container:
# cat container_create.json
{
  "AttachStdin": true,
  "AttachStdout": true,
  "AttachStderr": true,
  "ExposedPorts": {
    "property1": {},
    "property2": {}
  },
  "Tty": true,
  "OpenStdin": true,
  "StdinOnce": true,
  "Cmd": null,
  "Image": "ubuntu:14.04",
  "Volumes": {
    "additionalProperties": {}
  },
  "Labels": {
    "property1": "string",
    "property2": "string"
  }
}
API to commit a container
# curl -X POST http://127.0.0.1:6000/commit?container=58246867493d\&repo=ubuntu\&tag=15.0 -d @container_create.json --header "Content-Type: application/json" | jq .
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 593 100 81 100 512 175 1106 --:--:-- --:--:-- --:--:-- 1108
{
"Id": "sha256:acac1f3733b2240b01e335642d2867585e5933b18de2264315f9b07814de113a"
}
The Id that is generated is the new image ID, which is built by committing the container.
Get the Docker images:
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 15.0 acac1f3733b2 10 seconds ago 188MB
ubuntu 14.04 132b7427a3b4 10 hours ago 188MB
Run the newly built image to see the changes committed in the previous container.
# docker run -it ubuntu:15.0 /bin/bash
root@3a48af5eaec9:/# cd /root/
root@3a48af5eaec9:~# ls
TEST_DIR
root@3a48af5eaec9:~# exit
To build an image from a Dockerfile, see how to build an image using the Docker API.
For more information on the Docker APIs, refer here.
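As a follow-up sketch only: the same remote API can build an image from a Dockerfile by POSTing a tar of the build context to the /build endpoint. Port 6000 is the daemon port configured earlier in this answer; the context archive name is a placeholder:
# package the Dockerfile and its build context
tar -czf context.tar.gz .
# ask the daemon to build and tag the image from the uploaded context
curl -X POST "http://127.0.0.1:6000/build?t=myimage:latest" \
     --data-binary @context.tar.gz \
     --header "Content-Type: application/x-tar"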

Connection refused by Docker container

I've struggled with this for quite some time.
I have a Django application and I'm trying to package it into containers.
The problem is that when I publish to a certain port (8001) the host refuses my connection.
$ docker-machine ip default
192.168.99.100
When I try to curl or reach by browser 192.168.99.100:8001, the connection is refused.
C:\Users\meow>curl 192.168.99.100:8001
curl: (7) Failed to connect to 192.168.99.100 port 8001: Connection refused
First remark: I'm using Docker Toolbox.
Let's start from the docker-compose.yml file.
version: '2'
services:
  db:
    build: ./MongoDocker
    image: ockidocky_mongo
  web:
    build: ./DjangoDocker
    image: ockidocky
    #volumes: .:/srv
    ports:
      - 8001:8000
    links:
      - db
Second remark: this file originally gave me some trouble with permissions when building from scratch. To fix this, I built the images separately.
docker build -t ockidocky .
docker build -t ockidocky_mongo .
Here's the dockerfile for Mongo:
# Based on this tutorial. https://devops.profitbricks.com/tutorials/creating-a-mongodb-docker-container-with-an-attached-storage-volume/
# Removed some sudo here and there because they are useless in Docker for Windows
# Set the base image to use to Ubuntu
FROM ubuntu:latest
# Set the file maintainer
MAINTAINER meow
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list && \
apt-get update && apt-get install -y mongodb-org
VOLUME ["/data/db"]
WORKDIR /data
EXPOSE 27017
#Edited with --smallfiles (Check this issue https://github.com/dockerfile/mongodb/issues/9)
CMD ["mongod", "--smallfiles"]
Dockerfile for Django is based on this other tutorial.
I won't include the code, but it works.
It's important to say that the last row is:
ENTRYPOINT ["/docker-entrypoint.sh"]
I changed the docker-entrypoint.sh to run without Gunicorn.
echo Start Apache server.
python manage.py runserver
At this point docker ps tells me that everything is up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddfdb20c2d7c ockidocky "/docker-entrypoint.s" 9 minutes ago Up 9 minutes 0.0.0.0:8001->8000/tcp ockidocky_web_1
2e2c2e7a5563 ockidocky_mongo "mongod --smallfiles" 2 hours ago Up 9 minutes 27017/tcp ockidocky_db_1
When I run docker inspect ockidocky, the ports section displays:
"Ports": {
  "8000/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "8001"
    }
  ]
},
Is this dependent on mounting volumes?
It is one of the things I really can't figure out, and it gives me a lot of errors with Docker Toolbox.
As far as I can see everything worked fine during the build, and as far as I know the refused connection shouldn't depend on that.
EDIT:
After connecting to the container and listing the processes with ps -aux, this is what I see:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 3.0 218232 31340 ? Ssl 20:15 0:01 python manage.p
root 9 13.1 4.9 360788 50132 ? Sl 20:15 0:26 /usr/bin/python
root 15 0.0 0.2 18024 2596 ? Ss 20:15 0:00 /bin/bash
root 20 0.1 0.3 18216 3336 ? Ss 20:17 0:00 /bin/bash
root 33 0.0 0.2 34424 2884 ? R+ 20:18 0:00 ps -aux
P.s. Feel free to suggest how I can make this project easier for myself.
I solved the issue. I don't know why, but I had to specify the address and port on this line of docker-entrypoint.sh:
python manage.py runserver 0.0.0.0:8000
Now docker logs ockidocky_web_1 shows the usual django output messages.
If someone could give a proper explanation, I would be happy to edit and upvote.
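For what it's worth, the likely explanation is that manage.py runserver binds to 127.0.0.1 by default, which is only reachable from inside the container itself; the published 8001->8000 mapping forwards host traffic to the container's network interface, so the server has to listen on 0.0.0.0. A minimal sketch of the entrypoint line, as in the fix above:
# docker-entrypoint.sh: listen on all interfaces so the published port has something to forward to
python manage.py runserver 0.0.0.0:8000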
I had the same problem, and additionally the ALLOWED_HOSTS in Django's settings.py needs to include the Docker machine IP.
It's mostly because of a failure in the service you are trying to run on your desired port (in your case, port 8001)!
If the networking checks are OK and you don't have any problem with your network, just check the service that is supposed to listen on your desired port. With high probability, your service did not load or run successfully.
How to check your service depends on what you are running, but most of the time docker logs YOUR_CONTAINER_ID can help you learn more about the reason for the failure.
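A short, hedged checklist of the commands implied above (the container ID and the Docker Toolbox IP are placeholders from this thread):
# is the container running, and is the port actually published?
docker ps
# did the service inside the container start cleanly?
docker logs YOUR_CONTAINER_ID
# does anything answer on the published port from the host?
curl -v 192.168.99.100:8001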

How to increase ulimit in Docker in Elastic Beanstalk?

I would like to increase ulimit in Docker in Elastic Beanstalk to run some apps.
I know that I need to increase ulimit of Docker host and restart docker service but cannot find a way to do it.
I wrote the following .ebextensions/01limits.config but still cannot increase the ulimit.
commands:
  01limits:
    command: echo -e "#commands\nroot soft nofile 65536\nroot hard nofile 65536\n* soft nofile 65536\n* hard nofile 65536" >> /etc/security/limits.conf
  02restartdocker:
    command: service docker restart
ADDED 2014-11-20 09:37 GMT
I also tried the following config file.
commands:
  01limits:
    command: echo -e "#commands\nroot soft nofile 65536\nroot hard nofile 65536\n* soft nofile 65536\n* hard nofile 65536" >> /etc/security/limits.conf
  02restartdocker:
    command: service docker stop && ulimit -a 65536 && service docker start
It successfully increased the ulimit but showed the following error in the management console:
[Instance: i-xxxxxxxx Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage1/AppDeployEnactHook/00flip.sh] command failed with error code 1: /opt/elasticbeanstalk/hooks/appdeploy/enact/00flip.sh Stopping nginx: [ OK ]
Starting nginx: [ OK ]
Stopping current app container: 1c**********... Error response from daemon: Cannot destroy container 1c**********: Driver devicemapper failed to remove root filesystem 1c**************************************************************: Device is Busy 2014/11/20 09:06:36 Error: failed to remove one or more containers.
I am not sure this config is suitable.
This is way late, but here's a (hopefully) working solution to your issue
initctl stop eb-docker && /sbin/service docker stop && ulimit -n 65536 && ulimit -c unlimited && export DMAP=$(df | grep /var/lib | awk '{print $1}') && if [[ $DMAP ]]; then umount $DMAP; fi && /sbin/service docker start && initctl start eb-docker
The explanation is as follows:
Stop docker container service
Stop docker service
Apply your ulimit settings (mine are different from yours)
Find offending devicemapper mount and unmount it (handles if none are mounted)
Start docker
Start docker container service
It's not an elegant solution, but such is the life of hacking around EB.
You might want to break it out into smaller components (as sketched below), but the general idea is there.
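A commented, step-by-step expansion of the one-liner above; same commands, just split out for readability:
#!/bin/bash
# stop the EB docker wrapper and the docker daemon
initctl stop eb-docker
/sbin/service docker stop

# apply the ulimit settings for this shell
ulimit -n 65536
ulimit -c unlimited

# find a leftover devicemapper mount under /var/lib and unmount it if present
export DMAP=$(df | grep /var/lib | awk '{print $1}')
if [[ $DMAP ]]; then umount $DMAP; fi

# bring the docker daemon and the EB wrapper back up
/sbin/service docker start
initctl start eb-docker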
I was running a version of apachectl that was trying to change the file limit, and it was failing and causing an error with the container.
But, eventually, I ran the ulimit command from inside the container:
root@22806b77a474:/home# ulimit
unlimited
It seems apachectl was trying to raise a limit that isn't actually there in the Docker container. I set ULIMIT_MAX_FILES to something that didn't cause a problem:
RUN sed -i 's/ULIMIT_MAX_FILES="${APACHE_ULIMIT_MAX_FILES:-ulimit -n 8192}"/ULIMIT_MAX_FILES="ulimit -H -n"/' /usr/sbin/apachectl
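Relatedly, per-container limits can also be raised at run time with docker run's --ulimit flag instead of patching apachectl; this is a hedged aside, with the image name as a placeholder:
# start the container with a higher open-file limit and verify it from inside
docker run --rm -it --ulimit nofile=65536:65536 your_image_name bash -c 'ulimit -n'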