Docker Compose commands are not working from user data?

I'm trying to install Drupal on an AWS EC2 instance using Terraform. I created a script that installs Docker, downloads docker-compose.yml from an S3 location, and then runs docker-compose up -d. I call the script from the instance user data. Everything works on the new EC2 instance except that the Docker containers never start: the docker-compose file is downloaded to the instance, but no containers are running. If I then log in to the instance and run the command manually, both the Drupal and MySQL containers start and the Drupal website comes up, but the same command does not work from the script.
#! /bin/bash
# install Docker and let ec2-user run it without sudo
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
# install docker-compose into /usr/local/bin
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null
sudo chmod +x /usr/local/bin/docker-compose
# start Docker now and on every boot
sudo service docker start
sudo chkconfig docker on
# fetch the compose file and bring the containers up
aws s3 cp s3://xxxxxx/docker-compose.yml /home/ec2-user/
docker-compose up -d

I had the same issue. I solved it by symlinking docker-compose into /usr/bin, which is on the PATH that the user-data script sees:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
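A minimal sketch of the full user data with that fix applied (untested; the -o flag for curl and the explicit -f path are my additions, since user data runs with / as its working directory and would not otherwise find a compose file in /home/ec2-user):
#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# cloud-init does not reliably have /usr/local/bin on its PATH
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo service docker start
sudo chkconfig docker on
aws s3 cp s3://xxxxxx/docker-compose.yml /home/ec2-user/
# point at the compose file explicitly; the script does not run from /home/ec2-user
docker-compose -f /home/ec2-user/docker-compose.yml up -d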

Related

Adding ec2-user to docker group

I want to install Docker on my EC2 instance.
sudo yum install docker -y
I came to know that this command automatically creates a 'docker' group, which has root privileges by default, so I added my ec2-user to this group to execute Docker commands without 'sudo'.
sudo usermod -aG docker ec2-user
This should mean ec2-user now has root privileges. But if I want to start the Docker service, why do I have to use
sudo systemctl start docker
instead of simply
systemctl start docker
The second command gives me the following error:
Failed to start docker.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status docker.service' for details.
Please help!
Docker runs as a system service, so you must either use sudo or run the command as root. Being in the docker group only lets you talk to the Docker daemon without sudo; starting and stopping services still goes through systemd and polkit, which is why you see the PolicyKit error. Alternatively, you can run
sudo systemctl enable docker
and the Docker service will start automatically after every reboot.
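If your systemd version supports it, enabling and starting can be done in one step, and you can pick up the new docker group membership without logging out by opening a subshell under that group:
sudo systemctl enable --now docker
newgrp docker
docker info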

Setting up Docker on EC2 instance has permission issues, how do I either reinstall Docker or fix this?

Issue
When following the AWS guide for installing Docker (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html), I'm stuck on step 8, docker info. Permission should have been granted in step 6 so that ec2-user can run this without sudo, but it can't.
Error
$ docker info
-bash: /usr/bin/docker: Permission denied
Troubleshooting
I have restarted the instance, logged out and in, and stopped and started docker.
id ec2-user returns uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal),992(docker)
I've installed docker-compose and tried to change permissions in other ways:
sudo usermod -a -G sudo ec2-user
sudo setfacl -R -m user:ec2-user:rw /usr/bin/docker
Desired Behaviour
I'd like the permissions to be fixed, whether that means reinstalling Docker or just amending permissions.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
sudo usermod -aG docker $(whoami)
You will need to log out of the instance and back in as the same user to enable this change.
If you need to add a user to the docker group that you're not logged in as, declare that username explicitly:
sudo usermod -aG docker username
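That said, the error in this question is Permission denied on /usr/bin/docker itself, which points at the earlier setfacl call: user:ec2-user:rw grants read and write but drops execute for ec2-user. A possible repair, assuming the binary is otherwise intact, is to strip the ACL and restore standard permissions:
# drop all extended ACL entries from the binary
sudo setfacl -b /usr/bin/docker
# restore the usual mode for an executable
sudo chmod 755 /usr/bin/docker
After that, docker info should work without sudo once the docker group membership is in effect.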

Using same AWS EFS to share multiple directories

I have created an EFS file system and mounted it on an EC2 instance at /var/www/html/media. I would like to use the same EFS file system to mount another directory, /var/www/html/var.
Is that possible?
I would do:
fs-296e0282.efs.us-west-2.amazonaws.com:/media /var/www/html/media nfs4 defaults
fs-296e0282.efs.us-west-2.amazonaws.com:/var /var/www/html/var nfs4 defaults
But that does not seem to work.
It is possible to mount two different directories from the same file system. First, mount the EFS root on your instance:
sudo mount -t efs fs-id:/ /home/efs
Then create subdirectories under /home/efs, for example media and var.
Now you can mount those two directories at /var/www/html/media and /var/www/html/var by adding the lines below to /etc/fstab:
fs-id:/media /var/www/html/media efs defaults,_netdev 0 0
fs-id:/var /var/www/html/var efs defaults,_netdev 0 0
Then reboot your instance. Any change under /var/www/html/media is reflected in fs-id:/media, and the same applies to the var folder. Hope this helps.
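If you would rather not reboot, you can create the mount points and apply the new fstab entries immediately (assuming the fstab lines above are already in place):
sudo mkdir -p /var/www/html/media /var/www/html/var
sudo mount -a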
Say your EFS ID is fs-357f69c8 and you want to mount it on an EC2 machine at the following folders:
/efs
/var/www/html/data
/var/www/html/api/upload
/var/www/html/uploadetfiles
So, first create the folders:
sudo mkdir /efs
sudo mkdir /var/www/html/data
sudo mkdir /var/www/html/api/upload
sudo mkdir /var/www/html/uploadetfiles
The mount commands on EC2 will then be:
sudo mount -t efs -o tls fs-357f69c8:/ /efs
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/data
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/api/upload
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/uploadetfiles
NB: your machine must have efs-utils installed.
To build and install an RPM:
sudo yum -y install git rpm-build make
sudo git clone https://github.com/aws/efs-utils
cd efs-utils
sudo make rpm
sudo yum -y install build/amazon-efs-utils*rpm
To build and install a Debian package:
sudo apt-get update
sudo apt-get -y install git binutils
sudo git clone https://github.com/aws/efs-utils
cd efs-utils
sudo ./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb
Hope this works.
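Note that these mount commands do not survive a reboot. With efs-utils installed, the equivalent /etc/fstab entries look like this (a sketch reusing the same file-system ID; add one line per mount point):
fs-357f69c8:/ /efs efs _netdev,tls 0 0
fs-357f69c8:/ /var/www/html/data efs _netdev,tls 0 0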

How to install Minikube and Kubernetes on Amazon Linux EC2

Does anyone have the commands to install Minikube and Kubernetes on Amazon Linux? I have tried some Linux commands, but they are not working.
Thanks,
Baharul Islam
Kubernetes installation
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
Minikube installation
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube
To add the Minikube executable to your path:
sudo cp minikube /usr/local/bin && rm minikube
Keep in mind that Amazon Linux is based on RHEL/CentOS.
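On an EC2 instance there is usually no nested virtualization, so the VM-based Minikube drivers will not work; one option (a sketch, assuming Docker is installed and you run as root) is the none driver:
sudo minikube start --driver=none
On older Minikube releases the flag is spelled --vm-driver=none.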

CoreOS AWS userdata "docker run" on startup won't run

I'm trying to setup CoreOS on AWS to run specific commands on boot to download our DCOS cluster's info tarball and run scripts contained within it. These scripts help add the instance as an "agent" to our DC/OS cluster.
However, I can't seem to get the docker run commands to run. I can see that the user data creates tee's output file (which stays empty) and the /opt/dcos_install_tmp/ directory (which also stays empty).
The docker run commands here download an "awscli" container, fetch packages from S3 (using IAM instance profile credentials), and spit it out to the CoreOS file system.
Installing the AWS CLI on CoreOS didn't seem straightforward (there's no package manager and no Python), so I had to resort to this.
If I login to the instance and run the same commands by putting them in a script, I have absolutely no issues.
I check "journalctl --identifier=coreos-cloudinit" and found nothing to indicate issues. It just reports:
15:58:34 Parsing user-data as script
There is no "boot" log file for CoreOS in /var/log/ unlike in other AMIs.
I'm really stuck right now and would love some nudges in the right direction.
Here's my userdata (which I post as text during instance boot):
#!/bin/bash
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos-install.tar /root && /usr/bin/docker cp cli:/root/dcos-install.tar . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
sudo mkdir -p /opt/dcos_install_tmp
sudo tar xf dcos-install.tar -C /opt/dcos_install_tmp | tee -a /root/userdatalog.txt
sudo /bin/bash /opt/dcos_install_tmp/dcos_install.sh slave | tee -a /root/userdatalog.txt
Remove the -t flag from the docker run commands.
I had a similar problem: DigitalOcean: How to run Docker command on newly created Droplet via Java API
The problem ended up being the -t flag in the docker run command. It fails because there is no TTY attached when the command runs from a boot script. Remove the flag and it runs fine.
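Applied to the user data above, the first command would then look like this (the -i flag is also unnecessary in a non-interactive boot script, so -it can be dropped entirely):
/usr/bin/docker run --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt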