Using the same AWS EFS to share multiple directories

I have created an EFS file system and mounted it on an EC2 instance at /var/www/html/media. I would like to use the same EFS file system to mount another directory, /var/www/html/var.
Is that possible?
I would do:
fs-296e0282.efs.us-west-2.amazonaws.com:/media /var/www/html/media nfs4 defaults
fs-296e0282.efs.us-west-2.amazonaws.com:/var /var/www/html/var nfs4 defaults
But it doesn't seem to work.

It is possible to mount two different directories from the same file system. First, mount the EFS root on your instance:
sudo mount -t efs fs-id:/ /home/efs
Then create subdirectories under /home/efs, for example two subdirectories named media and var.
Now you can mount those two directories at /var/www/html/media and /var/www/html/var by adding the lines below to /etc/fstab:
fs-id:/media /var/www/html/media efs defaults,_netdev 0 0
fs-id:/var /var/www/html/var efs defaults,_netdev 0 0
Then reboot your instance. Any change in /var/www/html/media will be reflected in the fs-id:/media folder, and the same applies to the var folder. Hope this helps.
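The fstab pattern above generalizes to any number of subdirectories; a small loop can print the entries to append (the filesystem id and mount-point prefix here are hypothetical examples):

```shell
# Print /etc/fstab entries for several EFS subdirectories.
# fs_id and the /var/www/html prefix are illustrative stand-ins.
fs_id="fs-296e0282"
for dir in media var; do
  printf '%s:/%s /var/www/html/%s efs defaults,_netdev 0 0\n' "$fs_id" "$dir" "$dir"
done
```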

Suppose your EFS id is fs-357f69c8 and you want to mount it on an EC2 machine at the following folders:
/efs
/var/www/html/data
/var/www/html/api/upload
/var/www/html/uploadetfiles
So, first, create the folders:
sudo mkdir -p /efs
sudo mkdir -p /var/www/html/data
sudo mkdir -p /var/www/html/api/upload
sudo mkdir -p /var/www/html/uploadetfiles
Then mount them on the EC2 instance with:
sudo mount -t efs -o tls fs-357f69c8:/ /efs
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/data
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/api/upload
sudo mount -t efs -o tls fs-357f69c8:/ /var/www/html/uploadetfiles
Note that these commands all mount the EFS root, so all four folders will show the same contents; to give each folder its own contents, mount a subdirectory (for example fs-357f69c8:/data) instead.
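To make mounts like these survive a reboot, equivalent /etc/fstab entries could be added instead of the manual commands; a sketch using the same filesystem id (the tls option requires efs-utils):

```
fs-357f69c8:/ /efs efs defaults,_netdev,tls 0 0
fs-357f69c8:/ /var/www/html/data efs defaults,_netdev,tls 0 0
fs-357f69c8:/ /var/www/html/api/upload efs defaults,_netdev,tls 0 0
fs-357f69c8:/ /var/www/html/uploadetfiles efs defaults,_netdev,tls 0 0
```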
NB:
Your machine should have efs-utils installed
To build and install an RPM:
sudo yum -y install git rpm-build make
sudo git clone https://github.com/aws/efs-utils
cd efs-utils
sudo make rpm
sudo yum -y install build/amazon-efs-utils*rpm
To build and install a Debian package:
sudo apt-get update
sudo apt-get -y install git binutils
sudo git clone https://github.com/aws/efs-utils
cd efs-utils
sudo ./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb
Hope this works.

Related

Docker compose commands are not working from user data?

I'm trying to install Drupal on an AWS EC2 instance using Terraform. I created a script that installs Docker, copies docker-compose.yml from its S3 location, and then runs docker-compose up -d, and I call that script from the user data. Everything works on the new EC2 instance except that the Docker containers do not start: the docker-compose file is downloaded to the instance, but no containers are running. If I log in to the instance and run the command manually, both the Drupal and MySQL containers start and the Drupal website comes up, but the same command from the script does not work.
#! /bin/bash
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker ec2-user
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-`uname -s`-`uname -m` | sudo tee /usr/local/bin/docker-compose > /dev/null
sudo chmod +x /usr/local/bin/docker-compose
sudo service docker start
sudo chkconfig docker on
aws s3 cp s3://xxxxxx/docker-compose.yml /home/ec2-user/
docker-compose up -d
I had the same issue. Solved it with:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
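The symlink likely helps because user-data scripts run with a minimal PATH that includes /usr/bin but not /usr/local/bin, so docker-compose isn't found until it is linked into /usr/bin. A small illustration of the mechanism, using hypothetical stand-in paths under /tmp:

```shell
# A tool installed only under a /usr/local/bin-style directory is
# invisible to a minimal PATH; a symlink into the searched directory
# makes it resolvable. All /tmp/demo paths are illustrative.
mkdir -p /tmp/demo/usr_local_bin /tmp/demo/usr_bin
printf '#!/bin/sh\necho ok\n' > /tmp/demo/usr_local_bin/mytool
chmod +x /tmp/demo/usr_local_bin/mytool
ln -sf /tmp/demo/usr_local_bin/mytool /tmp/demo/usr_bin/mytool
env PATH=/tmp/demo/usr_bin mytool
```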

How to install Minikube and Kubernetes on Amazon Linux EC2

Does anyone have commands to install Minikube and Kubernetes on Amazon Linux? I have tried some Linux commands but it's not working.
Thanks
Baharul Islam
Kubernetes installation
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl
Minikube Installation
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube
To add the Minikube executable to your path:
sudo cp minikube /usr/local/bin && rm minikube
Keep in mind that Amazon Linux is based on RHEL/CentOS.
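The download above is the amd64 build; on Graviton (arm64) instances a different artifact is needed. A sketch that picks the artifact name by architecture (assumption: the arm64 build is published under the same /releases/latest/ path with the name minikube-linux-arm64):

```shell
# Choose the Minikube release artifact for the current CPU architecture.
arch=$(uname -m)
case "$arch" in
  aarch64) artifact=minikube-linux-arm64 ;;
  *)       artifact=minikube-linux-amd64 ;;
esac
echo "https://storage.googleapis.com/minikube/releases/latest/$artifact"
```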

Unable to install Ansible in ECS optimized AMI

Shell script I ran on the ECS AMI:
sudo yum -y install git
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
git submodule update --init --recursive
sudo yum -y install python
sudo yum update
sudo make install
The installation above works with a normal AMI; with the ECS-optimized AMI, make install fails with:
RuntimeError: autoconf error
make: *** [install] Error 1
Create your own AMI and install what you need. Add Docker and the ECS agent so it can participate in the ECS cluster. We happen to use Packer to create our AMIs.

How to use separate volumes for the commit log and data in EBS environment?

I use Cassandra 3.9.
I've learned that I should create separate EBS volumes for the commit log and data when using Cassandra on AWS.
My problem is how.
The following is what I've done and where it failed.
I created volumes for the commit log and data when launching the instance.
I made the EBS volumes available for use by executing the following commands (the standard steps for preparing an EBS volume):
sudo mkfs -t ext4 /dev/xvdk
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /commitlog
sudo mkdir /data
sudo mount /dev/xvdk /commitlog
sudo mount /dev/xvdf /data
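These mounts won't survive a reboot on their own; /etc/fstab entries along these lines could persist them (device names taken from the commands above; using UUIDs from blkid instead of device names is more robust, and nofail avoids a boot hang if a volume is missing):

```
/dev/xvdk /commitlog ext4 defaults,nofail 0 2
/dev/xvdf /data ext4 defaults,nofail 0 2
```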
I changed the directories for the commit log and data in cassandra.yaml (note that data_file_directories takes a YAML list):
commitlog_directory: /commitlog
data_file_directories:
    - /data
After all this setup was done, I ran Cassandra but received an error message.
ERROR 20:49:22 Doesn't have write permissions for /data directory
ERROR 20:49:22 Insufficient permissions on directory /data
So, I changed the ownership of these two directories.
sudo chown ubuntu:ubuntu /commitlog
sudo chown ubuntu:ubuntu /data
I ran Cassandra again and got another error.
ERROR 20:52:44 Unable to verify sstable files on disk
What can be done to solve this problem?
It turned out that every step I took was fine. The problem was that I was using a t2.micro instance to take advantage of the free tier.
Once I scaled every instance up from t2.micro to c4.large, everything worked fine.
I considered deleting this post, but I decided to keep it because someone might find it helpful.

AWS code deploy agent not able to install?

Hi, I am trying to install the CodeDeploy agent on my EC2 instance but am not able to succeed.
I am following the steps below:
sudo apt-get update
sudo apt-get install awscli
sudo apt-get install ruby2.0
cd /home/ubuntu
sudo aws s3 cp s3://bucket-name/latest/install . --region region-name
sudo chmod +x ./install
sudo ./install auto
But the ./install file is missing for me.
I don't think it's a problem with the AMI, as I used the same steps with the same AMI on a different EC2 instance. Does anyone have any idea? Please help me.
You need to fill in the bucket name and region name in sudo aws s3 cp s3://bucket-name/latest/install . --region region-name. If you are in us-east-1, you would use aws-codedeploy-us-east-1 as the bucket name and us-east-1 as the region.
All the buckets follow that pattern so you can fill in another region if you are there instead.
See http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-set-up-new-instance.html for a complete list of buckets for each region.
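Filling in the pattern mechanically, the installer's S3 location for a given region can be built like this (bucket naming pattern as described above):

```shell
# Build the S3 location of the CodeDeploy agent installer for a region.
region="us-east-1"
bucket="aws-codedeploy-$region"
echo "s3://$bucket/latest/install"
```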