I created an EC2 instance and an EFS, and was able to mount EFS properly on the instance.
I need it to auto-mount in case the server is rebooted.
According to the documentation, I add the following to /etc/fstab:
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
Using the EFS file system ID in place of xxxxxxxx
But when I reboot the server, EFS does not get mounted, and I have to remount it manually.
What should I do here?
I'm posting a more detailed solution here since this thread seems to show up near the top of search engine results for related queries.
There are two methods to mount an Amazon EFS: "Amazon EFS mount helper" (amazon-efs-utils) and "NFS client" (nfs-utils).
The examples below show how to mount manually and automatically with each method. Before using them, replace the [bracketed] text with your own values.
==============================
===============
Mounting with "Amazon EFS mount helper"
===============
To mount with "Amazon EFS mount helper" manually, you issue the following command format into CLI:
sudo mount -t efs [fs-XXXXXXXX]:/ /path/to/mount/dir
=====
To mount with "Amazon EFS mount helper" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX]:/ /path/to/mount/dir efs defaults,_netdev 0 0
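You can check the entry without rebooting (assuming the mount helper is installed and the mount directory already exists):
sudo mount -a                    # mounts everything listed in /etc/fstab that is not yet mounted
df -hT /path/to/mount/dir        # the file system should now show up here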
===============
Mounting with "NFS client"
===============
To mount with "NFS client" manually, you issue either of the following command format into CLI:
Use the command instruction given from "Amazon EC2 mount instructions (from local VPC)" when you click in to view the Elastic File System ID in question under EFS Web Console.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=XXXXXXX,wsize=XXXXXXX,hard,timeo=XXX,retrans=X,noresvport [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
OR
sudo mount -t nfs4 -o defaults,_netdev [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
=====
To mount with "NFS client" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir nfs4 defaults,_netdev 0 0
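For example, with a hypothetical file system ID, region, and mount directory filled in, the line and a quick test (no reboot needed) would look like:
fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 defaults,_netdev 0 0
sudo mount -a
df -hT /mnt/efs    # should show a filesystem of type nfs4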
==============================
Given the example formats above, do you notice your problem?
You thought you had the "Amazon EFS mount helper" installed, but based on the manual mount command you posted in your first comment reply (not the opening post), you actually only have the "NFS client" installed on your system. You were using the "Amazon EFS mount helper" format inside /etc/fstab to auto mount, but the manual mount command that worked for you is in "NFS client" format. Since your system doesn't have the "Amazon EFS mount helper" installed, it doesn't understand the auto mount format inside /etc/fstab, so the auto mount doesn't work for you.
The manual mount command you posted above that worked for you is only for "NFS client", not for "Amazon EFS mount helper".
mount -t nfs4 -o nfsvers=4.1 ...
Notice the -t parameter above is nfs4, which is the format for "NFS client". If you were using "Amazon EFS mount helper", the -t parameter should be efs.
To solve the problem, you can use either Amazon EFS mount helper (amazon-efs-utils) or NFS client (nfs-utils), but the command format (in CLI or /etc/fstab) and the mount client being used should be consistent.
In other words:
"Amazon EFS mount helper" <=> efs in both CLI and /etc/fstab
"NFS client" <=> nfs4 in both CLI and /etc/fstab
==============================
Installation instructions for mount client software:
===============
If you want to use "Amazon EFS mount helper", use the following installation instructions for Amazon Linux and Other Distros:
https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html
=====
If you want to use "NFS client", use the following installation instructions on your EC2 instance:
On a Red Hat Enterprise Linux or SUSE Linux instance, including Amazon Linux, use this command:
sudo yum install -y nfs-utils
On an Ubuntu instance, use this command:
sudo apt-get install nfs-common
==============================
Once you have the mount client software installed, use the corresponding mounting instructions posted above.
To solve this using NFS4, please follow the instructions below:
In your AWS account, do the following:
1) Go to your EFS management screen; you should see your EFS_WHATEVER... with a small triangle next to it. Click it to expand.
2) Notice there is a "DNS Name" right in the middle of the screen; it will say something like "fs-1234567c.efs.us-west-1.amazonaws.com". Note that down, as this is the mount point that we will use later on.
3) By default, if you have just created it, you must allow it to be seen by your servers; trying to connect will otherwise freeze, since the firewall is blocking your connection. To allow this, scroll down until you see your security group, which is something like sg-abcdef.
4) Go into your EC2 servers, select the server that you want to access your EFS, and click on its "security groups". It should take you into the security groups management screen; note down its security group ID (something like sg-12345).
5) Now clear the filter of your VPC management screen to see all of the SGs...
6) Enter your EFS security group (i.e. sg-abcdef) and click the search button; this should bring up the EFS ACL.
7) Click on "Inbound Rules" -> EDIT.
8) Click "ADD", select "EFS" from the list, enter your server's SG (i.e. sg-12345), and describe it as "Server XXX access" if you like.
9) Now the server should be able to see the EFS mount.
10) Go into your server and install the necessary components by running, as root (the full command sequence is also condensed after this list):
apt-get install nfs-common
11) Now, to test the mount, create a new directory... something like: mkdir /mnt/heider
12) Mount the FS using the following command:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider
Please note that your fs-1234567c... is taken from your DNS name as mentioned above.
13) If this works, then great, move to the next step; otherwise revise the above to see if you missed anything.
14) Now, to auto-mount this, you need to:
15) Edit /etc/fstab
16) Add the following:
fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider nfs4 defaults,_netdev 0 0
17) Save the file and exit.
18) In the Linux command shell, type:
mount -a
This will test the mounting; if it's mounted, then great, rebooting will auto-mount it.
19) This should auto-mount.
20) Reboot to test; all should be there.
I hope this helps.
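Putting the mounting steps above in one place, the whole sequence on an Ubuntu instance looks roughly like this (the DNS name and directory are the example values from the steps; substitute your own):
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/heider
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider
echo "fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider nfs4 defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo umount /mnt/heider    # unmount the manual mount first
sudo mount -a              # then let /etc/fstab mount it, proving the entry works before a reboot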
In order to use the efs file system type, I believe you need to have the amazon-efs-utils package installed. This will install the additional dependencies.
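For example, on Amazon Linux the package is available from the default repositories (other distributions build it from source, per the AWS documentation linked earlier in this thread):
sudo yum install -y amazon-efs-utils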
Anyone who has this issue,
instead of
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
use
{target_ip}:/ /mnt/efs nfs4 defaults,_netdev 0 0
This works fine for me, and it auto-mounts on newly created instances.
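If you need to look up the mount target IP for an entry like this, the AWS CLI can list it (the file system ID is a placeholder; keep in mind each mount target belongs to a single subnet/AZ):
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx --query 'MountTargets[*].[SubnetId,IpAddress]' --output table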
Related
I'm trying to mount an EFS volume with NFS on macOS, but am having permissions trouble. I am running the following command to mount the volume:
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
and am able to successfully mount the volume, but it mounts with root privileges, and I need to be able to grant access to the volume to the local user. I need the local user to be able to both read and write to the volume.
Trying to chown -R $(whoami) ./efs results in an Unknown error: 10039.
I can successfully chmod 666 the files inside of the mount (sometimes with odd behaviors), but I ultimately need to just grant the local user write access to the volume.
Am I missing an option in the mount command or does anyone know how to mount the efs volume and provide the local user permissions to it?
Use case: launch an AWS Cloud9 environment that has an added EBS volume of 500 GB. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500 GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500 GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
EC2 EBS configuration
Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker builds?
Question: What would be the most efficient instance type for Docker builds?
Type df -hT; here you will find the filesystem type of root (/), whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command for the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say named /mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed with df -hT.
Also make sure to update /etc/fstab so the mount persists if there is a reboot.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
Now go to /etc/fstab using editor of your choice
$ vi /etc/fstab
There should already be an entry for /; add a line for /mount and save the file (note: replace xfs with ext4 if the filesystem is ext4):
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now, finally, run the mount command (the full sequence is also condensed below):
$ mount -a
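Condensed, and assuming an xfs root plus the example device and directory names used above (only run mkfs on an empty volume, as it destroys existing data), the whole sequence is roughly:
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /mount
sudo mount /dev/nvme1n1 /mount
sudo blkid /dev/nvme1n1        # note the UUID
echo 'UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a                  # verifies the fstab entry parses cleanly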
Generally, if it's a new EBS volume, you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the link above.
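A quick way to check whether a volume already has a filesystem before formatting it (the device name here is just an example):
sudo file -s /dev/nvme1n1     # output of "data" means no filesystem yet; otherwise the filesystem type is printed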
We installed the EFS utility and configured the EFS filesystem with EFS mount points within the VPC.
Added the entry in /etc/fstab for a permanent mount, like below.
echo "mount fs-xxxxxxx /mnt/efs efs tls,_netdev 0 0" >> /etc/fstab
After this, when I manually run mount -a -t efs defaults, it works fine; the file system gets mounted successfully without any issue.
But when I try to invoke the same thing from the Ansible mount module like below:
- name: Mount up efs
  mount:
    path: /mnt/efs
    src: fs-xxxxxxxx
    fstype: efs
    opts: tls
    state: mounted
  become: true
  become_method: pbrun
  become_user: root
Note: Ansible is running as a root-privileged user on the target host.
Expected Result:
EFS filesystem should get mounted without any issue.
Actual Result:
We are getting an error in Ansible saying:
Error:
only root can run mount.efs
When I started debugging the issue, I found the check in __init__.py for efs:
https://github.com/aws/efs-utils/blob/555154b79572cd2a9f63782cac4c1062eb9b1ebd/src/mount_efs/__init__.py
It validates the user with the getpass Python module, but somehow, even though I am using become in Ansible, it does not help me get rid of this error.
Could anyone please help me resolve this issue?
Either use the nfs4 fstype (mounting via the file system's DNS name, as in the NFS client examples earlier in this thread), or install the EFS mount helper (amazon-efs-utils) so that the efs fstype works.
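For example, a variant of the task that avoids mount.efs entirely by mounting over plain NFS might look like this (a sketch; the region in the DNS name is an assumption, and the options mirror the NFS examples earlier in this thread):
- name: Mount up efs via NFS
  mount:
    path: /mnt/efs
    src: fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/
    fstype: nfs4
    opts: nfsvers=4.1,_netdev
    state: mounted
  become: true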
I have created a custom CentOS 6.5 image and registered it to AWS as EBS root device type. When I launch an instance, it works perfectly well, except that the storage capacity (instance storage to be included according to the instance type) is not added to the instance.
I tried booting an instance using the official CentOS 6.5 AMI from the AWS Marketplace, but I got the same result.
Does anyone know the reason, if it is a known issue, or whatever?
Thanks in advance.
First you have to make sure that the instance store is attached at launch time. From the AWS console it should look something like this:
Once you boot the instance, you have to create a filesystem on the drive by running:
mkfs.ext4 /dev/sdb
Then you need to mount that drive somewhere in your root filesystem:
mkdir -p /mnt/myinstancestore
mount /dev/sdb /mnt/myinstancestore
You can run these commands to check that your drive is mounted:
df -h
mount
You can also add the mount entry to your /etc/fstab file so that it mounts permanently after every reboot:
/dev/sdb /mnt/myinstancestore ext4 defaults 1 2
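To verify the fstab entry without rebooting (using the example mount point above):
sudo mount -a
df -h /mnt/myinstancestore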
I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a EBS 150GB volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
Tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device"
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new location and it told me it wasn't the right file system.
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same "not the right file system" error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything for me when I attached it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
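If you want to see which device name the OS actually uses (the sdf vs. xvdf confusion from Q1), lsblk lists the attached block devices:
lsblk            # the volume attached as /dev/sdf typically shows up here as xvdf
df -hT /vol      # after mounting, confirms the filesystem type and size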
Step 1: create volume
Step 2: attach it to your instance as the root volume
Step 3: run sudo resize2fs -p /dev/xvde
Step 4: restart Apache: sudo service apache2 restart
Step 5: run df -h
You can see the total volume attached to your instance.