I'm trying to mount an EFS volume with NFS on macOS, but am having permissions trouble. I am running the following command to mount the volume:
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
and am able to successfully mount the volume, but it mounts with root privileges, and I need to be able to grant access to the volume to the local user. I need the local user to be able to both read and write to the volume.
Trying chown -R $(whoami) ./efs fails with Unknown error: 10039.
I can successfully chmod 666 the files inside the mount (sometimes with odd behaviors), but I ultimately just need to grant the local user write access to the volume.
Am I missing an option in the mount command, or does anyone know how to mount the EFS volume and give the local user permissions on it?
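For reference, the full sequence is roughly the following (the file name in the last line is illustrative):
mkdir efs
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
sudo chown -R $(whoami) ./efs        # fails with Unknown error: 10039
chmod 666 efs/some-file              # works, but sometimes behaves oddly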
I'm running prometheus on ECS.
I'm mounting an EFS volume to my EC2 instance. When mounting the EFS I run chmod 777 on it. I'm attaching the EFS volume to the task definition and then creating a mount point from the EFS volume to the /prometheus container path.
When the container starts, it crashes with:
level=error ts=2020-08-10T16:04:39.961Z caller=query_logger.go:109 component=activeQueryTracker msg="Failed to create directory for logging active queries"
It's definitely a permissions issue, since it works fine without mounting the volume. I also know that chmod 777 sometimes won't suffice (for example, running Grafana the same way required chown 472:472, where 472 is Grafana's user ID), but I couldn't find what else to run.
Any ideas?
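Concretely, the instance-side part looks roughly like this (the mount command and path are illustrative, not my exact commands; the task definition then maps this EFS volume to the /prometheus container path):
sudo mount -t efs fs-xxxxxxxx:/ /mnt/efs     # mount EFS on the instance
sudo chmod 777 /mnt/efs                      # the permissions I set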
You can check whether the EFS file system policy has client root access enabled.
For troubleshooting, you can check the Stopped tasks section; by clicking on any stopped task ID you can see the stopped reason.
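If root access is the issue, here is a minimal sketch of the fix, assuming the AWS CLI is configured (the file system ID is a placeholder; ClientRootAccess is what lets mounted clients act as root, e.g. to chown). Separately, recent prom/prometheus images run as user nobody (UID 65534), so handing the directory to that UID may work where chmod 777 does not:
aws efs put-file-system-policy --file-system-id fs-xxxxxxxx --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": [
      "elasticfilesystem:ClientMount",
      "elasticfilesystem:ClientWrite",
      "elasticfilesystem:ClientRootAccess"
    ]
  }]
}'
# On the EC2 instance, after mounting:
sudo chown -R 65534:65534 /mnt/efs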
I attached a volume to an EC2 instance and now when I try to mount it I get this error:
sudo mount /dev/xvdf /mnt/tmp
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error.
What's the problem?
I was using an Amazon Linux 2 image, so I had to mount the volume using this command:
sudo mount -o nouuid /dev/xvdf1 /mnt/tempvol
And now it works.
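For what it's worth, nouuid is typically needed because a volume created from a snapshot of the root volume carries the same XFS UUID as the already-mounted root filesystem, and XFS refuses duplicate UUIDs. As a one-time alternative (a sketch, assuming XFS and that the volume is not currently mounted), you can regenerate the UUID so plain mounts work from then on:
sudo xfs_admin -U generate /dev/xvdf1
sudo mount /dev/xvdf1 /mnt/tempvol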
You have to create a filesystem on this volume:
sudo mkfs.ext4 /dev/xvdf
And then you can mount it.
Please remember that this will format the volume and erase any data on it.
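To check whether the volume already carries a filesystem before formatting (a sketch; device name as above):
sudo file -s /dev/xvdf    # prints just "data" if there is no filesystem yet
lsblk -f                  # shows filesystem type and UUID per device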
Check the partition name: if the disk is partitioned, the device will be /dev/xvdf1 rather than /dev/xvdf. Alternatively, you can try specifying the filesystem type explicitly with the command below:
sudo mount /dev/xvdf /vol -t ext4
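To see whether the disk is partitioned, and what the device names actually are, lsblk helps (output below is illustrative):
lsblk
# NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# xvdf    202:80   0  150G  0 disk
# └─xvdf1 202:81   0  150G  0 part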
I created an EC2 instance and an EFS, and was able to mount EFS properly on the instance.
I need to auto mount in case the server is rebooted.
According to the documentation, I did the following in /etc/fstab:
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
Using the EFS file system ID in place of xxxxxxxx
But when I reboot the server, EFS does not get mounted, and I have to remount it manually.
What should I do here?
I'm posting a more detailed solution here, since this thread seems to show up near the top of search engine results for related queries.
There are two methods to mount an Amazon EFS: the "Amazon EFS mount helper" (amazon-efs-utils) and the "NFS client" (nfs-utils).
The examples below show how to mount manually and automatically with each method. Before using them, replace the text [value] with your own values.
==============================
===============
Mounting with "Amazon EFS mount helper"
===============
To mount with the "Amazon EFS mount helper" manually, issue a command in the following format on the CLI:
sudo mount -t efs [fs-XXXXXXXX]:/ /path/to/mount/dir
=====
To mount with the "Amazon EFS mount helper" automatically, insert the following line into /etc/fstab:
[fs-XXXXXXXX]:/ /path/to/mount/dir efs defaults,_netdev 0 0
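If you also want in-transit encryption, the mount helper accepts a tls option in both places (a sketch, per the efs-utils documentation):
sudo mount -t efs -o tls [fs-XXXXXXXX]:/ /path/to/mount/dir
[fs-XXXXXXXX]:/ /path/to/mount/dir efs _netdev,tls 0 0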
===============
Mounting with "NFS client"
===============
To mount with the "NFS client" manually, issue a command in either of the following formats on the CLI:
Use the command given under "Amazon EC2 mount instructions (from local VPC)" when you click into the Elastic File System in question in the EFS web console.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=XXXXXXX,wsize=XXXXXXX,hard,timeo=XXX,retrans=X,noresvport [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
OR
sudo mount -t nfs4 -o defaults,_netdev [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
=====
To mount with the "NFS client" automatically, insert the following line into /etc/fstab:
[fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir nfs4 defaults,_netdev 0 0
==============================
Given the above example format, do you notice your problem?
You thought you had the "Amazon EFS mount helper" installed, but based on the manual mount command format you posted in your comment reply (not the opening post), you actually only have the "NFS client" installed on your system. You were using the "Amazon EFS mount helper" format inside /etc/fstab to auto mount, but the manual mount command that worked for you is in "NFS client" format. Since your system doesn't have the "Amazon EFS mount helper" installed, it doesn't understand the auto mount format inside /etc/fstab, so auto mounting doesn't work for you.
The manual mount command you posted above that worked for you is only for "NFS client", not for "Amazon EFS mount helper".
mount -t nfs4 -o nfsvers=4.1 ...
Notice the -t parameter above is nfs4, which is the format for "NFS client". If you were using "Amazon EFS mount helper", the -t parameter should be efs.
To solve the problem, you can use either Amazon EFS mount helper (amazon-efs-utils) or NFS client (nfs-utils), but the command format (in CLI or /etc/fstab) and the mount client being used should be consistent.
In other words:
"Amazon EFS mount helper" <=> efs in both CLI and /etc/fstab
"NFS client" <=> nfs4 in both CLI and /etc/fstab
==============================
Installation instructions for mount client software:
===============
If you want to use "Amazon EFS mount helper", use the following installation instructions for Amazon Linux and Other Distros:
https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html
=====
If you want to use "NFS client", use the following installation instructions on your EC2 instance:
On a Red Hat Enterprise Linux or SUSE Linux instance, including Amazon Linux, use this command:
sudo yum install -y nfs-utils
On an Ubuntu instance, use this command:
sudo apt-get install nfs-common
==============================
Once you have the mount client software installed, use the corresponding mounting instructions posted above.
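Whichever client you use, you can test the /etc/fstab entry without rebooting:
sudo mount -a                 # mounts everything in /etc/fstab; errors show up immediately
df -h /path/to/mount/dir      # confirm the file system is mounted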
To solve this using NFS4, please follow the instructions below:
On your AWS account, do the following:
1) Go to your EFS management screen; you should see your EFS_WHATEVER... with a small triangle next to it. Click it to expand.
2) Notice there is a "DNS Name" right in the middle of the screen; it will say something like "fs-1234567c.efs.us-west-1.amazonaws.com". Note that down; this is the mount endpoint we will use later on.
3) By default, if you have just created the file system, you must allow it to be seen by your servers; trying to connect will otherwise freeze, since the firewall is blocking your connection. To allow this, scroll down until you see your security group, something like sg-abcdef.
4) Go into your EC2 servers, select the server that you want to access your EFS, and click on its "security groups"; it should take you to the security groups management screen. Note down its security group ID (something like sg-12345).
5) Now clear the filter of your VPC management screen to see all of the SGs.
6) Enter your EFS security group (i.e. sg-abcdef) and click the search button; this should bring up the EFS ACL.
7) Click on "Inbound Rules" -> Edit.
8) Click "Add", select "EFS" from the list, enter your server's SG (i.e. sg-12345), and describe it as "Server XXX access" if you like. (See the CLI sketch after this list for doing the same from the command line.)
9) Now the server should be able to see the EFS mount.
10) Go into your server and install the necessary components by running, as root:
apt-get install nfs-common
11) Now, to test the mount, create a new directory, something like: mkdir /mnt/heider
12) Mount the FS using the following command:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider
Please note that the fs-1234567c... part is taken from your DNS name, as mentioned above.
13) If this works, great; move to the next step. Otherwise revise the above to see if you missed anything.
14) Now, to auto-mount this, you need to:
15) Edit /etc/fstab
16) Add the following:
fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider nfs4 defaults,_netdev 0 0
17) Save the file and exit.
18) In a Linux command shell, type:
mount -a
This will test the mounting; if it mounts, then rebooting will auto-mount it.
19) This should auto-mount.
20) Reboot to test; everything should be there.
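For completeness, step 8 can also be done from the AWS CLI instead of the console (a sketch using the placeholder group IDs above; 2049 is the standard NFS port behind the console's rule type):
aws ec2 authorize-security-group-ingress \
  --group-id sg-abcdef \
  --protocol tcp --port 2049 \
  --source-group sg-12345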
I hope this helps.
In order to use the efs file system type, I believe you need to have the amazon-efs-utils package installed. This will install the additional dependencies.
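On Amazon Linux it is available from the standard repositories; other distros need to build it per the instructions linked above:
sudo yum install -y amazon-efs-utils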
For anyone who has this issue:
instead of
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
use
{target_ip}:/ /mnt/efs nfs4 defaults,_netdev 0 0
This works fine for me, and it auto-mounts on newly created instances.
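If you need to look up the mount target IP for a file system, the AWS CLI can list it (the file system ID is a placeholder):
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx \
  --query 'MountTargets[].IpAddress'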
I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
Tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device"
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new location and it told me it wasn't the right file system.
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configurations into /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but still gave same not the right file system error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything for me when I attached it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between attaching it and mounting it. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
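To verify the mount and the /etc/fstab entry (a sketch):
df -hT /vol                          # should show /dev/xvdf mounted as ext4
sudo umount /vol && sudo mount -a    # re-mounts from /etc/fstab, catching typos early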
Step 1: Create the volume.
Step 2: Attach it to your instance as the root volume.
Step 3: Run sudo resize2fs -p /dev/xvde
Step 4: Restart Apache: sudo service apache2 restart
Step 5: Run df -h
You can see the total volume attached to your instance.
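If the filesystem doesn't grow, the partition may need growing first (a sketch, assuming an ext4 filesystem on partition 1; growpart comes from the cloud-utils-growpart / cloud-guest-utils package):
sudo growpart /dev/xvde 1        # expand partition 1 to fill the enlarged volume
sudo resize2fs -p /dev/xvde1     # then grow the ext4 filesystem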