Unable to mount a volume on an EC2 instance

I attached a volume to an EC2 instance, and now when I try to mount it I get this error:
sudo mount /dev/xvdf /mnt/tmp
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error.
What's the problem?

I was using an Amazon Linux 2 image, so I had to mount the volume using this command:
sudo mount -o nouuid /dev/xvdf1 /mnt/tempvol
And now it works.
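For background: -o nouuid is needed here because the volume was created from a snapshot of the same machine, so its XFS filesystem carries the same UUID as the root filesystem, and XFS refuses to mount two filesystems with the same UUID. A sketch of a permanent fix, assuming the filesystem really is XFS: regenerate the UUID while the volume is unmounted:
sudo umount /mnt/tempvol                # unmount first (this also replays the XFS log)
sudo xfs_admin -U generate /dev/xvdf1   # write a fresh random UUID
sudo mount /dev/xvdf1 /mnt/tempvol      # a plain mount now works, no nouuid needed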

You have to create a filesystem on this volume:
sudo mkfs.ext4 /dev/xvdf
And then you can mount it.
Please remember that this will format the volume and erase any data already on it.
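If you are not certain the volume is blank, check it first; file -s reports just "data" for a device with no filesystem:
sudo file -s /dev/xvdf
# "/dev/xvdf: data" means no filesystem exists and mkfs is safe;
# any mention of ext4, XFS, etc. means there is data you would destroy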

Check the partition name: if the disk is partitioned, the device will be /dev/xvdf1 rather than /dev/xvdf. Alternatively, specify the filesystem type explicitly with the command below:
sudo mount /dev/xvdf /vol -t ext4
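If you are unsure whether the disk is partitioned, lsblk shows the device tree:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT
# a child entry such as xvdf1 under xvdf means the disk is partitioned,
# so mount /dev/xvdf1; if there is no child, mount /dev/xvdf itself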

Related

Mounting AWS EFS with NFS on macOS

I'm trying to mount an EFS volume with NFS on macOS, but am having permissions trouble. I am running the following command to mount the volume:
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
and am able to successfully mount the volume, but it mounts with root privileges, and I need to be able to grant access to the volume to the local user. I need the local user to be able to both read and write to the volume.
Trying to chown -R $(whoami) ./efs results in an Unknown error: 10039.
I can successfully chmod 666 the files inside of the mount (sometimes with odd behaviors), but I ultimately need to just grant the local user write access to the volume.
Am I missing an option in the mount command, or does anyone know how to mount the EFS volume and give the local user permissions to it?

EBS volume shows no mount point

Use case: Launch an AWS Cloud9 environment with an additional 500 GB EBS volume. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500 GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500 GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
(Screenshot: EC2 EBS configuration)
Question: What should be the next step in order to use this EBS volume?
Question: What should be done to make use of the attached EBS volume for Docker builds?
Question: What would be the most efficient instance type for Docker builds?
Type df -hT; there you will find the filesystem type of root (/), whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command for the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say one named /mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed with df -hT.
Also make sure to update /etc/fstab so the mount persists across reboots.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
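The output looks roughly like this (the UUID value here is hypothetical):
/dev/nvme1n1: UUID="0a1b2c3d-4e5f-6789-abcd-ef0123456789" TYPE="xfs"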
Now open /etc/fstab using the editor of your choice:
$ vi /etc/fstab
There should already be an entry for /; add a line for /mount and save the file (replace xfs with ext4 if the filesystem is ext4):
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now finally run the mount command:
$ mount -a
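To make sure the new fstab entry is valid before the next reboot, you can unmount the volume and let mount -a pick it up again:
$ umount /mount
$ mount -a          # re-mounts everything listed in /etc/fstab
$ df -hT /mount     # /dev/nvme1n1 should appear again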
Generally, if it's a new EBS volume, you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the link above.

AWS EFS filesystem mounting issue via ansible-playbook

We installed efs-utils and configured the EFS filesystem with EFS mount points within the VPC.
We added the entry in /etc/fstab for a permanent mount, like below:
echo "mount fs-xxxxxxx /mnt/efs efs tls,_netdev 0 0" >> /etc/fstab
After this, when I manually run mount -a -t efs defaults, it works fine: the filesystem gets mounted successfully without any issue.
But when I try to invoke the same thing from the Ansible mount module like below:
- name: Mount up efs
  mount:
    path: /mnt/efs
    src: fs-xxxxxxxx
    fstype: efs
    opts: tls
    state: mounted
  become: true
  become_method: pbrun
  become_user: root
Note: Ansible is running as a root-privileged user on the target host.
Expected Result:
EFS filesystem should get mounted without any issue.
Actual Result:
We are getting an error in Ansible saying:
Error:
only root can run mount.efs
When I started debugging the issue, I found the check in mount_efs/__init__.py in efs-utils:
https://github.com/aws/efs-utils/blob/555154b79572cd2a9f63782cac4c1062eb9b1ebd/src/mount_efs/__init__.py
It validates the user with the getpass Python module, but somehow, even though I am using become in Ansible, that does not help me get rid of this error.
Could anyone please help me resolve this issue?
Either the fstype is nfs or you may need to install the EFS Mount Helper.
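As a fallback sketch, you can bypass the mount helper entirely and mount over plain NFS with the options AWS documents for EFS (the filesystem ID and region in the DNS name below are placeholders; note this drops TLS, which only the helper provides):
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /mnt/efs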

Restoring an AWS Snapshot and using it

I have a snapshot, and I can start a new EC2 instance and attach a volume created from the snapshot, but I cannot access the data on the EBS drive without formatting it, which destroys the data. Here is the command list I am using:
cat /proc/partitions
sudo mke2fs -F -t ext4 /dev/xvdf
sudo mount /dev/xvdf /data
This works, but the EBS volume is empty due to the mke2fs.
How can I just mount the EBS snapshot and hence use it under /data?
Don't use mke2fs. As you have observed, that reformats the volume. If, for some reason, you don't think you can otherwise mount it, you may be mounting the wrong logical entity.
Identify the device or partition with lsblk and then mount it, as above.
If there are partitions, you need to supply the partition, not the device, as an argument to mount.
/dev/xvde # device
/dev/xvde1 # first partition
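So the minimal sequence, with the device names from the question, would look like this (no mke2fs anywhere):
lsblk                        # shows xvdf and, if partitioned, a child xvdf1
sudo file -s /dev/xvdf       # should name the original filesystem, not just "data"
sudo mkdir -p /data
sudo mount /dev/xvdf /data   # use /dev/xvdf1 instead if lsblk shows a partition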

Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150 GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
I tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device":
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new device name, and it told me it wasn't the right filesystem:
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
The error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed at http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same wrong-filesystem error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything when I attached it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
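To confirm the fstab entry is correct without waiting for a reboot, remount and check:
sudo umount /vol
sudo mount /vol    # reads the new /etc/fstab line
df -h /vol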
Step 1: Create the volume.
Step 2: Attach it to your instance's root volume.
Step 3: Run sudo resize2fs -p /dev/xvde
Step 4: Restart Apache: sudo service apache2 restart
Step 5: Run df -h
You can see the total volume attached to your instance.
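Note that resize2fs only grows the filesystem; if the device is partitioned, the partition has to be grown first. A sketch assuming partition 1 on /dev/xvde (growpart comes from the cloud-guest-utils package on Ubuntu):
sudo growpart /dev/xvde 1    # grow partition 1 to fill the device
sudo resize2fs /dev/xvde1    # then grow the ext4 filesystem to fill the partition
df -h                        # confirm the new size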