AWS block device mapping to mount a snapshot while creating separate root

I want to create a new instance with root mounting from its AMI (sda1), while at the same time creating a secondary volume (sda2) from a snapshot.
I am using the following block device mapping to add sda2:
[
  {
    "DeviceName": "/dev/sda2",
    "Ebs": {
      "DeleteOnTermination": false,
      "SnapshotId": "snap-0daafbeb9409cb652"
    }
  }
]
However, while an sda1 volume is created from the AMI, it appears that sda2 is mounted as root
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdb 202:16 0 8G 0 disk
└─xvdb1 202:17 0 8G 0 part /
What should be different to cause xvda1 (which maps to sda1) to mount as root instead? I do not want to modify the AMI to do this; the starting point for this process is a stock Ubuntu image. The instance is launched with:
aws ec2 run-instances --image-id ami-c80b0aa2 ... --block-device-mappings file://mappings.json

This problem is caused by the filesystem labels on the two volumes. Ubuntu cloud images locate the root filesystem by label rather than by device name, and in this case both volumes carry the same root label (both ultimately derive from the same stock image), so the boot process picks the wrong one.
The solution is to clear (or change) the label of the volume that is not meant to be mounted as the root filesystem.
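For example, assuming the default ext4 filesystem and the usual Ubuntu cloud-image root label (cloudimg-rootfs), the secondary volume can be relabelled either from a helper instance or from the mis-booted instance itself; device names here follow the lsblk output above and may differ in your case:
# show the labels; both partitions will typically report the same one, e.g. cloudimg-rootfs
sudo blkid /dev/xvda1 /dev/xvdb1
# clear (or change) the label on the partition that should NOT be the root filesystem (assumes ext4)
sudo e2label /dev/xvdb1 ""
# or give it a distinct label instead: sudo e2label /dev/xvdb1 data-vol
# reboot so the root label now resolves only to the intended volume
sudo reboot
After the reboot, xvda1 should be mounted at / and xvdb1 can be mounted wherever you need it.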


How to access the EBS volume on AWS via EC2?

I have an EBS (Elastic Block Store) volume on AWS attached to my EC2 instance.
However, how do I make its full capacity available to that EC2 instance?
When I run
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 80G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 8G 0 disk
So it looks like only the 8G part is mounted but not the whole 80G.
How do I mount the extra space?
I saw an article here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
It says I should format the volume, but since it already has a partition (xvda1), I don't want to accidentally format everything before mounting it again.
Any idea how to make this work and make the full 80GB usable?
Thanks!
Thanks to @jellycsc for the reference above.
Check the filesystem on each volume:
sudo file -s /dev/xvd*
List block devices:
lsblk
Then grow the partition to fill the resized device:
sudo growpart /dev/xvda 1
# 1 is the partition number on the device
Verify with:
lsblk
Then extend the file system (the previous step only grew the partition, not the filesystem).
Check current usage:
df -h
Then check the filesystem type again:
sudo file -s /dev/xvd*
and, for ext4, apply:
sudo resize2fs /dev/xvda1
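Putting those steps together, a minimal sketch for the layout in this question (an 80G device /dev/xvda whose root partition /dev/xvda1 is still 8G and formatted ext4; adjust names to your own lsblk output):
lsblk                       # confirm the device (xvda, 80G) and partition (xvda1, 8G)
sudo growpart /dev/xvda 1   # grow partition 1 to fill the device
sudo resize2fs /dev/xvda1   # grow the ext4 filesystem to fill the partition
df -h /                     # the root filesystem should now report roughly 80G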

AWS Beanstalk docker-pool devmapper running out of thin pool space

Hi, I have a few clusters on AWS Elastic Beanstalk running Docker images. My Docker image size is 680MB, and I verified that the Docker data volume (/dev/xvdcz) has a capacity of 12GB. I also verified that there are no dangling images or stale containers to prune. Currently it is holding 2 images (so roughly 1.4GB in total).
That said, my new application deployments were consistently failing with this error:
layer: devmapper: Thin Pool has 1805 free data blocks which is less than minimum required 2425 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
On further investigation, I found a setting in /etc/sysconfig/docker-storage that sets dm.basesize to 100GB. Here's the file content:
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true --storage-opt dm.fs=ext4 --storage-opt dm.basesize=100G"
This comes from the default AWS AMI. After that I added an ebextension to increase the Docker volume (/dev/xvdcz) to 120GB, and deployments have not failed since.
My question is: why does dm.basesize need to be 100GB, and is there a way to reduce it to 10GB so I don't have to attach additional storage to each box? Ideally without building custom AMIs.
Here's the new layout after I updated the volume to 120GB:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:1 0 8G 0 disk
├─nvme0n1p1 259:2 0 8G 0 part /
└─nvme0n1p128 259:3 0 1M 0 part
nvme1n1 259:0 0 120G 0 disk
└─nvme1n1p1 259:6 0 120G 0 part
├─docker-docker--pool_tdata 253:1 0 118.6G 0 lvm
│ └─docker-docker--pool 253:2 0 118.6G 0 lvm
│ └─docker-259:2-394917-4a52cb647e0a91037aaf4fb64345ae55567f2d8c5ff073048981987e6dcba7b0 253:3 0 100G 0 dm /var/lib/docker/devicemapper/mnt/4a52cb647e0a91037aaf4fb64345ae55567f2d8c5ff073048981987e6dcba7b0
└─docker-docker--pool_tmeta 253:0 0 124M 0 lvm
└─docker-docker--pool 253:2 0 118.6G 0 lvm
└─docker-259:2-394917-4a52cb647e0a91037aaf4fb64345ae55567f2d8c5ff073048981987e6dcba7b0 253:3 0 100G 0 dm /var/lib/docker/devicemapper/mnt/4a52cb647e0a91037aaf4fb64345ae55567f2d8c5ff073048981987e6dcba7b0
By default, EB only gives you 12GB of storage for images.
To expand it to 100GB, create a .ebextensions/blockdevice-xvdcz.config file with the following contents:
option_settings:
  aws:autoscaling:launchconfiguration:
    BlockDeviceMappings: /dev/xvdcz=:100:true:gp2
See the "Configuring additional storage volumes" section of https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html#docker-volumes
For improved performance on Amazon Linux AMI, Elastic Beanstalk configures two Amazon EBS storage volumes for your Docker environment's Amazon EC2 instances. In addition to the root volume provisioned for all Elastic Beanstalk environments, a second 12GB volume named xvdcz is provisioned for image storage on Docker environments.
If you need more storage space or increased IOPS for Docker images, you can customize the image storage volume by using the BlockDeviceMapping configuration option in the aws:autoscaling:launchconfiguration namespace.
For example, the following configuration file increases the storage volume's size to 100 GB with 500 provisioned IOPS:
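The example itself is not reproduced above. Based on the BlockDeviceMappings format already shown (device=snapshot:size:delete-on-termination:volume-type:iops), it presumably looks roughly like the following; check the linked page for the exact current syntax:
option_settings:
  aws:autoscaling:launchconfiguration:
    # io1 with 500 IOPS is an assumption based on the quoted sentence; verify against the docs
    BlockDeviceMappings: /dev/xvdcz=:100:true:io1:500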

How to make an attached EBS volume accessible?

I changed my instance type, and now previously attached volumes are not available at startup. How do I attach and mount volumes?
In the volume info in the AWS console:
Attachment information i-e85c62d0 (hongse):/dev/sdf (attached)
however there is nothing at /dev/sdf on the instance.
I tried to mount it following the info on the AWS site such as:
ubuntu@hongse:~$ sudo mkdir /ebs1
ubuntu@hongse:~$ sudo mount /dev/sdf /ebs1
mount: special device /dev/sdf does not exist
but failed.
What other steps could I try to mount an existing volume?
OK, you're using Ubuntu in AWS with an EBS volume. Try this:
ubuntu@hostname1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 128G 0 disk
└─xvda1 202:1 0 128G 0 part /
xvdf 202:80 0 1000G 0 disk
Note that the xvdf (1TB) drive is not mounted in my example.
You will want to type the following to mount your disk:
ubuntu@hostname1:~$ sudo mount /dev/xvdf /ebs1
NOTE: the AWS console shows /dev/sdf while the OS sees /dev/xvdf because the Xen paravirtual block driver in the instance renames sdX devices to xvdX; both names refer to the same volume.
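As a fuller sketch (device names and mount point follow the example above; the fstab line is only needed if the mount should survive reboots):
lsblk                          # find the device the console's /dev/sdf maps to (here /dev/xvdf)
sudo file -s /dev/xvdf         # "data" means no filesystem yet; only a brand-new empty volume should be formatted
# sudo mkfs -t ext4 /dev/xvdf  # only for a new empty volume: mkfs erases all data
sudo mkdir -p /ebs1
sudo mount /dev/xvdf /ebs1     # use /dev/xvdf1 instead if lsblk shows a partition
# optional: mount at boot via the filesystem UUID
sudo blkid /dev/xvdf
# then add a line like this to /etc/fstab, replacing <uuid> with the value blkid printed:
# UUID=<uuid>  /ebs1  ext4  defaults,nofail  0 2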

Is the EBS volume mounted? And where?

On my EC2 instance, which has a 100GB EBS volume attached, I run this command:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 100G 0 disk
└─xvda1 202:1 0 8G 0 part /
Here is the file /etc/fstab:
UUID=ue9a1ccd-a7dd-77f8-8be8-08573456abctkv / ext4 defaults 1 1
I want to understand: why does only the 8GB partition have a mount point?
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?
I want to understand: why does only the 8GB partition have a mount point?
Because additional volumes are not formatted or mounted by default. AWS does not know whether you'd like ext4, NTFS or something else, nor which mount point you'd like to use.
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?
Yes, if you have an EBS-backed instance (as opposed to an instance store-backed instance) and you do not have other volumes mounted (not to be confused with 'attached').
P.S. As far as I can see, you initially created an 8GB volume and then resized it to 100GB via the AWS console. Note that you resized the EBS volume (xvda) but did not resize the partition (xvda1). AWS will not resize the partition automatically for the same reason: it doesn't know how you intend to use the extra space.
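To actually use the extra ~92GB on the root filesystem, the usual sequence (assuming the ext4 root shown in the fstab above and that cloud-utils growpart is available) is:
sudo growpart /dev/xvda 1   # grow partition 1 to fill the 100G device
sudo resize2fs /dev/xvda1   # grow the ext4 filesystem into the enlarged partition
df -h /                     # the root filesystem should now report roughly 100G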

How can I re-download the pem file in AWS EC2?

I made a key pair pem file called "test.pem" and downloaded it to my PC.
I made a new instance with this pem file.
Now I am on a different PC, I don't have the pem file locally, and my previous PC is in the middle of the sea (being shipped).
How can I download the "test.pem" file again?
No, you cannot download the .pem file again. You can download it ONLY once, at the moment you create the key pair.
You cannot download such private key files more than once.
You can reuse them for multiple instances.
The best thing you can do is:
Download it
store it in S3 (in a private bucket, of course)
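For example, with the AWS CLI (the bucket name is just a placeholder):
aws s3 cp test.pem s3://my-private-key-backups/test.pem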
You can recover your machine even if you lost the pem file. There is a way:
1. Create a new instance in the same region and VPC.
2. Stop the old machine (do not terminate it).
3. Go to EBS and detach the root volume of the old machine.
4. Attach that volume to the new instance (as /dev/sdf). This newly attached volume will be secondary on the new instance, because the new instance already has its own root volume.
5. Log in to the new machine and follow the steps below:
# mount /dev/xvdf1 /mnt
# cp /root/.ssh/authorized_keys /mnt/root/.ssh/
# umount /mnt
6. Detach the secondary volume from the new instance.
7. Attach this volume back to the old instance as its root volume.
8. Log back in to the old machine using the pem file you got when creating the new instance for recovery.
You cannot re-download the .pem or .ppk file again.
SOLUTION
Go to Network & Security --> Key Pairs
Create a new key pair and SAVE IT NOW
Delete the original one
You are good to go
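The same thing can be done with the AWS CLI; the key name here is only an example:
aws ec2 create-key-pair --key-name test2 --query 'KeyMaterial' --output text > test2.pem
chmod 400 test2.pem
Note that, like the console route, this only helps for instances launched afterwards; an existing instance still accepts only the key pair it was launched with, unless you edit its authorized_keys as described in the other answers.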
My notes from doing this recently.
There are a few traps for the unwary and tools which might be unfamiliar to some.
Step 1) Detach your root volume from your machine using AWS console.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html
Amazon EC2 console > dashboard > instances > select instance > copy instance id i-06d2680a4d94c4f59 (29-5-22_flask_gunicorn_nginx)
The instance must be in the stopped state (check the dashboard, as it takes a few seconds after telling it to stop).
Amazon EC2 console > dashboard > volumes > select volume with matching instance ID. vol-02e720595d57d3591
'actions' dropdown > detach
Step 2) Launch a fresh EC2 instance (not from your old machine's AMI)
(same region and Virtual Private Cloud)
Take note of the new instance id: i-blah
Step 3) Attach your old volume to new EC2 machine
amazon EC2 console > EC2 > Volumes > selected volume.
refresh the page after detaching the volume, then use the 'actions' dropdown > 'attach volume'
select the new EC2 instance
wait & refresh page until "Attached Instances" shows "attached"
Volume ID vol-blah
Attached Instances i-blah : /dev/sdf (attached)
Step 4) Now login to new ec2 machine and mount the old EBS volume
list the available disks
lsblk
#------------------------
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 26.6M 1 loop /snap/amazon-ssm-agent/5163
loop1 7:1 0 55.5M 1 loop /snap/core18/2344
loop2 7:2 0 61.9M 1 loop /snap/core20/1405
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 43.6M 1 loop /snap/snapd/15177
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 7.9G 0 part /
├─xvda14 202:14 0 4M 0 part
└─xvda15 202:15 0 106M 0 part /boot/efi
xvdf 202:80 0 8G 0 disk
├─xvdf1 202:81 0 7.9G 0 part
├─xvdf14 202:94 0 4M 0 part
└─xvdf15 202:95 0 106M 0 part
#------------------------
sudo file -s /dev/xvda
/dev/xvda: DOS/MBR boot sector, extended partition table (last)
sudo file -s /dev/xvdf
/dev/xvdf: DOS/MBR boot sector, extended partition table (last)
#check file type used on existing volume
mount | grep "^/dev"
/dev/xvda1 on / type ext4 (rw,relatime,discard,errors=remount-ro)
/dev/xvda15 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
# do NOT format the old volume: mkfs would wipe the data you are trying to recover
# mount the existing root partition (xvdf1, from the lsblk output above) instead
sudo mkdir /newvolume
sudo mount /dev/xvdf1 /newvolume/
#check file type used on existing volume
mount | grep "^/dev"
/dev/xvda1 on / type ext4 (rw,relatime,discard,errors=remount-ro)
/dev/xvda15 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/xvdf1 on /newvolume type ext4 (rw,relatime)
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 26.6M 1 loop /snap/amazon-ssm-agent/5163
loop1 7:1 0 55.5M 1 loop /snap/core18/2344
loop2 7:2 0 61.9M 1 loop /snap/core20/1405
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 43.6M 1 loop /snap/snapd/15177
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 7.9G 0 part /
├─xvda14 202:14 0 4M 0 part
└─xvda15 202:15 0 106M 0 part /boot/efi
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 7.9G 0 part /newvolume
#check the disk space to validate the volume mount.
cd /newvolume
df -h .
Step 5) Now go to the mounted old volume, into the home directory on that volume, and then into its .ssh folder (on an Ubuntu image this is under /newvolume/home/ubuntu).
cd /newvolume/home/ubuntu/.ssh
cat authorized_keys
Step 6) Now generate a new private and public key, then paste the public key into that authorized_keys file on the old volume.
nb: since SSH login was set up when this new EC2 instance was created, a usable key already exists; its entry in the new instance's ~/.ssh/authorized_keys can simply be copied across, so no new key needs to be generated.
Step 7) Once you are done with the above steps, detach that volume from this EC2 machine.
Stop the new instance, wait for the instance to stop.
Detach the volume, wait for the volume to detach.
Step 8) Now attach this volume to your old machine as the root volume.
Attach the volume to the old instance as /dev/sda1 (watch the prompt, as this attaches it as root), wait for the volume to attach.
Start the old instance, wait for the start to complete.
nb: this error will occur if the volume is not attached as root:
Failed to start the instance i-06dblah
Invalid value 'i-06dblah' for instanceId. Instance does not have a volume attached at root (/dev/sda1)
Step 9) Now try to log in to your old machine with the newly generated key.
cd ~
sudo ssh-keygen -f "/root/.ssh/known_hosts" -R "xxx.xxx.xxx.xxx"
sudo ssh -i my_key_filename.pem ubuntu@xxx.xxx.xxx.xxx
You can then remount your original volume and copy across any files you need to retrieve.
It's worth noting here: all SSH access should be locked down to specific IPs and should use key rotation tools. It's likely people will find this post because their security has been breached.
Log in to the old instance via FTP/SFTP and download the key to your PC (this only works if a copy of the key was saved on the instance at some point; EC2 itself never stores the private key on the instance).
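A sketch of that, assuming you still have some other working credential and that a copy of test.pem really was uploaded to the instance at some point (the path and key name below are purely hypothetical):
# adjust the key, user, host and path to your own setup
scp -i some_other_working_key.pem ubuntu@<instance-public-ip>:/home/ubuntu/backups/test.pem .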