I was trying to add several EBS volumes to an EC2 instance, using something like this:
from boto.ec2.blockdevicemapping import BlockDeviceMapping, EBSBlockDeviceType

# conn is an existing boto.ec2 connection
block_map = BlockDeviceMapping()
xvdf = EBSBlockDeviceType()
xvdf.delete_on_termination = True
xvdf.size = opts.ebs_vol_size
block_map['/dev/xvdf'] = xvdf

req = conn.request_spot_instances(key_name=opts.key_pair,
                                  price=opts.price,
                                  image_id=ami,
                                  security_groups=[instance_group],
                                  instance_type=opts.instance_type,
                                  block_device_map=block_map,
                                  count=count)
The EBS volumes are created: I can see them attached to the EC2 instance in the AWS console, and I'm 100% sure they exist because I can list them with the lsblk command once I log into the instance. I also added a couple of entries to my /etc/fstab so that the EBS volumes are mounted at boot time.
However, they are not mounted. If I run the command mount -a the following error shows up:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
So it seems the EBS volumes are created by EBSBlockDeviceType but not formatted. After I format them manually, I can run mount -a again and they mount fine.
My question is: is it possible to create and format a volume through the EBSBlockDeviceType() constructor, so that I can mount it directly?
Another option I think I might have is to attach an already formatted EBS snapshot using the snapshot_id field of the boto.ec2.blockdevicemapping.BlockDeviceType class.
Thank you!
The mount command fails for a newly allocated volume because there is no file system on it. BlockDeviceType (or EBSBlockDeviceType) does not have an option to choose a file system for the underlying EBS volume. Once the volume is allocated, you can create a file system of your choice on it.
However, for a volume created from a formatted EBS snapshot (i.e. a snapshot taken from a volume that already had a file system), there is no need to create the file system again. You can use file -s <device name> to find out whether the device already has a file system.
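For example, a minimal check-and-format sketch you could run on the instance; the device name /dev/xvdf and mount point /data just mirror the mapping above, and the grep test is a rough heuristic:
# only create a file system if file -s does not report one
if ! sudo file -s /dev/xvdf | grep -q filesystem; then
    sudo mkfs -t ext4 /dev/xvdf
fi
sudo mkdir -p /data
sudo mount /dev/xvdf /data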
More details at: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
Related
I am using a g4dn.xlarge instance on AWS with two volumes:
An 8GB default root volume, and
A 50GB second volume that I have successfully attached and mounted to the instance, as confirmed by df -h
When I then try to install Anaconda, I successfully download the installer of my preferred version with wget, but once I run the bash installer I get the error:
OSError: [Errno 28] No space left on device
[1883] Failed to execute script entry_point
Failed to write all bytes for lib/python3.7/config-3.7m-x86_64-linux-gnu/Makefile
fwrite: No space left on device
The question is: what more should I do to make the installation happen on the second volume that I have already mounted?
So, we have two options:
You can follow THIS GUIDE in order to expand the size of the root partition. The way it works is that you stop the instance, create a snapshot of the root volume, create a new, larger volume from that snapshot, attach it to the instance, and start the instance again (see the sketch after this list). It should be fine, but it didn't work for me
Create a new instance but with a bigger root partition size from the beginning
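If it helps, here is a rough sketch of option 1 with the AWS CLI; every ID, the size, and the availability zone are placeholders, and on some AMIs the root device is /dev/sda1 rather than /dev/xvda:
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root backup"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 50 --availability-zone eu-west-1a
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0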
I have a snapshot, and I can start a new EC2 instance and attach a volume created from it, but I cannot access the data on the EBS drive without formatting it, which destroys the data. Here is the command list I am using:
cat /proc/partitions
sudo mke2fs -F -t ext4 /dev/xvdf
sudo mount /dev/xvdf /data
This works, but the EBS volume is empty because of the mke2fs.
How can I just mount the EBS snapshot and hence use it under /data?
Don't use mke2fs. As you have observed, that reformats the volume. If, for some reason, you don't think you can otherwise mount it, you may be mounting the wrong logical entity.
Identify the device or partition with lsblk and then mount it, as above.
If there are partitions, you need to supply the partition, not the device, as an argument to mount.
/dev/xvde # device
/dev/xvde1 # first partition
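As a minimal sketch (the device names are examples; check lsblk on your own instance):
lsblk                        # shows e.g. xvdf and, if it is partitioned, xvdf1
sudo mkdir -p /data
sudo mount /dev/xvdf1 /data  # mount the partition, not the whole device
ls /data                     # the data from the snapshot should be here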
I have created a custom CentOS 6.5 image and registered it with AWS as an EBS-backed (EBS root device) AMI. When I launch an instance, it works perfectly well, except that the instance storage (which should be included according to the instance type) is not added to the instance.
I also tried booting an instance from the official CentOS 6.5 AMI in the AWS Marketplace, but got the same result.
Does anyone know the reason, or whether it is a known issue?
Thanks in advance.
First you have to make sure that the instance store is attached at launch time (you can check this in the AWS console, in the instance's block device / storage settings).
Once you boot the instance, you have to create a filesystem on the drive by running:
mkfs.ext4 /dev/sdb
Then you need to mount that drive somewhere in your root filesystem:
mkdir -p /mnt/myinstancestore
mount /dev/sdb /mnt/myinstancestore
You can run these commands to check that your drive is mounted:
df -h
mount
You can also add the mount entry to your /etc/fstab file so that it mounts permanently after every reboot:
/dev/sdb /mnt/myinstancestore ext4 defaults 1 2
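If you want to sanity-check that fstab entry without rebooting, something like this should work (assuming the mount point above):
umount /mnt/myinstancestore
mount -a                      # remounts everything listed in /etc/fstab
df -h /mnt/myinstancestore    # should show the instance store mounted again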
So, the story is a bit long, but in a nutshell: I had an EC2 micro instance and I lost the connection to it after installing and running whereami (silly me). I took a snapshot of the instance and created a volume from it. Then I created a new micro instance and added the volume I got from the broken one, so this new instance now has two drives: sda1 (the default) and sdf (the one I added). I attached it from the AWS panel and now I'd like to mount it on my new instance, but I can't get it to work.
I installed the AWS command-line tools with sudo apt-get install ec2-api-tools ec2-ami-tools, but even so it fails. I tried ec2-attach-volume volume_id -i instance_id -d device like this:
ec2-attach-volume vol-4d826724 -i i-6058a509 -d /dev/sdf
But now it asks me for a key: Required option '-K, --private-key KEY' missing (-h for usage)
And I am quite stuck on this...
Of course I do not want to format the drive I'm adding because I want to recover the information it has.
If you have already attached the volume from the AWS Management Console, you don't need to run ec2-attach-volume. You just need to mount it. You can find instructions here.
If you get a message saying that the device does not exist, see this answer.
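A minimal sketch of the mounting part, assuming the console attached the volume as /dev/sdf; on many kernels it will actually show up as /dev/xvdf, so check lsblk first:
lsblk                                  # find the attached volume, e.g. xvdf
sudo mkdir -p /mnt/recovered
sudo mount /dev/xvdf /mnt/recovered    # or /dev/xvdf1 if the volume is partitioned
ls /mnt/recovered                      # your old data should be visible here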
I am having problems adding ephemeral storage to my existing EBS-backed instance. I have a small instance running on an 8GB EBS root device, and I would like to add ephemeral storage to this instance and run it as a medium instance.
The procedure I tried, which did not work for me:
1) Took a snapshot from the instance EBS volume.
2) Registered a new AMI based on the snapshot using ec2-api-tools:
ec2-register -a x86_64 -n "My AMI with ephemeral storage" --kernel <AKI-ID> --root-device-name "/dev/sda1" -b "/dev/sda1=<SNAP-ID>:8:true:standard" -b "/dev/sdc=ephemeral1"
3) Launched new medium instance with the new AMI I just created:
ec2-run-instances <AMI-ID> -t m1.medium --kernel <AKI-ID> -k <MY_KEY_NAME> -g default -b "/dev/sdc=ephemeral1"
4) SSHed into my new instance after it started up, and the ephemeral storage is nowhere to be found (checked with fdisk -l, for example). The root device is fine and correct, but even trying ephemeral0 instead of ephemeral1 did not change anything.
Apparently there is nothing in the API that tells you when you exceed your instance store mappings. A medium instance can only have one ephemeral drive. In fact, /dev/sdc may only be mappable on large instances and up:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes
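If it helps, a sketch of the same launch command with the single instance-store volume a medium instance supports mapped to /dev/sdb instead (the AMI, kernel, and key names are placeholders as before):
ec2-run-instances <AMI-ID> -t m1.medium --kernel <AKI-ID> -k <MY_KEY_NAME> -g default -b "/dev/sdb=ephemeral0"
# once the instance is up, the drive should be visible with lsblk or fdisk -l (often as /dev/xvdb)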