I am following an online tutorial to set up an EC2 instance for a group project. http://www.developintelligence.com/blog/2017/02/analyzing-4-million-yelp-reviews-python-aws-ec2-instance/.
The instance I am using is r3.4xlarge. The tutorial says that if I chose an instance with SSD storage, I need to mount it by running the following:
lsblk
sudo mkdir /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd
lsblk shows the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 300G 0 disk
However, when I run sudo mount /dev/xvdb /mnt/ssd, it gives me the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Could someone provide a solution to this error? Thanks!
Before a filesystem can be mounted in Linux, it first has to be created.
In this case it might be:
sudo mkfs.ext4 /dev/xvdb
This creates an ext4 filesystem on the /dev/xvdb device.
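Putting the whole sequence together, a minimal sketch (assuming the ephemeral SSD really is /dev/xvdb, as in the lsblk output above; mkfs.ext4 destroys anything already on the device, so double-check first):

```shell
# Confirm the target device and that it carries no data you need
lsblk

# Create an ext4 filesystem on the ephemeral SSD (destructive!)
sudo mkfs.ext4 /dev/xvdb

# Mount it and hand ownership to the ubuntu user
sudo mkdir -p /mnt/ssd
sudo mount /dev/xvdb /mnt/ssd
sudo chown -R ubuntu /mnt/ssd

# Verify the mount and its size
df -h /mnt/ssd
```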
Related
I want to change dm.basesize in my containers, i.e. set the base container size to 20GB.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
xvdg 202:96 0 8G 0 disk
I have a shell script:
#cloud-boothook
#!/bin/bash
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
I executed this script and then stopped the Docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker stop
Redirecting to /bin/systemctl stop docker.service
[ec2-user@ip-172-31-41-55 ~]$
Then I started the Docker service:
[ec2-user@ip-172-31-41-55 ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[ec2-user@ip-172-31-41-55 ~]$
But the container size doesn't change.
This is the /etc/sysconfig/docker file:
# The max number of open files for the daemon itself, and all
# running containers. The default value of 1048576 mirrors the value
# used by the systemd service unit.
DAEMON_MAXFILES=1048576
# Additional startup options for the Docker daemon, for example:
# OPTIONS="--ip-forward=true --iptables=true"
# By default we limit the number of open files per container
OPTIONS="--default-ulimit nofile=1024:4096"
# How many seconds the sysvinit script waits for the pidfile to appear
# when starting the daemon.
DAEMON_PIDFILE_TIMEOUT=10
I read in the AWS documentation that I can execute scripts on the instance when I start it. I don't want to restart my AWS instance because I would lose my data.
Is there a way to update the container size without restarting the instance?
In the AWS documentation I can't find how to run a script when I launch the instance.
I followed the tutorial at
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
but could not find an example of how to set a script when launching the instance.
UPDATED
I configured the file
/etc/docker/daemon.json
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.directlvm_device=/dev/xdf",
"dm.thinp_percent=95",
"dm.thinp_metapercent=1",
"dm.thinp_autoextend_threshold=80",
"dm.thinp_autoextend_percent=20",
"dm.directlvm_device_force=false"
]
}
When I start Docker, I get:
Error starting daemon: error initializing graphdriver: /dev/xdf is not available for use with devicemapper
How can I configure the parameter dm.directlvm_device=/dev/xdf?
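Before digging into devicemapper itself, it may be worth checking that the device node named in daemon.json actually exists: the config above says /dev/xdf, while the lsblk output lists the device as /dev/xvdf. A sketch of that check (device names taken from the question):

```shell
# List the block devices the kernel knows about
lsblk

# This fails if the name in daemon.json has a typo
ls -l /dev/xdf

# The name lsblk actually reported
ls -l /dev/xvdf

# direct-lvm also refuses a device that already carries a filesystem
# or partition table; for an empty device, blkid prints nothing
sudo blkid /dev/xvdf
```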
I am running Debian 8.7 on Google Cloud. The instance had a disk of size 50G, and I increased its size to 100G, as shown in the lsblk output below:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
`-sda1 8:1 0 50G 0 part /
I then tried to increase the size of sda1 using:
sudo growpart /dev/sda 1
but got the following error:
failed [sfd_list:1] sfdisk --list --unit=S /dev/sda
FAILED: failed: sfdisk --list /dev/sda
It didn't tell me the specific reason for the failure, and googling around, I couldn't find anyone else who has run into this issue.
I followed the gcloud documentation and cannot figure out where the problem is.
Google Cloud images for Debian, Ubuntu, etc. have the ability to automatically resize the root file system on startup. If you resize the disk while the system is running, the next time the system is rebooted the partition and file system will be resized.
You can also resize the root file system while the system is running without rebooting.
Replace INSTANCE_NAME and ZONE in the following commands. The second command assumes that the file system is ext4; check which one your system uses.
Resize the disk:
gcloud compute disks resize INSTANCE_NAME --zone ZONE --size 30GB --quiet
Resize the partition and file system:
gcloud compute ssh INSTANCE_NAME --zone ZONE --command "sudo expand-root.sh /dev/sda 1 ext4"
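After the resize, it is worth confirming from inside the instance that both the partition and the filesystem picked up the new size (a sketch, assuming the disk layout from the question):

```shell
# The partition (sda1) should now match the disk (sda) size
lsblk /dev/sda

# The mounted filesystem should report the new capacity
df -h /
```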
Debian 9 – Resize Root File System
I am using a public snapshot. I created one volume with 15G and another with 25G, both from the same snapshot. However, after mounting, df shows both as 8G and full, while lsblk shows the block devices as 15G and 25G. Do I need to pass an extra argument when mounting?
And how can I mount read/write?
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
..
xvdf 202:80 0 25G 0 disk /data
xvdg 202:96 0 15G 0 disk /data2
df
Filesystem 1K-blocks Used Available Use% Mounted on
..
/dev/xvdf 8869442 8869442 0 100% /data
/dev/xvdg 8869442 8869442 0 100% /data2
mount
..
/dev/xvdf on /data type iso9660 (ro,relatime)
/dev/xvdg on /data2 type iso9660 (ro,relatime)
Probably your volume's raw capacity is larger than your filesystem size, so, as @avihoo-mamca suggested, you need to extend the filesystem to the volume size using resize2fs.
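A sketch of that check and fix, assuming the filesystems turn out to be ext2/3/4 (note that the mount output above reports iso9660, a read-only CD-ROM filesystem that resize2fs cannot grow; blkid will tell you what is actually on each volume):

```shell
# Identify the real filesystem type on each volume
sudo blkid /dev/xvdf /dev/xvdg

# If they are ext2/3/4, grow each filesystem to its full volume size
# (ext4 can be grown online, i.e. while mounted)
sudo resize2fs /dev/xvdf
sudo resize2fs /dev/xvdg

# Confirm the new sizes
df -h /data /data2
```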
I'm sure the problem appears because of my misunderstanding of EC2+EBS configuration, so the answer might be very simple.
I've created a RedHat EC2 instance on Amazon AWS with 30GB of EBS storage, but lsblk shows that only 6GB of the total 30 is available to me:
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 6G 0 part /
How can I make all the remaining storage space usable by my instance?
[UPDATE] commands output:
mount:
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sudo fdisk -l /dev/xvda:
WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 1 1306 10485759+ ee GPT
resize2fs /dev/xvda1:
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1572864 blocks long. Nothing to do!
I believe you are experiencing an issue that seems specific to EC2 and RHEL* where the partition won't extend using the standard tools.
If you follow the instructions of this previous answer you should be able to extend the partition to use the full space. Follow the instructions particularly carefully if expanding the root partition!
unable to resize root partition on EC2 centos
If you update your question with the output of fdisk -l /dev/xvda and mount, that should help provide extra information if the following isn't suitable:
I would assume that you could either re-partition xvda to provision the space for another mount point (/var or /home, for example) or grow your current root partition into the extra space available; you can follow this guide here to do this.
Obviously, be sure to back up any data you have on there; this is potentially destructive!
[Update - how to use parted]
The following link will talk you through using GNU Parted to create a partition. You will essentially just need to create a new partition, then temporarily mount it to a directory such as /mnt/newhome, copy across all of the current contents of /home (recursively as root, keeping permissions, with cp -rp /home/* /mnt/newhome), rename the current /home to /homeold, and then make sure you have set up fstab with the correct entry (assuming your new partition is /dev/xvda2):
/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 2
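End to end, the steps described above might look like the following sketch (the partition start offset and the /dev/xvda2 name are assumptions; adjust them to what parted actually reports on your disk):

```shell
# Create a new partition in the free space and put ext4 on it
# (the 10GB start offset is hypothetical; use the end of xvda1)
sudo parted /dev/xvda mkpart primary ext4 10GB 100%
sudo mkfs.ext4 /dev/xvda2

# Stage the new /home and copy data across, preserving permissions
sudo mkdir /mnt/newhome
sudo mount /dev/xvda2 /mnt/newhome
sudo cp -rp /home/* /mnt/newhome/

# Swap it in (keep the old copy until you are sure nothing is missing)
sudo umount /mnt/newhome
sudo mv /home /homeold
sudo mkdir /home
sudo mount /dev/xvda2 /home
```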
Good afternoon,
I am new to EC2 and have been trying to mount an EBS volume on an EC2 instance. Following the instructions at this StackOverflow question I did the following:
1. Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
2. Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
There now appears to be a directory at /vol, but it has been prepopulated with a folder called lost+found, and it does not show the 15GB that I assigned to the EBS volume (it shows something much smaller).
Any help you could provide would be appreciated. Thanks!
UPDATE 1
After following the first suggestion (sudo mount /dev/xvdf /vol), here is the output of df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 791440 7046084 11% /
udev 294216 8 294208 1% /dev
tmpfs 120876 164 120712 1% /run
none 5120 0 5120 0% /run/lock
none 302188 0 302188 0% /run/shm
/dev/xvdf 15481840 169456 14525952 2% /vol
This seems to indicate that I do in fact have the 15GB on /vol. However, I still have that strange lost+found folder in there. Is that anything to be worried about?
Nothing is wrong with your /vol; it was mounted, as shown by the df output.
The lost+found directory is used by the filesystem to recover broken files (fsck stores recovered files there), so it is normal to see it.
The small-size impression might come down to kibibytes:
1 kibibyte = 2^10 = 1024 bytes
16 GB ≈ 14.9 GiB
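The decimal-versus-binary difference is easy to reproduce; for example (a quick check, using the roughly 16 GB that df reports for the volume in the question):

```shell
# Convert 16 decimal gigabytes to binary gibibytes (2^30 bytes)
awk 'BEGIN { printf "%.1f\n", 16 * 10^9 / 2^30 }'
# prints 14.9
```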
Try this as the last command instead:
sudo mount /dev/xvdf /vol