I am trying to mount a persistent disk with data on it to a VM in order to use it in Google Datalab. So far I've had no success; ideally, I would like to see my files in the Datalab notebook.
First, I added the disk in the VM settings in Read/Write mode.
Second, I ran lsblk to see which disks are there.
Then I tried this: sudo mount -o ro /dev/sdd /mnt/disks/z
I got this error:
wrong fs type, bad option, bad superblock on /dev/sdd,
P.S. I used the disk I want to mount on another VM and downloaded some data onto it. It was formatted as an NTFS disk.
Any ideas?
I would need to create a file system, as noted in the following document.
$ mkfs.vfat /dev/sdX2
creates a file system on an existing partition. By contrast, parted's mkpart creates a new partition without creating a file system on that partition:
Command: mkpart [part-type fs-type name] start end
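For reference, a minimal sketch of the format-and-mount flow, assuming the disk still shows up as /dev/sdd (verify with lsblk) and that the NTFS data on it can be discarded, since mkfs erases it. ext4 is used here, but any supported file system (including the mkfs.vfat quoted above) would work:

# WARNING: this destroys the existing NTFS data on /dev/sdd
sudo mkfs.ext4 /dev/sdd
sudo mkdir -p /mnt/disks/z
sudo mount -o discard,defaults /dev/sdd /mnt/disks/z
# optional: make the mount persistent across reboots
echo '/dev/sdd /mnt/disks/z ext4 discard,defaults 0 2' | sudo tee -a /etc/fstab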
How can one download files from a GCP Storage bucket to a Container-Optimised OS (COS) on instance startup?
I know of the following solutions:
gcloud compute copy-files
SSH through console
SCP
Yet all of these have to be done manually and externally after an instance is started.
There is also cloud-init, yet I can't find any info on how to copy files from a Storage bucket with it. Examples seem to suggest that it's better to include the content of the files in the cloud-init file directly, which is not something I want to do for security reasons. Is it possible to download files from a Storage bucket using cloud-init?
I considered using a startup script, yet COS lacks CLI tools such as gcloud or gsutil, so I can't run any such commands in a startup script.
I know I could copy the files manually and then save the image as a boot disk, but I'm hoping there are solutions that avoid having to do so.
Most of all, I'm assuming I'm not asking for something impossible, given that the COS instance setup allows me to specify Docker volumes that I could mount onto the starting container. This seems to suggest I should be able to have some private files on the instance by the moment COS attempts to run my image on startup. But how?
Trying to execute a startup-script with a cloud-sdk image and copying files there, as suggested by Guillaume, didn't work for me for a while, showing this log. Eventually I realised that the cloud-sdk image is 2.41GB when uncompressed and takes over 2 minutes to finish pulling. I tried again with an empty COS instance and the startup script completed successfully, downloading the data from a Storage bucket.
However, a 2.41GB image and over 2 minutes of boot time sound like overkill for downloading a 2KB file. Don't they?
I'm glad to see a working solution to my question (thanks Guillaume!) although I'm still wondering: isn't there a nicer way to do this? I feel that this method is even less tidy than manually putting the files on the COS instance and then creating a machine image to use in the future.
Based on Guillaume's answer I created and published a gsutil wrapper image, available as voyz/gsutil_wrap. This way I am able to run a startup-script with the following command:
docker run -v /host/path:/container/path \
--entrypoint gsutil voyz/gsutil_wrap \
cp gs://bucket/path /container/path
It's essentially a copy of what Guillaume suggested, except it is using an image containing only a minimum setup required to run gsutil. As a result it weighs 0.22GB and pulls within 10-20 seconds on average - as opposed to 2.41GB and over 2 minutes respectively for the google/cloud-sdk image suggested by Guillaume.
Also, credit to this incredibly useful StackOverflow answer that allows gsutil to use the default service account for authentication.
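For completeness, here is a sketch of wiring that command into instance creation as startup-script metadata; the instance name, bucket and paths below are placeholders:

gcloud compute instances create my-cos-instance \
    --image-family cos-stable --image-project cos-cloud \
    --metadata startup-script='#! /bin/bash
docker run -v /host/path:/container/path \
    --entrypoint gsutil voyz/gsutil_wrap \
    cp gs://bucket/path /container/path'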
The startup-script is the correct place to do this. And YES, COS lacks some useful tools.
BUT you can run containers! And, for example, the Google Cloud SDK container!
So, add this startup-script in the VM metadata:
key -> startup-script
value ->
docker run -v /local/path/to/copy/files:/dummy/container/path \
--entrypoint gsutil google/cloud-sdk \
cp gs://your_bucket/path/to/file /dummy/container/path
Note: the startup script is run as root. Perform a chmod/chown in your startup script if you need to change the file access mode.
Let me know if you need more explanation of this command line.
Of course, with a fresh COS image, the startup time is quite long (pulling the container image and extracting it).
To reduce the startup time, you can "bake" your image. I mean, start with a COS image, download/install what you want on it (or only perform a docker pull of the google/cloud-sdk container) and create a custom image from it.
This way, all the required dependencies will be present on the image and the boot will be quicker.
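A rough sketch of that baking step, assuming the prepared instance is called my-cos-template in zone us-central1-a (both names are placeholders):

# on the instance: pre-pull the image so it is baked into the boot disk
docker pull google/cloud-sdk
# from your workstation: stop the instance and create a custom image from its boot disk
gcloud compute instances stop my-cos-template --zone us-central1-a
gcloud compute images create my-baked-cos --source-disk my-cos-template --source-disk-zone us-central1-a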
I have changed the root volume size of my instance through the AWS Console and the change is reflected there.
When I log into my Ubuntu machine and run 'fdisk -l', the previous disk capacity is still shown.
Am I missing any other additional steps here?
After you increase the size of an EBS volume, you must use file system–specific commands to extend the file system to the larger size.
You can extend the partition using the growpart command and then resize the file system using the resize2fs command.
Please refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
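A minimal sketch of those two commands, assuming the root device is /dev/xvda with the root file system on partition 1 (check with lsblk) and an ext4 file system:

# grow partition 1 of /dev/xvda to fill the enlarged volume
sudo growpart /dev/xvda 1
# grow the ext4 file system to fill the partition (use xfs_growfs for XFS)
sudo resize2fs /dev/xvda1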
I have an EC2 instance with a 20GB root volume. I attached an additional 200GB volume for partitioning to comply with DISA/NIST 800-53 STIG by creating separate partitions for directories such as /home, /var, and /tmp, as well as others required by company guidelines. Using Red Hat Enterprise Linux 7.5. I rarely ever do this and haven't done it for years so I'm willing to accept stupid solutions.
I've followed multiple tutorials using various methods and get the same result each time. The short version (from my understanding) is that the OS cannot access certain files/directories on these newly mounted partitions.
Example: When I mount say /var/log/audit, the audit daemon will fail.
"Job for auditd.service failed because the control process exited with error code. See..."
systemctl status auditd says "Failed to start Security Auditing Service". I am also unable to log in via public key when I mount /home, but these types of problems go away when I unmount the new partitions. journalctl -xe reveals "Could not open dir /var/log/audit (Permission denied)".
Permissions for each dir are:
drwxr-xr-x root root /var
drwxr-xr-x root root /var/log
drws------ root root /var/log/audit
Which is consistent with the base OS image, which DOES work when the partition isn't mounted.
What I did (the full flow is sketched below):
-Created an EC2 instance with 2 EBS volumes
-Partitioned the new volume /dev/xvdb
-Formatted the partitions as ext4
-Created /etc/fstab entries for the partitions
-Mounted the partitions at a temporary place in /mnt, then copied the contents using rsync -av <source> <dest>
-Unmounted the new partitions and updated fstab to reflect the actual mount locations, e.g. /var/log/audit
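For reference, a rough sketch of that flow for one partition, assuming /dev/xvdb1 is the new partition destined for /var/log/audit (device names, sizes and mount points are placeholders):

# partition and format (destroys anything on /dev/xvdb)
sudo parted --script /dev/xvdb mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/xvdb1
# stage the copy
sudo mkdir -p /mnt/staging
sudo mount /dev/xvdb1 /mnt/staging
sudo rsync -av /var/log/audit/ /mnt/staging/
# switch over
sudo umount /mnt/staging
echo '/dev/xvdb1 /var/log/audit ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /var/log/audit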
I've tried:
-Variations such as different disk utilities (e.g. fdisk, parted)
-Using different partition schemes (GPT, DOS [default], Windows basic [default for the root volume, not sure why], etc.)
-Using the root volume from an EC2 instance, detaching it, attaching it to another instance as a 2nd volume, and only repartitioning
Ownership, permissions, and directory structures are exactly the same, as far as I can tell, between the original directory and the directory created on the new partitions.
I know it sounds stupid, but did you try restarting the instance after the new mounts?
The reason I suggest this is that Linux caches directory/file path-to-inode mappings. When you change a mount, I am not sure whether that cache is invalidated, and that could possibly be the reason for the errors.
Also, though it is about Ubuntu, have a look at: https://help.ubuntu.com/community/Partitioning/Home/Moving
I'm using CDH 4.3.0 and I have mounted HDFS using FUSE on my edge node. The FUSE mount point automatically changes the ownership from root:root to hdfs:hadoop.
When I try to export it over NFS, it throws a permissions error at me. Can anyone help me get around this? I've read somewhere that this works only in kernel versions 2.6.27 and above, and mine is 2.6.18... Is this true?
my mount command gives this output for hdfs fuse:
fuse on /hdfs-root/hdfs type fuse (rw,nosuid,nodev,allow_other,allow_other,default_permissions)
my /etc/exports looks like this:
/hdfs-root/hdfs/user *(fsid=0,rw,wdelay,anonuid=101,anongid=492,sync,insecure,no_subtree_check,no_root_squash)
my /etc/fstab looks like this:
hadoop-fuse-dfs#hdfs:: /hdfs-root/hdfs fuse allow_other,usetrash,rw 2 0
:/hdfs-root/hdfs/user /hdfsbkp nfs rw
The first line is for FUSE, and the second line mounts the NFS export of the HDFS path at /hdfsbkp.
When I run mount -a, I get the following error:
"mount: :/hdfs-root/hdfs/user failed, reason given by server: Permission denied"
Also, I tried to change the ownership of the FUSE mount to root:root, but it wouldn't let me do that either... By the way, we are using Kerberos as the authentication method to access Hadoop.
Any help is really appreciated!!!
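A few commands that may help narrow down whether the export itself is the problem, assuming the NFS server and the NFS client are the same edge node (replace localhost otherwise):

# re-read /etc/exports and list what is actually being exported
sudo exportfs -ra
sudo exportfs -v
# ask the NFS server which exports it advertises
showmount -e localhost
# try the NFS mount by hand with verbose output
sudo mount -v -t nfs localhost:/hdfs-root/hdfs/user /hdfsbkp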
If I try to take a snapshot with
rhc snapshot save -a django
it does not save all the data and code on the server.
Here is the link of my app
http://django-appspot.rhcloud.com
It is running on:
Django-1.5.1
python-2.7
Mysql
The size without /tmp, /ssh, /sandbox is 866M.
I think I am exceeding the disk quota.
Currently I am unable to back up the media folder. Is there any way around this?
It is solved:
https://www.openshift.com/forums/openshift/rhc-snapshot-not-saving-all-the-data#comment-33800
Use WinFTP to download the backup files
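If a command line is preferred over a GUI client, the same download can be done with scp over the gear's SSH access. A sketch, assuming the gear's SSH URL is the placeholder below (take the real one from the app's details in the OpenShift console or the rhc client) and that the data lives in the standard app-root/data directory:

# UUID@... is a placeholder for the gear's SSH URL
scp -r UUID@django-appspot.rhcloud.com:app-root/data ./django-backup/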