I have created a standard persistent disk and successfully mounted it in Read/Write configuration on a node in my Kubernetes cluster.
I would now like to populate that disk with some content I currently have locally. The scp tool in the gcloud SDK seems like the ideal way to do this.
However, when I run:
gcloud compute scp ~/Desktop/subway-explorer-api/logbooks.sqlite gke-webapp-default-pool-49338587-d78l:/mnt/subway-explorer-datastore --zone us-central1-a
I get:
scp: /mnt/subway-explorer-datastore: Read-only file system
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
My questions are:
What is my error here? Why is the disk being reported as being read-only?
How do I fix it?
Is this indeed a good use of the gcloud scp utility (I got here by following this answer), or is there a better way to do this?
I misidentified the folder the disk was mounted to, and was trying to write to a path that didn't actually exist. The error message led me to misdiagnose this as a permissions problem, when in reality it was operator error.
For further details see this answer on the Unix StackExchange.
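For anyone who hits the same error, a quick sanity check (the device and path below are just the ones from this question and may differ on your node) is to confirm where the disk actually ended up before copying anything:
lsblk                                   # list block devices and their mount points
df -h /mnt/subway-explorer-datastore    # fails if the path doesn't exist; otherwise shows the backing filesystem
mount | grep subway-explorer            # shows the real mount point and whether it is ro or rw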
Related
I'm trying to copy files from my VM to my local computer.
I can do this with the standard command
sudo gcloud compute scp --recurse orca-1:/opt/test.txt .
However, when I download the log files they appear to transfer, but they're empty (empty files are created locally with the same names).
I'm also unable to use the Cloud Shell 'Download' UI button because it reports "No such file" despite the absolute file path being correct (cat /path returns the data).
I assume this is somehow a permissions issue with the log files?
Thanks for the replies to my thread above; I figured out it was a permissions issue on my files.
Interestingly, the first time I ran the commands it did not throw any permission errors -- it downloaded all the expected files, but they were empty. When I tested again, it did throw permission errors. I then gave the files in question public read permissions, and the download succeeded.
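For reference, a minimal sketch of that workaround (the log path and staging directory are assumptions): relax the permissions on the VM, or copy the files into a directory owned by your SSH user, then run the scp from your local machine:
# on the VM (log path is an example)
sudo chmod o+r /var/log/myapp/*.log
# or stage copies under your own user
mkdir -p ~/logs-to-copy
sudo cp /var/log/myapp/*.log ~/logs-to-copy/
sudo chown "$USER" ~/logs-to-copy/*
# then, from the local machine
gcloud compute scp --recurse orca-1:~/logs-to-copy .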
How can one download files from a GCP Storage bucket to a Container-Optimised OS (COS) on instance startup?
I know of the following solutions:
gcloud compute copy-files
SSH through console
SCP
Yet all of these have to be done manually and externally after an instance is started.
There is also cloud-init, yet I can't find any info on how to copy files from a Storage bucket with it. The examples seem to suggest including the content of the files directly in the cloud-init file, which is not something I want to do for security reasons. Is it possible to download files from a Storage bucket using cloud-init?
I considered using a startup script, yet COS lacks CLI tools such as gcloud and gsutil, so I have no way to run such commands in a startup script.
I know I could copy the files manually and then save the image as a boot disk, but I'm hoping there are solutions that avoid having to do so.
Most of all, I'm assuming I'm not asking for something impossible, given that the COS instance setup allows me to specify Docker volumes that I could mount onto the starting container. This seems to suggest I should be able to have some private files on the instance by the time COS attempts to run my image on startup. But how?
Trying to execute a startup-script with a cloud-sdk image and copying the files there, as suggested by Guillaume, didn't work for me for a while, showing this log. Eventually I realised that the cloud-sdk image is 2.41GB when uncompressed and takes over 2 minutes to finish pulling. I tried again with an empty COS instance and the startup script completed successfully, downloading the data from the Storage bucket.
However, a 2.41GB image and over 2 minutes of boot time sound like overkill for downloading a 2KB file, don't they?
I'm glad to see a working solution to my question (thanks Guillaume!) although I'm still wondering: isn't there a nicer way to do this? I feel that this method is even less tidy than manually putting the files on the COS instance and then creating a machine image to use in the future.
Based on Guillaume's answer I created and published a gsutil wrapper image, available as voyz/gsutil_wrap. This way I am able to run a startup-script with the following command:
docker run -v /host/path:/container/path \
--entrypoint gsutil voyz/gsutil_wrap \
cp gs://bucket/path /container/path
It's essentially a copy of what Guillaume suggested, except it is using an image containing only a minimum setup required to run gsutil. As a result it weighs 0.22GB and pulls within 10-20 seconds on average - as opposed to 2.41GB and over 2 minutes respectively for the google/cloud-sdk image suggested by Guillaume.
Also, credit to this incredibly useful StackOverflow answer that allows gsutil to use the default service account for authentication.
The startup-script is the correct place to do this. And yes, COS lacks some useful libraries.
But you can run containers! For example, the Google Cloud SDK container.
So, add this startup-script in the VM metadata:
key -> startup-script
value ->
docker run -v /local/path/to/copy/files:/dummy/container/path \
--entrypoint gsutil google/cloud-sdk \
cp gs://your_bucket/path/to/file /dummy/container/path
Note: the startup script runs as root. Perform a chmod/chown in your startup script if you need to change the file permissions or ownership.
Let me know if you need more explanation of this command line.
Of course, with a fresh COS image, the startup time is quite long (pull the container image and extract it).
To reduce the startup time, you can "bake" your image: start with a COS image, download/install what you want on it (or simply perform a docker pull of the google/cloud-sdk container), and create a custom image from it.
This way, all the required dependencies will already be present on the image and boot will be quicker.
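To put the pieces together, here is one hedged way to set that metadata from the command line (the instance name, zone, host directory, and bucket path are assumptions, and the chmod is only needed per the note above):
gcloud compute instances add-metadata my-cos-vm --zone us-central1-a \
    --metadata startup-script='#! /bin/bash
# pull the file from the bucket onto the host via the cloud-sdk container
docker run -v /var/data:/data \
    --entrypoint gsutil google/cloud-sdk \
    cp gs://your_bucket/path/to/file /data/
# the startup script runs as root; relax permissions if another user needs the file
chmod 644 /var/data/*'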
I am trying to mount a persistent disk with data to a VM to use it in Google Datalab. So far no success, ideally I would like to see my files in the Datalab notebook.
First, I added the disk in VM settings with Read/Write mode.
Second, I ran $lsblk to see what disks there are.
Then tried this: sudo mount -o ro /dev/sdd /mnt/disks/z
I got this error:
wrong fs type, bad option, bad superblock on /dev/sdd,
P.S. I used the disk I want to mount on another VM and downloaded some data onto it. It was formatted as an NTFS disk.
Any ideas?
I needed to create a file system on the disk, as noted in the following document:
$ mkfs.vfat /dev/sdX2
For context, mkpart creates a new partition without creating a new file system on that partition:
Command: mkpart [part-type fs-type name] start end
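For completeness, a hedged sketch of the two options for a disk like this (the device name /dev/sdd is taken from the question and may differ on your VM). Option 2 destroys the existing data, so only use it if the files can be copied over again:
# option 1: mount the existing NTFS filesystem (needs the ntfs-3g driver)
sudo apt-get install -y ntfs-3g
sudo mkdir -p /mnt/disks/z
sudo mount -t ntfs-3g -o ro /dev/sdd /mnt/disks/z

# option 2: reformat as ext4 and mount read/write (ERASES the disk)
sudo mkfs.ext4 -m 0 -F /dev/sdd
sudo mount -o discard,defaults /dev/sdd /mnt/disks/z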
I am trying to copy files from my instance to my local directory using following command
gcloud compute scp <instance-name>:~/<file-name> ~/Documents/
However, it is showing error as mentioned below
$USER/Documents/: Is a directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Copying from local directory to GCE works fine.
I have checked Stanford's tutorial and Google's documentation as well.
I have another instance where there is no issue like this.
I suspect it might be an issue with SSH keys.
What might have gone wrong?
Your command is correct if your source and destination paths are correct
The command as you've posted in your question works for me when copying a file from the Google Compute Engine VM to my local machine.
$ gcloud compute scp vm1:~/.bashrc ~/Documents/
.bashrc 100% 3515 3.4KB/s 00:00
I also tried doing the copy from other side (i.e. from my local machine to GCE VM) and it works:
$ gcloud compute scp ~/Documents/.bashrc vm1:~/temp/
.bashrc 100% 3515 3.4KB/s 00:00
$ gcloud compute scp ~/Documents/.bashrc vm1:~/.bashrc-new
.bashrc 100% 3515 3.4KB/s 00:00
gcloud relies on the scp executable present in your PATH. The arguments you provide to the gcloud scp command are passed through to the scp binary. Assuming your source and destination paths are correct, it should work.
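For what it's worth, the rough plain-scp equivalent (assuming gcloud has already created its default key, and with USER and EXTERNAL_IP as placeholders) looks like this:
scp -i ~/.ssh/google_compute_engine USER@EXTERNAL_IP:~/<file-name> ~/Documents/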
Recursive copying using scp
Based on your particular error message though, I've seen that variation only appear when the source path you're trying to copy from is a directory instead of file. For that particular case, you can pass a --recurse argument (similar to the -r argument supported by regular scp) which will recursively copy all files and directories under the specified directory.
gcloud compute scp --recurse SRC_PATH DEST_PATH
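For example, to pull a whole directory from the VM down to the local machine (instance name, zone, and paths are illustrative):
gcloud compute scp --recurse my-instance:~/logs ~/Documents/logs --zone us-central1-a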
To copy files from the VM to your desktop you can simply SSH into the VM from the browser; in the top right corner there is a settings button where you will find the "Download file" option. Just enter the path of the file.
If it is a folder, first zip the folder and then download it.
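For example, on the VM (the folder name is an assumption):
zip -r myFolder.zip myFolder
# then enter the path to myFolder.zip in the Download file dialog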
Everything was correct, except that I was running these commands in the terminal connected to the GCE instance instead of my local terminal.
oyashi@oyashi-torch-instance:~$ gcloud compute scp oyashi-torch-instance:~/spring1617_assignment1.zip ~/Documents/
/home/oyashi/Documents/: Is a directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
But when I tried the same command in my local terminal, this happened:
oyashi@oyashi:~/Documents$ gcloud compute scp oyashi-torch-instance:~/spring1617_assignment1.zip ~/Documents/
spring1617_assignment1.zip 100% 42KB 42.0KB/s 00:00
Thank you everyone for your comments and help. I know it was a silly mistake on my end, but I posted this answer so that others might learn from my silliness.
If you need to pass the zone and project name as well, you can do it like this (it worked for me):
The instance name is the name you chose in the GCP instances list.
gcloud beta compute scp --project "project_name" --zone "zone_name" instance_name:~jupyter/file_name /home/Downloads
I ran into the same problem. The point is that you should run the scp command from a local terminal, rather than the cloud terminal.
For copying a file from an Ubuntu VM to your local machine:
For example, say you have an instance named bhk.
Run a basic nginx server, copy the files into /var/www/html (nginx's serving directory), and then from your local machine simply run wget <vm's IP>/<your file path>.
For example, if my VM's IP is 1.2.3.4 and I want to copy /home/me/myFolder/myFile, then I simply copy this file into /var/www/html
and then run wget 1.2.3.4/myFile
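A hedged sketch of the whole sequence, using the IP and paths from the example above (this assumes nginx isn't installed yet, and note that anything placed in /var/www/html is readable by anyone who can reach port 80 on the VM):
# on the VM
sudo apt-get install -y nginx
sudo cp /home/me/myFolder/myFile /var/www/html/
# on the local machine
wget 1.2.3.4/myFile
# clean up afterwards
sudo rm /var/www/html/myFile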
This works for me:
gcloud compute scp --project "my-project" ./my-file.zip user#instance-1:~
--project - the Google Cloud project name
my-file.zip - the local file to send to the VM
user - the VM's Linux username
instance-1 - the instance name (VM name)
~ - the destination path on the instance (the user's home directory)
I use the command below to upload a directory from my local machine to a remote directory:
gcloud compute scp --recurse myweb-app/www/* user#instant-name:/var/www/html/sub-sites/myweb-app/www/
I'm using CDH 4.3.0 and I have mounted HDFS using FUSE on my edge node. The FUSE mount point automatically changes the ownership from root:root to hdfs:hadoop.
When I try to export it over NFS, I get a permissions error. Can anyone help me get around this? I've read somewhere that this works only on kernel versions 2.6.27 and above, and mine is 2.6.18... Is this true?
My mount command gives this output for the HDFS FUSE mount:
fuse on /hdfs-root/hdfs type fuse (rw,nosuid,nodev,allow_other,allow_other,default_permissions)
My /etc/exports looks like this:
/hdfs-root/hdfs/user *(fsid=0,rw,wdelay,anonuid=101,anongid=492,sync,insecure,no_subtree_check,no_root_squash)
My /etc/fstab looks like this:
hadoop-fuse-dfs#hdfs:: /hdfs-root/hdfs fuse allow_other,usetrash,rw 2 0
:/hdfs-root/hdfs/user /hdfsbkp nfs rw
The first line is the FUSE mount, and the second line mounts the exported HDFS share over NFS at /hdfsbkp.
When I run mount -a, I get the following error:
"mount: :/hdfs-root/hdfs/user failed, reason given by server: Permission denied"
Also, I tried to change the ownership of the FUSE mount back to root:root, but it wouldn't let me do that either. By the way, we are using Kerberos as the authentication method to access Hadoop.
Any help is really appreciated!