In the SSH session, I've tried using gsutil -m cp filename* Desktop to copy a file from the VM instance to my computer's desktop, following Google Cloud's own example in its documentation. I got a message saying the file was copied successfully, but nothing about downloading anything, and I don't see the file on my desktop. I've also tried specifying the full desktop path instead of just 'Desktop', but SSH does not recognize the path.
Is there a way I can directly download files from the VM Instance to my desktop without having to go through a Google Cloud bucket?
In my opinion this is the easiest way: copy the files directly to your local machine in a few steps.
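The answer above does not spell out its steps, but one common way to copy a file straight from the VM to your desktop, without a bucket, is gcloud compute scp, run from your local machine rather than from the browser SSH session. A minimal sketch, assuming a VM named my-vm in zone us-central1-a and hypothetical file paths:
gcloud compute scp my-vm:/home/your_user/filename.txt ~/Desktop/ --zone=us-central1-a
gcloud compute scp --recurse my-vm:/home/your_user/results ~/Desktop/results --zone=us-central1-a
The --recurse flag copies a whole directory, and both forms reuse the SSH keys that gcloud compute ssh sets up.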
When I copy files to the bucket I use the command gsutil -u absolute-bison-xxxxx cp $FILE gs://bucket_1, which works fine.
I am running downstream programs whose output I want saved to the same bucket, but when I, for instance, pass -output gs://bucket_1/file.out to specify where the output should go, the program does not recognise the bucket as a place to store the output. How do I set the path to the bucket?
From the command you posted in your question, I think you are using the Cloud SDK, as described in the documentation, to copy files from your computer to your bucket.
If that is the case, following Kolban’s comment, you could use Cloud Storage FUSE to mount your Google Cloud Storage bucket as a file system.
In order to use Cloud Storage FUSE, you should be running an OS based on Linux kernel version 3.10 or newer, or Mac OS X 10.10.2, as stated in the installation notes; you can also run it on Google Compute Engine VMs.
You can begin by installing it, following the installation notes and this YouTube tutorial.
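As an illustration of the workflow, here is a minimal sketch that mounts the bucket from the question and lets a program write its output into it; the mount-point name is hypothetical, and gcsfuse must already be installed on the machine running the downstream programs:
mkdir -p ~/mnt/bucket_1
gcsfuse bucket_1 ~/mnt/bucket_1
# downstream programs can now treat the bucket like a local folder, e.g. -output ~/mnt/bucket_1/file.out
fusermount -u ~/mnt/bucket_1   # unmount when finished (Linux; use umount on macOS)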
For more information about using Cloud Storage FUSE, or to file an issue, go to the Google Cloud GitHub repository.
For a fully-supported file system product in Google Cloud, see Filestore.
You could use Filestore as recommended in the documentation; to use it from your own machine, you could follow this guide.
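If you go the Filestore route, mounting a share on a client is a standard NFS mount. A sketch with hypothetical values for the Filestore IP address and share name; the client needs the NFS tools installed (e.g. nfs-common on Debian/Ubuntu), and reaching it from outside Google Cloud requires network connectivity such as a VPN:
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
sudo mount <FILESTORE-IP>:/<SHARE-NAME> /mnt/filestore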
I have a Linux EC2 instance on AWS, and my local machine runs Windows 10 (64-bit).
I want to download some files or folders from the EC2 instance to my local Windows machine.
I am not sure whether this is possible; if yes, how can I do it?
Thanks.
I tried this and it worked for me.
Download MobaXterm from https://mobaxterm.mobatek.net/; it's an enhanced terminal for Windows.
You can link your EC2 instance to it directly via SSH; it's pretty simple to set up. Just follow the instructions they've given. Once linked, it's super easy to export, import, and create files and folders, all via MobaXterm.
(Screenshot: files and folders shown in MobaXterm.)
Here is the command to copy a folder from the Linux machine to Windows.
First, install PuTTY (e.g. putty-64bit-0.74-installer.msi) on your Windows machine.
The following command copies a folder (e.g. DokerAutomationResult) from the AWS Linux machine to the Windows machine:
pscp -r ubuntu@xx.xxx.xx.xx:/home/ubuntu/DokerAutomationResult ./
[pscp -r ubuntu@(ipAddress):(locationOfLinuxFolder) (locationToCopyToInWindows)]
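If your instance requires key-based login (the usual case on AWS), pscp also accepts a PuTTY-format private key. A sketch with hypothetical key and path names; the .ppk file is produced from your AWS .pem key with PuTTYgen:
pscp -i C:\keys\my-key.ppk -r ubuntu@xx.xxx.xx.xx:/home/ubuntu/DokerAutomationResult C:\Users\me\Downloads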
For a better understanding, see:
https://www.youtube.com/watch?v=Sc0f-sxDJy0&ab_channel=Liv4IT
Yes, it is possible to download files from an EC2 Linux instance to your local system.
You can use scp -i <key> <user>@<ip-address>:<path-to-file> . to download the file you want.
The trailing . downloads the file into your current directory on the local system.
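For example, a sketch with a hypothetical key file, user name, address, and file path (run it from the local machine; on Windows 10 the built-in OpenSSH client or Git Bash provides scp):
scp -i ~/keys/my-key.pem ubuntu@xx.xxx.xx.xx:/home/ubuntu/results.csv .
scp -r -i ~/keys/my-key.pem ubuntu@xx.xxx.xx.xx:/home/ubuntu/output ./output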
I have an annotated dataset. I am trying to use this dataset on a Linux virtual machine in Google Cloud. Please help with the steps.
If you have the gsutil tool installed on your VM (you can install it easily by following this documentation), you can download the files to your VM using the following command:
gsutil cp -r gs://<YOUR-BUCKET-NAME>/<YOUR-FOLDER-PATH>/* <DESTINATION-DIRECTORY>
This is the most straightforward approach; however, if you need to automate the process, you could use the Cloud Storage Client Libraries and do it from code.
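As a concrete illustration, here is a sketch with hypothetical bucket and folder names; the -m flag parallelises the copy, which helps when a dataset consists of many small annotation files:
gsutil -m cp -r gs://my-annotated-data/train/* ~/dataset/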
I made a fairly standard deployment of the Single-Node File Server on Google Cloud. It works fine, and I can mount the file server's disk from other instances.
However, now I want to add another disk to the same file server. The documentation says I should use the following command to add another file system:
zfs create storagepool_name/file_system_name
I tried to run this command on the VM that is acting as the file server, but I get the error that the command zfs is not found.
Now I can probably install zfs myself, but I wonder whether that will somehow collide with whatever the deployment has already set up on the machine.
Is installing and setting up zfs myself a problem? If so, how do I add another disk to the file server?
I figured out what went wrong with my setup of the Single-Node File Server.
First, the default deployment settings seem to choose xfs as the file system instead of zfs. The file server I had was using xfs, as can be seen in the metadata of the instance itself.
Secondly, as user John Hanley commented on my question, even with zfs selected as the file system, only the root user has its PATH variable set up properly to use the zfs command directly.
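For reference, assuming a deployment that was created with zfs selected, a minimal sketch of adding a file system from the file-server VM itself (the pool and file-system names are the placeholders from the documentation quoted in the question):
sudo -i                                        # become root, whose PATH includes the zfs tools
zpool list                                     # confirm the storage pool exists and check its name
zfs create storagepool_name/file_system_name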
I just started using the cloud to do training for my deep learning program. For now, every time I modify my local .py file I have to remove the old one on the remote virtual machine on Google Cloud Platform and upload the new one. I am just curious whether there is a way to open and edit the .py file on the remote virtual machine directly from the command line? That would be much more efficient.
Thank you very much!
To edit a file on a machine you can SSH into, there are many potential solutions; a short example sketch follows the list below.
Use scp to copy files, e.g. scp mylocalfile user@my-host-address:myfolder
Use ssh mounting solutions: How do you edit files over SSH?
Edit using nano your-file-to-edit (my favorite) https://www.howtogeek.com/howto/42980/the-beginners-guide-to-nano-the-linux-command-line-text-editor/
Edit using vi or vim http://vim.wikia.com/wiki/Tutorial
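For instance, a sketch of the nano route on a Compute Engine VM, with a hypothetical instance name, zone, and file path:
gcloud compute ssh my-vm --zone=us-central1-a
nano ~/training/train.py    # edit in place; Ctrl+O saves, Ctrl+X exits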