Download File - Google Cloud VM - google-cloud-platform

I am trying to download a copy of my MySQL history to keep on my local drive as a safeguard.
When I select Download File in the browser SSH window, a dropdown menu appears
And I am prompted to enter the file path for the download
But after all the variations I can think of, I keep receiving the following error message:

Download File means that you are downloading a file from the VM to your local computer. Therefore the expected path is a file on the VM.
If instead you want to upload c:\test.txt to your VM, select Upload File and then enter c:\test.txt. The file will be uploaded to your home directory on the VM.
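If the goal is just to get the MySQL history onto your local drive, an alternative sketch is to copy it down with gcloud compute scp from your local machine (the VM name, the zone, and the assumption that the history lives in ~/.mysql_history are placeholders, adjust to your setup):

# Run from your local machine, not from inside the VM
gcloud compute scp your-vm-name:~/.mysql_history ./mysql_history_backup --zone=your-zone

In the Download File dialog itself, the path should likewise be a path on the VM, for example /home/<your-user>/.mysql_history.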

Related

GCP: copy files from VM to local machine

I'm trying to copy files from my VM to my local computer.
I can do this with the standard command
sudo gcloud compute scp --recurse orca-1:/opt/test.txt .
However, when downloading the log files, they transfer but they're empty (empty files are created with the same names).
I'm also unable to use the Cloud Shell 'Download' UI button because it gives "No such file" despite the absolute file path being correct (cat /path returns the data).
I assume it's somehow a permissions issue with the log files?
Thanks for the replies to my thread above; I figured out it was a permissions issue on my files.
Interestingly, the first time I ran the commands it did not throw any errors or permission errors: it downloaded all the expected files, but they were empty. When testing again, it threw permission errors. I then modified the files in question to have public read permissions, and they downloaded successfully.
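For anyone hitting the same empty-file behaviour, a rough sketch of that fix (the /opt/test.txt path is just the example from above; adjust for your own files):

# On the VM: check who can read the file
ls -l /opt/test.txt
# Option 1: make it world-readable, as described above
sudo chmod o+r /opt/test.txt
# Option 2: copy it somewhere your SSH user owns and download that copy instead
sudo cp /opt/test.txt /tmp/test.txt && sudo chown $USER /tmp/test.txt

# Then, from your local machine, copy it down as before
gcloud compute scp orca-1:/tmp/test.txt .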

Why can't my GCP script/notebook find my file?

I have a working script that finds the data file when it is in the same directory as the script. This works both on my local machine and Google Colab.
When I try it on GCP, though, it cannot find the file. I tried 3 approaches:
PySpark Notebook:
Upload the .ipynb file, which includes a wget command. This downloads the file without error, but I am unsure where it saves it to, and the script cannot find the file either (I assume because I am telling it that the file is in the same directory, and presumably wget on GCP saves it somewhere else by default).
PySpark with bucket:
I did the same as the PySpark notebook above, but first I uploaded the dataset to the bucket and then used the two links provided in the file details when you click the file name inside the bucket on the console (neither worked). I would like to avoid this, though, as wget is much faster than downloading on my slow wifi and then re-uploading to the bucket through the console.
GCP SSH:
Create cluster
Access VM through SSH.
Upload .py file using the cog icon
wget the dataset and move both into the same folder
Run the script using python gcp.py
This just gives me an error saying the file was not found.
Thanks.
As per your first and third approaches: if you are running PySpark code on Dataproc, irrespective of whether you use a .ipynb file or a .py file, please note the points below:
If you use the wget command to download the file, it will be downloaded to the current working directory where your code is executed.
When you try to access the file through the PySpark code, it will look in HDFS by default. If you want to access the downloaded file from the current working directory, use the file:/// URI with the absolute file path.
If you want to access the file from HDFS, you have to move the downloaded file to HDFS first and then access it from there using an absolute HDFS file path. Please refer to the example below:
hadoop fs -put <local file_name> </HDFS/path/to/directory>
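For example, a rough end-to-end sketch on the Dataproc master node (dataset.csv and the /data HDFS directory are placeholder names, not from the original question):

# wget saves the file into the current working directory
wget https://example.com/dataset.csv
# Option 1: read it with the local-file URI from your PySpark code, e.g.
#   spark.read.csv("file:///home/your_user/dataset.csv")
# Option 2: copy it into HDFS and use the HDFS path instead
hadoop fs -mkdir -p /data
hadoop fs -put dataset.csv /data/
#   spark.read.csv("hdfs:///data/dataset.csv")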

How can I add a file through Google Cloud SSH? I need to host a website and want to add an .html file from the SSH command line

I am going to host a website and need to add a file (a simple HTML file) from the SSH command line to the location /var/www/html.
Based on the file location path, I'm assuming you're running Linux (most probably Debian, since it's the default for many GCP VMs).
When you connect to your VM via SSH (it doesn't matter which terminal software you use, or whether you use the SSH button in GCP's console, which opens a new window), you can create and edit files right there.
For small files (like an HTML page) you can use nano.
When you log in to your instance, create the file with nano (if it's not there):
sudo nano /var/www/html/index.html - this will open nano and you will see an empty file. If index.html already existed, you will see its contents.
I assume you already have the file ready on your local computer, so just open it with a text editor and copy its contents into nano's window (ctrl+c & ctrl+v work).
Next, save the file with ctrl+o and close nano with ctrl+x.
Now reload the web server service (let's assume you use nginx) with sudo service nginx reload.
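Alternatively, if the file already exists on your local computer, you could skip the copy-paste step and upload it directly; a sketch, assuming placeholder instance and zone names:

# From your local machine: copy index.html to your home directory on the VM
gcloud compute scp index.html your-vm-name:~ --zone=your-zone
# Then, on the VM: move it into place and reload the web server
sudo mv ~/index.html /var/www/html/index.html
sudo service nginx reload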

Google Cloud Storage - Failed to upload a file using Chrome

I'm getting an error message "Failed to upload a file" when uploading from Chrome (the same error whether I'm using drag and drop or the file selector). There is no detail in the error message, so I have no idea how to fix this.
I was able to upload the same file from the gcloud shell using this command:
gsutil cp <file> gs://<bucket>
I also checked the following:
Permissions - should not be an issue, because I was able to upload from the shell. Also, I'm the Storage Admin
Checked the Google Cloud Log Viewer - I don't see any entries for the failed attempts, only the successful one that I initiated from the shell
I tried to upload from Chrome on a Mac and from a different Chrome on a Linux machine, and I get the same error
This is an old bucket and I have uploaded from the browser many times in the past
Any idea what can be causing this issue?
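Not a definitive answer, but a couple of hedged command-line checks that may help narrow down whether anything about the bucket itself is off (bucket and file names are placeholders):

# Confirm the CLI upload still works and keep the debug output for comparison
gsutil -D cp <file> gs://<bucket> 2> gsutil_debug.txt
# Inspect the bucket's configuration (ACLs, retention, and so on)
gsutil ls -L -b gs://<bucket>
# Check whether a CORS policy is set on the bucket
gsutil cors get gs://<bucket>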

not able to download files from azure blob

Hi, I am using fileUris in the Azure Custom Script Extension section of my template, but the file is not downloaded to the VM when launching the VM from the template.
It throws following error:
VM has reported a failure when processing extension 'customScript'. Error message: "Enable failed: processing file downloads failed: failed to download file[1]: failed to download file: unexpected status code: got=404 expected=200".
To get more insight into the issue, you can check the following log files on your VM:
- /var/log/azure/custom-script/handler.log
- /var/log/waagent.log
Your files are downloaded to a path like /var/lib/waagent/custom-script/download/0/ and the command output is saved to stdout and stderr files in this directory. Please read these files to find the output from your script.
You can find the logs for the extension at /var/log/azure/custom-script/handler.log.
Please open an issue on this GitHub repository if you encounter problems that you could not debug with these log files.
Source: https://github.com/Azure/custom-script-extension-linux
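For instance, a quick way to inspect those locations on the VM (the 0 directory is just the typical first download sequence number and may differ):

# Extension handler log
sudo tail -n 100 /var/log/azure/custom-script/handler.log
# Provisioning agent log
sudo tail -n 100 /var/log/waagent.log
# Downloaded files plus the stdout/stderr from your script
sudo ls -l /var/lib/waagent/custom-script/download/0/
sudo cat /var/lib/waagent/custom-script/download/0/stdout
sudo cat /var/lib/waagent/custom-script/download/0/stderr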
Please check whether your repository is private or public on GitHub (or whichever repository host you use).
Once you change it to public, it will let you download the files from the repository.
For storage private networking, I needed to add the subnet of the VM to the storage account firewall and enable the "Microsoft.Storage" service endpoint.
The message means that the waagent service was not able to download a file using the URL provided in the config file. There could be many reasons for that. I suggest testing that URL manually from the VM to see if it can be downloaded at all.
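A minimal way to do that check from inside the VM (the URL below is a placeholder for whatever your fileUris entry points to):

# A 404 here matches the extension error; a 200 means the VM can fetch the file
curl -I "https://<storage-account>.blob.core.windows.net/<container>/<file>"
# Or try to download it end to end
wget -O /tmp/testfile "https://<storage-account>.blob.core.windows.net/<container>/<file>"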