Google Cloud Storage - Failed to upload a file using Chrome

I'm getting an error message "Failed to upload a file" when uploading from Chrome (the same error whether I'm using drag and drop or the file selector). There is no detail in the error message, so I have no idea how to fix this.
I was able to upload the same file from the gcloud shell using this command:
gsutil cp <file> gs://<bucket>
I also checked the following:
Permissions - should not be an issue, because I was able to upload from the shell. Also, I'm the Storage Admin.
Checked the Google Cloud Log Viewer - I don't see any entries for the failed attempts, only the successful one that I initiated from the shell.
I tried to upload from Chrome on a Mac and from a different Chrome on a Linux machine, and I get the same error.
This is an old bucket and I have uploaded from the browser many times in the past.
Any idea what could be causing this issue?
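Since the console hides the underlying error, one hedged way to surface it is to call the Cloud Storage JSON API upload endpoint directly with your own access token; the HTTP status and error body it returns often point at the cause. A minimal sketch (bucket and file names are placeholders):

# Simple media upload via the JSON API; the response body will contain the actual error, if any
curl -X POST --data-binary @<file> \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/octet-stream" \
  "https://storage.googleapis.com/upload/storage/v1/b/<bucket>/o?uploadType=media&name=<file>"

If that request succeeds while the browser upload still fails, the problem is more likely on the browser side (an extension, proxy, or filter) than with the bucket or your permissions.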

Related

GCP: copy files from VM to local machine

I'm trying to copy files from my VM to my local computer.
I can do this with the standard command
sudo gcloud compute scp --recurse orca-1:/opt/test.txt .
However, when downloading the log files they transfer but they're empty (empty files are created with the same names).
I'm also unable to use the Cloud Shell 'Download' UI button, because it gives "No such file" despite the absolute file path being correct (cat /path returns the data).
I understand it's somehow a permissions issue with the log files?
Thanks for the replies to my thread above; I figured out it was a permissions issue on my files.
Interestingly, the first time I ran the commands it did not throw any permission errors: it downloaded all the expected files, but they were empty. When I tested again, it did throw permission errors. I then modified the files in question to have public read permissions, and the download succeeded.
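For anyone hitting the same thing, a minimal sketch of that workaround (paths are illustrative; the thread only confirms the cause was file permissions): make the files readable by the account you connect with, or stage a readable copy, then run the scp again.

# On the VM: give read access to the file you want to pull (illustrative path)
sudo chmod a+r /opt/test.txt
# Or stage a world-readable copy and pull that instead
sudo cp /opt/test.txt /tmp/test.txt && sudo chmod a+r /tmp/test.txt
# From the local machine
gcloud compute scp orca-1:/tmp/test.txt .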

How to force download of files from a Google Storage bucket instead of opening them in the browser?

I have some audio files in a Google Cloud Storage bucket, and I am serving links to those files on a WordPress website.
How do I force those files to download instead of playing in the browser?
Adding &response-content-disposition=attachment; to the end of the URL doesn't work.
I tried in gsutil: gsutil setmeta -h 'Content-Disposition:attachment' gs://samplebucket/*/*.mp3
I get the error
CommandException: Invalid or disallowed header (u'content-disposition).
Only these fields (plus x-goog-meta-* fields) can be set or unset:
[u'cache-control', u'content-disposition', u'content-encoding', u'content-language', u'content-type']
As pointed out by robsiemb, I had to invoke these commands under Google Cloud Shell. In my case, the Windows shell turned out to be the culprit.
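For reference, a minimal sketch of the working approach from Cloud Shell (the bucket path is taken from the question; double quotes keep the shell from mangling the header):

# Set Content-Disposition on the existing objects so browsers download them rather than play them
gsutil -m setmeta -h "Content-Disposition:attachment" "gs://samplebucket/*/*.mp3"
# Spot-check that the metadata took effect on one object (object path is a placeholder)
gsutil stat gs://samplebucket/<folder>/<track>.mp3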

Error when deploying with CodeDeploy on AWS EC2

Error: Script at specified location: /scripts/execute-deploy.sh failed with error Errno::ENOENT with message No such file or directory -
/opt/codedeploy-agent/deployment-root/0e164065-68f3-4cac-b540-6b70eaea7b0d/d-RSJV81S50/deployment-archive/scripts/execute-deploy.sh
Project on Github
I am trying to upload projects to an AWS ec2 instance, build them, and deploy them.
Right now, you can see the structure in the picture below.
I checked that the .zip file is saved without error in s3.
An error like this occurs while it's building in CodeDeploy (see the error message above).
I tried googling, I tried recreating the CodeDeploy application, and I tried searching. Nothing has worked so far.
It says it could not locate the file, but there is actually a file in the directory.
This is my appspec.yml:
I really want to find a solution. Any help will be appreciated. I've been trying to solve it by myself for 4 days now.
Have you tried to manually execute your deployment in CodeDeploy applications via the AWS Console?
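Beyond that, since the appspec.yml itself isn't reproduced above, here is a hedged checklist that usually narrows this down (the deployment-root path comes from the error message; the zip layout below is an assumption): this ENOENT generally means the script is not at the path the appspec hooks point to inside the unpacked archive, which commonly happens when the bundle is zipped with an extra top-level folder.

# On the instance: see what CodeDeploy actually unpacked for this deployment
ls -l /opt/codedeploy-agent/deployment-root/0e164065-68f3-4cac-b540-6b70eaea7b0d/d-RSJV81S50/deployment-archive/
# If scripts/ sits one level deeper than appspec.yml expects, rebuild the bundle from inside
# the project root so appspec.yml and scripts/ are at the archive root (directory name is illustrative)
cd my-project && zip -r ../app.zip .
# Also make sure the hook script is executable before bundling
chmod +x scripts/execute-deploy.sh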

Not able to download files from Azure Blob Storage

Hi, I'm using fileUris in the Azure Custom Script Extension of a template, and I'm not able to download the file onto the VM while launching the VM from the template.
It throws following error:
VM has reported a failure when processing extension 'customScript'. Error message: "Enable failed: processing file downloads failed: failed to download file[1]: failed to download file: unexpected status code: got=404 expected=200".
To get more insight into the issue, you can check the following log files on your VM:
- /var/log/azure/custom-script/handler.log
- /var/log/waagent.log
Your files are downloaded to a path like: /var/lib/waagent/custom-script/download/0/ and the command output is saved to stdout and stderr files in this directory. Please read these files to find out output from your script.
You can find the logs for the extension at /var/log/azure/custom-script/handler.log.
Please open an issue on this GitHub repository if you encounter problems that you could not debug with these log files.
Source: https://github.com/Azure/custom-script-extension-linux
Please check whether your repository is private or public on GitHub (or whatever repository host you use).
Once you change it to public, it will let you download the file from the repository.
For storage private networking, I needed to add the subnet of the VM to the firewall and enable the "Microsoft.Storage" service endpoint.
The message means that the waagent service was not able to download a file using the URL provided in the config file. There could be many reasons for that. I suggest you test that URL manually from the VM to see whether it can be downloaded at all.
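A quick, hedged way to do that manual test from inside the VM (the URL below is a placeholder for whatever is in fileUris), together with the extension log that records the exact URL and status code it got:

# Expect HTTP/1.1 200; a 404 usually means a wrong path or container name, a missing or
# expired SAS token for a private blob, or a private GitHub repository
curl -I "https://<storageaccount>.blob.core.windows.net/<container>/<script>.sh"
# The extension's own log shows each download attempt and its result
sudo tail -n 100 /var/log/azure/custom-script/handler.log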

gcloud job can't access my files, whether they are in GCS or in my Cloud Shell

I'm trying to run my machine learning code (on images) using TensorFlow in Google Cloud ML. However, it seems the submitted job can't access my files in my Cloud Shell or in GCS. Even though it works fine on my local machine, I get the following error once I submit my job using the gcloud command from the Cloud Shell:
ERROR 2017-12-19 13:52:28 +0100 service IOError: [Errno 2] No such file or directory: '/home/user/pores-project-googleML/trainer/train.txt'
This path definitely exists in the Cloud Shell, and I can check it when I type:
ls /home/user/pores-project-googleML/trainer/train.txt
I tried putting my file train.txt in GCS and accessing it from my code (by specifying the path gs://my_bucket/my_path), but once the job was submitted, I got a 'No such file or directory' error with the corresponding path.
To check where the job I submitted using gcloud is running, I added print(os.getcwd()) at the beginning of my Python code trainer/task.py, which printed /user_dir in the logs. I couldn't find this path using the Cloud Shell, nor in GCS. So my question is: how can I know on which machine my job is running? If it's in a container somewhere, how can I access my files from it using the Cloud Shell and GCS?
Before all of this, I successfully completed the 'Image Classification using Flowers Dataset' tutorial.
The command I used to submit my job is:
gcloud ml-engine jobs submit training $JOB_NAME --job-dir $JOB_DIR --packages trainer-0.1.tar.gz --module-name $MAIN_TRAINER_MODULE --region us-central1
where:
TRAINER_PACKAGE_PATH=/home/user/pores-project-googleML/trainer
MAIN_TRAINER_MODULE="trainer.task"
JOB_DIR="gs://pores/AlexNet_CloudML/job_dir/"
JOB_NAME="census$(date +"%Y%m%d_%H%M%S")"
The regular Python IO library is not able to access files on GCS. Instead, you need to use the GCS Python client or the gsutil CLI to access GCS files.
Note that TensorFlow itself has native support of GCS (i.e., it can read GCS files directly).
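A minimal sketch of that workflow, assuming the trainer accepts the data path as a flag (the --train-file flag name is hypothetical, and the GCS data path is illustrative): stage train.txt in the bucket with gsutil, pass the gs:// path through to the module after the bare --, and open it inside task.py with a GCS-aware reader (for example TensorFlow's tf.gfile.Open) rather than plain open(), which only sees the local filesystem of the training container (hence the /user_dir working directory).

# Stage the data where the training service can reach it
gsutil cp /home/user/pores-project-googleML/trainer/train.txt gs://pores/AlexNet_CloudML/data/train.txt
# Everything after the bare -- is passed as arguments to trainer.task
gcloud ml-engine jobs submit training "$JOB_NAME" \
  --job-dir "$JOB_DIR" \
  --packages trainer-0.1.tar.gz \
  --module-name "$MAIN_TRAINER_MODULE" \
  --region us-central1 \
  -- \
  --train-file gs://pores/AlexNet_CloudML/data/train.txt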