Text Compression when serving from a GitHub Trigger - google-cloud-platform

I'm trying to figure out how to serve my js, css and html as compressed gzip from my Google Cloud Storage bucket. I've set up my static site properly, and also built a Cloud Build Trigger to sync the contents from the repository on push. My problem is that I don't want to have gzips of these files on my repository, but rather just serve them from the bucket.
I might be asking too much for such a simple setup, but perhaps there is a command I can add to my cloudbuild.yaml to make this work.
At the moment it is just this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://my-site.com"]
As far as I'm aware this just syncs the bucket to the repo. Is there another command that could ensure that the aforementioned files are transferred as gzip? I've seen use of gsutil cp, but not within this specific Cloud Build pipeline setup from GitHub.
Any help would be greatly appreciated!

The gsutil setmeta command lets you set metadata on objects that overrides the headers the HTTP server would otherwise send by default, which is handy for the Content-Type and Cache-* options.
gsutil setmeta -h "Content-Encoding: gzip" gs://bucket_name/folder/*
For more info about Transcoding with gzip-uploaded files: https://cloud.google.com/storage/docs/transcoding
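If the goal is to have the objects themselves stored gzipped (so the bucket serves them with Content-Encoding: gzip without keeping .gz files in the repository), one option is gsutil cp with the -z flag, which compresses files with the listed extensions during upload and sets the header for you. As far as I know, rsync only offers transport-side compression (-j/-J), so this sketch swaps rsync for cp and gives up the -d delete behaviour; the bucket name is taken from the question:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "cp", "-r", "-z", "js,css,html", ".", "gs://my-site.com"]
Objects stored this way are still served decompressed to clients that don't send Accept-Encoding: gzip, per the transcoding doc linked above.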

Related

How to use gsutil rsync: log in and download bucket contents to a local directory

I have the following questions.
I was given access to a cloud bucket via my email ID. Now I want to download the whole bucket folder into a local directory on Ubuntu. I installed gsutil from pip.
Is the command correct?
gsutil rsync gs://bucket_name .
The command seems generic; how do I give my Gmail credentials to it? The file is 1 TB in size and I am allowed to download it only once, so I want to get the command right.
The command is correct if you want your current directory to mirror the contents of the bucket (add the -d option if you also want local files deleted when they no longer exist in the bucket). If you merely want to copy, you might want cp -r instead.
Here are the current docs on how to authenticate when running a standalone gsutil. It looks like you just need to run gsutil config.
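A minimal sketch of the whole flow on Ubuntu, using the bucket name from the question (authenticate once, then sync):
gsutil config
# config prints a URL; open it, log in, and paste the authorization code back into the prompt
gsutil -m rsync -r gs://bucket_name .
The -m flag parallelizes the transfer, which helps for something in the 1 TB range.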

How to force download files from a Google Storage bucket instead of opening them in the browser?

I have some audio files in a Google Bucket, and I am serving links to those files on a WordPress website.
How do I force those files to download instead of playing in the browser?
Adding &response-content-disposition=attachment; to the end of the url doesn't work.
I tried this in gsutil: gsutil setmeta -h 'Content-Disposition:attachment' gs://samplebucket/*/*.mp3
I get the error
CommandException: Invalid or disallowed header (u'content-disposition).
Only these fields (plus x-goog-meta-* fields) can be set or unset:
[u'cache-control', u'content-disposition', u'content-encoding', u'content-language', u'content-type']
As pointed out by robsiemb, I had to invoke these commands in Google Cloud Shell. In my case, the Windows shell turned out to be the culprit.
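For reference, the same command with double quotes, which is what a Windows command prompt typically needs since cmd.exe passes single quotes through literally (bucket path copied from the question):
gsutil setmeta -h "Content-Disposition:attachment" gs://samplebucket/*/*.mp3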

File in Storage Bucket loses "Share publicly" permission after automated build

I'm a bit new to Google Cloud and am using a storage bucket to host a static website.
I've integrated automated builds via a build trigger when my master branch gets updated. I can successfully see the changes when I push to GitHub, but when a preexisting file such as index.html gets updated, the file loses the "Share publicly" permission.
I've followed the tutorial below, the only difference being that object permissions are now handled at the individual file level on the platform rather than at the top level for the bucket.
https://cloud.google.com/community/tutorials/automated-publishing-container-builder
This is my cloudbuild.yaml file
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://www.mysite.com"]
If you don't configure the bucket so that all objects in it are publicly readable by default, you'll need to re-apply the permission to each newly uploaded file.
If you know all your updated files need to be publicly readable, you can add the -a option to your rsync command with the canned ACL named "public-read". Your cloudbuild.yaml file would look like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-a", "public-read", "-r", "-c", "-d", ".", "gs://www.mysite.com"]
If you don't want to make all objects publicly readable at once, you'll need to set permissions on a per-object basis by listing objects and applying permissions with the following command:
gsutil acl ch -u AllUsers:R gs://nameBucket/dir/namefile.ext
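Another option, if the bucket isn't using uniform bucket-level access, is to set a default object ACL once on the bucket so that every future upload is publicly readable without touching cloudbuild.yaml; a sketch, with the bucket name from the question:
gsutil defacl ch -u AllUsers:R gs://www.mysite.com
Existing objects keep their current ACLs, so you would still run the acl ch command above (or re-upload) for files already in the bucket.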

How to copy files from a GCS bucket to my local machine

I need to copy files from Google Cloud Storage to my local machine.
I tried this command in the terminal of a Compute Engine instance:
$sudo gsutil cp -r gs://mirror-bf /var/www/html/mydir
That is my directory on the local machine: /var/www/html/mydir.
I get this error:
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
Where is the mistake?
You must first create the directory /var/www/html/mydir.
Then, you must run the gsutil command on your local machine and not in the Google Cloud Shell. The Cloud Shell runs on a remote machine and can't deal directly with your local directories.
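Putting those two points together, a minimal sketch to run from the local machine itself (paths and bucket name taken from the question):
sudo mkdir -p /var/www/html/mydir
sudo gsutil -m cp -r gs://mirror-bf /var/www/html/mydir
The -m flag is optional but speeds up recursive copies by running them in parallel.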
I have had a similar problem and went through the painful process of figuring it out too, so I thought I would provide my step-by-step solution (under Windows, hopefully similar for Unix users) and hope it helps others:
The first thing (as many others have pointed out on various Stack Overflow threads): you have to run a local console (in admin mode) for this to work, i.e. do not use the Cloud Shell terminal.
Here are the steps:
Assuming you already have Python installed on your machine, you will then need to install the gsutil python package using pip from your console:
pip install gsutil
You will then be able to run the gsutil config from that same console:
gsutil config
A .boto file needs to be created; it is what makes sure you have permission to access your storage.
Also note that you are now given a URL, which you need in order to get the authorization code (prompted for in the console).
Open a browser and paste this URL in, then:
Log in to your Google account (ie. account linked to your Google Cloud)
Google asks you to confirm that you want to give GSUTIL access. Click Allow.
You will then be given an authorization code, which you can copy and paste into your console.
Finally you are asked for a project-id:
Get the project ID of interest from your Google Cloud.
To find the IDs, click on "My First Project" in the Cloud Console.
You will then see a list of all your projects and their IDs.
Paste that ID into your console, hit Enter, and there you are! You have now created your .boto file. This should be all you need to be able to play with your Cloud Storage.
Console output:
Boto config file "C:\Users\xxxx\.boto" created. If you need to use a proxy to access the Internet please see the instructions in that file.
You will then be able to copy your files and folders from the cloud to your PC using the following gsutil Command:
gsutil -m cp -r gs://myCloudFolderOfInterest/ "D:\MyDestinationFolder"
Files from within "myCloudFolderOfInterest" should then get copied to the destination "MyDestinationFolder" (on your local computer).
gsutil -m cp -r gs://bucketname/ "C:\Users\test"
I had put an "r" before the file path, i.e. r"C:\Users\test", and got the same error. Removing the "r" made it work for me.
Try prefixing the destination with '.', i.e. ./var:
$sudo gsutil cp -r gs://mirror-bf ./var/www/html/mydir
Or it may be the problem described below:
gsutil cp does not support copying special file types such as sockets, device files, named pipes, or any other non-standard files intended to represent an operating system resource. You should not run gsutil cp with sources that include such files (for example, recursively copying the root directory on Linux that includes /dev ). If you do, gsutil cp may fail or hang.
Source: https://cloud.google.com/storage/docs/gsutil/commands/cp
The syntax that worked for me when downloading to a Mac was
gsutil cp -r gs://bucketname dir Dropbox/directoryname

How can I deploy the ember cli index to s3 without the sha

I'm using ember-cli-deploy and ember-deploy-s3-index.
Following this article, I managed to deploy the index to a bucket with static web hosting and another bucket holding the assets.
I want to automate the deploy process (CI), but there are two problems:
Each deploy adds an index file with a new name (test:b2907fa.html for example), and I need to manually change the index document in my s3 configuration to match the latest deploy.
I need to add permissions to the file on each deploy.
I would like a fixed name for my index file (overriding the existing one on deploy), and for the file to have view permissions by default.
Is this possible?
Thanks.
It turns out you don't need to change the index document.
After deploying, you need to run ember deploy:activate --revision test:b2907fa --environment production and it will change it in the s3 bucket.
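To automate this in CI, a hedged sketch of chaining the two commands, assuming the revision key follows the project:short-sha pattern from the question (test:b2907fa) and that your ember-cli-deploy version accepts these flags:
ember deploy --environment production
ember deploy:activate --revision test:$(git rev-parse --short HEAD) --environment production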
A simpler alternative with no add-ons/dependencies:
Deploying an Ember CLI app is as simple as syncing the contents of the dist/ folder to your server (after building with the --production flag). These files can then be served statically.
Here is a script I wrote to automate my deploy process:
printf "** Deploying application**\n"
cd ~/Desktop/Project/ember_test/censored
printf "\n** Building static files **\n"
ember build --environment=production
printf "\n** Synchronizing distribution folder to frontend.censored.co.za **\n"
rsync -rv ~/Desktop/Project/ember_test/censored/dist frontend@frontend.censored.co.za:/var/www/html/censored --exclude ".*/" --exclude ".*" --delete
printf "\n** Removing production build from local repository **\n"
rm -rv ~/Desktop/Project/ember_test/censored/dist/*
printf "\n** Deployment done. **\n"
This deploys to a Linux server, whereas you want to deploy to S3.
So instead of my third command (rsync) you would use s3cmd to put your folder into S3 (it would probably be an s3cmd put or sync command), as sketched below.
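A hedged sketch of what that step could look like with s3cmd in place of rsync; the bucket name is a placeholder, and --acl-public / --delete-removed are assumptions meant to mirror the public, mirrored behaviour of the rsync line above:
s3cmd sync --acl-public --delete-removed ~/Desktop/Project/ember_test/censored/dist/ s3://your-ember-bucket/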