I have an issue with a Google Cloud Storage bucket. When I request my files from Google Cloud, they come back with the headers private and max-age=0, so nothing gets cached.
I use the console to set the metadata option. I type:
gsutil -m setmeta -r "Cache-Control:public, max-age=3600" gs://bucket/folder*
but it does not work. What should I do? This is a horrible issue for me.
The syntax of the gsutil setmeta command is:
gsutil setmeta -h [header:value|header] ... url...
The following command worked for me:
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://destination/object
-h Specifies a header:value to be added, or header to be removed, from each named object.
You can use wildcards with the command, for example to target all objects:
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://YOUR_BUCKET/**
The metadata applies only to existing objects; as far as I know, you cannot set a rule on the bucket itself that will be applied to objects uploaded in the future.
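If you want newly uploaded objects to carry the header from the start, one option is to set it at upload time with the same -h top-level option on gsutil cp. A minimal sketch, using placeholder bucket and folder names:
gsutil -m -h "Cache-Control:public, max-age=3600" cp -r ./folder gs://bucket/folder
You can check the result on a single object with gsutil stat gs://bucket/folder/OBJECT_NAME (OBJECT_NAME is a placeholder).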
Related
I'm trying to download an entire bucket to my local machine.
I'm aware that the command to do this is:
gsutil -m cp -r "gs://bucket_name/folder_name/" .
However, I'd like to specify exactly where this gets downloaded on my machine due to storage limitations.
Can anyone share any advice regarding this?
Thanks in advance,
Tommy
You can choose where the files are downloaded by adding a destination path as the last argument of the gsutil cp command you are using, for example:
gsutil -m cp -r "gs://bucket_name" "D:\destination_folder"
Can't upload all files with the .css extension in the directory or any sub-directory to a GCS bucket
gsutil -h "Cache-Control:public,max-age=2628000" -h "Content-Encoding:gzip" cp plugins/**.css gs://cdn.test.example.io/wp-content/plugins
The response is:
CommandException: No URLs matched: plugins/**.css
There are CSS files deep in the directory tree. I want to upload every CSS file to the GCS bucket, whether it sits in the plugins folder itself or in any of its sub-folders.
The documentation describes this feature and it works as expected.
For my test, I used Cloud Shell with the latest version of gsutil:
> gsutil -v
gsutil version: 4.52
Check your version, and update it to see if that solves your issue.
However, you can also do the following to upload only the files of the plugins directory:
cd plugins
gsutil -h "Cache-Control:public,max-age=2628000" -h "Content-Encoding:gzip" cp **.css gs://cdn.test.example.io/wp-content/plugins
If you want to scan every sub-directory and upload the CSS files found there, you can use this command:
gsutil cp ./**/*.css gs://cdn.test.example.io/wp-content/plugins
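One hedged note: the -h "Content-Encoding:gzip" header only declares the encoding and assumes the local .css files are already gzipped. If they are not, gsutil can compress them during the upload with cp -z, which gzips files whose extension is in the list and sets Content-Encoding: gzip for you. A sketch with the same bucket path:
gsutil -m -h "Cache-Control:public,max-age=2628000" cp -z css './**/*.css' gs://cdn.test.example.io/wp-content/plugins
You can verify the stored metadata afterwards with gsutil stat gs://cdn.test.example.io/wp-content/plugins/SOME_FILE.css (SOME_FILE.css is a placeholder).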
This seems like a really basic question but I can't seem to make it work.
I am using a static site generator for a website. I want all my HTML files to never be cached and everything else to be cached. To do this, I'd like to upload all non-HTML files and set the cache headers, which is straightforward using:
gsutil -m -h "Cache-Control:public, max-age=31536000" rsync -x ".*\.html$" -r dist/ gs://bucket/
But how do I then upload only my html files? I've tried cp and rsync with wildcards, but when I try something like:
gsutil -h "Content-Type:text/html" -h "Cache-Control:private, max-age=0, no-transform" rsync -r 'dist/**.html' gs://bucket/
I get: CommandException: Destination ('dist/**.html') must match exactly 1 URL
You want to copy the files into the bucket, so you have to use the cp command.
Try the following:
gsutil -h "Content-Type:text/html" -h "Cache-Control:private, max-age=0, no-transform" cp dist/**.html gs://YOUR_BUCKET
I'm executing a cp in a Visual Studio Online release task to change the --cache-control metadata, but it's also changing the content type of the files to text/plain.
Here's the command:
aws s3 cp s3://sourcefolder/ s3://sourcefolder/ --exclude "*" \
  --include "*.js" --include "*.png" --include "*.css" \
  --include "*.jpg" --include "*.gif" --include "*.eot" \
  --include "*.ttf" --include "*.svg" --include "*.woff" \
  --include "*.woff2" --recursive --metadata-directive REPLACE \
  --cache-control max-age=2592000,private
Before I executed this command, my JavaScript files had the correct content type, text/javascript, but after running it the content type changes to text/plain. How can I avoid this?
I can't see a way of doing it for your specific use case. This is mainly because the different files need different content-type values, so I don't think a single aws s3 cp or aws s3 sync operation will work for you. The problem is caused by the --metadata-directive REPLACE flag, which essentially removes all of the metadata, and since you are not providing a content type it defaults to text/plain. If, on the other hand, you set it to, let's say, text/javascript, all the files will get that value, which is clearly not right for images and CSS files.
However, I shall propose a solution that should work for you. Please try using the latest version of s3cmd, as it has a modify command available, which you could use as follows:
./s3cmd --recursive modify --add-header="Cache-Control:max-age=2592000" \
--exclude "*" \
--include ... \
s3://yourbucket/
More about s3cmd usage and the available flags can be found in the s3cmd usage documentation.
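If you would rather stay with the AWS CLI, a hedged workaround (a sketch only, not tested against your release task) is to run one pass per extension and pass --content-type explicitly, since REPLACE drops the existing value; images and fonts would each need their own pass with the matching type. You can then verify a single object with aws s3api head-object (the key below is a placeholder):
aws s3 cp s3://sourcefolder/ s3://sourcefolder/ --recursive --exclude "*" --include "*.js" \
  --metadata-directive REPLACE --cache-control max-age=2592000,private --content-type text/javascript
aws s3 cp s3://sourcefolder/ s3://sourcefolder/ --recursive --exclude "*" --include "*.css" \
  --metadata-directive REPLACE --cache-control max-age=2592000,private --content-type text/css
aws s3api head-object --bucket sourcefolder --key path/to/some/file.js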
I have a large number of files that I need to upload to Google Cloud Storage and add a content type to them. None of the file names have an extension, but the content type is the same for all of them.
I tried to use this command gsutil -m cp -r . gs://bucket_name/, but it uploads files with application/octet-stream content type.
Is there a way to override default content type that GCS sets?
The gsutil documentation gives an example of how to do this:
gsutil -h "Content-Type:text/html" \
-h "Cache-Control:public, max-age=3600" cp -r images \
gs://bucket/images
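Applied to your case, where every file shares the same content type and has no extension, a minimal sketch would be the following, assuming purely for illustration that the files are JSON (swap in whatever type they really are):
gsutil -m -h "Content-Type:application/json" cp -r . gs://bucket_name/
You can confirm the stored value afterwards with gsutil stat gs://bucket_name/SOME_OBJECT (SOME_OBJECT is a placeholder).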