Google Cloud Storage custom error messages - google-cloud-platform

I am using Google Cloud Storage as a CDN to store files for our website, which is hosted on Fastly.
For PDF files, we redirect to the URL of the PDF file in Google Cloud Storage.
Everything works fine except when the user manipulates the file location in the URL (which is used to build the Google Storage object URL). In that case, Google Storage displays an error message in XML format as follows:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
Such a message is fine for dev environments, but in production it is not something we can show to the user in a browser.
So I want to understand whether Google Cloud Storage provides any way to customize these messages and pages.
Thanks in advance,
Yogesh

The best way I know of to avoid this error is to use GCS's static website hosting feature. To do so, you'd purchase a domain name, create a GCS bucket whose name matches that domain, and then set the "NotFoundPage" property of the website configuration to an object containing whatever error page you'd like. The main downside is that this only works over HTTP, not HTTPS.
For more on how to set up static website config, see https://cloud.google.com/storage/docs/hosting-static-website
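As a sketch, the "NotFoundPage" property can be set with gsutil (www.example.com and 404.html are placeholders; the error object must already exist in the bucket):

```shell
# Set the main page and the custom 404 object for the website-configured bucket
gsutil web set -m index.html -e 404.html gs://www.example.com
```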

Related

Provide directory listings using API Gateway as a proxy to S3

I'm trying to host a static site through S3 and API Gateway (using S3 directly isn't an option due to security policy). One of the pages runs a client-side script to pull back a set of files from a specific folder on the server. I've set up the hosting following the Amazon tutorial.
For this to run, my script needs to be able to obtain the list of files for a specific folder.
If I were hosting the site on my own server using Apache, I could rely on the directory listing feature, where a GET on a folder with no index.html returns a file list. The tutorial suggests that this should be possible, but I can't seem to get it to work. If I submit a request to a particular {prefix}/{identifier}, I can retrieve the specific file, but sending a request to {prefix}/ returns an error.
Is there a way I can replicate directory listings so my JavaScript can pull the list down and read it, or is the only solution to write a server-side API in Lambda?
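One avenue worth checking: S3 itself can return the listing. A GET on the bucket endpoint with list-type=2&prefix=myfolder/ returns a ListBucketResult XML document, provided API Gateway forwards the query string to S3. A minimal sketch of the client-side parsing (the sample XML below stands in for a real response; the folder name is a placeholder):

```python
import xml.etree.ElementTree as ET

# Sample ListObjectsV2 response; a real one would come from
# GET https://<api-gateway-host>/<bucket>?list-type=2&prefix=myfolder/
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Contents><Key>myfolder/a.txt</Key></Contents>
  <Contents><Key>myfolder/b.txt</Key></Contents>
</ListBucketResult>"""

def list_keys(xml_text):
    """Extract object keys from a ListObjectsV2 XML response."""
    ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall("s3:Contents/s3:Key", ns)]

print(list_keys(SAMPLE))  # ['myfolder/a.txt', 'myfolder/b.txt']
```

The same parsing works in browser JavaScript with DOMParser; the point is that no Lambda is needed if the listing request reaches S3 unmodified.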

Signed Cookies with Google Cloud CDN responds with 403

I am trying to get HLS/DASH streams working via Google Cloud CDN for a video-on-demand solution. The files/manifests sit in a Google Cloud Storage bucket, and everything looks properly configured since I followed every step of the documentation: https://cloud.google.com/cdn/docs/using-signed-cookies.
Now I am using equivalent Node.js code from "Google Cloud CDN signed cookies with bucket as backend" to create a signed cookie with the proper signing key name and value, which I previously set up in Google Cloud. The cookie gets sent to my load balancer backend in Google Cloud.
Sadly, I always get a 403 response saying <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message></Error>.
Further info:
Signed URLs/cookies are activated on the load balancer backend
The IAM role in the bucket for the CDN account is set to "objectViewer"
The signing key is created, saved, and used to sign the cookie
Would really appreciate any help on this.
Edit:
I just tried the exact Python code Google provides to create the signed cookies, from https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/cdn/snippets.py, with the following parameters:
Call: sign_cookie('http://cdn.myurl.com/', 'mykeyname', 'mybase64encodedkey', 1614110180545)
The key is copied directly from Google, since I generated it there.
The load balancer log writes invalid_signed_cookie.
I'm stumbling over the same problem.
The weird thing is that it fails only in web browsers. I've seen Google Chrome and Safari return a 403 even though the requests contain the cookie, yet the same request with the exact same cookie in curl returns 200. I'm asking GCP support about this right now, but I'm not getting a good answer.
Edit:
After several hypotheses and tests, I found that when the cookie library I use formats the value into the Set-Cookie header, it URL-encodes it automatically, so cookies that Cloud CDN cannot understand are sent. After adding the value to the Set-Cookie header without URL encoding, web browsers can now retrieve the content.
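The encoding pitfall is easy to demonstrate locally. This sketch mirrors the policy format used in Google's snippets.py sample (HMAC-SHA1, base64url signature); the helper name, domain, and 16-byte demo key are placeholders, not a real Cloud CDN key:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_cookie_value(url_prefix, key_name, base64_key, expires_epoch):
    """Build a Cloud-CDN-Cookie value: HMAC-SHA1 over the policy string,
    with the signature base64url-encoded (as in Google's snippets.py)."""
    encoded_prefix = base64.urlsafe_b64encode(url_prefix.encode()).decode()
    policy = "URLPrefix={}:Expires={}:KeyName={}".format(
        encoded_prefix, expires_epoch, key_name)
    key = base64.urlsafe_b64decode(base64_key)
    sig = base64.urlsafe_b64encode(
        hmac.new(key, policy.encode(), hashlib.sha1).digest()).decode()
    return "{}:Signature={}".format(policy, sig)

# Demo key only; a real key comes from the Cloud CDN console.
demo_key = base64.urlsafe_b64encode(b"0123456789abcdef").decode()
value = sign_cookie_value("http://cdn.example.com/", "mykeyname",
                          demo_key, 1614110180)

# The raw value relies on ':' and '=' as delimiters. A cookie library that
# URL-encodes values rewrites them as %3A/%3D, which Cloud CDN rejects
# (invalid_signed_cookie in the load balancer log).
encoded = urllib.parse.quote(value, safe="")
print(value.startswith("URLPrefix="), "%3A" in encoded)  # True True
```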

How to reach website on GCloud App Engine Standard

I uploaded a website to Google Cloud Platform > Storage and set up the DNS, but the request reaches Google and then shows an error message about the bucket (I can't reproduce the error message, but it doesn't go to the right location). Google gives me a link to the website, and I can reach it at: https://storage.googleapis.com/pampierce.me/index.html
but https://pampierce.me/index.html doesn't work.
Currently, the DNS CNAME is set to c.storage.googleapis.com. What should it be set to?
Or is the problem that I shouldn't put an HTML/CSS/JS-only website in Storage? If so, where and how should I host it?
Thanks.
The issue is with the name of the bucket; this is why it is not working.
I checked the CNAME record for www.pampierce.me and it points to c.storage.googleapis.com., but pampierce.me points to 91.195.240.103. Note that www.pampierce.me is not the same as pampierce.me. This is about DNS, but in general this config is okay.
The real issue is with your bucket. While you can create a bucket named pampierce.me, this does not work when using Cloud Storage to host a site; for this reason the bucket should be named www.pampierce.me. This is mentioned here.
Once you have created the bucket www.pampierce.me and repeated the upload and setup steps you have already done, everything should work fine. The way to access the site is http://www.pampierce.me/index.html (note that, as before, this is not the same as http://pampierce.me/index.html).
Finally, you will notice that I say http and not https; the reason is that Cloud Storage does not support SSL when hosting a website.
In case you want to access the site via https://pampierce.me (naked domain and HTTPS), I suggest following this tutorial, but it also implies using a load balancer, which means extra cost. Also note that the issue here is with Cloud Storage; App Engine is a different product.
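Concretely, the working combination is a bucket named www.pampierce.me plus a CNAME for the www host. A sketch of the zone record (the naked domain cannot be a CNAME to Cloud Storage):

```
www.pampierce.me.   IN   CNAME   c.storage.googleapis.com.
```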

AWS S3 Redirect only works on bucket as a subdomain not bucket as a directory

Many people have received hundreds of links to PoCs that are on an internal-facing bucket, and the links have this structure:
https://s3.amazonaws.com/bucket_name/
I added a redirect using AWS's Static website hosting section in Properties, and it ONLY redirects when the domain is formatted like this:
https://bucket_name.s3-website-us-east-1.amazonaws.com
Is this a bug with S3?
For now, how do I make it redirect for both types of links? My current workaround is to add a meta redirect tag to each HTML file.
The s3-website endpoint is unfortunately the only one that supports redirects. Using s3.amazonaws.com assumes that you will be using S3 as a storage layer instead of a website. If the link points to a specific object, you can place an HTML file at that URL with a JS redirect, but other than that there is really no way to achieve what you are trying to do.
In the future, I would recommend always setting up a CloudFront distribution for those kinds of use cases, as that will allow you to change the origin later on.
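The per-object workaround mentioned in the question can be as small as this (the target URL is a placeholder); a meta refresh fires on whichever S3 endpoint served the file, with no JavaScript needed:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Immediate redirect to the new location -->
    <meta http-equiv="refresh" content="0; url=https://example.com/new-location/">
    <title>Redirecting</title>
  </head>
  <body>
    <a href="https://example.com/new-location/">Moved here</a>
  </body>
</html>
```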

How to control the URL that Django generates?

How can I get Django to generate the static URLs using static.mywebsite.com instead of static.mywebsite.com.s3.amazonaws.com?
The S3 bucket used for static files works when I use a bucket name with no periods. But with a bucket containing periods, I get Failed to load resource: net::ERR_INSECURE_RESPONSE.
The reason I am trying to use a bucket with periods is that this guide and two Stack Overflow posts say it is required to get an address of static.mywebsite.com instead of mywebsite-static.s3.amazonaws.com:
http://carltonbale.com/how-to-alias-a-domain-name-or-sub-domain-to-amazon-s3/
how appoint a subdomain for a s3 bucket?
Amazon S3: Static Web Sites: Custom Domain or Subdomain
And it seems to work; when I browse to static.mywebsite.com, I get an XML response from S3 saying AccessDenied. In the last link above, a user comments asking whether this will work with HTTPS. It initially appears that it does not.
Upon inspection, the URLs of the static files generated by Django still use the static.mywebsite.com.s3.amazonaws.com address. If I understand correctly, this is an HTTPS certificate issue, because the periods create additional levels of subdomains.
So, I changed the settings to use:
STATIC_URL = 'https://%s.mywebsite.%s/' % ('static', 'com')
But that resulted in it generating static.s3.amazonaws.com.
How can I get Django to generate the URLs using static.mywebsite.com?
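A sketch of the settings-side fix, assuming the django-storages S3 backend is in use (the domain is a placeholder): rather than hand-assembling STATIC_URL, its AWS_S3_CUSTOM_DOMAIN setting makes the backend emit URLs on your CNAME'd host instead of <bucket>.s3.amazonaws.com:

```python
# settings.py (sketch, assuming django-storages' S3 backend)
AWS_STORAGE_BUCKET_NAME = "static.mywebsite.com"
# django-storages builds file URLs from this domain instead of
# <bucket>.s3.amazonaws.com:
AWS_S3_CUSTOM_DOMAIN = "static.mywebsite.com"
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

print(STATIC_URL)  # https://static.mywebsite.com/
```

Note that serving the dotted domain over HTTPS still requires a certificate for that hostname, which S3 alone cannot provide; the usual route is CloudFront with a custom certificate in front of the bucket.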