I'm trying to gracefully handle the 403 returned when visiting an S3 resource via an expired URL. Currently it returns Amazon's XML error page. I have uploaded a 403.html resource and thought I could redirect to that.
The bucket resources are assets saved/fetched by my app. Still, following the docs, I set the bucket properties to serve the bucket as a static website and uploaded a 403.html to the bucket root. All public permissions are blocked, except public GET access to the resource 403.html. In the bucket's website settings I set 403.html as the error document. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html redirects correctly to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html
However, when I use aws-sdk for js/node and call getSignedUrl('getObject', params) to generate the signed URL, it returns a different host URL: https://<bucket>.s3.amazonaws.com/. Visits to expired resources via this method do not get redirected to 403.html. I'm guessing that since the host address is different, that is why it is not automatically redirecting.
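For reference, I generate the URLs roughly like this (a minimal sketch using aws-sdk v2; the bucket, key, and expiry are placeholders, not my real values):

// Minimal sketch: generating a time-limited signed URL with aws-sdk v2.
// Bucket name, key, and expiry are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',      // placeholder
  Key: 'some-asset.html',   // placeholder
  Expires: 60               // seconds until the signature expires
});
// => https://my-bucket.s3.amazonaws.com/some-asset.html?AWSAccessKeyId=...&Expires=...&Signature=...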
I have also set up a static website routing rule for this condition:
<Condition>
  <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
  <ReplaceKeyWith>403.html</ReplaceKeyWith>
</Redirect>
Still, that's not redirecting the signed URLs, so I'm at a loss as to how to gracefully handle these expired URLs. Any help would be greatly appreciated.
S3 buckets have two public-facing interfaces, REST and website. That is the difference between the two hostnames, and the difference in behavior you are seeing. They have two different feature sets.
Feature            REST Endpoint      Website Endpoint
----------------   ----------------   ------------------------------------
Access control     yes                no, public content only
Error messages     XML                HTML
Redirection        no                 yes, bucket, rule, and object-level
Request types      all supported      GET and HEAD only
Root of bucket     lists keys         returns index document
SSL                yes                no
Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
So, as you can see from the table, the REST endpoint supports signed URLs, but not friendly errors, while the website endpoint supports friendly errors, but not signed URLs. The two can't be mixed and matched, so what you're trying to do isn't natively supported by S3.
I have worked around this limitation by passing all requests for the bucket through HAProxy on an EC2 instance and on to the REST endpoint for the bucket.
When a 403 error message is returned, the proxy modifies the response body XML using the new embedded Lua interpreter, adding this before the <Error> tag:
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n
The file /error.xsl is publicly readable, and uses browser-side XSLT to render a pretty error response.
The proxy also injects a couple of additional tags into the XML, <ProxyTime> and <ProxyHTTPCode>, for use in the output. The resulting XML looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>
<Error>
  <ProxyTime>2015-10-13T17:36:01Z</ProxyTime>
  <ProxyHTTPCode>403</ProxyHTTPCode>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>9D3E05D20C1BD6AC</RequestId>
  <HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId>
</Error>
Then I vary the output shown to the user with XSL tests to determine what error condition S3 has thrown:
<xsl:if test="//Code = 'AccessDenied'">
  <p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
</xsl:if>
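For context, error.xsl is, in skeleton form, something like this (a sketch, not the full stylesheet; the headline and structure here are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a browser-side XSLT error page; the real stylesheet has more branches and styling. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <h1>Well, that didn't work</h1>
        <xsl:if test="//Code = 'AccessDenied'">
          <p>It seems we may have provided you with a link to a resource to which you do not have access.</p>
        </xsl:if>
        <p>Request ID: <xsl:value-of select="//RequestId"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>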
The final result, rendered browser-side from the XSL, is a friendly HTML error page. The general "Access Denied" version appears when no credentials were supplied at all; an expired signature produces its own variant of the message.
I don't include the HostId in the output, since it's ugly and noisy; if I ever need it, the proxy has captured and logged it for me, and I can cross-reference it to the request ID.
As a bonus, of course, running the requests through my proxy means I can use my own domain name and my own SSL certificate when serving up bucket content, and I have real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no additional charge for the extra step of data transfer, and I've been very happy with this setup.
Related
I am trying to get HLS / DASH streams working via Google Cloud CDN for a video-on-demand solution. The files / manifests sit in a Google Cloud Storage bucket, and everything looks properly configured, since I followed every step of the documentation: https://cloud.google.com/cdn/docs/using-signed-cookies.
Now I am using equivalent Node.js code (based on the Google Cloud CDN signed-cookies sample with a bucket as backend) to create a signed cookie with the proper signing key name and value, which I previously set up in Google Cloud. The cookie gets sent to my load balancer backend in Google Cloud.
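The signing logic follows the documented cookie format; here is a minimal sketch of what my code does (the URL prefix, key name, and key are placeholders; Buffer's 'base64url' encoding needs a recent Node version):

// Minimal sketch of Cloud CDN signed-cookie generation in Node.js.
const crypto = require('crypto');

function signCookie(urlPrefix, keyName, base64Key, expiresUnixSeconds) {
  // Policy fields are joined with ':' and must not be URL-encoded.
  const encodedPrefix = Buffer.from(urlPrefix).toString('base64url');
  const policy = `URLPrefix=${encodedPrefix}:Expires=${expiresUnixSeconds}:KeyName=${keyName}`;
  const signature = crypto
    .createHmac('sha1', Buffer.from(base64Key, 'base64'))
    .update(policy)
    .digest('base64url');
  return `Cloud-CDN-Cookie=${policy}:Signature=${signature}`;
}

console.log(signCookie('http://cdn.myurl.com/', 'mykeyname', 'mybase64encodedkey', 1614110180));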
Sadly, I always get a 403 response saying <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message></Error>.
Further info:
signed URLs / cookies are activated on the load balancer backend
the IAM role in the bucket for the CDN account is set to "objectViewer"
the signing key is created, saved, and used to sign the cookie
Would really appreciate any help on this.
Edit:
I just tried the exact Python code Google provides to create the signed cookies, from https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/cdn/snippets.py, with the following params:
Call: sign_cookie('http://cdn.myurl.com/', 'mykeyname', 'mybase64encodedkey', 1614110180545)
The key is copied directly from Google, since I generated it there.
The load balancer log writes invalid_signed_cookie.
I'm stumbling across the same problem.
The weird thing is that it misbehaves only in web browsers. I've seen Google Chrome and Safari return a 403 even though the requests contain the cookies. However, the same request with the exact same cookie in curl returns 200. I'm asking GCP support about this right now, but I'm not getting a good answer.
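For reference, the request that succeeds for me looks roughly like this (the cookie value is shortened and the domain and path are placeholders):

curl -v \
  -H 'Cookie: Cloud-CDN-Cookie=URLPrefix=...:Expires=...:KeyName=mykeyname:Signature=...' \
  'http://cdn.myurl.com/manifest.mpd'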
Edit:
As a result of several hypotheses and tests, I found that when the cookie library I use formats the value into the Set-Cookie header, it automatically URL-encodes it, so cookies that Cloud CDN cannot understand are sent (the = and : separators in the value get escaped). By writing the value into the Set-Cookie header without URL encoding, web browsers can now retrieve the content.
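In other words, write the header yourself instead of going through the cookie helper. A minimal sketch with Express (the route and cookie value are placeholders):

// Sketch: set the Cloud CDN cookie verbatim, bypassing helpers that
// URL-encode the value (which would escape the '=' and ':' separators).
const express = require('express');
const app = express();

app.get('/authorize', (req, res) => {
  const value = 'URLPrefix=...:Expires=...:KeyName=mykeyname:Signature=...'; // placeholder
  // res.cookie() would encode the value; set the raw header instead.
  res.setHeader('Set-Cookie', `Cloud-CDN-Cookie=${value}; Path=/; HttpOnly`);
  res.sendStatus(204);
});

app.listen(3000);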
I will be hosting a static web site on S3. The problem is that the web engine behind S3-as-a-web-server does not transform http://example.com/hello/ into http://example.com/hello/index.html.
When configuring the web site, there is a provision for the root document (the one which will be displayed when calling http://example.com), but not for any deeper URLs (such as in my example).
Is it possible to use the redirect rules to achieve that?
I actually have a solution for this problem, but it is really convoluted:
host the web site on an S3 bucket
deploy a CloudFront distribution whose origin is that bucket
use a Lambda@Edge function which rewrites the call once it hits CloudFront (see the sketch below)
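The Lambda@Edge rewrite is along these lines (a minimal sketch of an origin-request handler, not necessarily my exact code):

// Minimal sketch: origin-request Lambda@Edge that appends index.html
// to any URI ending in a slash before the request reaches the S3 origin.
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  if (request.uri.endsWith('/')) {
    request.uri += 'index.html';
  }
  return request;
};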
I hope there is something more straightforward (I have hope in the redirect rules, though "redirect" suggests that something was already attained, which is not the case in my problem, as S3 does not seem to understand what http://example.com/hello/ is).
When you specify the default index document and want to serve index.html in a subpath, you need to have an index.html at every level. The S3 documentation specifies the following:
If you create such a folder structure in your bucket, you must have an index document at each level. When a user specifies a URL that resembles a folder lookup, the presence or absence of a trailing slash determines the behavior of the website. For example, the following URL, with a trailing slash, returns the photos/index.html index document.

http://example-bucket.s3-website-region.amazonaws.com/photos/

However, if you exclude the trailing slash from the preceding URL, Amazon S3 first looks for an object photos in the bucket. If the photos object is not found, then it searches for an index document, photos/index.html. If that document is found, Amazon S3 returns a 302 Found message and points to the photos/ key. For subsequent requests to photos/, Amazon S3 returns photos/index.html.
Alternatively, if you want ALL paths to serve index.html, this thread might be useful
I am using Google Cloud Storage as a CDN to store files for our website, which is hosted on Fastly.
In the case of PDF files, we redirect to the URL of the PDF file in Google Cloud Storage.
Everything works fine except when the user manipulates the file location in the URL (which is used to build the Google Storage object URL). In that case Google Storage displays an error message in XML format, as follows:
<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
</Error>
Such a message is fine for dev environments; in production, however, it is not something we can show to the user in a browser.
So I want to understand whether there is any provision in Google Cloud Storage to customize these messages and pages.
Thanks in advance,
Yogesh
The best way I know of to avoid this error would be to use GCS's static website hosting feature. To do so, you'd purchase some domain name, create a GCS bucket whose name matches that domain name, then set the "NotFoundPage" property of the website configuration to whatever object you'd like served as the error page. The main downside here is that this would only work over HTTP, not HTTPS.
For more on how to set up static website config, see https://cloud.google.com/storage/docs/hosting-static-website
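For example, with gsutil the main page and error page can be set like this (a sketch; the bucket and object names are placeholders):

# Sketch: serve index.html as the main page and 404.html as the error page.
gsutil web set -m index.html -e 404.html gs://www.example.com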
For a while, I was simply storing the contents of my website in an S3 bucket and could access all pages via the full URL just fine. I wanted to make my website more secure by adding SSL, so I created a CloudFront distribution pointing to my S3 bucket.
The site loads just fine, but if the user tries to refresh the page, or tries to access a page using the full URL (i.e., www.example.com/home), they receive an AccessDenied page.
I have a policy on my s3 bucket that restricts access to only the Origin Access Identity and index.html is set as my domain root object.
I don't understand what I am missing.
To demo, feel free to visit my site.
You will notice how it redirects you to kurtking.me/home. To reproduce the error, try refreshing the page or access a page via full URL (i.e., kurtking.me/life)
Any help is much appreciated as I have been trying to wrap my head around this and search for answers for a few days now.
I have figured it out and wanted to post my solution in case anyone else runs into this issue.
The issue was due to Angular being a SPA (Single Page App) and me using an S3 bucket to store it. When you try to go to a specific page via URL access, CloudFront takes the path (for example, /about) and goes to your S3 bucket looking for that file. Because Angular is a SPA, that file doesn't technically exist in your S3 bucket. This is why I was getting the error.
What I needed to do to fix it
If you open your distribution in CloudFront, you will see an 'Error Pages' tab. I had to add two 'Custom Error Responses' there, one handling 400 and one handling 403. The details are the same for both, so I only include a photo for 400: the response page path is /index.html with a 200 response code.
Basically, what is happening is that you are telling CloudFront, regardless of a 400 or 403 error, to fall back to index.html, thus giving Angular control over deciding whether it can go to the path or not. If you would like to serve the client a 400 or 403 error page, you need to define those routes in Angular.
After setting up the two custom error responses, I deployed my CloudFront changes and voilà, it worked!
I used the following article to help guide me to this solution.
The better way to solve this is to allow the list-bucket permission and add a custom error rule for 404 pointing CloudFront to index.html, as sketched below. 403 errors are also returned by WAF, and they will cause additional security headaches if they are mapped to index.html for custom error handling. Hence, it is better to avoid getting 403 errors from S3 in the first place for Angular apps.
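For example, granting the distribution's origin access identity s3:ListBucket in the bucket policy makes S3 answer missing keys with 404 instead of 403 (a sketch; the bucket name and OAI ID are placeholders):

{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
  },
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::my-spa-bucket"
}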
If you have this problem using CDK you need to add this :
MyAmplifyApp.addCustomRule({
  source: '</^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/>',
  target: '/index.html',
  status: amplify.RedirectStatus.REWRITE
});
The accepted answer seems to work, but it does not seem like good practice. Instead, check whether the CloudFront origin is set to the S3 bucket itself (in which the static files are) or to the S3 website endpoint URL. It should be the S3 website endpoint URL, not the S3 bucket: the website endpoint URL is what should be used as the origin.
Add a CloudFront Function to rewrite the requested URI to /index.html if it doesn't match a regex.
For example, if none of your SPA routes contain a "." (dot), you could do something like this:
function handler(event) {
  var request = event.request;
  if (/^(?!.*\..*).*$/.test(request.uri)) {
    request.uri = '/index.html';
  }
  return request;
}
This gets around the side effects you would get by redirecting 403 -> index.html. For example, if you use a WAF to restrict access by IP address and you try to navigate to the website from a "bad" IP, a 403 will be thrown; with the previously suggested 403 -> index.html redirect you'd still see index.html, but with a CloudFront function you won't.
For those who are trying to achieve this using terraform, you only need to add this to your CloudFront Configuration:
resource "aws_cloudfront_distribution" "cf" {
  ...
  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }
  ...
}
Please check in the console which error you are getting. In my case I was getting a 403 Forbidden error, and using the settings shown in the screenshot worked for me.
I'm hosting a website on S3 and I have a list of users.
Example structure: user/index.html
So when somebody wants to see a specific user, he goes to a URL like www.example.com/user/?id=12345; what I want to do is use a path like www.example.com/user/12345.
I hope it can be done with redirect rules, but I can't figure out how to do it.
I think I need something like this:
<?xml version="1.0"?>
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>user/?id=$id</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>user/$id</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
The Amazon S3 website configuration Redirect rule property only supports ReplaceKeyPrefixWith, which allows redirection to a different path. The rules do not support capture variables or any other form of logic.
Your web application would need to perform such logic itself, then redirect users to the appropriate objects in Amazon S3, as sketched below.
Amazon S3 does not process query string parameters and, therefore, doesn't return different versions of an object based on parameter values.
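A minimal sketch of that application-side logic, assuming an Express app sits in front of the site (the domain and port are placeholders):

// Sketch: the application, not S3, maps the pretty path back to the
// query-string URL that the static page already understands.
const express = require('express');
const app = express();

app.get('/user/:id', (req, res) => {
  res.redirect(`http://www.example.com/user/?id=${encodeURIComponent(req.params.id)}`);
});

app.listen(3000);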