Amazon S3 web site parameters - amazon-web-services

I'm hosting a website on s3 and I have a list of users.
Example structure: user/index.html
So when somebody wants to see a specific user, they go to a URL like www.example.com/user/?id=12345. What I want to do is use a path like www.example.com/user/12345 instead.
I hope it can be done with redirect rules, but I can't figure out how to do it.
I think I need something like this:
<?xml version="1.0"?>
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>user/?id=$id</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>user/$id</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>

The Amazon S3 Website Configuration Routing Rules Redirect Rule properties only support static key replacements such as ReplaceKeyPrefixWith, which redirects to a different path. The rules do not support any form of logic.
Your web application would need to perform such logic, then redirect users to the appropriate objects in Amazon S3.

Amazon S3 does not process query string parameters and therefore does not return different versions of an object based on parameter values.
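For reference, a routing rule can only match on an object key prefix (or an HTTP error code returned) and swap that prefix for another; it never sees the ?id=... part of a URL. A minimal sketch of the kind of rule that is supported, with the prefixes chosen purely for illustration:
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>user/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <!-- static prefix swap only: user/12345 becomes profiles/12345 -->
      <ReplaceKeyPrefixWith>profiles/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>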

Related

AWS, map multiple domains to one website but different path

I would like to know if it is possible to have one website hosted on AWS, for example www.myname.com, but then to have the possibility to map different domains to different paths. For example:
map domain www.hellogeorge.com (just an example) to www.myname.com/george
then another person to map his domain www.otherdomainchristian.com to www.myname.com/christian
I want to say that www.myname.com/name is an API endpoint which generates an HTML webpage depending on the specific parameter used.
If such technologies exist, could somebody guide me on what to learn and study?
Thank you very much.
Well, it's pretty simple to do. You can use S3 bucket static web hosting to do the redirection. For example,
map domain www.hellogeorge.com (just an example) to www.myname.com/george
Create an S3 bucket called www.hellogeorge.com and then set redirection domain/path to www.myname.com/george
then another person to map his domain www.otherdomainchristian.com to www.myname.com/christian
Create another S3 bucket called www.otherdomainchristian.com and then set redirection domain/path to www.myname.com/christian
Now, when a user visits www.hellogeorge.com or www.otherdomainchristian.com, he/she will get a 301 redirection to the corresponding destination.
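A rough sketch of the website configuration for the www.hellogeorge.com bucket (the https protocol is my choice, and the idea that an unconditioned ReplaceKeyPrefixWith prepends george/ to every requested key is an assumption worth verifying):
<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <RoutingRules>
    <RoutingRule>
      <Redirect>
        <Protocol>https</Protocol>
        <HostName>www.myname.com</HostName>
        <!-- assumption: with no Condition this applies to every request,
             so a request for /foo redirects to https://www.myname.com/george/foo -->
        <ReplaceKeyPrefixWith>george/</ReplaceKeyPrefixWith>
      </Redirect>
    </RoutingRule>
  </RoutingRules>
</WebsiteConfiguration>
The www.otherdomainchristian.com bucket would get the same configuration with christian/ in place of george/.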
Next, you can use Amazon API Gateway to generate a dynamic response depending on the request parameter.

How to redirect a naked URL in an AWS S3-powered web site?

I will be hosting a static web site on S3. The problem is that the web engine behind S3-as-a-web-server does not transform http://example.com/hello/ into http://example.com/hello/index.html.
When configuring the web site, there is a provision for the root document (the one which will be displayed when calling http://example.com), but not for any deeper URLs (such as my example).
Is it possible to use the redirect rules to achieve that?
I actually have a solution for this problem, but it is really convoluted:
host the web site on an S3 bucket
deploy a CloudFront instance which origins in that bucket
use a Lambda@Edge function which will rewrite the call once it hits CloudFront
I hope there is something more straightforward (I have hope in the redirect rules, though "redirect" suggests that something was already attained, which is not the case in my problem, as S3 does not seem to understand what http://example.com/hello/ is).
When you specify the default index file and want to serve index.html in a subpath, you need to have an index.html at every level.
The documentation for S3 specifies the following:
If you create such a folder structure in your bucket, you must have an index document at each level. When a user specifies a URL that resembles a folder lookup, the presence or absence of a trailing slash determines the behavior of the website. For example, the following URL, with a trailing slash, returns the photos/index.html index document:
http://example-bucket.s3-website-region.amazonaws.com/photos/
However, if you exclude the trailing slash from the preceding URL, Amazon S3 first looks for an object photos in the bucket. If the photos object is not found, then it searches for an index document, photos/index.html. If that document is found, Amazon S3 returns a 302 Found message and points to the photos/ key. For subsequent requests to photos/, Amazon S3 returns photos/index.html.
Alternatively, if you want ALL paths to serve index.html, this thread might be useful.
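For the CloudFront plus Lambda@Edge workaround mentioned in the question, the rewrite is typically a very small origin-request function; a minimal sketch, assuming the index document is named index.html:
'use strict';

// Lambda@Edge origin-request handler: map directory-style URIs to their index document,
// so http://example.com/hello/ is fetched from S3 as /hello/index.html.
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri.endsWith('/')) {
        request.uri += 'index.html';
    }

    callback(null, request);
};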

AWS S3 Redirect only works on bucket as a subdomain not bucket as a directory

Many people have received hundreds of links to PoCs that are on an internal-facing bucket, and the links are in this structure:
https://s3.amazonaws.com/bucket_name/
I added a redirect using AWS's Static website hosting section in Properties and it ONLY redirects when the domain is formatted like this:
https://bucket_name.s3-website-us-east-1.amazonaws.com
Is this a bug with S3?
For now, how do I make it redirect using both types of links? My current workaround is to add a meta redirect tag in each html file.
The s3-website endpoint is unfortunately the only endpoint that supports redirects. Using s3.amazonaws.com assumes that you are using S3 as a storage layer rather than as a website. If the link is to a specific object, you can place an HTML file at that URL with a JS redirect, but other than that there is really no way to achieve what you are trying to do.
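Such a stub object might look roughly like this (the target is just the website endpoint from the question; both the meta refresh and the JS fallback are illustrative):
<!doctype html>
<html>
  <head>
    <!-- send the browser on to the website endpoint -->
    <meta http-equiv="refresh" content="0; url=https://bucket_name.s3-website-us-east-1.amazonaws.com/">
    <script>window.location.replace("https://bucket_name.s3-website-us-east-1.amazonaws.com/");</script>
  </head>
  <body>Redirecting...</body>
</html>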
In the future, I would recommend always setting up a CloudFront distribution for those kinds of use cases, as that will allow you to change the origin later on.

AWS S3 gracefully handle 403 after getSignedUrl expired

I'm trying to gracefully handle the 403 when visiting an S3 resource via an expired URL. Currently it returns an amz xml error page. I have uploaded a 403.html resource and thought I could redirect to that.
The bucket resources are assets saved/fetched by my app. Still, reading the docs, I set the bucket properties to treat the bucket as a static website and uploaded a 403.html to the bucket root. All public permissions are blocked, except public GET access to the resource 403.html. In the bucket properties' website settings, I indicated 403.html as the error page. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html redirects correctly to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html
However, when I use aws-sdk js/node and call the method getSignedUrl('getObject', params) to generate the signed URL, it returns a different host URL: https://<bucket>.s3.amazonaws.com/. Visiting expired resources from this method does not get redirected to 403.html. I'm guessing that the different host address is the reason it is not automatically redirecting.
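For context, the URLs are generated along these lines (the bucket name, key, and expiry are placeholder values):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1', signatureVersion: 'v4' });

// Pre-signed GET URL, valid for five minutes
const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-bucket',       // placeholder bucket name
    Key: 'some-asset.html',
    Expires: 300               // validity in seconds
});
// The result points at the REST endpoint (https://my-bucket.s3.amazonaws.com/...),
// not the website endpoint, so the website error document never comes into play.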
I have also set up static website routing rules for condition
<Condition>
<HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
<ReplaceKeyWith>403.html</ReplaceKeyWith>
</Redirect>
Still, that's not redirecting the signed URLs, so I'm at a loss as to how to gracefully handle these expired URLs. Any help would be greatly appreciated.
S3 buckets have 2 public-facing interfaces, REST and website. That is the difference between the two hostnames, and the difference in behavior you are seeing.
They have two different feature sets.
feature            REST Endpoint     Website Endpoint
----------------   ---------------   ------------------------------------
Access control     yes               no, public content only
Error messages     XML               HTML
Redirection        no                yes, bucket, rule, and object-level
Request types      all supported     GET and HEAD only
Root of bucket     lists keys        returns index document
SSL                yes               no
Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
So, as you can see from the table, the REST endpoint supports signed URLs, but not friendly errors, while the website endpoint supports friendly errors, but not signed URLs. The two can't be mixed and matched, so what you're trying to do isn't natively supported by S3.
I have worked around this limitation by passing all requests for the bucket through HAProxy on an EC2 instance and on to the REST endpoint for the bucket.
When a 403 error message is returned, the proxy modifies the response body XML using the new embedded Lua interpreter, adding this before the <Error> tag.
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n
The file /error.xsl is publicly readable, and uses browser-side XSLT to render a pretty error response.
The proxy also injects a couple of additional tags into the xml, <ProxyTime> and <ProxyHTTPCode> for use in the output. The resulting XML looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>
<Error><ProxyTime>2015-10-13T17:36:01Z</ProxyTime><ProxyHTTPCode>403</ProxyHTTPCode><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9D3E05D20C1BD6AC</RequestId><HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId></Error>
Then I vary the output shown to the user with XSL tests to determine what error condition S3 has thrown:
<xsl:if test="//Code = 'AccessDenied'">
<p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
</xsl:if>
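The full stylesheet isn't reproduced here; a skeletal sketch of the kind of browser-side XSLT these tests live in (everything beyond the element names visible in the XML above is illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/Error">
    <html>
      <body>
        <h1>Sorry, that didn't work</h1>
        <xsl:if test="//Code = 'AccessDenied'">
          <p>It seems we may have provided you with a link to a resource
          to which you do not have access, or a resource which does not exist.</p>
        </xsl:if>
        <p>Reference: <xsl:value-of select="//RequestId"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>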
The final result is a styled, human-readable error page: one variant for the general "Access Denied" case where no credentials were supplied, and another for an expired signature.
I don't include the HostId in the output, since it's ugly and noisy; if I ever need it, the proxy has captured and logged it for me, and I can cross-reference it with the request ID.
As a bonus, of course, running the requests through my proxy means I can use my own domain name and my own SSL certificate when serving up bucket content, and I have real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no additional charge for the extra step of data transfer, and I've been very happy with this setup.

Risk of enabling CORS?

Amazon S3 provides a way to enable CORS support on a per-bucket basis. By default, though, CORS is disabled.
I can't think of a single security risk involved with enabling wildcard CORS for my S3 buckets, but presumably there must be at least some danger, or else they'd have just enabled it for everything and wouldn't provide such elaborate rules for exactly which domains to trust.
Can someone describe an exploit or other reason why I wouldn't want to just set <AllowedOrigin>*</AllowedOrigin> on all of my buckets?
From Wikipedia:
Access-Control-Allow-Origin: *
This is generally not appropriate. The only case where this is appropriate is when a page or API response is considered completely public content and it is intended to be accessible to everyone, including any code on any site.
Here is an example where you would need to use CORS but should avoid using the wildcard: you store content on S3 that you need to access from your app (hosted on another URL) using AJAX, but you don't want to make that content publicly available to other sites (e.g. an HTML or JSON file that stores some of your app's data, where those files should not be available from other sites).
There are also other scenarios, such as allowing content to be uploaded to your S3 bucket from other sites using AJAX.
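As a sketch, a bucket CORS configuration scoped to a single trusted origin instead of the wildcard might look like this (the origin, methods, and max-age values are illustrative):
<CORSConfiguration>
  <CORSRule>
    <!-- only your own app's origin, rather than * -->
    <AllowedOrigin>https://app.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>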