AWS S3 configure sub-domain - amazon-web-services

I'm using Amazon's Simple Storage Service (S3). I noticed that others, like Trello, were able to configure a subdomain for their S3 links. In the following link they have trello-attachments as the subdomain.
https://trello-attachments.s3.amazonaws.com/.../.../..../file.png
Where can I configure this?

You don't have to configure it.
All buckets work that way if there are no dots in the bucket name and it's otherwise a hostname made up of valid characters. If your bucket isn't in the "US-Standard" region, you may have to use the correct endpoint instead of ".s3.amazonaws.com" to avoid a redirect (or to make it work at all).
An ordinary Amazon S3 REST request specifies a bucket by using the first slash-delimited component of the Request-URI path. Alternatively, you can use Amazon S3 virtual hosting to address a bucket in a REST API call by using the HTTP Host header. In practice, Amazon S3 interprets Host as meaning that most buckets are automatically accessible (for limited types of requests) at http://bucketname.s3.amazonaws.com.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
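As an illustration, here is a minimal sketch of the two equivalent URL forms (the bucket and key below are placeholders, and the object must be publicly readable for either URL to work in a browser):

    # Illustration only: bucket and key are placeholders.
    bucket = "trello-attachments"   # bucket name is a valid hostname, no dots
    key = "some/path/file.png"

    # Virtual-hosted style: the bucket name becomes part of the hostname.
    print(f"https://{bucket}.s3.amazonaws.com/{key}")

    # Path-style: the bucket name is the first path component instead.
    print(f"https://s3.amazonaws.com/{bucket}/{key}")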

Related

Does the region need to be in an S3 domain?

When you navigate to a file uploaded to S3, you'll see its URL in a format such as this (in this example the bucket name is example and the file is hello.txt):
https://example.s3.us-west-2.amazonaws.com/hello.txt
Notice that the region, us-west-2, is embedded in the domain.
I accidentally tried accessing the same URL without the region, and noticed that it worked too:
https://example.s3.amazonaws.com/hello.txt
It seems much simpler to use these shorter URLs rather than the longer ones, as I don't need to pass around the region.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
This is a deprecated feature of Amazon S3 known as Global Endpoints. Some regions support the global endpoint for backward compatibility purposes. AWS recommends that you use the standard endpoint syntax in the future.
For regions that support the global endpoint, your request is redirected to the standard endpoint. By default, Amazon routes global endpoint requests to the us-east-1 region. For buckets in supported regions other than us-east-1, Amazon S3 updates the DNS record for future requests (note that DNS updates take 24-48 hours to propagate). Until then, Amazon redirects the request to the correct region using an HTTP 307 Temporary Redirect.
Are there any advantages/disadvantages of excluding the region from the domain? Or are the two domains the same?
The domains are not the same.
Advantages to using the legacy global endpoint: the URL is shorter.
Disadvantages: the request must be redirected and is, therefore, less efficient. Further, if you create a bucket in a region that does not support global endpoints, AWS will return an HTTP 400 Bad Request error response.
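If you'd rather not rely on the redirect at all, here is a minimal boto3 sketch of resolving a bucket's region and pinning the client to its regional endpoint (the bucket name is hypothetical; note that get_bucket_location reports us-east-1 as None):

    import boto3

    bucket = "example"  # hypothetical bucket name

    # Ask S3 which region the bucket lives in; us-east-1 is reported as None.
    location = boto3.client("s3").get_bucket_location(Bucket=bucket)
    region = location["LocationConstraint"] or "us-east-1"

    # A client pinned to that region uses the regional endpoint
    # (e.g. example.s3.us-west-2.amazonaws.com) and avoids the
    # global endpoint's 307 redirect entirely.
    s3 = boto3.client("s3", region_name=region)
    s3.download_file(bucket, "hello.txt", "hello.txt")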
TL;DR: It is a best practice to use the standard (regional) S3 endpoint syntax.

How to hide AWS S3 Bucket URL with custom CNAME

I want to connect a CDN to an AWS S3 bucket, but the AWS documentation indicates that the bucket name must be the same as the CNAME. Therefore, it is very easy for others to guess the real S3 bucket URL.
For example,
- My domain: example.com
- My S3 Bucket name: image.example.com
- My CDN CNAME (image.example.com) will point to image.example.com.s3.amazonaws.com
After that, people can access the CDN URL -> http://image.example.com to obtain the resources from my S3 bucket. However, under this restriction, people can easily guess my real S3 bucket URL from the CNAME (CNAME + s3.amazonaws.com).
So my question is: how can I hide my real S3 bucket URL? I don't want to expose my real S3 URL to anyone, in order to prevent attacks.
I am not sure I understand what you are asking for or what you are trying to do (hiding your bucket does not really help anything); however, I will attempt to answer your question regarding "hiding" your bucket name. Before I answer, I would like to ask these two questions:
Why do you want to hide your S3 bucket url?
What kind of attacks are you trying to prevent?
You are correct that the S3 bucket name used to have to match your URL. This is no longer a requirement, as you can mask the S3 bucket using CloudFront which, as you know, is AWS's CDN. The bucket name can thus be anything (even a random string).
You can restrict access to the bucket so that only CloudFront can access it. Data in the bucket is then cached at edge locations and served from there. Even if someone knows the S3 URL, it will do them no good, because access to the bucket is restricted: a bucket policy grants CloudFront access and no one else.
Access restriction is done via origin access control, and while you can configure this manually using a bucket policy, you can also set a flag in CloudFront to do it on your behalf. More information is available here.
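As a concrete illustration, here is a minimal sketch of the kind of bucket policy this sets up, applied with boto3; the bucket name, account ID, and distribution ID below are placeholders:

    import json
    import boto3

    bucket = "randomstring-bucket"  # hypothetical bucket name
    dist_arn = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"

    # Allow only CloudFront (and only this one distribution) to read objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringEquals": {"AWS:SourceArn": dist_arn}},
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))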
Use the CloudFront name in Route 53. Do not use a CNAME; rather, use an A record and set it up as an alias. For more information, see this document.
If you are using a different DNS provider, AWS aliases will naturally not be available. I suggest moving the zone file from your other provider to AWS. If you cannot do this, then you can still use a CNAME. Again see here for more information.
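If your zone is hosted in Route 53, a minimal boto3 sketch of such an alias record might look like this (the zone ID, domain, and distribution hostname are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront aliases):

    import boto3

    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",  # your zone's ID (placeholder)
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "image.example.com",
                    "Type": "A",  # an alias A record, not a CNAME
                    "AliasTarget": {
                        # CloudFront aliases always use this fixed zone ID.
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )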
I suggest using your own domain name for CloudFront and setting up HTTPS. AWS offers certificates at no additional cost for services within AWS. You can register a certificate for your domain name, validated either by a DNS entry or by email. To set this up, please see this document.
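For example, a minimal sketch of requesting a DNS-validated certificate with boto3 (the domain is a placeholder):

    import boto3

    # Certificates used by CloudFront must be requested in us-east-1,
    # regardless of where your other resources live.
    acm = boto3.client("acm", region_name="us-east-1")
    response = acm.request_certificate(
        DomainName="image.example.com",  # placeholder domain
        ValidationMethod="DNS",          # or "EMAIL"
    )
    print(response["CertificateArn"])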
If you want to restrict access to specific files within AWS, you can use signed URLs. More information about that is provided here.
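As a sketch, botocore ships a CloudFrontSigner that can generate such signed URLs; the key-pair ID, private key file, and URL below are placeholders, and the key pair must already be registered with CloudFront:

    from datetime import datetime, timedelta, timezone

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def rsa_signer(message):
        # Sign the policy with the private key of your CloudFront key pair.
        with open("private_key.pem", "rb") as f:
            key = serialization.load_pem_private_key(f.read(), password=None)
        return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key ID

    # The URL is valid for one hour.
    url = signer.generate_presigned_url(
        "https://image.example.com/private/photo.png",
        date_less_than=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    print(url)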

Have multiple subdomains refer to the same S3 bucket without an HTTP redirect

I have a bucket called subdomain.domain.com that hosts code that should be used whenever users go to various subdomains.
e.g. going to:
- a.domain.com
- b.domain.com
- c.domain.com
Should go to the same bucket.
I've set the CNAME for all the subdomain URLs to point to the URL of the subdomain.domain.com bucket. The problem is that AWS tries to look for a bucket named a.domain.com instead of just going to the subdomain.domain.com bucket.
I've read some suggestions saying I can create a bucket like a.domain.com and have it redirect back to subdomain.domain.com, but I don't want a URL change, and I'd like to be able to upload to just one bucket and have all the subdomains updated.
Some features that appear to be "missing" in S3 are actually designed into CloudFront, which complements S3. Pointing multiple domain names to a single bucket is one of those features. It isn't possible to do this with only S3 since, as you noticed, S3 matches the hostname with the bucket name.
Create a CloudFront distribution, defining each of the desired domain names as Alternate Domain Names.
For the origin server, type in the website endpoint hostname of the bucket, found in the S3 console. (Don't select the bucket from the dropdown list.)
Point the various hostnames to CloudFront in DNS.
CloudFront will translate the incoming hostnames so that S3 serves all the domains from a single bucket, the one you specified as the origin server.
Note that this configuration also allows you to optionally use SSL with your web hosting buckets, which is another feature that S3 relies on CloudFront to implement.
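For reference, here is a minimal sketch of creating such a distribution with boto3; the bucket website endpoint and alternate domain names are placeholders, and serving the aliases over HTTPS would additionally require a matching certificate:

    import time

    import boto3

    cf = boto3.client("cloudfront")
    cf.create_distribution(DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "One bucket, many subdomains",
        "Enabled": True,
        # All hostnames that should reach the same bucket.
        "Aliases": {"Quantity": 3,
                    "Items": ["a.domain.com", "b.domain.com", "c.domain.com"]},
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-website",
            # The bucket's *website* endpoint, typed in as a custom origin.
            "DomainName": "subdomain.domain.com.s3-website-us-east-1.amazonaws.com",
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                # S3 website endpoints only speak plain HTTP.
                "OriginProtocolPolicy": "http-only",
            },
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website",
            "ViewerProtocolPolicy": "redirect-to-https",
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "ForwardedValues": {"QueryString": False,
                                "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    })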

One domain to multiple S3 buckets based on geolocation

We want to serve images in our application as fast as possible. As we already have an AWS setup, we prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing the request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes:
origin -> DNS -> EC2 -> S3, and we would prefer
origin -> DNS -> S3. Is it possible to bind two static-website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and are only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocating is not possible from S3 buckets.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host you want, it compares it against the available buckets; Amazon decided that the bucket name would be used to match against the Host header.
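You can observe this yourself by overriding the Host header; here is a minimal sketch using Python's standard library against the generic S3 REST endpoint (bucket and object names are placeholders):

    import http.client

    # Connect to the generic S3 endpoint, but claim a bucket in the Host
    # header; S3 picks the bucket from that header, just as a web server
    # hosting multiple sites picks the site from it.
    conn = http.client.HTTPConnection("s3.amazonaws.com")
    conn.request("GET", "/hello.txt",
                 headers={"Host": "example.s3.amazonaws.com"})
    resp = conn.getresponse()
    print(resp.status, resp.reason)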
So after a lot of research, we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user clicks a button and views some images in an iOS app. When the user pushes the button, the request is geo-routed to the nearest EC2 instance for faster performance. Instead of returning the same image links in the EU and the US, we updated it so that clicks in the US get links to an American S3 bucket, and the same for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.

Can't work with bucket name that has periods in it via REST API

I'm using the S3 REST API to manage objects in my bucket. This works when my bucket name has dashes in it. For example, the host for a REST request would be my-bucket-name.s3.amazonaws.com.
I have another bucket named www.my-bucket-name.com, which would have the following host in a REST request: www.my-bucket-name.com.s3.amazonaws.com. Requests for bucket names like this fail with "Unable to communicate securely with peer: requested domain name does not match the server's certificate." Per the docs, www.my-bucket-name.com is a valid bucket name. Do I need to encode it somehow? Is there some sort of alias?
This is one of the reasons S3 supports the virtual host method you're using, as well as the alternate, path-style method, for accessing buckets and their objects via the REST endpoint.
https://example.com.s3.amazonaws.com/foo
https://s3.amazonaws.com/example.com/foo
These reference the same object, but the second form works with SSL, since the hostname matches the S3 wildcard certificate (this is the problem you are experiencing: wildcard SSL certs don't match dots in the portion of the hostname being wildcarded).
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
There are some legitimate reasons to put dots in a bucket name.
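If a dotted name is unavoidable and you use boto3, you can force path-style addressing so that the bucket name never appears in the TLS hostname; a minimal sketch with the bucket from the question:

    import boto3
    from botocore.config import Config

    # Force path-style addressing: requests go to
    # https://s3.amazonaws.com/www.my-bucket-name.com/... so the hostname
    # still matches S3's wildcard certificate.
    s3 = boto3.client("s3", config=Config(s3={"addressing_style": "path"}))
    print(s3.list_objects_v2(Bucket="www.my-bucket-name.com"))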